Feb 19 13:09:44 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 19 13:09:44 crc restorecon[4747]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 13:09:44 crc restorecon[4747]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc 
restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 13:09:44 crc 
restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 
13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 13:09:44 crc 
restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 13:09:44 crc 
restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 
crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 
13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 13:09:44 crc 
restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc 
restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 13:09:44 crc restorecon[4747]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 
crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc 
restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:44 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc 
restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc 
restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc 
restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 13:09:45 crc 
restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 13:09:45 crc restorecon[4747]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 13:09:45 crc restorecon[4747]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 13:09:45 crc restorecon[4747]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 19 13:09:45 crc kubenswrapper[4861]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 13:09:45 crc kubenswrapper[4861]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 19 13:09:45 crc kubenswrapper[4861]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 13:09:45 crc kubenswrapper[4861]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 19 13:09:45 crc kubenswrapper[4861]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 19 13:09:45 crc kubenswrapper[4861]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.728075 4861 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734534 4861 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734558 4861 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734565 4861 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734611 4861 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734617 4861 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734623 4861 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734631 4861 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734636 4861 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734642 4861 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 
13:09:45.734647 4861 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734652 4861 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734658 4861 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734663 4861 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734668 4861 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734673 4861 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734689 4861 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734694 4861 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734700 4861 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734705 4861 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734711 4861 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734716 4861 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734722 4861 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734727 4861 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734733 4861 feature_gate.go:330] unrecognized feature gate: 
MachineAPIOperatorDisableMachineHealthCheckController Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734738 4861 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734744 4861 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734749 4861 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734755 4861 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734760 4861 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734766 4861 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734773 4861 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734781 4861 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734787 4861 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734793 4861 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734800 4861 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734807 4861 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734812 4861 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734818 4861 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734823 4861 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734829 4861 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734835 4861 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734868 4861 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734874 4861 feature_gate.go:330] unrecognized feature gate: Example Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734879 4861 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734884 4861 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734890 4861 
feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734895 4861 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734900 4861 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734905 4861 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734935 4861 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734941 4861 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734946 4861 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734951 4861 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734957 4861 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734962 4861 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734970 4861 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734977 4861 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734985 4861 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734992 4861 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.734999 4861 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.735007 4861 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.735014 4861 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.735019 4861 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.735026 4861 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.735032 4861 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.735037 4861 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.735093 4861 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.735101 4861 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.735107 4861 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.735112 4861 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.735117 4861 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.735990 4861 flags.go:64] FLAG: --address="0.0.0.0" Feb 19 13:09:45 crc 
kubenswrapper[4861]: I0219 13:09:45.736009 4861 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736022 4861 flags.go:64] FLAG: --anonymous-auth="true" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736033 4861 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736053 4861 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736063 4861 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736074 4861 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736085 4861 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736093 4861 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736100 4861 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736106 4861 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736113 4861 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736119 4861 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736125 4861 flags.go:64] FLAG: --cgroup-root="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736131 4861 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736138 4861 flags.go:64] FLAG: --client-ca-file="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736144 4861 flags.go:64] FLAG: --cloud-config="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736150 
4861 flags.go:64] FLAG: --cloud-provider="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736156 4861 flags.go:64] FLAG: --cluster-dns="[]" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736166 4861 flags.go:64] FLAG: --cluster-domain="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736175 4861 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736181 4861 flags.go:64] FLAG: --config-dir="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736187 4861 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736194 4861 flags.go:64] FLAG: --container-log-max-files="5" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736202 4861 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736208 4861 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736214 4861 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736220 4861 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736227 4861 flags.go:64] FLAG: --contention-profiling="false" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736233 4861 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736239 4861 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736245 4861 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736252 4861 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736260 4861 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 19 13:09:45 crc 
kubenswrapper[4861]: I0219 13:09:45.736267 4861 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736272 4861 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736278 4861 flags.go:64] FLAG: --enable-load-reader="false" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736284 4861 flags.go:64] FLAG: --enable-server="true" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736291 4861 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736299 4861 flags.go:64] FLAG: --event-burst="100" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736304 4861 flags.go:64] FLAG: --event-qps="50" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736311 4861 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736317 4861 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736323 4861 flags.go:64] FLAG: --eviction-hard="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736331 4861 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736337 4861 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736343 4861 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736349 4861 flags.go:64] FLAG: --eviction-soft="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736355 4861 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736361 4861 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736367 4861 flags.go:64] FLAG: 
--experimental-allocatable-ignore-eviction="false" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736373 4861 flags.go:64] FLAG: --experimental-mounter-path="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736379 4861 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736385 4861 flags.go:64] FLAG: --fail-swap-on="true" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736391 4861 flags.go:64] FLAG: --feature-gates="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736398 4861 flags.go:64] FLAG: --file-check-frequency="20s" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736404 4861 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736410 4861 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736437 4861 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736444 4861 flags.go:64] FLAG: --healthz-port="10248" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736451 4861 flags.go:64] FLAG: --help="false" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736457 4861 flags.go:64] FLAG: --hostname-override="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736464 4861 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736476 4861 flags.go:64] FLAG: --http-check-frequency="20s" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736490 4861 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736499 4861 flags.go:64] FLAG: --image-credential-provider-config="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736506 4861 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736514 4861 flags.go:64] FLAG: 
--image-gc-low-threshold="80" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736525 4861 flags.go:64] FLAG: --image-service-endpoint="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736533 4861 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736541 4861 flags.go:64] FLAG: --kube-api-burst="100" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736550 4861 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736559 4861 flags.go:64] FLAG: --kube-api-qps="50" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736615 4861 flags.go:64] FLAG: --kube-reserved="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736625 4861 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736632 4861 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736639 4861 flags.go:64] FLAG: --kubelet-cgroups="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736645 4861 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736652 4861 flags.go:64] FLAG: --lock-file="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736658 4861 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736665 4861 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736673 4861 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736683 4861 flags.go:64] FLAG: --log-json-split-stream="false" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736690 4861 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736696 4861 flags.go:64] FLAG: 
--log-text-split-stream="false" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736703 4861 flags.go:64] FLAG: --logging-format="text" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736709 4861 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736716 4861 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736722 4861 flags.go:64] FLAG: --manifest-url="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736728 4861 flags.go:64] FLAG: --manifest-url-header="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736737 4861 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736744 4861 flags.go:64] FLAG: --max-open-files="1000000" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736752 4861 flags.go:64] FLAG: --max-pods="110" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736759 4861 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736765 4861 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736771 4861 flags.go:64] FLAG: --memory-manager-policy="None" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736778 4861 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736784 4861 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736790 4861 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736797 4861 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736811 4861 flags.go:64] FLAG: 
--node-status-max-images="50" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736817 4861 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736824 4861 flags.go:64] FLAG: --oom-score-adj="-999" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736830 4861 flags.go:64] FLAG: --pod-cidr="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736837 4861 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736848 4861 flags.go:64] FLAG: --pod-manifest-path="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736855 4861 flags.go:64] FLAG: --pod-max-pids="-1" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736862 4861 flags.go:64] FLAG: --pods-per-core="0" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736868 4861 flags.go:64] FLAG: --port="10250" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736875 4861 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736881 4861 flags.go:64] FLAG: --provider-id="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736889 4861 flags.go:64] FLAG: --qos-reserved="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736895 4861 flags.go:64] FLAG: --read-only-port="10255" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736902 4861 flags.go:64] FLAG: --register-node="true" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736908 4861 flags.go:64] FLAG: --register-schedulable="true" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736914 4861 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736925 4861 flags.go:64] FLAG: --registry-burst="10" Feb 19 13:09:45 crc 
kubenswrapper[4861]: I0219 13:09:45.736931 4861 flags.go:64] FLAG: --registry-qps="5" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736937 4861 flags.go:64] FLAG: --reserved-cpus="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736944 4861 flags.go:64] FLAG: --reserved-memory="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736952 4861 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736958 4861 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736965 4861 flags.go:64] FLAG: --rotate-certificates="false" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736971 4861 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736977 4861 flags.go:64] FLAG: --runonce="false" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736983 4861 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736989 4861 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.736996 4861 flags.go:64] FLAG: --seccomp-default="false" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.737003 4861 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.737009 4861 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.737016 4861 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.737022 4861 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.737030 4861 flags.go:64] FLAG: --storage-driver-password="root" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.737036 4861 flags.go:64] FLAG: --storage-driver-secure="false" Feb 19 13:09:45 crc 
kubenswrapper[4861]: I0219 13:09:45.737043 4861 flags.go:64] FLAG: --storage-driver-table="stats" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.737050 4861 flags.go:64] FLAG: --storage-driver-user="root" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.737087 4861 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.737095 4861 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.737102 4861 flags.go:64] FLAG: --system-cgroups="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.737108 4861 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.737119 4861 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.737125 4861 flags.go:64] FLAG: --tls-cert-file="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.737131 4861 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.737139 4861 flags.go:64] FLAG: --tls-min-version="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.737145 4861 flags.go:64] FLAG: --tls-private-key-file="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.737151 4861 flags.go:64] FLAG: --topology-manager-policy="none" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.737158 4861 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.737165 4861 flags.go:64] FLAG: --topology-manager-scope="container" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.737171 4861 flags.go:64] FLAG: --v="2" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.737179 4861 flags.go:64] FLAG: --version="false" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.737187 4861 flags.go:64] FLAG: --vmodule="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 
13:09:45.737195 4861 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.737201 4861 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737358 4861 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737393 4861 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737399 4861 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737406 4861 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737412 4861 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737442 4861 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737448 4861 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737477 4861 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737483 4861 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737488 4861 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737494 4861 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737499 4861 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737505 4861 feature_gate.go:330] unrecognized feature gate: 
InsightsOnDemandDataGather Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737510 4861 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737515 4861 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737543 4861 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737548 4861 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737554 4861 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737559 4861 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737565 4861 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737592 4861 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737597 4861 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737602 4861 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737611 4861 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737617 4861 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737624 4861 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737631 4861 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737640 4861 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737650 4861 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737658 4861 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737666 4861 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737674 4861 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737682 4861 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737689 4861 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737697 4861 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737704 4861 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737712 4861 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737722 4861 
feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737732 4861 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737741 4861 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737749 4861 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737756 4861 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737764 4861 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737771 4861 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737777 4861 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737782 4861 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737788 4861 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737793 4861 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737799 4861 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737805 4861 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737811 4861 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737816 4861 feature_gate.go:330] unrecognized feature gate: InsightsConfig 
Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737822 4861 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737828 4861 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737833 4861 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737838 4861 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737844 4861 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737850 4861 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737855 4861 feature_gate.go:330] unrecognized feature gate: Example Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737862 4861 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737867 4861 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737911 4861 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737917 4861 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737923 4861 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737928 4861 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737934 4861 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737941 4861 feature_gate.go:353] Setting GA feature gate 
ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737947 4861 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737953 4861 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737959 4861 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.737965 4861 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.737983 4861 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.749519 4861 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.749563 4861 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749639 4861 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749646 4861 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749651 4861 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749655 4861 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 
13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749659 4861 feature_gate.go:330] unrecognized feature gate: Example Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749663 4861 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749667 4861 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749670 4861 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749676 4861 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749682 4861 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749686 4861 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749690 4861 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749694 4861 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749698 4861 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749701 4861 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749705 4861 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749710 4861 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749718 4861 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749723 4861 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749728 4861 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749733 4861 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749738 4861 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749742 4861 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749747 4861 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749751 4861 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749755 4861 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749760 4861 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749764 4861 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749768 4861 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749773 4861 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749777 4861 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 
13:09:45.749783 4861 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749787 4861 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749791 4861 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749797 4861 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749803 4861 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749808 4861 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749813 4861 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749818 4861 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749824 4861 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749830 4861 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749835 4861 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749839 4861 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749844 4861 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749848 4861 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749852 4861 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749857 4861 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749861 4861 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749866 4861 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749871 4861 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749876 4861 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749881 4861 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749887 4861 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749892 4861 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749896 4861 feature_gate.go:330] 
unrecognized feature gate: SetEIPForNLBIngressController Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749901 4861 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749905 4861 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749909 4861 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749914 4861 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749918 4861 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749922 4861 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749927 4861 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749931 4861 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749940 4861 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749945 4861 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749949 4861 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749953 4861 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749956 4861 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749960 4861 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 
13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749963 4861 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.749968 4861 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.749976 4861 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750101 4861 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750109 4861 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750116 4861 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750120 4861 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750124 4861 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750128 4861 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750132 4861 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750136 4861 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750139 4861 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750143 4861 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750146 4861 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750150 4861 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750153 4861 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750157 4861 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750160 4861 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750164 4861 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750167 4861 
feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750172 4861 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750176 4861 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750180 4861 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750185 4861 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750188 4861 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750192 4861 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750196 4861 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750201 4861 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750205 4861 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750209 4861 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750213 4861 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750217 4861 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750221 4861 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750226 4861 feature_gate.go:351] Setting deprecated 
feature gate KMSv1=true. It will be removed in a future release. Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750232 4861 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750236 4861 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750240 4861 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750245 4861 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750249 4861 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750252 4861 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750256 4861 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750259 4861 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750263 4861 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750267 4861 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750270 4861 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750273 4861 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750277 4861 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750280 4861 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 13:09:45 
crc kubenswrapper[4861]: W0219 13:09:45.750284 4861 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750287 4861 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750291 4861 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750294 4861 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750298 4861 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750302 4861 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750306 4861 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750310 4861 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750313 4861 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750317 4861 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750321 4861 feature_gate.go:330] unrecognized feature gate: Example Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750326 4861 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750330 4861 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750334 4861 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 
13:09:45.750338 4861 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750341 4861 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750345 4861 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750349 4861 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750353 4861 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750356 4861 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750360 4861 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750364 4861 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750369 4861 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750374 4861 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750378 4861 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.750383 4861 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.750390 4861 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.751286 4861 server.go:940] "Client rotation is on, will bootstrap in background" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.755708 4861 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.755813 4861 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.757945 4861 server.go:997] "Starting client certificate rotation" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.757968 4861 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.758181 4861 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-28 05:03:47.956345826 +0000 UTC Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.758356 4861 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.786220 4861 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 13:09:45 crc kubenswrapper[4861]: E0219 13:09:45.787732 4861 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.792793 4861 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.812041 4861 log.go:25] "Validated CRI v1 runtime API" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.847842 4861 log.go:25] "Validated CRI v1 image API" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.850188 4861 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.855276 4861 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-19-13-05-09-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.855328 4861 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}] Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.883036 4861 manager.go:217] Machine: {Timestamp:2026-02-19 13:09:45.879526326 +0000 UTC m=+0.540629624 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:4c20662d-b7a0-4257-bc2e-597d65530c8e BootID:b471f118-2cdd-4d29-a235-2083985944a7 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3365412864 
Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:20:aa:c6 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:20:aa:c6 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:d5:0f:37 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:a8:11:53 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:66:c9:10 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:45:2d:7c Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:56:11:67 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:0a:35:0a:a7:12:7e Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:f6:7d:03:fa:8c:f7 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] 
Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.883596 
4861 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.883791 4861 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.888869 4861 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.889183 4861 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.889242 4861 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.889577 4861 topology_manager.go:138] "Creating topology manager with none policy"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.889594 4861 container_manager_linux.go:303] "Creating device plugin manager"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.890210 4861 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.890256 4861 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.890565 4861 state_mem.go:36] "Initialized new in-memory state store"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.890699 4861 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.893750 4861 kubelet.go:418] "Attempting to sync node with API server"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.893778 4861 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.893815 4861 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.893834 4861 kubelet.go:324] "Adding apiserver pod source"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.893851 4861 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.901600 4861 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.902655 4861 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.904799 4861 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.905254 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Feb 19 13:09:45 crc kubenswrapper[4861]: E0219 13:09:45.906110 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.906118 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Feb 19 13:09:45 crc kubenswrapper[4861]: E0219 13:09:45.906186 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.906804 4861 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.906839 4861 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.906851 4861 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.906860 4861 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.906895 4861 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.906906 4861 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.906915 4861 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.906929 4861 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.906940 4861 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.906949 4861 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.906962 4861 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.906971 4861 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.908659 4861 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.909197 4861 server.go:1280] "Started kubelet"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.910121 4861 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.910436 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.910591 4861 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.910663 4861 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 19 13:09:45 crc systemd[1]: Started Kubernetes Kubelet.
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.911976 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.912027 4861 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.912276 4861 server.go:460] "Adding debug handlers to kubelet server"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.912284 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 17:09:41.953804148 +0000 UTC
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.912642 4861 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.912684 4861 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.912829 4861 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 19 13:09:45 crc kubenswrapper[4861]: E0219 13:09:45.913042 4861 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.913920 4861 factory.go:55] Registering systemd factory
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.913957 4861 factory.go:221] Registration of the systemd container factory successfully
Feb 19 13:09:45 crc kubenswrapper[4861]: E0219 13:09:45.913911 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="200ms"
Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.913954 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Feb 19 13:09:45 crc kubenswrapper[4861]: E0219 13:09:45.914061 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.914216 4861 factory.go:153] Registering CRI-O factory
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.914231 4861 factory.go:221] Registration of the crio container factory successfully
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.914291 4861 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.914313 4861 factory.go:103] Registering Raw factory
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.914328 4861 manager.go:1196] Started watching for new ooms in manager
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.916088 4861 manager.go:319] Starting recovery of all containers
Feb 19 13:09:45 crc kubenswrapper[4861]: E0219 13:09:45.915342 4861 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1895a7daa2616f09 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 13:09:45.909161737 +0000 UTC m=+0.570264975,LastTimestamp:2026-02-19 13:09:45.909161737 +0000 UTC m=+0.570264975,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.940829 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.941467 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.941567 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.941790 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.942398 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.942469 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.942487 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.942504 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.942526 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.942539 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.942553 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.942567 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.942585 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943052 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943079 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943098 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943136 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943149 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943163 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943176 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943189 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943205 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943222 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943235 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943248 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943264 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943281 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943295 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943309 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943322 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943339 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943357 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943370 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943384 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943397 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943410 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943436 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943452 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943466 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943479 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943493 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943507 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943520 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943533 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943546 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943566 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943580 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943595 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943608 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943622 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943637 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943651 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943670 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943686 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943702 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943769 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943784 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943803 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943817 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943830 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943845 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943857 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943872 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943887 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943899 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943911 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.943923 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.944015 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.944028 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.944042 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.944057 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.944094 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.944107 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.944120 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.944133 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.944147 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.944160 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.944173 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.944186 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69"
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.944199 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.944230 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.944244 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.944256 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.944270 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.944303 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.944316 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.946380 4861 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.946459 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.946490 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.946508 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.946524 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.946537 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.946553 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.946572 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.946592 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.946533 4861 manager.go:324] Recovery completed Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.946626 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.946683 4861 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.946703 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.946729 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.946746 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.946762 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.946777 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.946804 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.946832 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.946849 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.946874 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.946894 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.946917 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.946934 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.946947 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.946961 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.946981 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.946999 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947014 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947030 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" 
seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947046 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947058 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947070 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947089 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947127 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947138 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947149 4861 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947160 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947173 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947185 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947196 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947210 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947224 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947236 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947247 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947258 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947269 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947281 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947292 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947305 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947317 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947328 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947339 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947349 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947360 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947373 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947384 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947395 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947406 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947581 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947596 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947608 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947631 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947643 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947654 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947667 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947678 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947692 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947704 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947717 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947729 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947743 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947767 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947779 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947795 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947809 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947824 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947835 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947848 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947864 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947877 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947888 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947900 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947913 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947925 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947936 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947948 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947959 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947971 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.947985 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948003 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948019 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948035 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948048 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948059 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948070 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948081 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" 
Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948091 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948102 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948112 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948122 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948133 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948146 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 
13:09:45.948159 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948172 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948184 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948199 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948215 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948230 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948244 4861 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948260 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948274 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948288 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948303 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948316 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948331 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948343 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948356 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948370 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948384 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948397 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948412 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948460 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948473 4861 reconstruct.go:97] "Volume reconstruction finished" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.948482 4861 reconciler.go:26] "Reconciler: start to sync state" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.960916 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.963265 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.963313 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.963325 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.964987 4861 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.965021 4861 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.965048 4861 state_mem.go:36] "Initialized new in-memory state store" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.973395 4861 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.975710 4861 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.975754 4861 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 19 13:09:45 crc kubenswrapper[4861]: I0219 13:09:45.975782 4861 kubelet.go:2335] "Starting kubelet main sync loop" Feb 19 13:09:45 crc kubenswrapper[4861]: E0219 13:09:45.975831 4861 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 19 13:09:45 crc kubenswrapper[4861]: W0219 13:09:45.976512 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Feb 19 13:09:45 crc kubenswrapper[4861]: E0219 13:09:45.976568 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.004022 4861 policy_none.go:49] "None policy: Start" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.005688 4861 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.006085 4861 state_mem.go:35] "Initializing new in-memory state store" Feb 19 13:09:46 crc kubenswrapper[4861]: E0219 13:09:46.013242 4861 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 13:09:46 crc kubenswrapper[4861]: E0219 13:09:46.076508 4861 kubelet.go:2359] 
"Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.086959 4861 manager.go:334] "Starting Device Plugin manager" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.087032 4861 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.087047 4861 server.go:79] "Starting device plugin registration server" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.087654 4861 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.087677 4861 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.087802 4861 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.087932 4861 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.087954 4861 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 19 13:09:46 crc kubenswrapper[4861]: E0219 13:09:46.094893 4861 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 13:09:46 crc kubenswrapper[4861]: E0219 13:09:46.114859 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="400ms" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.188995 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:46 crc 
kubenswrapper[4861]: I0219 13:09:46.191166 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.191248 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.191265 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.191305 4861 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 13:09:46 crc kubenswrapper[4861]: E0219 13:09:46.192170 4861 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.276687 4861 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.276895 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.279157 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.279263 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.279283 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:46 crc kubenswrapper[4861]: 
I0219 13:09:46.279603 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.280139 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.280293 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.281532 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.281610 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.281628 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.281877 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.282120 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.282180 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.283064 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.283098 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.283108 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.283277 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.283328 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.283378 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.283409 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.283380 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.283482 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.284038 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.284160 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.284177 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.285541 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.285623 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.285647 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.285993 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.287129 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.287261 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.288657 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.288732 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.288755 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.289217 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.289488 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.291135 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.291281 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.291304 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.291276 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.291429 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.291444 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.293006 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.293064 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.293086 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.354037 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.354099 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.354117 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.354139 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.354158 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.354177 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.354192 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.354336 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.354405 4861 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.354495 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.354524 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.354578 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.354612 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.354641 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.354688 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.392711 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.393913 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.393942 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.393951 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.393975 4861 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 13:09:46 crc kubenswrapper[4861]: E0219 13:09:46.394372 4861 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.456337 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 13:09:46 crc 
kubenswrapper[4861]: I0219 13:09:46.456389 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.456407 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.456442 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.456458 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.456477 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.456643 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") 
pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.456741 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.456800 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.456882 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.456881 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.456957 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.456743 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.456849 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.456771 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.457100 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.456815 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.457140 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.457196 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.457329 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.457341 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.457367 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.457448 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.457481 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.457508 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.457518 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.457580 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.457516 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.457587 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.457686 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: E0219 13:09:46.516487 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="800ms" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.627072 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.660069 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: W0219 13:09:46.672743 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-b695ad695a832d116fa8f9cbd9cb61e011b667e16a231eb1749318d9beab05fd WatchSource:0}: Error finding container b695ad695a832d116fa8f9cbd9cb61e011b667e16a231eb1749318d9beab05fd: Status 404 returned error can't find the container with id b695ad695a832d116fa8f9cbd9cb61e011b667e16a231eb1749318d9beab05fd Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.679567 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.688843 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.695167 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 13:09:46 crc kubenswrapper[4861]: W0219 13:09:46.705273 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-9c64e4217bcd2704a5aa60a038d42ee64ad2c1b7d6931a8353f28d31f649c0e1 WatchSource:0}: Error finding container 9c64e4217bcd2704a5aa60a038d42ee64ad2c1b7d6931a8353f28d31f649c0e1: Status 404 returned error can't find the container with id 9c64e4217bcd2704a5aa60a038d42ee64ad2c1b7d6931a8353f28d31f649c0e1 Feb 19 13:09:46 crc kubenswrapper[4861]: W0219 13:09:46.707490 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-fb44c99a7091d96dac5fc8a25f2c1ffcfcab375f7557e42c0a2fb76d893cc66d WatchSource:0}: Error finding container fb44c99a7091d96dac5fc8a25f2c1ffcfcab375f7557e42c0a2fb76d893cc66d: Status 404 returned error can't find the container with id fb44c99a7091d96dac5fc8a25f2c1ffcfcab375f7557e42c0a2fb76d893cc66d Feb 19 13:09:46 crc kubenswrapper[4861]: W0219 13:09:46.718633 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-1ebb20800b514d0da0741aa7e3e71f76f5e65b4e58159ee2c23dba79d37c657a WatchSource:0}: Error finding container 1ebb20800b514d0da0741aa7e3e71f76f5e65b4e58159ee2c23dba79d37c657a: Status 404 returned error can't find 
the container with id 1ebb20800b514d0da0741aa7e3e71f76f5e65b4e58159ee2c23dba79d37c657a Feb 19 13:09:46 crc kubenswrapper[4861]: W0219 13:09:46.721103 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-978cdae27bf982f4c1dd5f573aa7a5cf8dbc5488525bf63089b29baa35395a93 WatchSource:0}: Error finding container 978cdae27bf982f4c1dd5f573aa7a5cf8dbc5488525bf63089b29baa35395a93: Status 404 returned error can't find the container with id 978cdae27bf982f4c1dd5f573aa7a5cf8dbc5488525bf63089b29baa35395a93 Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.795373 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.796614 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.796648 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.796657 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.796680 4861 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 13:09:46 crc kubenswrapper[4861]: E0219 13:09:46.797127 4861 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc" Feb 19 13:09:46 crc kubenswrapper[4861]: W0219 13:09:46.899938 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 
38.102.83.177:6443: connect: connection refused Feb 19 13:09:46 crc kubenswrapper[4861]: E0219 13:09:46.900073 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.911727 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.912760 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 19:43:08.263055875 +0000 UTC Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.980223 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"b695ad695a832d116fa8f9cbd9cb61e011b667e16a231eb1749318d9beab05fd"} Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.982772 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1ebb20800b514d0da0741aa7e3e71f76f5e65b4e58159ee2c23dba79d37c657a"} Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.983857 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"978cdae27bf982f4c1dd5f573aa7a5cf8dbc5488525bf63089b29baa35395a93"} Feb 19 13:09:46 crc 
kubenswrapper[4861]: I0219 13:09:46.985093 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fb44c99a7091d96dac5fc8a25f2c1ffcfcab375f7557e42c0a2fb76d893cc66d"} Feb 19 13:09:46 crc kubenswrapper[4861]: I0219 13:09:46.986455 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9c64e4217bcd2704a5aa60a038d42ee64ad2c1b7d6931a8353f28d31f649c0e1"} Feb 19 13:09:47 crc kubenswrapper[4861]: W0219 13:09:47.025761 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Feb 19 13:09:47 crc kubenswrapper[4861]: E0219 13:09:47.025893 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Feb 19 13:09:47 crc kubenswrapper[4861]: W0219 13:09:47.301980 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Feb 19 13:09:47 crc kubenswrapper[4861]: E0219 13:09:47.302178 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 
38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Feb 19 13:09:47 crc kubenswrapper[4861]: E0219 13:09:47.317523 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="1.6s" Feb 19 13:09:47 crc kubenswrapper[4861]: W0219 13:09:47.499703 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Feb 19 13:09:47 crc kubenswrapper[4861]: E0219 13:09:47.499860 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Feb 19 13:09:47 crc kubenswrapper[4861]: I0219 13:09:47.598033 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:47 crc kubenswrapper[4861]: I0219 13:09:47.599513 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:47 crc kubenswrapper[4861]: I0219 13:09:47.599580 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:47 crc kubenswrapper[4861]: I0219 13:09:47.599651 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:47 crc kubenswrapper[4861]: I0219 13:09:47.599691 4861 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 13:09:47 crc 
kubenswrapper[4861]: E0219 13:09:47.600367 4861 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc" Feb 19 13:09:47 crc kubenswrapper[4861]: I0219 13:09:47.819250 4861 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 13:09:47 crc kubenswrapper[4861]: E0219 13:09:47.820446 4861 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Feb 19 13:09:47 crc kubenswrapper[4861]: I0219 13:09:47.911547 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Feb 19 13:09:47 crc kubenswrapper[4861]: I0219 13:09:47.913663 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 14:17:48.769281989 +0000 UTC Feb 19 13:09:47 crc kubenswrapper[4861]: I0219 13:09:47.991318 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad"} Feb 19 13:09:47 crc kubenswrapper[4861]: I0219 13:09:47.991370 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469"} Feb 19 13:09:47 crc kubenswrapper[4861]: I0219 13:09:47.991388 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e"} Feb 19 13:09:47 crc kubenswrapper[4861]: I0219 13:09:47.992670 4861 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d1c5a47b0dfcd797d27e124b9177fd1a18dc33b184b450147a722194601fb537" exitCode=0 Feb 19 13:09:47 crc kubenswrapper[4861]: I0219 13:09:47.992728 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d1c5a47b0dfcd797d27e124b9177fd1a18dc33b184b450147a722194601fb537"} Feb 19 13:09:47 crc kubenswrapper[4861]: I0219 13:09:47.992795 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:47 crc kubenswrapper[4861]: I0219 13:09:47.993769 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:47 crc kubenswrapper[4861]: I0219 13:09:47.993810 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:47 crc kubenswrapper[4861]: I0219 13:09:47.993825 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:47 crc kubenswrapper[4861]: I0219 13:09:47.995518 4861 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="3c61c4908f4b667bbb8768512bb757c2500ff16e55e1106f4159a0a6565845bd" exitCode=0 Feb 19 13:09:47 crc 
kubenswrapper[4861]: I0219 13:09:47.995582 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"3c61c4908f4b667bbb8768512bb757c2500ff16e55e1106f4159a0a6565845bd"} Feb 19 13:09:47 crc kubenswrapper[4861]: I0219 13:09:47.995638 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:47 crc kubenswrapper[4861]: I0219 13:09:47.997054 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:47 crc kubenswrapper[4861]: I0219 13:09:47.997078 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:47 crc kubenswrapper[4861]: I0219 13:09:47.997089 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:47 crc kubenswrapper[4861]: I0219 13:09:47.998049 4861 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38" exitCode=0 Feb 19 13:09:47 crc kubenswrapper[4861]: I0219 13:09:47.998085 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38"} Feb 19 13:09:47 crc kubenswrapper[4861]: I0219 13:09:47.998160 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:48 crc kubenswrapper[4861]: I0219 13:09:47.999981 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:48 crc kubenswrapper[4861]: I0219 13:09:48.000024 4861 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:48 crc kubenswrapper[4861]: I0219 13:09:48.000037 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:48 crc kubenswrapper[4861]: I0219 13:09:48.001060 4861 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="027ae849216b720e4a482b9ac7629550fa1d01be0bdf2757c160dac0643ac67d" exitCode=0 Feb 19 13:09:48 crc kubenswrapper[4861]: I0219 13:09:48.001098 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"027ae849216b720e4a482b9ac7629550fa1d01be0bdf2757c160dac0643ac67d"} Feb 19 13:09:48 crc kubenswrapper[4861]: I0219 13:09:48.001198 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:48 crc kubenswrapper[4861]: I0219 13:09:48.002134 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:48 crc kubenswrapper[4861]: I0219 13:09:48.002695 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:48 crc kubenswrapper[4861]: I0219 13:09:48.002730 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:48 crc kubenswrapper[4861]: I0219 13:09:48.002745 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:48 crc kubenswrapper[4861]: I0219 13:09:48.003371 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:48 crc kubenswrapper[4861]: I0219 13:09:48.003451 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 13:09:48 crc kubenswrapper[4861]: I0219 13:09:48.003470 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:48 crc kubenswrapper[4861]: I0219 13:09:48.911498 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Feb 19 13:09:48 crc kubenswrapper[4861]: I0219 13:09:48.913855 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 11:11:46.295492141 +0000 UTC Feb 19 13:09:48 crc kubenswrapper[4861]: E0219 13:09:48.918045 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="3.2s" Feb 19 13:09:49 crc kubenswrapper[4861]: I0219 13:09:49.004484 4861 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a9f95f6f5074dff8b86ddf6b24bf229f0e341d77ad4efb805ab716190ca05966" exitCode=0 Feb 19 13:09:49 crc kubenswrapper[4861]: I0219 13:09:49.004544 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a9f95f6f5074dff8b86ddf6b24bf229f0e341d77ad4efb805ab716190ca05966"} Feb 19 13:09:49 crc kubenswrapper[4861]: I0219 13:09:49.004654 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:49 crc kubenswrapper[4861]: I0219 13:09:49.005663 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:49 crc 
kubenswrapper[4861]: I0219 13:09:49.005690 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:49 crc kubenswrapper[4861]: I0219 13:09:49.005702 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:49 crc kubenswrapper[4861]: I0219 13:09:49.009533 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c3ac5f751756070f5a790893b4fd9f7d717ed9af254250ac73fd48f8f9d790fa"} Feb 19 13:09:49 crc kubenswrapper[4861]: I0219 13:09:49.009610 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:49 crc kubenswrapper[4861]: I0219 13:09:49.010976 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:49 crc kubenswrapper[4861]: I0219 13:09:49.011013 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:49 crc kubenswrapper[4861]: I0219 13:09:49.011026 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:49 crc kubenswrapper[4861]: I0219 13:09:49.012093 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c"} Feb 19 13:09:49 crc kubenswrapper[4861]: I0219 13:09:49.012118 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d"} Feb 19 13:09:49 crc 
kubenswrapper[4861]: I0219 13:09:49.020787 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3c5fd7534665e46f03e90d032f43b20b131d640035418116e9c561ea8290f83e"} Feb 19 13:09:49 crc kubenswrapper[4861]: I0219 13:09:49.020819 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"374a70175dbefba8dcf2f5efbc5a822fc0c8889fb5ec0c634109b12cbac4111c"} Feb 19 13:09:49 crc kubenswrapper[4861]: I0219 13:09:49.024186 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921"} Feb 19 13:09:49 crc kubenswrapper[4861]: I0219 13:09:49.024314 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:49 crc kubenswrapper[4861]: I0219 13:09:49.025678 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:49 crc kubenswrapper[4861]: I0219 13:09:49.025711 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:49 crc kubenswrapper[4861]: I0219 13:09:49.025722 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:49 crc kubenswrapper[4861]: I0219 13:09:49.200782 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:49 crc kubenswrapper[4861]: I0219 13:09:49.202195 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 
13:09:49 crc kubenswrapper[4861]: I0219 13:09:49.202300 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:49 crc kubenswrapper[4861]: I0219 13:09:49.202330 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:49 crc kubenswrapper[4861]: I0219 13:09:49.202386 4861 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 13:09:49 crc kubenswrapper[4861]: E0219 13:09:49.203090 4861 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc" Feb 19 13:09:49 crc kubenswrapper[4861]: W0219 13:09:49.610156 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Feb 19 13:09:49 crc kubenswrapper[4861]: E0219 13:09:49.610289 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Feb 19 13:09:49 crc kubenswrapper[4861]: W0219 13:09:49.856808 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Feb 19 13:09:49 crc kubenswrapper[4861]: E0219 13:09:49.856914 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Feb 19 13:09:49 crc kubenswrapper[4861]: W0219 13:09:49.884087 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Feb 19 13:09:49 crc kubenswrapper[4861]: E0219 13:09:49.884169 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Feb 19 13:09:49 crc kubenswrapper[4861]: I0219 13:09:49.912088 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Feb 19 13:09:49 crc kubenswrapper[4861]: I0219 13:09:49.914188 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 20:28:44.154332035 +0000 UTC Feb 19 13:09:50 crc kubenswrapper[4861]: I0219 13:09:50.033553 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fb5daf271ff602abd1cff073e2279efac8ae9363bf9c54a96855b68987a413ba"} Feb 19 13:09:50 crc kubenswrapper[4861]: I0219 13:09:50.033647 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f"} Feb 19 13:09:50 crc kubenswrapper[4861]: I0219 13:09:50.033670 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b"} Feb 19 13:09:50 crc kubenswrapper[4861]: I0219 13:09:50.033753 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:50 crc kubenswrapper[4861]: I0219 13:09:50.035765 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:50 crc kubenswrapper[4861]: I0219 13:09:50.035843 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:50 crc kubenswrapper[4861]: I0219 13:09:50.035868 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:50 crc kubenswrapper[4861]: I0219 13:09:50.040071 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e624f4e7dff2715925374ba4efcf3b4bf3f7cc626dfddceced300dc2392eaa92"} Feb 19 13:09:50 crc kubenswrapper[4861]: I0219 13:09:50.040177 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:50 crc kubenswrapper[4861]: I0219 13:09:50.041966 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:50 crc kubenswrapper[4861]: I0219 13:09:50.042038 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:50 
crc kubenswrapper[4861]: I0219 13:09:50.042058 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:50 crc kubenswrapper[4861]: I0219 13:09:50.043723 4861 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="709e57872588f21dbac5f828aa88be2080b32eaa552bb75d62294ed0a4c3be10" exitCode=0 Feb 19 13:09:50 crc kubenswrapper[4861]: I0219 13:09:50.043841 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"709e57872588f21dbac5f828aa88be2080b32eaa552bb75d62294ed0a4c3be10"} Feb 19 13:09:50 crc kubenswrapper[4861]: I0219 13:09:50.043861 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:50 crc kubenswrapper[4861]: I0219 13:09:50.043944 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:50 crc kubenswrapper[4861]: I0219 13:09:50.044020 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:50 crc kubenswrapper[4861]: I0219 13:09:50.045402 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:50 crc kubenswrapper[4861]: I0219 13:09:50.045489 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:50 crc kubenswrapper[4861]: I0219 13:09:50.045514 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:50 crc kubenswrapper[4861]: I0219 13:09:50.045919 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:50 crc kubenswrapper[4861]: I0219 13:09:50.045961 4861 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:50 crc kubenswrapper[4861]: I0219 13:09:50.045980 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:50 crc kubenswrapper[4861]: I0219 13:09:50.046548 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:50 crc kubenswrapper[4861]: I0219 13:09:50.046605 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:50 crc kubenswrapper[4861]: I0219 13:09:50.046628 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:50 crc kubenswrapper[4861]: W0219 13:09:50.341478 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Feb 19 13:09:50 crc kubenswrapper[4861]: E0219 13:09:50.341568 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Feb 19 13:09:50 crc kubenswrapper[4861]: I0219 13:09:50.911140 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Feb 19 13:09:50 crc kubenswrapper[4861]: I0219 13:09:50.914300 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 
21:07:01.428147951 +0000 UTC Feb 19 13:09:51 crc kubenswrapper[4861]: I0219 13:09:51.046655 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 13:09:51 crc kubenswrapper[4861]: I0219 13:09:51.048098 4861 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fb5daf271ff602abd1cff073e2279efac8ae9363bf9c54a96855b68987a413ba" exitCode=255 Feb 19 13:09:51 crc kubenswrapper[4861]: I0219 13:09:51.048188 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"fb5daf271ff602abd1cff073e2279efac8ae9363bf9c54a96855b68987a413ba"} Feb 19 13:09:51 crc kubenswrapper[4861]: I0219 13:09:51.048204 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:51 crc kubenswrapper[4861]: I0219 13:09:51.049356 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:51 crc kubenswrapper[4861]: I0219 13:09:51.049390 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:51 crc kubenswrapper[4861]: I0219 13:09:51.049398 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:51 crc kubenswrapper[4861]: I0219 13:09:51.051547 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d781bfecbeb39261bba4fc9d1b03f8f1cbed369f7853aaff900ef7ca37a00b99"} Feb 19 13:09:51 crc kubenswrapper[4861]: I0219 13:09:51.051640 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5abadbe7456206a92ef9f0ab4c940622079e3d05f062cfeebea199db607be49a"} Feb 19 13:09:51 crc kubenswrapper[4861]: I0219 13:09:51.051655 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:51 crc kubenswrapper[4861]: I0219 13:09:51.051697 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 13:09:51 crc kubenswrapper[4861]: I0219 13:09:51.052984 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:51 crc kubenswrapper[4861]: I0219 13:09:51.053013 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:51 crc kubenswrapper[4861]: I0219 13:09:51.053027 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:51 crc kubenswrapper[4861]: I0219 13:09:51.059719 4861 scope.go:117] "RemoveContainer" containerID="fb5daf271ff602abd1cff073e2279efac8ae9363bf9c54a96855b68987a413ba" Feb 19 13:09:51 crc kubenswrapper[4861]: I0219 13:09:51.296675 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:09:51 crc kubenswrapper[4861]: I0219 13:09:51.911814 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Feb 19 13:09:51 crc kubenswrapper[4861]: I0219 13:09:51.915330 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 05:30:03.067277301 +0000 UTC Feb 19 13:09:52 crc kubenswrapper[4861]: I0219 13:09:52.048156 4861 
certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 13:09:52 crc kubenswrapper[4861]: I0219 13:09:52.056199 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 13:09:52 crc kubenswrapper[4861]: I0219 13:09:52.058068 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702"} Feb 19 13:09:52 crc kubenswrapper[4861]: I0219 13:09:52.058110 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:52 crc kubenswrapper[4861]: I0219 13:09:52.058235 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:09:52 crc kubenswrapper[4861]: I0219 13:09:52.059530 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:52 crc kubenswrapper[4861]: I0219 13:09:52.059575 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:52 crc kubenswrapper[4861]: I0219 13:09:52.059588 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:52 crc kubenswrapper[4861]: I0219 13:09:52.062875 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e7a48180980ed67d1755a75c9dbac5b58faa20142beff57ef5e9a53881606129"} Feb 19 13:09:52 crc kubenswrapper[4861]: I0219 13:09:52.062927 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:52 crc 
kubenswrapper[4861]: I0219 13:09:52.062945 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:52 crc kubenswrapper[4861]: I0219 13:09:52.062946 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e9ec651c7187f3afedf843d7538ba88e9fc4aa1619fe11e3a4e61cc570c91443"} Feb 19 13:09:52 crc kubenswrapper[4861]: I0219 13:09:52.063037 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"badfa69a0b27c1fd766d08c5c18d316f1adadb6f12c5f2a80915ab8b37e46cc2"} Feb 19 13:09:52 crc kubenswrapper[4861]: I0219 13:09:52.064023 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:52 crc kubenswrapper[4861]: I0219 13:09:52.064035 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:52 crc kubenswrapper[4861]: I0219 13:09:52.064052 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:52 crc kubenswrapper[4861]: I0219 13:09:52.064063 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:52 crc kubenswrapper[4861]: I0219 13:09:52.064065 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:52 crc kubenswrapper[4861]: I0219 13:09:52.064082 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:52 crc kubenswrapper[4861]: I0219 13:09:52.180113 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:09:52 crc kubenswrapper[4861]: 
I0219 13:09:52.403717 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:52 crc kubenswrapper[4861]: I0219 13:09:52.404995 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:52 crc kubenswrapper[4861]: I0219 13:09:52.405028 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:52 crc kubenswrapper[4861]: I0219 13:09:52.405038 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:52 crc kubenswrapper[4861]: I0219 13:09:52.405058 4861 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 13:09:52 crc kubenswrapper[4861]: I0219 13:09:52.500486 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 13:09:52 crc kubenswrapper[4861]: I0219 13:09:52.500678 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:52 crc kubenswrapper[4861]: I0219 13:09:52.501755 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:52 crc kubenswrapper[4861]: I0219 13:09:52.501790 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:52 crc kubenswrapper[4861]: I0219 13:09:52.501805 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:52 crc kubenswrapper[4861]: I0219 13:09:52.508987 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 13:09:52 crc kubenswrapper[4861]: I0219 13:09:52.915683 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: 
Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 13:20:40.129307133 +0000 UTC Feb 19 13:09:53 crc kubenswrapper[4861]: I0219 13:09:53.065619 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:53 crc kubenswrapper[4861]: I0219 13:09:53.065681 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:53 crc kubenswrapper[4861]: I0219 13:09:53.066458 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:53 crc kubenswrapper[4861]: I0219 13:09:53.066528 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:09:53 crc kubenswrapper[4861]: I0219 13:09:53.066555 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:53 crc kubenswrapper[4861]: I0219 13:09:53.066638 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:53 crc kubenswrapper[4861]: I0219 13:09:53.066655 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:53 crc kubenswrapper[4861]: I0219 13:09:53.066951 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:53 crc kubenswrapper[4861]: I0219 13:09:53.066978 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:53 crc kubenswrapper[4861]: I0219 13:09:53.066990 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:53 crc kubenswrapper[4861]: I0219 13:09:53.070195 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Feb 19 13:09:53 crc kubenswrapper[4861]: I0219 13:09:53.070268 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:53 crc kubenswrapper[4861]: I0219 13:09:53.070311 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:53 crc kubenswrapper[4861]: I0219 13:09:53.916391 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 13:22:15.549311146 +0000 UTC Feb 19 13:09:54 crc kubenswrapper[4861]: I0219 13:09:54.072956 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:54 crc kubenswrapper[4861]: I0219 13:09:54.074135 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:54 crc kubenswrapper[4861]: I0219 13:09:54.074179 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:54 crc kubenswrapper[4861]: I0219 13:09:54.074188 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:54 crc kubenswrapper[4861]: I0219 13:09:54.589394 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 13:09:54 crc kubenswrapper[4861]: I0219 13:09:54.589595 4861 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 13:09:54 crc kubenswrapper[4861]: I0219 13:09:54.589639 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:54 crc kubenswrapper[4861]: I0219 13:09:54.591074 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:54 crc kubenswrapper[4861]: 
I0219 13:09:54.591117 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:54 crc kubenswrapper[4861]: I0219 13:09:54.591142 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:54 crc kubenswrapper[4861]: I0219 13:09:54.656240 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 19 13:09:54 crc kubenswrapper[4861]: I0219 13:09:54.656557 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:54 crc kubenswrapper[4861]: I0219 13:09:54.658093 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:54 crc kubenswrapper[4861]: I0219 13:09:54.658146 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:54 crc kubenswrapper[4861]: I0219 13:09:54.658165 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:54 crc kubenswrapper[4861]: I0219 13:09:54.916800 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 08:31:35.599727901 +0000 UTC Feb 19 13:09:55 crc kubenswrapper[4861]: I0219 13:09:55.917365 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 04:57:20.475682201 +0000 UTC Feb 19 13:09:56 crc kubenswrapper[4861]: E0219 13:09:56.095021 4861 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 13:09:56 crc kubenswrapper[4861]: I0219 13:09:56.098509 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 13:09:56 crc kubenswrapper[4861]: I0219 13:09:56.098721 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:56 crc kubenswrapper[4861]: I0219 13:09:56.100381 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:56 crc kubenswrapper[4861]: I0219 13:09:56.100482 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:56 crc kubenswrapper[4861]: I0219 13:09:56.100504 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:56 crc kubenswrapper[4861]: I0219 13:09:56.918058 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 12:21:08.929402934 +0000 UTC Feb 19 13:09:57 crc kubenswrapper[4861]: I0219 13:09:57.590408 4861 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 13:09:57 crc kubenswrapper[4861]: I0219 13:09:57.590541 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 13:09:57 crc kubenswrapper[4861]: I0219 13:09:57.918783 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2025-11-19 15:28:23.998684958 +0000 UTC Feb 19 13:09:58 crc kubenswrapper[4861]: I0219 13:09:58.540207 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 13:09:58 crc kubenswrapper[4861]: I0219 13:09:58.540400 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:58 crc kubenswrapper[4861]: I0219 13:09:58.541520 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:58 crc kubenswrapper[4861]: I0219 13:09:58.541560 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:58 crc kubenswrapper[4861]: I0219 13:09:58.541573 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:09:58 crc kubenswrapper[4861]: I0219 13:09:58.545310 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 13:09:59 crc kubenswrapper[4861]: I0219 13:09:59.023613 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 20:15:59.821807191 +0000 UTC Feb 19 13:09:59 crc kubenswrapper[4861]: I0219 13:09:59.087044 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:09:59 crc kubenswrapper[4861]: I0219 13:09:59.088133 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:09:59 crc kubenswrapper[4861]: I0219 13:09:59.088174 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:09:59 crc kubenswrapper[4861]: I0219 13:09:59.088185 4861 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:00 crc kubenswrapper[4861]: I0219 13:10:00.060490 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 19 13:10:00 crc kubenswrapper[4861]: I0219 13:10:00.061073 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 17:01:11.03876226 +0000 UTC Feb 19 13:10:00 crc kubenswrapper[4861]: I0219 13:10:00.061633 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:10:00 crc kubenswrapper[4861]: I0219 13:10:00.063161 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:00 crc kubenswrapper[4861]: I0219 13:10:00.063232 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:00 crc kubenswrapper[4861]: I0219 13:10:00.063244 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:00 crc kubenswrapper[4861]: I0219 13:10:00.168048 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 19 13:10:00 crc kubenswrapper[4861]: I0219 13:10:00.168189 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:10:00 crc kubenswrapper[4861]: I0219 13:10:00.169159 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:00 crc kubenswrapper[4861]: I0219 13:10:00.169183 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:00 crc kubenswrapper[4861]: I0219 13:10:00.169195 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 19 13:10:00 crc kubenswrapper[4861]: I0219 13:10:00.179169 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 19 13:10:01 crc kubenswrapper[4861]: I0219 13:10:01.061607 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 13:42:35.750729849 +0000 UTC Feb 19 13:10:01 crc kubenswrapper[4861]: I0219 13:10:01.091006 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:10:01 crc kubenswrapper[4861]: I0219 13:10:01.092011 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:01 crc kubenswrapper[4861]: I0219 13:10:01.092042 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:01 crc kubenswrapper[4861]: I0219 13:10:01.092052 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:02 crc kubenswrapper[4861]: E0219 13:10:02.049938 4861 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 19 13:10:02 crc kubenswrapper[4861]: I0219 13:10:02.062770 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 21:04:14.678899837 +0000 UTC Feb 19 13:10:02 crc kubenswrapper[4861]: E0219 13:10:02.119962 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Feb 19 13:10:02 crc kubenswrapper[4861]: I0219 13:10:02.180340 4861 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 13:10:02 crc kubenswrapper[4861]: I0219 13:10:02.180476 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 13:10:02 crc kubenswrapper[4861]: E0219 13:10:02.406251 4861 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Feb 19 13:10:02 crc kubenswrapper[4861]: I0219 13:10:02.912570 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 19 13:10:03 crc kubenswrapper[4861]: I0219 13:10:03.063589 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 16:04:34.435352393 +0000 UTC Feb 19 13:10:03 crc kubenswrapper[4861]: I0219 13:10:03.096868 4861 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 19 13:10:03 crc kubenswrapper[4861]: I0219 13:10:03.097550 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 13:10:03 crc kubenswrapper[4861]: I0219 13:10:03.098983 4861 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702" exitCode=255 Feb 19 13:10:03 crc kubenswrapper[4861]: I0219 13:10:03.099044 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702"} Feb 19 13:10:03 crc kubenswrapper[4861]: I0219 13:10:03.099138 4861 scope.go:117] "RemoveContainer" containerID="fb5daf271ff602abd1cff073e2279efac8ae9363bf9c54a96855b68987a413ba" Feb 19 13:10:03 crc kubenswrapper[4861]: I0219 13:10:03.099239 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:10:03 crc kubenswrapper[4861]: I0219 13:10:03.099964 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:03 crc kubenswrapper[4861]: I0219 13:10:03.099994 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:03 crc kubenswrapper[4861]: I0219 13:10:03.100005 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:03 crc kubenswrapper[4861]: I0219 13:10:03.100509 4861 scope.go:117] "RemoveContainer" containerID="68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702" Feb 19 13:10:03 
crc kubenswrapper[4861]: E0219 13:10:03.100716 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 19 13:10:03 crc kubenswrapper[4861]: I0219 13:10:03.242103 4861 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 19 13:10:03 crc kubenswrapper[4861]: I0219 13:10:03.242452 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 19 13:10:04 crc kubenswrapper[4861]: I0219 13:10:04.064462 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 10:52:48.910637792 +0000 UTC Feb 19 13:10:04 crc kubenswrapper[4861]: I0219 13:10:04.101926 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 19 13:10:05 crc kubenswrapper[4861]: I0219 13:10:05.065533 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 00:42:13.830301528 +0000 UTC Feb 19 13:10:06 crc kubenswrapper[4861]: 
I0219 13:10:06.066509 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 00:13:13.655000473 +0000 UTC Feb 19 13:10:06 crc kubenswrapper[4861]: E0219 13:10:06.095359 4861 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 13:10:07 crc kubenswrapper[4861]: I0219 13:10:07.066822 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 01:56:27.164888287 +0000 UTC Feb 19 13:10:07 crc kubenswrapper[4861]: I0219 13:10:07.185889 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:10:07 crc kubenswrapper[4861]: I0219 13:10:07.186177 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:10:07 crc kubenswrapper[4861]: I0219 13:10:07.188686 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:07 crc kubenswrapper[4861]: I0219 13:10:07.188760 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:07 crc kubenswrapper[4861]: I0219 13:10:07.188778 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:07 crc kubenswrapper[4861]: I0219 13:10:07.189697 4861 scope.go:117] "RemoveContainer" containerID="68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702" Feb 19 13:10:07 crc kubenswrapper[4861]: E0219 13:10:07.190020 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 19 13:10:07 crc kubenswrapper[4861]: I0219 13:10:07.190665 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:10:07 crc kubenswrapper[4861]: I0219 13:10:07.590135 4861 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 13:10:07 crc kubenswrapper[4861]: I0219 13:10:07.590220 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.067742 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 17:24:22.959144369 +0000 UTC Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.116480 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.117519 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.117550 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 
13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.117568 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.117989 4861 scope.go:117] "RemoveContainer" containerID="68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702" Feb 19 13:10:08 crc kubenswrapper[4861]: E0219 13:10:08.118139 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.232388 4861 trace.go:236] Trace[2014610540]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 13:09:54.178) (total time: 14053ms): Feb 19 13:10:08 crc kubenswrapper[4861]: Trace[2014610540]: ---"Objects listed" error: 14053ms (13:10:08.232) Feb 19 13:10:08 crc kubenswrapper[4861]: Trace[2014610540]: [14.053998331s] [14.053998331s] END Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.232465 4861 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.235032 4861 trace.go:236] Trace[82223690]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 13:09:54.978) (total time: 13256ms): Feb 19 13:10:08 crc kubenswrapper[4861]: Trace[82223690]: ---"Objects listed" error: 13256ms (13:10:08.234) Feb 19 13:10:08 crc kubenswrapper[4861]: Trace[82223690]: [13.256659019s] [13.256659019s] END Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.235080 4861 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 
19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.235353 4861 trace.go:236] Trace[188822830]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 13:09:54.681) (total time: 13553ms): Feb 19 13:10:08 crc kubenswrapper[4861]: Trace[188822830]: ---"Objects listed" error: 13553ms (13:10:08.234) Feb 19 13:10:08 crc kubenswrapper[4861]: Trace[188822830]: [13.553423226s] [13.553423226s] END Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.235365 4861 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.235717 4861 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.238817 4861 trace.go:236] Trace[470720273]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 13:09:55.074) (total time: 13164ms): Feb 19 13:10:08 crc kubenswrapper[4861]: Trace[470720273]: ---"Objects listed" error: 13164ms (13:10:08.238) Feb 19 13:10:08 crc kubenswrapper[4861]: Trace[470720273]: [13.164405158s] [13.164405158s] END Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.238855 4861 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.806946 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.808820 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.808888 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.808904 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.809089 4861 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.821772 4861 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.822083 4861 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.823350 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.823384 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.823394 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.823437 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.823451 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:08Z","lastTransitionTime":"2026-02-19T13:10:08Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Feb 19 13:10:08 crc kubenswrapper[4861]: E0219 13:10:08.845192 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"n
ames\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e
-597d65530c8e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.850631 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.850676 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.850687 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.850709 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.850723 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:08Z","lastTransitionTime":"2026-02-19T13:10:08Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Feb 19 13:10:08 crc kubenswrapper[4861]: E0219 13:10:08.877106 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"n
ames\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e
-597d65530c8e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.882122 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.882188 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.882206 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.882235 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.882255 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:08Z","lastTransitionTime":"2026-02-19T13:10:08Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Feb 19 13:10:08 crc kubenswrapper[4861]: E0219 13:10:08.903933 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"n
ames\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e
-597d65530c8e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.914255 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.914311 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.914330 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.914361 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.914379 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:08Z","lastTransitionTime":"2026-02-19T13:10:08Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Feb 19 13:10:08 crc kubenswrapper[4861]: E0219 13:10:08.926646 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"n
ames\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e
-597d65530c8e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.932663 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.932713 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.932730 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.932755 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.932773 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:08Z","lastTransitionTime":"2026-02-19T13:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:08 crc kubenswrapper[4861]: E0219 13:10:08.949046 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 13:10:08 crc kubenswrapper[4861]: E0219 13:10:08.949401 4861 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.951746 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.951797 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.951814 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.951842 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:08 crc kubenswrapper[4861]: I0219 13:10:08.951860 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:08Z","lastTransitionTime":"2026-02-19T13:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.035728 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.055280 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.055319 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.055327 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.055346 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.055356 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:09Z","lastTransitionTime":"2026-02-19T13:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.067447 4861 apiserver.go:52] "Watching apiserver" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.068596 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 08:44:28.790469241 +0000 UTC Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.070396 4861 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.070746 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.071274 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.071333 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:09 crc kubenswrapper[4861]: E0219 13:10:09.071380 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.071491 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.071615 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.071180 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 13:10:09 crc kubenswrapper[4861]: E0219 13:10:09.071819 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.071861 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 13:10:09 crc kubenswrapper[4861]: E0219 13:10:09.071862 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.073632 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.074039 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.074541 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.074548 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.074660 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.074801 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.074952 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.075056 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.074973 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.112285 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.113674 4861 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.127226 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.137207 4861 scope.go:117] "RemoveContainer" containerID="68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.137214 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 13:10:09 crc kubenswrapper[4861]: E0219 13:10:09.137533 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.138440 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.142513 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.142569 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.142590 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.142610 4861 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.142651 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.142674 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.142693 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.142709 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.142730 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod 
\"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.142748 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.142766 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.142784 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.142805 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.142823 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.142842 4861 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.142865 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.142884 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.142908 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.142928 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.142946 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.142963 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.142983 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143000 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143015 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143033 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143066 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143084 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143225 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143247 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143268 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143286 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143305 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143323 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143345 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143365 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143383 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143402 4861 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143440 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143459 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143478 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143498 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143517 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143537 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143561 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143578 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143598 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143617 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143638 4861 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143657 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143676 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143697 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143714 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143732 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod 
\"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143751 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143769 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143786 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143807 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143828 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143848 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143867 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143887 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143905 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143923 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143941 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143959 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143977 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143998 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144016 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144037 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144056 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144072 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144090 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144108 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144127 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144146 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144163 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144180 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144200 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144218 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144237 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144255 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144274 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144292 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144312 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144330 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144350 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: 
\"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144368 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144392 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144410 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144447 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144464 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144482 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144499 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144515 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144530 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144546 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144562 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144579 4861 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144595 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144631 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144649 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144666 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144683 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" 
(UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144699 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144714 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144730 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144747 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144762 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144781 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144801 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144819 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144837 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144856 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144873 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 
13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144890 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144907 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144925 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144942 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144961 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144978 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144997 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145019 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145037 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145053 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145071 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 13:10:09 crc 
kubenswrapper[4861]: I0219 13:10:09.145090 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145187 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145204 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145222 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145237 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145252 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") 
pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145270 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145286 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145303 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145321 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145336 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145357 4861 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145374 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145392 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145407 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145441 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145462 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145479 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145496 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145513 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145531 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145553 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 13:10:09 crc 
kubenswrapper[4861]: I0219 13:10:09.145569 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145586 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145603 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145622 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145646 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145773 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145805 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145830 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145853 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145880 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145898 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 13:10:09 crc 
kubenswrapper[4861]: I0219 13:10:09.145917 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145933 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145950 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145967 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145984 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.146000 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.146019 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.147085 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.147217 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.147269 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.147295 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 13:10:09 crc 
kubenswrapper[4861]: I0219 13:10:09.147322 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.147377 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.147402 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.147818 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.147851 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.147936 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.147970 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.147997 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148135 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148164 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148199 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148224 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148257 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148285 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148309 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148333 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148360 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148383 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148412 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148461 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148488 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148507 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148527 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148546 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148571 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148596 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148651 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148683 4861 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148706 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148729 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148762 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148790 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148819 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148847 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.149102 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.149135 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.149165 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.149441 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.149511 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.149547 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.156108 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143849 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143952 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.143954 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144025 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144083 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144086 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144181 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144213 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144355 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144405 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144475 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144751 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144756 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144938 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.144984 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145004 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145568 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145641 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145950 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.145984 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.146169 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.146225 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.146387 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.146411 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.146404 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.146682 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.146767 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.146999 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.147023 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.147070 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.147209 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.147769 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.147912 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148085 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148115 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148172 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148134 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148206 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148498 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148657 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148660 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.148805 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.162709 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.149299 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.149330 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.149361 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.149581 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.149943 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.150193 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.150721 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.151392 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.151415 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.151441 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.151465 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.151922 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.151970 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.152232 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.152279 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.152332 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.152337 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.153063 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.153214 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: E0219 13:10:09.153493 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:10:09.65346872 +0000 UTC m=+24.314571948 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.153541 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.154344 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.154574 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.154839 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.154937 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.155268 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.155305 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.155377 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.155518 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.155784 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.156120 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.156170 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.156259 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.156437 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.156509 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.157999 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.158029 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.158028 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.158100 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.158110 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.158132 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.158438 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.158408 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.158566 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.158847 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.158846 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.158947 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.159034 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.159070 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.159259 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.159381 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.159409 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.159630 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.159656 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.160015 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.160135 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.160222 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.160442 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.160530 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.160912 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.161032 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.161123 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.161152 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.161415 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.161975 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.161996 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.162583 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.162857 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.163563 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.163756 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.163782 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.163791 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.163805 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.163814 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:09Z","lastTransitionTime":"2026-02-19T13:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.164475 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.164955 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.165070 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.165097 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.165205 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.165454 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.165719 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.165839 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.165918 4861 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.165966 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.166020 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.166179 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.166566 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.166588 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.166561 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.167000 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.167145 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.167330 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.167361 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.167528 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.167715 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.167792 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.167835 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.168001 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.167997 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 13:10:09 crc kubenswrapper[4861]: E0219 13:10:09.168076 4861 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.168396 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: E0219 13:10:09.168535 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 13:10:09 crc kubenswrapper[4861]: E0219 13:10:09.169030 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 13:10:09 crc kubenswrapper[4861]: E0219 13:10:09.169055 4861 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 13:10:09 crc kubenswrapper[4861]: E0219 13:10:09.169133 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 13:10:09.669106092 +0000 UTC m=+24.330209400 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.169244 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.169616 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.169669 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.169787 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.169942 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.168848 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.168932 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.169248 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.171012 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.171520 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.171868 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.171957 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.172396 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 13:10:09 crc kubenswrapper[4861]: E0219 13:10:09.172088 4861 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 13:10:09 crc kubenswrapper[4861]: E0219 13:10:09.172471 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 13:10:09.672450813 +0000 UTC m=+24.333554041 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 13:10:09 crc kubenswrapper[4861]: E0219 13:10:09.172834 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 13:10:09.672809243 +0000 UTC m=+24.333912581 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 13:10:09 crc kubenswrapper[4861]: E0219 13:10:09.180876 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 13:10:09 crc kubenswrapper[4861]: E0219 13:10:09.180902 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 13:10:09 crc kubenswrapper[4861]: E0219 13:10:09.180914 4861 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 13:10:09 crc kubenswrapper[4861]: E0219 13:10:09.180946 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 13:10:09.680936049 +0000 UTC m=+24.342039277 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.183784 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.184684 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.184877 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.184920 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.185060 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.185647 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.185900 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.186435 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.190159 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.190521 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.192906 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.193527 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.193845 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.194777 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.195395 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.195617 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.196527 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.196714 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.197070 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.197110 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.197179 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.197252 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.197826 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.198002 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.199113 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.201736 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.201775 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.203579 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.206233 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.206383 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.206482 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.206571 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.207545 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.207593 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.207649 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.207886 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.208055 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.208330 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.208565 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.208667 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.208888 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.208892 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.208939 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.209009 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.209096 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.210181 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.210928 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.210938 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.211014 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.211114 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.211162 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.212191 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.212215 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.212488 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.225574 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.227824 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.234838 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.235185 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.241993 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.250624 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.250710 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.250763 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.250777 4861 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.250791 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.250807 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.250821 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.250834 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.250845 4861 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.250855 4861 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.250865 4861 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.250874 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.250886 4861 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.250897 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.250905 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.250914 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.250923 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.250932 4861 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.250940 4861 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.250951 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.250961 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.250971 4861 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.250926 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.250990 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.250982 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251077 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251095 4861 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251108 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251134 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251145 4861 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251156 4861 reconciler_common.go:293] "Volume detached for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251167 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251178 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251189 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251216 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251226 4861 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251235 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251246 4861 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251258 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251268 4861 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251292 4861 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251302 4861 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251311 4861 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251321 4861 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251331 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 
19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251341 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251350 4861 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251372 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251383 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251395 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251404 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251414 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251449 4861 reconciler_common.go:293] 
"Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251459 4861 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251486 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251496 4861 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251521 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251532 4861 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251542 4861 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251551 4861 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251561 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251572 4861 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251598 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251607 4861 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251616 4861 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251626 4861 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251637 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: 
\"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251649 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251678 4861 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251688 4861 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251697 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251707 4861 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251719 4861 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251729 4861 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on 
node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251753 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251761 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251770 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251779 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251789 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251798 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251807 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251837 4861 
reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251870 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251880 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251889 4861 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251915 4861 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251926 4861 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251935 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251944 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251952 4861 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251961 4861 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251970 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.251995 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252004 4861 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252013 4861 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252021 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc 
kubenswrapper[4861]: I0219 13:10:09.252031 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252039 4861 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252048 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252072 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252081 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252091 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252101 4861 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252443 4861 reconciler_common.go:293] "Volume 
detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252455 4861 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252463 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252472 4861 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252481 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252494 4861 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252529 4861 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252544 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: 
\"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252553 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252561 4861 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252571 4861 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252579 4861 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252634 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252683 4861 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252731 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 19 
13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252743 4861 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252752 4861 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252761 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252770 4861 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252804 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252818 4861 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252832 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252845 4861 
reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252857 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252884 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252894 4861 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252905 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252916 4861 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252928 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252939 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: 
\"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252971 4861 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252982 4861 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.252991 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253004 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253019 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253052 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253063 4861 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253075 4861 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253086 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253098 4861 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253128 4861 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253141 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253152 4861 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253164 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" 
DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253175 4861 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253204 4861 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253217 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253230 4861 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253243 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253255 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253286 4861 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253299 4861 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253311 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253325 4861 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253338 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253364 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253377 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253388 4861 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253399 4861 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253411 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253451 4861 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253463 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253475 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253488 4861 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253520 4861 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253534 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: 
\"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253546 4861 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253559 4861 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253571 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253603 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253616 4861 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253628 4861 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253647 4861 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") 
on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253679 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253692 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253705 4861 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253717 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253728 4861 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253758 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253769 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 
13:10:09.253781 4861 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253792 4861 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253804 4861 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253815 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253845 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253857 4861 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253869 4861 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253881 4861 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253892 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253903 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253913 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253942 4861 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253953 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253963 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.253975 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: 
\"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.269019 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.269064 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.269076 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.269095 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.269110 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:09Z","lastTransitionTime":"2026-02-19T13:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.371831 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.371907 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.371927 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.371958 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.371977 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:09Z","lastTransitionTime":"2026-02-19T13:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.388351 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.395048 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.404533 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 13:10:09 crc kubenswrapper[4861]: W0219 13:10:09.410773 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-a9eff9b70f740a2efe703558a0af1743dc51e72f7ca40a3e38d882a7739b90bc WatchSource:0}: Error finding container a9eff9b70f740a2efe703558a0af1743dc51e72f7ca40a3e38d882a7739b90bc: Status 404 returned error can't find the container with id a9eff9b70f740a2efe703558a0af1743dc51e72f7ca40a3e38d882a7739b90bc Feb 19 13:10:09 crc kubenswrapper[4861]: W0219 13:10:09.411555 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-72c1429f9ae7e7445c1db13359a6ec709c3dac450978e7741c5006df6f91f5f3 WatchSource:0}: Error finding container 72c1429f9ae7e7445c1db13359a6ec709c3dac450978e7741c5006df6f91f5f3: Status 404 returned error can't find the container with id 72c1429f9ae7e7445c1db13359a6ec709c3dac450978e7741c5006df6f91f5f3 Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.487727 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.488175 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.488189 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.488212 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.488228 4861 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:09Z","lastTransitionTime":"2026-02-19T13:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.590993 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.591044 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.591056 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.591078 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.591094 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:09Z","lastTransitionTime":"2026-02-19T13:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.658227 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:10:09 crc kubenswrapper[4861]: E0219 13:10:09.658380 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:10:10.65835678 +0000 UTC m=+25.319460018 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.693596 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.693821 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.693902 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.693995 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:09 crc kubenswrapper[4861]: 
I0219 13:10:09.694068 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:09Z","lastTransitionTime":"2026-02-19T13:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.759903 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.759972 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.760010 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.760055 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:09 crc kubenswrapper[4861]: E0219 13:10:09.760217 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 13:10:09 crc kubenswrapper[4861]: E0219 13:10:09.760243 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 13:10:09 crc kubenswrapper[4861]: E0219 13:10:09.760263 4861 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 13:10:09 crc kubenswrapper[4861]: E0219 13:10:09.760335 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 13:10:10.76031139 +0000 UTC m=+25.421414658 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 13:10:09 crc kubenswrapper[4861]: E0219 13:10:09.760470 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 13:10:09 crc kubenswrapper[4861]: E0219 13:10:09.760488 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 13:10:09 crc kubenswrapper[4861]: E0219 13:10:09.760503 4861 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 13:10:09 crc kubenswrapper[4861]: E0219 13:10:09.760545 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 13:10:10.760532176 +0000 UTC m=+25.421635444 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 13:10:09 crc kubenswrapper[4861]: E0219 13:10:09.760602 4861 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 13:10:09 crc kubenswrapper[4861]: E0219 13:10:09.760638 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 13:10:10.760627039 +0000 UTC m=+25.421730297 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 13:10:09 crc kubenswrapper[4861]: E0219 13:10:09.760708 4861 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 13:10:09 crc kubenswrapper[4861]: E0219 13:10:09.760745 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 13:10:10.760734532 +0000 UTC m=+25.421837790 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.797147 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.797195 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.797209 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.797230 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.797247 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:09Z","lastTransitionTime":"2026-02-19T13:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.899676 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.899727 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.899756 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.899778 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.899788 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:09Z","lastTransitionTime":"2026-02-19T13:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.980226 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.980981 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.981924 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.982605 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.983230 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.983841 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.985769 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.986724 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" 
path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.987759 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.988555 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.989319 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.990399 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.991197 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.991987 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.992768 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.993567 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.997214 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.997789 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 19 13:10:09 crc kubenswrapper[4861]: I0219 13:10:09.998775 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.000168 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.000795 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.001576 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.002869 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.002898 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.002910 4861 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.002962 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.002975 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:10Z","lastTransitionTime":"2026-02-19T13:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.003031 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.004190 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.005704 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.006575 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.008173 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.008858 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.010389 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.011321 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.012110 4861 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.012255 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.015864 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.016845 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.018175 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.020217 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.020922 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.022086 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.022835 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.023960 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.024461 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.025271 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.025972 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.026580 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.027122 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.027711 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.028246 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.028981 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.029451 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.029882 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.030318 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.030855 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.031444 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.031891 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.069466 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 05:55:24.766562659 +0000 UTC Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.105133 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.105168 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.105177 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.105193 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.105203 4861 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:10Z","lastTransitionTime":"2026-02-19T13:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.124103 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db"} Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.124153 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca"} Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.124162 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a9eff9b70f740a2efe703558a0af1743dc51e72f7ca40a3e38d882a7739b90bc"} Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.125116 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a26881d17ee052d3e2b94384043dabd898eae2d4e9deab5093ca961fd74feba6"} Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.126403 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611"} Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.126455 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"72c1429f9ae7e7445c1db13359a6ec709c3dac450978e7741c5006df6f91f5f3"} Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.126872 4861 scope.go:117] "RemoveContainer" containerID="68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702" Feb 19 13:10:10 crc kubenswrapper[4861]: E0219 13:10:10.127029 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.139370 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:10Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.159402 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:10Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.176256 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:10Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.190331 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:10Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.204540 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:10Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.208486 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.208657 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.208680 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.208726 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.208741 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:10Z","lastTransitionTime":"2026-02-19T13:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.220078 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:10Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.233074 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:10Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.246668 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:10Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.260469 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:10Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.277933 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:10Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.294540 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:10Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.307920 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:10Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.311847 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.311885 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.311906 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.311930 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.311948 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:10Z","lastTransitionTime":"2026-02-19T13:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.319620 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:10Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.334075 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:10Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.416342 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.416539 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.416633 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.416715 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.416802 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:10Z","lastTransitionTime":"2026-02-19T13:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.520251 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.520355 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.520380 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.520459 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.520487 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:10Z","lastTransitionTime":"2026-02-19T13:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.623571 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.623822 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.623893 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.623958 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.624013 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:10Z","lastTransitionTime":"2026-02-19T13:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.669393 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:10:10 crc kubenswrapper[4861]: E0219 13:10:10.669620 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 13:10:12.669593765 +0000 UTC m=+27.330696993 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.725695 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.725734 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.725745 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.725760 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.725770 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:10Z","lastTransitionTime":"2026-02-19T13:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.770920 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.770995 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.771030 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.771078 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:10 crc kubenswrapper[4861]: E0219 13:10:10.771172 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Feb 19 13:10:10 crc kubenswrapper[4861]: E0219 13:10:10.771237 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 13:10:10 crc kubenswrapper[4861]: E0219 13:10:10.771252 4861 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 13:10:10 crc kubenswrapper[4861]: E0219 13:10:10.771258 4861 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 13:10:10 crc kubenswrapper[4861]: E0219 13:10:10.771314 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 13:10:12.771291797 +0000 UTC m=+27.432395025 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 13:10:10 crc kubenswrapper[4861]: E0219 13:10:10.771332 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-19 13:10:12.771325228 +0000 UTC m=+27.432428456 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 13:10:10 crc kubenswrapper[4861]: E0219 13:10:10.771325 4861 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 13:10:10 crc kubenswrapper[4861]: E0219 13:10:10.771397 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 13:10:10 crc kubenswrapper[4861]: E0219 13:10:10.771446 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 13:10:10 crc kubenswrapper[4861]: E0219 13:10:10.771465 4861 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 13:10:10 crc kubenswrapper[4861]: E0219 13:10:10.771516 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 13:10:12.771478163 +0000 UTC m=+27.432581431 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 13:10:10 crc kubenswrapper[4861]: E0219 13:10:10.771552 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 13:10:12.771537374 +0000 UTC m=+27.432640632 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.828518 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.828584 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.828599 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.828629 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.828646 4861 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:10Z","lastTransitionTime":"2026-02-19T13:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.833876 4861 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.849513 4861 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.931224 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.931289 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.931297 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.931313 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.931323 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:10Z","lastTransitionTime":"2026-02-19T13:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.976677 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.976803 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:10 crc kubenswrapper[4861]: E0219 13:10:10.976913 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:10:10 crc kubenswrapper[4861]: I0219 13:10:10.976816 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:10 crc kubenswrapper[4861]: E0219 13:10:10.977011 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:10:10 crc kubenswrapper[4861]: E0219 13:10:10.977127 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.034309 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.034353 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.034368 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.034387 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.034400 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:11Z","lastTransitionTime":"2026-02-19T13:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.069640 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 05:14:02.078355805 +0000 UTC Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.136977 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.137025 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.137034 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.137049 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.137062 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:11Z","lastTransitionTime":"2026-02-19T13:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.239720 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.239801 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.239821 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.239852 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.239872 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:11Z","lastTransitionTime":"2026-02-19T13:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.343191 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.343273 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.343303 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.343337 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.343367 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:11Z","lastTransitionTime":"2026-02-19T13:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.446321 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.446362 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.446375 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.446393 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.446406 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:11Z","lastTransitionTime":"2026-02-19T13:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.549738 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.549796 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.549809 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.549830 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.549843 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:11Z","lastTransitionTime":"2026-02-19T13:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.652315 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.652348 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.652357 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.652372 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.652380 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:11Z","lastTransitionTime":"2026-02-19T13:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.754995 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.755049 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.755059 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.755081 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.755091 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:11Z","lastTransitionTime":"2026-02-19T13:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.858564 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.858837 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.858902 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.858964 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.859023 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:11Z","lastTransitionTime":"2026-02-19T13:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.962500 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.962557 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.962590 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.963448 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:11 crc kubenswrapper[4861]: I0219 13:10:11.963500 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:11Z","lastTransitionTime":"2026-02-19T13:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.067280 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.067348 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.067370 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.067400 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.067455 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:12Z","lastTransitionTime":"2026-02-19T13:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.070771 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 01:14:16.815876848 +0000 UTC Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.169476 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.169523 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.169535 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.169552 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.169564 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:12Z","lastTransitionTime":"2026-02-19T13:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.272574 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.272722 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.272764 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.272811 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.272824 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:12Z","lastTransitionTime":"2026-02-19T13:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.376117 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.376158 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.376170 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.376185 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.376197 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:12Z","lastTransitionTime":"2026-02-19T13:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.479104 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.479364 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.479516 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.479623 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.479746 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:12Z","lastTransitionTime":"2026-02-19T13:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.581971 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.582012 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.582023 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.582038 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.582048 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:12Z","lastTransitionTime":"2026-02-19T13:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.684275 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.684621 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.684760 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.684877 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.684993 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:12Z","lastTransitionTime":"2026-02-19T13:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.685749 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:10:12 crc kubenswrapper[4861]: E0219 13:10:12.686025 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 13:10:16.685995761 +0000 UTC m=+31.347098999 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.786252 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:12 crc kubenswrapper[4861]: E0219 13:10:12.786503 4861 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.786677 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:12 crc kubenswrapper[4861]: E0219 13:10:12.786743 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-19 13:10:16.786720454 +0000 UTC m=+31.447823682 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.786911 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.786990 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:12 crc kubenswrapper[4861]: E0219 13:10:12.786930 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 13:10:12 crc kubenswrapper[4861]: E0219 13:10:12.787177 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 13:10:12 crc kubenswrapper[4861]: E0219 13:10:12.787001 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 13:10:12 crc kubenswrapper[4861]: E0219 13:10:12.787249 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 13:10:12 crc kubenswrapper[4861]: E0219 13:10:12.787267 4861 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 13:10:12 crc kubenswrapper[4861]: E0219 13:10:12.787046 4861 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 13:10:12 crc kubenswrapper[4861]: E0219 13:10:12.787333 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 13:10:16.787310522 +0000 UTC m=+31.448413760 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 13:10:12 crc kubenswrapper[4861]: E0219 13:10:12.787392 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-19 13:10:16.787365804 +0000 UTC m=+31.448469142 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 13:10:12 crc kubenswrapper[4861]: E0219 13:10:12.787224 4861 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 13:10:12 crc kubenswrapper[4861]: E0219 13:10:12.787461 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 13:10:16.787452036 +0000 UTC m=+31.448555274 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.787605 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.787625 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.787636 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.787654 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.787665 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:12Z","lastTransitionTime":"2026-02-19T13:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.890106 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.890153 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.890169 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.890199 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.890212 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:12Z","lastTransitionTime":"2026-02-19T13:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.947539 4861 csr.go:261] certificate signing request csr-2rgmj is approved, waiting to be issued Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.976677 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.976748 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.976751 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:12 crc kubenswrapper[4861]: E0219 13:10:12.977256 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:10:12 crc kubenswrapper[4861]: E0219 13:10:12.977213 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:10:12 crc kubenswrapper[4861]: E0219 13:10:12.977430 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.984322 4861 csr.go:257] certificate signing request csr-2rgmj is issued Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.992717 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.992759 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.992769 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.992788 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.992798 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:12Z","lastTransitionTime":"2026-02-19T13:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.996664 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-tcrxv"] Feb 19 13:10:12 crc kubenswrapper[4861]: I0219 13:10:12.997036 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-tcrxv" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.001199 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.001267 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.001279 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.040502 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:13Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.058931 4861 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:13Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.072541 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 22:07:16.748894233 +0000 UTC Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.079254 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:13Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.089581 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a652269a-f440-45e1-bae5-29a3dfab4f51-hosts-file\") pod \"node-resolver-tcrxv\" (UID: \"a652269a-f440-45e1-bae5-29a3dfab4f51\") " pod="openshift-dns/node-resolver-tcrxv" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.089952 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx5d7\" (UniqueName: \"kubernetes.io/projected/a652269a-f440-45e1-bae5-29a3dfab4f51-kube-api-access-xx5d7\") pod \"node-resolver-tcrxv\" (UID: \"a652269a-f440-45e1-bae5-29a3dfab4f51\") " pod="openshift-dns/node-resolver-tcrxv" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.094807 4861 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.094902 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.094962 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.095035 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.095111 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:13Z","lastTransitionTime":"2026-02-19T13:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.111612 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:13Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.125115 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:13Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.134729 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4"} Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.140580 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:13Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.152691 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:13Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.161889 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:13Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.177012 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:13Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.190995 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a652269a-f440-45e1-bae5-29a3dfab4f51-hosts-file\") pod \"node-resolver-tcrxv\" (UID: \"a652269a-f440-45e1-bae5-29a3dfab4f51\") " pod="openshift-dns/node-resolver-tcrxv" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.191082 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx5d7\" (UniqueName: \"kubernetes.io/projected/a652269a-f440-45e1-bae5-29a3dfab4f51-kube-api-access-xx5d7\") pod \"node-resolver-tcrxv\" (UID: \"a652269a-f440-45e1-bae5-29a3dfab4f51\") " pod="openshift-dns/node-resolver-tcrxv" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.191489 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a652269a-f440-45e1-bae5-29a3dfab4f51-hosts-file\") pod \"node-resolver-tcrxv\" (UID: \"a652269a-f440-45e1-bae5-29a3dfab4f51\") " pod="openshift-dns/node-resolver-tcrxv" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.192994 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:13Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.197834 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.197902 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.197912 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.197929 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.197941 4861 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:13Z","lastTransitionTime":"2026-02-19T13:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.209012 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:13Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.216491 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx5d7\" (UniqueName: \"kubernetes.io/projected/a652269a-f440-45e1-bae5-29a3dfab4f51-kube-api-access-xx5d7\") pod \"node-resolver-tcrxv\" (UID: \"a652269a-f440-45e1-bae5-29a3dfab4f51\") " pod="openshift-dns/node-resolver-tcrxv" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.222890 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:10:13Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.233168 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:13Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.244630 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:13Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.261449 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:13Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.276647 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:13Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.300952 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.300995 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.301007 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.301025 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.301036 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:13Z","lastTransitionTime":"2026-02-19T13:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.310285 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-tcrxv" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.404370 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.404467 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.404480 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.404503 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.404515 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:13Z","lastTransitionTime":"2026-02-19T13:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.507237 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.507673 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.507684 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.507705 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.507719 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:13Z","lastTransitionTime":"2026-02-19T13:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.610050 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.610080 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.610089 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.610105 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.610116 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:13Z","lastTransitionTime":"2026-02-19T13:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.712403 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.712460 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.712468 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.712484 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.712492 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:13Z","lastTransitionTime":"2026-02-19T13:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.814263 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.814305 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.814316 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.814333 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.814344 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:13Z","lastTransitionTime":"2026-02-19T13:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.916975 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.917020 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.917030 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.917049 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.917061 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:13Z","lastTransitionTime":"2026-02-19T13:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.985770 4861 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-19 13:05:12 +0000 UTC, rotation deadline is 2026-12-26 16:16:31.183355745 +0000 UTC Feb 19 13:10:13 crc kubenswrapper[4861]: I0219 13:10:13.985812 4861 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7443h6m17.197545951s for next certificate rotation Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.005719 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-lwqpq"] Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.006079 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.006143 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-ffskh"] Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.006542 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.007547 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-mhpx8"] Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.008144 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" Feb 19 13:10:14 crc kubenswrapper[4861]: W0219 13:10:14.010321 4861 reflector.go:561] object-"openshift-multus"/"cni-copy-resources": failed to list *v1.ConfigMap: configmaps "cni-copy-resources" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Feb 19 13:10:14 crc kubenswrapper[4861]: E0219 13:10:14.010400 4861 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"cni-copy-resources\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cni-copy-resources\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.010457 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.010596 4861 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"proxy-tls" Feb 19 13:10:14 crc kubenswrapper[4861]: W0219 13:10:14.010711 4861 reflector.go:561] object-"openshift-multus"/"default-dockercfg-2q5b6": failed to list *v1.Secret: secrets "default-dockercfg-2q5b6" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Feb 19 13:10:14 crc kubenswrapper[4861]: E0219 13:10:14.010743 4861 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-dockercfg-2q5b6\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"default-dockercfg-2q5b6\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.010759 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.010845 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 19 13:10:14 crc kubenswrapper[4861]: W0219 13:10:14.012202 4861 reflector.go:561] object-"openshift-multus"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Feb 19 13:10:14 crc kubenswrapper[4861]: E0219 13:10:14.012227 4861 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource 
\"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 19 13:10:14 crc kubenswrapper[4861]: W0219 13:10:14.013208 4861 reflector.go:561] object-"openshift-multus"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Feb 19 13:10:14 crc kubenswrapper[4861]: E0219 13:10:14.013257 4861 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 19 13:10:14 crc kubenswrapper[4861]: W0219 13:10:14.013328 4861 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: configmaps "default-cni-sysctl-allowlist" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Feb 19 13:10:14 crc kubenswrapper[4861]: E0219 13:10:14.013341 4861 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"default-cni-sysctl-allowlist\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.013386 4861 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.013786 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 19 13:10:14 crc kubenswrapper[4861]: W0219 13:10:14.015081 4861 reflector.go:561] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": failed to list *v1.Secret: secrets "multus-ancillary-tools-dockercfg-vnmsz" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Feb 19 13:10:14 crc kubenswrapper[4861]: E0219 13:10:14.015110 4861 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-vnmsz\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"multus-ancillary-tools-dockercfg-vnmsz\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.020942 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.021010 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.021026 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.021049 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.021061 4861 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:14Z","lastTransitionTime":"2026-02-19T13:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.028759 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.047060 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.058949 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.070969 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.073726 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 16:20:55.260532335 +0000 UTC Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.082139 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.099084 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.101649 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-cnibin\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.101712 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-multus-daemon-config\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.101755 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-host-run-multus-certs\") pod 
\"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.101788 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-etc-kubernetes\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.101825 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f491f78c-b995-44c7-8395-41d8a2c4cf29-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mhpx8\" (UID: \"f491f78c-b995-44c7-8395-41d8a2c4cf29\") " pod="openshift-multus/multus-additional-cni-plugins-mhpx8" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.101900 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-system-cni-dir\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.101958 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-os-release\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.101990 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-cni-binary-copy\") pod \"multus-ffskh\" (UID: 
\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.102025 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f491f78c-b995-44c7-8395-41d8a2c4cf29-system-cni-dir\") pod \"multus-additional-cni-plugins-mhpx8\" (UID: \"f491f78c-b995-44c7-8395-41d8a2c4cf29\") " pod="openshift-multus/multus-additional-cni-plugins-mhpx8" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.102054 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f491f78c-b995-44c7-8395-41d8a2c4cf29-cnibin\") pod \"multus-additional-cni-plugins-mhpx8\" (UID: \"f491f78c-b995-44c7-8395-41d8a2c4cf29\") " pod="openshift-multus/multus-additional-cni-plugins-mhpx8" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.102078 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f491f78c-b995-44c7-8395-41d8a2c4cf29-os-release\") pod \"multus-additional-cni-plugins-mhpx8\" (UID: \"f491f78c-b995-44c7-8395-41d8a2c4cf29\") " pod="openshift-multus/multus-additional-cni-plugins-mhpx8" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.102100 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-hostroot\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.102181 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/478e6971-05ac-43f2-99a2-cd93644c6227-rootfs\") pod 
\"machine-config-daemon-lwqpq\" (UID: \"478e6971-05ac-43f2-99a2-cd93644c6227\") " pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.102257 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-multus-conf-dir\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.102307 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cfql\" (UniqueName: \"kubernetes.io/projected/478e6971-05ac-43f2-99a2-cd93644c6227-kube-api-access-4cfql\") pod \"machine-config-daemon-lwqpq\" (UID: \"478e6971-05ac-43f2-99a2-cd93644c6227\") " pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.102343 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-host-run-k8s-cni-cncf-io\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.102378 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/478e6971-05ac-43f2-99a2-cd93644c6227-mcd-auth-proxy-config\") pod \"machine-config-daemon-lwqpq\" (UID: \"478e6971-05ac-43f2-99a2-cd93644c6227\") " pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.102417 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4z2rx\" (UniqueName: \"kubernetes.io/projected/f491f78c-b995-44c7-8395-41d8a2c4cf29-kube-api-access-4z2rx\") pod \"multus-additional-cni-plugins-mhpx8\" (UID: \"f491f78c-b995-44c7-8395-41d8a2c4cf29\") " pod="openshift-multus/multus-additional-cni-plugins-mhpx8" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.102482 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxqsq\" (UniqueName: \"kubernetes.io/projected/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-kube-api-access-xxqsq\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.102543 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-multus-cni-dir\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.102588 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-host-var-lib-cni-bin\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.102638 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-host-run-netns\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.102672 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-host-var-lib-kubelet\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.102758 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-multus-socket-dir-parent\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.102806 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/478e6971-05ac-43f2-99a2-cd93644c6227-proxy-tls\") pod \"machine-config-daemon-lwqpq\" (UID: \"478e6971-05ac-43f2-99a2-cd93644c6227\") " pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.102835 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f491f78c-b995-44c7-8395-41d8a2c4cf29-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mhpx8\" (UID: \"f491f78c-b995-44c7-8395-41d8a2c4cf29\") " pod="openshift-multus/multus-additional-cni-plugins-mhpx8" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.102856 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f491f78c-b995-44c7-8395-41d8a2c4cf29-cni-binary-copy\") pod \"multus-additional-cni-plugins-mhpx8\" (UID: \"f491f78c-b995-44c7-8395-41d8a2c4cf29\") " pod="openshift-multus/multus-additional-cni-plugins-mhpx8" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.102884 4861 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-host-var-lib-cni-multus\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.112205 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.123888 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.123940 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.123957 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.123980 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.123997 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:14Z","lastTransitionTime":"2026-02-19T13:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.131988 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.138770 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tcrxv" event={"ID":"a652269a-f440-45e1-bae5-29a3dfab4f51","Type":"ContainerStarted","Data":"8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99"} Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.138846 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tcrxv" event={"ID":"a652269a-f440-45e1-bae5-29a3dfab4f51","Type":"ContainerStarted","Data":"7a22a362bac81800efff5cfeab4fa228b3dd2f5cd2b8d15b809f80ec07021799"} Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.151846 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.177528 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.197743 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.204252 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cfql\" (UniqueName: \"kubernetes.io/projected/478e6971-05ac-43f2-99a2-cd93644c6227-kube-api-access-4cfql\") pod \"machine-config-daemon-lwqpq\" (UID: \"478e6971-05ac-43f2-99a2-cd93644c6227\") " pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.204317 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/478e6971-05ac-43f2-99a2-cd93644c6227-mcd-auth-proxy-config\") pod \"machine-config-daemon-lwqpq\" (UID: \"478e6971-05ac-43f2-99a2-cd93644c6227\") " pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.204351 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z2rx\" (UniqueName: \"kubernetes.io/projected/f491f78c-b995-44c7-8395-41d8a2c4cf29-kube-api-access-4z2rx\") pod \"multus-additional-cni-plugins-mhpx8\" (UID: 
\"f491f78c-b995-44c7-8395-41d8a2c4cf29\") " pod="openshift-multus/multus-additional-cni-plugins-mhpx8" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.204731 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-host-run-k8s-cni-cncf-io\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.204806 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-multus-cni-dir\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.204837 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-host-var-lib-cni-bin\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.204907 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxqsq\" (UniqueName: \"kubernetes.io/projected/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-kube-api-access-xxqsq\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.204907 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-host-run-k8s-cni-cncf-io\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 
crc kubenswrapper[4861]: I0219 13:10:14.205071 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-multus-cni-dir\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.205087 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-host-var-lib-cni-bin\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.205099 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-host-run-netns\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.205141 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-host-run-netns\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.205150 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-host-var-lib-kubelet\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.205182 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-host-var-lib-kubelet\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.205209 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-multus-socket-dir-parent\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.205246 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/478e6971-05ac-43f2-99a2-cd93644c6227-proxy-tls\") pod \"machine-config-daemon-lwqpq\" (UID: \"478e6971-05ac-43f2-99a2-cd93644c6227\") " pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.205280 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f491f78c-b995-44c7-8395-41d8a2c4cf29-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mhpx8\" (UID: \"f491f78c-b995-44c7-8395-41d8a2c4cf29\") " pod="openshift-multus/multus-additional-cni-plugins-mhpx8" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.205295 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-multus-socket-dir-parent\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.205311 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/f491f78c-b995-44c7-8395-41d8a2c4cf29-cni-binary-copy\") pod \"multus-additional-cni-plugins-mhpx8\" (UID: \"f491f78c-b995-44c7-8395-41d8a2c4cf29\") " pod="openshift-multus/multus-additional-cni-plugins-mhpx8" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.205366 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-host-var-lib-cni-multus\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.205402 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-cnibin\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.205608 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-multus-daemon-config\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.205650 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-host-run-multus-certs\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.205689 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f491f78c-b995-44c7-8395-41d8a2c4cf29-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-mhpx8\" (UID: \"f491f78c-b995-44c7-8395-41d8a2c4cf29\") " pod="openshift-multus/multus-additional-cni-plugins-mhpx8" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.205719 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-system-cni-dir\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.205549 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-cnibin\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.205751 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-os-release\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.205760 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-host-run-multus-certs\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.205780 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-cni-binary-copy\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 
13:10:14.205811 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-etc-kubernetes\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.205462 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-host-var-lib-cni-multus\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.205910 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-system-cni-dir\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.205364 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/478e6971-05ac-43f2-99a2-cd93644c6227-mcd-auth-proxy-config\") pod \"machine-config-daemon-lwqpq\" (UID: \"478e6971-05ac-43f2-99a2-cd93644c6227\") " pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.205954 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-os-release\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.205977 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-etc-kubernetes\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.205849 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f491f78c-b995-44c7-8395-41d8a2c4cf29-system-cni-dir\") pod \"multus-additional-cni-plugins-mhpx8\" (UID: \"f491f78c-b995-44c7-8395-41d8a2c4cf29\") " pod="openshift-multus/multus-additional-cni-plugins-mhpx8" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.206072 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f491f78c-b995-44c7-8395-41d8a2c4cf29-system-cni-dir\") pod \"multus-additional-cni-plugins-mhpx8\" (UID: \"f491f78c-b995-44c7-8395-41d8a2c4cf29\") " pod="openshift-multus/multus-additional-cni-plugins-mhpx8" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.206085 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f491f78c-b995-44c7-8395-41d8a2c4cf29-cnibin\") pod \"multus-additional-cni-plugins-mhpx8\" (UID: \"f491f78c-b995-44c7-8395-41d8a2c4cf29\") " pod="openshift-multus/multus-additional-cni-plugins-mhpx8" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.206117 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f491f78c-b995-44c7-8395-41d8a2c4cf29-os-release\") pod \"multus-additional-cni-plugins-mhpx8\" (UID: \"f491f78c-b995-44c7-8395-41d8a2c4cf29\") " pod="openshift-multus/multus-additional-cni-plugins-mhpx8" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.206150 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/f491f78c-b995-44c7-8395-41d8a2c4cf29-cnibin\") pod \"multus-additional-cni-plugins-mhpx8\" (UID: \"f491f78c-b995-44c7-8395-41d8a2c4cf29\") " pod="openshift-multus/multus-additional-cni-plugins-mhpx8" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.206182 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-hostroot\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.206236 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f491f78c-b995-44c7-8395-41d8a2c4cf29-os-release\") pod \"multus-additional-cni-plugins-mhpx8\" (UID: \"f491f78c-b995-44c7-8395-41d8a2c4cf29\") " pod="openshift-multus/multus-additional-cni-plugins-mhpx8" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.206247 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/478e6971-05ac-43f2-99a2-cd93644c6227-rootfs\") pod \"machine-config-daemon-lwqpq\" (UID: \"478e6971-05ac-43f2-99a2-cd93644c6227\") " pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.206280 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/478e6971-05ac-43f2-99a2-cd93644c6227-rootfs\") pod \"machine-config-daemon-lwqpq\" (UID: \"478e6971-05ac-43f2-99a2-cd93644c6227\") " pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.206245 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-hostroot\") pod 
\"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.206295 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-multus-conf-dir\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.206335 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-multus-conf-dir\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.206407 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-multus-daemon-config\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.206466 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f491f78c-b995-44c7-8395-41d8a2c4cf29-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mhpx8\" (UID: \"f491f78c-b995-44c7-8395-41d8a2c4cf29\") " pod="openshift-multus/multus-additional-cni-plugins-mhpx8" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.210651 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/478e6971-05ac-43f2-99a2-cd93644c6227-proxy-tls\") pod \"machine-config-daemon-lwqpq\" (UID: \"478e6971-05ac-43f2-99a2-cd93644c6227\") " pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 
13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.221924 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.227034 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.227080 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.227096 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.227118 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:14 crc 
kubenswrapper[4861]: I0219 13:10:14.227131 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:14Z","lastTransitionTime":"2026-02-19T13:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.240248 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.246366 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cfql\" (UniqueName: \"kubernetes.io/projected/478e6971-05ac-43f2-99a2-cd93644c6227-kube-api-access-4cfql\") pod \"machine-config-daemon-lwqpq\" (UID: \"478e6971-05ac-43f2-99a2-cd93644c6227\") " pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.253372 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, 
/tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.266315 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.321857 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.324500 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.331095 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.331137 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.331153 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.331175 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.331204 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:14Z","lastTransitionTime":"2026-02-19T13:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:14 crc kubenswrapper[4861]: W0219 13:10:14.335756 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod478e6971_05ac_43f2_99a2_cd93644c6227.slice/crio-57993d5c620f0dbbe901a01e07b729ba24bbd94bda05997b5808e4f8ca1bd788 WatchSource:0}: Error finding container 57993d5c620f0dbbe901a01e07b729ba24bbd94bda05997b5808e4f8ca1bd788: Status 404 returned error can't find the container with id 57993d5c620f0dbbe901a01e07b729ba24bbd94bda05997b5808e4f8ca1bd788 Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.344386 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.366844 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.379009 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.389152 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wb9bn"] Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.390135 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.392156 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.392511 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.392571 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.392782 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.392863 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.393302 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.394181 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.399012 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.417628 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.429984 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.435629 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.435682 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.435694 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.435713 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.435728 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:14Z","lastTransitionTime":"2026-02-19T13:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.442054 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.456566 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.468843 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.489857 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.505328 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.508162 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2b4f740d-a1ca-450f-adad-afb42efe0c76-ovnkube-script-lib\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc 
kubenswrapper[4861]: I0219 13:10:14.508247 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-cni-netd\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.508319 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-run-systemd\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.508359 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-etc-openvswitch\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.508453 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-systemd-units\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.508491 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-run-openvswitch\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 
13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.508564 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-cni-bin\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.508653 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2b4f740d-a1ca-450f-adad-afb42efe0c76-ovn-node-metrics-cert\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.508734 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.508811 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2b4f740d-a1ca-450f-adad-afb42efe0c76-env-overrides\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.508945 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-node-log\") pod \"ovnkube-node-wb9bn\" (UID: 
\"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.509041 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-slash\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.509108 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-run-ovn\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.509165 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2b4f740d-a1ca-450f-adad-afb42efe0c76-ovnkube-config\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.509219 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-var-lib-openvswitch\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.509263 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-log-socket\") pod \"ovnkube-node-wb9bn\" (UID: 
\"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.509294 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f9jb\" (UniqueName: \"kubernetes.io/projected/2b4f740d-a1ca-450f-adad-afb42efe0c76-kube-api-access-4f9jb\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.509326 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-run-netns\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.509368 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-kubelet\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.509400 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-run-ovn-kubernetes\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.517804 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.529676 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.538941 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.538979 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.538990 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.539006 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.539018 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:14Z","lastTransitionTime":"2026-02-19T13:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.544883 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.559693 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.570731 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.594199 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.599379 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.603198 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.605858 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.610568 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-slash\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.610605 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-run-ovn\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.610622 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2b4f740d-a1ca-450f-adad-afb42efe0c76-ovnkube-config\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.610646 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-var-lib-openvswitch\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.610668 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-log-socket\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.610693 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f9jb\" (UniqueName: \"kubernetes.io/projected/2b4f740d-a1ca-450f-adad-afb42efe0c76-kube-api-access-4f9jb\") 
pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.610697 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-slash\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.610718 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-run-netns\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.610743 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-kubelet\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.610751 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-run-ovn\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.610765 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-run-ovn-kubernetes\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.610794 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-kubelet\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.610799 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2b4f740d-a1ca-450f-adad-afb42efe0c76-ovnkube-script-lib\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.610818 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-log-socket\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.610857 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-cni-netd\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.610833 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-cni-netd\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 
13:10:14.610868 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-run-netns\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.610919 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-run-systemd\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.610954 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-run-systemd\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.610870 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-run-ovn-kubernetes\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.610978 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-etc-openvswitch\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.610984 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-var-lib-openvswitch\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.611006 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-systemd-units\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.611030 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-run-openvswitch\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.611008 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-etc-openvswitch\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.611069 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-systemd-units\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.611070 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-cni-bin\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.611090 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-cni-bin\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.611116 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-run-openvswitch\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.611203 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2b4f740d-a1ca-450f-adad-afb42efe0c76-ovn-node-metrics-cert\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.611241 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2b4f740d-a1ca-450f-adad-afb42efe0c76-env-overrides\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.611266 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.611297 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-node-log\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.611375 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.611388 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-node-log\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.611611 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2b4f740d-a1ca-450f-adad-afb42efe0c76-ovnkube-config\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.611671 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/2b4f740d-a1ca-450f-adad-afb42efe0c76-ovnkube-script-lib\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.611877 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2b4f740d-a1ca-450f-adad-afb42efe0c76-env-overrides\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.614301 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2b4f740d-a1ca-450f-adad-afb42efe0c76-ovn-node-metrics-cert\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.617454 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.629208 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.631508 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f9jb\" (UniqueName: \"kubernetes.io/projected/2b4f740d-a1ca-450f-adad-afb42efe0c76-kube-api-access-4f9jb\") pod \"ovnkube-node-wb9bn\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.641748 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.641813 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.641826 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.641850 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.641863 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:14Z","lastTransitionTime":"2026-02-19T13:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.645831 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.664643 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.684039 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.696890 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.709268 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.711951 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.722797 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: W0219 13:10:14.724121 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b4f740d_a1ca_450f_adad_afb42efe0c76.slice/crio-60835a65e091c0f93a09823014ee72c1964da4ba412855e5d2bcdd7f35b22871 WatchSource:0}: Error finding container 60835a65e091c0f93a09823014ee72c1964da4ba412855e5d2bcdd7f35b22871: Status 404 returned error can't find the container with id 60835a65e091c0f93a09823014ee72c1964da4ba412855e5d2bcdd7f35b22871 Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.739113 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.743645 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.743695 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.743708 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.743728 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.743740 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:14Z","lastTransitionTime":"2026-02-19T13:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.755977 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.780522 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.796524 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.810793 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.827258 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.834364 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.836701 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f491f78c-b995-44c7-8395-41d8a2c4cf29-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mhpx8\" (UID: \"f491f78c-b995-44c7-8395-41d8a2c4cf29\") " pod="openshift-multus/multus-additional-cni-plugins-mhpx8" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.846818 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.846866 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.846876 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.846893 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.846904 4861 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:14Z","lastTransitionTime":"2026-02-19T13:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.847823 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.864737 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.887899 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.891987 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.899777 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\
\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.914124 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.930303 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.944375 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.949157 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.949204 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.949215 4861 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.949235 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.949246 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:14Z","lastTransitionTime":"2026-02-19T13:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.960937 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.971144 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.976997 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.977051 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:14 crc kubenswrapper[4861]: E0219 13:10:14.977133 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.977086 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:14 crc kubenswrapper[4861]: E0219 13:10:14.977233 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:10:14 crc kubenswrapper[4861]: E0219 13:10:14.977319 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:10:14 crc kubenswrapper[4861]: I0219 13:10:14.983791 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:14Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.051730 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.051775 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.051787 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.051807 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.051817 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:15Z","lastTransitionTime":"2026-02-19T13:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.056770 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.074484 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 10:44:21.285569041 +0000 UTC Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.144553 4861 generic.go:334] "Generic (PLEG): container finished" podID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerID="ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a" exitCode=0 Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.144652 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" event={"ID":"2b4f740d-a1ca-450f-adad-afb42efe0c76","Type":"ContainerDied","Data":"ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a"} Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.144728 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" event={"ID":"2b4f740d-a1ca-450f-adad-afb42efe0c76","Type":"ContainerStarted","Data":"60835a65e091c0f93a09823014ee72c1964da4ba412855e5d2bcdd7f35b22871"} Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.146596 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90"} Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.146662 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" 
event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075"} Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.146678 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"57993d5c620f0dbbe901a01e07b729ba24bbd94bda05997b5808e4f8ca1bd788"} Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.154631 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.154684 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.154702 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.154724 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.154744 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:15Z","lastTransitionTime":"2026-02-19T13:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.162163 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:15Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.178513 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:15Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.196545 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:15Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:15 crc kubenswrapper[4861]: E0219 13:10:15.205661 4861 configmap.go:193] Couldn't get configMap openshift-multus/cni-copy-resources: failed to sync configmap cache: timed out waiting for the condition Feb 19 13:10:15 crc kubenswrapper[4861]: E0219 13:10:15.205772 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f491f78c-b995-44c7-8395-41d8a2c4cf29-cni-binary-copy podName:f491f78c-b995-44c7-8395-41d8a2c4cf29 nodeName:}" failed. No retries permitted until 2026-02-19 13:10:15.705745083 +0000 UTC m=+30.366848311 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cni-binary-copy" (UniqueName: "kubernetes.io/configmap/f491f78c-b995-44c7-8395-41d8a2c4cf29-cni-binary-copy") pod "multus-additional-cni-plugins-mhpx8" (UID: "f491f78c-b995-44c7-8395-41d8a2c4cf29") : failed to sync configmap cache: timed out waiting for the condition Feb 19 13:10:15 crc kubenswrapper[4861]: E0219 13:10:15.206043 4861 configmap.go:193] Couldn't get configMap openshift-multus/cni-copy-resources: failed to sync configmap cache: timed out waiting for the condition Feb 19 13:10:15 crc kubenswrapper[4861]: E0219 13:10:15.206099 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-cni-binary-copy podName:1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb nodeName:}" failed. No retries permitted until 2026-02-19 13:10:15.706085113 +0000 UTC m=+30.367188341 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cni-binary-copy" (UniqueName: "kubernetes.io/configmap/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-cni-binary-copy") pod "multus-ffskh" (UID: "1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb") : failed to sync configmap cache: timed out waiting for the condition Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.216716 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, 
/tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:15Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.231543 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:15Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.248468 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:15Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.258512 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.258588 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.258599 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.258620 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.258630 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:15Z","lastTransitionTime":"2026-02-19T13:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.270271 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:15Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.287659 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:15Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.301912 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:15Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.309167 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.315242 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.317176 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:15Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.318994 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxqsq\" (UniqueName: \"kubernetes.io/projected/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-kube-api-access-xxqsq\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.321177 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z2rx\" (UniqueName: \"kubernetes.io/projected/f491f78c-b995-44c7-8395-41d8a2c4cf29-kube-api-access-4z2rx\") pod \"multus-additional-cni-plugins-mhpx8\" (UID: \"f491f78c-b995-44c7-8395-41d8a2c4cf29\") " pod="openshift-multus/multus-additional-cni-plugins-mhpx8" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.329605 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:15Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.359021 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:10:15Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.361193 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.361221 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.361230 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.361245 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.361255 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:15Z","lastTransitionTime":"2026-02-19T13:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.398223 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:15Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.437869 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:15Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.448316 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.463505 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.463542 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.463553 4861 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.463572 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.463582 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:15Z","lastTransitionTime":"2026-02-19T13:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.496032 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:15Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.535168 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:15Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.566709 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.566757 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.566766 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.566781 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.566791 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:15Z","lastTransitionTime":"2026-02-19T13:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.584508 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:15Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.615210 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:15Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.657877 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:10:15Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.669614 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.669654 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.669667 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.669686 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.669698 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:15Z","lastTransitionTime":"2026-02-19T13:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.696227 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:15Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.723940 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f491f78c-b995-44c7-8395-41d8a2c4cf29-cni-binary-copy\") pod 
\"multus-additional-cni-plugins-mhpx8\" (UID: \"f491f78c-b995-44c7-8395-41d8a2c4cf29\") " pod="openshift-multus/multus-additional-cni-plugins-mhpx8" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.724592 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-cni-binary-copy\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.725363 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f491f78c-b995-44c7-8395-41d8a2c4cf29-cni-binary-copy\") pod \"multus-additional-cni-plugins-mhpx8\" (UID: \"f491f78c-b995-44c7-8395-41d8a2c4cf29\") " pod="openshift-multus/multus-additional-cni-plugins-mhpx8" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.725474 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb-cni-binary-copy\") pod \"multus-ffskh\" (UID: \"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\") " pod="openshift-multus/multus-ffskh" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.737898 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:15Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.758951 4861 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 19 13:10:15 crc kubenswrapper[4861]: W0219 13:10:15.760384 4861 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"default-cni-sysctl-allowlist": Unexpected watch close - watch lasted less than a second and no items received Feb 19 13:10:15 crc kubenswrapper[4861]: W0219 13:10:15.761107 4861 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"default-dockercfg-2q5b6": Unexpected watch close - watch lasted less than a second and no items received Feb 19 13:10:15 crc kubenswrapper[4861]: W0219 13:10:15.761269 4861 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"cni-copy-resources": Unexpected watch close - watch lasted less than a second and no items received Feb 19 13:10:15 crc kubenswrapper[4861]: W0219 13:10:15.761598 4861 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short 
watch: object-"openshift-multus"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Feb 19 13:10:15 crc kubenswrapper[4861]: W0219 13:10:15.761618 4861 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": Unexpected watch close - watch lasted less than a second and no items received Feb 19 13:10:15 crc kubenswrapper[4861]: W0219 13:10:15.761643 4861 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.777160 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.777191 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.777200 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.777215 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.777224 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:15Z","lastTransitionTime":"2026-02-19T13:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.782632 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:15Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.815672 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:15Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.829084 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-ffskh" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.838215 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.845116 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-gwkfm"] Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.845489 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-gwkfm" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.869870 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.876258 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:15Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.904923 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.904959 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.904974 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.904995 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.905006 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:15Z","lastTransitionTime":"2026-02-19T13:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.907514 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.914196 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.930942 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e143d63f-12e2-4d59-9d2d-11057486b27e-serviceca\") pod \"node-ca-gwkfm\" (UID: \"e143d63f-12e2-4d59-9d2d-11057486b27e\") " pod="openshift-image-registry/node-ca-gwkfm" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.931007 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p9xm\" (UniqueName: \"kubernetes.io/projected/e143d63f-12e2-4d59-9d2d-11057486b27e-kube-api-access-2p9xm\") pod \"node-ca-gwkfm\" (UID: \"e143d63f-12e2-4d59-9d2d-11057486b27e\") " pod="openshift-image-registry/node-ca-gwkfm" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.931041 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e143d63f-12e2-4d59-9d2d-11057486b27e-host\") pod \"node-ca-gwkfm\" (UID: \"e143d63f-12e2-4d59-9d2d-11057486b27e\") " pod="openshift-image-registry/node-ca-gwkfm" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.934701 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 19 13:10:15 crc kubenswrapper[4861]: I0219 13:10:15.979368 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:15Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.010650 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.010697 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.010712 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.010733 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.010745 4861 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:16Z","lastTransitionTime":"2026-02-19T13:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.028974 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.031971 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p9xm\" (UniqueName: \"kubernetes.io/projected/e143d63f-12e2-4d59-9d2d-11057486b27e-kube-api-access-2p9xm\") pod \"node-ca-gwkfm\" (UID: \"e143d63f-12e2-4d59-9d2d-11057486b27e\") " pod="openshift-image-registry/node-ca-gwkfm" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.032014 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e143d63f-12e2-4d59-9d2d-11057486b27e-host\") pod \"node-ca-gwkfm\" (UID: \"e143d63f-12e2-4d59-9d2d-11057486b27e\") " pod="openshift-image-registry/node-ca-gwkfm" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.032067 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e143d63f-12e2-4d59-9d2d-11057486b27e-serviceca\") pod \"node-ca-gwkfm\" (UID: \"e143d63f-12e2-4d59-9d2d-11057486b27e\") " pod="openshift-image-registry/node-ca-gwkfm" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.033134 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" 
(UniqueName: \"kubernetes.io/configmap/e143d63f-12e2-4d59-9d2d-11057486b27e-serviceca\") pod \"node-ca-gwkfm\" (UID: \"e143d63f-12e2-4d59-9d2d-11057486b27e\") " pod="openshift-image-registry/node-ca-gwkfm" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.033327 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e143d63f-12e2-4d59-9d2d-11057486b27e-host\") pod \"node-ca-gwkfm\" (UID: \"e143d63f-12e2-4d59-9d2d-11057486b27e\") " pod="openshift-image-registry/node-ca-gwkfm" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.075260 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 12:15:49.1798732 +0000 UTC Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.077932 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p9xm\" (UniqueName: \"kubernetes.io/projected/e143d63f-12e2-4d59-9d2d-11057486b27e-kube-api-access-2p9xm\") pod \"node-ca-gwkfm\" (UID: \"e143d63f-12e2-4d59-9d2d-11057486b27e\") " pod="openshift-image-registry/node-ca-gwkfm" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.079397 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.114503 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.114697 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.114769 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.114869 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.114961 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:16Z","lastTransitionTime":"2026-02-19T13:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.116150 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.149891 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" event={"ID":"f491f78c-b995-44c7-8395-41d8a2c4cf29","Type":"ContainerStarted","Data":"0d0cb982e00fb9bb1f2bfbe470608cc73a86fb26191c8977d0e8154965cca398"} Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.154442 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" event={"ID":"2b4f740d-a1ca-450f-adad-afb42efe0c76","Type":"ContainerStarted","Data":"7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d"} Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.154497 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" event={"ID":"2b4f740d-a1ca-450f-adad-afb42efe0c76","Type":"ContainerStarted","Data":"6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e"} Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.154508 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" event={"ID":"2b4f740d-a1ca-450f-adad-afb42efe0c76","Type":"ContainerStarted","Data":"e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be"} Feb 19 13:10:16 
crc kubenswrapper[4861]: I0219 13:10:16.154517 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" event={"ID":"2b4f740d-a1ca-450f-adad-afb42efe0c76","Type":"ContainerStarted","Data":"4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85"} Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.157139 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ffskh" event={"ID":"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb","Type":"ContainerStarted","Data":"2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b"} Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.157190 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ffskh" event={"ID":"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb","Type":"ContainerStarted","Data":"ffc5f491d9ccc92b5b1e33f2bc9dc4d8dd239872d9d89431efe1f5b42a69ad93"} Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.164994 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.200280 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.217757 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.217799 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.217814 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.217837 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.217853 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:16Z","lastTransitionTime":"2026-02-19T13:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.224152 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-gwkfm" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.239804 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:16 crc kubenswrapper[4861]: W0219 13:10:16.253677 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode143d63f_12e2_4d59_9d2d_11057486b27e.slice/crio-a6346ab4f145b741985b601f18f55844bf61518cf9ce72088258722788e15f86 WatchSource:0}: Error finding container a6346ab4f145b741985b601f18f55844bf61518cf9ce72088258722788e15f86: Status 404 returned error can't find the container with id a6346ab4f145b741985b601f18f55844bf61518cf9ce72088258722788e15f86 Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.282063 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.318959 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.320788 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.320821 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.320833 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:16 crc 
kubenswrapper[4861]: I0219 13:10:16.320850 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.320862 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:16Z","lastTransitionTime":"2026-02-19T13:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.358970 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.402252 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.424596 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.424648 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.424673 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.424689 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.424973 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:16Z","lastTransitionTime":"2026-02-19T13:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.443116 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.475601 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:10:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.515978 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.529633 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 
13:10:16.529671 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.529684 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.529704 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.529719 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:16Z","lastTransitionTime":"2026-02-19T13:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.561992 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.596695 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.632976 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.633040 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.633054 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 
13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.633114 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.633130 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:16Z","lastTransitionTime":"2026-02-19T13:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.641668 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.678389 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.720141 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin
\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.736214 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.736289 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.736306 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.736326 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.736341 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:16Z","lastTransitionTime":"2026-02-19T13:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.736914 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:10:16 crc kubenswrapper[4861]: E0219 13:10:16.737086 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:10:24.737058528 +0000 UTC m=+39.398161746 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.768562 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.797320 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.837998 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" 
(UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.838506 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.838731 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:16 crc kubenswrapper[4861]: E0219 13:10:16.838213 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 13:10:16 crc kubenswrapper[4861]: E0219 13:10:16.838935 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 13:10:16 crc kubenswrapper[4861]: E0219 13:10:16.838954 4861 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 13:10:16 crc 
kubenswrapper[4861]: E0219 13:10:16.839022 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 13:10:24.838999547 +0000 UTC m=+39.500102785 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 13:10:16 crc kubenswrapper[4861]: E0219 13:10:16.838817 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 13:10:16 crc kubenswrapper[4861]: E0219 13:10:16.839069 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 13:10:16 crc kubenswrapper[4861]: E0219 13:10:16.839082 4861 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 13:10:16 crc kubenswrapper[4861]: E0219 13:10:16.839120 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-19 13:10:24.83910851 +0000 UTC m=+39.500211738 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 13:10:16 crc kubenswrapper[4861]: E0219 13:10:16.838894 4861 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 13:10:16 crc kubenswrapper[4861]: E0219 13:10:16.839164 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 13:10:24.839156142 +0000 UTC m=+39.500259370 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 13:10:16 crc kubenswrapper[4861]: E0219 13:10:16.839203 4861 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 13:10:16 crc kubenswrapper[4861]: E0219 13:10:16.839330 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-19 13:10:24.839305126 +0000 UTC m=+39.500408354 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.839472 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.839687 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.839717 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.839729 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.839748 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.839762 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:16Z","lastTransitionTime":"2026-02-19T13:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.840052 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.885922 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.918198 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.941900 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.941962 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.941981 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 
13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.942006 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.942023 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:16Z","lastTransitionTime":"2026-02-19T13:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.961991 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.976574 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.976620 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:16 crc kubenswrapper[4861]: E0219 13:10:16.976732 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:10:16 crc kubenswrapper[4861]: I0219 13:10:16.976638 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:16 crc kubenswrapper[4861]: E0219 13:10:16.976866 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:10:16 crc kubenswrapper[4861]: E0219 13:10:16.977123 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.003158 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:17Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.040866 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-02-19T13:10:17Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.044401 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.044448 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.044466 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.044482 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.044491 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:17Z","lastTransitionTime":"2026-02-19T13:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.076387 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 21:16:18.771131995 +0000 UTC Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.084516 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:17Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.120600 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:17Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.148868 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.148946 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.148968 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.148998 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.149018 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:17Z","lastTransitionTime":"2026-02-19T13:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.155810 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\"
:\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:17Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.161897 4861 generic.go:334] "Generic (PLEG): container finished" podID="f491f78c-b995-44c7-8395-41d8a2c4cf29" containerID="64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4" exitCode=0 Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.162851 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" event={"ID":"f491f78c-b995-44c7-8395-41d8a2c4cf29","Type":"ContainerDied","Data":"64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4"} Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.164949 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-gwkfm" event={"ID":"e143d63f-12e2-4d59-9d2d-11057486b27e","Type":"ContainerStarted","Data":"5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f"} Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.165019 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-gwkfm" 
event={"ID":"e143d63f-12e2-4d59-9d2d-11057486b27e","Type":"ContainerStarted","Data":"a6346ab4f145b741985b601f18f55844bf61518cf9ce72088258722788e15f86"} Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.173632 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" event={"ID":"2b4f740d-a1ca-450f-adad-afb42efe0c76","Type":"ContainerStarted","Data":"13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0"} Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.173704 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" event={"ID":"2b4f740d-a1ca-450f-adad-afb42efe0c76","Type":"ContainerStarted","Data":"77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517"} Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.206997 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:17Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.239854 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:10:17Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.253243 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.253298 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.253315 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.253340 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.253356 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:17Z","lastTransitionTime":"2026-02-19T13:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.278391 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:17Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.326412 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:17Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.354095 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:17Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.355284 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.355308 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.355321 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 
13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.355337 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.355347 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:17Z","lastTransitionTime":"2026-02-19T13:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.397170 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operat
or\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:17Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.434321 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:17Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.456852 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.456885 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.456896 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.456911 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.456925 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:17Z","lastTransitionTime":"2026-02-19T13:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.483012 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:17Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.489457 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.538702 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:17Z is after 
2025-08-24T17:21:41Z" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.558928 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.558980 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.558994 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.559015 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.559029 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:17Z","lastTransitionTime":"2026-02-19T13:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.579018 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:17Z 
is after 2025-08-24T17:21:41Z" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.625754 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5
a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a5
78bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:17Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.628611 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.661797 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.661842 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.661854 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.661873 4861 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.661888 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:17Z","lastTransitionTime":"2026-02-19T13:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.676550 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:17Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.716862 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:17Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.765764 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.765816 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.765827 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 
13:10:17.765846 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.765856 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:17Z","lastTransitionTime":"2026-02-19T13:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.773469 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:17Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.804495 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:17Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.841675 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:17Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.868396 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.868480 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.868498 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.868525 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.868540 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:17Z","lastTransitionTime":"2026-02-19T13:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.880594 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:17Z 
is after 2025-08-24T17:21:41Z" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.923390 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5
a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a5
78bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:17Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.964979 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:17Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.971179 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.971225 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.971236 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.971254 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.971264 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:17Z","lastTransitionTime":"2026-02-19T13:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:17 crc kubenswrapper[4861]: I0219 13:10:17.996569 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:17Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.056112 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.074257 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.074328 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.074348 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.074376 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.074394 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:18Z","lastTransitionTime":"2026-02-19T13:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.077534 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 06:22:29.994306971 +0000 UTC Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.082459 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.117611 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.157166 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.169340 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 
13:10:18.176904 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.176935 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.176946 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.176965 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.176977 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:18Z","lastTransitionTime":"2026-02-19T13:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.179891 4861 generic.go:334] "Generic (PLEG): container finished" podID="f491f78c-b995-44c7-8395-41d8a2c4cf29" containerID="3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10" exitCode=0 Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.179931 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" event={"ID":"f491f78c-b995-44c7-8395-41d8a2c4cf29","Type":"ContainerDied","Data":"3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10"} Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.218845 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.258911 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.279314 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.279344 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.279353 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.279366 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.279374 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:18Z","lastTransitionTime":"2026-02-19T13:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.301582 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.338079 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.375885 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.381524 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.381565 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.381577 4861 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.381595 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.381606 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:18Z","lastTransitionTime":"2026-02-19T13:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.415143 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.457845 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e9
93c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.484704 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.484789 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.484803 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:18 crc 
kubenswrapper[4861]: I0219 13:10:18.484820 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.484830 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:18Z","lastTransitionTime":"2026-02-19T13:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.501228 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.534563 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.577750 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.587752 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.587781 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.587790 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.587804 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.587813 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:18Z","lastTransitionTime":"2026-02-19T13:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.608351 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.636125 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.677208 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.689912 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.689956 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.689970 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.689988 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.690001 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:18Z","lastTransitionTime":"2026-02-19T13:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.717766 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.757135 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.795814 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.815941 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.817079 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.817103 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.817111 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.817125 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.817134 4861 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:18Z","lastTransitionTime":"2026-02-19T13:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.857146 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.889258 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.917334 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.919162 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.919205 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:18 crc 
kubenswrapper[4861]: I0219 13:10:18.919217 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.919236 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.919245 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:18Z","lastTransitionTime":"2026-02-19T13:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.962286 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.976596 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.976613 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:18 crc kubenswrapper[4861]: I0219 13:10:18.976595 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:18 crc kubenswrapper[4861]: E0219 13:10:18.976710 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:10:18 crc kubenswrapper[4861]: E0219 13:10:18.976778 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:10:18 crc kubenswrapper[4861]: E0219 13:10:18.976854 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.022897 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.022956 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.022974 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.023001 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.023015 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:19Z","lastTransitionTime":"2026-02-19T13:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.077718 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 23:07:05.326629925 +0000 UTC Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.125278 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.125322 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.125332 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.125351 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.125362 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:19Z","lastTransitionTime":"2026-02-19T13:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.186895 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" event={"ID":"2b4f740d-a1ca-450f-adad-afb42efe0c76","Type":"ContainerStarted","Data":"b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61"} Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.189209 4861 generic.go:334] "Generic (PLEG): container finished" podID="f491f78c-b995-44c7-8395-41d8a2c4cf29" containerID="bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef" exitCode=0 Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.189253 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" event={"ID":"f491f78c-b995-44c7-8395-41d8a2c4cf29","Type":"ContainerDied","Data":"bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef"} Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.202149 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:19Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.219632 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:19Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.227647 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.227693 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.227705 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.227722 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.227733 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:19Z","lastTransitionTime":"2026-02-19T13:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.232583 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b1
1786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:19Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.245238 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:19Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.248846 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.248886 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.248895 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.248914 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.248923 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:19Z","lastTransitionTime":"2026-02-19T13:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.270500 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:19Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:19 crc kubenswrapper[4861]: E0219 13:10:19.290148 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:19Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.304756 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7b
dd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:19Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.305073 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.305093 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.305102 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.305119 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.305129 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:19Z","lastTransitionTime":"2026-02-19T13:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:19 crc kubenswrapper[4861]: E0219 13:10:19.326805 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:19Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.337750 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.337795 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.337824 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.337841 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.337849 4861 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:19Z","lastTransitionTime":"2026-02-19T13:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.339810 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:19Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:19 crc kubenswrapper[4861]: E0219 13:10:19.352181 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:19Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.353923 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:19Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.355621 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.355643 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.355672 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.355703 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.355713 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:19Z","lastTransitionTime":"2026-02-19T13:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.367674 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:19Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:19 crc kubenswrapper[4861]: E0219 13:10:19.368801 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:19Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.372958 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.372994 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.373004 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.373028 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.373038 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:19Z","lastTransitionTime":"2026-02-19T13:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.383848 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:19Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:19 crc kubenswrapper[4861]: E0219 13:10:19.386864 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:19Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:19 crc kubenswrapper[4861]: E0219 13:10:19.386991 4861 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.389125 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.389255 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.389285 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.389321 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.389342 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:19Z","lastTransitionTime":"2026-02-19T13:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.396785 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:19Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.440699 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c
21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:19Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.483535 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:19Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.492236 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.492277 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.492290 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.492310 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.492320 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:19Z","lastTransitionTime":"2026-02-19T13:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.517222 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:19Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.594581 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.594620 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.594629 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.594646 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.594659 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:19Z","lastTransitionTime":"2026-02-19T13:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.700606 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.700646 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.700660 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.700682 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.700695 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:19Z","lastTransitionTime":"2026-02-19T13:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.804715 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.804766 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.804779 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.804799 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.804817 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:19Z","lastTransitionTime":"2026-02-19T13:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.908544 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.908601 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.908621 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.908647 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:19 crc kubenswrapper[4861]: I0219 13:10:19.908666 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:19Z","lastTransitionTime":"2026-02-19T13:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.011774 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.012173 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.012185 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.012202 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.012212 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:20Z","lastTransitionTime":"2026-02-19T13:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.078506 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 12:34:07.203352681 +0000 UTC Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.114770 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.114816 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.114826 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.114848 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.114865 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:20Z","lastTransitionTime":"2026-02-19T13:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.197031 4861 generic.go:334] "Generic (PLEG): container finished" podID="f491f78c-b995-44c7-8395-41d8a2c4cf29" containerID="eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286" exitCode=0 Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.197127 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" event={"ID":"f491f78c-b995-44c7-8395-41d8a2c4cf29","Type":"ContainerDied","Data":"eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286"} Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.218005 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.218065 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.218078 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.218099 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.218111 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:20Z","lastTransitionTime":"2026-02-19T13:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.231171 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b1
1786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:20Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.252844 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:20Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.268528 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:20Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.293028 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:20Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.313920 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:20Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.320961 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 
13:10:20.321042 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.321062 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.321091 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.321112 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:20Z","lastTransitionTime":"2026-02-19T13:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.329608 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:10:20Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.342080 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:20Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.362262 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:20Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.375571 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:20Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.389589 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:20Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.402161 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:20Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.416576 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:20Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.423954 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.423995 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.424006 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.424023 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.424034 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:20Z","lastTransitionTime":"2026-02-19T13:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.430002 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:20Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.444084 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:20Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.527133 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:20 crc 
kubenswrapper[4861]: I0219 13:10:20.527166 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.527176 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.527191 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.527201 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:20Z","lastTransitionTime":"2026-02-19T13:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.629659 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.629712 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.629727 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.629751 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.629766 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:20Z","lastTransitionTime":"2026-02-19T13:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.731714 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.731762 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.731774 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.731790 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.731801 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:20Z","lastTransitionTime":"2026-02-19T13:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.834100 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.834140 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.834149 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.834167 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.834178 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:20Z","lastTransitionTime":"2026-02-19T13:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.936302 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.936332 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.936340 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.936353 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.936362 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:20Z","lastTransitionTime":"2026-02-19T13:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.976986 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.976995 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:20 crc kubenswrapper[4861]: E0219 13:10:20.977118 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:10:20 crc kubenswrapper[4861]: I0219 13:10:20.977189 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:20 crc kubenswrapper[4861]: E0219 13:10:20.977288 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:10:20 crc kubenswrapper[4861]: E0219 13:10:20.977476 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.038329 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.038375 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.038387 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.038407 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.038439 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:21Z","lastTransitionTime":"2026-02-19T13:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.079660 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 10:20:09.919254419 +0000 UTC Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.140587 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.140633 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.140644 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.140658 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.140667 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:21Z","lastTransitionTime":"2026-02-19T13:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.204860 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" event={"ID":"2b4f740d-a1ca-450f-adad-afb42efe0c76","Type":"ContainerStarted","Data":"9e670753bfeaaa4550463db22e6e3cf22161cd5a0e55004f2d7f13bf8d9d6b6f"} Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.205135 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.210395 4861 generic.go:334] "Generic (PLEG): container finished" podID="f491f78c-b995-44c7-8395-41d8a2c4cf29" containerID="ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464" exitCode=0 Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.210464 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" event={"ID":"f491f78c-b995-44c7-8395-41d8a2c4cf29","Type":"ContainerDied","Data":"ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464"} Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.219508 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.229828 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.240693 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.242725 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.242774 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.242786 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.242806 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.242821 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:21Z","lastTransitionTime":"2026-02-19T13:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.244259 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.256335 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.272117 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin
\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.287545 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.301626 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.314629 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.338226 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e670753bfeaaa4550463db22e6e3cf22161cd5a0e55004f2d7f13bf8d9d6b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"
},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770
bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.349116 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.349160 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.349179 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.349196 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.349205 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:21Z","lastTransitionTime":"2026-02-19T13:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.353088 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.364526 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.375324 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.389212 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.400209 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.418688 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.431796 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.447277 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.451669 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.451700 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.451710 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.451724 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.451734 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:21Z","lastTransitionTime":"2026-02-19T13:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.461265 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.474049 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.487319 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.503297 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.527106 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e670753bfeaaa4550463db22e6e3cf22161cd5a0e55004f2d7f13bf8d9d6b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.540576 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.554030 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.554526 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.554567 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.554578 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.554593 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.554603 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:21Z","lastTransitionTime":"2026-02-19T13:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.568188 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.583847 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.597766 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.616295 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.657886 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.657930 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.657943 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.657964 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.657976 4861 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:21Z","lastTransitionTime":"2026-02-19T13:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.761050 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.761096 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.761109 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.761125 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.761138 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:21Z","lastTransitionTime":"2026-02-19T13:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.864489 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.864524 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.864533 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.864548 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.864560 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:21Z","lastTransitionTime":"2026-02-19T13:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.967303 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.967346 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.967358 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.967372 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:21 crc kubenswrapper[4861]: I0219 13:10:21.967381 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:21Z","lastTransitionTime":"2026-02-19T13:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.069469 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.069506 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.069518 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.069533 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.069543 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:22Z","lastTransitionTime":"2026-02-19T13:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.080708 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 05:01:09.542659376 +0000 UTC Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.172057 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.172113 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.172135 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.172162 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.172181 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:22Z","lastTransitionTime":"2026-02-19T13:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.218405 4861 generic.go:334] "Generic (PLEG): container finished" podID="f491f78c-b995-44c7-8395-41d8a2c4cf29" containerID="d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e" exitCode=0 Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.218462 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" event={"ID":"f491f78c-b995-44c7-8395-41d8a2c4cf29","Type":"ContainerDied","Data":"d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e"} Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.218672 4861 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.219369 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.240467 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.254568 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.290269 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.290303 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.290314 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.290330 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.290343 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:22Z","lastTransitionTime":"2026-02-19T13:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.291519 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.299583 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.315099 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.334566 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\"
:{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174
f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e670753bfeaaa4550463db22e6e3cf22161cd5a0e55004f2d7f13bf8d9d6b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\
\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.350172 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.362117 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.375131 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.386282 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.392785 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.392825 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.392838 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.392858 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.392869 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:22Z","lastTransitionTime":"2026-02-19T13:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.398239 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.408067 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.417448 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.428495 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.437646 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.447471 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-
syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.458568 4861 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.471689 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.487068 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e670753bfeaaa4550463db22e6e3cf22161cd5a0e55004f2d7f13bf8d9d6b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.494667 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.494702 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.494714 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.494730 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.494740 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:22Z","lastTransitionTime":"2026-02-19T13:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.496990 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.506347 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.515689 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.527164 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.534680 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.544892 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.552500 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.563552 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.574170 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.585345 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin
\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.596907 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.596942 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.596953 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.596967 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.596978 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:22Z","lastTransitionTime":"2026-02-19T13:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.699021 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.699289 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.699386 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.699526 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.699613 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:22Z","lastTransitionTime":"2026-02-19T13:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.801723 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.802035 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.802118 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.802518 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.802815 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:22Z","lastTransitionTime":"2026-02-19T13:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.905514 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.905573 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.905585 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.905602 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.905615 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:22Z","lastTransitionTime":"2026-02-19T13:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.976446 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.976470 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:22 crc kubenswrapper[4861]: I0219 13:10:22.976492 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:22 crc kubenswrapper[4861]: E0219 13:10:22.977074 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:10:22 crc kubenswrapper[4861]: E0219 13:10:22.977160 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:10:22 crc kubenswrapper[4861]: E0219 13:10:22.976922 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.010167 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.010405 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.010561 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.010681 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.010772 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:23Z","lastTransitionTime":"2026-02-19T13:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.081444 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 15:12:24.972140707 +0000 UTC Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.113292 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.113518 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.113607 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.113670 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.113732 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:23Z","lastTransitionTime":"2026-02-19T13:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.215450 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.215487 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.215496 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.215511 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.215523 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:23Z","lastTransitionTime":"2026-02-19T13:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.224361 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" event={"ID":"f491f78c-b995-44c7-8395-41d8a2c4cf29","Type":"ContainerStarted","Data":"1177130c68cb37f6c2057bcefc307690fd3ee5cd4e5e1992044aad2a199ab83f"} Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.224415 4861 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.238457 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:23Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.249123 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:23Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.263481 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:23Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.281490 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:23Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.295793 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin
\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:23Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.307097 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:23Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.360546 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.360600 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.360614 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.360635 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.360654 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:23Z","lastTransitionTime":"2026-02-19T13:10:23Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.365484 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:23Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.379071 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:23Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.396347 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e670753bfeaaa4550463db22e6e3cf22161cd5a0e55004f2d7f13bf8d9d6b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:23Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.408439 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:23Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.423236 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:10:23Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.432723 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:23Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.444611 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1177130c68cb37f6c2057bcefc307690fd3ee5cd4e5e1992044aad2a199ab83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb080
bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:23Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.452856 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:23Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.462523 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.462560 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.462569 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.462587 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.462597 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:23Z","lastTransitionTime":"2026-02-19T13:10:23Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.565698 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.565746 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.565757 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.565773 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.565782 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:23Z","lastTransitionTime":"2026-02-19T13:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.668600 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.668647 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.668657 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.668673 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.668683 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:23Z","lastTransitionTime":"2026-02-19T13:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.770936 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.770974 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.770986 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.771002 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.771012 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:23Z","lastTransitionTime":"2026-02-19T13:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.873265 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.873294 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.873302 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.873315 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.873325 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:23Z","lastTransitionTime":"2026-02-19T13:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.975683 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.975723 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.975732 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.975746 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:23 crc kubenswrapper[4861]: I0219 13:10:23.975755 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:23Z","lastTransitionTime":"2026-02-19T13:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.077500 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.077543 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.077553 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.077568 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.077577 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:24Z","lastTransitionTime":"2026-02-19T13:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.081548 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 11:56:45.370069346 +0000 UTC Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.179678 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.179717 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.179731 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.179747 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.179757 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:24Z","lastTransitionTime":"2026-02-19T13:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.228523 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wb9bn_2b4f740d-a1ca-450f-adad-afb42efe0c76/ovnkube-controller/0.log" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.231310 4861 generic.go:334] "Generic (PLEG): container finished" podID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerID="9e670753bfeaaa4550463db22e6e3cf22161cd5a0e55004f2d7f13bf8d9d6b6f" exitCode=1 Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.231362 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" event={"ID":"2b4f740d-a1ca-450f-adad-afb42efe0c76","Type":"ContainerDied","Data":"9e670753bfeaaa4550463db22e6e3cf22161cd5a0e55004f2d7f13bf8d9d6b6f"} Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.231949 4861 scope.go:117] "RemoveContainer" containerID="9e670753bfeaaa4550463db22e6e3cf22161cd5a0e55004f2d7f13bf8d9d6b6f" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.248604 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:24Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.261383 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:24Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.274261 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:24Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.281498 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.281609 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.281706 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.281805 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.281887 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:24Z","lastTransitionTime":"2026-02-19T13:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.287193 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:24Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.302676 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:24Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.315284 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:24Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.327412 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:24Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.345115 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:24Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.363546 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e670753bfeaaa4550463db22e6e3cf22161cd5a0e55004f2d7f13bf8d9d6b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e670753bfeaaa4550463db22e6e3cf22161cd5a0e55004f2d7f13bf8d9d6b6f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13
:10:23Z\\\",\\\"message\\\":\\\"ce event handler 1 for removal\\\\nI0219 13:10:23.662366 6152 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 13:10:23.662399 6152 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 13:10:23.662469 6152 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 13:10:23.662482 6152 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 13:10:23.662505 6152 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 13:10:23.662521 6152 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 13:10:23.662529 6152 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 13:10:23.662558 6152 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 13:10:23.662567 6152 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 13:10:23.662580 6152 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 13:10:23.662586 6152 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 13:10:23.662591 6152 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 13:10:23.662626 6152 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 13:10:23.662665 6152 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:24Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.375294 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:24Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.386522 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.386631 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.386835 4861 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.387008 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.387117 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:24Z","lastTransitionTime":"2026-02-19T13:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.387293 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:24Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.397707 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e9
93c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:24Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.448402 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1177130c68cb37f6c2057bcefc307690fd3ee5cd4e5e1992044aad2a199ab83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb080
bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:24Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.489909 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.489943 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.489950 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.489964 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.489972 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:24Z","lastTransitionTime":"2026-02-19T13:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.502079 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:24Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.592742 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.592780 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.592791 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.592806 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.592817 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:24Z","lastTransitionTime":"2026-02-19T13:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.695173 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.695234 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.695251 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.695273 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.695290 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:24Z","lastTransitionTime":"2026-02-19T13:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.797706 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.797750 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.797767 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.797787 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.797800 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:24Z","lastTransitionTime":"2026-02-19T13:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.831404 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:10:24 crc kubenswrapper[4861]: E0219 13:10:24.831726 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 13:10:40.831703193 +0000 UTC m=+55.492806421 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.899912 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.899943 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.899951 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.899965 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.899975 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:24Z","lastTransitionTime":"2026-02-19T13:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.931956 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.931992 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.932011 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.932030 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:24 crc kubenswrapper[4861]: E0219 13:10:24.932128 4861 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 13:10:24 crc kubenswrapper[4861]: E0219 13:10:24.932144 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 13:10:24 crc kubenswrapper[4861]: E0219 13:10:24.932165 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 13:10:24 crc kubenswrapper[4861]: E0219 13:10:24.932178 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 13:10:40.932162998 +0000 UTC m=+55.593266226 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 13:10:24 crc kubenswrapper[4861]: E0219 13:10:24.932175 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 13:10:24 crc kubenswrapper[4861]: E0219 13:10:24.932211 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 13:10:24 crc kubenswrapper[4861]: E0219 13:10:24.932221 4861 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Feb 19 13:10:24 crc kubenswrapper[4861]: E0219 13:10:24.932255 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 13:10:40.93224504 +0000 UTC m=+55.593348268 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 13:10:24 crc kubenswrapper[4861]: E0219 13:10:24.932178 4861 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 13:10:24 crc kubenswrapper[4861]: E0219 13:10:24.932296 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 13:10:40.932289121 +0000 UTC m=+55.593392359 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 13:10:24 crc kubenswrapper[4861]: E0219 13:10:24.932225 4861 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 13:10:24 crc kubenswrapper[4861]: E0219 13:10:24.932375 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 13:10:40.932359624 +0000 UTC m=+55.593462852 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.977015 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.977015 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:24 crc kubenswrapper[4861]: I0219 13:10:24.977037 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:24 crc kubenswrapper[4861]: E0219 13:10:24.977179 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:10:24 crc kubenswrapper[4861]: E0219 13:10:24.977294 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:10:24 crc kubenswrapper[4861]: E0219 13:10:24.977351 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.002295 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.002342 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.002353 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.002374 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.002391 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:25Z","lastTransitionTime":"2026-02-19T13:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.082614 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 20:17:20.504095699 +0000 UTC Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.104114 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.104149 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.104159 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.104172 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.104180 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:25Z","lastTransitionTime":"2026-02-19T13:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.207805 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.207846 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.207856 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.207871 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.207880 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:25Z","lastTransitionTime":"2026-02-19T13:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.238205 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wb9bn_2b4f740d-a1ca-450f-adad-afb42efe0c76/ovnkube-controller/1.log" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.239032 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wb9bn_2b4f740d-a1ca-450f-adad-afb42efe0c76/ovnkube-controller/0.log" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.243775 4861 generic.go:334] "Generic (PLEG): container finished" podID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerID="a00fa6ac2bc4e18110ceede15facae238f85f5300a7da75e8296806a73007d74" exitCode=1 Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.243845 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" event={"ID":"2b4f740d-a1ca-450f-adad-afb42efe0c76","Type":"ContainerDied","Data":"a00fa6ac2bc4e18110ceede15facae238f85f5300a7da75e8296806a73007d74"} Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.243911 4861 scope.go:117] "RemoveContainer" containerID="9e670753bfeaaa4550463db22e6e3cf22161cd5a0e55004f2d7f13bf8d9d6b6f" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.246390 4861 scope.go:117] "RemoveContainer" containerID="a00fa6ac2bc4e18110ceede15facae238f85f5300a7da75e8296806a73007d74" Feb 19 13:10:25 crc kubenswrapper[4861]: E0219 13:10:25.247298 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-wb9bn_openshift-ovn-kubernetes(2b4f740d-a1ca-450f-adad-afb42efe0c76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.270090 4861 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.288232 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.304021 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.310403 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.310472 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.310493 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.310515 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.310531 4861 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:25Z","lastTransitionTime":"2026-02-19T13:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.318496 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:
10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.336900 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.356030 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.372580 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.405934 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a00fa6ac2bc4e18110ceede15facae238f85f5300a7da75e8296806a73007d74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e670753bfeaaa4550463db22e6e3cf22161cd5a0e55004f2d7f13bf8d9d6b6f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"message\\\":\\\"ce event handler 1 for removal\\\\nI0219 13:10:23.662366 6152 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 13:10:23.662399 6152 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 13:10:23.662469 6152 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 13:10:23.662482 6152 
handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 13:10:23.662505 6152 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 13:10:23.662521 6152 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 13:10:23.662529 6152 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 13:10:23.662558 6152 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 13:10:23.662567 6152 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 13:10:23.662580 6152 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 13:10:23.662586 6152 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 13:10:23.662591 6152 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 13:10:23.662626 6152 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 13:10:23.662665 6152 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a00fa6ac2bc4e18110ceede15facae238f85f5300a7da75e8296806a73007d74\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"message\\\":\\\"ions generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 13:10:24.984404 6313 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} 
options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\
\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:25Z is after 
2025-08-24T17:21:41Z" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.413405 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.413493 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.413516 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.413542 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.413559 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:25Z","lastTransitionTime":"2026-02-19T13:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.423078 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b1
1786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.436800 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e9
93c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.459544 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1177130c68cb37f6c2057bcefc307690fd3ee5cd4e5e1992044aad2a199ab83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb080
bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.472019 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.487264 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.499411 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.516793 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.516929 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.516948 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.516998 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.517016 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:25Z","lastTransitionTime":"2026-02-19T13:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.619773 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.620059 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.620125 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.620190 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.620246 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:25Z","lastTransitionTime":"2026-02-19T13:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.724250 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.724344 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.724369 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.724405 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.724482 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:25Z","lastTransitionTime":"2026-02-19T13:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.827529 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.827808 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.827890 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.827962 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.828110 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:25Z","lastTransitionTime":"2026-02-19T13:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.931293 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.931337 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.931349 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.931366 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.931380 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:25Z","lastTransitionTime":"2026-02-19T13:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.977329 4861 scope.go:117] "RemoveContainer" containerID="68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.992947 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb"] Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.993444 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.994303 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"ho
st\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.998124 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 19 13:10:25 crc kubenswrapper[4861]: I0219 13:10:25.998147 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.013617 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.035912 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 
13:10:26.035999 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.036016 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.036033 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.036045 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:26Z","lastTransitionTime":"2026-02-19T13:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.037293 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.057689 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.080813 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1177130c68cb37f6c2057bcefc307690fd3ee5cd4e5e1992044aad2a199ab83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb080
bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.083467 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 22:21:15.903130477 +0000 UTC Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.100289 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02
-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.117224 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.134988 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.139967 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.140017 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.140042 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.140069 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.140086 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:26Z","lastTransitionTime":"2026-02-19T13:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.144661 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/de22a290-251d-48e9-95e8-f4dbebd04451-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8rwhb\" (UID: \"de22a290-251d-48e9-95e8-f4dbebd04451\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.144737 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdl8z\" (UniqueName: \"kubernetes.io/projected/de22a290-251d-48e9-95e8-f4dbebd04451-kube-api-access-rdl8z\") pod \"ovnkube-control-plane-749d76644c-8rwhb\" (UID: \"de22a290-251d-48e9-95e8-f4dbebd04451\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.144995 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/de22a290-251d-48e9-95e8-f4dbebd04451-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8rwhb\" (UID: \"de22a290-251d-48e9-95e8-f4dbebd04451\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.145109 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/de22a290-251d-48e9-95e8-f4dbebd04451-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8rwhb\" (UID: \"de22a290-251d-48e9-95e8-f4dbebd04451\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.155306 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.172077 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.197931 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a00fa6ac2bc4e18110ceede15facae238f85f5300a7da75e8296806a73007d74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e670753bfeaaa4550463db22e6e3cf22161cd5a0e55004f2d7f13bf8d9d6b6f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"message\\\":\\\"ce event handler 1 for removal\\\\nI0219 13:10:23.662366 6152 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 13:10:23.662399 6152 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 13:10:23.662469 6152 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 13:10:23.662482 6152 
handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 13:10:23.662505 6152 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 13:10:23.662521 6152 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 13:10:23.662529 6152 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 13:10:23.662558 6152 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 13:10:23.662567 6152 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 13:10:23.662580 6152 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 13:10:23.662586 6152 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 13:10:23.662591 6152 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 13:10:23.662626 6152 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 13:10:23.662665 6152 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a00fa6ac2bc4e18110ceede15facae238f85f5300a7da75e8296806a73007d74\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"message\\\":\\\"ions generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 13:10:24.984404 6313 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} 
options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\
\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 
2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.215361 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467
365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac
117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.229677 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.242508 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.242555 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.242569 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.242589 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.242602 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:26Z","lastTransitionTime":"2026-02-19T13:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.248956 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/de22a290-251d-48e9-95e8-f4dbebd04451-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8rwhb\" (UID: \"de22a290-251d-48e9-95e8-f4dbebd04451\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.249017 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/de22a290-251d-48e9-95e8-f4dbebd04451-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8rwhb\" (UID: \"de22a290-251d-48e9-95e8-f4dbebd04451\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.248983 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.249046 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdl8z\" (UniqueName: \"kubernetes.io/projected/de22a290-251d-48e9-95e8-f4dbebd04451-kube-api-access-rdl8z\") pod \"ovnkube-control-plane-749d76644c-8rwhb\" (UID: \"de22a290-251d-48e9-95e8-f4dbebd04451\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.249082 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/de22a290-251d-48e9-95e8-f4dbebd04451-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8rwhb\" (UID: \"de22a290-251d-48e9-95e8-f4dbebd04451\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.250085 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/de22a290-251d-48e9-95e8-f4dbebd04451-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8rwhb\" (UID: \"de22a290-251d-48e9-95e8-f4dbebd04451\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.250122 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/de22a290-251d-48e9-95e8-f4dbebd04451-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8rwhb\" (UID: \"de22a290-251d-48e9-95e8-f4dbebd04451\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.253970 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wb9bn_2b4f740d-a1ca-450f-adad-afb42efe0c76/ovnkube-controller/1.log" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.255233 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/de22a290-251d-48e9-95e8-f4dbebd04451-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8rwhb\" (UID: \"de22a290-251d-48e9-95e8-f4dbebd04451\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.257172 4861 scope.go:117] "RemoveContainer" containerID="a00fa6ac2bc4e18110ceede15facae238f85f5300a7da75e8296806a73007d74" Feb 19 13:10:26 crc kubenswrapper[4861]: E0219 13:10:26.257328 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-wb9bn_openshift-ovn-kubernetes(2b4f740d-a1ca-450f-adad-afb42efe0c76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" Feb 19 13:10:26 crc 
kubenswrapper[4861]: I0219 13:10:26.262352 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.267109 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdl8z\" (UniqueName: \"kubernetes.io/projected/de22a290-251d-48e9-95e8-f4dbebd04451-kube-api-access-rdl8z\") pod \"ovnkube-control-plane-749d76644c-8rwhb\" (UID: \"de22a290-251d-48e9-95e8-f4dbebd04451\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.275311 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.287079 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.301865 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1177130c68cb37f6c2057bcefc307690fd3ee5cd4e5e1992044aad2a199ab83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb080
bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.311658 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.313048 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18
e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.332798 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: W0219 13:10:26.337112 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde22a290_251d_48e9_95e8_f4dbebd04451.slice/crio-cb7f4e6e48a7d220410f932b96fdd0df9aecaddfa88e07d6df3237f3baa1599d WatchSource:0}: Error finding container cb7f4e6e48a7d220410f932b96fdd0df9aecaddfa88e07d6df3237f3baa1599d: Status 404 returned error can't find the container with id cb7f4e6e48a7d220410f932b96fdd0df9aecaddfa88e07d6df3237f3baa1599d Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.345676 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.345724 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.345739 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.345761 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.345776 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:26Z","lastTransitionTime":"2026-02-19T13:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.346632 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.362888 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.377405 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.391712 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin
\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.409057 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.423042 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.438299 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.448119 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.448142 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.448151 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.448167 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.448177 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:26Z","lastTransitionTime":"2026-02-19T13:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.457398 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a00fa6ac2bc4e18110ceede15facae238f85f5300a7da75e8296806a73007d74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e670753bfeaaa4550463db22e6e3cf22161cd5a0e55004f2d7f13bf8d9d6b6f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"message\\\":\\\"ce event handler 1 for removal\\\\nI0219 13:10:23.662366 6152 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 13:10:23.662399 6152 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 13:10:23.662469 6152 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 13:10:23.662482 6152 
handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 13:10:23.662505 6152 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 13:10:23.662521 6152 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 13:10:23.662529 6152 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 13:10:23.662558 6152 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 13:10:23.662567 6152 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 13:10:23.662580 6152 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 13:10:23.662586 6152 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 13:10:23.662591 6152 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 13:10:23.662626 6152 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 13:10:23.662665 6152 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a00fa6ac2bc4e18110ceede15facae238f85f5300a7da75e8296806a73007d74\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"message\\\":\\\"ions generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 13:10:24.984404 6313 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} 
options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\
\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 
2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.467772 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de22a290-251d-48e9-95e8-f4dbebd04451\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8rwhb\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.479080 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\
\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.489000 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb27670
3f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.504741 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.518840 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.532700 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin
\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.545806 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.550330 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.550389 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.550404 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.550452 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.550470 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:26Z","lastTransitionTime":"2026-02-19T13:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.558308 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.579735 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a00fa6ac2bc4e18110ceede15facae238f85f5300a7da75e8296806a73007d74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a00fa6ac2bc4e18110ceede15facae238f85f5300a7da75e8296806a73007d74\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"message\\\":\\\"ions generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 13:10:24.984404 6313 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wb9bn_openshift-ovn-kubernetes(2b4f740d-a1ca-450f-adad-afb42efe0c76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa5
8e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.594571 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de22a290-251d-48e9-95e8-f4dbebd04451\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8rwhb\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.609928 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.628994 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.644961 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.653538 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 
13:10:26.653576 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.653590 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.653615 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.653631 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:26Z","lastTransitionTime":"2026-02-19T13:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.667046 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1177130c68cb37f6c2057bcefc307690fd3ee5cd4e5e1992044aad2a199ab83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb080
bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.682070 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.696954 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.756011 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.756051 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.756061 4861 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.756078 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.756089 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:26Z","lastTransitionTime":"2026-02-19T13:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.857841 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.858101 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.858166 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.858233 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.858299 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:26Z","lastTransitionTime":"2026-02-19T13:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.960753 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.960783 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.960791 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.960804 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.960812 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:26Z","lastTransitionTime":"2026-02-19T13:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.976770 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.976843 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:26 crc kubenswrapper[4861]: I0219 13:10:26.976985 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:26 crc kubenswrapper[4861]: E0219 13:10:26.977568 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:10:26 crc kubenswrapper[4861]: E0219 13:10:26.977379 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:10:26 crc kubenswrapper[4861]: E0219 13:10:26.977745 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.063264 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.063495 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.063582 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.063661 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.063721 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:27Z","lastTransitionTime":"2026-02-19T13:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.084525 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 02:53:49.722870076 +0000 UTC Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.166773 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.166815 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.166825 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.166845 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.166858 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:27Z","lastTransitionTime":"2026-02-19T13:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.261878 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.263636 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca"} Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.264777 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.265797 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" event={"ID":"de22a290-251d-48e9-95e8-f4dbebd04451","Type":"ContainerStarted","Data":"452bcb019fab43ee8af55567145d64a915f1a1cf4bcc71d38f84240689f16a49"} Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.265826 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" event={"ID":"de22a290-251d-48e9-95e8-f4dbebd04451","Type":"ContainerStarted","Data":"3fd93ee70e80340ccdd5b6c229dcd78e86eccb137368addb41781d931ad54507"} Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.265838 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" event={"ID":"de22a290-251d-48e9-95e8-f4dbebd04451","Type":"ContainerStarted","Data":"cb7f4e6e48a7d220410f932b96fdd0df9aecaddfa88e07d6df3237f3baa1599d"} Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.268399 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:27 crc 
kubenswrapper[4861]: I0219 13:10:27.268436 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.268448 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.268460 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.268469 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:27Z","lastTransitionTime":"2026-02-19T13:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.277501 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.291682 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.305373 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.324194 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.339677 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.361506 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.371609 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.371661 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.371675 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:27 crc 
kubenswrapper[4861]: I0219 13:10:27.371710 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.371726 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:27Z","lastTransitionTime":"2026-02-19T13:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.384329 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.407151 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a00fa6ac2bc4e18110ceede15facae238f85f5300a7da75e8296806a73007d74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a00fa6ac2bc4e18110ceede15facae238f85f5300a7da75e8296806a73007d74\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"message\\\":\\\"ions generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 13:10:24.984404 6313 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wb9bn_openshift-ovn-kubernetes(2b4f740d-a1ca-450f-adad-afb42efe0c76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa5
8e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.420474 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de22a290-251d-48e9-95e8-f4dbebd04451\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8rwhb\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.436400 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.451869 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e9
93c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.465528 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1177130c68cb37f6c2057bcefc307690fd3ee5cd4e5e1992044aad2a199ab83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb080
bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.474676 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.474737 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.474749 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.474766 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.474776 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:27Z","lastTransitionTime":"2026-02-19T13:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.476003 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.488517 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.514243 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.530316 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.544148 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.560007 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.576472 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:27 crc 
kubenswrapper[4861]: I0219 13:10:27.576550 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.576570 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.576598 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.576616 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:27Z","lastTransitionTime":"2026-02-19T13:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.586169 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a00fa6ac2bc4e18110ceede15facae238f85f5300a7da75e8296806a73007d74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a00fa6ac2bc4e18110ceede15facae238f85f5300a7da75e8296806a73007d74\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"message\\\":\\\"ions generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 13:10:24.984404 6313 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wb9bn_openshift-ovn-kubernetes(2b4f740d-a1ca-450f-adad-afb42efe0c76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa5
8e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.601338 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de22a290-251d-48e9-95e8-f4dbebd04451\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd93ee70e80340ccdd5b6c229dcd78e86eccb137368addb41781d931ad54507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452bcb019fab43ee8af55567145d64a915f1a
1cf4bcc71d38f84240689f16a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8rwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.620057 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.634353 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.648714 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.660215 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.671925 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.678959 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.678994 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.679008 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.679029 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.679043 4861 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:27Z","lastTransitionTime":"2026-02-19T13:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.686748 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.698384 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.714003 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1177130c68cb37f6c2057bcefc307690fd3ee5cd4e5e1992044aad2a199ab83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-
additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df
312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:18Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.728017 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.740135 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.781062 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.781105 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.781115 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.781127 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.781139 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:27Z","lastTransitionTime":"2026-02-19T13:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.858515 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-kjwt5"] Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.859541 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:10:27 crc kubenswrapper[4861]: E0219 13:10:27.859677 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.882174 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.883824 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.883862 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.883872 4861 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.883890 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.883905 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:27Z","lastTransitionTime":"2026-02-19T13:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.905586 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 
2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.928309 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 
2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.947482 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467
365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac
117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:27Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.967637 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/163fc0e2-f792-4062-88a7-3ed764a08103-metrics-certs\") pod \"network-metrics-daemon-kjwt5\" (UID: \"163fc0e2-f792-4062-88a7-3ed764a08103\") " pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.967695 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn4bw\" (UniqueName: \"kubernetes.io/projected/163fc0e2-f792-4062-88a7-3ed764a08103-kube-api-access-dn4bw\") pod \"network-metrics-daemon-kjwt5\" (UID: \"163fc0e2-f792-4062-88a7-3ed764a08103\") " pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 
13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.986158 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.986193 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.986205 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.986220 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:27 crc kubenswrapper[4861]: I0219 13:10:27.986232 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:27Z","lastTransitionTime":"2026-02-19T13:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.013734 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:28Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.042332 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:28Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.064204 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a00fa6ac2bc4e18110ceede15facae238f85f5300a7da75e8296806a73007d74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a00fa6ac2bc4e18110ceede15facae238f85f5300a7da75e8296806a73007d74\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"message\\\":\\\"ions generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 13:10:24.984404 6313 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wb9bn_openshift-ovn-kubernetes(2b4f740d-a1ca-450f-adad-afb42efe0c76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa5
8e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:28Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.068659 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn4bw\" (UniqueName: \"kubernetes.io/projected/163fc0e2-f792-4062-88a7-3ed764a08103-kube-api-access-dn4bw\") pod \"network-metrics-daemon-kjwt5\" (UID: \"163fc0e2-f792-4062-88a7-3ed764a08103\") " pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.068730 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/163fc0e2-f792-4062-88a7-3ed764a08103-metrics-certs\") pod \"network-metrics-daemon-kjwt5\" (UID: \"163fc0e2-f792-4062-88a7-3ed764a08103\") " pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:10:28 crc kubenswrapper[4861]: E0219 13:10:28.068822 4861 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object 
"openshift-multus"/"metrics-daemon-secret" not registered Feb 19 13:10:28 crc kubenswrapper[4861]: E0219 13:10:28.068868 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/163fc0e2-f792-4062-88a7-3ed764a08103-metrics-certs podName:163fc0e2-f792-4062-88a7-3ed764a08103 nodeName:}" failed. No retries permitted until 2026-02-19 13:10:28.568854794 +0000 UTC m=+43.229958022 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/163fc0e2-f792-4062-88a7-3ed764a08103-metrics-certs") pod "network-metrics-daemon-kjwt5" (UID: "163fc0e2-f792-4062-88a7-3ed764a08103") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.081381 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de22a290-251d-48e9-95e8-f4dbebd04451\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd93ee70e80340ccdd5b6c229dcd78e86eccb137368addb41
781d931ad54507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452bcb019fab43ee8af55567145d64a915f1a1cf4bcc71d38f84240689f16a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\
"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8rwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:28Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.084795 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 07:15:20.572957461 +0000 UTC Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.089133 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.089174 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.089183 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.089200 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.089209 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:28Z","lastTransitionTime":"2026-02-19T13:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.094334 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn4bw\" (UniqueName: \"kubernetes.io/projected/163fc0e2-f792-4062-88a7-3ed764a08103-kube-api-access-dn4bw\") pod \"network-metrics-daemon-kjwt5\" (UID: \"163fc0e2-f792-4062-88a7-3ed764a08103\") " pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.096266 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:28Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.111722 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:10:28Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.124109 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:28Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.138771 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1177130c68cb37f6c2057bcefc307690fd3ee5cd4e5e1992044aad2a199ab83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb080
bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:28Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.150105 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:28Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.163951 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:28Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.177057 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:28Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.191808 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kjwt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"163fc0e2-f792-4062-88a7-3ed764a08103\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kjwt5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:28Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.192264 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.192304 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.192317 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.192338 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.192351 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:28Z","lastTransitionTime":"2026-02-19T13:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.295001 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.295078 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.295097 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.295125 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.295144 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:28Z","lastTransitionTime":"2026-02-19T13:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.398651 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.398704 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.398717 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.398740 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.398755 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:28Z","lastTransitionTime":"2026-02-19T13:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.502686 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.502745 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.502787 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.502813 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.502829 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:28Z","lastTransitionTime":"2026-02-19T13:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.574159 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/163fc0e2-f792-4062-88a7-3ed764a08103-metrics-certs\") pod \"network-metrics-daemon-kjwt5\" (UID: \"163fc0e2-f792-4062-88a7-3ed764a08103\") " pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:10:28 crc kubenswrapper[4861]: E0219 13:10:28.574386 4861 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 13:10:28 crc kubenswrapper[4861]: E0219 13:10:28.574514 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/163fc0e2-f792-4062-88a7-3ed764a08103-metrics-certs podName:163fc0e2-f792-4062-88a7-3ed764a08103 nodeName:}" failed. No retries permitted until 2026-02-19 13:10:29.574489906 +0000 UTC m=+44.235593144 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/163fc0e2-f792-4062-88a7-3ed764a08103-metrics-certs") pod "network-metrics-daemon-kjwt5" (UID: "163fc0e2-f792-4062-88a7-3ed764a08103") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.606382 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.606504 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.606519 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.606539 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.606554 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:28Z","lastTransitionTime":"2026-02-19T13:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.710059 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.710648 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.711067 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.711305 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.711536 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:28Z","lastTransitionTime":"2026-02-19T13:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.815145 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.815246 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.815275 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.815313 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.815337 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:28Z","lastTransitionTime":"2026-02-19T13:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.918823 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.918886 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.918898 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.918921 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.918937 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:28Z","lastTransitionTime":"2026-02-19T13:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.976700 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.976806 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:28 crc kubenswrapper[4861]: I0219 13:10:28.976692 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:28 crc kubenswrapper[4861]: E0219 13:10:28.976971 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:10:28 crc kubenswrapper[4861]: E0219 13:10:28.977193 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:10:28 crc kubenswrapper[4861]: E0219 13:10:28.977464 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.022871 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.022950 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.022970 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.023000 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.023019 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:29Z","lastTransitionTime":"2026-02-19T13:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.085283 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 23:29:08.878401909 +0000 UTC Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.126455 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.126529 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.126550 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.126582 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.126603 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:29Z","lastTransitionTime":"2026-02-19T13:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.228454 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.228491 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.228501 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.228517 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.228527 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:29Z","lastTransitionTime":"2026-02-19T13:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.330865 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.330944 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.330960 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.330981 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.330997 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:29Z","lastTransitionTime":"2026-02-19T13:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.434584 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.434625 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.434634 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.434648 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.434658 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:29Z","lastTransitionTime":"2026-02-19T13:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.472319 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.472384 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.472397 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.472441 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.472457 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:29Z","lastTransitionTime":"2026-02-19T13:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:29 crc kubenswrapper[4861]: E0219 13:10:29.488347 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:29Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.492999 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.493050 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.493061 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.493081 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.493094 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:29Z","lastTransitionTime":"2026-02-19T13:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:29 crc kubenswrapper[4861]: E0219 13:10:29.509507 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:29Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.545333 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.545502 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.545522 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.545547 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.545570 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:29Z","lastTransitionTime":"2026-02-19T13:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:29 crc kubenswrapper[4861]: E0219 13:10:29.565293 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:29Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.572473 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.572711 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.572777 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.572850 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.572932 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:29Z","lastTransitionTime":"2026-02-19T13:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.584352 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/163fc0e2-f792-4062-88a7-3ed764a08103-metrics-certs\") pod \"network-metrics-daemon-kjwt5\" (UID: \"163fc0e2-f792-4062-88a7-3ed764a08103\") " pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:10:29 crc kubenswrapper[4861]: E0219 13:10:29.584544 4861 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 13:10:29 crc kubenswrapper[4861]: E0219 13:10:29.584650 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/163fc0e2-f792-4062-88a7-3ed764a08103-metrics-certs podName:163fc0e2-f792-4062-88a7-3ed764a08103 nodeName:}" failed. No retries permitted until 2026-02-19 13:10:31.584628759 +0000 UTC m=+46.245731987 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/163fc0e2-f792-4062-88a7-3ed764a08103-metrics-certs") pod "network-metrics-daemon-kjwt5" (UID: "163fc0e2-f792-4062-88a7-3ed764a08103") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 13:10:29 crc kubenswrapper[4861]: E0219 13:10:29.588851 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb4
9c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\"
:[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d4
6c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\
\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2
083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:29Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.593187 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.593299 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.593373 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.593467 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.593539 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:29Z","lastTransitionTime":"2026-02-19T13:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:29 crc kubenswrapper[4861]: E0219 13:10:29.607146 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:29Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:29 crc kubenswrapper[4861]: E0219 13:10:29.607378 4861 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.609445 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.609479 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.609494 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.609513 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.609527 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:29Z","lastTransitionTime":"2026-02-19T13:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.713544 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.713611 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.713630 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.713656 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.713673 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:29Z","lastTransitionTime":"2026-02-19T13:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.816996 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.817039 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.817052 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.817068 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.817079 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:29Z","lastTransitionTime":"2026-02-19T13:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.920533 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.920727 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.920755 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.920831 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.920854 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:29Z","lastTransitionTime":"2026-02-19T13:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:29 crc kubenswrapper[4861]: I0219 13:10:29.976765 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:10:29 crc kubenswrapper[4861]: E0219 13:10:29.977029 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.024367 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.024460 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.024477 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.024507 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.024524 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:30Z","lastTransitionTime":"2026-02-19T13:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.085875 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 06:40:58.403324018 +0000 UTC Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.128758 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.128821 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.128839 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.128867 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.128887 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:30Z","lastTransitionTime":"2026-02-19T13:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.232552 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.232616 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.232630 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.232659 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.232679 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:30Z","lastTransitionTime":"2026-02-19T13:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.336526 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.336596 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.336616 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.336647 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.336671 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:30Z","lastTransitionTime":"2026-02-19T13:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.440408 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.440523 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.440544 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.440578 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.440601 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:30Z","lastTransitionTime":"2026-02-19T13:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.543916 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.543953 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.543963 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.543978 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.543986 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:30Z","lastTransitionTime":"2026-02-19T13:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.647646 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.648050 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.648247 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.648506 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.648706 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:30Z","lastTransitionTime":"2026-02-19T13:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.751915 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.751975 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.751985 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.752001 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.752009 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:30Z","lastTransitionTime":"2026-02-19T13:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.854781 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.854859 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.854897 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.854922 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.854939 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:30Z","lastTransitionTime":"2026-02-19T13:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.958602 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.958785 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.958813 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.958839 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.958860 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:30Z","lastTransitionTime":"2026-02-19T13:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.976150 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.976202 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:30 crc kubenswrapper[4861]: I0219 13:10:30.976244 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:30 crc kubenswrapper[4861]: E0219 13:10:30.976306 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:10:30 crc kubenswrapper[4861]: E0219 13:10:30.976486 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:10:30 crc kubenswrapper[4861]: E0219 13:10:30.976693 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.062160 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.062257 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.062273 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.062320 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.062334 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:31Z","lastTransitionTime":"2026-02-19T13:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.086666 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 04:40:22.971375173 +0000 UTC Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.165530 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.165604 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.165624 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.165653 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.165675 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:31Z","lastTransitionTime":"2026-02-19T13:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.270959 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.271040 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.271063 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.271090 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.271117 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:31Z","lastTransitionTime":"2026-02-19T13:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.374599 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.374645 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.374654 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.374669 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.374679 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:31Z","lastTransitionTime":"2026-02-19T13:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.477882 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.477989 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.478007 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.478039 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.478057 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:31Z","lastTransitionTime":"2026-02-19T13:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.581310 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.581396 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.581409 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.581451 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.581466 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:31Z","lastTransitionTime":"2026-02-19T13:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.613193 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/163fc0e2-f792-4062-88a7-3ed764a08103-metrics-certs\") pod \"network-metrics-daemon-kjwt5\" (UID: \"163fc0e2-f792-4062-88a7-3ed764a08103\") " pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:10:31 crc kubenswrapper[4861]: E0219 13:10:31.613497 4861 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 13:10:31 crc kubenswrapper[4861]: E0219 13:10:31.613617 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/163fc0e2-f792-4062-88a7-3ed764a08103-metrics-certs podName:163fc0e2-f792-4062-88a7-3ed764a08103 nodeName:}" failed. No retries permitted until 2026-02-19 13:10:35.613590254 +0000 UTC m=+50.274693482 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/163fc0e2-f792-4062-88a7-3ed764a08103-metrics-certs") pod "network-metrics-daemon-kjwt5" (UID: "163fc0e2-f792-4062-88a7-3ed764a08103") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.684753 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.684832 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.684851 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.684883 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.684910 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:31Z","lastTransitionTime":"2026-02-19T13:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.789775 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.789856 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.789875 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.789906 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.789929 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:31Z","lastTransitionTime":"2026-02-19T13:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.894298 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.894364 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.894389 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.894457 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.894479 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:31Z","lastTransitionTime":"2026-02-19T13:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.976317 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:10:31 crc kubenswrapper[4861]: E0219 13:10:31.976556 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.997687 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.997764 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.997786 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.997820 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:31 crc kubenswrapper[4861]: I0219 13:10:31.997841 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:31Z","lastTransitionTime":"2026-02-19T13:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.088680 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 23:51:09.290187912 +0000 UTC Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.101772 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.101856 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.101883 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.101916 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.101942 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:32Z","lastTransitionTime":"2026-02-19T13:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.204228 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.204264 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.204272 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.204286 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.204294 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:32Z","lastTransitionTime":"2026-02-19T13:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.307666 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.307723 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.307733 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.307748 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.307758 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:32Z","lastTransitionTime":"2026-02-19T13:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.411370 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.411438 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.411451 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.411472 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.411485 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:32Z","lastTransitionTime":"2026-02-19T13:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.514183 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.514260 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.514283 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.514795 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.515055 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:32Z","lastTransitionTime":"2026-02-19T13:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.618797 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.618913 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.618936 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.618964 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.618985 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:32Z","lastTransitionTime":"2026-02-19T13:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.721689 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.721755 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.721780 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.721817 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.721839 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:32Z","lastTransitionTime":"2026-02-19T13:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.825546 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.825650 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.825681 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.825721 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.825749 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:32Z","lastTransitionTime":"2026-02-19T13:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.928825 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.928890 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.928915 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.928943 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.928967 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:32Z","lastTransitionTime":"2026-02-19T13:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.976263 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:32 crc kubenswrapper[4861]: E0219 13:10:32.976487 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.976601 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:32 crc kubenswrapper[4861]: I0219 13:10:32.976621 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:32 crc kubenswrapper[4861]: E0219 13:10:32.976805 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:10:32 crc kubenswrapper[4861]: E0219 13:10:32.977184 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.032493 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.032563 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.032581 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.032609 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.032631 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:33Z","lastTransitionTime":"2026-02-19T13:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.089700 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 05:26:07.769728654 +0000 UTC Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.136110 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.136192 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.136211 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.136240 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.136260 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:33Z","lastTransitionTime":"2026-02-19T13:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.238295 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.238345 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.238354 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.238366 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.238377 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:33Z","lastTransitionTime":"2026-02-19T13:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.343867 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.344219 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.344318 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.344440 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.344549 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:33Z","lastTransitionTime":"2026-02-19T13:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.448695 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.448762 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.448783 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.448809 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.448826 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:33Z","lastTransitionTime":"2026-02-19T13:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.551754 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.551806 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.551817 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.551835 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.551848 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:33Z","lastTransitionTime":"2026-02-19T13:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.655015 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.655079 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.655101 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.655131 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.655152 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:33Z","lastTransitionTime":"2026-02-19T13:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.759059 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.759109 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.759122 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.759139 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.759149 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:33Z","lastTransitionTime":"2026-02-19T13:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.862890 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.862939 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.862949 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.862972 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.862990 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:33Z","lastTransitionTime":"2026-02-19T13:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.967288 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.967357 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.967378 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.967410 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.967454 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:33Z","lastTransitionTime":"2026-02-19T13:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:33 crc kubenswrapper[4861]: I0219 13:10:33.976728 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:10:33 crc kubenswrapper[4861]: E0219 13:10:33.976856 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.069926 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.070006 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.070025 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.070048 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.070064 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:34Z","lastTransitionTime":"2026-02-19T13:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.090198 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 04:52:17.889002259 +0000 UTC Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.173291 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.173374 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.173399 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.173471 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.173492 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:34Z","lastTransitionTime":"2026-02-19T13:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.276562 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.276624 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.276638 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.276661 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.276681 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:34Z","lastTransitionTime":"2026-02-19T13:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.380518 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.380615 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.380638 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.380669 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.380697 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:34Z","lastTransitionTime":"2026-02-19T13:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.484379 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.484478 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.484516 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.484554 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.484589 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:34Z","lastTransitionTime":"2026-02-19T13:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.588550 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.588613 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.588637 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.588671 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.588695 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:34Z","lastTransitionTime":"2026-02-19T13:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.692821 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.692886 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.692911 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.692945 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.692968 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:34Z","lastTransitionTime":"2026-02-19T13:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.795912 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.796086 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.796114 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.796186 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.796216 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:34Z","lastTransitionTime":"2026-02-19T13:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.899140 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.899572 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.899741 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.899911 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.900094 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:34Z","lastTransitionTime":"2026-02-19T13:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.976936 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.977069 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:34 crc kubenswrapper[4861]: E0219 13:10:34.977121 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:10:34 crc kubenswrapper[4861]: E0219 13:10:34.977245 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:10:34 crc kubenswrapper[4861]: I0219 13:10:34.976936 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:34 crc kubenswrapper[4861]: E0219 13:10:34.977376 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.003805 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.003878 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.003895 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.003922 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.003947 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:35Z","lastTransitionTime":"2026-02-19T13:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.091246 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 18:17:31.56697865 +0000 UTC Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.106447 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.106483 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.106491 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.106505 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.106517 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:35Z","lastTransitionTime":"2026-02-19T13:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.209875 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.209925 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.209939 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.209957 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.209970 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:35Z","lastTransitionTime":"2026-02-19T13:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.312715 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.312764 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.312779 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.312798 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.312811 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:35Z","lastTransitionTime":"2026-02-19T13:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.416320 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.416412 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.416467 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.416503 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.416526 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:35Z","lastTransitionTime":"2026-02-19T13:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.519048 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.519113 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.519131 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.519154 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.519171 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:35Z","lastTransitionTime":"2026-02-19T13:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.622389 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.622488 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.622507 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.622530 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.622546 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:35Z","lastTransitionTime":"2026-02-19T13:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.666508 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/163fc0e2-f792-4062-88a7-3ed764a08103-metrics-certs\") pod \"network-metrics-daemon-kjwt5\" (UID: \"163fc0e2-f792-4062-88a7-3ed764a08103\") " pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:10:35 crc kubenswrapper[4861]: E0219 13:10:35.666758 4861 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 13:10:35 crc kubenswrapper[4861]: E0219 13:10:35.666864 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/163fc0e2-f792-4062-88a7-3ed764a08103-metrics-certs podName:163fc0e2-f792-4062-88a7-3ed764a08103 nodeName:}" failed. No retries permitted until 2026-02-19 13:10:43.666836056 +0000 UTC m=+58.327939324 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/163fc0e2-f792-4062-88a7-3ed764a08103-metrics-certs") pod "network-metrics-daemon-kjwt5" (UID: "163fc0e2-f792-4062-88a7-3ed764a08103") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.726077 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.726156 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.726181 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.726214 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.726239 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:35Z","lastTransitionTime":"2026-02-19T13:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.829846 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.829957 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.829978 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.830013 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.830035 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:35Z","lastTransitionTime":"2026-02-19T13:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.933834 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.933918 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.933943 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.933977 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.933999 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:35Z","lastTransitionTime":"2026-02-19T13:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.976388 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:10:35 crc kubenswrapper[4861]: E0219 13:10:35.976578 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:10:35 crc kubenswrapper[4861]: I0219 13:10:35.992811 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"
os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:10:35Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.007599 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:36Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.025665 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:36Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.037590 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.037667 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.037687 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.037716 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.037737 4861 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:36Z","lastTransitionTime":"2026-02-19T13:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.045069 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:36Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.077179 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a00fa6ac2bc4e18110ceede15facae238f85f5300a7da75e8296806a73007d74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a00fa6ac2bc4e18110ceede15facae238f85f5300a7da75e8296806a73007d74\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"message\\\":\\\"ions generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 13:10:24.984404 6313 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wb9bn_openshift-ovn-kubernetes(2b4f740d-a1ca-450f-adad-afb42efe0c76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa5
8e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:36Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.091713 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 06:04:30.726062593 +0000 UTC Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.093481 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de22a290-251d-48e9-95e8-f4dbebd04451\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd93ee70e80340ccdd5b6c229dcd78e86eccb137368addb41781d931ad54507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452bcb019fab43ee8af55567145d64a915f1a
1cf4bcc71d38f84240689f16a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8rwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:36Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.114760 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:36Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.130070 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:36Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.141119 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.141171 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.141184 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:36 crc 
kubenswrapper[4861]: I0219 13:10:36.141206 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.141220 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:36Z","lastTransitionTime":"2026-02-19T13:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.148780 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1177130c68cb37f6c2057bcefc307690fd3ee5cd4e5e1992044aad2a199ab83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6
bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:36Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.164893 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:36Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.181514 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:36Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.200035 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:10:36Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.215759 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:36Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.229807 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kjwt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"163fc0e2-f792-4062-88a7-3ed764a08103\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kjwt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:36Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:36 crc 
kubenswrapper[4861]: I0219 13:10:36.243259 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.243305 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.243315 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.243333 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.243345 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:36Z","lastTransitionTime":"2026-02-19T13:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.252653 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:36Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.272376 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:36Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.346936 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.347018 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.347031 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.347045 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.347054 4861 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:36Z","lastTransitionTime":"2026-02-19T13:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.450617 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.450657 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.450666 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.450680 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.450690 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:36Z","lastTransitionTime":"2026-02-19T13:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.553791 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.553841 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.553857 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.553879 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.553897 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:36Z","lastTransitionTime":"2026-02-19T13:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.659070 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.659115 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.659127 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.659145 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.659156 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:36Z","lastTransitionTime":"2026-02-19T13:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.761791 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.761851 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.761868 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.761892 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.761909 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:36Z","lastTransitionTime":"2026-02-19T13:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.865951 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.866023 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.866042 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.866068 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.866086 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:36Z","lastTransitionTime":"2026-02-19T13:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.969941 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.970001 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.970013 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.970031 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.970045 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:36Z","lastTransitionTime":"2026-02-19T13:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.976625 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.976742 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:36 crc kubenswrapper[4861]: I0219 13:10:36.976802 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:36 crc kubenswrapper[4861]: E0219 13:10:36.977405 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:10:36 crc kubenswrapper[4861]: E0219 13:10:36.977527 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:10:36 crc kubenswrapper[4861]: E0219 13:10:36.977621 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.073579 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.073640 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.073658 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.073682 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.073700 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:37Z","lastTransitionTime":"2026-02-19T13:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.083075 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.084346 4861 scope.go:117] "RemoveContainer" containerID="a00fa6ac2bc4e18110ceede15facae238f85f5300a7da75e8296806a73007d74" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.092253 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 01:05:33.542561554 +0000 UTC Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.177083 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.177339 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.177351 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.177365 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.177375 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:37Z","lastTransitionTime":"2026-02-19T13:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.279702 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.279732 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.279742 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.279758 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.279769 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:37Z","lastTransitionTime":"2026-02-19T13:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.301742 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wb9bn_2b4f740d-a1ca-450f-adad-afb42efe0c76/ovnkube-controller/1.log" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.303947 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" event={"ID":"2b4f740d-a1ca-450f-adad-afb42efe0c76","Type":"ContainerStarted","Data":"27d2156ebdf7cceb1e9cf743d86caecc86556efd63727f2291a294846a6c2863"} Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.304456 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.318257 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert
-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:37Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.329405 4861 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:37Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.341618 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:37Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.360688 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d2156ebdf7cceb1e9cf743d86caecc86556efd63727f2291a294846a6c2863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a00fa6ac2bc4e18110ceede15facae238f85f5300a7da75e8296806a73007d74\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"message\\\":\\\"ions generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 13:10:24.984404 6313 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{c02bd945-d57b-49ff-9cd3-202ed3574b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/ru
n/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:37Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.371212 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de22a290-251d-48e9-95e8-f4dbebd04451\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd93ee70e80340ccdd5b6c229dcd78e86eccb137368addb41781d931ad54507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452bcb019fab43ee8af55567145d64a915f1a
1cf4bcc71d38f84240689f16a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8rwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:37Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.382503 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.382542 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.382554 4861 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.382570 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.382582 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:37Z","lastTransitionTime":"2026-02-19T13:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.382958 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:37Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.398273 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:10:37Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.412794 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:37Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.428096 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1177130c68cb37f6c2057bcefc307690fd3ee5cd4e5e1992044aad2a199ab83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb080
bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:37Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.438389 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:37Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.455257 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:37Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.466169 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:37Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.485401 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.485448 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.485458 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.485475 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.485485 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:37Z","lastTransitionTime":"2026-02-19T13:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.486644 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kjwt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"163fc0e2-f792-4062-88a7-3ed764a08103\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kjwt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:37Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:37 crc 
kubenswrapper[4861]: I0219 13:10:37.501960 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:37Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.524742 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:37Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.545062 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:37Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.587704 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:37 crc 
kubenswrapper[4861]: I0219 13:10:37.587749 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.587759 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.587774 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.587784 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:37Z","lastTransitionTime":"2026-02-19T13:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.614128 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.623745 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.629726 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:37Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.644669 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:37Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.656242 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:10:37Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.669987 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:37Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.684267 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1177130c68cb37f6c2057bcefc307690fd3ee5cd4e5e1992044aad2a199ab83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb080
bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:37Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.689775 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.689839 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.689853 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.689875 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.689891 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:37Z","lastTransitionTime":"2026-02-19T13:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.699045 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:37Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.709371 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:37Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.719203 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kjwt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"163fc0e2-f792-4062-88a7-3ed764a08103\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kjwt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:37Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:37 crc 
kubenswrapper[4861]: I0219 13:10:37.735150 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:37Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.749021 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:37Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.760892 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:37Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.779248 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d2156ebdf7cceb1e9cf743d86caecc86556efd63727f2291a294846a6c2863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a00fa6ac2bc4e18110ceede15facae238f85f5300a7da75e8296806a73007d74\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"message\\\":\\\"ions generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 13:10:24.984404 6313 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{c02bd945-d57b-49ff-9cd3-202ed3574b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/ru
n/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:37Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.790257 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de22a290-251d-48e9-95e8-f4dbebd04451\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd93ee70e80340ccdd5b6c229dcd78e86eccb137368addb41781d931ad54507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452bcb019fab43ee8af55567145d64a915f1a
1cf4bcc71d38f84240689f16a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8rwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:37Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.791636 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.791663 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.791671 4861 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.791685 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.791694 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:37Z","lastTransitionTime":"2026-02-19T13:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.802268 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:37Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.815317 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:37Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.825322 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:37Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.894035 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.894076 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.894085 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 
13:10:37.894102 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.894111 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:37Z","lastTransitionTime":"2026-02-19T13:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.976903 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:10:37 crc kubenswrapper[4861]: E0219 13:10:37.977083 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.996716 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.996791 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.996812 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.997335 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:37 crc kubenswrapper[4861]: I0219 13:10:37.997390 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:37Z","lastTransitionTime":"2026-02-19T13:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.093120 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 05:18:46.011409741 +0000 UTC Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.100415 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.100479 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.100495 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.100516 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.100534 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:38Z","lastTransitionTime":"2026-02-19T13:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.203383 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.203483 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.203515 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.203542 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.203560 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:38Z","lastTransitionTime":"2026-02-19T13:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.307589 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.307651 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.307669 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.307699 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.307721 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:38Z","lastTransitionTime":"2026-02-19T13:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.309905 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wb9bn_2b4f740d-a1ca-450f-adad-afb42efe0c76/ovnkube-controller/2.log" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.311552 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wb9bn_2b4f740d-a1ca-450f-adad-afb42efe0c76/ovnkube-controller/1.log" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.315007 4861 generic.go:334] "Generic (PLEG): container finished" podID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerID="27d2156ebdf7cceb1e9cf743d86caecc86556efd63727f2291a294846a6c2863" exitCode=1 Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.315047 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" event={"ID":"2b4f740d-a1ca-450f-adad-afb42efe0c76","Type":"ContainerDied","Data":"27d2156ebdf7cceb1e9cf743d86caecc86556efd63727f2291a294846a6c2863"} Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.315132 4861 scope.go:117] "RemoveContainer" containerID="a00fa6ac2bc4e18110ceede15facae238f85f5300a7da75e8296806a73007d74" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.315836 4861 scope.go:117] "RemoveContainer" containerID="27d2156ebdf7cceb1e9cf743d86caecc86556efd63727f2291a294846a6c2863" Feb 19 13:10:38 crc kubenswrapper[4861]: E0219 13:10:38.316077 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-wb9bn_openshift-ovn-kubernetes(2b4f740d-a1ca-450f-adad-afb42efe0c76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.354544 4861 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d2156ebdf7cceb1e9cf743d86caecc86556efd63727f2291a294846a6c2863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a00fa6ac2bc4e18110ceede15facae238f85f5300a7da75e8296806a73007d74\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"message\\\":\\\"ions generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 13:10:24.984404 6313 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27d2156ebdf7cceb1e9cf743d86caecc86556efd63727f2291a294846a6c2863\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"message\\\":\\\"e-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"63b1440a-0908-4cab-8799-012fa1cf0b07\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}, 
Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.168\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0219 13:10:37.866399 6541 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a
49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:38Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.373144 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de22a290-251d-48e9-95e8-f4dbebd04451\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd93ee70e80340ccdd5b6c229dcd78e86eccb137368addb41781d931ad54507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452bcb019fab43ee8af55567145d64a915f1a
1cf4bcc71d38f84240689f16a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8rwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:38Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.394232 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:38Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.410929 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.410967 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.410984 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.411005 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.411022 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:38Z","lastTransitionTime":"2026-02-19T13:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.411952 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd90f8c7-a6a1-4fd9-be50-c8d386066ecd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374a70175dbefba8dcf2f5efbc5a822fc0c8889fb5ec0c634109b12cbac4111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://3c5fd7534665e46f03e90d032f43b20b131d640035418116e9c561ea8290f83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e624f4e7dff2715925374ba4efcf3b4bf3f7cc626dfddceced300dc2392eaa92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://027ae849216b720e4a482b9ac7629550fa1d01be0bdf2757c160dac0643ac67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027ae849216b720e4a482b9ac7629550fa1d01be0bdf2757c160dac0643ac67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:38Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.429497 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:38Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.446994 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:38Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.461299 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:38Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.480936 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:38Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.495204 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:10:38Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.512601 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:38Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.515464 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 
13:10:38.515549 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.515571 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.515606 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.515627 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:38Z","lastTransitionTime":"2026-02-19T13:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.529089 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1177130c68cb37f6c2057bcefc307690fd3ee5cd4e5e1992044aad2a199ab83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb080
bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:38Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.543524 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\
\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:38Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.555283 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb7
2bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:38Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.570868 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kjwt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"163fc0e2-f792-4062-88a7-3ed764a08103\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kjwt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:38Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:38 crc 
kubenswrapper[4861]: I0219 13:10:38.586020 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:38Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.602052 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:38Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.617044 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:38Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.619073 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:38 crc 
kubenswrapper[4861]: I0219 13:10:38.619179 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.619207 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.619240 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.619267 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:38Z","lastTransitionTime":"2026-02-19T13:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.722028 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.722103 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.722112 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.722127 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.722137 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:38Z","lastTransitionTime":"2026-02-19T13:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.825762 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.825836 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.825860 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.825890 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.825914 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:38Z","lastTransitionTime":"2026-02-19T13:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.928021 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.928084 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.928102 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.928125 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.928140 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:38Z","lastTransitionTime":"2026-02-19T13:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.976504 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.976654 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:38 crc kubenswrapper[4861]: E0219 13:10:38.976770 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:10:38 crc kubenswrapper[4861]: I0219 13:10:38.976813 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:38 crc kubenswrapper[4861]: E0219 13:10:38.976884 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:10:38 crc kubenswrapper[4861]: E0219 13:10:38.976949 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.030093 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.030135 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.030147 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.030165 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.030178 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:39Z","lastTransitionTime":"2026-02-19T13:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.093509 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 11:52:45.273137957 +0000 UTC Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.133488 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.133536 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.133552 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.133568 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.133580 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:39Z","lastTransitionTime":"2026-02-19T13:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.236614 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.236658 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.236671 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.236689 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.236703 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:39Z","lastTransitionTime":"2026-02-19T13:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.319921 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wb9bn_2b4f740d-a1ca-450f-adad-afb42efe0c76/ovnkube-controller/2.log" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.323711 4861 scope.go:117] "RemoveContainer" containerID="27d2156ebdf7cceb1e9cf743d86caecc86556efd63727f2291a294846a6c2863" Feb 19 13:10:39 crc kubenswrapper[4861]: E0219 13:10:39.323997 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-wb9bn_openshift-ovn-kubernetes(2b4f740d-a1ca-450f-adad-afb42efe0c76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.338945 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.340385 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.340454 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.340469 4861 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.340485 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.340497 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:39Z","lastTransitionTime":"2026-02-19T13:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.351142 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 
2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.362738 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 
2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.378306 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467
365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac
117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.394919 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd90f8c7-a6a1-4fd9-be50-c8d386066ecd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374a70175dbefba8dcf2f5efbc5a822fc0c8889fb5ec0c634109b12cbac4111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5fd7534665e46f03e90d032f43b20b131d640035418116e9c561ea8290f83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e624f4e7dff2715925374ba4efcf3b4bf3f7cc626dfddceced300dc2392eaa92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://027ae849216b720e4a482b9ac7629550fa1d01be0bdf2757c160dac0643ac67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://027ae849216b720e4a482b9ac7629550fa1d01be0bdf2757c160dac0643ac67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.410741 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.430098 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.442837 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.442900 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.442915 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.442934 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.442946 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:39Z","lastTransitionTime":"2026-02-19T13:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.462464 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d2156ebdf7cceb1e9cf743d86caecc86556efd63727f2291a294846a6c2863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27d2156ebdf7cceb1e9cf743d86caecc86556efd63727f2291a294846a6c2863\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"message\\\":\\\"e-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"63b1440a-0908-4cab-8799-012fa1cf0b07\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}, 
Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.168\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0219 13:10:37.866399 6541 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wb9bn_openshift-ovn-kubernetes(2b4f740d-a1ca-450f-adad-afb42efe0c76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa5
8e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.480502 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de22a290-251d-48e9-95e8-f4dbebd04451\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd93ee70e80340ccdd5b6c229dcd78e86eccb137368addb41781d931ad54507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452bcb019fab43ee8af55567145d64a915f1a
1cf4bcc71d38f84240689f16a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8rwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.500064 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.515101 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.529768 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.545660 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 
13:10:39.545700 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.545709 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.545723 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.545732 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:39Z","lastTransitionTime":"2026-02-19T13:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.552435 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1177130c68cb37f6c2057bcefc307690fd3ee5cd4e5e1992044aad2a199ab83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb080
bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.566054 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.581112 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.626047 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.647605 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.647621 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.647628 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.647639 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.647648 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:39Z","lastTransitionTime":"2026-02-19T13:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.662968 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.670662 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.670694 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.670705 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.670722 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.670737 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:39Z","lastTransitionTime":"2026-02-19T13:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.675268 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kjwt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"163fc0e2-f792-4062-88a7-3ed764a08103\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kjwt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc 
kubenswrapper[4861]: E0219 13:10:39.687890 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.690605 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.692401 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.692568 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.692651 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.692731 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.692813 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:39Z","lastTransitionTime":"2026-02-19T13:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.707478 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kjwt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"163fc0e2-f792-4062-88a7-3ed764a08103\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kjwt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc 
kubenswrapper[4861]: E0219 13:10:39.709788 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.714223 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.714265 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.714274 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.714289 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.714302 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:39Z","lastTransitionTime":"2026-02-19T13:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.726796 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: E0219 13:10:39.731051 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.735635 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.735674 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.735684 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.735699 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.735713 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:39Z","lastTransitionTime":"2026-02-19T13:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.740154 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.753858 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: E0219 13:10:39.754689 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.759182 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.759237 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.759253 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.759276 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.759291 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:39Z","lastTransitionTime":"2026-02-19T13:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.774538 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: E0219 13:10:39.778121 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: E0219 13:10:39.778788 4861 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.780597 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.780628 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.780637 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.780649 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.780668 4861 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:39Z","lastTransitionTime":"2026-02-19T13:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.795111 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.811563 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.832986 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d2156ebdf7cceb1e9cf743d86caecc86556efd63727f2291a294846a6c2863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27d2156ebdf7cceb1e9cf743d86caecc86556efd63727f2291a294846a6c2863\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"message\\\":\\\"e-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"63b1440a-0908-4cab-8799-012fa1cf0b07\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}, 
Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.168\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0219 13:10:37.866399 6541 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wb9bn_openshift-ovn-kubernetes(2b4f740d-a1ca-450f-adad-afb42efe0c76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa5
8e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.845346 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de22a290-251d-48e9-95e8-f4dbebd04451\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd93ee70e80340ccdd5b6c229dcd78e86eccb137368addb41781d931ad54507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452bcb019fab43ee8af55567145d64a915f1a
1cf4bcc71d38f84240689f16a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8rwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.858721 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.869197 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd90f8c7-a6a1-4fd9-be50-c8d386066ecd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374a70175dbefba8dcf2f5efbc5a822fc0c8889fb5ec0c634109b12cbac4111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5fd7534665e46f03e90d032f43b20b131d640035418116e9c561ea8290f83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e624f4e7dff2715925374ba4efcf3b4bf3f7cc626dfddceced300dc2392eaa92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://027ae849216b720e4a482b9ac7629550fa1d01be0bdf2757c160dac0643ac67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://027ae849216b720e4a482b9ac7629550fa1d01be0bdf2757c160dac0643ac67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.879708 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e9
93c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.883939 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.883977 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.883990 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:39 crc 
kubenswrapper[4861]: I0219 13:10:39.884006 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.884018 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:39Z","lastTransitionTime":"2026-02-19T13:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.897501 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1177130c68cb37f6c2057bcefc307690fd3ee5cd4e5e1992044aad2a199ab83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6
bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.906828 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.922650 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.938360 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.976892 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:10:39 crc kubenswrapper[4861]: E0219 13:10:39.977113 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.985729 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.985775 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.985789 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.985808 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:39 crc kubenswrapper[4861]: I0219 13:10:39.985823 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:39Z","lastTransitionTime":"2026-02-19T13:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.088939 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.088971 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.088980 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.088992 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.089001 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:40Z","lastTransitionTime":"2026-02-19T13:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.094482 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 10:12:05.030092674 +0000 UTC Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.191699 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.191745 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.191762 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.191786 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.191802 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:40Z","lastTransitionTime":"2026-02-19T13:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.295464 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.295814 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.295960 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.296112 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.296261 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:40Z","lastTransitionTime":"2026-02-19T13:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.399094 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.399156 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.399181 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.399210 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.399232 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:40Z","lastTransitionTime":"2026-02-19T13:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.502120 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.502182 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.502193 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.502208 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.502219 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:40Z","lastTransitionTime":"2026-02-19T13:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.605210 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.605591 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.605784 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.605989 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.606284 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:40Z","lastTransitionTime":"2026-02-19T13:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.709533 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.709574 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.709610 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.709627 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.709638 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:40Z","lastTransitionTime":"2026-02-19T13:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.812874 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.812918 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.812934 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.812957 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.812974 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:40Z","lastTransitionTime":"2026-02-19T13:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.916116 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.916403 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.916560 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.916687 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.916813 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:40Z","lastTransitionTime":"2026-02-19T13:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.926962 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:10:40 crc kubenswrapper[4861]: E0219 13:10:40.927269 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 13:11:12.927244134 +0000 UTC m=+87.588347402 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.976190 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.976216 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:40 crc kubenswrapper[4861]: E0219 13:10:40.976366 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:10:40 crc kubenswrapper[4861]: E0219 13:10:40.976496 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:10:40 crc kubenswrapper[4861]: I0219 13:10:40.976802 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:40 crc kubenswrapper[4861]: E0219 13:10:40.977059 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.019824 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.019863 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.019873 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.019889 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.019901 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:41Z","lastTransitionTime":"2026-02-19T13:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.028394 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.028452 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.028472 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.028492 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:41 crc kubenswrapper[4861]: E0219 13:10:41.028595 4861 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 13:10:41 crc kubenswrapper[4861]: E0219 13:10:41.028636 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 13:11:13.028624036 +0000 UTC m=+87.689727264 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 13:10:41 crc kubenswrapper[4861]: E0219 13:10:41.028635 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 13:10:41 crc kubenswrapper[4861]: E0219 13:10:41.028671 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 13:10:41 crc kubenswrapper[4861]: E0219 13:10:41.028685 4861 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 13:10:41 crc kubenswrapper[4861]: E0219 13:10:41.028738 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-19 13:11:13.028720099 +0000 UTC m=+87.689823327 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 13:10:41 crc kubenswrapper[4861]: E0219 13:10:41.028771 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 13:10:41 crc kubenswrapper[4861]: E0219 13:10:41.028783 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 13:10:41 crc kubenswrapper[4861]: E0219 13:10:41.028792 4861 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 13:10:41 crc kubenswrapper[4861]: E0219 13:10:41.028815 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 13:11:13.028808631 +0000 UTC m=+87.689911849 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 13:10:41 crc kubenswrapper[4861]: E0219 13:10:41.028840 4861 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 13:10:41 crc kubenswrapper[4861]: E0219 13:10:41.028857 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 13:11:13.028852133 +0000 UTC m=+87.689955361 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.095286 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 07:01:19.220611943 +0000 UTC Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.122537 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.122583 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.122602 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.122625 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.122641 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:41Z","lastTransitionTime":"2026-02-19T13:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.225448 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.225484 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.225492 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.225513 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.225522 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:41Z","lastTransitionTime":"2026-02-19T13:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.328213 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.328254 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.328265 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.328280 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.328293 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:41Z","lastTransitionTime":"2026-02-19T13:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.431528 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.431571 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.431582 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.431598 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.431609 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:41Z","lastTransitionTime":"2026-02-19T13:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.533736 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.533808 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.533825 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.533850 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.533866 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:41Z","lastTransitionTime":"2026-02-19T13:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.637196 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.637230 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.637241 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.637257 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.637268 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:41Z","lastTransitionTime":"2026-02-19T13:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.739485 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.739534 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.739547 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.739566 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.739583 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:41Z","lastTransitionTime":"2026-02-19T13:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.842206 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.842284 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.842310 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.842339 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.842360 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:41Z","lastTransitionTime":"2026-02-19T13:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.945122 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.945197 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.945234 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.945270 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.945293 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:41Z","lastTransitionTime":"2026-02-19T13:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:41 crc kubenswrapper[4861]: I0219 13:10:41.977092 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:10:41 crc kubenswrapper[4861]: E0219 13:10:41.977338 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.049311 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.049378 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.049622 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.049653 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.049674 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:42Z","lastTransitionTime":"2026-02-19T13:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.095525 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 01:04:33.659252367 +0000 UTC Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.152942 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.153020 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.153047 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.153072 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.153090 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:42Z","lastTransitionTime":"2026-02-19T13:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.256129 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.256184 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.256196 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.256216 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.256229 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:42Z","lastTransitionTime":"2026-02-19T13:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.359190 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.359261 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.359278 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.359297 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.359310 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:42Z","lastTransitionTime":"2026-02-19T13:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.463724 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.463805 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.463828 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.463870 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.463895 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:42Z","lastTransitionTime":"2026-02-19T13:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.566527 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.566580 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.566620 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.566650 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.566671 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:42Z","lastTransitionTime":"2026-02-19T13:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.671158 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.671230 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.671251 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.671283 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.671303 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:42Z","lastTransitionTime":"2026-02-19T13:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.775520 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.775604 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.775625 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.775656 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.775680 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:42Z","lastTransitionTime":"2026-02-19T13:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.879344 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.879394 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.879403 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.879440 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.879451 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:42Z","lastTransitionTime":"2026-02-19T13:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.976963 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.977009 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:42 crc kubenswrapper[4861]: E0219 13:10:42.977106 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.977022 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:42 crc kubenswrapper[4861]: E0219 13:10:42.977298 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:10:42 crc kubenswrapper[4861]: E0219 13:10:42.977376 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.982215 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.982279 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.982297 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.982328 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:42 crc kubenswrapper[4861]: I0219 13:10:42.982349 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:42Z","lastTransitionTime":"2026-02-19T13:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.085491 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.085533 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.085544 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.085569 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.085582 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:43Z","lastTransitionTime":"2026-02-19T13:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.095829 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 00:17:56.601761094 +0000 UTC Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.189371 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.189457 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.189471 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.189490 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.189503 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:43Z","lastTransitionTime":"2026-02-19T13:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.292863 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.292958 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.292976 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.293003 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.293023 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:43Z","lastTransitionTime":"2026-02-19T13:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.396484 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.396734 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.396756 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.396781 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.396799 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:43Z","lastTransitionTime":"2026-02-19T13:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.499472 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.499540 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.499562 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.499591 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.499607 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:43Z","lastTransitionTime":"2026-02-19T13:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.602006 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.602058 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.602069 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.602086 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.602098 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:43Z","lastTransitionTime":"2026-02-19T13:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.704961 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.705035 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.705045 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.705060 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.705069 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:43Z","lastTransitionTime":"2026-02-19T13:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.767954 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/163fc0e2-f792-4062-88a7-3ed764a08103-metrics-certs\") pod \"network-metrics-daemon-kjwt5\" (UID: \"163fc0e2-f792-4062-88a7-3ed764a08103\") " pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:10:43 crc kubenswrapper[4861]: E0219 13:10:43.768190 4861 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 13:10:43 crc kubenswrapper[4861]: E0219 13:10:43.768516 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/163fc0e2-f792-4062-88a7-3ed764a08103-metrics-certs podName:163fc0e2-f792-4062-88a7-3ed764a08103 nodeName:}" failed. No retries permitted until 2026-02-19 13:10:59.768390453 +0000 UTC m=+74.429493711 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/163fc0e2-f792-4062-88a7-3ed764a08103-metrics-certs") pod "network-metrics-daemon-kjwt5" (UID: "163fc0e2-f792-4062-88a7-3ed764a08103") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.807570 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.807625 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.807634 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.807646 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.807656 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:43Z","lastTransitionTime":"2026-02-19T13:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.911120 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.911177 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.911188 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.911209 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.911222 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:43Z","lastTransitionTime":"2026-02-19T13:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:43 crc kubenswrapper[4861]: I0219 13:10:43.976930 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:10:43 crc kubenswrapper[4861]: E0219 13:10:43.977113 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.014139 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.014210 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.014227 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.014252 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.014273 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:44Z","lastTransitionTime":"2026-02-19T13:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.096334 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 05:44:02.398203147 +0000 UTC Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.116132 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.116175 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.116187 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.116202 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.116211 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:44Z","lastTransitionTime":"2026-02-19T13:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.218783 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.218830 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.218844 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.218862 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.218875 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:44Z","lastTransitionTime":"2026-02-19T13:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.321836 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.321878 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.321887 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.321902 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.321912 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:44Z","lastTransitionTime":"2026-02-19T13:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.423727 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.423766 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.423777 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.423794 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.423806 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:44Z","lastTransitionTime":"2026-02-19T13:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.527125 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.527164 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.527173 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.527203 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.527213 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:44Z","lastTransitionTime":"2026-02-19T13:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.630415 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.630511 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.630528 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.630552 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.630570 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:44Z","lastTransitionTime":"2026-02-19T13:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.734119 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.734143 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.734151 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.734163 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.734171 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:44Z","lastTransitionTime":"2026-02-19T13:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.836044 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.836077 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.836086 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.836098 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.836107 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:44Z","lastTransitionTime":"2026-02-19T13:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.938638 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.938668 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.938676 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.938690 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.938699 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:44Z","lastTransitionTime":"2026-02-19T13:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.976137 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.976197 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:44 crc kubenswrapper[4861]: I0219 13:10:44.976208 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:44 crc kubenswrapper[4861]: E0219 13:10:44.976270 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:10:44 crc kubenswrapper[4861]: E0219 13:10:44.976398 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:10:44 crc kubenswrapper[4861]: E0219 13:10:44.976487 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.041136 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.041181 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.041193 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.041211 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.041223 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:45Z","lastTransitionTime":"2026-02-19T13:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.097557 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 02:14:39.099443266 +0000 UTC Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.143632 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.143690 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.143708 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.143731 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.143749 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:45Z","lastTransitionTime":"2026-02-19T13:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.247663 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.247750 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.247775 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.247808 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.247834 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:45Z","lastTransitionTime":"2026-02-19T13:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.349751 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.349797 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.349808 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.349824 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.349836 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:45Z","lastTransitionTime":"2026-02-19T13:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.453315 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.453394 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.453412 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.453465 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.453487 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:45Z","lastTransitionTime":"2026-02-19T13:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.556853 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.556898 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.556907 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.556922 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.556931 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:45Z","lastTransitionTime":"2026-02-19T13:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.660547 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.660601 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.660616 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.660638 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.660653 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:45Z","lastTransitionTime":"2026-02-19T13:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.763827 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.763895 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.763911 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.763939 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.763958 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:45Z","lastTransitionTime":"2026-02-19T13:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.866981 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.867037 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.867049 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.867069 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.867082 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:45Z","lastTransitionTime":"2026-02-19T13:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.971465 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.972103 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.972127 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.972157 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.972181 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:45Z","lastTransitionTime":"2026-02-19T13:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.976398 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:10:45 crc kubenswrapper[4861]: E0219 13:10:45.976638 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:10:45 crc kubenswrapper[4861]: I0219 13:10:45.997727 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath
\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:45Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.010530 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:46Z is after 2025-08-24T17:21:41Z" Feb 19 
13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.026540 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"
mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:46Z is after 2025-08-24T17:21:41Z" Feb 19 
13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.040956 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de22a290-251d-48e9-95e8-f4dbebd04451\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd93ee70e80340ccdd5b6c229dcd78e86eccb137368addb41781d931ad54507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secret
s/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452bcb019fab43ee8af55567145d64a915f1a1cf4bcc71d38f84240689f16a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8rwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:46Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.056755 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:46Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.070949 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd90f8c7-a6a1-4fd9-be50-c8d386066ecd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374a70175dbefba8dcf2f5efbc5a822fc0c8889fb5ec0c634109b12cbac4111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5fd7534665e46f03e90d032f43b20b131d640035418116e9c561ea8290f83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e624f4e7dff2715925374ba4efcf3b4bf3f7cc626dfddceced300dc2392eaa92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://027ae849216b720e4a482b9ac7629550fa1d01be0bdf2757c160dac0643ac67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://027ae849216b720e4a482b9ac7629550fa1d01be0bdf2757c160dac0643ac67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:46Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.075214 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.075290 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.075316 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.075352 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.075374 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:46Z","lastTransitionTime":"2026-02-19T13:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.090927 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:46Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.098724 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 12:25:22.088402475 +0000 UTC Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.111846 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:46Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.131591 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d2156ebdf7cceb1e9cf743d86caecc86556efd63727f2291a294846a6c2863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27d2156ebdf7cceb1e9cf743d86caecc86556efd63727f2291a294846a6c2863\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"message\\\":\\\"e-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"63b1440a-0908-4cab-8799-012fa1cf0b07\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}, 
Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.168\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0219 13:10:37.866399 6541 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wb9bn_openshift-ovn-kubernetes(2b4f740d-a1ca-450f-adad-afb42efe0c76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa5
8e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:46Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.150007 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:46Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.163166 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:10:46Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.176681 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:46Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.179054 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 
13:10:46.179111 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.179130 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.179157 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.179182 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:46Z","lastTransitionTime":"2026-02-19T13:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.190913 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1177130c68cb37f6c2057bcefc307690fd3ee5cd4e5e1992044aad2a199ab83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb080
bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:46Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.200868 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:46Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.216015 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:46Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.227844 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:46Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.239069 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kjwt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"163fc0e2-f792-4062-88a7-3ed764a08103\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kjwt5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:46Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.281391 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.281505 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.281530 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.281561 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.281585 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:46Z","lastTransitionTime":"2026-02-19T13:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.385144 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.385214 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.385235 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.385263 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.385281 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:46Z","lastTransitionTime":"2026-02-19T13:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.487919 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.488003 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.488029 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.488061 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.488081 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:46Z","lastTransitionTime":"2026-02-19T13:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.590730 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.590788 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.590806 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.590831 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.590849 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:46Z","lastTransitionTime":"2026-02-19T13:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.693724 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.693764 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.693773 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.693786 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.693796 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:46Z","lastTransitionTime":"2026-02-19T13:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.795725 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.795770 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.795782 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.795798 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.795810 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:46Z","lastTransitionTime":"2026-02-19T13:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.899248 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.899328 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.899349 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.899379 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.899399 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:46Z","lastTransitionTime":"2026-02-19T13:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.976673 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.976695 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:46 crc kubenswrapper[4861]: E0219 13:10:46.976915 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:10:46 crc kubenswrapper[4861]: E0219 13:10:46.976961 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:10:46 crc kubenswrapper[4861]: I0219 13:10:46.976712 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:46 crc kubenswrapper[4861]: E0219 13:10:46.977033 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.002281 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.002339 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.002350 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.002371 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.002383 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:47Z","lastTransitionTime":"2026-02-19T13:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.099448 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 09:55:37.965927261 +0000 UTC Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.105345 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.105399 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.105456 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.105486 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.105509 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:47Z","lastTransitionTime":"2026-02-19T13:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.207592 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.207621 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.207631 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.207646 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.207655 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:47Z","lastTransitionTime":"2026-02-19T13:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.310278 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.310338 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.310361 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.310407 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.310463 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:47Z","lastTransitionTime":"2026-02-19T13:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.412365 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.412585 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.412618 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.412662 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.412684 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:47Z","lastTransitionTime":"2026-02-19T13:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.515859 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.515923 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.516046 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.516096 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.516113 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:47Z","lastTransitionTime":"2026-02-19T13:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.619337 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.619454 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.619474 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.619498 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.619515 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:47Z","lastTransitionTime":"2026-02-19T13:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.721971 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.722009 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.722023 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.722041 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.722051 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:47Z","lastTransitionTime":"2026-02-19T13:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.824892 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.824970 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.824993 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.825023 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.825045 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:47Z","lastTransitionTime":"2026-02-19T13:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.928456 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.928511 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.928526 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.928544 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.928557 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:47Z","lastTransitionTime":"2026-02-19T13:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:47 crc kubenswrapper[4861]: I0219 13:10:47.976590 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:10:47 crc kubenswrapper[4861]: E0219 13:10:47.976782 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.031708 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.031795 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.031820 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.031894 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.031920 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:48Z","lastTransitionTime":"2026-02-19T13:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.100671 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 06:23:21.046825046 +0000 UTC Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.136204 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.136285 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.136302 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.136330 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.136349 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:48Z","lastTransitionTime":"2026-02-19T13:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.238646 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.238686 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.238698 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.238714 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.238726 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:48Z","lastTransitionTime":"2026-02-19T13:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.340889 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.340959 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.340976 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.341002 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.341023 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:48Z","lastTransitionTime":"2026-02-19T13:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.446159 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.446232 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.446244 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.446266 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.446279 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:48Z","lastTransitionTime":"2026-02-19T13:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.549213 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.549341 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.549354 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.549370 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.549382 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:48Z","lastTransitionTime":"2026-02-19T13:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.652177 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.652228 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.652242 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.652263 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.652279 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:48Z","lastTransitionTime":"2026-02-19T13:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.754475 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.754519 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.754530 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.754546 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.754558 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:48Z","lastTransitionTime":"2026-02-19T13:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.857644 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.857704 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.857715 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.857911 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.857921 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:48Z","lastTransitionTime":"2026-02-19T13:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.961450 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.961538 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.961553 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.961573 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.961586 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:48Z","lastTransitionTime":"2026-02-19T13:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.976152 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.976274 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:48 crc kubenswrapper[4861]: E0219 13:10:48.976481 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:10:48 crc kubenswrapper[4861]: I0219 13:10:48.976540 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:48 crc kubenswrapper[4861]: E0219 13:10:48.976594 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:10:48 crc kubenswrapper[4861]: E0219 13:10:48.977131 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.064751 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.064799 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.064812 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.064832 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.064845 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:49Z","lastTransitionTime":"2026-02-19T13:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.101984 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 13:22:42.712316182 +0000 UTC Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.168158 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.168245 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.168272 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.168312 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.168339 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:49Z","lastTransitionTime":"2026-02-19T13:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.271667 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.271744 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.271783 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.271827 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.271853 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:49Z","lastTransitionTime":"2026-02-19T13:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.374403 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.374476 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.374487 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.374500 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.374509 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:49Z","lastTransitionTime":"2026-02-19T13:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.477384 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.477436 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.477447 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.477481 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.477492 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:49Z","lastTransitionTime":"2026-02-19T13:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.580050 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.580099 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.580138 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.580218 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.580235 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:49Z","lastTransitionTime":"2026-02-19T13:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.683281 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.683341 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.683356 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.683394 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.683408 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:49Z","lastTransitionTime":"2026-02-19T13:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.786017 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.786052 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.786061 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.786074 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.786083 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:49Z","lastTransitionTime":"2026-02-19T13:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.883091 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.883136 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.883145 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.883161 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.883169 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:49Z","lastTransitionTime":"2026-02-19T13:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:49 crc kubenswrapper[4861]: E0219 13:10:49.898030 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:49Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.903165 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.903192 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.903202 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.903214 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.903222 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:49Z","lastTransitionTime":"2026-02-19T13:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:49 crc kubenswrapper[4861]: E0219 13:10:49.922549 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:49Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.925810 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.925844 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.925864 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.925880 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.925891 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:49Z","lastTransitionTime":"2026-02-19T13:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:49 crc kubenswrapper[4861]: E0219 13:10:49.941811 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:49Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.945157 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.945196 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.945207 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.945224 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.945235 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:49Z","lastTransitionTime":"2026-02-19T13:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:49 crc kubenswrapper[4861]: E0219 13:10:49.958590 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:49Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.961928 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.961966 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.961975 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.961989 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.962000 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:49Z","lastTransitionTime":"2026-02-19T13:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:49 crc kubenswrapper[4861]: E0219 13:10:49.974026 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:49Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:49 crc kubenswrapper[4861]: E0219 13:10:49.974128 4861 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.975548 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.975601 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.975614 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.975632 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.975645 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:49Z","lastTransitionTime":"2026-02-19T13:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:49 crc kubenswrapper[4861]: I0219 13:10:49.975973 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:10:49 crc kubenswrapper[4861]: E0219 13:10:49.976063 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.078330 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.078579 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.078666 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.078740 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.078802 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:50Z","lastTransitionTime":"2026-02-19T13:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.102470 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 17:25:06.090239627 +0000 UTC Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.180693 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.180775 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.180787 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.180807 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.180816 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:50Z","lastTransitionTime":"2026-02-19T13:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.283723 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.283767 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.283783 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.283804 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.283817 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:50Z","lastTransitionTime":"2026-02-19T13:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.385494 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.385533 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.385541 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.385555 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.385563 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:50Z","lastTransitionTime":"2026-02-19T13:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.487455 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.487496 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.487506 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.487521 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.487531 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:50Z","lastTransitionTime":"2026-02-19T13:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.589879 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.590158 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.590241 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.590313 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.590371 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:50Z","lastTransitionTime":"2026-02-19T13:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.692607 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.692650 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.692662 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.692678 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.692689 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:50Z","lastTransitionTime":"2026-02-19T13:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.794347 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.794394 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.794407 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.794445 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.794459 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:50Z","lastTransitionTime":"2026-02-19T13:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.896766 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.896808 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.896822 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.896839 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.896851 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:50Z","lastTransitionTime":"2026-02-19T13:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.976479 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.976558 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:50 crc kubenswrapper[4861]: E0219 13:10:50.976608 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:10:50 crc kubenswrapper[4861]: E0219 13:10:50.976730 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.976558 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:50 crc kubenswrapper[4861]: E0219 13:10:50.976828 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.998722 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.998759 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.998768 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.998783 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:50 crc kubenswrapper[4861]: I0219 13:10:50.998794 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:50Z","lastTransitionTime":"2026-02-19T13:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.101201 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.101236 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.101248 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.101264 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.101275 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:51Z","lastTransitionTime":"2026-02-19T13:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.102551 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 04:49:33.852440917 +0000 UTC Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.203271 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.203311 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.203320 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.203334 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.203343 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:51Z","lastTransitionTime":"2026-02-19T13:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.306054 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.306088 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.306098 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.306114 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.306125 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:51Z","lastTransitionTime":"2026-02-19T13:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.408137 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.408173 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.408182 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.408199 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.408209 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:51Z","lastTransitionTime":"2026-02-19T13:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.510524 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.510589 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.510607 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.510630 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.510648 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:51Z","lastTransitionTime":"2026-02-19T13:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.613867 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.613905 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.613915 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.613929 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.613939 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:51Z","lastTransitionTime":"2026-02-19T13:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.716394 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.716455 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.716472 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.716491 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.716502 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:51Z","lastTransitionTime":"2026-02-19T13:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.818984 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.819063 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.819090 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.819122 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.819148 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:51Z","lastTransitionTime":"2026-02-19T13:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.921553 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.921615 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.921632 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.921655 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.921674 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:51Z","lastTransitionTime":"2026-02-19T13:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:51 crc kubenswrapper[4861]: I0219 13:10:51.976025 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:10:51 crc kubenswrapper[4861]: E0219 13:10:51.976170 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.024080 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.024111 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.024119 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.024131 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.024142 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:52Z","lastTransitionTime":"2026-02-19T13:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.102936 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 07:50:34.951767606 +0000 UTC Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.125680 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.125720 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.125739 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.125756 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.125768 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:52Z","lastTransitionTime":"2026-02-19T13:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.227580 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.227615 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.227629 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.227644 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.227654 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:52Z","lastTransitionTime":"2026-02-19T13:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.329183 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.329223 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.329249 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.329264 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.329274 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:52Z","lastTransitionTime":"2026-02-19T13:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.431692 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.431731 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.431742 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.431760 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.431773 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:52Z","lastTransitionTime":"2026-02-19T13:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.533502 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.533529 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.533537 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.533550 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.533558 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:52Z","lastTransitionTime":"2026-02-19T13:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.635290 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.635323 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.635333 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.635348 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.635358 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:52Z","lastTransitionTime":"2026-02-19T13:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.737799 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.737834 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.737842 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.737854 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.737862 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:52Z","lastTransitionTime":"2026-02-19T13:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.840336 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.840370 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.840378 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.840391 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.840400 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:52Z","lastTransitionTime":"2026-02-19T13:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.942834 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.942866 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.942874 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.942886 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.942899 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:52Z","lastTransitionTime":"2026-02-19T13:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.976278 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:52 crc kubenswrapper[4861]: E0219 13:10:52.976435 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.976466 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.976524 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:52 crc kubenswrapper[4861]: E0219 13:10:52.976932 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:10:52 crc kubenswrapper[4861]: E0219 13:10:52.977050 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:10:52 crc kubenswrapper[4861]: I0219 13:10:52.977190 4861 scope.go:117] "RemoveContainer" containerID="27d2156ebdf7cceb1e9cf743d86caecc86556efd63727f2291a294846a6c2863" Feb 19 13:10:52 crc kubenswrapper[4861]: E0219 13:10:52.977395 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-wb9bn_openshift-ovn-kubernetes(2b4f740d-a1ca-450f-adad-afb42efe0c76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.045367 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.045409 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.045437 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.045454 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.045465 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:53Z","lastTransitionTime":"2026-02-19T13:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.103291 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 11:37:03.581813216 +0000 UTC Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.147490 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.147527 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.147538 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.147553 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.147564 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:53Z","lastTransitionTime":"2026-02-19T13:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.250495 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.250559 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.250576 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.250599 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.250616 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:53Z","lastTransitionTime":"2026-02-19T13:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.355465 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.355545 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.355565 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.355584 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.355594 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:53Z","lastTransitionTime":"2026-02-19T13:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.459577 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.459623 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.459633 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.459648 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.459657 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:53Z","lastTransitionTime":"2026-02-19T13:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.563358 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.563406 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.563436 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.563457 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.563469 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:53Z","lastTransitionTime":"2026-02-19T13:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.666637 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.666697 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.666709 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.666732 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.666747 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:53Z","lastTransitionTime":"2026-02-19T13:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.769929 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.769992 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.770009 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.770034 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.770052 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:53Z","lastTransitionTime":"2026-02-19T13:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.873302 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.873354 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.873364 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.873381 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.873394 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:53Z","lastTransitionTime":"2026-02-19T13:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.975844 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.975884 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.975896 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.975911 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.975922 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:53Z","lastTransitionTime":"2026-02-19T13:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:53 crc kubenswrapper[4861]: I0219 13:10:53.976158 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:10:53 crc kubenswrapper[4861]: E0219 13:10:53.976394 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.078255 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.078328 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.078346 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.078382 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.078414 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:54Z","lastTransitionTime":"2026-02-19T13:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.103739 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 01:05:04.646383322 +0000 UTC Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.182134 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.182179 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.182190 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.182207 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.182218 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:54Z","lastTransitionTime":"2026-02-19T13:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.285173 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.285218 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.285230 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.285246 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.285255 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:54Z","lastTransitionTime":"2026-02-19T13:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.387605 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.387652 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.387662 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.387674 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.387683 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:54Z","lastTransitionTime":"2026-02-19T13:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.490697 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.490739 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.490748 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.490762 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.490771 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:54Z","lastTransitionTime":"2026-02-19T13:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.592717 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.592745 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.592753 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.592766 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.592775 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:54Z","lastTransitionTime":"2026-02-19T13:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.696080 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.696113 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.696123 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.696136 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.696145 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:54Z","lastTransitionTime":"2026-02-19T13:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.798190 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.798260 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.798278 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.798306 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.798323 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:54Z","lastTransitionTime":"2026-02-19T13:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.900611 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.900687 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.900704 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.900728 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.900746 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:54Z","lastTransitionTime":"2026-02-19T13:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.976260 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.976308 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:54 crc kubenswrapper[4861]: I0219 13:10:54.976274 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:54 crc kubenswrapper[4861]: E0219 13:10:54.976482 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:10:54 crc kubenswrapper[4861]: E0219 13:10:54.976607 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:10:54 crc kubenswrapper[4861]: E0219 13:10:54.976728 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.002963 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.002996 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.003005 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.003020 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.003029 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:55Z","lastTransitionTime":"2026-02-19T13:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.104224 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 05:41:43.488267364 +0000 UTC Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.105243 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.105306 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.105331 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.105361 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.105384 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:55Z","lastTransitionTime":"2026-02-19T13:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.208023 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.208067 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.208078 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.208096 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.208107 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:55Z","lastTransitionTime":"2026-02-19T13:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.309719 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.309780 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.309797 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.309821 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.309838 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:55Z","lastTransitionTime":"2026-02-19T13:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.411647 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.411683 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.411691 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.411704 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.411713 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:55Z","lastTransitionTime":"2026-02-19T13:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.514263 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.514305 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.514318 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.514334 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.514345 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:55Z","lastTransitionTime":"2026-02-19T13:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.618720 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.618795 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.618813 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.618838 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.618855 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:55Z","lastTransitionTime":"2026-02-19T13:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.721181 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.721234 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.721245 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.721269 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.721280 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:55Z","lastTransitionTime":"2026-02-19T13:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.828229 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.828286 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.828296 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.828311 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.828321 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:55Z","lastTransitionTime":"2026-02-19T13:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.931370 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.931410 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.931433 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.931448 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.931456 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:55Z","lastTransitionTime":"2026-02-19T13:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.976771 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:10:55 crc kubenswrapper[4861]: E0219 13:10:55.977122 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.990717 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath
\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:55Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:55 crc kubenswrapper[4861]: I0219 13:10:55.991159 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.006259 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:56Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.030837 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:56Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.040732 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:56 crc 
kubenswrapper[4861]: I0219 13:10:56.040826 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.040846 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.040868 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.040883 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:56Z","lastTransitionTime":"2026-02-19T13:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.054980 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:56Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.092653 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd90f8c7-a6a1-4fd9-be50-c8d386066ecd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374a70175dbefba8dcf2f5efbc5a822fc0c8889fb5ec0c634109b12cbac4111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5fd7534665e46f03e90d032f43b20b131d640035418116e9c561ea8290f83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e624f4e7dff2715925374ba4efcf3b4bf3f7cc626dfddceced300dc2392eaa92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://027ae849216b720e4a482b9ac7629550fa1d01be0bdf2757c160dac0643ac67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://027ae849216b720e4a482b9ac7629550fa1d01be0bdf2757c160dac0643ac67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:56Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.104310 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 21:14:53.081160996 +0000 UTC Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.112495 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:56Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.126513 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:56Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.143120 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.143161 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.143170 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.143185 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.143195 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:56Z","lastTransitionTime":"2026-02-19T13:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.144489 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d2156ebdf7cceb1e9cf743d86caecc86556efd63727f2291a294846a6c2863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27d2156ebdf7cceb1e9cf743d86caecc86556efd63727f2291a294846a6c2863\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"message\\\":\\\"e-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"63b1440a-0908-4cab-8799-012fa1cf0b07\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}, 
Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.168\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0219 13:10:37.866399 6541 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wb9bn_openshift-ovn-kubernetes(2b4f740d-a1ca-450f-adad-afb42efe0c76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa5
8e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:56Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.156952 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de22a290-251d-48e9-95e8-f4dbebd04451\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd93ee70e80340ccdd5b6c229dcd78e86eccb137368addb41781d931ad54507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452bcb019fab43ee8af55567145d64a915f1a
1cf4bcc71d38f84240689f16a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8rwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:56Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.168678 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:56Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.179835 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:10:56Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.190786 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:56Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.208697 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1177130c68cb37f6c2057bcefc307690fd3ee5cd4e5e1992044aad2a199ab83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb080
bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:56Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.217467 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:56Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.228985 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:56Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.237743 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:56Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.245480 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.245521 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.245532 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.245549 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.245562 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:56Z","lastTransitionTime":"2026-02-19T13:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.248445 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kjwt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"163fc0e2-f792-4062-88a7-3ed764a08103\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kjwt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:10:56Z is after 2025-08-24T17:21:41Z" Feb 19 13:10:56 crc 
kubenswrapper[4861]: I0219 13:10:56.347007 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.347041 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.347051 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.347063 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.347071 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:56Z","lastTransitionTime":"2026-02-19T13:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.449725 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.449774 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.449790 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.449809 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.449819 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:56Z","lastTransitionTime":"2026-02-19T13:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.551916 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.551958 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.551969 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.551985 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.551997 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:56Z","lastTransitionTime":"2026-02-19T13:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.655066 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.655108 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.655119 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.655134 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.655149 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:56Z","lastTransitionTime":"2026-02-19T13:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.756728 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.756775 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.756785 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.756799 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.756809 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:56Z","lastTransitionTime":"2026-02-19T13:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.859136 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.859175 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.859185 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.859200 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.859209 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:56Z","lastTransitionTime":"2026-02-19T13:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.961214 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.961279 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.961296 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.961321 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.961340 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:56Z","lastTransitionTime":"2026-02-19T13:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.976017 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.976097 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:56 crc kubenswrapper[4861]: E0219 13:10:56.976147 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:10:56 crc kubenswrapper[4861]: I0219 13:10:56.976214 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:56 crc kubenswrapper[4861]: E0219 13:10:56.976470 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:10:56 crc kubenswrapper[4861]: E0219 13:10:56.976553 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.063752 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.063836 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.063847 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.063863 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.063873 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:57Z","lastTransitionTime":"2026-02-19T13:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.105050 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 11:20:35.438437077 +0000 UTC Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.165667 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.165740 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.165761 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.165783 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.165800 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:57Z","lastTransitionTime":"2026-02-19T13:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.267477 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.267519 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.267529 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.267542 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.267550 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:57Z","lastTransitionTime":"2026-02-19T13:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.370296 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.370363 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.370387 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.370417 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.370471 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:57Z","lastTransitionTime":"2026-02-19T13:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.473014 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.473060 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.473071 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.473085 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.473097 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:57Z","lastTransitionTime":"2026-02-19T13:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.575130 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.575166 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.575178 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.575192 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.575202 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:57Z","lastTransitionTime":"2026-02-19T13:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.677351 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.677387 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.677395 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.677409 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.677433 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:57Z","lastTransitionTime":"2026-02-19T13:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.779761 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.780018 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.780088 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.780155 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.780225 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:57Z","lastTransitionTime":"2026-02-19T13:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.882780 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.882817 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.882829 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.882842 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.882851 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:57Z","lastTransitionTime":"2026-02-19T13:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.977077 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:10:57 crc kubenswrapper[4861]: E0219 13:10:57.977230 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.985128 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.985226 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.985323 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.985394 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:57 crc kubenswrapper[4861]: I0219 13:10:57.985489 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:57Z","lastTransitionTime":"2026-02-19T13:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.088718 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.088767 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.088780 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.088796 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.088810 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:58Z","lastTransitionTime":"2026-02-19T13:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.105893 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 19:39:25.985752133 +0000 UTC Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.191404 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.191460 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.191471 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.191487 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.191497 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:58Z","lastTransitionTime":"2026-02-19T13:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.293929 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.294187 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.294333 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.294412 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.294505 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:58Z","lastTransitionTime":"2026-02-19T13:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.396538 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.396606 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.396619 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.396635 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.396651 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:58Z","lastTransitionTime":"2026-02-19T13:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.499208 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.499268 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.499283 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.499299 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.499313 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:58Z","lastTransitionTime":"2026-02-19T13:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.601786 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.601830 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.601841 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.601887 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.601901 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:58Z","lastTransitionTime":"2026-02-19T13:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.704871 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.705154 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.705327 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.705497 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.705628 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:58Z","lastTransitionTime":"2026-02-19T13:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.807820 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.807871 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.807887 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.807906 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.807918 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:58Z","lastTransitionTime":"2026-02-19T13:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.910901 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.910935 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.910945 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.910960 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.910968 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:58Z","lastTransitionTime":"2026-02-19T13:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.976706 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.976749 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:10:58 crc kubenswrapper[4861]: E0219 13:10:58.976852 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:10:58 crc kubenswrapper[4861]: I0219 13:10:58.976749 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:10:58 crc kubenswrapper[4861]: E0219 13:10:58.976991 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:10:58 crc kubenswrapper[4861]: E0219 13:10:58.976912 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.014395 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.014459 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.014472 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.014491 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.014503 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:59Z","lastTransitionTime":"2026-02-19T13:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.106494 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 02:57:19.84886903 +0000 UTC Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.117466 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.117504 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.117512 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.117525 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.117534 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:59Z","lastTransitionTime":"2026-02-19T13:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.220632 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.220706 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.220725 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.220747 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.220764 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:59Z","lastTransitionTime":"2026-02-19T13:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.323927 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.323961 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.323969 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.323982 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.323993 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:59Z","lastTransitionTime":"2026-02-19T13:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.426529 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.427025 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.427089 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.427151 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.427216 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:59Z","lastTransitionTime":"2026-02-19T13:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.530305 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.530374 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.530395 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.530467 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.530492 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:59Z","lastTransitionTime":"2026-02-19T13:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.633161 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.633396 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.633482 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.633562 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.633625 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:59Z","lastTransitionTime":"2026-02-19T13:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.736265 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.736293 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.736301 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.736316 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.736326 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:59Z","lastTransitionTime":"2026-02-19T13:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.770326 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/163fc0e2-f792-4062-88a7-3ed764a08103-metrics-certs\") pod \"network-metrics-daemon-kjwt5\" (UID: \"163fc0e2-f792-4062-88a7-3ed764a08103\") " pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:10:59 crc kubenswrapper[4861]: E0219 13:10:59.770479 4861 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 13:10:59 crc kubenswrapper[4861]: E0219 13:10:59.770533 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/163fc0e2-f792-4062-88a7-3ed764a08103-metrics-certs podName:163fc0e2-f792-4062-88a7-3ed764a08103 nodeName:}" failed. No retries permitted until 2026-02-19 13:11:31.770517883 +0000 UTC m=+106.431621111 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/163fc0e2-f792-4062-88a7-3ed764a08103-metrics-certs") pod "network-metrics-daemon-kjwt5" (UID: "163fc0e2-f792-4062-88a7-3ed764a08103") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.838249 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.838289 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.838298 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.838314 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.838323 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:59Z","lastTransitionTime":"2026-02-19T13:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.941208 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.941281 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.941305 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.941335 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.941360 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:10:59Z","lastTransitionTime":"2026-02-19T13:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:10:59 crc kubenswrapper[4861]: I0219 13:10:59.979554 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:10:59 crc kubenswrapper[4861]: E0219 13:10:59.979683 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.044172 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.044206 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.044217 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.044233 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.044242 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:00Z","lastTransitionTime":"2026-02-19T13:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.107023 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 02:57:41.251745643 +0000 UTC Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.147268 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.147312 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.147328 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.147351 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.147369 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:00Z","lastTransitionTime":"2026-02-19T13:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.250275 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.250306 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.250317 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.250332 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.250341 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:00Z","lastTransitionTime":"2026-02-19T13:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.352631 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.352659 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.352666 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.352678 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.352687 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:00Z","lastTransitionTime":"2026-02-19T13:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.460170 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.460278 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.460288 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.460301 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.460323 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:00Z","lastTransitionTime":"2026-02-19T13:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.461865 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.461913 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.461924 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.461943 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.461958 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:00Z","lastTransitionTime":"2026-02-19T13:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:00 crc kubenswrapper[4861]: E0219 13:11:00.478875 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:00Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.483734 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.483785 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.483826 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.483844 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.483855 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:00Z","lastTransitionTime":"2026-02-19T13:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:00 crc kubenswrapper[4861]: E0219 13:11:00.499408 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:00Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.503835 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.503865 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.503874 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.503887 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.503896 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:00Z","lastTransitionTime":"2026-02-19T13:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:00 crc kubenswrapper[4861]: E0219 13:11:00.521845 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{... status patch payload identical to the 13:11:00.499408 attempt above ...}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:00Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.527566 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.527623 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.527635 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.527650 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.527658 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:00Z","lastTransitionTime":"2026-02-19T13:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:00 crc kubenswrapper[4861]: E0219 13:11:00.546692 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{... status patch payload identical to the 13:11:00.499408 attempt above ...}\" for node 
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:00Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.551221 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.551294 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.551305 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.551339 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.551350 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:00Z","lastTransitionTime":"2026-02-19T13:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:00 crc kubenswrapper[4861]: E0219 13:11:00.569950 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:00Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:00 crc kubenswrapper[4861]: E0219 13:11:00.570138 4861 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.575950 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.575991 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.576004 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.576039 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.576051 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:00Z","lastTransitionTime":"2026-02-19T13:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.679520 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.679564 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.679572 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.679603 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.679616 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:00Z","lastTransitionTime":"2026-02-19T13:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.782631 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.782667 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.782700 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.782716 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.782726 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:00Z","lastTransitionTime":"2026-02-19T13:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.885573 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.885685 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.885697 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.885714 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.885725 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:00Z","lastTransitionTime":"2026-02-19T13:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.976827 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:11:00 crc kubenswrapper[4861]: E0219 13:11:00.977015 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.977045 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.977099 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:11:00 crc kubenswrapper[4861]: E0219 13:11:00.977207 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:11:00 crc kubenswrapper[4861]: E0219 13:11:00.977395 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.988698 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.988734 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.988747 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.988761 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:00 crc kubenswrapper[4861]: I0219 13:11:00.988769 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:00Z","lastTransitionTime":"2026-02-19T13:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.091840 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.091894 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.091902 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.091916 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.091941 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:01Z","lastTransitionTime":"2026-02-19T13:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.108000 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 13:51:44.112377288 +0000 UTC Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.193973 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.194007 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.194017 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.194029 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.194039 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:01Z","lastTransitionTime":"2026-02-19T13:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.296439 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.296494 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.296510 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.296527 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.296538 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:01Z","lastTransitionTime":"2026-02-19T13:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.398208 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.398238 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.398246 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.398259 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.398268 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:01Z","lastTransitionTime":"2026-02-19T13:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.500748 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.500794 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.500805 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.500822 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.500834 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:01Z","lastTransitionTime":"2026-02-19T13:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.603187 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.603246 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.603263 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.603285 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.603300 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:01Z","lastTransitionTime":"2026-02-19T13:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.705190 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.705235 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.705247 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.705263 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.705276 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:01Z","lastTransitionTime":"2026-02-19T13:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.807623 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.807665 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.807681 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.807697 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.807707 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:01Z","lastTransitionTime":"2026-02-19T13:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.910175 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.910234 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.910246 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.910266 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.910281 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:01Z","lastTransitionTime":"2026-02-19T13:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:01 crc kubenswrapper[4861]: I0219 13:11:01.976804 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:11:01 crc kubenswrapper[4861]: E0219 13:11:01.976946 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.013119 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.013164 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.013175 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.013194 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.013207 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:02Z","lastTransitionTime":"2026-02-19T13:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.108618 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 00:32:42.898834827 +0000 UTC Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.115928 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.115966 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.115978 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.115995 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.116009 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:02Z","lastTransitionTime":"2026-02-19T13:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.218755 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.218795 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.218807 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.218822 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.218831 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:02Z","lastTransitionTime":"2026-02-19T13:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.321337 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.321389 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.321402 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.321429 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.321439 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:02Z","lastTransitionTime":"2026-02-19T13:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.423357 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.423396 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.423452 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.423470 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.423483 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:02Z","lastTransitionTime":"2026-02-19T13:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.526148 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.526194 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.526210 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.526228 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.526242 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:02Z","lastTransitionTime":"2026-02-19T13:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.628516 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.628772 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.628782 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.628795 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.628804 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:02Z","lastTransitionTime":"2026-02-19T13:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.731208 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.731249 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.731259 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.731277 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.731288 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:02Z","lastTransitionTime":"2026-02-19T13:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.833840 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.833886 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.833898 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.833915 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.833927 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:02Z","lastTransitionTime":"2026-02-19T13:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.936057 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.936091 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.936099 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.936111 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.936120 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:02Z","lastTransitionTime":"2026-02-19T13:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.976492 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:11:02 crc kubenswrapper[4861]: E0219 13:11:02.976595 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.976766 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:11:02 crc kubenswrapper[4861]: E0219 13:11:02.976811 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:11:02 crc kubenswrapper[4861]: I0219 13:11:02.976907 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:11:02 crc kubenswrapper[4861]: E0219 13:11:02.976954 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.038811 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.038875 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.038893 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.038920 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.038936 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:03Z","lastTransitionTime":"2026-02-19T13:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.109215 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 22:27:47.315489598 +0000 UTC Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.141406 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.141456 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.141467 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.141482 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.141493 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:03Z","lastTransitionTime":"2026-02-19T13:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.243887 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.243944 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.243962 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.243985 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.244098 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:03Z","lastTransitionTime":"2026-02-19T13:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.346560 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.346599 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.346609 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.346626 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.346638 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:03Z","lastTransitionTime":"2026-02-19T13:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.448908 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.448947 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.448955 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.448970 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.448981 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:03Z","lastTransitionTime":"2026-02-19T13:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.470405 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ffskh_1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb/kube-multus/0.log" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.470491 4861 generic.go:334] "Generic (PLEG): container finished" podID="1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb" containerID="2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b" exitCode=1 Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.470526 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ffskh" event={"ID":"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb","Type":"ContainerDied","Data":"2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b"} Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.470933 4861 scope.go:117] "RemoveContainer" containerID="2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.485749 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:11:02Z\\\",\\\"message\\\":\\\"2026-02-19T13:10:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_33fcfffd-6872-41b0-8cfe-02db9577098b\\\\n2026-02-19T13:10:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_33fcfffd-6872-41b0-8cfe-02db9577098b to /host/opt/cni/bin/\\\\n2026-02-19T13:10:17Z [verbose] multus-daemon started\\\\n2026-02-19T13:10:17Z [verbose] Readiness Indicator file check\\\\n2026-02-19T13:11:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:03Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.497930 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"602146d1-e108-4bc5-b154-7d97a9d8c467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ac5f751756070f5a790893b4fd9f7d717ed9af254250ac73fd48f8f9d790fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c61c4908f4b667bbb8768512bb757c2500ff16e55e1106f4159a0a6565845bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c61c4908f4b667bbb8768512bb757c2500ff16e55e1106f4159a0a6565845bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:03Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.510353 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd
25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:03Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.523583 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:03Z is after 2025-08-24T17:21:41Z" Feb 19 
13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.536199 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:03Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.550921 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.551161 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.551176 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.551194 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.551207 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:03Z","lastTransitionTime":"2026-02-19T13:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.554798 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d2156ebdf7cceb1e9cf743d86caecc86556efd63727f2291a294846a6c2863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27d2156ebdf7cceb1e9cf743d86caecc86556efd63727f2291a294846a6c2863\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"message\\\":\\\"e-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"63b1440a-0908-4cab-8799-012fa1cf0b07\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}, 
Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.168\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0219 13:10:37.866399 6541 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wb9bn_openshift-ovn-kubernetes(2b4f740d-a1ca-450f-adad-afb42efe0c76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa5
8e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:03Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.567337 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de22a290-251d-48e9-95e8-f4dbebd04451\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd93ee70e80340ccdd5b6c229dcd78e86eccb137368addb41781d931ad54507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452bcb019fab43ee8af55567145d64a915f1a
1cf4bcc71d38f84240689f16a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8rwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:03Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.580333 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:03Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.591144 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd90f8c7-a6a1-4fd9-be50-c8d386066ecd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374a70175dbefba8dcf2f5efbc5a822fc0c8889fb5ec0c634109b12cbac4111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5fd7534665e46f03e90d032f43b20b131d640035418116e9c561ea8290f83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e624f4e7dff2715925374ba4efcf3b4bf3f7cc626dfddceced300dc2392eaa92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://027ae849216b720e4a482b9ac7629550fa1d01be0bdf2757c160dac0643ac67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://027ae849216b720e4a482b9ac7629550fa1d01be0bdf2757c160dac0643ac67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:03Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.601561 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:03Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.613292 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1177130c68cb37f6c2057bcefc307690fd3ee5cd4e5e1992044aad2a199ab83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb080
bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:03Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.621960 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:03Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.633269 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:03Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.647541 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:11:03Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.653452 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.653475 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.653485 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.653500 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.653511 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:03Z","lastTransitionTime":"2026-02-19T13:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.659548 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:03Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.669609 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kjwt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"163fc0e2-f792-4062-88a7-3ed764a08103\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kjwt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:03Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:03 crc 
kubenswrapper[4861]: I0219 13:11:03.682550 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:03Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.691307 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:03Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.755694 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.755734 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.755744 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.755760 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.755771 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:03Z","lastTransitionTime":"2026-02-19T13:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.858319 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.858377 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.858392 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.858411 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.858446 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:03Z","lastTransitionTime":"2026-02-19T13:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.960466 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.960579 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.960589 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.960603 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.960612 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:03Z","lastTransitionTime":"2026-02-19T13:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:03 crc kubenswrapper[4861]: I0219 13:11:03.976921 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:11:03 crc kubenswrapper[4861]: E0219 13:11:03.977162 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.064391 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.064449 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.064460 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.064475 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.064489 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:04Z","lastTransitionTime":"2026-02-19T13:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.110342 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 19:14:03.629782618 +0000 UTC Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.167127 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.167171 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.167181 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.167198 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.167209 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:04Z","lastTransitionTime":"2026-02-19T13:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.269734 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.269790 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.269805 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.269825 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.269839 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:04Z","lastTransitionTime":"2026-02-19T13:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.371943 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.371995 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.372003 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.372019 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.372029 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:04Z","lastTransitionTime":"2026-02-19T13:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.474918 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.474976 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.474993 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.475016 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.475032 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:04Z","lastTransitionTime":"2026-02-19T13:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.475741 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ffskh_1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb/kube-multus/0.log" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.475809 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ffskh" event={"ID":"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb","Type":"ContainerStarted","Data":"4cfa2d69da9a7c9b0d96ca3c642a204c152a426e9aba16467134005fe25c907a"} Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.488133 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:04Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.502069 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:11:04Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.513772 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:04Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.530156 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1177130c68cb37f6c2057bcefc307690fd3ee5cd4e5e1992044aad2a199ab83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb080
bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:04Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.543004 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:04Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.555639 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:04Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.565780 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:04Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.576147 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kjwt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"163fc0e2-f792-4062-88a7-3ed764a08103\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kjwt5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:04Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.577453 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.577483 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.577496 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.577512 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.577524 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:04Z","lastTransitionTime":"2026-02-19T13:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.585522 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"602146d1-e108-4bc5-b154-7d97a9d8c467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ac5f751756070f5a790893b4fd9f7d717ed9af254250ac73fd48f8f9d790fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c61c4908f4b667bbb8768512bb757c2500ff16e55e1106f4159a0a6565845bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c61c4908f4b667bbb8768512bb757c2500ff16e55e1106f4159a0a6565845bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:04Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.601539 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\"
,\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:04Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.612622 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:04Z is after 2025-08-24T17:21:41Z" Feb 19 
13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.627256 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa2d69da9a7c9b0d96ca3c642a204c152a426e9aba16467134005fe25c907a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:11:02Z\\\",\\\"message\\\":\\\"2026-02-19T13:10:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_33fcfffd-6872-41b0-8cfe-02db9577098b\\\\n2026-02-19T13:10:17+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_33fcfffd-6872-41b0-8cfe-02db9577098b to /host/opt/cni/bin/\\\\n2026-02-19T13:10:17Z [verbose] multus-daemon started\\\\n2026-02-19T13:10:17Z [verbose] Readiness Indicator file check\\\\n2026-02-19T13:11:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\
\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:04Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.639768 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:04Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.651499 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd90f8c7-a6a1-4fd9-be50-c8d386066ecd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374a70175dbefba8dcf2f5efbc5a822fc0c8889fb5ec0c634109b12cbac4111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5fd7534665e46f03e90d032f43b20b131d640035418116e9c561ea8290f83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e624f4e7dff2715925374ba4efcf3b4bf3f7cc626dfddceced300dc2392eaa92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://027ae849216b720e4a482b9ac7629550fa1d01be0bdf2757c160dac0643ac67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://027ae849216b720e4a482b9ac7629550fa1d01be0bdf2757c160dac0643ac67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:04Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.664587 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:04Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.677306 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:04Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.679603 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.679634 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.679644 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.679660 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.679671 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:04Z","lastTransitionTime":"2026-02-19T13:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.698106 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d2156ebdf7cceb1e9cf743d86caecc86556efd63727f2291a294846a6c2863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27d2156ebdf7cceb1e9cf743d86caecc86556efd63727f2291a294846a6c2863\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"message\\\":\\\"e-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"63b1440a-0908-4cab-8799-012fa1cf0b07\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}, 
Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.168\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0219 13:10:37.866399 6541 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wb9bn_openshift-ovn-kubernetes(2b4f740d-a1ca-450f-adad-afb42efe0c76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa5
8e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:04Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.708208 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de22a290-251d-48e9-95e8-f4dbebd04451\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd93ee70e80340ccdd5b6c229dcd78e86eccb137368addb41781d931ad54507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452bcb019fab43ee8af55567145d64a915f1a
1cf4bcc71d38f84240689f16a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8rwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:04Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.781664 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.781709 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.781720 4861 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.781737 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.781749 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:04Z","lastTransitionTime":"2026-02-19T13:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.883289 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.883326 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.883338 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.883354 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.883365 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:04Z","lastTransitionTime":"2026-02-19T13:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.976713 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.976732 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:11:04 crc kubenswrapper[4861]: E0219 13:11:04.976840 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.976728 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:11:04 crc kubenswrapper[4861]: E0219 13:11:04.977074 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:11:04 crc kubenswrapper[4861]: E0219 13:11:04.977266 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.986015 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.986064 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.986084 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.986114 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:04 crc kubenswrapper[4861]: I0219 13:11:04.986137 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:04Z","lastTransitionTime":"2026-02-19T13:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.088671 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.088717 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.088727 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.088742 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.088754 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:05Z","lastTransitionTime":"2026-02-19T13:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.111075 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 15:21:40.876602249 +0000 UTC Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.191108 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.191161 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.191177 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.191200 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.191217 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:05Z","lastTransitionTime":"2026-02-19T13:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.293823 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.293860 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.293872 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.293915 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.293929 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:05Z","lastTransitionTime":"2026-02-19T13:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.396046 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.396104 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.396114 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.396129 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.396138 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:05Z","lastTransitionTime":"2026-02-19T13:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.502332 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.502393 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.502409 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.502457 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.502484 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:05Z","lastTransitionTime":"2026-02-19T13:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.605262 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.605353 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.605377 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.605406 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.605475 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:05Z","lastTransitionTime":"2026-02-19T13:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.708864 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.708908 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.708919 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.708935 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.708948 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:05Z","lastTransitionTime":"2026-02-19T13:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.810953 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.811008 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.811019 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.811036 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.811045 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:05Z","lastTransitionTime":"2026-02-19T13:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.914095 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.914156 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.914189 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.914219 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.914239 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:05Z","lastTransitionTime":"2026-02-19T13:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.977039 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:11:05 crc kubenswrapper[4861]: E0219 13:11:05.977189 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.978677 4861 scope.go:117] "RemoveContainer" containerID="27d2156ebdf7cceb1e9cf743d86caecc86556efd63727f2291a294846a6c2863" Feb 19 13:11:05 crc kubenswrapper[4861]: I0219 13:11:05.989894 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\
\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:05Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.002714 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb2767
03f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:05Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.012950 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kjwt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"163fc0e2-f792-4062-88a7-3ed764a08103\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kjwt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc 
kubenswrapper[4861]: I0219 13:11:06.016648 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.016702 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.016719 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.016743 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.016771 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:06Z","lastTransitionTime":"2026-02-19T13:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.028610 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.042988 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 
13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.055674 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa2d69da9a7c9b0d96ca3c642a204c152a426e9aba16467134005fe25c907a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:11:02Z\\\",\\\"message\\\":\\\"2026-02-19T13:10:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_33fcfffd-6872-41b0-8cfe-02db9577098b\\\\n2026-02-19T13:10:17+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_33fcfffd-6872-41b0-8cfe-02db9577098b to /host/opt/cni/bin/\\\\n2026-02-19T13:10:17Z [verbose] multus-daemon started\\\\n2026-02-19T13:10:17Z [verbose] Readiness Indicator file check\\\\n2026-02-19T13:11:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\
\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.074173 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"602146d1-e108-4bc5-b154-7d97a9d8c467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ac5f751756070f5a790893b4fd9f7d717ed9af254250ac73fd48f8f9d790fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c61c4908f4b667bbb8768512bb757c2500ff16e55e1106f4159a0a6565845bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c61c4908f4b667bbb8768512bb757c2500ff16e55e1106f4159a0a6565845bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.090445 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd90f8c7-a6a1-4fd9-be50-c8d386066ecd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374a70175dbefba8dcf2f5efbc5a822fc0c8889fb5ec0c634109b12cbac4111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5fd7534665e46f03e90d032f43b20b131d640035418116e9c561ea8290f83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e624f4e7dff2715925374ba4efcf3b4bf3f7cc626dfddceced300dc2392eaa92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://027ae849216b720e4a482b9ac7629550fa1d01be0bdf2757c160dac0643ac67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://027ae849216b720e4a482b9ac7629550fa1d01be0bdf2757c160dac0643ac67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.102700 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.111187 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 19:46:55.316491019 +0000 UTC Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.119333 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.119404 4861 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.119451 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.119475 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.119492 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:06Z","lastTransitionTime":"2026-02-19T13:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.120496 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.144442 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d2156ebdf7cceb1e9cf743d86caecc86556efd63727f2291a294846a6c2863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27d2156ebdf7cceb1e9cf743d86caecc86556efd63727f2291a294846a6c2863\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"message\\\":\\\"e-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"63b1440a-0908-4cab-8799-012fa1cf0b07\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}, 
Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.168\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0219 13:10:37.866399 6541 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wb9bn_openshift-ovn-kubernetes(2b4f740d-a1ca-450f-adad-afb42efe0c76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa5
8e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.156499 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de22a290-251d-48e9-95e8-f4dbebd04451\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd93ee70e80340ccdd5b6c229dcd78e86eccb137368addb41781d931ad54507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452bcb019fab43ee8af55567145d64a915f1a
1cf4bcc71d38f84240689f16a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8rwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.169977 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.186502 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.198870 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.213516 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1177130c68cb37f6c2057bcefc307690fd3ee5cd4e5e1992044aad2a199ab83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb080
bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.224118 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.225262 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.225310 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.225322 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.225339 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.225351 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:06Z","lastTransitionTime":"2026-02-19T13:11:06Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.239899 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.327377 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.327410 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.327433 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.327446 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.327455 4861 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:06Z","lastTransitionTime":"2026-02-19T13:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.429130 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.429181 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.429193 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.429228 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.429241 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:06Z","lastTransitionTime":"2026-02-19T13:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.485287 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wb9bn_2b4f740d-a1ca-450f-adad-afb42efe0c76/ovnkube-controller/2.log" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.488449 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" event={"ID":"2b4f740d-a1ca-450f-adad-afb42efe0c76","Type":"ContainerStarted","Data":"eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512"} Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.489249 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.502516 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa2d69da9a7c9b0d96ca3c642a204c152a426e
9aba16467134005fe25c907a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:11:02Z\\\",\\\"message\\\":\\\"2026-02-19T13:10:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_33fcfffd-6872-41b0-8cfe-02db9577098b\\\\n2026-02-19T13:10:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_33fcfffd-6872-41b0-8cfe-02db9577098b to /host/opt/cni/bin/\\\\n2026-02-19T13:10:17Z [verbose] multus-daemon started\\\\n2026-02-19T13:10:17Z [verbose] Readiness Indicator file check\\\\n2026-02-19T13:11:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.518797 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"602146d1-e108-4bc5-b154-7d97a9d8c467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ac5f751756070f5a790893b4fd9f7d717ed9af254250ac73fd48f8f9d790fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242
b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c61c4908f4b667bbb8768512bb757c2500ff16e55e1106f4159a0a6565845bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c61c4908f4b667bbb8768512bb757c2500ff16e55e1106f4159a0a6565845bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 
2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.531201 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.531226 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.531233 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.531245 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.531253 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:06Z","lastTransitionTime":"2026-02-19T13:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.569568 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\",\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.590452 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 
13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.605404 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.622543 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27d2156ebdf7cceb1e9cf743d86caecc86556efd63727f2291a294846a6c2863\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"message\\\":\\\"e-manager/olm-operator-metrics_TCP_cluster\\\\\\\", 
UUID:\\\\\\\"63b1440a-0908-4cab-8799-012fa1cf0b07\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.168\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0219 13:10:37.866399 6541 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.633812 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.633855 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.633868 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.633885 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.633895 4861 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:06Z","lastTransitionTime":"2026-02-19T13:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.635173 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de22a290-251d-48e9-95e8-f4dbebd04451\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd93ee70e80340ccdd5b6c229dcd78e86eccb137368addb41781d931ad54507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452bcb019fab43ee8af55567145d64a915f1a1cf4bcc71d38f84240689f16a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8rwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.648782 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.657493 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd90f8c7-a6a1-4fd9-be50-c8d386066ecd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374a70175dbefba8dcf2f5efbc5a822fc0c8889fb5ec0c634109b12cbac4111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5fd7534665e46f03e90d032f43b20b131d640035418116e9c561ea8290f83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e624f4e7dff2715925374ba4efcf3b4bf3f7cc626dfddceced300dc2392eaa92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://027ae849216b720e4a482b9ac7629550fa1d01be0bdf2757c160dac0643ac67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://027ae849216b720e4a482b9ac7629550fa1d01be0bdf2757c160dac0643ac67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.667015 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.679958 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1177130c68cb37f6c2057bcefc307690fd3ee5cd4e5e1992044aad2a199ab83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb080
bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.690676 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.705313 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.718778 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.729155 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.735791 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 
13:11:06.735828 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.735838 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.735852 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.735863 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:06Z","lastTransitionTime":"2026-02-19T13:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.738979 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kjwt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"163fc0e2-f792-4062-88a7-3ed764a08103\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kjwt5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.751134 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.759251 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:06Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.838702 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.838735 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.838744 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.838759 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.838767 
4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:06Z","lastTransitionTime":"2026-02-19T13:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.940602 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.940634 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.940642 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.940654 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.940663 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:06Z","lastTransitionTime":"2026-02-19T13:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.976364 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:11:06 crc kubenswrapper[4861]: E0219 13:11:06.976560 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.976959 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:11:06 crc kubenswrapper[4861]: E0219 13:11:06.977280 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.977414 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:11:06 crc kubenswrapper[4861]: E0219 13:11:06.977609 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:11:06 crc kubenswrapper[4861]: I0219 13:11:06.993086 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.043504 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.043590 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.043612 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.043635 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.043653 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:07Z","lastTransitionTime":"2026-02-19T13:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.111523 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 16:31:45.577827408 +0000 UTC Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.146322 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.146401 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.146522 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.146562 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.146585 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:07Z","lastTransitionTime":"2026-02-19T13:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.251412 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.251486 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.251503 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.251562 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.251581 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:07Z","lastTransitionTime":"2026-02-19T13:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.354436 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.354508 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.354518 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.354531 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.354542 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:07Z","lastTransitionTime":"2026-02-19T13:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.456514 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.456551 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.456560 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.456575 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.456586 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:07Z","lastTransitionTime":"2026-02-19T13:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.493641 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wb9bn_2b4f740d-a1ca-450f-adad-afb42efe0c76/ovnkube-controller/3.log" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.495627 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wb9bn_2b4f740d-a1ca-450f-adad-afb42efe0c76/ovnkube-controller/2.log" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.500345 4861 generic.go:334] "Generic (PLEG): container finished" podID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerID="eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512" exitCode=1 Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.500467 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" event={"ID":"2b4f740d-a1ca-450f-adad-afb42efe0c76","Type":"ContainerDied","Data":"eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512"} Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.500836 4861 scope.go:117] "RemoveContainer" containerID="27d2156ebdf7cceb1e9cf743d86caecc86556efd63727f2291a294846a6c2863" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.506079 4861 scope.go:117] "RemoveContainer" containerID="eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512" Feb 19 13:11:07 crc kubenswrapper[4861]: E0219 13:11:07.507183 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wb9bn_openshift-ovn-kubernetes(2b4f740d-a1ca-450f-adad-afb42efe0c76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.524874 4861 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"602146d1-e108-4bc5-b154-7d97a9d8c467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ac5f751756070f5a790893b4fd9f7d717ed9af254250ac73fd48f8f9d790fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c61c4908f4b667bbb8768512bb757c2500ff16e55e1106f4159a0a6565845bd\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c61c4908f4b667bbb8768512bb757c2500ff16e55e1106f4159a0a6565845bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:07Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.538153 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\"
,\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:07Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.550379 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:07Z is after 2025-08-24T17:21:41Z" Feb 19 
13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.559847 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.559910 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.559928 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.559951 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.559968 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:07Z","lastTransitionTime":"2026-02-19T13:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.563660 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa2d69da9a7c9b0d96ca3c642a204c152a426e9aba16467134005fe25c907a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:11:02Z\\\",\\\"message\\\":\\\"2026-02-19T13:10:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_33fcfffd-6872-41b0-8cfe-02db9577098b\\\\n2026-02-19T13:10:17+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_33fcfffd-6872-41b0-8cfe-02db9577098b to /host/opt/cni/bin/\\\\n2026-02-19T13:10:17Z [verbose] multus-daemon started\\\\n2026-02-19T13:10:17Z [verbose] Readiness Indicator file check\\\\n2026-02-19T13:11:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:07Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.581767 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27d2156ebdf7cceb1e9cf743d86caecc86556efd63727f2291a294846a6c2863\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"message\\\":\\\"e-manager/olm-operator-metrics_TCP_cluster\\\\\\\", 
UUID:\\\\\\\"63b1440a-0908-4cab-8799-012fa1cf0b07\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.168\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0219 13:10:37.866399 6541 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:11:06Z\\\",\\\"message\\\":\\\"il\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 13:11:06.956931 6946 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0219 13:11:06.956940 6946 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0219 13:11:06.956947 6946 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0219 13:11:06.956972 6946 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0219 13:11:06.956978 6946 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0219 13:11:06.956988 6946 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-ffskh\\\\nI0219 13:11:06.956994 6946 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-ffskh\\\\nI0219 13:11:06.957000 6946 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-ffskh in node crc\\\\nI0219 13:11:06.957005 6946 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-ffskh after 0 failed attempt(s)\\\\nI0219 13:11:06.957010 6946 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-ffskh\\\\nI0219 13:11:06.957025 6946 obj_retry.go:303] 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:07Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.597623 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de22a290-251d-48e9-95e8-f4dbebd04451\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd93ee70e80340ccdd5b6c229dcd78e86eccb137368addb41781d931ad54507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452bcb019fab43ee8af55567145d64a915f1a
1cf4bcc71d38f84240689f16a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8rwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:07Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.614485 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:07Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.629176 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd90f8c7-a6a1-4fd9-be50-c8d386066ecd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374a70175dbefba8dcf2f5efbc5a822fc0c8889fb5ec0c634109b12cbac4111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5fd7534665e46f03e90d032f43b20b131d640035418116e9c561ea8290f83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e624f4e7dff2715925374ba4efcf3b4bf3f7cc626dfddceced300dc2392eaa92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://027ae849216b720e4a482b9ac7629550fa1d01be0bdf2757c160dac0643ac67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://027ae849216b720e4a482b9ac7629550fa1d01be0bdf2757c160dac0643ac67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:07Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.642073 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:07Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.656838 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:07Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.662311 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.662347 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.662359 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.662378 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.662390 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:07Z","lastTransitionTime":"2026-02-19T13:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.667905 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/d
ocker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:07Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.681094 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:07Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.693360 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:11:07Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.706773 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:07Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.726915 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1177130c68cb37f6c2057bcefc307690fd3ee5cd4e5e1992044aad2a199ab83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb080
bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:07Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.756566 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aeeeace-d76b-4946-b3f7-6949da150692\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d781bfecbeb39261bba4fc9d1b03f8f1cbed369f7853aaff900ef7ca37a00b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badfa69a0b27c1fd766d08c5c18d316f1adadb6f12c5f2a80915ab8b37e46cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ec651c7187f3afedf843d7538ba88e9fc4aa1619fe11e3a4e61cc570c91443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7a48180980ed67d1
755a75c9dbac5b58faa20142beff57ef5e9a53881606129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5abadbe7456206a92ef9f0ab4c940622079e3d05f062cfeebea199db607be49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1c5a47b0dfcd797d27e124b9177fd1a18dc33b184b450147a722194601fb537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1c5a47b0dfcd797d27e124b9177fd1a18dc33b184b450147a722194601fb537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f95f6f5074dff8b86ddf6b24bf229f0e341d77ad4efb805ab716190ca05966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9f95f6f5074dff8b86ddf6b24bf229f0e341d77ad4efb805ab716190ca05966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://709e57872588f21dbac5f828aa88be2080b32eaa552bb75d62294ed0a4c3be10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://709e57872588f21dbac5f828aa88be2080b32eaa552bb75d62294ed0a4c3be10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:07Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.765072 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.765116 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.765128 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.765143 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.765153 4861 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:07Z","lastTransitionTime":"2026-02-19T13:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.777502 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:07Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.791405 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:07Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.805445 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kjwt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"163fc0e2-f792-4062-88a7-3ed764a08103\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kjwt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:07Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:07 crc 
kubenswrapper[4861]: I0219 13:11:07.868296 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.868356 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.868374 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.868397 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.868441 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:07Z","lastTransitionTime":"2026-02-19T13:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.971498 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.971546 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.971557 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.971578 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.971594 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:07Z","lastTransitionTime":"2026-02-19T13:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:07 crc kubenswrapper[4861]: I0219 13:11:07.977056 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:11:07 crc kubenswrapper[4861]: E0219 13:11:07.977247 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.074797 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.075206 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.075374 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.075540 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.075696 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:08Z","lastTransitionTime":"2026-02-19T13:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.112767 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 12:18:41.127810185 +0000 UTC Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.178400 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.178486 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.178504 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.178529 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.178546 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:08Z","lastTransitionTime":"2026-02-19T13:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.281194 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.281266 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.281290 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.281322 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.281345 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:08Z","lastTransitionTime":"2026-02-19T13:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.383820 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.383865 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.383876 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.383894 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.383906 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:08Z","lastTransitionTime":"2026-02-19T13:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.486395 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.486502 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.486521 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.486549 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.486567 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:08Z","lastTransitionTime":"2026-02-19T13:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.507942 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wb9bn_2b4f740d-a1ca-450f-adad-afb42efe0c76/ovnkube-controller/3.log" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.512672 4861 scope.go:117] "RemoveContainer" containerID="eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512" Feb 19 13:11:08 crc kubenswrapper[4861]: E0219 13:11:08.512934 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wb9bn_openshift-ovn-kubernetes(2b4f740d-a1ca-450f-adad-afb42efe0c76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.535855 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1177130c68cb37f6c2057bcefc307690fd3ee5cd4e5e1992044aad2a199ab83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb080
bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:08Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.550938 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:08Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.571544 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:08Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.589181 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:11:08Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.589948 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.589984 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.589995 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.590012 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.590022 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:08Z","lastTransitionTime":"2026-02-19T13:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.600611 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:08Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.614838 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kjwt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"163fc0e2-f792-4062-88a7-3ed764a08103\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kjwt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:08Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:08 crc 
kubenswrapper[4861]: I0219 13:11:08.639248 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aeeeace-d76b-4946-b3f7-6949da150692\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d781bfecbeb39261bba4fc9d1b03f8f1cbed369f7853aaff900ef7ca37a00b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://badfa69a0b27c1fd766d08c5c18d316f1adadb6f12c5f2a80915ab8b37e46cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ec651c7187f3afedf843d7538ba88e9fc4aa1619fe11e3a4e61cc570c91443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7a48180980ed67d1755a75c9dbac5b58faa20142beff57ef5e9a53881606129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5abadbe7456206a92ef9f0ab4c940622079e3d05f062cfeebea199db607be49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1c5a47b0dfcd797d27e124b9177fd1a18dc33b184b450147a722194601fb537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1c5a47b0dfcd797d27e124b9177fd1a18dc33b184b450147a722194601fb537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f95f6f5074dff8b86ddf6b24bf229f0e341d77ad4efb805ab716190ca05966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9f95f6f5074dff8b86ddf6b24bf229f0e341d77ad4efb805ab716190ca05966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://709e57872588f21dbac5f828aa88be2080b32eaa552bb75d62294ed0a4c3be10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://709e57872588f21dbac5f828aa88be2080b32eaa552bb75d62294ed0a4c3be10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:08Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.656165 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:08Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.670990 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:08Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.689144 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa2d69da9a7c9b0d96ca3c642a204c152a426e9
aba16467134005fe25c907a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:11:02Z\\\",\\\"message\\\":\\\"2026-02-19T13:10:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_33fcfffd-6872-41b0-8cfe-02db9577098b\\\\n2026-02-19T13:10:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_33fcfffd-6872-41b0-8cfe-02db9577098b to /host/opt/cni/bin/\\\\n2026-02-19T13:10:17Z [verbose] multus-daemon started\\\\n2026-02-19T13:10:17Z [verbose] Readiness Indicator file check\\\\n2026-02-19T13:11:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:08Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.692399 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.692586 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.692635 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.692651 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.692663 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:08Z","lastTransitionTime":"2026-02-19T13:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.700725 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"602146d1-e108-4bc5-b154-7d97a9d8c467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ac5f751756070f5a790893b4fd9f7d717ed9af254250ac73fd48f8f9d790fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c61c4908f4b667bbb8768512bb757c2500ff16e55e1106f4159a0a6565845bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c61c4908f4b667bbb8768512bb757c2500ff16e55e1106f4159a0a6565845bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:08Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.718563 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\"
,\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:08Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.734130 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:08Z is after 2025-08-24T17:21:41Z" Feb 19 
13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.751167 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:08Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.774102 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:11:06Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 13:11:06.956931 6946 
obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0219 13:11:06.956940 6946 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0219 13:11:06.956947 6946 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0219 13:11:06.956972 6946 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0219 13:11:06.956978 6946 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0219 13:11:06.956988 6946 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-ffskh\\\\nI0219 13:11:06.956994 6946 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-ffskh\\\\nI0219 13:11:06.957000 6946 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-ffskh in node crc\\\\nI0219 13:11:06.957005 6946 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-ffskh after 0 failed attempt(s)\\\\nI0219 13:11:06.957010 6946 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-ffskh\\\\nI0219 13:11:06.957025 6946 obj_retry.go:303] \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:11:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wb9bn_openshift-ovn-kubernetes(2b4f740d-a1ca-450f-adad-afb42efe0c76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa5
8e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:08Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.788000 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de22a290-251d-48e9-95e8-f4dbebd04451\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd93ee70e80340ccdd5b6c229dcd78e86eccb137368addb41781d931ad54507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452bcb019fab43ee8af55567145d64a915f1a
1cf4bcc71d38f84240689f16a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8rwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:08Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.794926 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.794971 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.794987 4861 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.795009 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.795025 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:08Z","lastTransitionTime":"2026-02-19T13:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.804251 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:08Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.817169 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd90f8c7-a6a1-4fd9-be50-c8d386066ecd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374a70175dbefba8dcf2f5efbc5a822fc0c8889fb5ec0c634109b12cbac4111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5fd7534665e46f03e90d032f43b20b131d640035418116e9c561ea8290f83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e624f4e7dff2715925374ba4efcf3b4bf3f7cc626dfddceced300dc2392eaa92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://027ae849216b720e4a482b9ac7629550fa1d01be0bdf2757c160dac0643ac67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027ae849216b720e4a482b9ac7629550fa1d01be0bdf2757c160dac0643ac67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:08Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.832836 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:08Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.897582 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.897607 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.897615 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:08 crc 
kubenswrapper[4861]: I0219 13:11:08.897643 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.897652 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:08Z","lastTransitionTime":"2026-02-19T13:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.976680 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.976709 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:11:08 crc kubenswrapper[4861]: I0219 13:11:08.976745 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:11:08 crc kubenswrapper[4861]: E0219 13:11:08.976808 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:11:08 crc kubenswrapper[4861]: E0219 13:11:08.976939 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:11:08 crc kubenswrapper[4861]: E0219 13:11:08.977028 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.001072 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.001154 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.001176 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.001207 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.001226 4861 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:09Z","lastTransitionTime":"2026-02-19T13:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.104448 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.104503 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.104528 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.104552 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.104568 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:09Z","lastTransitionTime":"2026-02-19T13:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.113647 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 11:35:14.365310772 +0000 UTC Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.207273 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.207316 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.207329 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.207353 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.207375 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:09Z","lastTransitionTime":"2026-02-19T13:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.310260 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.310333 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.310356 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.310387 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.310410 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:09Z","lastTransitionTime":"2026-02-19T13:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.412671 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.412703 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.412712 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.412724 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.412733 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:09Z","lastTransitionTime":"2026-02-19T13:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.514456 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.514502 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.514518 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.514536 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.514547 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:09Z","lastTransitionTime":"2026-02-19T13:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.616394 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.616453 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.616468 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.616484 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.616493 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:09Z","lastTransitionTime":"2026-02-19T13:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.718982 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.719019 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.719029 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.719043 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.719051 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:09Z","lastTransitionTime":"2026-02-19T13:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.820865 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.820915 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.820924 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.820938 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.820947 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:09Z","lastTransitionTime":"2026-02-19T13:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.923166 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.923217 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.923226 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.923242 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.923251 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:09Z","lastTransitionTime":"2026-02-19T13:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:09 crc kubenswrapper[4861]: I0219 13:11:09.976923 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:11:09 crc kubenswrapper[4861]: E0219 13:11:09.977162 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.025390 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.025449 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.025481 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.025496 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.025505 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:10Z","lastTransitionTime":"2026-02-19T13:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.114768 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 16:19:36.175634603 +0000 UTC Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.127569 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.127607 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.127615 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.127628 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.127637 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:10Z","lastTransitionTime":"2026-02-19T13:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.230560 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.230601 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.230610 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.230629 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.230638 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:10Z","lastTransitionTime":"2026-02-19T13:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.333226 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.333275 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.333293 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.333311 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.333323 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:10Z","lastTransitionTime":"2026-02-19T13:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.434995 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.435055 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.435076 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.435096 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.435113 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:10Z","lastTransitionTime":"2026-02-19T13:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.537844 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.537888 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.537899 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.537914 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.537926 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:10Z","lastTransitionTime":"2026-02-19T13:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.640008 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.640057 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.640071 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.640095 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.640111 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:10Z","lastTransitionTime":"2026-02-19T13:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.702163 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.702205 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.702217 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.702232 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.702244 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:10Z","lastTransitionTime":"2026-02-19T13:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:10 crc kubenswrapper[4861]: E0219 13:11:10.718629 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:10Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.722503 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.722536 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.722548 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.722564 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.722575 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:10Z","lastTransitionTime":"2026-02-19T13:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:10 crc kubenswrapper[4861]: E0219 13:11:10.735386 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:10Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.738989 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.739030 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.739038 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.739052 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.739062 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:10Z","lastTransitionTime":"2026-02-19T13:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:10 crc kubenswrapper[4861]: E0219 13:11:10.751906 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:10Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.755040 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.755093 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.755110 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.755145 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.755161 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:10Z","lastTransitionTime":"2026-02-19T13:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:10 crc kubenswrapper[4861]: E0219 13:11:10.766544 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:10Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.770187 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.770269 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.770278 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.770302 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.770311 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:10Z","lastTransitionTime":"2026-02-19T13:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:10 crc kubenswrapper[4861]: E0219 13:11:10.783563 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:10Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:10 crc kubenswrapper[4861]: E0219 13:11:10.783858 4861 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.785556 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.785595 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.785610 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.785629 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.785644 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:10Z","lastTransitionTime":"2026-02-19T13:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.887688 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.887725 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.887737 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.887753 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.887766 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:10Z","lastTransitionTime":"2026-02-19T13:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.976705 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.976753 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.976813 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:11:10 crc kubenswrapper[4861]: E0219 13:11:10.976827 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:11:10 crc kubenswrapper[4861]: E0219 13:11:10.976884 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:11:10 crc kubenswrapper[4861]: E0219 13:11:10.976962 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.990470 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.990542 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.990557 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.990574 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:10 crc kubenswrapper[4861]: I0219 13:11:10.990587 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:10Z","lastTransitionTime":"2026-02-19T13:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.093317 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.093591 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.093630 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.093648 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.093657 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:11Z","lastTransitionTime":"2026-02-19T13:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.115881 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 17:05:26.858833149 +0000 UTC Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.196544 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.196589 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.196605 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.196625 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.196640 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:11Z","lastTransitionTime":"2026-02-19T13:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.299091 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.299121 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.299131 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.299144 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.299153 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:11Z","lastTransitionTime":"2026-02-19T13:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.401540 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.401574 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.401583 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.401595 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.401603 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:11Z","lastTransitionTime":"2026-02-19T13:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.503739 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.503780 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.503792 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.503810 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.503823 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:11Z","lastTransitionTime":"2026-02-19T13:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.605714 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.605762 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.605774 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.605791 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.605801 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:11Z","lastTransitionTime":"2026-02-19T13:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.708457 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.708494 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.708505 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.708523 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.708531 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:11Z","lastTransitionTime":"2026-02-19T13:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.810456 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.810496 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.810509 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.810523 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.810534 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:11Z","lastTransitionTime":"2026-02-19T13:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.913409 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.913463 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.913472 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.913490 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.913501 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:11Z","lastTransitionTime":"2026-02-19T13:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:11 crc kubenswrapper[4861]: I0219 13:11:11.976332 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:11:11 crc kubenswrapper[4861]: E0219 13:11:11.976533 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.026951 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.027014 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.027023 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.027038 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.027047 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:12Z","lastTransitionTime":"2026-02-19T13:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.116522 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 21:38:41.496427901 +0000 UTC Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.129392 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.129469 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.129486 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.129505 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.129517 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:12Z","lastTransitionTime":"2026-02-19T13:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.231996 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.232035 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.232045 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.232061 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.232073 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:12Z","lastTransitionTime":"2026-02-19T13:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.335081 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.335121 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.335129 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.335145 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.335156 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:12Z","lastTransitionTime":"2026-02-19T13:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.438937 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.439015 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.439033 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.439056 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.439073 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:12Z","lastTransitionTime":"2026-02-19T13:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.541387 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.541441 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.541469 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.541481 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.541489 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:12Z","lastTransitionTime":"2026-02-19T13:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.643599 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.643648 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.643658 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.643670 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.643679 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:12Z","lastTransitionTime":"2026-02-19T13:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.745352 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.745401 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.745451 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.745487 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.745591 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:12Z","lastTransitionTime":"2026-02-19T13:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.847135 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.847170 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.847182 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.847196 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.847206 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:12Z","lastTransitionTime":"2026-02-19T13:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.949438 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.949481 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.949494 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.949511 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.949524 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:12Z","lastTransitionTime":"2026-02-19T13:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.973138 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:11:12 crc kubenswrapper[4861]: E0219 13:11:12.973511 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 13:12:16.973484399 +0000 UTC m=+151.634587667 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.976167 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.976167 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:11:12 crc kubenswrapper[4861]: E0219 13:11:12.976373 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:11:12 crc kubenswrapper[4861]: E0219 13:11:12.976295 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:11:12 crc kubenswrapper[4861]: I0219 13:11:12.976190 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:11:12 crc kubenswrapper[4861]: E0219 13:11:12.976473 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.052032 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.052108 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.052130 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.052154 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.052171 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:13Z","lastTransitionTime":"2026-02-19T13:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.074047 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.074090 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.074107 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.074129 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:11:13 crc kubenswrapper[4861]: E0219 13:11:13.074267 4861 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 13:11:13 crc kubenswrapper[4861]: E0219 13:11:13.074326 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 13:12:17.074308474 +0000 UTC m=+151.735411702 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 13:11:13 crc kubenswrapper[4861]: E0219 13:11:13.074351 4861 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 13:11:13 crc kubenswrapper[4861]: E0219 13:11:13.074374 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 13:11:13 crc kubenswrapper[4861]: E0219 13:11:13.074634 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 13:11:13 crc kubenswrapper[4861]: E0219 13:11:13.074657 4861 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 13:11:13 crc kubenswrapper[4861]: E0219 13:11:13.074669 4861 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 13:11:13 crc kubenswrapper[4861]: E0219 13:11:13.074598 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 13:12:17.074562802 +0000 UTC m=+151.735666080 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 13:11:13 crc kubenswrapper[4861]: E0219 13:11:13.074704 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 13:11:13 crc kubenswrapper[4861]: E0219 13:11:13.074734 4861 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 13:11:13 crc kubenswrapper[4861]: E0219 13:11:13.074836 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 13:12:17.074719147 +0000 UTC m=+151.735822465 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 13:11:13 crc kubenswrapper[4861]: E0219 13:11:13.074866 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 13:12:17.074855281 +0000 UTC m=+151.735958519 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.117034 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 15:05:19.609383858 +0000 UTC Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.154298 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.154345 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.154359 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:13 crc 
kubenswrapper[4861]: I0219 13:11:13.154378 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.154388 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:13Z","lastTransitionTime":"2026-02-19T13:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.256850 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.256920 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.256942 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.256975 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.257000 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:13Z","lastTransitionTime":"2026-02-19T13:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.359766 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.359807 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.359816 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.359832 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.359843 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:13Z","lastTransitionTime":"2026-02-19T13:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.462519 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.462559 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.462568 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.462582 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.462591 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:13Z","lastTransitionTime":"2026-02-19T13:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.567188 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.567240 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.567255 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.567275 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.567289 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:13Z","lastTransitionTime":"2026-02-19T13:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.669007 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.669040 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.669049 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.669060 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.669069 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:13Z","lastTransitionTime":"2026-02-19T13:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.771468 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.771521 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.771534 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.771552 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.771563 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:13Z","lastTransitionTime":"2026-02-19T13:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.873971 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.874025 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.874041 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.874063 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.874082 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:13Z","lastTransitionTime":"2026-02-19T13:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.976204 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:11:13 crc kubenswrapper[4861]: E0219 13:11:13.976344 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.976618 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.976673 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.976682 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.976698 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:13 crc kubenswrapper[4861]: I0219 13:11:13.976709 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:13Z","lastTransitionTime":"2026-02-19T13:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.079336 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.079382 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.079393 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.079411 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.079463 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:14Z","lastTransitionTime":"2026-02-19T13:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.117798 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 12:11:21.714402725 +0000 UTC Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.184391 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.184453 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.184465 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.184480 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.184489 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:14Z","lastTransitionTime":"2026-02-19T13:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.287108 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.287147 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.287158 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.287174 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.287185 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:14Z","lastTransitionTime":"2026-02-19T13:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.388910 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.388942 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.388952 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.388965 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.388974 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:14Z","lastTransitionTime":"2026-02-19T13:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.491318 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.491351 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.491362 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.491377 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.491388 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:14Z","lastTransitionTime":"2026-02-19T13:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.593135 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.593171 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.593179 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.593219 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.593231 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:14Z","lastTransitionTime":"2026-02-19T13:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.695161 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.695198 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.695207 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.695221 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.695230 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:14Z","lastTransitionTime":"2026-02-19T13:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.797826 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.797857 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.797866 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.797880 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.797890 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:14Z","lastTransitionTime":"2026-02-19T13:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.899941 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.899985 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.899993 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.900006 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.900014 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:14Z","lastTransitionTime":"2026-02-19T13:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.976865 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.977131 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:11:14 crc kubenswrapper[4861]: I0219 13:11:14.977242 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:11:14 crc kubenswrapper[4861]: E0219 13:11:14.977305 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:11:14 crc kubenswrapper[4861]: E0219 13:11:14.977588 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:11:14 crc kubenswrapper[4861]: E0219 13:11:14.977663 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.002946 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.003015 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.003032 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.003055 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.003070 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:15Z","lastTransitionTime":"2026-02-19T13:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.105984 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.106042 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.106058 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.106083 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.106101 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:15Z","lastTransitionTime":"2026-02-19T13:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.118269 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 23:34:01.95062989 +0000 UTC Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.209275 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.209381 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.209407 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.209485 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.209554 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:15Z","lastTransitionTime":"2026-02-19T13:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.312956 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.313038 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.313052 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.313069 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.313082 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:15Z","lastTransitionTime":"2026-02-19T13:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.416402 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.416480 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.416493 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.416512 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.416526 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:15Z","lastTransitionTime":"2026-02-19T13:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.520476 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.520536 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.520548 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.520573 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.520587 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:15Z","lastTransitionTime":"2026-02-19T13:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.623019 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.623063 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.623073 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.623091 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.623112 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:15Z","lastTransitionTime":"2026-02-19T13:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.725521 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.725556 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.725566 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.725583 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.725594 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:15Z","lastTransitionTime":"2026-02-19T13:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.827893 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.827928 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.827939 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.827955 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.827964 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:15Z","lastTransitionTime":"2026-02-19T13:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.930497 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.930536 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.930549 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.930566 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.930576 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:15Z","lastTransitionTime":"2026-02-19T13:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.976917 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:11:15 crc kubenswrapper[4861]: E0219 13:11:15.977053 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:11:15 crc kubenswrapper[4861]: I0219 13:11:15.990726 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:15Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.001022 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd90f8c7-a6a1-4fd9-be50-c8d386066ecd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374a70175dbefba8dcf2f5efbc5a822fc0c8889fb5ec0c634109b12cbac4111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5fd7534665e46f03e90d032f43b20b131d640035418116e9c561ea8290f83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e624f4e7dff2715925374ba4efcf3b4bf3f7cc626dfddceced300dc2392eaa92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://027ae849216b720e4a482b9ac7629550fa1d01be0bdf2757c160dac0643ac67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://027ae849216b720e4a482b9ac7629550fa1d01be0bdf2757c160dac0643ac67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:15Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.014369 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.025915 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.033242 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.033298 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.033315 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.033337 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.033354 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:16Z","lastTransitionTime":"2026-02-19T13:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.044783 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:11:06Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 13:11:06.956931 6946 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0219 13:11:06.956940 6946 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0219 13:11:06.956947 6946 
ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0219 13:11:06.956972 6946 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0219 13:11:06.956978 6946 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0219 13:11:06.956988 6946 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-ffskh\\\\nI0219 13:11:06.956994 6946 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-ffskh\\\\nI0219 13:11:06.957000 6946 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-ffskh in node crc\\\\nI0219 13:11:06.957005 6946 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-ffskh after 0 failed attempt(s)\\\\nI0219 13:11:06.957010 6946 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-ffskh\\\\nI0219 13:11:06.957025 6946 obj_retry.go:303] \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:11:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wb9bn_openshift-ovn-kubernetes(2b4f740d-a1ca-450f-adad-afb42efe0c76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa5
8e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.055579 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de22a290-251d-48e9-95e8-f4dbebd04451\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd93ee70e80340ccdd5b6c229dcd78e86eccb137368addb41781d931ad54507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452bcb019fab43ee8af55567145d64a915f1a
1cf4bcc71d38f84240689f16a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8rwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.070166 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.080652 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:11:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.089834 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.102838 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1177130c68cb37f6c2057bcefc307690fd3ee5cd4e5e1992044aad2a199ab83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb080
bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.112481 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.118464 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 17:10:52.977380888 +0000 UTC Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.129793 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aeeeace-d76b-4946-b3f7-6949da150692\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d781bfecbeb39261bba4fc9d1b03f8f1cbed369f7853aaff900ef7ca37a00b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badfa69a0b27c1fd766d08c5c18d316f1adadb6f12c5f2a80915ab8b37e46cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ec651c7187f3afedf843d7538ba88e9fc4aa1619fe11e3a4e61cc570c91443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7a48180980ed67d1755a75c9dbac5b58faa20142beff57ef5e9a53881606129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5abadbe7456206a92ef9f0ab4c940622079e3d05f062cfeebea199db607be49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1c5a47b0dfcd797d27e124b9177fd1a18dc33b184b450147a722194601fb537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1c5a47b0dfcd797d27e124b9177fd1a18dc33b184b450147a722194601fb537\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f95f6f5074dff8b86ddf6b24bf229f0e341d77ad4efb805ab716190ca05966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9f95f6f5074dff8b86ddf6b24bf229f0e341d77ad4efb805ab716190ca05966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://709e57872588f21dbac5f828aa88be2080b32eaa552bb75d62294ed0a4c3be10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://709e57872588f21dbac5f828aa88be2080b32eaa552bb75d62294ed0a4c3be10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.134900 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.134949 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.134960 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.134977 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.134988 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:16Z","lastTransitionTime":"2026-02-19T13:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.145872 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.154989 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.164350 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kjwt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"163fc0e2-f792-4062-88a7-3ed764a08103\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kjwt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:16 crc 
kubenswrapper[4861]: I0219 13:11:16.172957 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"602146d1-e108-4bc5-b154-7d97a9d8c467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ac5f751756070f5a790893b4fd9f7d717ed9af254250ac73fd48f8f9d790fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://3c61c4908f4b667bbb8768512bb757c2500ff16e55e1106f4159a0a6565845bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c61c4908f4b667bbb8768512bb757c2500ff16e55e1106f4159a0a6565845bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.190204 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\"
,\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.206000 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:16Z is after 2025-08-24T17:21:41Z" Feb 19 
13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.218681 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa2d69da9a7c9b0d96ca3c642a204c152a426e9aba16467134005fe25c907a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:11:02Z\\\",\\\"message\\\":\\\"2026-02-19T13:10:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_33fcfffd-6872-41b0-8cfe-02db9577098b\\\\n2026-02-19T13:10:17+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_33fcfffd-6872-41b0-8cfe-02db9577098b to /host/opt/cni/bin/\\\\n2026-02-19T13:10:17Z [verbose] multus-daemon started\\\\n2026-02-19T13:10:17Z [verbose] Readiness Indicator file check\\\\n2026-02-19T13:11:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\
\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:16Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.237514 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.237800 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.237899 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.237998 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.238100 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:16Z","lastTransitionTime":"2026-02-19T13:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.340776 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.340803 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.340810 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.340822 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.340831 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:16Z","lastTransitionTime":"2026-02-19T13:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.443119 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.443155 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.443166 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.443182 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.443191 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:16Z","lastTransitionTime":"2026-02-19T13:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.546391 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.546495 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.546516 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.546538 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.546556 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:16Z","lastTransitionTime":"2026-02-19T13:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.649258 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.650003 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.650081 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.650157 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.650236 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:16Z","lastTransitionTime":"2026-02-19T13:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.753090 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.753140 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.753151 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.753165 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.753174 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:16Z","lastTransitionTime":"2026-02-19T13:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.855825 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.856281 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.856357 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.856459 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.856533 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:16Z","lastTransitionTime":"2026-02-19T13:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.959605 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.959668 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.959687 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.959712 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.959729 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:16Z","lastTransitionTime":"2026-02-19T13:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.976248 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.976287 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:11:16 crc kubenswrapper[4861]: I0219 13:11:16.976285 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:11:16 crc kubenswrapper[4861]: E0219 13:11:16.976709 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:11:16 crc kubenswrapper[4861]: E0219 13:11:16.976811 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:11:16 crc kubenswrapper[4861]: E0219 13:11:16.976556 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.061668 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.061965 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.062033 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.062108 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.062171 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:17Z","lastTransitionTime":"2026-02-19T13:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.118643 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 12:50:29.321855591 +0000 UTC Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.164881 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.164928 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.164939 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.164953 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.164962 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:17Z","lastTransitionTime":"2026-02-19T13:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.267740 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.267812 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.267835 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.267867 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.267893 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:17Z","lastTransitionTime":"2026-02-19T13:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.370195 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.370308 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.370326 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.370350 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.370368 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:17Z","lastTransitionTime":"2026-02-19T13:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.473741 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.473769 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.473776 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.473788 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.473797 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:17Z","lastTransitionTime":"2026-02-19T13:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.576387 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.576762 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.576974 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.577169 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.577372 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:17Z","lastTransitionTime":"2026-02-19T13:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.680554 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.680813 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.680874 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.680953 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.681022 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:17Z","lastTransitionTime":"2026-02-19T13:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.784465 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.784571 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.784636 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.784663 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.784721 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:17Z","lastTransitionTime":"2026-02-19T13:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.887745 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.887775 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.887783 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.887797 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.887806 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:17Z","lastTransitionTime":"2026-02-19T13:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.976706 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:11:17 crc kubenswrapper[4861]: E0219 13:11:17.976908 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.990547 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.990604 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.990618 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.990637 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:17 crc kubenswrapper[4861]: I0219 13:11:17.990650 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:17Z","lastTransitionTime":"2026-02-19T13:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.094501 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.094548 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.094561 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.094579 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.094590 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:18Z","lastTransitionTime":"2026-02-19T13:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.119239 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 16:25:19.609544163 +0000 UTC Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.198749 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.198835 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.198862 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.198893 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.198917 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:18Z","lastTransitionTime":"2026-02-19T13:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.302171 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.302235 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.302297 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.302325 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.302352 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:18Z","lastTransitionTime":"2026-02-19T13:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.405885 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.405966 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.405984 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.406013 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.406032 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:18Z","lastTransitionTime":"2026-02-19T13:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.508737 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.508774 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.508786 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.508804 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.508814 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:18Z","lastTransitionTime":"2026-02-19T13:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.612308 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.612360 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.612374 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.612393 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.612405 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:18Z","lastTransitionTime":"2026-02-19T13:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.715218 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.715295 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.715318 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.715347 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.715370 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:18Z","lastTransitionTime":"2026-02-19T13:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.818175 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.818248 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.818264 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.818283 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.818294 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:18Z","lastTransitionTime":"2026-02-19T13:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.921120 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.921171 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.921181 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.921197 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.921208 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:18Z","lastTransitionTime":"2026-02-19T13:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.976715 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.976721 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:11:18 crc kubenswrapper[4861]: I0219 13:11:18.976934 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:11:18 crc kubenswrapper[4861]: E0219 13:11:18.977028 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:11:18 crc kubenswrapper[4861]: E0219 13:11:18.977108 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:11:18 crc kubenswrapper[4861]: E0219 13:11:18.977209 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.023934 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.023987 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.023998 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.024015 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.024029 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:19Z","lastTransitionTime":"2026-02-19T13:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.120472 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 22:13:01.290474951 +0000 UTC Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.126766 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.126824 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.126843 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.126869 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.126893 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:19Z","lastTransitionTime":"2026-02-19T13:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.229698 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.229735 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.229751 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.229768 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.229782 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:19Z","lastTransitionTime":"2026-02-19T13:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.333190 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.333265 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.333289 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.333333 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.333365 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:19Z","lastTransitionTime":"2026-02-19T13:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.436355 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.436403 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.436415 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.436457 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.436469 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:19Z","lastTransitionTime":"2026-02-19T13:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.539214 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.539273 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.539291 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.539313 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.539331 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:19Z","lastTransitionTime":"2026-02-19T13:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.641173 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.641239 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.641264 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.641292 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.641316 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:19Z","lastTransitionTime":"2026-02-19T13:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.743353 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.743394 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.743404 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.743436 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.743448 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:19Z","lastTransitionTime":"2026-02-19T13:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.846202 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.846264 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.846278 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.846296 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.846310 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:19Z","lastTransitionTime":"2026-02-19T13:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.948267 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.948309 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.948321 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.948339 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.948350 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:19Z","lastTransitionTime":"2026-02-19T13:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.976826 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:11:19 crc kubenswrapper[4861]: E0219 13:11:19.977236 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:11:19 crc kubenswrapper[4861]: I0219 13:11:19.977411 4861 scope.go:117] "RemoveContainer" containerID="eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512" Feb 19 13:11:19 crc kubenswrapper[4861]: E0219 13:11:19.977576 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wb9bn_openshift-ovn-kubernetes(2b4f740d-a1ca-450f-adad-afb42efe0c76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.051546 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.051583 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.051593 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.051608 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.051617 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:20Z","lastTransitionTime":"2026-02-19T13:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.120760 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 21:26:06.668291109 +0000 UTC Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.154520 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.154572 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.154589 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.154611 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.154630 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:20Z","lastTransitionTime":"2026-02-19T13:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.257702 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.257756 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.257774 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.257800 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.257818 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:20Z","lastTransitionTime":"2026-02-19T13:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.360958 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.361036 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.361063 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.361093 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.361115 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:20Z","lastTransitionTime":"2026-02-19T13:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.463579 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.463610 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.463618 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.463632 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.463641 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:20Z","lastTransitionTime":"2026-02-19T13:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.566634 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.566705 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.566723 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.566746 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.566765 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:20Z","lastTransitionTime":"2026-02-19T13:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.669346 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.669406 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.669455 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.669481 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.669497 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:20Z","lastTransitionTime":"2026-02-19T13:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.772211 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.772270 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.772287 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.772314 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.772331 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:20Z","lastTransitionTime":"2026-02-19T13:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.828507 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.828553 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.828565 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.828583 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.828595 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:20Z","lastTransitionTime":"2026-02-19T13:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:20 crc kubenswrapper[4861]: E0219 13:11:20.851837 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:20Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.857645 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.857796 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.858032 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.858257 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.858358 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:20Z","lastTransitionTime":"2026-02-19T13:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:20 crc kubenswrapper[4861]: E0219 13:11:20.875604 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:20Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.881035 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.881089 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.881107 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.881134 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.881153 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:20Z","lastTransitionTime":"2026-02-19T13:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:20 crc kubenswrapper[4861]: E0219 13:11:20.901071 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:20Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.906575 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.906832 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.906962 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.907111 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.907237 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:20Z","lastTransitionTime":"2026-02-19T13:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:20 crc kubenswrapper[4861]: E0219 13:11:20.926489 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:20Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.931631 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.931686 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.931703 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.931726 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.931740 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:20Z","lastTransitionTime":"2026-02-19T13:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:20 crc kubenswrapper[4861]: E0219 13:11:20.950111 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:20Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:20 crc kubenswrapper[4861]: E0219 13:11:20.950255 4861 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.953007 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.953250 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.953265 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.953281 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.953294 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:20Z","lastTransitionTime":"2026-02-19T13:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.976769 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.976796 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:11:20 crc kubenswrapper[4861]: E0219 13:11:20.976938 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:11:20 crc kubenswrapper[4861]: E0219 13:11:20.977081 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:11:20 crc kubenswrapper[4861]: I0219 13:11:20.977532 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:11:20 crc kubenswrapper[4861]: E0219 13:11:20.977830 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.057395 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.057446 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.057456 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.057468 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.057477 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:21Z","lastTransitionTime":"2026-02-19T13:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.121795 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 10:44:51.473786388 +0000 UTC Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.160487 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.160519 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.160527 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.160540 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.160550 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:21Z","lastTransitionTime":"2026-02-19T13:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.263223 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.263507 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.263596 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.263666 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.263742 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:21Z","lastTransitionTime":"2026-02-19T13:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.366924 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.366975 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.366991 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.367018 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.367040 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:21Z","lastTransitionTime":"2026-02-19T13:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.471089 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.471156 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.471177 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.471200 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.471217 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:21Z","lastTransitionTime":"2026-02-19T13:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.573811 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.573870 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.573891 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.573914 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.573931 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:21Z","lastTransitionTime":"2026-02-19T13:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.676938 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.677011 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.677037 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.677067 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.677090 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:21Z","lastTransitionTime":"2026-02-19T13:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.780269 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.780307 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.780318 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.780332 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.780340 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:21Z","lastTransitionTime":"2026-02-19T13:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.882822 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.882884 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.882903 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.882929 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.882949 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:21Z","lastTransitionTime":"2026-02-19T13:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.977006 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:11:21 crc kubenswrapper[4861]: E0219 13:11:21.977218 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.985059 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.985125 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.985143 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.985169 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:21 crc kubenswrapper[4861]: I0219 13:11:21.985186 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:21Z","lastTransitionTime":"2026-02-19T13:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.088747 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.088825 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.088847 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.089094 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.089136 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:22Z","lastTransitionTime":"2026-02-19T13:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.122911 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 10:59:56.922882849 +0000 UTC Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.191529 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.191595 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.191608 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.191625 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.191637 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:22Z","lastTransitionTime":"2026-02-19T13:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.294132 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.294172 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.294181 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.294194 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.294204 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:22Z","lastTransitionTime":"2026-02-19T13:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.396931 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.396970 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.396979 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.396994 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.397003 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:22Z","lastTransitionTime":"2026-02-19T13:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.499715 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.499774 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.499792 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.499816 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.499833 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:22Z","lastTransitionTime":"2026-02-19T13:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.602357 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.602469 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.602500 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.602530 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.602553 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:22Z","lastTransitionTime":"2026-02-19T13:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.704356 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.704414 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.704463 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.704495 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.704514 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:22Z","lastTransitionTime":"2026-02-19T13:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.807776 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.807836 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.807855 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.807895 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.807909 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:22Z","lastTransitionTime":"2026-02-19T13:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.910819 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.910874 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.910890 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.910914 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.910956 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:22Z","lastTransitionTime":"2026-02-19T13:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.976809 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.976864 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:11:22 crc kubenswrapper[4861]: I0219 13:11:22.976818 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:11:22 crc kubenswrapper[4861]: E0219 13:11:22.976953 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:11:22 crc kubenswrapper[4861]: E0219 13:11:22.977142 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:11:22 crc kubenswrapper[4861]: E0219 13:11:22.977215 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.014670 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.014723 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.014740 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.014765 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.014781 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:23Z","lastTransitionTime":"2026-02-19T13:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.118023 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.118131 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.118199 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.118233 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.118311 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:23Z","lastTransitionTime":"2026-02-19T13:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.123072 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 22:31:00.08611995 +0000 UTC Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.220977 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.221044 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.221067 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.221096 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.221118 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:23Z","lastTransitionTime":"2026-02-19T13:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.324314 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.324382 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.324405 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.324474 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.324500 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:23Z","lastTransitionTime":"2026-02-19T13:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.428156 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.428237 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.428258 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.428282 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.428300 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:23Z","lastTransitionTime":"2026-02-19T13:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.530782 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.530869 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.530896 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.530924 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.530940 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:23Z","lastTransitionTime":"2026-02-19T13:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.634286 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.634347 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.634370 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.634392 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.634410 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:23Z","lastTransitionTime":"2026-02-19T13:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.737320 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.737386 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.737402 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.737462 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.737488 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:23Z","lastTransitionTime":"2026-02-19T13:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.840613 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.840677 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.840697 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.840721 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.840739 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:23Z","lastTransitionTime":"2026-02-19T13:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.943987 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.944143 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.944168 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.944201 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.944223 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:23Z","lastTransitionTime":"2026-02-19T13:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:23 crc kubenswrapper[4861]: I0219 13:11:23.976890 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:11:23 crc kubenswrapper[4861]: E0219 13:11:23.977285 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.047387 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.047528 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.047549 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.047570 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.047586 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:24Z","lastTransitionTime":"2026-02-19T13:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.123769 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 03:40:22.229125448 +0000 UTC Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.149751 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.149800 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.149813 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.149830 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.149842 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:24Z","lastTransitionTime":"2026-02-19T13:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.252231 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.252291 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.252307 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.252331 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.252348 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:24Z","lastTransitionTime":"2026-02-19T13:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.355048 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.355093 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.355105 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.355122 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.355132 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:24Z","lastTransitionTime":"2026-02-19T13:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.457893 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.457943 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.457955 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.457974 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.457986 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:24Z","lastTransitionTime":"2026-02-19T13:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.559852 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.559897 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.559906 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.559922 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.559934 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:24Z","lastTransitionTime":"2026-02-19T13:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.665201 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.665284 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.665298 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.665355 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.665389 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:24Z","lastTransitionTime":"2026-02-19T13:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.768785 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.768851 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.768874 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.768903 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.768925 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:24Z","lastTransitionTime":"2026-02-19T13:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.871563 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.871612 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.871624 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.871642 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.871652 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:24Z","lastTransitionTime":"2026-02-19T13:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.974989 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.975057 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.975080 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.975115 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.975139 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:24Z","lastTransitionTime":"2026-02-19T13:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.976373 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.976381 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:11:24 crc kubenswrapper[4861]: I0219 13:11:24.976402 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:11:24 crc kubenswrapper[4861]: E0219 13:11:24.976573 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:11:24 crc kubenswrapper[4861]: E0219 13:11:24.976789 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:11:24 crc kubenswrapper[4861]: E0219 13:11:24.976899 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.078454 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.078528 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.078549 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.078577 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.078595 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:25Z","lastTransitionTime":"2026-02-19T13:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.124187 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 15:18:32.272951601 +0000 UTC Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.182354 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.182405 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.182448 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.182476 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.182492 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:25Z","lastTransitionTime":"2026-02-19T13:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.285292 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.285353 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.285367 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.285388 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.285402 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:25Z","lastTransitionTime":"2026-02-19T13:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.388323 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.388371 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.388383 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.388401 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.388413 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:25Z","lastTransitionTime":"2026-02-19T13:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.491145 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.491187 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.491196 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.491209 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.491219 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:25Z","lastTransitionTime":"2026-02-19T13:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.593665 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.593713 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.593725 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.593743 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.593755 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:25Z","lastTransitionTime":"2026-02-19T13:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.697035 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.697091 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.697107 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.697130 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.697148 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:25Z","lastTransitionTime":"2026-02-19T13:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.800546 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.800636 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.800649 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.800674 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.800688 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:25Z","lastTransitionTime":"2026-02-19T13:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.904658 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.904718 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.904737 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.904762 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.904781 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:25Z","lastTransitionTime":"2026-02-19T13:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:25 crc kubenswrapper[4861]: I0219 13:11:25.977395 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:11:25 crc kubenswrapper[4861]: E0219 13:11:25.977653 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.005785 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aeeeace-d76b-4946-b3f7-6949da150692\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d781bfecbeb39261bba4fc9d1b03f8f1cbed369f7853aaff900ef7ca37a00b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badfa69a0b27c1fd766d08c5c18d316f1adadb6f12c5f2a80915ab8b37e46cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ec651c7187f3afedf843d7538ba88e9fc4aa1619fe11e3a4e61cc570c91443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7a48180980ed67d1755a75c9dbac5b58faa20142beff57ef5e9a53881606129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a
93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5abadbe7456206a92ef9f0ab4c940622079e3d05f062cfeebea199db607be49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1c5a47b0dfcd797d27e124b9177fd1a18dc33b184b450147a722194601fb537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1c5a47b0dfcd797d27e124b9177fd1a18dc33b184b450147a722194601fb537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f95f6f5074dff8b86ddf6b24bf229f0e341d77ad4efb805ab716190ca05966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9f95f6f5074dff8b86ddf6b24bf229f0e341d77ad4efb805ab716190ca05966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://709e57872588f21dbac5f828aa88be2080b32eaa552bb75d62294ed0a4c3be10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://709e57872588f21dbac5f828aa88be2080b32eaa552bb75d62294ed0a4c3be10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:49Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.007890 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.007948 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.007965 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.008621 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.008656 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:26Z","lastTransitionTime":"2026-02-19T13:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.029644 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c87b0fd90f0d5efba053b5b941064d6d08833684d55ce2900ce2e385bab611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.045997 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tcrxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a652269a-f440-45e1-bae5-29a3dfab4f51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7dcdbe2e1faee0818b95cfd9924235c9c51b58d690485b905d487481ea2b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xx5d7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tcrxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.060874 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kjwt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"163fc0e2-f792-4062-88a7-3ed764a08103\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kjwt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:26 crc 
kubenswrapper[4861]: I0219 13:11:26.074246 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"602146d1-e108-4bc5-b154-7d97a9d8c467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ac5f751756070f5a790893b4fd9f7d717ed9af254250ac73fd48f8f9d790fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://3c61c4908f4b667bbb8768512bb757c2500ff16e55e1106f4159a0a6565845bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c61c4908f4b667bbb8768512bb757c2500ff16e55e1106f4159a0a6565845bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.095519 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4599206a-22ea-4e74-acf8-fe2814bd0e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T13:10:02Z\\\"
,\\\"message\\\":\\\"W0219 13:09:51.382303 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 13:09:51.382760 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771506591 cert, and key in /tmp/serving-cert-3589299721/serving-signer.crt, /tmp/serving-cert-3589299721/serving-signer.key\\\\nI0219 13:09:51.860808 1 observer_polling.go:159] Starting file observer\\\\nW0219 13:09:51.862300 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 13:09:51.862755 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 13:09:51.863529 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3589299721/tls.crt::/tmp/serving-cert-3589299721/tls.key\\\\\\\"\\\\nF0219 13:10:02.207804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.111153 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.111235 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.111271 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.111303 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.111327 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:26Z","lastTransitionTime":"2026-02-19T13:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.116549 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7ca9a955c828ea84f1319b4dfeee852edbfba71c7b1ad2679efd03008860db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18e7c8bb752505f8b0cfa7075258d558fc8b8bda864c4554109f1be6a17fbca\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.124484 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 05:18:23.251075216 +0000 UTC Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.135108 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ffskh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa2d69da9a7c9b0d96ca3c642a204c152a426e9aba16467134005fe25c907a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:11:02Z\\\",\\\"message\\\":\\\"2026-02-19T13:10:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_33fcfffd-6872-41b0-8cfe-02db9577098b\\\\n2026-02-19T13:10:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_33fcfffd-6872-41b0-8cfe-02db9577098b to /host/opt/cni/bin/\\\\n2026-02-19T13:10:17Z [verbose] multus-daemon started\\\\n2026-02-19T13:10:17Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T13:11:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxqsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ffskh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.153487 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42e92be-3464-4ab7-91b4-e38725481052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://587d3dba1cf6d9d16333754bd982095e2a17e369c418322d270b85a989226469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090fc5b8310f9df893e4a43147cdad56602fed0d97c2313d64da8c403c4cb9ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b63902d979d163267a7f53c8ed2cc4c93f24285be783a3d66ba86028d19921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.169391 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd90f8c7-a6a1-4fd9-be50-c8d386066ecd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374a70175dbefba8dcf2f5efbc5a822fc0c8889fb5ec0c634109b12cbac4111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5fd7534665e46f03e90d032f43b20b131d640035418116e9c561ea8290f83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e624f4e7dff2715925374ba4efcf3b4bf3f7cc626dfddceced300dc2392eaa92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://027ae849216b720e4a482b9ac7629550fa1d01be0bdf2757c160dac0643ac67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://027ae849216b720e4a482b9ac7629550fa1d01be0bdf2757c160dac0643ac67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:09:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:09:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.192752 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.215894 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.215957 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.215981 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:26 crc 
kubenswrapper[4861]: I0219 13:11:26.216010 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.216029 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:26Z","lastTransitionTime":"2026-02-19T13:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.216085 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.251394 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4f740d-a1ca-450f-adad-afb42efe0c76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T13:11:06Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 13:11:06.956931 6946 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0219 13:11:06.956940 6946 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0219 13:11:06.956947 6946 
ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0219 13:11:06.956972 6946 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0219 13:11:06.956978 6946 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0219 13:11:06.956988 6946 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-ffskh\\\\nI0219 13:11:06.956994 6946 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-ffskh\\\\nI0219 13:11:06.957000 6946 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-ffskh in node crc\\\\nI0219 13:11:06.957005 6946 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-ffskh after 0 failed attempt(s)\\\\nI0219 13:11:06.957010 6946 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-ffskh\\\\nI0219 13:11:06.957025 6946 obj_retry.go:303] \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T13:11:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wb9bn_openshift-ovn-kubernetes(2b4f740d-a1ca-450f-adad-afb42efe0c76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2fa94c4291811fa5
8e14e91010eff012a16fb24c4d630f79db565388b7a49a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wb9bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.272203 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de22a290-251d-48e9-95e8-f4dbebd04451\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd93ee70e80340ccdd5b6c229dcd78e86eccb137368addb41781d931ad54507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452bcb019fab43ee8af55567145d64a915f1a
1cf4bcc71d38f84240689f16a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdl8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8rwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.292531 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.311395 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dea1f39c2020dbfadb2ab50e39ef38f5e2baf65198f7bcf3712270468d51ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T13:11:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.319539 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.319630 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.319657 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.319692 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.319717 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:26Z","lastTransitionTime":"2026-02-19T13:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.331294 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478e6971-05ac-43f2-99a2-cd93644c6227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62395c9ed735d7d32e2fd00a458fdc505da405d92284a65455ce229fb9e4ee90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cfql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lwqpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.357219 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f491f78c-b995-44c7-8395-41d8a2c4cf29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1177130c68cb37f6c2057bcefc307690fd3ee5cd4e5e1992044aad2a199ab83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T13:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d0852e62d89d3faff253fa098197693f68a63e4e6a901f8efa96ee6fc3ffd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f87f4e8e540b7dc30470d058f8386775650848a0c3ba2fe9a2c729641d3ee10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdde15de4ca84ff0cfaad7aedab162df25cea18befb3fe5135b5344d60be0eef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb080
bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb080bb22b2479975433f181bf6feedf767017f58ef52ceb4c77774d48fbc286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5b8625a6fb54e8f81f4e665afb9ff54619dea83afd70c032b152656efec464\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6bb3f122f08c1d03f285822e68aa83ecb5594ed201f2d24390d6e60df93a27e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T13:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T13:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z2rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhpx8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.373650 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gwkfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e143d63f-12e2-4d59-9d2d-11057486b27e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T13:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b766c80d17e9de5d2cdf9a527f94d11bfff3a4c7bdd7d2cfa2f437316142f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-19T13:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T13:10:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gwkfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:26Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.422576 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.422665 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.422686 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.422710 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.422727 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:26Z","lastTransitionTime":"2026-02-19T13:11:26Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.525660 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.525726 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.525737 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.525758 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.525771 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:26Z","lastTransitionTime":"2026-02-19T13:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.628078 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.628127 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.628142 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.628161 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.628175 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:26Z","lastTransitionTime":"2026-02-19T13:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.732131 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.732216 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.732240 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.732521 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.732548 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:26Z","lastTransitionTime":"2026-02-19T13:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.835989 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.836035 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.836046 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.836059 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.836068 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:26Z","lastTransitionTime":"2026-02-19T13:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.938333 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.938376 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.938384 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.938399 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.938409 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:26Z","lastTransitionTime":"2026-02-19T13:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.976746 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.976857 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:11:26 crc kubenswrapper[4861]: E0219 13:11:26.976899 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:11:26 crc kubenswrapper[4861]: I0219 13:11:26.976773 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:11:26 crc kubenswrapper[4861]: E0219 13:11:26.977062 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:11:26 crc kubenswrapper[4861]: E0219 13:11:26.977241 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.041368 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.041545 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.041576 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.041604 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.041625 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:27Z","lastTransitionTime":"2026-02-19T13:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.125442 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 00:23:33.610500458 +0000 UTC Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.145014 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.145072 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.145089 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.145111 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.145127 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:27Z","lastTransitionTime":"2026-02-19T13:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.247862 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.247926 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.247944 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.247980 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.247997 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:27Z","lastTransitionTime":"2026-02-19T13:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.350503 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.350561 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.350580 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.350604 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.350625 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:27Z","lastTransitionTime":"2026-02-19T13:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.453965 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.454081 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.454100 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.454124 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.454142 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:27Z","lastTransitionTime":"2026-02-19T13:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.557521 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.557645 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.557673 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.557706 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.557732 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:27Z","lastTransitionTime":"2026-02-19T13:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.660231 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.660303 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.660328 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.660364 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.660389 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:27Z","lastTransitionTime":"2026-02-19T13:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.763101 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.763143 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.763152 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.763168 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.763177 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:27Z","lastTransitionTime":"2026-02-19T13:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.865812 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.865879 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.865901 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.865930 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.865952 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:27Z","lastTransitionTime":"2026-02-19T13:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.968769 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.968810 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.968822 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.968837 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.968847 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:27Z","lastTransitionTime":"2026-02-19T13:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:27 crc kubenswrapper[4861]: I0219 13:11:27.976465 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:11:27 crc kubenswrapper[4861]: E0219 13:11:27.976693 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.071284 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.071341 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.071362 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.071392 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.071415 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:28Z","lastTransitionTime":"2026-02-19T13:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.125726 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 09:57:25.162404535 +0000 UTC Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.174067 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.174114 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.174128 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.174149 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.174161 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:28Z","lastTransitionTime":"2026-02-19T13:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.277672 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.277751 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.277776 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.277808 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.277831 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:28Z","lastTransitionTime":"2026-02-19T13:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.380574 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.380669 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.380695 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.380722 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.380739 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:28Z","lastTransitionTime":"2026-02-19T13:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.483510 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.483581 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.483603 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.483635 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.483658 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:28Z","lastTransitionTime":"2026-02-19T13:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.586491 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.586653 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.586681 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.586710 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.586730 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:28Z","lastTransitionTime":"2026-02-19T13:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.690872 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.691036 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.691058 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.691083 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.691100 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:28Z","lastTransitionTime":"2026-02-19T13:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.794751 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.794834 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.794858 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.794889 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.794907 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:28Z","lastTransitionTime":"2026-02-19T13:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.898225 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.898301 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.898317 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.898344 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.898361 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:28Z","lastTransitionTime":"2026-02-19T13:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.977030 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.977115 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:11:28 crc kubenswrapper[4861]: E0219 13:11:28.977187 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:11:28 crc kubenswrapper[4861]: I0219 13:11:28.977030 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:11:28 crc kubenswrapper[4861]: E0219 13:11:28.977267 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:11:28 crc kubenswrapper[4861]: E0219 13:11:28.977583 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.001347 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.001397 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.001488 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.001524 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.001546 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:29Z","lastTransitionTime":"2026-02-19T13:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.105057 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.105147 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.105171 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.105203 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.105227 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:29Z","lastTransitionTime":"2026-02-19T13:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.127074 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 12:57:40.584581179 +0000 UTC Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.207910 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.207989 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.208007 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.208031 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.208049 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:29Z","lastTransitionTime":"2026-02-19T13:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.311077 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.311200 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.311224 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.311250 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.311267 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:29Z","lastTransitionTime":"2026-02-19T13:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.414369 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.414395 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.414404 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.414441 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.414451 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:29Z","lastTransitionTime":"2026-02-19T13:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.516724 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.516813 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.516837 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.516870 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.516897 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:29Z","lastTransitionTime":"2026-02-19T13:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.619366 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.619459 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.619478 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.619501 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.619519 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:29Z","lastTransitionTime":"2026-02-19T13:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.728096 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.728141 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.728156 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.728176 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.728190 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:29Z","lastTransitionTime":"2026-02-19T13:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.830494 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.830548 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.830562 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.830584 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.830600 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:29Z","lastTransitionTime":"2026-02-19T13:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.933954 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.934014 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.934032 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.934058 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.934076 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:29Z","lastTransitionTime":"2026-02-19T13:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:29 crc kubenswrapper[4861]: I0219 13:11:29.976688 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:11:29 crc kubenswrapper[4861]: E0219 13:11:29.976876 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.036905 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.036973 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.036992 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.037020 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.037038 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:30Z","lastTransitionTime":"2026-02-19T13:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.127821 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 12:26:07.545160549 +0000 UTC Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.140106 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.140156 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.140168 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.140185 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.140197 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:30Z","lastTransitionTime":"2026-02-19T13:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.243117 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.243190 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.243213 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.243247 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.243272 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:30Z","lastTransitionTime":"2026-02-19T13:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.347655 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.347721 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.347739 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.347765 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.347785 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:30Z","lastTransitionTime":"2026-02-19T13:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.451442 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.451479 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.451488 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.451503 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.451514 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:30Z","lastTransitionTime":"2026-02-19T13:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.553936 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.553995 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.554012 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.554036 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.554052 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:30Z","lastTransitionTime":"2026-02-19T13:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.656308 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.656363 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.656381 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.656403 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.656462 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:30Z","lastTransitionTime":"2026-02-19T13:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.759927 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.760088 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.760108 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.760135 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.760152 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:30Z","lastTransitionTime":"2026-02-19T13:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.863190 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.863255 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.863275 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.863304 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.863321 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:30Z","lastTransitionTime":"2026-02-19T13:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.966504 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.966556 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.966572 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.966590 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.966605 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:30Z","lastTransitionTime":"2026-02-19T13:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.976829 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.976859 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.976903 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:11:30 crc kubenswrapper[4861]: E0219 13:11:30.977368 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:11:30 crc kubenswrapper[4861]: E0219 13:11:30.977463 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:11:30 crc kubenswrapper[4861]: I0219 13:11:30.977841 4861 scope.go:117] "RemoveContainer" containerID="eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512" Feb 19 13:11:30 crc kubenswrapper[4861]: E0219 13:11:30.977849 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:11:30 crc kubenswrapper[4861]: E0219 13:11:30.978024 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wb9bn_openshift-ovn-kubernetes(2b4f740d-a1ca-450f-adad-afb42efe0c76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.068194 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.068270 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.068333 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.068367 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.068382 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:31Z","lastTransitionTime":"2026-02-19T13:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:31 crc kubenswrapper[4861]: E0219 13:11:31.083054 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:31Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.087308 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.087339 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.087372 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.087390 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.087402 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:31Z","lastTransitionTime":"2026-02-19T13:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:31 crc kubenswrapper[4861]: E0219 13:11:31.099266 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:31Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.102741 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.102784 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.102798 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.102818 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.102829 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:31Z","lastTransitionTime":"2026-02-19T13:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:31 crc kubenswrapper[4861]: E0219 13:11:31.117663 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:31Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.121543 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.121624 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.121644 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.121663 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.121678 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:31Z","lastTransitionTime":"2026-02-19T13:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.128329 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 22:22:05.291752417 +0000 UTC Feb 19 13:11:31 crc kubenswrapper[4861]: E0219 13:11:31.136544 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",
\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:31Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.140596 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.140644 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.140656 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.140674 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.140686 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:31Z","lastTransitionTime":"2026-02-19T13:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:31 crc kubenswrapper[4861]: E0219 13:11:31.153276 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T13:11:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b471f118-2cdd-4d29-a235-2083985944a7\\\",\\\"systemUUID\\\":\\\"4c20662d-b7a0-4257-bc2e-597d65530c8e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T13:11:31Z is after 2025-08-24T17:21:41Z" Feb 19 13:11:31 crc kubenswrapper[4861]: E0219 13:11:31.153455 4861 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.154886 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.154923 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.154938 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.154958 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.155011 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:31Z","lastTransitionTime":"2026-02-19T13:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.258705 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.258769 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.258787 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.258810 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.258827 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:31Z","lastTransitionTime":"2026-02-19T13:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.361025 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.361063 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.361072 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.361085 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.361094 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:31Z","lastTransitionTime":"2026-02-19T13:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.463735 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.463781 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.463793 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.463810 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.463822 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:31Z","lastTransitionTime":"2026-02-19T13:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.565996 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.566041 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.566053 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.566069 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.566081 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:31Z","lastTransitionTime":"2026-02-19T13:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.673196 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.673257 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.673273 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.673296 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.673314 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:31Z","lastTransitionTime":"2026-02-19T13:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.772848 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/163fc0e2-f792-4062-88a7-3ed764a08103-metrics-certs\") pod \"network-metrics-daemon-kjwt5\" (UID: \"163fc0e2-f792-4062-88a7-3ed764a08103\") " pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:11:31 crc kubenswrapper[4861]: E0219 13:11:31.773026 4861 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 13:11:31 crc kubenswrapper[4861]: E0219 13:11:31.773137 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/163fc0e2-f792-4062-88a7-3ed764a08103-metrics-certs podName:163fc0e2-f792-4062-88a7-3ed764a08103 nodeName:}" failed. No retries permitted until 2026-02-19 13:12:35.773114289 +0000 UTC m=+170.434217527 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/163fc0e2-f792-4062-88a7-3ed764a08103-metrics-certs") pod "network-metrics-daemon-kjwt5" (UID: "163fc0e2-f792-4062-88a7-3ed764a08103") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.775222 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.775266 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.775276 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.775293 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.775305 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:31Z","lastTransitionTime":"2026-02-19T13:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.877268 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.877330 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.877346 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.877367 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.877383 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:31Z","lastTransitionTime":"2026-02-19T13:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.976627 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:11:31 crc kubenswrapper[4861]: E0219 13:11:31.976806 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.980552 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.980599 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.980619 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.980642 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:31 crc kubenswrapper[4861]: I0219 13:11:31.980660 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:31Z","lastTransitionTime":"2026-02-19T13:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.082842 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.082880 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.082890 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.082904 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.082915 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:32Z","lastTransitionTime":"2026-02-19T13:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.129264 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 16:44:05.31643147 +0000 UTC Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.185517 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.185562 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.185576 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.185593 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.185606 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:32Z","lastTransitionTime":"2026-02-19T13:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.287576 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.287624 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.287636 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.287654 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.287667 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:32Z","lastTransitionTime":"2026-02-19T13:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.391781 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.391834 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.391847 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.391867 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.391881 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:32Z","lastTransitionTime":"2026-02-19T13:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.494405 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.494466 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.494478 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.494495 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.494507 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:32Z","lastTransitionTime":"2026-02-19T13:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.596972 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.597038 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.597048 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.597065 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.597078 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:32Z","lastTransitionTime":"2026-02-19T13:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.699816 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.699854 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.699863 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.699876 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.699888 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:32Z","lastTransitionTime":"2026-02-19T13:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.802570 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.802627 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.802636 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.802654 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.802668 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:32Z","lastTransitionTime":"2026-02-19T13:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.905461 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.905494 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.905504 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.905520 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.905533 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:32Z","lastTransitionTime":"2026-02-19T13:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.976450 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.976511 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:11:32 crc kubenswrapper[4861]: I0219 13:11:32.976642 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:11:32 crc kubenswrapper[4861]: E0219 13:11:32.976755 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:11:32 crc kubenswrapper[4861]: E0219 13:11:32.976912 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:11:32 crc kubenswrapper[4861]: E0219 13:11:32.977004 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.008343 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.008463 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.008491 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.008523 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.008549 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:33Z","lastTransitionTime":"2026-02-19T13:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.111953 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.112032 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.112056 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.112092 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.112116 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:33Z","lastTransitionTime":"2026-02-19T13:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.130098 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 19:26:26.253258117 +0000 UTC Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.215321 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.215379 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.215396 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.215446 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.215465 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:33Z","lastTransitionTime":"2026-02-19T13:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.318781 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.318819 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.318829 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.318868 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.318878 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:33Z","lastTransitionTime":"2026-02-19T13:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.421894 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.421945 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.421962 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.422002 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.422022 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:33Z","lastTransitionTime":"2026-02-19T13:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.524920 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.524968 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.524985 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.525008 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.525026 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:33Z","lastTransitionTime":"2026-02-19T13:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.627043 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.627076 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.627084 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.627098 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.627109 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:33Z","lastTransitionTime":"2026-02-19T13:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.729573 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.729620 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.729631 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.729649 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.729663 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:33Z","lastTransitionTime":"2026-02-19T13:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.832246 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.832272 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.832281 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.832295 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.832303 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:33Z","lastTransitionTime":"2026-02-19T13:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.934688 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.934742 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.934755 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.934772 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.934784 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:33Z","lastTransitionTime":"2026-02-19T13:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:33 crc kubenswrapper[4861]: I0219 13:11:33.976526 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:11:33 crc kubenswrapper[4861]: E0219 13:11:33.976756 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.037547 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.037601 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.037613 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.037631 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.037643 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:34Z","lastTransitionTime":"2026-02-19T13:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.131164 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 21:21:36.559419664 +0000 UTC Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.139971 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.140017 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.140032 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.140052 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.140068 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:34Z","lastTransitionTime":"2026-02-19T13:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.243152 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.243231 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.243260 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.243290 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.243311 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:34Z","lastTransitionTime":"2026-02-19T13:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.345902 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.345950 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.345962 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.345979 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.345992 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:34Z","lastTransitionTime":"2026-02-19T13:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.449149 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.449204 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.449223 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.449246 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.449263 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:34Z","lastTransitionTime":"2026-02-19T13:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.552349 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.552476 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.552502 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.552531 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.552553 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:34Z","lastTransitionTime":"2026-02-19T13:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.655102 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.655144 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.655155 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.655172 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.655183 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:34Z","lastTransitionTime":"2026-02-19T13:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.758132 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.758164 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.758177 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.758193 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.758204 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:34Z","lastTransitionTime":"2026-02-19T13:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.860261 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.860291 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.860300 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.860313 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.860321 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:34Z","lastTransitionTime":"2026-02-19T13:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.963367 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.963401 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.963413 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.963456 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.963470 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:34Z","lastTransitionTime":"2026-02-19T13:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.976837 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.976866 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:11:34 crc kubenswrapper[4861]: I0219 13:11:34.976978 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:11:34 crc kubenswrapper[4861]: E0219 13:11:34.977174 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:11:34 crc kubenswrapper[4861]: E0219 13:11:34.977252 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:11:34 crc kubenswrapper[4861]: E0219 13:11:34.977357 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.065797 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.065842 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.065857 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.065877 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.065893 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:35Z","lastTransitionTime":"2026-02-19T13:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.132022 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 02:35:22.452988524 +0000 UTC Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.168558 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.168599 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.168612 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.168628 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.168641 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:35Z","lastTransitionTime":"2026-02-19T13:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.271890 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.271928 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.271937 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.271951 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.271961 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:35Z","lastTransitionTime":"2026-02-19T13:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.375200 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.375274 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.375295 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.375326 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.375353 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:35Z","lastTransitionTime":"2026-02-19T13:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.477461 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.477506 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.477514 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.477527 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.477535 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:35Z","lastTransitionTime":"2026-02-19T13:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.580936 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.581001 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.581025 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.581056 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.581079 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:35Z","lastTransitionTime":"2026-02-19T13:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.684999 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.685081 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.685105 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.685136 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.685159 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:35Z","lastTransitionTime":"2026-02-19T13:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.787226 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.787266 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.787278 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.787295 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.787307 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:35Z","lastTransitionTime":"2026-02-19T13:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.889862 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.889931 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.889951 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.889979 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.890016 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:35Z","lastTransitionTime":"2026-02-19T13:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.976353 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:11:35 crc kubenswrapper[4861]: E0219 13:11:35.976645 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.993702 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.993737 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.993748 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.993764 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:35 crc kubenswrapper[4861]: I0219 13:11:35.993775 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:35Z","lastTransitionTime":"2026-02-19T13:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.000392 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-gwkfm" podStartSLOduration=83.000359507 podStartE2EDuration="1m23.000359507s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:11:35.999967035 +0000 UTC m=+110.661070283" watchObservedRunningTime="2026-02-19 13:11:36.000359507 +0000 UTC m=+110.661462795" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.049733 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podStartSLOduration=83.049703798 podStartE2EDuration="1m23.049703798s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:11:36.049338306 +0000 UTC m=+110.710441554" watchObservedRunningTime="2026-02-19 13:11:36.049703798 +0000 UTC m=+110.710807066" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.074059 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-mhpx8" podStartSLOduration=83.074030282 podStartE2EDuration="1m23.074030282s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:11:36.073469556 +0000 UTC m=+110.734572814" watchObservedRunningTime="2026-02-19 13:11:36.074030282 +0000 UTC m=+110.735133550" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.096009 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:36 crc 
kubenswrapper[4861]: I0219 13:11:36.096068 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.096084 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.096107 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.096122 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:36Z","lastTransitionTime":"2026-02-19T13:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.125637 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=30.125618361 podStartE2EDuration="30.125618361s" podCreationTimestamp="2026-02-19 13:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:11:36.124984931 +0000 UTC m=+110.786088189" watchObservedRunningTime="2026-02-19 13:11:36.125618361 +0000 UTC m=+110.786721589" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.136315 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 17:55:16.276267282 +0000 UTC Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.175250 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-tcrxv" podStartSLOduration=84.175229719 
podStartE2EDuration="1m24.175229719s" podCreationTimestamp="2026-02-19 13:10:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:11:36.174759365 +0000 UTC m=+110.835862603" watchObservedRunningTime="2026-02-19 13:11:36.175229719 +0000 UTC m=+110.836332947" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.197946 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.197997 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.198009 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.198027 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.198039 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:36Z","lastTransitionTime":"2026-02-19T13:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.218398 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=41.218377702 podStartE2EDuration="41.218377702s" podCreationTimestamp="2026-02-19 13:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:11:36.203911476 +0000 UTC m=+110.865014704" watchObservedRunningTime="2026-02-19 13:11:36.218377702 +0000 UTC m=+110.879480930" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.232487 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=87.232467608 podStartE2EDuration="1m27.232467608s" podCreationTimestamp="2026-02-19 13:10:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:11:36.218598369 +0000 UTC m=+110.879701587" watchObservedRunningTime="2026-02-19 13:11:36.232467608 +0000 UTC m=+110.893570846" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.249656 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-ffskh" podStartSLOduration=83.249639686 podStartE2EDuration="1m23.249639686s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:11:36.249450681 +0000 UTC m=+110.910553919" watchObservedRunningTime="2026-02-19 13:11:36.249639686 +0000 UTC m=+110.910742914" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.291543 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8rwhb" 
podStartSLOduration=83.291521571 podStartE2EDuration="1m23.291521571s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:11:36.291366717 +0000 UTC m=+110.952469955" watchObservedRunningTime="2026-02-19 13:11:36.291521571 +0000 UTC m=+110.952624799" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.300054 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.300100 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.300114 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.300131 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.300143 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:36Z","lastTransitionTime":"2026-02-19T13:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.305085 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=82.305065831 podStartE2EDuration="1m22.305065831s" podCreationTimestamp="2026-02-19 13:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:11:36.304238986 +0000 UTC m=+110.965342214" watchObservedRunningTime="2026-02-19 13:11:36.305065831 +0000 UTC m=+110.966169059" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.315498 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=59.315485375 podStartE2EDuration="59.315485375s" podCreationTimestamp="2026-02-19 13:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:11:36.314808315 +0000 UTC m=+110.975911543" watchObservedRunningTime="2026-02-19 13:11:36.315485375 +0000 UTC m=+110.976588603" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.401789 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.401818 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.401826 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.401839 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.401847 4861 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:36Z","lastTransitionTime":"2026-02-19T13:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.504729 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.504771 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.504780 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.504794 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.504805 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:36Z","lastTransitionTime":"2026-02-19T13:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.606281 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.606314 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.606322 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.606334 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.606343 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:36Z","lastTransitionTime":"2026-02-19T13:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.708549 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.708609 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.708626 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.708650 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.708666 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:36Z","lastTransitionTime":"2026-02-19T13:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.811242 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.811298 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.811310 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.811327 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.811341 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:36Z","lastTransitionTime":"2026-02-19T13:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.917494 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.917561 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.917579 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.917604 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.917621 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:36Z","lastTransitionTime":"2026-02-19T13:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.975929 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:11:36 crc kubenswrapper[4861]: E0219 13:11:36.976037 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.976125 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:11:36 crc kubenswrapper[4861]: I0219 13:11:36.976163 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:11:36 crc kubenswrapper[4861]: E0219 13:11:36.976535 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:11:36 crc kubenswrapper[4861]: E0219 13:11:36.976809 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.020359 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.020444 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.020462 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.020483 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.020499 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:37Z","lastTransitionTime":"2026-02-19T13:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.123241 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.123290 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.123299 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.123314 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.123323 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:37Z","lastTransitionTime":"2026-02-19T13:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.136834 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 07:08:59.9225429 +0000 UTC Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.226639 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.226682 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.226692 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.226708 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.226721 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:37Z","lastTransitionTime":"2026-02-19T13:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.330053 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.330127 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.330141 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.330163 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.330558 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:37Z","lastTransitionTime":"2026-02-19T13:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.434515 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.434561 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.434569 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.434584 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.434593 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:37Z","lastTransitionTime":"2026-02-19T13:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.536811 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.536865 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.536881 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.536904 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.536921 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:37Z","lastTransitionTime":"2026-02-19T13:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.639438 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.639481 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.639497 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.639517 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.639530 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:37Z","lastTransitionTime":"2026-02-19T13:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.745564 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.745604 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.745614 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.745627 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.745636 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:37Z","lastTransitionTime":"2026-02-19T13:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.848712 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.848741 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.848750 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.848764 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.848775 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:37Z","lastTransitionTime":"2026-02-19T13:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.951795 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.951838 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.951850 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.951867 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.951893 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:37Z","lastTransitionTime":"2026-02-19T13:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:37 crc kubenswrapper[4861]: I0219 13:11:37.976939 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:11:37 crc kubenswrapper[4861]: E0219 13:11:37.977148 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.054154 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.054251 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.054288 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.054316 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.054339 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:38Z","lastTransitionTime":"2026-02-19T13:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.137048 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 01:35:14.865284819 +0000 UTC Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.156678 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.156738 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.156756 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.156781 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.156798 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:38Z","lastTransitionTime":"2026-02-19T13:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.259189 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.259218 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.259253 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.259267 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.259278 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:38Z","lastTransitionTime":"2026-02-19T13:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.361923 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.361980 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.362002 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.362032 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.362051 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:38Z","lastTransitionTime":"2026-02-19T13:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.464829 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.464888 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.464906 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.464934 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.464956 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:38Z","lastTransitionTime":"2026-02-19T13:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.568773 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.568885 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.568904 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.568932 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.568952 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:38Z","lastTransitionTime":"2026-02-19T13:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.671911 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.671977 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.671990 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.672008 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.672020 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:38Z","lastTransitionTime":"2026-02-19T13:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.774697 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.774767 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.774779 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.774824 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.774839 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:38Z","lastTransitionTime":"2026-02-19T13:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.878546 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.878620 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.878635 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.878652 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.878664 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:38Z","lastTransitionTime":"2026-02-19T13:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.976825 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.976876 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.976912 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:11:38 crc kubenswrapper[4861]: E0219 13:11:38.976989 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:11:38 crc kubenswrapper[4861]: E0219 13:11:38.977080 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:11:38 crc kubenswrapper[4861]: E0219 13:11:38.977192 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.980939 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.980982 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.980994 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.981008 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:38 crc kubenswrapper[4861]: I0219 13:11:38.981018 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:38Z","lastTransitionTime":"2026-02-19T13:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.083591 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.083671 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.083696 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.083734 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.083758 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:39Z","lastTransitionTime":"2026-02-19T13:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.137987 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 02:07:27.152220863 +0000 UTC Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.186318 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.186382 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.186410 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.186494 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.186519 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:39Z","lastTransitionTime":"2026-02-19T13:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.289316 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.289354 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.289363 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.289377 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.289385 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:39Z","lastTransitionTime":"2026-02-19T13:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.392801 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.392862 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.392879 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.392904 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.392920 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:39Z","lastTransitionTime":"2026-02-19T13:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.496263 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.496335 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.496375 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.496406 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.496484 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:39Z","lastTransitionTime":"2026-02-19T13:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.599152 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.599193 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.599202 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.599215 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.599225 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:39Z","lastTransitionTime":"2026-02-19T13:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.701881 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.701930 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.701940 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.701958 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.701969 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:39Z","lastTransitionTime":"2026-02-19T13:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.804878 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.804928 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.804939 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.804957 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.804970 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:39Z","lastTransitionTime":"2026-02-19T13:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.907593 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.907643 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.907660 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.907685 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.907702 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:39Z","lastTransitionTime":"2026-02-19T13:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:39 crc kubenswrapper[4861]: I0219 13:11:39.976597 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:11:39 crc kubenswrapper[4861]: E0219 13:11:39.977040 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.010082 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.010142 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.010159 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.010181 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.010197 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:40Z","lastTransitionTime":"2026-02-19T13:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.118219 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.118297 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.118319 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.118346 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.118373 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:40Z","lastTransitionTime":"2026-02-19T13:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.138241 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 06:32:30.303066604 +0000 UTC Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.221513 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.221593 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.221618 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.221650 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.221674 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:40Z","lastTransitionTime":"2026-02-19T13:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.326266 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.326333 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.326356 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.326385 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.326407 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:40Z","lastTransitionTime":"2026-02-19T13:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.429394 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.429508 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.429542 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.429574 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.429595 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:40Z","lastTransitionTime":"2026-02-19T13:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.533582 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.533660 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.533679 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.533705 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.533724 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:40Z","lastTransitionTime":"2026-02-19T13:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.637823 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.637956 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.637979 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.638005 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.638027 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:40Z","lastTransitionTime":"2026-02-19T13:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.741186 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.741269 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.741290 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.741318 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.741336 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:40Z","lastTransitionTime":"2026-02-19T13:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.844003 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.844056 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.844074 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.844094 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.844110 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:40Z","lastTransitionTime":"2026-02-19T13:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.947204 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.947241 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.947252 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.947269 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.947279 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:40Z","lastTransitionTime":"2026-02-19T13:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.977055 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.977078 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:11:40 crc kubenswrapper[4861]: E0219 13:11:40.977253 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:11:40 crc kubenswrapper[4861]: I0219 13:11:40.977091 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:11:40 crc kubenswrapper[4861]: E0219 13:11:40.977477 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:11:40 crc kubenswrapper[4861]: E0219 13:11:40.977532 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.050254 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.050344 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.050380 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.050458 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.050484 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:41Z","lastTransitionTime":"2026-02-19T13:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.139137 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 10:16:40.718883713 +0000 UTC Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.153881 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.153962 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.153982 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.154011 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.154035 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:41Z","lastTransitionTime":"2026-02-19T13:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.257150 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.257229 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.257247 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.257278 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.257299 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:41Z","lastTransitionTime":"2026-02-19T13:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.293865 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.293912 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.293923 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.293940 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.293954 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T13:11:41Z","lastTransitionTime":"2026-02-19T13:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.366834 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-75rfq"] Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.367550 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-75rfq" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.370708 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.370768 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.370994 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.371124 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.477993 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2cff54c4-e23c-4f4e-a2af-f7776b35e680-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-75rfq\" (UID: \"2cff54c4-e23c-4f4e-a2af-f7776b35e680\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-75rfq" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.478050 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2cff54c4-e23c-4f4e-a2af-f7776b35e680-service-ca\") pod \"cluster-version-operator-5c965bbfc6-75rfq\" (UID: \"2cff54c4-e23c-4f4e-a2af-f7776b35e680\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-75rfq" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.478073 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/2cff54c4-e23c-4f4e-a2af-f7776b35e680-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-75rfq\" (UID: \"2cff54c4-e23c-4f4e-a2af-f7776b35e680\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-75rfq" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.478108 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2cff54c4-e23c-4f4e-a2af-f7776b35e680-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-75rfq\" (UID: \"2cff54c4-e23c-4f4e-a2af-f7776b35e680\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-75rfq" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.478129 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cff54c4-e23c-4f4e-a2af-f7776b35e680-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-75rfq\" (UID: \"2cff54c4-e23c-4f4e-a2af-f7776b35e680\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-75rfq" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.579321 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2cff54c4-e23c-4f4e-a2af-f7776b35e680-service-ca\") pod \"cluster-version-operator-5c965bbfc6-75rfq\" (UID: \"2cff54c4-e23c-4f4e-a2af-f7776b35e680\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-75rfq" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.579363 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2cff54c4-e23c-4f4e-a2af-f7776b35e680-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-75rfq\" (UID: \"2cff54c4-e23c-4f4e-a2af-f7776b35e680\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-75rfq" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.579394 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2cff54c4-e23c-4f4e-a2af-f7776b35e680-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-75rfq\" (UID: \"2cff54c4-e23c-4f4e-a2af-f7776b35e680\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-75rfq" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.579439 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cff54c4-e23c-4f4e-a2af-f7776b35e680-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-75rfq\" (UID: \"2cff54c4-e23c-4f4e-a2af-f7776b35e680\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-75rfq" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.579548 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2cff54c4-e23c-4f4e-a2af-f7776b35e680-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-75rfq\" (UID: \"2cff54c4-e23c-4f4e-a2af-f7776b35e680\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-75rfq" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.579612 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2cff54c4-e23c-4f4e-a2af-f7776b35e680-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-75rfq\" (UID: \"2cff54c4-e23c-4f4e-a2af-f7776b35e680\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-75rfq" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.579763 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/2cff54c4-e23c-4f4e-a2af-f7776b35e680-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-75rfq\" (UID: \"2cff54c4-e23c-4f4e-a2af-f7776b35e680\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-75rfq" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.580404 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2cff54c4-e23c-4f4e-a2af-f7776b35e680-service-ca\") pod \"cluster-version-operator-5c965bbfc6-75rfq\" (UID: \"2cff54c4-e23c-4f4e-a2af-f7776b35e680\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-75rfq" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.589113 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cff54c4-e23c-4f4e-a2af-f7776b35e680-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-75rfq\" (UID: \"2cff54c4-e23c-4f4e-a2af-f7776b35e680\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-75rfq" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.604095 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2cff54c4-e23c-4f4e-a2af-f7776b35e680-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-75rfq\" (UID: \"2cff54c4-e23c-4f4e-a2af-f7776b35e680\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-75rfq" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.685664 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-75rfq" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.976148 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:11:41 crc kubenswrapper[4861]: E0219 13:11:41.976779 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:11:41 crc kubenswrapper[4861]: I0219 13:11:41.977002 4861 scope.go:117] "RemoveContainer" containerID="eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512" Feb 19 13:11:41 crc kubenswrapper[4861]: E0219 13:11:41.977218 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wb9bn_openshift-ovn-kubernetes(2b4f740d-a1ca-450f-adad-afb42efe0c76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" Feb 19 13:11:42 crc kubenswrapper[4861]: I0219 13:11:42.139507 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 02:26:53.679521607 +0000 UTC Feb 19 13:11:42 crc kubenswrapper[4861]: I0219 13:11:42.139558 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 19 13:11:42 crc kubenswrapper[4861]: I0219 13:11:42.145818 4861 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 19 13:11:42 crc kubenswrapper[4861]: I0219 13:11:42.615518 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-75rfq" 
event={"ID":"2cff54c4-e23c-4f4e-a2af-f7776b35e680","Type":"ContainerStarted","Data":"43056ae6b54f7a01657057cb25e2df79a2eae9e1a41d680d333f21f6b95441f4"} Feb 19 13:11:42 crc kubenswrapper[4861]: I0219 13:11:42.615591 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-75rfq" event={"ID":"2cff54c4-e23c-4f4e-a2af-f7776b35e680","Type":"ContainerStarted","Data":"e076209a5e93d87e5579bdbac4c877c495a79c01d71bd03847653600840a3fe0"} Feb 19 13:11:42 crc kubenswrapper[4861]: I0219 13:11:42.976850 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:11:42 crc kubenswrapper[4861]: E0219 13:11:42.977050 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:11:42 crc kubenswrapper[4861]: I0219 13:11:42.976850 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:11:42 crc kubenswrapper[4861]: I0219 13:11:42.976850 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:11:42 crc kubenswrapper[4861]: E0219 13:11:42.977233 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:11:42 crc kubenswrapper[4861]: E0219 13:11:42.977503 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:11:43 crc kubenswrapper[4861]: I0219 13:11:43.976574 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:11:43 crc kubenswrapper[4861]: E0219 13:11:43.976787 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:11:44 crc kubenswrapper[4861]: I0219 13:11:44.976906 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:11:44 crc kubenswrapper[4861]: I0219 13:11:44.977003 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:11:44 crc kubenswrapper[4861]: I0219 13:11:44.976915 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:11:44 crc kubenswrapper[4861]: E0219 13:11:44.977243 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:11:44 crc kubenswrapper[4861]: E0219 13:11:44.977548 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:11:44 crc kubenswrapper[4861]: E0219 13:11:44.977731 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:11:45 crc kubenswrapper[4861]: I0219 13:11:45.976170 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:11:45 crc kubenswrapper[4861]: E0219 13:11:45.976905 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:11:46 crc kubenswrapper[4861]: E0219 13:11:46.014721 4861 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 19 13:11:46 crc kubenswrapper[4861]: E0219 13:11:46.115266 4861 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 13:11:46 crc kubenswrapper[4861]: I0219 13:11:46.976255 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:11:46 crc kubenswrapper[4861]: I0219 13:11:46.976295 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:11:46 crc kubenswrapper[4861]: E0219 13:11:46.976494 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:11:46 crc kubenswrapper[4861]: I0219 13:11:46.976624 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:11:46 crc kubenswrapper[4861]: E0219 13:11:46.976804 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:11:46 crc kubenswrapper[4861]: E0219 13:11:46.977148 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:11:47 crc kubenswrapper[4861]: I0219 13:11:47.976798 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:11:47 crc kubenswrapper[4861]: E0219 13:11:47.976944 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:11:48 crc kubenswrapper[4861]: I0219 13:11:48.976717 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:11:48 crc kubenswrapper[4861]: E0219 13:11:48.977572 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:11:48 crc kubenswrapper[4861]: I0219 13:11:48.976783 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:11:48 crc kubenswrapper[4861]: E0219 13:11:48.977673 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:11:48 crc kubenswrapper[4861]: I0219 13:11:48.976714 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:11:48 crc kubenswrapper[4861]: E0219 13:11:48.977761 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:11:49 crc kubenswrapper[4861]: I0219 13:11:49.646545 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ffskh_1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb/kube-multus/1.log" Feb 19 13:11:49 crc kubenswrapper[4861]: I0219 13:11:49.647561 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ffskh_1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb/kube-multus/0.log" Feb 19 13:11:49 crc kubenswrapper[4861]: I0219 13:11:49.647683 4861 generic.go:334] "Generic (PLEG): container finished" podID="1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb" containerID="4cfa2d69da9a7c9b0d96ca3c642a204c152a426e9aba16467134005fe25c907a" exitCode=1 Feb 19 13:11:49 crc kubenswrapper[4861]: I0219 13:11:49.647739 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ffskh" event={"ID":"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb","Type":"ContainerDied","Data":"4cfa2d69da9a7c9b0d96ca3c642a204c152a426e9aba16467134005fe25c907a"} Feb 19 13:11:49 crc kubenswrapper[4861]: I0219 13:11:49.647806 4861 scope.go:117] "RemoveContainer" containerID="2dd13ac7c222df19231190b8c21b8a3bd25fe6ac37f1d396956a54690f2e749b" Feb 19 13:11:49 crc kubenswrapper[4861]: I0219 13:11:49.648630 4861 scope.go:117] "RemoveContainer" containerID="4cfa2d69da9a7c9b0d96ca3c642a204c152a426e9aba16467134005fe25c907a" Feb 19 13:11:49 crc kubenswrapper[4861]: E0219 13:11:49.648966 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-ffskh_openshift-multus(1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb)\"" pod="openshift-multus/multus-ffskh" podUID="1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb" Feb 19 13:11:49 crc kubenswrapper[4861]: I0219 13:11:49.684897 4861 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-75rfq" podStartSLOduration=96.684872101 podStartE2EDuration="1m36.684872101s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:11:42.645386637 +0000 UTC m=+117.306489925" watchObservedRunningTime="2026-02-19 13:11:49.684872101 +0000 UTC m=+124.345975369" Feb 19 13:11:49 crc kubenswrapper[4861]: I0219 13:11:49.976383 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:11:49 crc kubenswrapper[4861]: E0219 13:11:49.976628 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:11:50 crc kubenswrapper[4861]: I0219 13:11:50.654090 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ffskh_1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb/kube-multus/1.log" Feb 19 13:11:50 crc kubenswrapper[4861]: I0219 13:11:50.976792 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:11:50 crc kubenswrapper[4861]: I0219 13:11:50.976871 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:11:50 crc kubenswrapper[4861]: I0219 13:11:50.976792 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:11:50 crc kubenswrapper[4861]: E0219 13:11:50.976966 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:11:50 crc kubenswrapper[4861]: E0219 13:11:50.977097 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:11:50 crc kubenswrapper[4861]: E0219 13:11:50.977278 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:11:51 crc kubenswrapper[4861]: E0219 13:11:51.117308 4861 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 13:11:51 crc kubenswrapper[4861]: I0219 13:11:51.976079 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:11:51 crc kubenswrapper[4861]: E0219 13:11:51.976515 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:11:52 crc kubenswrapper[4861]: I0219 13:11:52.976768 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:11:52 crc kubenswrapper[4861]: E0219 13:11:52.977056 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:11:52 crc kubenswrapper[4861]: I0219 13:11:52.977164 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:11:52 crc kubenswrapper[4861]: I0219 13:11:52.977285 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:11:52 crc kubenswrapper[4861]: E0219 13:11:52.978207 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:11:52 crc kubenswrapper[4861]: E0219 13:11:52.978326 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:11:52 crc kubenswrapper[4861]: I0219 13:11:52.978974 4861 scope.go:117] "RemoveContainer" containerID="eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512" Feb 19 13:11:53 crc kubenswrapper[4861]: I0219 13:11:53.675523 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wb9bn_2b4f740d-a1ca-450f-adad-afb42efe0c76/ovnkube-controller/3.log" Feb 19 13:11:53 crc kubenswrapper[4861]: I0219 13:11:53.679173 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" event={"ID":"2b4f740d-a1ca-450f-adad-afb42efe0c76","Type":"ContainerStarted","Data":"09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861"} Feb 19 13:11:53 crc kubenswrapper[4861]: I0219 13:11:53.679585 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:11:53 crc kubenswrapper[4861]: I0219 13:11:53.718836 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" podStartSLOduration=100.718811271 podStartE2EDuration="1m40.718811271s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 
13:11:53.718209443 +0000 UTC m=+128.379312691" watchObservedRunningTime="2026-02-19 13:11:53.718811271 +0000 UTC m=+128.379914499" Feb 19 13:11:53 crc kubenswrapper[4861]: I0219 13:11:53.880225 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kjwt5"] Feb 19 13:11:53 crc kubenswrapper[4861]: I0219 13:11:53.880383 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:11:53 crc kubenswrapper[4861]: E0219 13:11:53.880580 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:11:54 crc kubenswrapper[4861]: I0219 13:11:54.976311 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:11:54 crc kubenswrapper[4861]: I0219 13:11:54.976366 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:11:54 crc kubenswrapper[4861]: E0219 13:11:54.976877 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:11:54 crc kubenswrapper[4861]: I0219 13:11:54.976468 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:11:54 crc kubenswrapper[4861]: E0219 13:11:54.976960 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:11:54 crc kubenswrapper[4861]: E0219 13:11:54.977205 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:11:55 crc kubenswrapper[4861]: I0219 13:11:55.976578 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:11:55 crc kubenswrapper[4861]: E0219 13:11:55.979014 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:11:56 crc kubenswrapper[4861]: E0219 13:11:56.118110 4861 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Feb 19 13:11:56 crc kubenswrapper[4861]: I0219 13:11:56.976760 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:11:56 crc kubenswrapper[4861]: I0219 13:11:56.976903 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:11:56 crc kubenswrapper[4861]: E0219 13:11:56.977055 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:11:56 crc kubenswrapper[4861]: E0219 13:11:56.977210 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:11:56 crc kubenswrapper[4861]: I0219 13:11:56.977384 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:11:56 crc kubenswrapper[4861]: E0219 13:11:56.977642 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:11:57 crc kubenswrapper[4861]: I0219 13:11:57.976707 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:11:57 crc kubenswrapper[4861]: E0219 13:11:57.976993 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:11:58 crc kubenswrapper[4861]: I0219 13:11:58.976970 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:11:58 crc kubenswrapper[4861]: I0219 13:11:58.977123 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:11:58 crc kubenswrapper[4861]: E0219 13:11:58.977208 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:11:58 crc kubenswrapper[4861]: I0219 13:11:58.977321 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:11:58 crc kubenswrapper[4861]: E0219 13:11:58.977372 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:11:58 crc kubenswrapper[4861]: E0219 13:11:58.977670 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:11:59 crc kubenswrapper[4861]: I0219 13:11:59.976905 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:11:59 crc kubenswrapper[4861]: E0219 13:11:59.977386 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:12:00 crc kubenswrapper[4861]: I0219 13:12:00.977167 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:12:00 crc kubenswrapper[4861]: I0219 13:12:00.977196 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:12:00 crc kubenswrapper[4861]: E0219 13:12:00.977578 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:12:00 crc kubenswrapper[4861]: I0219 13:12:00.977214 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:12:00 crc kubenswrapper[4861]: E0219 13:12:00.977654 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:12:00 crc kubenswrapper[4861]: E0219 13:12:00.977981 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:12:01 crc kubenswrapper[4861]: E0219 13:12:01.119669 4861 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 13:12:02 crc kubenswrapper[4861]: I0219 13:12:02.025155 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:12:02 crc kubenswrapper[4861]: E0219 13:12:02.025560 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:12:02 crc kubenswrapper[4861]: I0219 13:12:02.025928 4861 scope.go:117] "RemoveContainer" containerID="4cfa2d69da9a7c9b0d96ca3c642a204c152a426e9aba16467134005fe25c907a" Feb 19 13:12:02 crc kubenswrapper[4861]: I0219 13:12:02.976573 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:12:02 crc kubenswrapper[4861]: I0219 13:12:02.976574 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:12:02 crc kubenswrapper[4861]: E0219 13:12:02.977368 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:12:02 crc kubenswrapper[4861]: I0219 13:12:02.976574 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:12:02 crc kubenswrapper[4861]: E0219 13:12:02.977568 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:12:02 crc kubenswrapper[4861]: E0219 13:12:02.977869 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:12:03 crc kubenswrapper[4861]: I0219 13:12:03.046557 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ffskh_1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb/kube-multus/1.log" Feb 19 13:12:03 crc kubenswrapper[4861]: I0219 13:12:03.046673 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ffskh" event={"ID":"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb","Type":"ContainerStarted","Data":"1fb40728a849ae1a3b7a5f7e05b9b5e01c9049dfac4cd91ea977686bd736c79b"} Feb 19 13:12:03 crc kubenswrapper[4861]: I0219 13:12:03.976602 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:12:03 crc kubenswrapper[4861]: E0219 13:12:03.976882 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:12:04 crc kubenswrapper[4861]: I0219 13:12:04.976457 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:12:04 crc kubenswrapper[4861]: I0219 13:12:04.976531 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:12:04 crc kubenswrapper[4861]: I0219 13:12:04.976561 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:12:04 crc kubenswrapper[4861]: E0219 13:12:04.976719 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 13:12:04 crc kubenswrapper[4861]: E0219 13:12:04.976893 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 13:12:04 crc kubenswrapper[4861]: E0219 13:12:04.977047 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 13:12:05 crc kubenswrapper[4861]: I0219 13:12:05.976599 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:12:05 crc kubenswrapper[4861]: E0219 13:12:05.978562 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kjwt5" podUID="163fc0e2-f792-4062-88a7-3ed764a08103" Feb 19 13:12:06 crc kubenswrapper[4861]: I0219 13:12:06.976082 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:12:06 crc kubenswrapper[4861]: I0219 13:12:06.976227 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:12:06 crc kubenswrapper[4861]: I0219 13:12:06.976314 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:12:06 crc kubenswrapper[4861]: I0219 13:12:06.981455 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 19 13:12:06 crc kubenswrapper[4861]: I0219 13:12:06.981839 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 19 13:12:06 crc kubenswrapper[4861]: I0219 13:12:06.981927 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 19 13:12:06 crc kubenswrapper[4861]: I0219 13:12:06.981947 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 19 13:12:07 crc kubenswrapper[4861]: I0219 13:12:07.109987 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:12:07 crc kubenswrapper[4861]: I0219 13:12:07.976782 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:12:07 crc kubenswrapper[4861]: I0219 13:12:07.980177 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 19 13:12:07 crc kubenswrapper[4861]: I0219 13:12:07.980657 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.223722 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.260343 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2d8vs"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.261092 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2d8vs" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.263363 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-45sw8"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.264104 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-45sw8" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.273157 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.274096 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-mk9tc"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.274572 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.274918 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mk9tc" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.274963 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pwhxn"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.275022 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.275337 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.275510 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pwhxn" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.276240 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.276597 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.276609 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.276692 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.276982 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.277088 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.277182 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.277218 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tf29c"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.277388 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.277964 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tf29c" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.278400 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6txts"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.279026 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.280653 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mzhlv"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.281250 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-rw85s"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.290671 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mzhlv" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.295893 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.296264 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.296634 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.297702 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.299504 4861 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.300040 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.300321 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.300457 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.307499 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.311161 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.317748 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.317774 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-89dlr"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.318009 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.318481 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-89dlr" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.318915 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-rw85s" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.319120 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.319498 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.319517 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-48755"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.319764 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.319783 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.320022 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.320269 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-4bs6h"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.320358 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.320385 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.320599 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 13:12:12 crc 
kubenswrapper[4861]: I0219 13:12:12.320758 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.320816 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.321040 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.321260 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.322246 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.325923 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.328723 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hj79t"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.329007 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4bs6h" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.331052 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.331876 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hj79t" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.332489 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.334106 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdpbv"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.335725 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.336065 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4cdd9"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.336201 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.336441 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.336662 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.336737 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gglfz"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.336870 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdpbv" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.338339 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gglfz" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.336878 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4cdd9" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.340929 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.341128 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.341953 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.342134 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.342153 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.342445 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.342624 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.342702 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.342823 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 
13:12:12.342881 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.342835 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.343082 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.343195 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.343290 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.343586 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.345180 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-lgkf4"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.348160 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-console-config\") pod \"console-f9d7485db-4bs6h\" (UID: \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\") " pod="openshift-console/console-f9d7485db-4bs6h" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.348207 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47c66ae1-c442-4718-8663-6934eb402aea-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pwhxn\" (UID: 
\"47c66ae1-c442-4718-8663-6934eb402aea\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pwhxn" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.348243 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.348273 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/88bd7f5e-efc9-4ee9-88fd-d474e17c3bd6-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mzhlv\" (UID: \"88bd7f5e-efc9-4ee9-88fd-d474e17c3bd6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mzhlv" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.348297 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3e563434-f9d2-4932-a791-cfffe2de6e5b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-hj79t\" (UID: \"3e563434-f9d2-4932-a791-cfffe2de6e5b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hj79t" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.348319 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-oauth-serving-cert\") pod \"console-f9d7485db-4bs6h\" (UID: \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\") " pod="openshift-console/console-f9d7485db-4bs6h" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.348341 4861 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-audit-dir\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.348365 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-console-serving-cert\") pod \"console-f9d7485db-4bs6h\" (UID: \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\") " pod="openshift-console/console-f9d7485db-4bs6h" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.348390 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8dff\" (UniqueName: \"kubernetes.io/projected/bf366b45-962f-48b9-92e0-7a0c6ee35c7a-kube-api-access-t8dff\") pod \"machine-approver-56656f9798-mk9tc\" (UID: \"bf366b45-962f-48b9-92e0-7a0c6ee35c7a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mk9tc" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.348409 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47c66ae1-c442-4718-8663-6934eb402aea-serving-cert\") pod \"authentication-operator-69f744f599-pwhxn\" (UID: \"47c66ae1-c442-4718-8663-6934eb402aea\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pwhxn" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.348451 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.348473 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6716c3af-de13-4b1c-a2b6-ebb3b968c617-serving-cert\") pod \"apiserver-7bbb656c7d-48755\" (UID: \"6716c3af-de13-4b1c-a2b6-ebb3b968c617\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.348500 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/bf366b45-962f-48b9-92e0-7a0c6ee35c7a-machine-approver-tls\") pod \"machine-approver-56656f9798-mk9tc\" (UID: \"bf366b45-962f-48b9-92e0-7a0c6ee35c7a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mk9tc" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.348531 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvbf7\" (UniqueName: \"kubernetes.io/projected/3e563434-f9d2-4932-a791-cfffe2de6e5b-kube-api-access-nvbf7\") pod \"cluster-samples-operator-665b6dd947-hj79t\" (UID: \"3e563434-f9d2-4932-a791-cfffe2de6e5b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hj79t" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.348560 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jllx2\" (UniqueName: \"kubernetes.io/projected/6be72510-28ab-44e6-ae93-6930c521dc8d-kube-api-access-jllx2\") pod \"openshift-controller-manager-operator-756b6f6bc6-tf29c\" (UID: \"6be72510-28ab-44e6-ae93-6930c521dc8d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tf29c" Feb 19 
13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.348596 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jsch\" (UniqueName: \"kubernetes.io/projected/47c66ae1-c442-4718-8663-6934eb402aea-kube-api-access-4jsch\") pod \"authentication-operator-69f744f599-pwhxn\" (UID: \"47c66ae1-c442-4718-8663-6934eb402aea\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pwhxn" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.348651 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.348688 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.348718 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.348766 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6716c3af-de13-4b1c-a2b6-ebb3b968c617-encryption-config\") pod \"apiserver-7bbb656c7d-48755\" (UID: \"6716c3af-de13-4b1c-a2b6-ebb3b968c617\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.348832 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db2cfad6-b1c8-46ee-8f79-6072ffb59471-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2d8vs\" (UID: \"db2cfad6-b1c8-46ee-8f79-6072ffb59471\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2d8vs" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.348860 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/435cf100-c5b6-4b1d-80a0-48b7a2688d72-config\") pod \"machine-api-operator-5694c8668f-89dlr\" (UID: \"435cf100-c5b6-4b1d-80a0-48b7a2688d72\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-89dlr" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.348909 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.348920 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.348937 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/6716c3af-de13-4b1c-a2b6-ebb3b968c617-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-48755\" (UID: \"6716c3af-de13-4b1c-a2b6-ebb3b968c617\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.348977 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf366b45-962f-48b9-92e0-7a0c6ee35c7a-config\") pod \"machine-approver-56656f9798-mk9tc\" (UID: \"bf366b45-962f-48b9-92e0-7a0c6ee35c7a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mk9tc" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.349002 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/186eb49d-44fd-4ea5-bd44-c6cf83a03fc5-config\") pod \"route-controller-manager-6576b87f9c-45sw8\" (UID: \"186eb49d-44fd-4ea5-bd44-c6cf83a03fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-45sw8" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.349026 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-audit-policies\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.349070 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" 
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.349093 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db2cfad6-b1c8-46ee-8f79-6072ffb59471-client-ca\") pod \"controller-manager-879f6c89f-2d8vs\" (UID: \"db2cfad6-b1c8-46ee-8f79-6072ffb59471\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2d8vs" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.349141 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/186eb49d-44fd-4ea5-bd44-c6cf83a03fc5-client-ca\") pod \"route-controller-manager-6576b87f9c-45sw8\" (UID: \"186eb49d-44fd-4ea5-bd44-c6cf83a03fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-45sw8" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.349169 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bf366b45-962f-48b9-92e0-7a0c6ee35c7a-auth-proxy-config\") pod \"machine-approver-56656f9798-mk9tc\" (UID: \"bf366b45-962f-48b9-92e0-7a0c6ee35c7a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mk9tc" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.349215 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.349243 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n48gh\" (UniqueName: 
\"kubernetes.io/projected/db2cfad6-b1c8-46ee-8f79-6072ffb59471-kube-api-access-n48gh\") pod \"controller-manager-879f6c89f-2d8vs\" (UID: \"db2cfad6-b1c8-46ee-8f79-6072ffb59471\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2d8vs" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.349276 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.349302 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dklk\" (UniqueName: \"kubernetes.io/projected/186eb49d-44fd-4ea5-bd44-c6cf83a03fc5-kube-api-access-8dklk\") pod \"route-controller-manager-6576b87f9c-45sw8\" (UID: \"186eb49d-44fd-4ea5-bd44-c6cf83a03fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-45sw8" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.349330 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-service-ca\") pod \"console-f9d7485db-4bs6h\" (UID: \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\") " pod="openshift-console/console-f9d7485db-4bs6h" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.349368 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.349465 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-console-oauth-config\") pod \"console-f9d7485db-4bs6h\" (UID: \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\") " pod="openshift-console/console-f9d7485db-4bs6h"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.349509 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47c66ae1-c442-4718-8663-6934eb402aea-service-ca-bundle\") pod \"authentication-operator-69f744f599-pwhxn\" (UID: \"47c66ae1-c442-4718-8663-6934eb402aea\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pwhxn"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.349579 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b2js\" (UniqueName: \"kubernetes.io/projected/435cf100-c5b6-4b1d-80a0-48b7a2688d72-kube-api-access-8b2js\") pod \"machine-api-operator-5694c8668f-89dlr\" (UID: \"435cf100-c5b6-4b1d-80a0-48b7a2688d72\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-89dlr"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.349623 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c59z2\" (UniqueName: \"kubernetes.io/projected/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-kube-api-access-c59z2\") pod \"console-f9d7485db-4bs6h\" (UID: \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\") " pod="openshift-console/console-f9d7485db-4bs6h"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.349651 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6716c3af-de13-4b1c-a2b6-ebb3b968c617-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-48755\" (UID: \"6716c3af-de13-4b1c-a2b6-ebb3b968c617\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.349726 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.349903 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.349995 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.356208 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.356476 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.356634 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.357043 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.357607 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.357815 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.357941 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.358083 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.358209 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.359563 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.361959 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.362122 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.362609 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.363036 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.363252 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.363332 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.363647 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lwqfx"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.363869 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.365213 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.365731 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-lgkf4"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.367890 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.369143 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.369933 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.362623 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvz49\" (UniqueName: \"kubernetes.io/projected/88bd7f5e-efc9-4ee9-88fd-d474e17c3bd6-kube-api-access-vvz49\") pod \"cluster-image-registry-operator-dc59b4c8b-mzhlv\" (UID: \"88bd7f5e-efc9-4ee9-88fd-d474e17c3bd6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mzhlv"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.373383 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6be72510-28ab-44e6-ae93-6930c521dc8d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-tf29c\" (UID: \"6be72510-28ab-44e6-ae93-6930c521dc8d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tf29c"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.373429 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/88bd7f5e-efc9-4ee9-88fd-d474e17c3bd6-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mzhlv\" (UID: \"88bd7f5e-efc9-4ee9-88fd-d474e17c3bd6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mzhlv"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.373726 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-trusted-ca-bundle\") pod \"console-f9d7485db-4bs6h\" (UID: \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\") " pod="openshift-console/console-f9d7485db-4bs6h"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.373752 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95z78\" (UniqueName: \"kubernetes.io/projected/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-kube-api-access-95z78\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.373786 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xj4b\" (UniqueName: \"kubernetes.io/projected/6716c3af-de13-4b1c-a2b6-ebb3b968c617-kube-api-access-9xj4b\") pod \"apiserver-7bbb656c7d-48755\" (UID: \"6716c3af-de13-4b1c-a2b6-ebb3b968c617\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.374054 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/186eb49d-44fd-4ea5-bd44-c6cf83a03fc5-serving-cert\") pod \"route-controller-manager-6576b87f9c-45sw8\" (UID: \"186eb49d-44fd-4ea5-bd44-c6cf83a03fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-45sw8"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.386459 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.377749 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/88bd7f5e-efc9-4ee9-88fd-d474e17c3bd6-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mzhlv\" (UID: \"88bd7f5e-efc9-4ee9-88fd-d474e17c3bd6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mzhlv"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.387250 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.387283 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6be72510-28ab-44e6-ae93-6930c521dc8d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-tf29c\" (UID: \"6be72510-28ab-44e6-ae93-6930c521dc8d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tf29c"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.387309 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6716c3af-de13-4b1c-a2b6-ebb3b968c617-audit-dir\") pod \"apiserver-7bbb656c7d-48755\" (UID: \"6716c3af-de13-4b1c-a2b6-ebb3b968c617\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.387345 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/435cf100-c5b6-4b1d-80a0-48b7a2688d72-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-89dlr\" (UID: \"435cf100-c5b6-4b1d-80a0-48b7a2688d72\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-89dlr"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.387367 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db2cfad6-b1c8-46ee-8f79-6072ffb59471-serving-cert\") pod \"controller-manager-879f6c89f-2d8vs\" (UID: \"db2cfad6-b1c8-46ee-8f79-6072ffb59471\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2d8vs"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.387377 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.387856 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-szdk4"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.388247 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4hktn"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.388551 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t62dj"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.388849 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m7bvc"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.389179 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x6mmf"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.389581 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7wkxt"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.390003 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.390128 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7wkxt"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.390155 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.387381 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6716c3af-de13-4b1c-a2b6-ebb3b968c617-audit-policies\") pod \"apiserver-7bbb656c7d-48755\" (UID: \"6716c3af-de13-4b1c-a2b6-ebb3b968c617\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.390529 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg8wj\" (UniqueName: \"kubernetes.io/projected/59847917-735f-49c7-99b2-599facec7e03-kube-api-access-jg8wj\") pod \"downloads-7954f5f757-rw85s\" (UID: \"59847917-735f-49c7-99b2-599facec7e03\") " pod="openshift-console/downloads-7954f5f757-rw85s"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.390562 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.390589 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47c66ae1-c442-4718-8663-6934eb402aea-config\") pod \"authentication-operator-69f744f599-pwhxn\" (UID: \"47c66ae1-c442-4718-8663-6934eb402aea\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pwhxn"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.390611 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/435cf100-c5b6-4b1d-80a0-48b7a2688d72-images\") pod \"machine-api-operator-5694c8668f-89dlr\" (UID: \"435cf100-c5b6-4b1d-80a0-48b7a2688d72\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-89dlr"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.390634 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6716c3af-de13-4b1c-a2b6-ebb3b968c617-etcd-client\") pod \"apiserver-7bbb656c7d-48755\" (UID: \"6716c3af-de13-4b1c-a2b6-ebb3b968c617\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.390661 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db2cfad6-b1c8-46ee-8f79-6072ffb59471-config\") pod \"controller-manager-879f6c89f-2d8vs\" (UID: \"db2cfad6-b1c8-46ee-8f79-6072ffb59471\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2d8vs"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.390951 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.390971 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t62dj"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.391131 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwqfx"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.391268 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-szdk4"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.391436 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-4hktn"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.391683 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m7bvc"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.391877 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.393637 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-d27vq"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.394371 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-d27vq"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.394622 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cbzz2"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.394947 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cbzz2"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.396890 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4cjxr"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.396911 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.397789 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4cjxr"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.399672 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.400851 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fn4fs"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.401035 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.401416 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-9zmzh"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.401632 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fn4fs"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.401988 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-9zmzh"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.402502 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.402629 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8jrlx"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.406108 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jrlx"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.406595 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rpbx6"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.407282 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-rpbx6"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.408190 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2d8vs"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.408457 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.408602 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.408813 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.408873 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.409071 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.409170 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5pg7p"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.409229 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.410359 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5pg7p"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.410675 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.413339 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-br4gr"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.417316 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-br4gr"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.420312 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-45sw8"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.421567 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.421827 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.421936 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.423823 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.428093 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.454600 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.454871 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fjhdc"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.456064 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fjhdc"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.456461 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-znsxg"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.457189 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-znsxg"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.461544 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mzhlv"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.464163 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.465366 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-89dlr"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.473212 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pwhxn"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.475320 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qwjzc"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.476471 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qwjzc"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.476845 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525100-bq4kn"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.477454 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-bq4kn"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.477830 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-frkp7"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.478375 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-frkp7"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.479517 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-dgcks"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.480742 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hj79t"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.480301 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-dgcks"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.481843 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-mvbrc"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.483042 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-rw85s"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.483130 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-mvbrc"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.483744 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.483890 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t62dj"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.486363 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-4bs6h"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.488282 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tf29c"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.489432 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdpbv"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.490496 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6txts"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.492147 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq2lh\" (UniqueName: \"kubernetes.io/projected/a1bd7de0-543b-45cf-8ca8-b647d17671eb-kube-api-access-fq2lh\") pod \"control-plane-machine-set-operator-78cbb6b69f-5pg7p\" (UID: \"a1bd7de0-543b-45cf-8ca8-b647d17671eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5pg7p"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.492255 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dklk\" (UniqueName: \"kubernetes.io/projected/186eb49d-44fd-4ea5-bd44-c6cf83a03fc5-kube-api-access-8dklk\") pod \"route-controller-manager-6576b87f9c-45sw8\" (UID: \"186eb49d-44fd-4ea5-bd44-c6cf83a03fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-45sw8"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.492338 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b2js\" (UniqueName: \"kubernetes.io/projected/435cf100-c5b6-4b1d-80a0-48b7a2688d72-kube-api-access-8b2js\") pod \"machine-api-operator-5694c8668f-89dlr\" (UID: \"435cf100-c5b6-4b1d-80a0-48b7a2688d72\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-89dlr"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.492415 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52ec89ec-1759-416b-87c8-b90b4194a960-etcd-client\") pod \"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.492518 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c59z2\" (UniqueName: \"kubernetes.io/projected/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-kube-api-access-c59z2\") pod \"console-f9d7485db-4bs6h\" (UID: \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\") " pod="openshift-console/console-f9d7485db-4bs6h"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.492593 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6be72510-28ab-44e6-ae93-6930c521dc8d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-tf29c\" (UID: \"6be72510-28ab-44e6-ae93-6930c521dc8d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tf29c"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.492673 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/88bd7f5e-efc9-4ee9-88fd-d474e17c3bd6-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mzhlv\" (UID: \"88bd7f5e-efc9-4ee9-88fd-d474e17c3bd6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mzhlv"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.492761 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-trusted-ca-bundle\") pod \"console-f9d7485db-4bs6h\" (UID: \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\") " pod="openshift-console/console-f9d7485db-4bs6h"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.492847 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95z78\" (UniqueName: \"kubernetes.io/projected/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-kube-api-access-95z78\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.492918 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pmwz\" (UniqueName: \"kubernetes.io/projected/91f8886f-babc-495e-86fb-475d8582d6ac-kube-api-access-5pmwz\") pod \"console-operator-58897d9998-gglfz\" (UID: \"91f8886f-babc-495e-86fb-475d8582d6ac\") " pod="openshift-console-operator/console-operator-58897d9998-gglfz"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.493000 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xj4b\" (UniqueName: \"kubernetes.io/projected/6716c3af-de13-4b1c-a2b6-ebb3b968c617-kube-api-access-9xj4b\") pod \"apiserver-7bbb656c7d-48755\" (UID: \"6716c3af-de13-4b1c-a2b6-ebb3b968c617\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.493043 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8jrlx"]
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.493305 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/186eb49d-44fd-4ea5-bd44-c6cf83a03fc5-serving-cert\") pod \"route-controller-manager-6576b87f9c-45sw8\" (UID: \"186eb49d-44fd-4ea5-bd44-c6cf83a03fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-45sw8"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.493371 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/089f2a06-1824-49c8-ad48-b119bf6a9d63-images\") pod \"machine-config-operator-74547568cd-8jrlx\" (UID: \"089f2a06-1824-49c8-ad48-b119bf6a9d63\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jrlx"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.493393 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgm5q\" (UniqueName: \"kubernetes.io/projected/a90d3758-749b-4327-877f-ecb89c49b5e0-kube-api-access-cgm5q\") pod \"kube-storage-version-migrator-operator-b67b599dd-szdk4\" (UID: \"a90d3758-749b-4327-877f-ecb89c49b5e0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-szdk4"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.493514 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/52ec89ec-1759-416b-87c8-b90b4194a960-image-import-ca\") pod \"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.493562 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/52ec89ec-1759-416b-87c8-b90b4194a960-audit-dir\") pod \"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.493580 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91f8886f-babc-495e-86fb-475d8582d6ac-trusted-ca\") pod \"console-operator-58897d9998-gglfz\" (UID: \"91f8886f-babc-495e-86fb-475d8582d6ac\") " pod="openshift-console-operator/console-operator-58897d9998-gglfz"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.493601 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed3e037e-4dc7-4240-b54f-20931407f4a3-metrics-certs\") pod \"router-default-5444994796-9zmzh\" (UID: \"ed3e037e-4dc7-4240-b54f-20931407f4a3\") " pod="openshift-ingress/router-default-5444994796-9zmzh"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.493618 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d3139b4-07d9-43eb-b9da-31d2c9218ab9-config\") pod \"kube-controller-manager-operator-78b949d7b-m7bvc\" (UID: \"8d3139b4-07d9-43eb-b9da-31d2c9218ab9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m7bvc"
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.493635
4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ed3e037e-4dc7-4240-b54f-20931407f4a3-default-certificate\") pod \"router-default-5444994796-9zmzh\" (UID: \"ed3e037e-4dc7-4240-b54f-20931407f4a3\") " pod="openshift-ingress/router-default-5444994796-9zmzh" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.493658 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db2cfad6-b1c8-46ee-8f79-6072ffb59471-serving-cert\") pod \"controller-manager-879f6c89f-2d8vs\" (UID: \"db2cfad6-b1c8-46ee-8f79-6072ffb59471\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2d8vs" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.493679 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.493698 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9106f3fb-f347-44c0-8343-c3096803e845-etcd-service-ca\") pod \"etcd-operator-b45778765-4hktn\" (UID: \"9106f3fb-f347-44c0-8343-c3096803e845\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4hktn" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.493717 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db2cfad6-b1c8-46ee-8f79-6072ffb59471-config\") pod \"controller-manager-879f6c89f-2d8vs\" (UID: \"db2cfad6-b1c8-46ee-8f79-6072ffb59471\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-2d8vs" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.493735 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47c66ae1-c442-4718-8663-6934eb402aea-config\") pod \"authentication-operator-69f744f599-pwhxn\" (UID: \"47c66ae1-c442-4718-8663-6934eb402aea\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pwhxn" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.493751 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6716c3af-de13-4b1c-a2b6-ebb3b968c617-etcd-client\") pod \"apiserver-7bbb656c7d-48755\" (UID: \"6716c3af-de13-4b1c-a2b6-ebb3b968c617\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.493768 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a90d3758-749b-4327-877f-ecb89c49b5e0-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-szdk4\" (UID: \"a90d3758-749b-4327-877f-ecb89c49b5e0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-szdk4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.493788 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9106f3fb-f347-44c0-8343-c3096803e845-etcd-client\") pod \"etcd-operator-b45778765-4hktn\" (UID: \"9106f3fb-f347-44c0-8343-c3096803e845\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4hktn" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.493809 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.493826 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/88bd7f5e-efc9-4ee9-88fd-d474e17c3bd6-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mzhlv\" (UID: \"88bd7f5e-efc9-4ee9-88fd-d474e17c3bd6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mzhlv" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.493842 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d093c48-67cb-4e6d-99bb-3c8df32c6f15-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cbzz2\" (UID: \"1d093c48-67cb-4e6d-99bb-3c8df32c6f15\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cbzz2" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.493860 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eb6e1d5f-e904-4931-9d35-e1b4b1a30361-proxy-tls\") pod \"machine-config-controller-84d6567774-4cjxr\" (UID: \"eb6e1d5f-e904-4931-9d35-e1b4b1a30361\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4cjxr" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.493886 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-audit-dir\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.493906 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/25efe6ba-5133-429c-9b89-63bdc857d930-metrics-tls\") pod \"dns-operator-744455d44c-d27vq\" (UID: \"25efe6ba-5133-429c-9b89-63bdc857d930\") " pod="openshift-dns-operator/dns-operator-744455d44c-d27vq" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.493924 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2c3695d-6228-4722-8394-a31ec8e7333c-serving-cert\") pod \"openshift-config-operator-7777fb866f-4cdd9\" (UID: \"b2c3695d-6228-4722-8394-a31ec8e7333c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4cdd9" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.493944 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-console-serving-cert\") pod \"console-f9d7485db-4bs6h\" (UID: \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\") " pod="openshift-console/console-f9d7485db-4bs6h" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.493966 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52ec89ec-1759-416b-87c8-b90b4194a960-serving-cert\") pod \"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.493987 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.494061 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/089f2a06-1824-49c8-ad48-b119bf6a9d63-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8jrlx\" (UID: \"089f2a06-1824-49c8-ad48-b119bf6a9d63\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jrlx" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.494530 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6be72510-28ab-44e6-ae93-6930c521dc8d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-tf29c\" (UID: \"6be72510-28ab-44e6-ae93-6930c521dc8d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tf29c" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.494667 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-audit-dir\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.495116 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b2c3695d-6228-4722-8394-a31ec8e7333c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4cdd9\" (UID: \"b2c3695d-6228-4722-8394-a31ec8e7333c\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-4cdd9" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.495313 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5kvj\" (UniqueName: \"kubernetes.io/projected/2372d9bc-babb-4932-8adb-a138b6c0ec28-kube-api-access-g5kvj\") pod \"ingress-operator-5b745b69d9-lwqfx\" (UID: \"2372d9bc-babb-4932-8adb-a138b6c0ec28\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwqfx" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.495335 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9106f3fb-f347-44c0-8343-c3096803e845-config\") pod \"etcd-operator-b45778765-4hktn\" (UID: \"9106f3fb-f347-44c0-8343-c3096803e845\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4hktn" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.495358 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jllx2\" (UniqueName: \"kubernetes.io/projected/6be72510-28ab-44e6-ae93-6930c521dc8d-kube-api-access-jllx2\") pod \"openshift-controller-manager-operator-756b6f6bc6-tf29c\" (UID: \"6be72510-28ab-44e6-ae93-6930c521dc8d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tf29c" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.495374 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eb6e1d5f-e904-4931-9d35-e1b4b1a30361-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4cjxr\" (UID: \"eb6e1d5f-e904-4931-9d35-e1b4b1a30361\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4cjxr" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.495392 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52ec89ec-1759-416b-87c8-b90b4194a960-config\") pod \"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.495406 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d3139b4-07d9-43eb-b9da-31d2c9218ab9-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-m7bvc\" (UID: \"8d3139b4-07d9-43eb-b9da-31d2c9218ab9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m7bvc" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.495439 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.495483 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.495499 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a027437-d0c1-4d20-9209-ac4815006ed3-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-t62dj\" (UID: 
\"5a027437-d0c1-4d20-9209-ac4815006ed3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t62dj" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.495524 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2372d9bc-babb-4932-8adb-a138b6c0ec28-trusted-ca\") pod \"ingress-operator-5b745b69d9-lwqfx\" (UID: \"2372d9bc-babb-4932-8adb-a138b6c0ec28\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwqfx" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.495543 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf366b45-962f-48b9-92e0-7a0c6ee35c7a-config\") pod \"machine-approver-56656f9798-mk9tc\" (UID: \"bf366b45-962f-48b9-92e0-7a0c6ee35c7a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mk9tc" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.495562 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/186eb49d-44fd-4ea5-bd44-c6cf83a03fc5-config\") pod \"route-controller-manager-6576b87f9c-45sw8\" (UID: \"186eb49d-44fd-4ea5-bd44-c6cf83a03fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-45sw8" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.495582 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d093c48-67cb-4e6d-99bb-3c8df32c6f15-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cbzz2\" (UID: \"1d093c48-67cb-4e6d-99bb-3c8df32c6f15\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cbzz2" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.495620 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-audit-policies\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.495637 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db2cfad6-b1c8-46ee-8f79-6072ffb59471-client-ca\") pod \"controller-manager-879f6c89f-2d8vs\" (UID: \"db2cfad6-b1c8-46ee-8f79-6072ffb59471\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2d8vs" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.495655 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3e037e-4dc7-4240-b54f-20931407f4a3-service-ca-bundle\") pod \"router-default-5444994796-9zmzh\" (UID: \"ed3e037e-4dc7-4240-b54f-20931407f4a3\") " pod="openshift-ingress/router-default-5444994796-9zmzh" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.495682 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/186eb49d-44fd-4ea5-bd44-c6cf83a03fc5-client-ca\") pod \"route-controller-manager-6576b87f9c-45sw8\" (UID: \"186eb49d-44fd-4ea5-bd44-c6cf83a03fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-45sw8" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.495701 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2372d9bc-babb-4932-8adb-a138b6c0ec28-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lwqfx\" (UID: \"2372d9bc-babb-4932-8adb-a138b6c0ec28\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwqfx" Feb 19 13:12:12 crc 
kubenswrapper[4861]: I0219 13:12:12.495720 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91f8886f-babc-495e-86fb-475d8582d6ac-serving-cert\") pod \"console-operator-58897d9998-gglfz\" (UID: \"91f8886f-babc-495e-86fb-475d8582d6ac\") " pod="openshift-console-operator/console-operator-58897d9998-gglfz" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.495736 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21cf889c-d5ea-4188-95cf-46ee6f626ff1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tdpbv\" (UID: \"21cf889c-d5ea-4188-95cf-46ee6f626ff1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdpbv" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.495757 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-service-ca\") pod \"console-f9d7485db-4bs6h\" (UID: \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\") " pod="openshift-console/console-f9d7485db-4bs6h" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.495775 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.495794 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1df1afb5-9ca0-4b18-93a7-80381e175ec4-srv-cert\") pod 
\"catalog-operator-68c6474976-fn4fs\" (UID: \"1df1afb5-9ca0-4b18-93a7-80381e175ec4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fn4fs" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.496055 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-trusted-ca-bundle\") pod \"console-f9d7485db-4bs6h\" (UID: \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\") " pod="openshift-console/console-f9d7485db-4bs6h" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.496996 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47c66ae1-c442-4718-8663-6934eb402aea-config\") pod \"authentication-operator-69f744f599-pwhxn\" (UID: \"47c66ae1-c442-4718-8663-6934eb402aea\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pwhxn" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.497041 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gglfz"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.497064 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-lgkf4"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.495811 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-console-oauth-config\") pod \"console-f9d7485db-4bs6h\" (UID: \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\") " pod="openshift-console/console-f9d7485db-4bs6h" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.497377 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47c66ae1-c442-4718-8663-6934eb402aea-service-ca-bundle\") pod 
\"authentication-operator-69f744f599-pwhxn\" (UID: \"47c66ae1-c442-4718-8663-6934eb402aea\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pwhxn" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.497400 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9106f3fb-f347-44c0-8343-c3096803e845-serving-cert\") pod \"etcd-operator-b45778765-4hktn\" (UID: \"9106f3fb-f347-44c0-8343-c3096803e845\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4hktn" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.497429 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92x29\" (UniqueName: \"kubernetes.io/projected/9106f3fb-f347-44c0-8343-c3096803e845-kube-api-access-92x29\") pod \"etcd-operator-b45778765-4hktn\" (UID: \"9106f3fb-f347-44c0-8343-c3096803e845\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4hktn" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.497452 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6716c3af-de13-4b1c-a2b6-ebb3b968c617-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-48755\" (UID: \"6716c3af-de13-4b1c-a2b6-ebb3b968c617\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.497496 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvz49\" (UniqueName: \"kubernetes.io/projected/88bd7f5e-efc9-4ee9-88fd-d474e17c3bd6-kube-api-access-vvz49\") pod \"cluster-image-registry-operator-dc59b4c8b-mzhlv\" (UID: \"88bd7f5e-efc9-4ee9-88fd-d474e17c3bd6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mzhlv" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.497514 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhb89\" (UniqueName: \"kubernetes.io/projected/1df1afb5-9ca0-4b18-93a7-80381e175ec4-kube-api-access-hhb89\") pod \"catalog-operator-68c6474976-fn4fs\" (UID: \"1df1afb5-9ca0-4b18-93a7-80381e175ec4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fn4fs" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.497530 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a027437-d0c1-4d20-9209-ac4815006ed3-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t62dj\" (UID: \"5a027437-d0c1-4d20-9209-ac4815006ed3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t62dj" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.497551 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/52ec89ec-1759-416b-87c8-b90b4194a960-etcd-serving-ca\") pod \"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.497569 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a90d3758-749b-4327-877f-ecb89c49b5e0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-szdk4\" (UID: \"a90d3758-749b-4327-877f-ecb89c49b5e0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-szdk4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.497595 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/089f2a06-1824-49c8-ad48-b119bf6a9d63-proxy-tls\") pod 
\"machine-config-operator-74547568cd-8jrlx\" (UID: \"089f2a06-1824-49c8-ad48-b119bf6a9d63\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jrlx" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.497698 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d3139b4-07d9-43eb-b9da-31d2c9218ab9-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-m7bvc\" (UID: \"8d3139b4-07d9-43eb-b9da-31d2c9218ab9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m7bvc" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.497755 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a027437-d0c1-4d20-9209-ac4815006ed3-config\") pod \"kube-apiserver-operator-766d6c64bb-t62dj\" (UID: \"5a027437-d0c1-4d20-9209-ac4815006ed3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t62dj" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.497776 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2372d9bc-babb-4932-8adb-a138b6c0ec28-metrics-tls\") pod \"ingress-operator-5b745b69d9-lwqfx\" (UID: \"2372d9bc-babb-4932-8adb-a138b6c0ec28\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwqfx" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.497795 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8smkr\" (UniqueName: \"kubernetes.io/projected/ed3e037e-4dc7-4240-b54f-20931407f4a3-kube-api-access-8smkr\") pod \"router-default-5444994796-9zmzh\" (UID: \"ed3e037e-4dc7-4240-b54f-20931407f4a3\") " pod="openshift-ingress/router-default-5444994796-9zmzh" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 
13:12:12.497817 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.497881 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6be72510-28ab-44e6-ae93-6930c521dc8d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-tf29c\" (UID: \"6be72510-28ab-44e6-ae93-6930c521dc8d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tf29c" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.497934 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/88bd7f5e-efc9-4ee9-88fd-d474e17c3bd6-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mzhlv\" (UID: \"88bd7f5e-efc9-4ee9-88fd-d474e17c3bd6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mzhlv" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.497970 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6716c3af-de13-4b1c-a2b6-ebb3b968c617-audit-dir\") pod \"apiserver-7bbb656c7d-48755\" (UID: \"6716c3af-de13-4b1c-a2b6-ebb3b968c617\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.497997 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/435cf100-c5b6-4b1d-80a0-48b7a2688d72-machine-api-operator-tls\") pod 
\"machine-api-operator-5694c8668f-89dlr\" (UID: \"435cf100-c5b6-4b1d-80a0-48b7a2688d72\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-89dlr" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.498015 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg8wj\" (UniqueName: \"kubernetes.io/projected/59847917-735f-49c7-99b2-599facec7e03-kube-api-access-jg8wj\") pod \"downloads-7954f5f757-rw85s\" (UID: \"59847917-735f-49c7-99b2-599facec7e03\") " pod="openshift-console/downloads-7954f5f757-rw85s" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.498031 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6716c3af-de13-4b1c-a2b6-ebb3b968c617-audit-policies\") pod \"apiserver-7bbb656c7d-48755\" (UID: \"6716c3af-de13-4b1c-a2b6-ebb3b968c617\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.498049 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21cf889c-d5ea-4188-95cf-46ee6f626ff1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tdpbv\" (UID: \"21cf889c-d5ea-4188-95cf-46ee6f626ff1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdpbv" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.498066 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1df1afb5-9ca0-4b18-93a7-80381e175ec4-profile-collector-cert\") pod \"catalog-operator-68c6474976-fn4fs\" (UID: \"1df1afb5-9ca0-4b18-93a7-80381e175ec4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fn4fs" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.498081 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ed3e037e-4dc7-4240-b54f-20931407f4a3-stats-auth\") pod \"router-default-5444994796-9zmzh\" (UID: \"ed3e037e-4dc7-4240-b54f-20931407f4a3\") " pod="openshift-ingress/router-default-5444994796-9zmzh" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.498088 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db2cfad6-b1c8-46ee-8f79-6072ffb59471-config\") pod \"controller-manager-879f6c89f-2d8vs\" (UID: \"db2cfad6-b1c8-46ee-8f79-6072ffb59471\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2d8vs" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.498101 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/435cf100-c5b6-4b1d-80a0-48b7a2688d72-images\") pod \"machine-api-operator-5694c8668f-89dlr\" (UID: \"435cf100-c5b6-4b1d-80a0-48b7a2688d72\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-89dlr" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.498146 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-console-config\") pod \"console-f9d7485db-4bs6h\" (UID: \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\") " pod="openshift-console/console-f9d7485db-4bs6h" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.498173 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47c66ae1-c442-4718-8663-6934eb402aea-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pwhxn\" (UID: \"47c66ae1-c442-4718-8663-6934eb402aea\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pwhxn" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 
13:12:12.498199 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3e563434-f9d2-4932-a791-cfffe2de6e5b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-hj79t\" (UID: \"3e563434-f9d2-4932-a791-cfffe2de6e5b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hj79t" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.498224 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w8pm\" (UniqueName: \"kubernetes.io/projected/089f2a06-1824-49c8-ad48-b119bf6a9d63-kube-api-access-7w8pm\") pod \"machine-config-operator-74547568cd-8jrlx\" (UID: \"089f2a06-1824-49c8-ad48-b119bf6a9d63\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jrlx" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.498249 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/52ec89ec-1759-416b-87c8-b90b4194a960-node-pullsecrets\") pod \"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.498274 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/52ec89ec-1759-416b-87c8-b90b4194a960-encryption-config\") pod \"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.498301 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-oauth-serving-cert\") pod 
\"console-f9d7485db-4bs6h\" (UID: \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\") " pod="openshift-console/console-f9d7485db-4bs6h" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.498327 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpg9h\" (UniqueName: \"kubernetes.io/projected/21cf889c-d5ea-4188-95cf-46ee6f626ff1-kube-api-access-hpg9h\") pod \"openshift-apiserver-operator-796bbdcf4f-tdpbv\" (UID: \"21cf889c-d5ea-4188-95cf-46ee6f626ff1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdpbv" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.498351 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7fcj\" (UniqueName: \"kubernetes.io/projected/aad5d49e-b605-4151-a30d-1db9ffa9b99b-kube-api-access-w7fcj\") pod \"package-server-manager-789f6589d5-br4gr\" (UID: \"aad5d49e-b605-4151-a30d-1db9ffa9b99b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-br4gr" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.498373 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52ec89ec-1759-416b-87c8-b90b4194a960-trusted-ca-bundle\") pod \"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.498396 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a1bd7de0-543b-45cf-8ca8-b647d17671eb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5pg7p\" (UID: \"a1bd7de0-543b-45cf-8ca8-b647d17671eb\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5pg7p" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.498751 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/435cf100-c5b6-4b1d-80a0-48b7a2688d72-images\") pod \"machine-api-operator-5694c8668f-89dlr\" (UID: \"435cf100-c5b6-4b1d-80a0-48b7a2688d72\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-89dlr" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.499285 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db2cfad6-b1c8-46ee-8f79-6072ffb59471-serving-cert\") pod \"controller-manager-879f6c89f-2d8vs\" (UID: \"db2cfad6-b1c8-46ee-8f79-6072ffb59471\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2d8vs" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.500179 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.500479 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8dff\" (UniqueName: \"kubernetes.io/projected/bf366b45-962f-48b9-92e0-7a0c6ee35c7a-kube-api-access-t8dff\") pod \"machine-approver-56656f9798-mk9tc\" (UID: \"bf366b45-962f-48b9-92e0-7a0c6ee35c7a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mk9tc" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.500536 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/47c66ae1-c442-4718-8663-6934eb402aea-serving-cert\") pod \"authentication-operator-69f744f599-pwhxn\" (UID: \"47c66ae1-c442-4718-8663-6934eb402aea\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pwhxn" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.500563 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6716c3af-de13-4b1c-a2b6-ebb3b968c617-serving-cert\") pod \"apiserver-7bbb656c7d-48755\" (UID: \"6716c3af-de13-4b1c-a2b6-ebb3b968c617\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.500592 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/bf366b45-962f-48b9-92e0-7a0c6ee35c7a-machine-approver-tls\") pod \"machine-approver-56656f9798-mk9tc\" (UID: \"bf366b45-962f-48b9-92e0-7a0c6ee35c7a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mk9tc" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.500619 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvbf7\" (UniqueName: \"kubernetes.io/projected/3e563434-f9d2-4932-a791-cfffe2de6e5b-kube-api-access-nvbf7\") pod \"cluster-samples-operator-665b6dd947-hj79t\" (UID: \"3e563434-f9d2-4932-a791-cfffe2de6e5b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hj79t" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.500646 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/52ec89ec-1759-416b-87c8-b90b4194a960-audit\") pod \"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.500675 
4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jsch\" (UniqueName: \"kubernetes.io/projected/47c66ae1-c442-4718-8663-6934eb402aea-kube-api-access-4jsch\") pod \"authentication-operator-69f744f599-pwhxn\" (UID: \"47c66ae1-c442-4718-8663-6934eb402aea\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pwhxn" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.500703 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9106f3fb-f347-44c0-8343-c3096803e845-etcd-ca\") pod \"etcd-operator-b45778765-4hktn\" (UID: \"9106f3fb-f347-44c0-8343-c3096803e845\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4hktn" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.500732 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91f8886f-babc-495e-86fb-475d8582d6ac-config\") pod \"console-operator-58897d9998-gglfz\" (UID: \"91f8886f-babc-495e-86fb-475d8582d6ac\") " pod="openshift-console-operator/console-operator-58897d9998-gglfz" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.500761 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6716c3af-de13-4b1c-a2b6-ebb3b968c617-encryption-config\") pod \"apiserver-7bbb656c7d-48755\" (UID: \"6716c3af-de13-4b1c-a2b6-ebb3b968c617\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.500789 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzcs9\" (UniqueName: \"kubernetes.io/projected/25efe6ba-5133-429c-9b89-63bdc857d930-kube-api-access-nzcs9\") pod \"dns-operator-744455d44c-d27vq\" (UID: \"25efe6ba-5133-429c-9b89-63bdc857d930\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-d27vq" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.500818 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6716c3af-de13-4b1c-a2b6-ebb3b968c617-etcd-client\") pod \"apiserver-7bbb656c7d-48755\" (UID: \"6716c3af-de13-4b1c-a2b6-ebb3b968c617\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.500823 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.500890 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-console-config\") pod \"console-f9d7485db-4bs6h\" (UID: \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\") " pod="openshift-console/console-f9d7485db-4bs6h" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.500916 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.500951 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl2bz\" (UniqueName: \"kubernetes.io/projected/52ec89ec-1759-416b-87c8-b90b4194a960-kube-api-access-tl2bz\") pod 
\"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.500975 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db2cfad6-b1c8-46ee-8f79-6072ffb59471-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2d8vs\" (UID: \"db2cfad6-b1c8-46ee-8f79-6072ffb59471\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2d8vs" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.500999 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/435cf100-c5b6-4b1d-80a0-48b7a2688d72-config\") pod \"machine-api-operator-5694c8668f-89dlr\" (UID: \"435cf100-c5b6-4b1d-80a0-48b7a2688d72\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-89dlr" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.501022 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flll9\" (UniqueName: \"kubernetes.io/projected/b2c3695d-6228-4722-8394-a31ec8e7333c-kube-api-access-flll9\") pod \"openshift-config-operator-7777fb866f-4cdd9\" (UID: \"b2c3695d-6228-4722-8394-a31ec8e7333c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4cdd9" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.501043 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clx4q\" (UniqueName: \"kubernetes.io/projected/eb6e1d5f-e904-4931-9d35-e1b4b1a30361-kube-api-access-clx4q\") pod \"machine-config-controller-84d6567774-4cjxr\" (UID: \"eb6e1d5f-e904-4931-9d35-e1b4b1a30361\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4cjxr" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.501065 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6716c3af-de13-4b1c-a2b6-ebb3b968c617-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-48755\" (UID: \"6716c3af-de13-4b1c-a2b6-ebb3b968c617\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.501088 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d093c48-67cb-4e6d-99bb-3c8df32c6f15-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cbzz2\" (UID: \"1d093c48-67cb-4e6d-99bb-3c8df32c6f15\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cbzz2" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.501109 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/aad5d49e-b605-4151-a30d-1db9ffa9b99b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-br4gr\" (UID: \"aad5d49e-b605-4151-a30d-1db9ffa9b99b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-br4gr" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.501135 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.501155 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j67mz\" (UniqueName: 
\"kubernetes.io/projected/f3bb6cb9-0c94-47e3-a796-dca0ba8491ae-kube-api-access-j67mz\") pod \"migrator-59844c95c7-7wkxt\" (UID: \"f3bb6cb9-0c94-47e3-a796-dca0ba8491ae\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7wkxt" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.501179 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.501224 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n48gh\" (UniqueName: \"kubernetes.io/projected/db2cfad6-b1c8-46ee-8f79-6072ffb59471-kube-api-access-n48gh\") pod \"controller-manager-879f6c89f-2d8vs\" (UID: \"db2cfad6-b1c8-46ee-8f79-6072ffb59471\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2d8vs" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.501244 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bf366b45-962f-48b9-92e0-7a0c6ee35c7a-auth-proxy-config\") pod \"machine-approver-56656f9798-mk9tc\" (UID: \"bf366b45-962f-48b9-92e0-7a0c6ee35c7a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mk9tc" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.501388 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc 
kubenswrapper[4861]: I0219 13:12:12.501690 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47c66ae1-c442-4718-8663-6934eb402aea-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pwhxn\" (UID: \"47c66ae1-c442-4718-8663-6934eb402aea\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pwhxn" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.502132 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4hktn"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.502170 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4cjxr"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.502136 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.502399 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-oauth-serving-cert\") pod \"console-f9d7485db-4bs6h\" (UID: \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\") " pod="openshift-console/console-f9d7485db-4bs6h" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.502720 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6716c3af-de13-4b1c-a2b6-ebb3b968c617-audit-dir\") pod \"apiserver-7bbb656c7d-48755\" (UID: \"6716c3af-de13-4b1c-a2b6-ebb3b968c617\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755" 
Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.503035 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-console-oauth-config\") pod \"console-f9d7485db-4bs6h\" (UID: \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\") " pod="openshift-console/console-f9d7485db-4bs6h" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.503520 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47c66ae1-c442-4718-8663-6934eb402aea-service-ca-bundle\") pod \"authentication-operator-69f744f599-pwhxn\" (UID: \"47c66ae1-c442-4718-8663-6934eb402aea\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pwhxn" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.503528 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cbzz2"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.503848 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/186eb49d-44fd-4ea5-bd44-c6cf83a03fc5-serving-cert\") pod \"route-controller-manager-6576b87f9c-45sw8\" (UID: \"186eb49d-44fd-4ea5-bd44-c6cf83a03fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-45sw8" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.503941 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6716c3af-de13-4b1c-a2b6-ebb3b968c617-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-48755\" (UID: \"6716c3af-de13-4b1c-a2b6-ebb3b968c617\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.504669 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/186eb49d-44fd-4ea5-bd44-c6cf83a03fc5-client-ca\") pod \"route-controller-manager-6576b87f9c-45sw8\" (UID: \"186eb49d-44fd-4ea5-bd44-c6cf83a03fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-45sw8" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.505156 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47c66ae1-c442-4718-8663-6934eb402aea-serving-cert\") pod \"authentication-operator-69f744f599-pwhxn\" (UID: \"47c66ae1-c442-4718-8663-6934eb402aea\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pwhxn" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.505678 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/186eb49d-44fd-4ea5-bd44-c6cf83a03fc5-config\") pod \"route-controller-manager-6576b87f9c-45sw8\" (UID: \"186eb49d-44fd-4ea5-bd44-c6cf83a03fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-45sw8" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.505851 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3e563434-f9d2-4932-a791-cfffe2de6e5b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-hj79t\" (UID: \"3e563434-f9d2-4932-a791-cfffe2de6e5b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hj79t" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.505925 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4cdd9"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.506285 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-audit-policies\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.506633 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.506698 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.506684 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fjhdc"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.506996 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db2cfad6-b1c8-46ee-8f79-6072ffb59471-client-ca\") pod \"controller-manager-879f6c89f-2d8vs\" (UID: \"db2cfad6-b1c8-46ee-8f79-6072ffb59471\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2d8vs" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.507591 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6716c3af-de13-4b1c-a2b6-ebb3b968c617-audit-policies\") pod 
\"apiserver-7bbb656c7d-48755\" (UID: \"6716c3af-de13-4b1c-a2b6-ebb3b968c617\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.508082 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6716c3af-de13-4b1c-a2b6-ebb3b968c617-serving-cert\") pod \"apiserver-7bbb656c7d-48755\" (UID: \"6716c3af-de13-4b1c-a2b6-ebb3b968c617\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.508352 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/88bd7f5e-efc9-4ee9-88fd-d474e17c3bd6-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mzhlv\" (UID: \"88bd7f5e-efc9-4ee9-88fd-d474e17c3bd6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mzhlv" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.508820 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6be72510-28ab-44e6-ae93-6930c521dc8d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-tf29c\" (UID: \"6be72510-28ab-44e6-ae93-6930c521dc8d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tf29c" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.508842 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf366b45-962f-48b9-92e0-7a0c6ee35c7a-config\") pod \"machine-approver-56656f9798-mk9tc\" (UID: \"bf366b45-962f-48b9-92e0-7a0c6ee35c7a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mk9tc" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.509034 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/bf366b45-962f-48b9-92e0-7a0c6ee35c7a-auth-proxy-config\") pod \"machine-approver-56656f9798-mk9tc\" (UID: \"bf366b45-962f-48b9-92e0-7a0c6ee35c7a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mk9tc" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.509256 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/bf366b45-962f-48b9-92e0-7a0c6ee35c7a-machine-approver-tls\") pod \"machine-approver-56656f9798-mk9tc\" (UID: \"bf366b45-962f-48b9-92e0-7a0c6ee35c7a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mk9tc" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.509395 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-service-ca\") pod \"console-f9d7485db-4bs6h\" (UID: \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\") " pod="openshift-console/console-f9d7485db-4bs6h" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.509684 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fn4fs"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.509765 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lwqfx"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.509878 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6716c3af-de13-4b1c-a2b6-ebb3b968c617-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-48755\" (UID: \"6716c3af-de13-4b1c-a2b6-ebb3b968c617\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.510014 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-dns-operator/dns-operator-744455d44c-d27vq"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.510209 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.510684 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.510704 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.510711 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6716c3af-de13-4b1c-a2b6-ebb3b968c617-encryption-config\") pod \"apiserver-7bbb656c7d-48755\" (UID: \"6716c3af-de13-4b1c-a2b6-ebb3b968c617\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.510765 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/435cf100-c5b6-4b1d-80a0-48b7a2688d72-config\") pod \"machine-api-operator-5694c8668f-89dlr\" (UID: \"435cf100-c5b6-4b1d-80a0-48b7a2688d72\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-89dlr" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.510861 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db2cfad6-b1c8-46ee-8f79-6072ffb59471-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2d8vs\" (UID: \"db2cfad6-b1c8-46ee-8f79-6072ffb59471\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2d8vs" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.511344 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.512390 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/435cf100-c5b6-4b1d-80a0-48b7a2688d72-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-89dlr\" (UID: \"435cf100-c5b6-4b1d-80a0-48b7a2688d72\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-89dlr" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.512606 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m7bvc"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.512856 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.513860 4861 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rpbx6"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.515110 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-szdk4"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.516441 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5pg7p"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.518154 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.518852 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-48755"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.518994 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.521125 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7wkxt"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.522630 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525100-bq4kn"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 
13:12:12.523721 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-mvbrc"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.525542 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-znsxg"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.526214 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.529041 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-dgcks"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.529067 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x6mmf"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.530100 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-console-serving-cert\") pod \"console-f9d7485db-4bs6h\" (UID: \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\") " pod="openshift-console/console-f9d7485db-4bs6h" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.530192 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-9wmt2"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.532069 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/88bd7f5e-efc9-4ee9-88fd-d474e17c3bd6-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mzhlv\" (UID: \"88bd7f5e-efc9-4ee9-88fd-d474e17c3bd6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mzhlv" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.537936 4861 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-ingress-canary/ingress-canary-llq4c"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.539695 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-br4gr"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.539827 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-llq4c" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.540288 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-9wmt2" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.546990 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.550038 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qwjzc"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.554312 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-llq4c"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.556026 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-9wmt2"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.557386 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-frkp7"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.560118 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-x2c2k"] Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.560880 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-x2c2k" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.563545 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.583814 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.601805 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52ec89ec-1759-416b-87c8-b90b4194a960-serving-cert\") pod \"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.601952 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5kvj\" (UniqueName: \"kubernetes.io/projected/2372d9bc-babb-4932-8adb-a138b6c0ec28-kube-api-access-g5kvj\") pod \"ingress-operator-5b745b69d9-lwqfx\" (UID: \"2372d9bc-babb-4932-8adb-a138b6c0ec28\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwqfx" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.602049 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/089f2a06-1824-49c8-ad48-b119bf6a9d63-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8jrlx\" (UID: \"089f2a06-1824-49c8-ad48-b119bf6a9d63\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jrlx" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.602127 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/b2c3695d-6228-4722-8394-a31ec8e7333c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4cdd9\" (UID: \"b2c3695d-6228-4722-8394-a31ec8e7333c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4cdd9" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.602211 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9106f3fb-f347-44c0-8343-c3096803e845-config\") pod \"etcd-operator-b45778765-4hktn\" (UID: \"9106f3fb-f347-44c0-8343-c3096803e845\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4hktn" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.602310 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d3139b4-07d9-43eb-b9da-31d2c9218ab9-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-m7bvc\" (UID: \"8d3139b4-07d9-43eb-b9da-31d2c9218ab9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m7bvc" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.602409 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eb6e1d5f-e904-4931-9d35-e1b4b1a30361-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4cjxr\" (UID: \"eb6e1d5f-e904-4931-9d35-e1b4b1a30361\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4cjxr" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.602497 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52ec89ec-1759-416b-87c8-b90b4194a960-config\") pod \"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 
13:12:12.602590 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a027437-d0c1-4d20-9209-ac4815006ed3-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-t62dj\" (UID: \"5a027437-d0c1-4d20-9209-ac4815006ed3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t62dj" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.602675 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2372d9bc-babb-4932-8adb-a138b6c0ec28-trusted-ca\") pod \"ingress-operator-5b745b69d9-lwqfx\" (UID: \"2372d9bc-babb-4932-8adb-a138b6c0ec28\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwqfx" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.602755 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d093c48-67cb-4e6d-99bb-3c8df32c6f15-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cbzz2\" (UID: \"1d093c48-67cb-4e6d-99bb-3c8df32c6f15\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cbzz2" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.602826 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3e037e-4dc7-4240-b54f-20931407f4a3-service-ca-bundle\") pod \"router-default-5444994796-9zmzh\" (UID: \"ed3e037e-4dc7-4240-b54f-20931407f4a3\") " pod="openshift-ingress/router-default-5444994796-9zmzh" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.603108 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2372d9bc-babb-4932-8adb-a138b6c0ec28-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lwqfx\" (UID: \"2372d9bc-babb-4932-8adb-a138b6c0ec28\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwqfx" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.603184 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91f8886f-babc-495e-86fb-475d8582d6ac-serving-cert\") pod \"console-operator-58897d9998-gglfz\" (UID: \"91f8886f-babc-495e-86fb-475d8582d6ac\") " pod="openshift-console-operator/console-operator-58897d9998-gglfz" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.602595 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b2c3695d-6228-4722-8394-a31ec8e7333c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4cdd9\" (UID: \"b2c3695d-6228-4722-8394-a31ec8e7333c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4cdd9" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.603320 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21cf889c-d5ea-4188-95cf-46ee6f626ff1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tdpbv\" (UID: \"21cf889c-d5ea-4188-95cf-46ee6f626ff1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdpbv" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.603370 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/089f2a06-1824-49c8-ad48-b119bf6a9d63-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8jrlx\" (UID: \"089f2a06-1824-49c8-ad48-b119bf6a9d63\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jrlx" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.603459 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/52ec89ec-1759-416b-87c8-b90b4194a960-config\") pod \"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.603412 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1df1afb5-9ca0-4b18-93a7-80381e175ec4-srv-cert\") pod \"catalog-operator-68c6474976-fn4fs\" (UID: \"1df1afb5-9ca0-4b18-93a7-80381e175ec4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fn4fs" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.603602 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9106f3fb-f347-44c0-8343-c3096803e845-serving-cert\") pod \"etcd-operator-b45778765-4hktn\" (UID: \"9106f3fb-f347-44c0-8343-c3096803e845\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4hktn" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.603674 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92x29\" (UniqueName: \"kubernetes.io/projected/9106f3fb-f347-44c0-8343-c3096803e845-kube-api-access-92x29\") pod \"etcd-operator-b45778765-4hktn\" (UID: \"9106f3fb-f347-44c0-8343-c3096803e845\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4hktn" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.603749 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhb89\" (UniqueName: \"kubernetes.io/projected/1df1afb5-9ca0-4b18-93a7-80381e175ec4-kube-api-access-hhb89\") pod \"catalog-operator-68c6474976-fn4fs\" (UID: \"1df1afb5-9ca0-4b18-93a7-80381e175ec4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fn4fs" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.603820 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a027437-d0c1-4d20-9209-ac4815006ed3-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t62dj\" (UID: \"5a027437-d0c1-4d20-9209-ac4815006ed3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t62dj" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.603890 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/52ec89ec-1759-416b-87c8-b90b4194a960-etcd-serving-ca\") pod \"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.603961 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a90d3758-749b-4327-877f-ecb89c49b5e0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-szdk4\" (UID: \"a90d3758-749b-4327-877f-ecb89c49b5e0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-szdk4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.604038 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d3139b4-07d9-43eb-b9da-31d2c9218ab9-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-m7bvc\" (UID: \"8d3139b4-07d9-43eb-b9da-31d2c9218ab9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m7bvc" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.604109 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a027437-d0c1-4d20-9209-ac4815006ed3-config\") pod \"kube-apiserver-operator-766d6c64bb-t62dj\" (UID: 
\"5a027437-d0c1-4d20-9209-ac4815006ed3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t62dj" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.604185 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/089f2a06-1824-49c8-ad48-b119bf6a9d63-proxy-tls\") pod \"machine-config-operator-74547568cd-8jrlx\" (UID: \"089f2a06-1824-49c8-ad48-b119bf6a9d63\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jrlx" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.604264 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2372d9bc-babb-4932-8adb-a138b6c0ec28-metrics-tls\") pod \"ingress-operator-5b745b69d9-lwqfx\" (UID: \"2372d9bc-babb-4932-8adb-a138b6c0ec28\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwqfx" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.604341 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8smkr\" (UniqueName: \"kubernetes.io/projected/ed3e037e-4dc7-4240-b54f-20931407f4a3-kube-api-access-8smkr\") pod \"router-default-5444994796-9zmzh\" (UID: \"ed3e037e-4dc7-4240-b54f-20931407f4a3\") " pod="openshift-ingress/router-default-5444994796-9zmzh" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.604451 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ed3e037e-4dc7-4240-b54f-20931407f4a3-stats-auth\") pod \"router-default-5444994796-9zmzh\" (UID: \"ed3e037e-4dc7-4240-b54f-20931407f4a3\") " pod="openshift-ingress/router-default-5444994796-9zmzh" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.604528 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/eb6e1d5f-e904-4931-9d35-e1b4b1a30361-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4cjxr\" (UID: \"eb6e1d5f-e904-4931-9d35-e1b4b1a30361\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4cjxr" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.604541 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21cf889c-d5ea-4188-95cf-46ee6f626ff1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tdpbv\" (UID: \"21cf889c-d5ea-4188-95cf-46ee6f626ff1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdpbv" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.604609 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1df1afb5-9ca0-4b18-93a7-80381e175ec4-profile-collector-cert\") pod \"catalog-operator-68c6474976-fn4fs\" (UID: \"1df1afb5-9ca0-4b18-93a7-80381e175ec4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fn4fs" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.604645 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w8pm\" (UniqueName: \"kubernetes.io/projected/089f2a06-1824-49c8-ad48-b119bf6a9d63-kube-api-access-7w8pm\") pod \"machine-config-operator-74547568cd-8jrlx\" (UID: \"089f2a06-1824-49c8-ad48-b119bf6a9d63\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jrlx" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.604673 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/52ec89ec-1759-416b-87c8-b90b4194a960-node-pullsecrets\") pod \"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 
13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.604699 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/52ec89ec-1759-416b-87c8-b90b4194a960-encryption-config\") pod \"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.604723 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpg9h\" (UniqueName: \"kubernetes.io/projected/21cf889c-d5ea-4188-95cf-46ee6f626ff1-kube-api-access-hpg9h\") pod \"openshift-apiserver-operator-796bbdcf4f-tdpbv\" (UID: \"21cf889c-d5ea-4188-95cf-46ee6f626ff1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdpbv" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.604758 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7fcj\" (UniqueName: \"kubernetes.io/projected/aad5d49e-b605-4151-a30d-1db9ffa9b99b-kube-api-access-w7fcj\") pod \"package-server-manager-789f6589d5-br4gr\" (UID: \"aad5d49e-b605-4151-a30d-1db9ffa9b99b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-br4gr" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.604788 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52ec89ec-1759-416b-87c8-b90b4194a960-trusted-ca-bundle\") pod \"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.604816 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a1bd7de0-543b-45cf-8ca8-b647d17671eb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5pg7p\" (UID: \"a1bd7de0-543b-45cf-8ca8-b647d17671eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5pg7p" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.604863 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/52ec89ec-1759-416b-87c8-b90b4194a960-audit\") pod \"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.604915 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9106f3fb-f347-44c0-8343-c3096803e845-etcd-ca\") pod \"etcd-operator-b45778765-4hktn\" (UID: \"9106f3fb-f347-44c0-8343-c3096803e845\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4hktn" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.604946 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91f8886f-babc-495e-86fb-475d8582d6ac-config\") pod \"console-operator-58897d9998-gglfz\" (UID: \"91f8886f-babc-495e-86fb-475d8582d6ac\") " pod="openshift-console-operator/console-operator-58897d9998-gglfz" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.604976 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzcs9\" (UniqueName: \"kubernetes.io/projected/25efe6ba-5133-429c-9b89-63bdc857d930-kube-api-access-nzcs9\") pod \"dns-operator-744455d44c-d27vq\" (UID: \"25efe6ba-5133-429c-9b89-63bdc857d930\") " pod="openshift-dns-operator/dns-operator-744455d44c-d27vq" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.605014 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tl2bz\" (UniqueName: \"kubernetes.io/projected/52ec89ec-1759-416b-87c8-b90b4194a960-kube-api-access-tl2bz\") pod \"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.605051 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flll9\" (UniqueName: \"kubernetes.io/projected/b2c3695d-6228-4722-8394-a31ec8e7333c-kube-api-access-flll9\") pod \"openshift-config-operator-7777fb866f-4cdd9\" (UID: \"b2c3695d-6228-4722-8394-a31ec8e7333c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4cdd9" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.605061 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/52ec89ec-1759-416b-87c8-b90b4194a960-etcd-serving-ca\") pod \"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.605068 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/52ec89ec-1759-416b-87c8-b90b4194a960-node-pullsecrets\") pod \"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.605084 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d093c48-67cb-4e6d-99bb-3c8df32c6f15-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cbzz2\" (UID: \"1d093c48-67cb-4e6d-99bb-3c8df32c6f15\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cbzz2" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.605116 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clx4q\" (UniqueName: \"kubernetes.io/projected/eb6e1d5f-e904-4931-9d35-e1b4b1a30361-kube-api-access-clx4q\") pod \"machine-config-controller-84d6567774-4cjxr\" (UID: \"eb6e1d5f-e904-4931-9d35-e1b4b1a30361\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4cjxr" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.605144 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/aad5d49e-b605-4151-a30d-1db9ffa9b99b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-br4gr\" (UID: \"aad5d49e-b605-4151-a30d-1db9ffa9b99b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-br4gr" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.605169 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j67mz\" (UniqueName: \"kubernetes.io/projected/f3bb6cb9-0c94-47e3-a796-dca0ba8491ae-kube-api-access-j67mz\") pod \"migrator-59844c95c7-7wkxt\" (UID: \"f3bb6cb9-0c94-47e3-a796-dca0ba8491ae\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7wkxt" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.604458 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.605201 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq2lh\" (UniqueName: \"kubernetes.io/projected/a1bd7de0-543b-45cf-8ca8-b647d17671eb-kube-api-access-fq2lh\") pod \"control-plane-machine-set-operator-78cbb6b69f-5pg7p\" (UID: 
\"a1bd7de0-543b-45cf-8ca8-b647d17671eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5pg7p" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.605238 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52ec89ec-1759-416b-87c8-b90b4194a960-etcd-client\") pod \"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.605259 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pmwz\" (UniqueName: \"kubernetes.io/projected/91f8886f-babc-495e-86fb-475d8582d6ac-kube-api-access-5pmwz\") pod \"console-operator-58897d9998-gglfz\" (UID: \"91f8886f-babc-495e-86fb-475d8582d6ac\") " pod="openshift-console-operator/console-operator-58897d9998-gglfz" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.605296 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/089f2a06-1824-49c8-ad48-b119bf6a9d63-images\") pod \"machine-config-operator-74547568cd-8jrlx\" (UID: \"089f2a06-1824-49c8-ad48-b119bf6a9d63\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jrlx" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.605317 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgm5q\" (UniqueName: \"kubernetes.io/projected/a90d3758-749b-4327-877f-ecb89c49b5e0-kube-api-access-cgm5q\") pod \"kube-storage-version-migrator-operator-b67b599dd-szdk4\" (UID: \"a90d3758-749b-4327-877f-ecb89c49b5e0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-szdk4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.605336 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/52ec89ec-1759-416b-87c8-b90b4194a960-image-import-ca\") pod \"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.605358 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/52ec89ec-1759-416b-87c8-b90b4194a960-audit-dir\") pod \"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.605536 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91f8886f-babc-495e-86fb-475d8582d6ac-trusted-ca\") pod \"console-operator-58897d9998-gglfz\" (UID: \"91f8886f-babc-495e-86fb-475d8582d6ac\") " pod="openshift-console-operator/console-operator-58897d9998-gglfz" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.605569 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed3e037e-4dc7-4240-b54f-20931407f4a3-metrics-certs\") pod \"router-default-5444994796-9zmzh\" (UID: \"ed3e037e-4dc7-4240-b54f-20931407f4a3\") " pod="openshift-ingress/router-default-5444994796-9zmzh" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.605596 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d3139b4-07d9-43eb-b9da-31d2c9218ab9-config\") pod \"kube-controller-manager-operator-78b949d7b-m7bvc\" (UID: \"8d3139b4-07d9-43eb-b9da-31d2c9218ab9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m7bvc" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.605619 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ed3e037e-4dc7-4240-b54f-20931407f4a3-default-certificate\") pod \"router-default-5444994796-9zmzh\" (UID: \"ed3e037e-4dc7-4240-b54f-20931407f4a3\") " pod="openshift-ingress/router-default-5444994796-9zmzh" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.605646 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9106f3fb-f347-44c0-8343-c3096803e845-etcd-service-ca\") pod \"etcd-operator-b45778765-4hktn\" (UID: \"9106f3fb-f347-44c0-8343-c3096803e845\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4hktn" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.605673 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a90d3758-749b-4327-877f-ecb89c49b5e0-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-szdk4\" (UID: \"a90d3758-749b-4327-877f-ecb89c49b5e0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-szdk4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.605702 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9106f3fb-f347-44c0-8343-c3096803e845-etcd-client\") pod \"etcd-operator-b45778765-4hktn\" (UID: \"9106f3fb-f347-44c0-8343-c3096803e845\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4hktn" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.605725 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eb6e1d5f-e904-4931-9d35-e1b4b1a30361-proxy-tls\") pod \"machine-config-controller-84d6567774-4cjxr\" (UID: \"eb6e1d5f-e904-4931-9d35-e1b4b1a30361\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4cjxr" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.605765 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d093c48-67cb-4e6d-99bb-3c8df32c6f15-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cbzz2\" (UID: \"1d093c48-67cb-4e6d-99bb-3c8df32c6f15\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cbzz2" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.605790 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/25efe6ba-5133-429c-9b89-63bdc857d930-metrics-tls\") pod \"dns-operator-744455d44c-d27vq\" (UID: \"25efe6ba-5133-429c-9b89-63bdc857d930\") " pod="openshift-dns-operator/dns-operator-744455d44c-d27vq" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.605817 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2c3695d-6228-4722-8394-a31ec8e7333c-serving-cert\") pod \"openshift-config-operator-7777fb866f-4cdd9\" (UID: \"b2c3695d-6228-4722-8394-a31ec8e7333c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4cdd9" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.605913 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/52ec89ec-1759-416b-87c8-b90b4194a960-audit\") pod \"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.606337 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a027437-d0c1-4d20-9209-ac4815006ed3-config\") pod 
\"kube-apiserver-operator-766d6c64bb-t62dj\" (UID: \"5a027437-d0c1-4d20-9209-ac4815006ed3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t62dj" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.606536 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/52ec89ec-1759-416b-87c8-b90b4194a960-audit-dir\") pod \"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.606674 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52ec89ec-1759-416b-87c8-b90b4194a960-trusted-ca-bundle\") pod \"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.606773 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91f8886f-babc-495e-86fb-475d8582d6ac-config\") pod \"console-operator-58897d9998-gglfz\" (UID: \"91f8886f-babc-495e-86fb-475d8582d6ac\") " pod="openshift-console-operator/console-operator-58897d9998-gglfz" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.607143 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21cf889c-d5ea-4188-95cf-46ee6f626ff1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tdpbv\" (UID: \"21cf889c-d5ea-4188-95cf-46ee6f626ff1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdpbv" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.607483 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91f8886f-babc-495e-86fb-475d8582d6ac-trusted-ca\") 
pod \"console-operator-58897d9998-gglfz\" (UID: \"91f8886f-babc-495e-86fb-475d8582d6ac\") " pod="openshift-console-operator/console-operator-58897d9998-gglfz" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.607991 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/52ec89ec-1759-416b-87c8-b90b4194a960-image-import-ca\") pod \"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.608453 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/52ec89ec-1759-416b-87c8-b90b4194a960-encryption-config\") pod \"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.608748 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91f8886f-babc-495e-86fb-475d8582d6ac-serving-cert\") pod \"console-operator-58897d9998-gglfz\" (UID: \"91f8886f-babc-495e-86fb-475d8582d6ac\") " pod="openshift-console-operator/console-operator-58897d9998-gglfz" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.609279 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a027437-d0c1-4d20-9209-ac4815006ed3-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-t62dj\" (UID: \"5a027437-d0c1-4d20-9209-ac4815006ed3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t62dj" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.610255 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/52ec89ec-1759-416b-87c8-b90b4194a960-serving-cert\") pod \"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.610411 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2c3695d-6228-4722-8394-a31ec8e7333c-serving-cert\") pod \"openshift-config-operator-7777fb866f-4cdd9\" (UID: \"b2c3695d-6228-4722-8394-a31ec8e7333c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4cdd9" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.610508 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21cf889c-d5ea-4188-95cf-46ee6f626ff1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tdpbv\" (UID: \"21cf889c-d5ea-4188-95cf-46ee6f626ff1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdpbv" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.610807 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2372d9bc-babb-4932-8adb-a138b6c0ec28-metrics-tls\") pod \"ingress-operator-5b745b69d9-lwqfx\" (UID: \"2372d9bc-babb-4932-8adb-a138b6c0ec28\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwqfx" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.611199 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52ec89ec-1759-416b-87c8-b90b4194a960-etcd-client\") pod \"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.630316 4861 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"trusted-ca" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.634864 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2372d9bc-babb-4932-8adb-a138b6c0ec28-trusted-ca\") pod \"ingress-operator-5b745b69d9-lwqfx\" (UID: \"2372d9bc-babb-4932-8adb-a138b6c0ec28\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwqfx" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.644627 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.675945 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.687378 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.698030 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a90d3758-749b-4327-877f-ecb89c49b5e0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-szdk4\" (UID: \"a90d3758-749b-4327-877f-ecb89c49b5e0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-szdk4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.703545 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.707198 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a90d3758-749b-4327-877f-ecb89c49b5e0-config\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-szdk4\" (UID: \"a90d3758-749b-4327-877f-ecb89c49b5e0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-szdk4" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.723711 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.743292 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.763159 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.784229 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.788514 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9106f3fb-f347-44c0-8343-c3096803e845-serving-cert\") pod \"etcd-operator-b45778765-4hktn\" (UID: \"9106f3fb-f347-44c0-8343-c3096803e845\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4hktn" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.804229 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.811372 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9106f3fb-f347-44c0-8343-c3096803e845-etcd-client\") pod \"etcd-operator-b45778765-4hktn\" (UID: \"9106f3fb-f347-44c0-8343-c3096803e845\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4hktn" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 
13:12:12.823441 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.835914 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9106f3fb-f347-44c0-8343-c3096803e845-config\") pod \"etcd-operator-b45778765-4hktn\" (UID: \"9106f3fb-f347-44c0-8343-c3096803e845\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4hktn" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.843592 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.846377 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9106f3fb-f347-44c0-8343-c3096803e845-etcd-ca\") pod \"etcd-operator-b45778765-4hktn\" (UID: \"9106f3fb-f347-44c0-8343-c3096803e845\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4hktn" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.864321 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.867807 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9106f3fb-f347-44c0-8343-c3096803e845-etcd-service-ca\") pod \"etcd-operator-b45778765-4hktn\" (UID: \"9106f3fb-f347-44c0-8343-c3096803e845\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4hktn" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.885790 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.905106 4861 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.923896 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.944102 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.965060 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 19 13:12:12 crc kubenswrapper[4861]: I0219 13:12:12.984986 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.000564 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d3139b4-07d9-43eb-b9da-31d2c9218ab9-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-m7bvc\" (UID: \"8d3139b4-07d9-43eb-b9da-31d2c9218ab9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m7bvc" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.004505 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.007036 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d3139b4-07d9-43eb-b9da-31d2c9218ab9-config\") pod \"kube-controller-manager-operator-78b949d7b-m7bvc\" (UID: \"8d3139b4-07d9-43eb-b9da-31d2c9218ab9\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m7bvc" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.023596 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.043830 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.063706 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.084035 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.104480 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.110215 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/25efe6ba-5133-429c-9b89-63bdc857d930-metrics-tls\") pod \"dns-operator-744455d44c-d27vq\" (UID: \"25efe6ba-5133-429c-9b89-63bdc857d930\") " pod="openshift-dns-operator/dns-operator-744455d44c-d27vq" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.124619 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.143822 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.163996 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 19 
13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.171371 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d093c48-67cb-4e6d-99bb-3c8df32c6f15-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cbzz2\" (UID: \"1d093c48-67cb-4e6d-99bb-3c8df32c6f15\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cbzz2" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.187919 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.195867 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d093c48-67cb-4e6d-99bb-3c8df32c6f15-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cbzz2\" (UID: \"1d093c48-67cb-4e6d-99bb-3c8df32c6f15\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cbzz2" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.204102 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.224153 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.233985 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eb6e1d5f-e904-4931-9d35-e1b4b1a30361-proxy-tls\") pod \"machine-config-controller-84d6567774-4cjxr\" (UID: \"eb6e1d5f-e904-4931-9d35-e1b4b1a30361\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4cjxr" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.243682 4861 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.263248 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.285145 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.299931 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1df1afb5-9ca0-4b18-93a7-80381e175ec4-srv-cert\") pod \"catalog-operator-68c6474976-fn4fs\" (UID: \"1df1afb5-9ca0-4b18-93a7-80381e175ec4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fn4fs" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.304058 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.309866 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1df1afb5-9ca0-4b18-93a7-80381e175ec4-profile-collector-cert\") pod \"catalog-operator-68c6474976-fn4fs\" (UID: \"1df1afb5-9ca0-4b18-93a7-80381e175ec4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fn4fs" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.323929 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.344851 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.363561 4861 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.384481 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.393687 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ed3e037e-4dc7-4240-b54f-20931407f4a3-default-certificate\") pod \"router-default-5444994796-9zmzh\" (UID: \"ed3e037e-4dc7-4240-b54f-20931407f4a3\") " pod="openshift-ingress/router-default-5444994796-9zmzh" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.405138 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.420766 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed3e037e-4dc7-4240-b54f-20931407f4a3-metrics-certs\") pod \"router-default-5444994796-9zmzh\" (UID: \"ed3e037e-4dc7-4240-b54f-20931407f4a3\") " pod="openshift-ingress/router-default-5444994796-9zmzh" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.421880 4861 request.go:700] Waited for 1.019593898s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-stats-default&limit=500&resourceVersion=0 Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.423888 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.430475 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ed3e037e-4dc7-4240-b54f-20931407f4a3-stats-auth\") pod 
\"router-default-5444994796-9zmzh\" (UID: \"ed3e037e-4dc7-4240-b54f-20931407f4a3\") " pod="openshift-ingress/router-default-5444994796-9zmzh" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.444853 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.454741 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3e037e-4dc7-4240-b54f-20931407f4a3-service-ca-bundle\") pod \"router-default-5444994796-9zmzh\" (UID: \"ed3e037e-4dc7-4240-b54f-20931407f4a3\") " pod="openshift-ingress/router-default-5444994796-9zmzh" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.464219 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.484195 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.504338 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.507040 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/089f2a06-1824-49c8-ad48-b119bf6a9d63-images\") pod \"machine-config-operator-74547568cd-8jrlx\" (UID: \"089f2a06-1824-49c8-ad48-b119bf6a9d63\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jrlx" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.523284 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.543046 4861 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.549394 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/089f2a06-1824-49c8-ad48-b119bf6a9d63-proxy-tls\") pod \"machine-config-operator-74547568cd-8jrlx\" (UID: \"089f2a06-1824-49c8-ad48-b119bf6a9d63\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jrlx" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.563454 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.584022 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.603549 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 19 13:12:13 crc kubenswrapper[4861]: E0219 13:12:13.608410 4861 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 19 13:12:13 crc kubenswrapper[4861]: E0219 13:12:13.608505 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aad5d49e-b605-4151-a30d-1db9ffa9b99b-package-server-manager-serving-cert podName:aad5d49e-b605-4151-a30d-1db9ffa9b99b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:14.108483337 +0000 UTC m=+148.769586575 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/aad5d49e-b605-4151-a30d-1db9ffa9b99b-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-br4gr" (UID: "aad5d49e-b605-4151-a30d-1db9ffa9b99b") : failed to sync secret cache: timed out waiting for the condition Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.612167 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a1bd7de0-543b-45cf-8ca8-b647d17671eb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5pg7p\" (UID: \"a1bd7de0-543b-45cf-8ca8-b647d17671eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5pg7p" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.623193 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.643899 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.703171 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.723734 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.743076 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.763661 4861 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.784042 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.813151 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.824218 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.844272 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.864198 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.884606 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.903163 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.923035 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.943220 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 19 13:12:13 crc kubenswrapper[4861]: I0219 13:12:13.962788 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 19 13:12:13 crc kubenswrapper[4861]: 
I0219 13:12:13.983973 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.004105 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.023142 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.043550 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.063495 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.083924 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.103884 4861 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.124175 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.128615 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/aad5d49e-b605-4151-a30d-1db9ffa9b99b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-br4gr\" (UID: \"aad5d49e-b605-4151-a30d-1db9ffa9b99b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-br4gr" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.134168 4861 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/aad5d49e-b605-4151-a30d-1db9ffa9b99b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-br4gr\" (UID: \"aad5d49e-b605-4151-a30d-1db9ffa9b99b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-br4gr" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.160297 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dklk\" (UniqueName: \"kubernetes.io/projected/186eb49d-44fd-4ea5-bd44-c6cf83a03fc5-kube-api-access-8dklk\") pod \"route-controller-manager-6576b87f9c-45sw8\" (UID: \"186eb49d-44fd-4ea5-bd44-c6cf83a03fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-45sw8" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.177016 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b2js\" (UniqueName: \"kubernetes.io/projected/435cf100-c5b6-4b1d-80a0-48b7a2688d72-kube-api-access-8b2js\") pod \"machine-api-operator-5694c8668f-89dlr\" (UID: \"435cf100-c5b6-4b1d-80a0-48b7a2688d72\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-89dlr" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.197123 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c59z2\" (UniqueName: \"kubernetes.io/projected/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-kube-api-access-c59z2\") pod \"console-f9d7485db-4bs6h\" (UID: \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\") " pod="openshift-console/console-f9d7485db-4bs6h" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.225368 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95z78\" (UniqueName: \"kubernetes.io/projected/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-kube-api-access-95z78\") pod \"oauth-openshift-558db77b4-6txts\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.228113 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.249057 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xj4b\" (UniqueName: \"kubernetes.io/projected/6716c3af-de13-4b1c-a2b6-ebb3b968c617-kube-api-access-9xj4b\") pod \"apiserver-7bbb656c7d-48755\" (UID: \"6716c3af-de13-4b1c-a2b6-ebb3b968c617\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.255266 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-89dlr" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.257569 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8dff\" (UniqueName: \"kubernetes.io/projected/bf366b45-962f-48b9-92e0-7a0c6ee35c7a-kube-api-access-t8dff\") pod \"machine-approver-56656f9798-mk9tc\" (UID: \"bf366b45-962f-48b9-92e0-7a0c6ee35c7a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mk9tc" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.279516 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.284049 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/88bd7f5e-efc9-4ee9-88fd-d474e17c3bd6-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mzhlv\" (UID: \"88bd7f5e-efc9-4ee9-88fd-d474e17c3bd6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mzhlv" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.288694 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4bs6h" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.302784 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvz49\" (UniqueName: \"kubernetes.io/projected/88bd7f5e-efc9-4ee9-88fd-d474e17c3bd6-kube-api-access-vvz49\") pod \"cluster-image-registry-operator-dc59b4c8b-mzhlv\" (UID: \"88bd7f5e-efc9-4ee9-88fd-d474e17c3bd6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mzhlv" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.321868 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jsch\" (UniqueName: \"kubernetes.io/projected/47c66ae1-c442-4718-8663-6934eb402aea-kube-api-access-4jsch\") pod \"authentication-operator-69f744f599-pwhxn\" (UID: \"47c66ae1-c442-4718-8663-6934eb402aea\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pwhxn" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.349103 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jllx2\" (UniqueName: \"kubernetes.io/projected/6be72510-28ab-44e6-ae93-6930c521dc8d-kube-api-access-jllx2\") pod \"openshift-controller-manager-operator-756b6f6bc6-tf29c\" (UID: \"6be72510-28ab-44e6-ae93-6930c521dc8d\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tf29c" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.359566 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvbf7\" (UniqueName: \"kubernetes.io/projected/3e563434-f9d2-4932-a791-cfffe2de6e5b-kube-api-access-nvbf7\") pod \"cluster-samples-operator-665b6dd947-hj79t\" (UID: \"3e563434-f9d2-4932-a791-cfffe2de6e5b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hj79t" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.380400 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg8wj\" (UniqueName: \"kubernetes.io/projected/59847917-735f-49c7-99b2-599facec7e03-kube-api-access-jg8wj\") pod \"downloads-7954f5f757-rw85s\" (UID: \"59847917-735f-49c7-99b2-599facec7e03\") " pod="openshift-console/downloads-7954f5f757-rw85s" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.402739 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.404020 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n48gh\" (UniqueName: \"kubernetes.io/projected/db2cfad6-b1c8-46ee-8f79-6072ffb59471-kube-api-access-n48gh\") pod \"controller-manager-879f6c89f-2d8vs\" (UID: \"db2cfad6-b1c8-46ee-8f79-6072ffb59471\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2d8vs" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.423056 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.430670 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-45sw8" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.441873 4861 request.go:700] Waited for 1.901175607s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&limit=500&resourceVersion=0 Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.443316 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.462859 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-89dlr"] Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.464080 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.464462 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mk9tc" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.487078 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.492846 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pwhxn" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.505740 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6txts"] Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.506100 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.507073 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tf29c" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.523230 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.537525 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mzhlv" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.544634 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-48755"] Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.547342 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 19 13:12:14 crc kubenswrapper[4861]: W0219 13:12:14.556191 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6716c3af_de13_4b1c_a2b6_ebb3b968c617.slice/crio-225a6a4fae1f7984e0b0fcf30ebb1adb807abed18c39c44435498e86ec0af562 WatchSource:0}: Error finding container 225a6a4fae1f7984e0b0fcf30ebb1adb807abed18c39c44435498e86ec0af562: Status 404 returned error can't find the container with id 225a6a4fae1f7984e0b0fcf30ebb1adb807abed18c39c44435498e86ec0af562 Feb 
19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.563265 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.567040 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-rw85s" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.588565 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.596874 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hj79t" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.626964 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5kvj\" (UniqueName: \"kubernetes.io/projected/2372d9bc-babb-4932-8adb-a138b6c0ec28-kube-api-access-g5kvj\") pod \"ingress-operator-5b745b69d9-lwqfx\" (UID: \"2372d9bc-babb-4932-8adb-a138b6c0ec28\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwqfx" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.627895 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-45sw8"] Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.652492 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d3139b4-07d9-43eb-b9da-31d2c9218ab9-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-m7bvc\" (UID: \"8d3139b4-07d9-43eb-b9da-31d2c9218ab9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m7bvc" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.662374 4861 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2372d9bc-babb-4932-8adb-a138b6c0ec28-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lwqfx\" (UID: \"2372d9bc-babb-4932-8adb-a138b6c0ec28\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwqfx" Feb 19 13:12:14 crc kubenswrapper[4861]: W0219 13:12:14.671898 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod186eb49d_44fd_4ea5_bd44_c6cf83a03fc5.slice/crio-db2def281b1a90bf549feb13e850c9ac987aeaceb93ee34bac0738d9a528600f WatchSource:0}: Error finding container db2def281b1a90bf549feb13e850c9ac987aeaceb93ee34bac0738d9a528600f: Status 404 returned error can't find the container with id db2def281b1a90bf549feb13e850c9ac987aeaceb93ee34bac0738d9a528600f Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.686041 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhb89\" (UniqueName: \"kubernetes.io/projected/1df1afb5-9ca0-4b18-93a7-80381e175ec4-kube-api-access-hhb89\") pod \"catalog-operator-68c6474976-fn4fs\" (UID: \"1df1afb5-9ca0-4b18-93a7-80381e175ec4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fn4fs" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.693742 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2d8vs" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.700351 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwqfx" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.704451 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92x29\" (UniqueName: \"kubernetes.io/projected/9106f3fb-f347-44c0-8343-c3096803e845-kube-api-access-92x29\") pod \"etcd-operator-b45778765-4hktn\" (UID: \"9106f3fb-f347-44c0-8343-c3096803e845\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4hktn" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.719731 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a027437-d0c1-4d20-9209-ac4815006ed3-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t62dj\" (UID: \"5a027437-d0c1-4d20-9209-ac4815006ed3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t62dj" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.739124 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-4hktn" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.739590 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m7bvc" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.741846 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpg9h\" (UniqueName: \"kubernetes.io/projected/21cf889c-d5ea-4188-95cf-46ee6f626ff1-kube-api-access-hpg9h\") pod \"openshift-apiserver-operator-796bbdcf4f-tdpbv\" (UID: \"21cf889c-d5ea-4188-95cf-46ee6f626ff1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdpbv" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.768351 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w8pm\" (UniqueName: \"kubernetes.io/projected/089f2a06-1824-49c8-ad48-b119bf6a9d63-kube-api-access-7w8pm\") pod \"machine-config-operator-74547568cd-8jrlx\" (UID: \"089f2a06-1824-49c8-ad48-b119bf6a9d63\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jrlx" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.783908 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tf29c"] Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.789183 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mzhlv"] Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.789593 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8smkr\" (UniqueName: \"kubernetes.io/projected/ed3e037e-4dc7-4240-b54f-20931407f4a3-kube-api-access-8smkr\") pod \"router-default-5444994796-9zmzh\" (UID: \"ed3e037e-4dc7-4240-b54f-20931407f4a3\") " pod="openshift-ingress/router-default-5444994796-9zmzh" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.790925 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console/console-f9d7485db-4bs6h"] Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.800889 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fn4fs" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.807617 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-9zmzh" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.805761 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7fcj\" (UniqueName: \"kubernetes.io/projected/aad5d49e-b605-4151-a30d-1db9ffa9b99b-kube-api-access-w7fcj\") pod \"package-server-manager-789f6589d5-br4gr\" (UID: \"aad5d49e-b605-4151-a30d-1db9ffa9b99b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-br4gr" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.813180 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jrlx" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.822539 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl2bz\" (UniqueName: \"kubernetes.io/projected/52ec89ec-1759-416b-87c8-b90b4194a960-kube-api-access-tl2bz\") pod \"apiserver-76f77b778f-lgkf4\" (UID: \"52ec89ec-1759-416b-87c8-b90b4194a960\") " pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:14 crc kubenswrapper[4861]: W0219 13:12:14.824785 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88bd7f5e_efc9_4ee9_88fd_d474e17c3bd6.slice/crio-87f3202e597c0cc100fac2f237583882161b508d0e26a4488ec6495c312a5f13 WatchSource:0}: Error finding container 87f3202e597c0cc100fac2f237583882161b508d0e26a4488ec6495c312a5f13: Status 404 returned error can't find the container with id 87f3202e597c0cc100fac2f237583882161b508d0e26a4488ec6495c312a5f13 Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.831720 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pwhxn"] Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.834571 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-br4gr" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.838325 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pmwz\" (UniqueName: \"kubernetes.io/projected/91f8886f-babc-495e-86fb-475d8582d6ac-kube-api-access-5pmwz\") pod \"console-operator-58897d9998-gglfz\" (UID: \"91f8886f-babc-495e-86fb-475d8582d6ac\") " pod="openshift-console-operator/console-operator-58897d9998-gglfz" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.870105 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j67mz\" (UniqueName: \"kubernetes.io/projected/f3bb6cb9-0c94-47e3-a796-dca0ba8491ae-kube-api-access-j67mz\") pod \"migrator-59844c95c7-7wkxt\" (UID: \"f3bb6cb9-0c94-47e3-a796-dca0ba8491ae\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7wkxt" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.891098 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq2lh\" (UniqueName: \"kubernetes.io/projected/a1bd7de0-543b-45cf-8ca8-b647d17671eb-kube-api-access-fq2lh\") pod \"control-plane-machine-set-operator-78cbb6b69f-5pg7p\" (UID: \"a1bd7de0-543b-45cf-8ca8-b647d17671eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5pg7p" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.905343 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzcs9\" (UniqueName: \"kubernetes.io/projected/25efe6ba-5133-429c-9b89-63bdc857d930-kube-api-access-nzcs9\") pod \"dns-operator-744455d44c-d27vq\" (UID: \"25efe6ba-5133-429c-9b89-63bdc857d930\") " pod="openshift-dns-operator/dns-operator-744455d44c-d27vq" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.923773 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1d093c48-67cb-4e6d-99bb-3c8df32c6f15-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cbzz2\" (UID: \"1d093c48-67cb-4e6d-99bb-3c8df32c6f15\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cbzz2" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.946226 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clx4q\" (UniqueName: \"kubernetes.io/projected/eb6e1d5f-e904-4931-9d35-e1b4b1a30361-kube-api-access-clx4q\") pod \"machine-config-controller-84d6567774-4cjxr\" (UID: \"eb6e1d5f-e904-4931-9d35-e1b4b1a30361\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4cjxr" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.962886 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdpbv" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.964018 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flll9\" (UniqueName: \"kubernetes.io/projected/b2c3695d-6228-4722-8394-a31ec8e7333c-kube-api-access-flll9\") pod \"openshift-config-operator-7777fb866f-4cdd9\" (UID: \"b2c3695d-6228-4722-8394-a31ec8e7333c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4cdd9" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.970504 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gglfz" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.982935 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.983553 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7wkxt" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.988122 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgm5q\" (UniqueName: \"kubernetes.io/projected/a90d3758-749b-4327-877f-ecb89c49b5e0-kube-api-access-cgm5q\") pod \"kube-storage-version-migrator-operator-b67b599dd-szdk4\" (UID: \"a90d3758-749b-4327-877f-ecb89c49b5e0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-szdk4" Feb 19 13:12:14 crc kubenswrapper[4861]: I0219 13:12:14.995674 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t62dj" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.006398 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-szdk4" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.048765 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsknd\" (UniqueName: \"kubernetes.io/projected/7425897a-821a-4293-8d9c-3b0c5744bbc9-kube-api-access-jsknd\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.048814 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7425897a-821a-4293-8d9c-3b0c5744bbc9-ca-trust-extracted\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.048839 4861 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7425897a-821a-4293-8d9c-3b0c5744bbc9-bound-sa-token\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.048897 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7425897a-821a-4293-8d9c-3b0c5744bbc9-installation-pull-secrets\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.048926 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7425897a-821a-4293-8d9c-3b0c5744bbc9-registry-certificates\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.048971 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.048989 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7spb\" (UniqueName: \"kubernetes.io/projected/6502d829-f644-4207-823f-2ca6a0d682aa-kube-api-access-r7spb\") pod 
\"multus-admission-controller-857f4d67dd-rpbx6\" (UID: \"6502d829-f644-4207-823f-2ca6a0d682aa\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rpbx6" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.049019 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7425897a-821a-4293-8d9c-3b0c5744bbc9-registry-tls\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.049037 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6502d829-f644-4207-823f-2ca6a0d682aa-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rpbx6\" (UID: \"6502d829-f644-4207-823f-2ca6a0d682aa\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rpbx6" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.049053 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7425897a-821a-4293-8d9c-3b0c5744bbc9-trusted-ca\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:15 crc kubenswrapper[4861]: E0219 13:12:15.049623 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:15.549605267 +0000 UTC m=+150.210708565 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.053146 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-d27vq" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.078360 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cbzz2" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.096465 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4cjxr" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.115275 4861 generic.go:334] "Generic (PLEG): container finished" podID="6716c3af-de13-4b1c-a2b6-ebb3b968c617" containerID="fa99e229a61c8653c70a031a346c9bd738848fc32ce36579643d1dbb12c68a2a" exitCode=0 Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.115619 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755" event={"ID":"6716c3af-de13-4b1c-a2b6-ebb3b968c617","Type":"ContainerDied","Data":"fa99e229a61c8653c70a031a346c9bd738848fc32ce36579643d1dbb12c68a2a"} Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.115643 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755" 
event={"ID":"6716c3af-de13-4b1c-a2b6-ebb3b968c617","Type":"ContainerStarted","Data":"225a6a4fae1f7984e0b0fcf30ebb1adb807abed18c39c44435498e86ec0af562"} Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.129456 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5pg7p" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.135870 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-45sw8" event={"ID":"186eb49d-44fd-4ea5-bd44-c6cf83a03fc5","Type":"ContainerStarted","Data":"5001fa248991aa88551b064bc544b7fcbded844ee0b56168dac1685d46b4716c"} Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.135916 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-45sw8" event={"ID":"186eb49d-44fd-4ea5-bd44-c6cf83a03fc5","Type":"ContainerStarted","Data":"db2def281b1a90bf549feb13e850c9ac987aeaceb93ee34bac0738d9a528600f"} Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.136261 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-45sw8" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.145494 4861 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-45sw8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.145685 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-45sw8" podUID="186eb49d-44fd-4ea5-bd44-c6cf83a03fc5" containerName="route-controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.145952 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-rw85s"] Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.146660 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4bs6h" event={"ID":"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b","Type":"ContainerStarted","Data":"2a8432bb8a9eadcdce87c07656bc18e3212bf2c2a39484babde5f35eeec1e761"} Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.149097 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6txts" event={"ID":"abc01b0c-a636-4dc5-bf6d-d2efde512ed5","Type":"ContainerStarted","Data":"7380283201e05389c8d8eec1e52217966d73a8c949218658182d672c1cae64a7"} Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.149121 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6txts" event={"ID":"abc01b0c-a636-4dc5-bf6d-d2efde512ed5","Type":"ContainerStarted","Data":"35f478897c5f47851ae7f9bb19f7b827a062c13015d12cdd61603e8ea6e0e3a9"} Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.149462 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.149592 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.149759 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/60256506-33fa-4551-bbb5-851b8679cf93-apiservice-cert\") pod \"packageserver-d55dfcdfc-znsxg\" (UID: \"60256506-33fa-4551-bbb5-851b8679cf93\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-znsxg" Feb 19 13:12:15 crc kubenswrapper[4861]: E0219 13:12:15.149822 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:15.649801963 +0000 UTC m=+150.310905191 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.149872 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7spb\" (UniqueName: \"kubernetes.io/projected/6502d829-f644-4207-823f-2ca6a0d682aa-kube-api-access-r7spb\") pod \"multus-admission-controller-857f4d67dd-rpbx6\" (UID: \"6502d829-f644-4207-823f-2ca6a0d682aa\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rpbx6" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.150045 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gvxh\" (UniqueName: \"kubernetes.io/projected/1c5fdc2b-06f7-416e-9c27-058bd2b8a7eb-kube-api-access-6gvxh\") pod \"service-ca-9c57cc56f-dgcks\" (UID: \"1c5fdc2b-06f7-416e-9c27-058bd2b8a7eb\") " pod="openshift-service-ca/service-ca-9c57cc56f-dgcks" Feb 19 
13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.150079 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ac67495b-dbbc-4e89-af14-334d26b5dc5a-node-bootstrap-token\") pod \"machine-config-server-x2c2k\" (UID: \"ac67495b-dbbc-4e89-af14-334d26b5dc5a\") " pod="openshift-machine-config-operator/machine-config-server-x2c2k" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.150489 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jrnt\" (UniqueName: \"kubernetes.io/projected/22e25f65-4f88-4385-bfed-a95ab8a662cf-kube-api-access-9jrnt\") pod \"ingress-canary-llq4c\" (UID: \"22e25f65-4f88-4385-bfed-a95ab8a662cf\") " pod="openshift-ingress-canary/ingress-canary-llq4c" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.154271 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7425897a-821a-4293-8d9c-3b0c5744bbc9-registry-tls\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.154312 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6502d829-f644-4207-823f-2ca6a0d682aa-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rpbx6\" (UID: \"6502d829-f644-4207-823f-2ca6a0d682aa\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rpbx6" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.154347 4861 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-6txts container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: 
connection refused" start-of-body= Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.154390 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-6txts" podUID="abc01b0c-a636-4dc5-bf6d-d2efde512ed5" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.154353 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7425897a-821a-4293-8d9c-3b0c5744bbc9-trusted-ca\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.154495 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/60256506-33fa-4551-bbb5-851b8679cf93-tmpfs\") pod \"packageserver-d55dfcdfc-znsxg\" (UID: \"60256506-33fa-4551-bbb5-851b8679cf93\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-znsxg" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.154538 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsknd\" (UniqueName: \"kubernetes.io/projected/7425897a-821a-4293-8d9c-3b0c5744bbc9-kube-api-access-jsknd\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.154793 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ac67495b-dbbc-4e89-af14-334d26b5dc5a-certs\") pod \"machine-config-server-x2c2k\" (UID: 
\"ac67495b-dbbc-4e89-af14-334d26b5dc5a\") " pod="openshift-machine-config-operator/machine-config-server-x2c2k" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.154884 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa335a0a-4c08-4593-8c73-e0c2adeb76b7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qwjzc\" (UID: \"aa335a0a-4c08-4593-8c73-e0c2adeb76b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-qwjzc" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.154936 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xxmt\" (UniqueName: \"kubernetes.io/projected/f9dd0952-ba1e-4f66-aa15-dfa87db27fd8-kube-api-access-7xxmt\") pod \"dns-default-9wmt2\" (UID: \"f9dd0952-ba1e-4f66-aa15-dfa87db27fd8\") " pod="openshift-dns/dns-default-9wmt2" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.154956 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1c5fdc2b-06f7-416e-9c27-058bd2b8a7eb-signing-cabundle\") pod \"service-ca-9c57cc56f-dgcks\" (UID: \"1c5fdc2b-06f7-416e-9c27-058bd2b8a7eb\") " pod="openshift-service-ca/service-ca-9c57cc56f-dgcks" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.155017 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4fb02ce1-52b1-451d-8e75-25a71830b204-srv-cert\") pod \"olm-operator-6b444d44fb-fjhdc\" (UID: \"4fb02ce1-52b1-451d-8e75-25a71830b204\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fjhdc" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.155067 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" 
(UniqueName: \"kubernetes.io/secret/4fb02ce1-52b1-451d-8e75-25a71830b204-profile-collector-cert\") pod \"olm-operator-6b444d44fb-fjhdc\" (UID: \"4fb02ce1-52b1-451d-8e75-25a71830b204\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fjhdc" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.155085 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4xjs\" (UniqueName: \"kubernetes.io/projected/da029de4-0b02-44dd-ab8f-8b6f2f7f41af-kube-api-access-k4xjs\") pod \"csi-hostpathplugin-mvbrc\" (UID: \"da029de4-0b02-44dd-ab8f-8b6f2f7f41af\") " pod="hostpath-provisioner/csi-hostpathplugin-mvbrc" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.155123 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7425897a-821a-4293-8d9c-3b0c5744bbc9-ca-trust-extracted\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.155184 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfmpq\" (UniqueName: \"kubernetes.io/projected/25018c53-6299-4fab-bf5e-819ba4f84596-kube-api-access-gfmpq\") pod \"collect-profiles-29525100-bq4kn\" (UID: \"25018c53-6299-4fab-bf5e-819ba4f84596\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-bq4kn" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.155223 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25018c53-6299-4fab-bf5e-819ba4f84596-secret-volume\") pod \"collect-profiles-29525100-bq4kn\" (UID: \"25018c53-6299-4fab-bf5e-819ba4f84596\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-bq4kn" Feb 
19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.155767 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7425897a-821a-4293-8d9c-3b0c5744bbc9-trusted-ca\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.156530 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-9zmzh" event={"ID":"ed3e037e-4dc7-4240-b54f-20931407f4a3","Type":"ContainerStarted","Data":"38c45ec0b4ad02b4db2bcd15b14d7c3eab01dd5b1b0e9f5aedc912b0a196c679"} Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.158548 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkg9b\" (UniqueName: \"kubernetes.io/projected/ac67495b-dbbc-4e89-af14-334d26b5dc5a-kube-api-access-nkg9b\") pod \"machine-config-server-x2c2k\" (UID: \"ac67495b-dbbc-4e89-af14-334d26b5dc5a\") " pod="openshift-machine-config-operator/machine-config-server-x2c2k" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.158710 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/22e25f65-4f88-4385-bfed-a95ab8a662cf-cert\") pod \"ingress-canary-llq4c\" (UID: \"22e25f65-4f88-4385-bfed-a95ab8a662cf\") " pod="openshift-ingress-canary/ingress-canary-llq4c" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.159018 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05d18d0b-96e1-4df3-a714-c30515624398-serving-cert\") pod \"service-ca-operator-777779d784-frkp7\" (UID: \"05d18d0b-96e1-4df3-a714-c30515624398\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-frkp7" Feb 19 13:12:15 
crc kubenswrapper[4861]: I0219 13:12:15.159124 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9dd0952-ba1e-4f66-aa15-dfa87db27fd8-config-volume\") pod \"dns-default-9wmt2\" (UID: \"f9dd0952-ba1e-4f66-aa15-dfa87db27fd8\") " pod="openshift-dns/dns-default-9wmt2" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.159696 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f9dd0952-ba1e-4f66-aa15-dfa87db27fd8-metrics-tls\") pod \"dns-default-9wmt2\" (UID: \"f9dd0952-ba1e-4f66-aa15-dfa87db27fd8\") " pod="openshift-dns/dns-default-9wmt2" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.161082 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7425897a-821a-4293-8d9c-3b0c5744bbc9-bound-sa-token\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.161967 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7425897a-821a-4293-8d9c-3b0c5744bbc9-ca-trust-extracted\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.161505 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/da029de4-0b02-44dd-ab8f-8b6f2f7f41af-mountpoint-dir\") pod \"csi-hostpathplugin-mvbrc\" (UID: \"da029de4-0b02-44dd-ab8f-8b6f2f7f41af\") " pod="hostpath-provisioner/csi-hostpathplugin-mvbrc" Feb 19 13:12:15 
crc kubenswrapper[4861]: I0219 13:12:15.163212 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60256506-33fa-4551-bbb5-851b8679cf93-webhook-cert\") pod \"packageserver-d55dfcdfc-znsxg\" (UID: \"60256506-33fa-4551-bbb5-851b8679cf93\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-znsxg" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.163289 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/da029de4-0b02-44dd-ab8f-8b6f2f7f41af-csi-data-dir\") pod \"csi-hostpathplugin-mvbrc\" (UID: \"da029de4-0b02-44dd-ab8f-8b6f2f7f41af\") " pod="hostpath-provisioner/csi-hostpathplugin-mvbrc" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.164063 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6502d829-f644-4207-823f-2ca6a0d682aa-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rpbx6\" (UID: \"6502d829-f644-4207-823f-2ca6a0d682aa\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rpbx6" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.164564 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdx8j\" (UniqueName: \"kubernetes.io/projected/aa335a0a-4c08-4593-8c73-e0c2adeb76b7-kube-api-access-wdx8j\") pod \"marketplace-operator-79b997595-qwjzc\" (UID: \"aa335a0a-4c08-4593-8c73-e0c2adeb76b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-qwjzc" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.165387 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7425897a-821a-4293-8d9c-3b0c5744bbc9-registry-tls\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: 
\"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.166517 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05d18d0b-96e1-4df3-a714-c30515624398-config\") pod \"service-ca-operator-777779d784-frkp7\" (UID: \"05d18d0b-96e1-4df3-a714-c30515624398\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-frkp7" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.166641 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/da029de4-0b02-44dd-ab8f-8b6f2f7f41af-socket-dir\") pod \"csi-hostpathplugin-mvbrc\" (UID: \"da029de4-0b02-44dd-ab8f-8b6f2f7f41af\") " pod="hostpath-provisioner/csi-hostpathplugin-mvbrc" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.166940 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/da029de4-0b02-44dd-ab8f-8b6f2f7f41af-plugins-dir\") pod \"csi-hostpathplugin-mvbrc\" (UID: \"da029de4-0b02-44dd-ab8f-8b6f2f7f41af\") " pod="hostpath-provisioner/csi-hostpathplugin-mvbrc" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.167090 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2bf8\" (UniqueName: \"kubernetes.io/projected/05d18d0b-96e1-4df3-a714-c30515624398-kube-api-access-p2bf8\") pod \"service-ca-operator-777779d784-frkp7\" (UID: \"05d18d0b-96e1-4df3-a714-c30515624398\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-frkp7" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.167290 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/25018c53-6299-4fab-bf5e-819ba4f84596-config-volume\") pod \"collect-profiles-29525100-bq4kn\" (UID: \"25018c53-6299-4fab-bf5e-819ba4f84596\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-bq4kn" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.167603 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pwhxn" event={"ID":"47c66ae1-c442-4718-8663-6934eb402aea","Type":"ContainerStarted","Data":"49e6e5d2011f4691fed6316d39ce9f00cd0caa6a955d0237a00f8d9a9c818ba5"} Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.167746 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7425897a-821a-4293-8d9c-3b0c5744bbc9-installation-pull-secrets\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.167801 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmjnm\" (UniqueName: \"kubernetes.io/projected/4fb02ce1-52b1-451d-8e75-25a71830b204-kube-api-access-pmjnm\") pod \"olm-operator-6b444d44fb-fjhdc\" (UID: \"4fb02ce1-52b1-451d-8e75-25a71830b204\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fjhdc" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.168339 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tphtn\" (UniqueName: \"kubernetes.io/projected/60256506-33fa-4551-bbb5-851b8679cf93-kube-api-access-tphtn\") pod \"packageserver-d55dfcdfc-znsxg\" (UID: \"60256506-33fa-4551-bbb5-851b8679cf93\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-znsxg" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.168369 4861 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1c5fdc2b-06f7-416e-9c27-058bd2b8a7eb-signing-key\") pod \"service-ca-9c57cc56f-dgcks\" (UID: \"1c5fdc2b-06f7-416e-9c27-058bd2b8a7eb\") " pod="openshift-service-ca/service-ca-9c57cc56f-dgcks" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.168434 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/da029de4-0b02-44dd-ab8f-8b6f2f7f41af-registration-dir\") pod \"csi-hostpathplugin-mvbrc\" (UID: \"da029de4-0b02-44dd-ab8f-8b6f2f7f41af\") " pod="hostpath-provisioner/csi-hostpathplugin-mvbrc" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.168518 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7425897a-821a-4293-8d9c-3b0c5744bbc9-registry-certificates\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.169378 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aa335a0a-4c08-4593-8c73-e0c2adeb76b7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qwjzc\" (UID: \"aa335a0a-4c08-4593-8c73-e0c2adeb76b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-qwjzc" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.171285 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7425897a-821a-4293-8d9c-3b0c5744bbc9-registry-certificates\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.172219 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tf29c" event={"ID":"6be72510-28ab-44e6-ae93-6930c521dc8d","Type":"ContainerStarted","Data":"35aefc09bda58220033617ef8ba657bf9e02254bfd52cae71a6dc2d29d5d64ad"} Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.174494 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mk9tc" event={"ID":"bf366b45-962f-48b9-92e0-7a0c6ee35c7a","Type":"ContainerStarted","Data":"955dc87edd8e2eb3da0f09cd220ea949f33e4c927c8c06bc6f4c77f7511ccb62"} Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.174581 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mk9tc" event={"ID":"bf366b45-962f-48b9-92e0-7a0c6ee35c7a","Type":"ContainerStarted","Data":"ec0b0554310806edcc49314d280ca12b8995f583f92530f918ce4b424dc219f4"} Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.176114 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7425897a-821a-4293-8d9c-3b0c5744bbc9-installation-pull-secrets\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.178589 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsknd\" (UniqueName: \"kubernetes.io/projected/7425897a-821a-4293-8d9c-3b0c5744bbc9-kube-api-access-jsknd\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:15 crc kubenswrapper[4861]: 
I0219 13:12:15.180332 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-89dlr" event={"ID":"435cf100-c5b6-4b1d-80a0-48b7a2688d72","Type":"ContainerStarted","Data":"95341e1b895d84dc604d13e54f639174fe419a876feae9689c60622c6ba2f05a"}
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.180358 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-89dlr" event={"ID":"435cf100-c5b6-4b1d-80a0-48b7a2688d72","Type":"ContainerStarted","Data":"57ec789a8b331380d42a7c007566fc14274b29b3c26adb19aac1eb86f6b867af"}
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.180368 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-89dlr" event={"ID":"435cf100-c5b6-4b1d-80a0-48b7a2688d72","Type":"ContainerStarted","Data":"4dd6547d280f70091a9f1e1ce6d3a07ca1f613bf4050ad0118f00806cbd985f6"}
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.184502 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mzhlv" event={"ID":"88bd7f5e-efc9-4ee9-88fd-d474e17c3bd6","Type":"ContainerStarted","Data":"87f3202e597c0cc100fac2f237583882161b508d0e26a4488ec6495c312a5f13"}
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.205275 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7spb\" (UniqueName: \"kubernetes.io/projected/6502d829-f644-4207-823f-2ca6a0d682aa-kube-api-access-r7spb\") pod \"multus-admission-controller-857f4d67dd-rpbx6\" (UID: \"6502d829-f644-4207-823f-2ca6a0d682aa\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rpbx6"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.245382 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lwqfx"]
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.247486 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7425897a-821a-4293-8d9c-3b0c5744bbc9-bound-sa-token\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.249195 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2d8vs"]
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.252220 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4cdd9"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271013 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/60256506-33fa-4551-bbb5-851b8679cf93-tmpfs\") pod \"packageserver-d55dfcdfc-znsxg\" (UID: \"60256506-33fa-4551-bbb5-851b8679cf93\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-znsxg"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271068 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ac67495b-dbbc-4e89-af14-334d26b5dc5a-certs\") pod \"machine-config-server-x2c2k\" (UID: \"ac67495b-dbbc-4e89-af14-334d26b5dc5a\") " pod="openshift-machine-config-operator/machine-config-server-x2c2k"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271116 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa335a0a-4c08-4593-8c73-e0c2adeb76b7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qwjzc\" (UID: \"aa335a0a-4c08-4593-8c73-e0c2adeb76b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-qwjzc"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271140 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xxmt\" (UniqueName: \"kubernetes.io/projected/f9dd0952-ba1e-4f66-aa15-dfa87db27fd8-kube-api-access-7xxmt\") pod \"dns-default-9wmt2\" (UID: \"f9dd0952-ba1e-4f66-aa15-dfa87db27fd8\") " pod="openshift-dns/dns-default-9wmt2"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271158 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1c5fdc2b-06f7-416e-9c27-058bd2b8a7eb-signing-cabundle\") pod \"service-ca-9c57cc56f-dgcks\" (UID: \"1c5fdc2b-06f7-416e-9c27-058bd2b8a7eb\") " pod="openshift-service-ca/service-ca-9c57cc56f-dgcks"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271183 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4fb02ce1-52b1-451d-8e75-25a71830b204-srv-cert\") pod \"olm-operator-6b444d44fb-fjhdc\" (UID: \"4fb02ce1-52b1-451d-8e75-25a71830b204\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fjhdc"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271215 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4fb02ce1-52b1-451d-8e75-25a71830b204-profile-collector-cert\") pod \"olm-operator-6b444d44fb-fjhdc\" (UID: \"4fb02ce1-52b1-451d-8e75-25a71830b204\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fjhdc"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271230 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4xjs\" (UniqueName: \"kubernetes.io/projected/da029de4-0b02-44dd-ab8f-8b6f2f7f41af-kube-api-access-k4xjs\") pod \"csi-hostpathplugin-mvbrc\" (UID: \"da029de4-0b02-44dd-ab8f-8b6f2f7f41af\") " pod="hostpath-provisioner/csi-hostpathplugin-mvbrc"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271247 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfmpq\" (UniqueName: \"kubernetes.io/projected/25018c53-6299-4fab-bf5e-819ba4f84596-kube-api-access-gfmpq\") pod \"collect-profiles-29525100-bq4kn\" (UID: \"25018c53-6299-4fab-bf5e-819ba4f84596\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-bq4kn"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271273 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25018c53-6299-4fab-bf5e-819ba4f84596-secret-volume\") pod \"collect-profiles-29525100-bq4kn\" (UID: \"25018c53-6299-4fab-bf5e-819ba4f84596\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-bq4kn"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271286 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkg9b\" (UniqueName: \"kubernetes.io/projected/ac67495b-dbbc-4e89-af14-334d26b5dc5a-kube-api-access-nkg9b\") pod \"machine-config-server-x2c2k\" (UID: \"ac67495b-dbbc-4e89-af14-334d26b5dc5a\") " pod="openshift-machine-config-operator/machine-config-server-x2c2k"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271311 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/22e25f65-4f88-4385-bfed-a95ab8a662cf-cert\") pod \"ingress-canary-llq4c\" (UID: \"22e25f65-4f88-4385-bfed-a95ab8a662cf\") " pod="openshift-ingress-canary/ingress-canary-llq4c"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271338 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05d18d0b-96e1-4df3-a714-c30515624398-serving-cert\") pod \"service-ca-operator-777779d784-frkp7\" (UID: \"05d18d0b-96e1-4df3-a714-c30515624398\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-frkp7"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271372 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9dd0952-ba1e-4f66-aa15-dfa87db27fd8-config-volume\") pod \"dns-default-9wmt2\" (UID: \"f9dd0952-ba1e-4f66-aa15-dfa87db27fd8\") " pod="openshift-dns/dns-default-9wmt2"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271386 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f9dd0952-ba1e-4f66-aa15-dfa87db27fd8-metrics-tls\") pod \"dns-default-9wmt2\" (UID: \"f9dd0952-ba1e-4f66-aa15-dfa87db27fd8\") " pod="openshift-dns/dns-default-9wmt2"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271403 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/da029de4-0b02-44dd-ab8f-8b6f2f7f41af-mountpoint-dir\") pod \"csi-hostpathplugin-mvbrc\" (UID: \"da029de4-0b02-44dd-ab8f-8b6f2f7f41af\") " pod="hostpath-provisioner/csi-hostpathplugin-mvbrc"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271433 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60256506-33fa-4551-bbb5-851b8679cf93-webhook-cert\") pod \"packageserver-d55dfcdfc-znsxg\" (UID: \"60256506-33fa-4551-bbb5-851b8679cf93\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-znsxg"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271450 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/da029de4-0b02-44dd-ab8f-8b6f2f7f41af-csi-data-dir\") pod \"csi-hostpathplugin-mvbrc\" (UID: \"da029de4-0b02-44dd-ab8f-8b6f2f7f41af\") " pod="hostpath-provisioner/csi-hostpathplugin-mvbrc"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271492 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdx8j\" (UniqueName: \"kubernetes.io/projected/aa335a0a-4c08-4593-8c73-e0c2adeb76b7-kube-api-access-wdx8j\") pod \"marketplace-operator-79b997595-qwjzc\" (UID: \"aa335a0a-4c08-4593-8c73-e0c2adeb76b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-qwjzc"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271542 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05d18d0b-96e1-4df3-a714-c30515624398-config\") pod \"service-ca-operator-777779d784-frkp7\" (UID: \"05d18d0b-96e1-4df3-a714-c30515624398\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-frkp7"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271560 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/da029de4-0b02-44dd-ab8f-8b6f2f7f41af-socket-dir\") pod \"csi-hostpathplugin-mvbrc\" (UID: \"da029de4-0b02-44dd-ab8f-8b6f2f7f41af\") " pod="hostpath-provisioner/csi-hostpathplugin-mvbrc"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271597 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/da029de4-0b02-44dd-ab8f-8b6f2f7f41af-plugins-dir\") pod \"csi-hostpathplugin-mvbrc\" (UID: \"da029de4-0b02-44dd-ab8f-8b6f2f7f41af\") " pod="hostpath-provisioner/csi-hostpathplugin-mvbrc"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271622 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2bf8\" (UniqueName: \"kubernetes.io/projected/05d18d0b-96e1-4df3-a714-c30515624398-kube-api-access-p2bf8\") pod \"service-ca-operator-777779d784-frkp7\" (UID: \"05d18d0b-96e1-4df3-a714-c30515624398\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-frkp7"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271646 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25018c53-6299-4fab-bf5e-819ba4f84596-config-volume\") pod \"collect-profiles-29525100-bq4kn\" (UID: \"25018c53-6299-4fab-bf5e-819ba4f84596\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-bq4kn"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271674 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmjnm\" (UniqueName: \"kubernetes.io/projected/4fb02ce1-52b1-451d-8e75-25a71830b204-kube-api-access-pmjnm\") pod \"olm-operator-6b444d44fb-fjhdc\" (UID: \"4fb02ce1-52b1-451d-8e75-25a71830b204\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fjhdc"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271701 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tphtn\" (UniqueName: \"kubernetes.io/projected/60256506-33fa-4551-bbb5-851b8679cf93-kube-api-access-tphtn\") pod \"packageserver-d55dfcdfc-znsxg\" (UID: \"60256506-33fa-4551-bbb5-851b8679cf93\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-znsxg"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271715 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1c5fdc2b-06f7-416e-9c27-058bd2b8a7eb-signing-key\") pod \"service-ca-9c57cc56f-dgcks\" (UID: \"1c5fdc2b-06f7-416e-9c27-058bd2b8a7eb\") " pod="openshift-service-ca/service-ca-9c57cc56f-dgcks"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271729 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/da029de4-0b02-44dd-ab8f-8b6f2f7f41af-registration-dir\") pod \"csi-hostpathplugin-mvbrc\" (UID: \"da029de4-0b02-44dd-ab8f-8b6f2f7f41af\") " pod="hostpath-provisioner/csi-hostpathplugin-mvbrc"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271756 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aa335a0a-4c08-4593-8c73-e0c2adeb76b7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qwjzc\" (UID: \"aa335a0a-4c08-4593-8c73-e0c2adeb76b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-qwjzc"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271773 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271790 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/60256506-33fa-4551-bbb5-851b8679cf93-apiservice-cert\") pod \"packageserver-d55dfcdfc-znsxg\" (UID: \"60256506-33fa-4551-bbb5-851b8679cf93\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-znsxg"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271816 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gvxh\" (UniqueName: \"kubernetes.io/projected/1c5fdc2b-06f7-416e-9c27-058bd2b8a7eb-kube-api-access-6gvxh\") pod \"service-ca-9c57cc56f-dgcks\" (UID: \"1c5fdc2b-06f7-416e-9c27-058bd2b8a7eb\") " pod="openshift-service-ca/service-ca-9c57cc56f-dgcks"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271832 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ac67495b-dbbc-4e89-af14-334d26b5dc5a-node-bootstrap-token\") pod \"machine-config-server-x2c2k\" (UID: \"ac67495b-dbbc-4e89-af14-334d26b5dc5a\") " pod="openshift-machine-config-operator/machine-config-server-x2c2k"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.271861 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jrnt\" (UniqueName: \"kubernetes.io/projected/22e25f65-4f88-4385-bfed-a95ab8a662cf-kube-api-access-9jrnt\") pod \"ingress-canary-llq4c\" (UID: \"22e25f65-4f88-4385-bfed-a95ab8a662cf\") " pod="openshift-ingress-canary/ingress-canary-llq4c"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.272684 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/60256506-33fa-4551-bbb5-851b8679cf93-tmpfs\") pod \"packageserver-d55dfcdfc-znsxg\" (UID: \"60256506-33fa-4551-bbb5-851b8679cf93\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-znsxg"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.274596 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/da029de4-0b02-44dd-ab8f-8b6f2f7f41af-csi-data-dir\") pod \"csi-hostpathplugin-mvbrc\" (UID: \"da029de4-0b02-44dd-ab8f-8b6f2f7f41af\") " pod="hostpath-provisioner/csi-hostpathplugin-mvbrc"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.275122 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/da029de4-0b02-44dd-ab8f-8b6f2f7f41af-registration-dir\") pod \"csi-hostpathplugin-mvbrc\" (UID: \"da029de4-0b02-44dd-ab8f-8b6f2f7f41af\") " pod="hostpath-provisioner/csi-hostpathplugin-mvbrc"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.275894 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25018c53-6299-4fab-bf5e-819ba4f84596-config-volume\") pod \"collect-profiles-29525100-bq4kn\" (UID: \"25018c53-6299-4fab-bf5e-819ba4f84596\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-bq4kn"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.280162 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/da029de4-0b02-44dd-ab8f-8b6f2f7f41af-socket-dir\") pod \"csi-hostpathplugin-mvbrc\" (UID: \"da029de4-0b02-44dd-ab8f-8b6f2f7f41af\") " pod="hostpath-provisioner/csi-hostpathplugin-mvbrc"
Feb 19 13:12:15 crc kubenswrapper[4861]: E0219 13:12:15.280960 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:15.780944454 +0000 UTC m=+150.442047682 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.282579 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/da029de4-0b02-44dd-ab8f-8b6f2f7f41af-mountpoint-dir\") pod \"csi-hostpathplugin-mvbrc\" (UID: \"da029de4-0b02-44dd-ab8f-8b6f2f7f41af\") " pod="hostpath-provisioner/csi-hostpathplugin-mvbrc"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.282874 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ac67495b-dbbc-4e89-af14-334d26b5dc5a-certs\") pod \"machine-config-server-x2c2k\" (UID: \"ac67495b-dbbc-4e89-af14-334d26b5dc5a\") " pod="openshift-machine-config-operator/machine-config-server-x2c2k"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.283334 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9dd0952-ba1e-4f66-aa15-dfa87db27fd8-config-volume\") pod \"dns-default-9wmt2\" (UID: \"f9dd0952-ba1e-4f66-aa15-dfa87db27fd8\") " pod="openshift-dns/dns-default-9wmt2"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.283931 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/da029de4-0b02-44dd-ab8f-8b6f2f7f41af-plugins-dir\") pod \"csi-hostpathplugin-mvbrc\" (UID: \"da029de4-0b02-44dd-ab8f-8b6f2f7f41af\") " pod="hostpath-provisioner/csi-hostpathplugin-mvbrc"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.285268 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1c5fdc2b-06f7-416e-9c27-058bd2b8a7eb-signing-cabundle\") pod \"service-ca-9c57cc56f-dgcks\" (UID: \"1c5fdc2b-06f7-416e-9c27-058bd2b8a7eb\") " pod="openshift-service-ca/service-ca-9c57cc56f-dgcks"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.286270 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa335a0a-4c08-4593-8c73-e0c2adeb76b7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qwjzc\" (UID: \"aa335a0a-4c08-4593-8c73-e0c2adeb76b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-qwjzc"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.287944 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60256506-33fa-4551-bbb5-851b8679cf93-webhook-cert\") pod \"packageserver-d55dfcdfc-znsxg\" (UID: \"60256506-33fa-4551-bbb5-851b8679cf93\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-znsxg"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.290033 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1c5fdc2b-06f7-416e-9c27-058bd2b8a7eb-signing-key\") pod \"service-ca-9c57cc56f-dgcks\" (UID: \"1c5fdc2b-06f7-416e-9c27-058bd2b8a7eb\") " pod="openshift-service-ca/service-ca-9c57cc56f-dgcks"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.290958 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aa335a0a-4c08-4593-8c73-e0c2adeb76b7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qwjzc\" (UID: \"aa335a0a-4c08-4593-8c73-e0c2adeb76b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-qwjzc"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.291637 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05d18d0b-96e1-4df3-a714-c30515624398-serving-cert\") pod \"service-ca-operator-777779d784-frkp7\" (UID: \"05d18d0b-96e1-4df3-a714-c30515624398\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-frkp7"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.293063 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4fb02ce1-52b1-451d-8e75-25a71830b204-srv-cert\") pod \"olm-operator-6b444d44fb-fjhdc\" (UID: \"4fb02ce1-52b1-451d-8e75-25a71830b204\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fjhdc"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.293111 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/22e25f65-4f88-4385-bfed-a95ab8a662cf-cert\") pod \"ingress-canary-llq4c\" (UID: \"22e25f65-4f88-4385-bfed-a95ab8a662cf\") " pod="openshift-ingress-canary/ingress-canary-llq4c"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.293401 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05d18d0b-96e1-4df3-a714-c30515624398-config\") pod \"service-ca-operator-777779d784-frkp7\" (UID: \"05d18d0b-96e1-4df3-a714-c30515624398\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-frkp7"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.295838 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ac67495b-dbbc-4e89-af14-334d26b5dc5a-node-bootstrap-token\") pod \"machine-config-server-x2c2k\" (UID: \"ac67495b-dbbc-4e89-af14-334d26b5dc5a\") " pod="openshift-machine-config-operator/machine-config-server-x2c2k"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.296190 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25018c53-6299-4fab-bf5e-819ba4f84596-secret-volume\") pod \"collect-profiles-29525100-bq4kn\" (UID: \"25018c53-6299-4fab-bf5e-819ba4f84596\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-bq4kn"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.296266 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/60256506-33fa-4551-bbb5-851b8679cf93-apiservice-cert\") pod \"packageserver-d55dfcdfc-znsxg\" (UID: \"60256506-33fa-4551-bbb5-851b8679cf93\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-znsxg"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.302129 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f9dd0952-ba1e-4f66-aa15-dfa87db27fd8-metrics-tls\") pod \"dns-default-9wmt2\" (UID: \"f9dd0952-ba1e-4f66-aa15-dfa87db27fd8\") " pod="openshift-dns/dns-default-9wmt2"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.306191 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4fb02ce1-52b1-451d-8e75-25a71830b204-profile-collector-cert\") pod \"olm-operator-6b444d44fb-fjhdc\" (UID: \"4fb02ce1-52b1-451d-8e75-25a71830b204\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fjhdc"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.345711 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jrnt\" (UniqueName: \"kubernetes.io/projected/22e25f65-4f88-4385-bfed-a95ab8a662cf-kube-api-access-9jrnt\") pod \"ingress-canary-llq4c\" (UID: \"22e25f65-4f88-4385-bfed-a95ab8a662cf\") " pod="openshift-ingress-canary/ingress-canary-llq4c"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.363160 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkg9b\" (UniqueName: \"kubernetes.io/projected/ac67495b-dbbc-4e89-af14-334d26b5dc5a-kube-api-access-nkg9b\") pod \"machine-config-server-x2c2k\" (UID: \"ac67495b-dbbc-4e89-af14-334d26b5dc5a\") " pod="openshift-machine-config-operator/machine-config-server-x2c2k"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.372987 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 13:12:15 crc kubenswrapper[4861]: E0219 13:12:15.373485 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:15.873467199 +0000 UTC m=+150.534570427 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.376845 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gvxh\" (UniqueName: \"kubernetes.io/projected/1c5fdc2b-06f7-416e-9c27-058bd2b8a7eb-kube-api-access-6gvxh\") pod \"service-ca-9c57cc56f-dgcks\" (UID: \"1c5fdc2b-06f7-416e-9c27-058bd2b8a7eb\") " pod="openshift-service-ca/service-ca-9c57cc56f-dgcks"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.388525 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hj79t"]
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.394994 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdx8j\" (UniqueName: \"kubernetes.io/projected/aa335a0a-4c08-4593-8c73-e0c2adeb76b7-kube-api-access-wdx8j\") pod \"marketplace-operator-79b997595-qwjzc\" (UID: \"aa335a0a-4c08-4593-8c73-e0c2adeb76b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-qwjzc"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.399340 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m7bvc"]
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.406919 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fn4fs"]
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.410359 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tphtn\" (UniqueName: \"kubernetes.io/projected/60256506-33fa-4551-bbb5-851b8679cf93-kube-api-access-tphtn\") pod \"packageserver-d55dfcdfc-znsxg\" (UID: \"60256506-33fa-4551-bbb5-851b8679cf93\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-znsxg"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.422875 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-rpbx6"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.423761 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfmpq\" (UniqueName: \"kubernetes.io/projected/25018c53-6299-4fab-bf5e-819ba4f84596-kube-api-access-gfmpq\") pod \"collect-profiles-29525100-bq4kn\" (UID: \"25018c53-6299-4fab-bf5e-819ba4f84596\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-bq4kn"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.439011 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xxmt\" (UniqueName: \"kubernetes.io/projected/f9dd0952-ba1e-4f66-aa15-dfa87db27fd8-kube-api-access-7xxmt\") pod \"dns-default-9wmt2\" (UID: \"f9dd0952-ba1e-4f66-aa15-dfa87db27fd8\") " pod="openshift-dns/dns-default-9wmt2"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.450901 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-znsxg"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.455205 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-bq4kn"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.462369 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2bf8\" (UniqueName: \"kubernetes.io/projected/05d18d0b-96e1-4df3-a714-c30515624398-kube-api-access-p2bf8\") pod \"service-ca-operator-777779d784-frkp7\" (UID: \"05d18d0b-96e1-4df3-a714-c30515624398\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-frkp7"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.468855 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qwjzc"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.474690 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf"
Feb 19 13:12:15 crc kubenswrapper[4861]: E0219 13:12:15.475474 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:15.97545873 +0000 UTC m=+150.636561948 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.475828 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-dgcks"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.481284 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4xjs\" (UniqueName: \"kubernetes.io/projected/da029de4-0b02-44dd-ab8f-8b6f2f7f41af-kube-api-access-k4xjs\") pod \"csi-hostpathplugin-mvbrc\" (UID: \"da029de4-0b02-44dd-ab8f-8b6f2f7f41af\") " pod="hostpath-provisioner/csi-hostpathplugin-mvbrc"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.506087 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-mvbrc"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.508857 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-llq4c"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.509958 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmjnm\" (UniqueName: \"kubernetes.io/projected/4fb02ce1-52b1-451d-8e75-25a71830b204-kube-api-access-pmjnm\") pod \"olm-operator-6b444d44fb-fjhdc\" (UID: \"4fb02ce1-52b1-451d-8e75-25a71830b204\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fjhdc"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.516747 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-9wmt2"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.528804 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-x2c2k"
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.566662 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8jrlx"]
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.568351 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4hktn"]
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.575980 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 13:12:15 crc kubenswrapper[4861]: E0219 13:12:15.576257 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:16.076224364 +0000 UTC m=+150.737327592 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.576282 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-br4gr"]
Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.678280 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf"
Feb 19 13:12:15 crc kubenswrapper[4861]: E0219 13:12:15.678742 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:16.178721699 +0000 UTC m=+150.839825017 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.741011 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fjhdc" Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.761604 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-frkp7" Feb 19 13:12:15 crc kubenswrapper[4861]: W0219 13:12:15.766320 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod089f2a06_1824_49c8_ad48_b119bf6a9d63.slice/crio-706da521faab3f00166cddd3fb0a41f0491ff1339a3a668a40c147d98dd31a82 WatchSource:0}: Error finding container 706da521faab3f00166cddd3fb0a41f0491ff1339a3a668a40c147d98dd31a82: Status 404 returned error can't find the container with id 706da521faab3f00166cddd3fb0a41f0491ff1339a3a668a40c147d98dd31a82 Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.779434 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:15 crc kubenswrapper[4861]: E0219 13:12:15.780009 4861 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:16.279990798 +0000 UTC m=+150.941094026 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.900025 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:15 crc kubenswrapper[4861]: E0219 13:12:15.900389 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:16.400377334 +0000 UTC m=+151.061480562 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:15 crc kubenswrapper[4861]: I0219 13:12:15.953680 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-6txts" podStartSLOduration=122.953663024 podStartE2EDuration="2m2.953663024s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:15.930036511 +0000 UTC m=+150.591139769" watchObservedRunningTime="2026-02-19 13:12:15.953663024 +0000 UTC m=+150.614766252" Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.003998 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:16 crc kubenswrapper[4861]: E0219 13:12:16.004381 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:16.504358896 +0000 UTC m=+151.165462124 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.004618 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:16 crc kubenswrapper[4861]: E0219 13:12:16.004961 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:16.504947483 +0000 UTC m=+151.166050721 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.079696 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cbzz2"] Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.081734 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gglfz"] Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.105914 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.105953 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdpbv"] Feb 19 13:12:16 crc kubenswrapper[4861]: E0219 13:12:16.106177 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:16.60615622 +0000 UTC m=+151.267259448 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.106349 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:16 crc kubenswrapper[4861]: E0219 13:12:16.106821 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:16.60681101 +0000 UTC m=+151.267914238 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.110703 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t62dj"] Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.205339 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hj79t" event={"ID":"3e563434-f9d2-4932-a791-cfffe2de6e5b","Type":"ContainerStarted","Data":"a01196b75885aaf00f9ace000fc8a3591ffb73c723bc3d381c5987bf2baff477"} Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.210511 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:16 crc kubenswrapper[4861]: E0219 13:12:16.210958 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:16.710942896 +0000 UTC m=+151.372046124 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.222554 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwqfx" event={"ID":"2372d9bc-babb-4932-8adb-a138b6c0ec28","Type":"ContainerStarted","Data":"8461c455e9772103d178b83ed75cfd099e5fa7bc26c2974af1e3ae3e261a2fd9"} Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.222614 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwqfx" event={"ID":"2372d9bc-babb-4932-8adb-a138b6c0ec28","Type":"ContainerStarted","Data":"d854b7488c48315ce5651f1c7d96c8de37f002f6903c34e07ca8444cb2e67b76"} Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.226541 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4bs6h" event={"ID":"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b","Type":"ContainerStarted","Data":"0e45e3b22474265ff0ea5a36b8ba0e63e781274181c64731fa4890378da95ec3"} Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.252441 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jrlx" event={"ID":"089f2a06-1824-49c8-ad48-b119bf6a9d63","Type":"ContainerStarted","Data":"706da521faab3f00166cddd3fb0a41f0491ff1339a3a668a40c147d98dd31a82"} Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.266239 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-4hktn" 
event={"ID":"9106f3fb-f347-44c0-8343-c3096803e845","Type":"ContainerStarted","Data":"344a4c7104b11afe1bdff8e3d0b91498ecdfb81d5d1ac4953361f554582db8f6"} Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.276241 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-9zmzh" event={"ID":"ed3e037e-4dc7-4240-b54f-20931407f4a3","Type":"ContainerStarted","Data":"508dc08ee2c72f95b7a74da7326da95948d7a8898236a2c5038bc79826cdc715"} Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.284090 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tf29c" event={"ID":"6be72510-28ab-44e6-ae93-6930c521dc8d","Type":"ContainerStarted","Data":"1a8026d83ab6d7d5e57c6272974e15118446e7d17b44dfe8fe0e513166a8879c"} Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.287160 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-rw85s" event={"ID":"59847917-735f-49c7-99b2-599facec7e03","Type":"ContainerStarted","Data":"8c350fe35bcdf5a30f658144aedf4f0979a66a07fac6706385337a31ce1ea795"} Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.287230 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-rw85s" event={"ID":"59847917-735f-49c7-99b2-599facec7e03","Type":"ContainerStarted","Data":"716aeaf06da79b7a1d1f19f9d3e49aefab413d3e98118103abf67b30d4939f86"} Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.288885 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-rw85s" Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.290843 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fn4fs" 
event={"ID":"1df1afb5-9ca0-4b18-93a7-80381e175ec4","Type":"ContainerStarted","Data":"27a17a4af23717052073d851dea52bc427cb45eba9cf106c31b0bb409fd7903c"} Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.291394 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fn4fs" Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.295500 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gglfz" event={"ID":"91f8886f-babc-495e-86fb-475d8582d6ac","Type":"ContainerStarted","Data":"7cc84776297bd0b84fe349cbceff220d6c3c374deb99168d6e0763633cd339f1"} Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.296760 4861 patch_prober.go:28] interesting pod/downloads-7954f5f757-rw85s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.296829 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rw85s" podUID="59847917-735f-49c7-99b2-599facec7e03" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.298063 4861 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-fn4fs container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.298159 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fn4fs" 
podUID="1df1afb5-9ca0-4b18-93a7-80381e175ec4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.305943 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pwhxn" event={"ID":"47c66ae1-c442-4718-8663-6934eb402aea","Type":"ContainerStarted","Data":"a3c7cd96c5c174484175271972f708ce2f18de50e41bbc50a1f9ae04ba6a24c2"} Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.318955 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:16 crc kubenswrapper[4861]: E0219 13:12:16.320222 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:16.820208266 +0000 UTC m=+151.481311494 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.333743 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m7bvc" event={"ID":"8d3139b4-07d9-43eb-b9da-31d2c9218ab9","Type":"ContainerStarted","Data":"7f1b4e9ca8eb735bf1e104b3f4d46860fcff881eb1c61804ee0ee2c9a4936bf2"} Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.335243 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m7bvc" event={"ID":"8d3139b4-07d9-43eb-b9da-31d2c9218ab9","Type":"ContainerStarted","Data":"390293c5eb877ce527fb48d302f4b998e350ddf6f9a390b609819e0b8d00861b"} Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.356879 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t62dj" event={"ID":"5a027437-d0c1-4d20-9209-ac4815006ed3","Type":"ContainerStarted","Data":"69ebafca13274650414a11922bffcf647c85d046947d40c35fcad79fe14c1682"} Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.368135 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cbzz2" event={"ID":"1d093c48-67cb-4e6d-99bb-3c8df32c6f15","Type":"ContainerStarted","Data":"a28eca41d9d168c6e684bcba952816df99b0f46c21e087bdcbc29a4bd43f21e5"} Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.378321 4861 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mk9tc" event={"ID":"bf366b45-962f-48b9-92e0-7a0c6ee35c7a","Type":"ContainerStarted","Data":"0b52b761811503e1a17b13589d815ce85138eef29c6cf46328ff2e7f168a97e5"} Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.399252 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-br4gr" event={"ID":"aad5d49e-b605-4151-a30d-1db9ffa9b99b","Type":"ContainerStarted","Data":"4c99da55eb3545fb3037ba407979d3d7596bd7c7a42bb7ee7ba3b7a30c261377"} Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.410386 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mzhlv" event={"ID":"88bd7f5e-efc9-4ee9-88fd-d474e17c3bd6","Type":"ContainerStarted","Data":"73272ec5d3d5bc1c3be27986bd8ee9e876b759d007427c667d66168293fe50a0"} Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.420223 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:16 crc kubenswrapper[4861]: E0219 13:12:16.420401 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:16.920379612 +0000 UTC m=+151.581482840 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.420808 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:16 crc kubenswrapper[4861]: E0219 13:12:16.422255 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:16.922243528 +0000 UTC m=+151.583346746 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.422829 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2d8vs" event={"ID":"db2cfad6-b1c8-46ee-8f79-6072ffb59471","Type":"ContainerStarted","Data":"b32e443302f57964b8b4f51d67f5d610ecb7bb4396e50d00d725e3e75e233905"} Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.422863 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2d8vs" event={"ID":"db2cfad6-b1c8-46ee-8f79-6072ffb59471","Type":"ContainerStarted","Data":"f5e204a03987a45fca802c9742ec0ef43bb8a949288d7ee05f37c3f4e4774e5a"} Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.423003 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-2d8vs" Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.433488 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755" event={"ID":"6716c3af-de13-4b1c-a2b6-ebb3b968c617","Type":"ContainerStarted","Data":"f25f4618b8edbf8fb7fc21372b205ce8bc09e4e69d7a6fff9c9ea3059e513c72"} Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.440864 4861 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-2d8vs container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: 
connection refused" start-of-body= Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.440922 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-2d8vs" podUID="db2cfad6-b1c8-46ee-8f79-6072ffb59471" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.478877 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-x2c2k" event={"ID":"ac67495b-dbbc-4e89-af14-334d26b5dc5a","Type":"ContainerStarted","Data":"5cd3203872cea1d8322aa411e4a2e6294a497478ca2a8bce5033bfbefa4f572f"} Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.478925 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-x2c2k" event={"ID":"ac67495b-dbbc-4e89-af14-334d26b5dc5a","Type":"ContainerStarted","Data":"f483dc8424ce86561273449e5ed2de5e6671c86a80bbcd1709c80edad8cd4209"} Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.489641 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-45sw8" Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.522132 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:16 crc kubenswrapper[4861]: E0219 13:12:16.523462 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-02-19 13:12:17.023440175 +0000 UTC m=+151.684543413 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.554126 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-45sw8" podStartSLOduration=123.554104631 podStartE2EDuration="2m3.554104631s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:16.461139433 +0000 UTC m=+151.122242661" watchObservedRunningTime="2026-02-19 13:12:16.554104631 +0000 UTC m=+151.215207869" Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.627012 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:16 crc kubenswrapper[4861]: E0219 13:12:16.632069 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:17.132051576 +0000 UTC m=+151.793154804 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.715705 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5pg7p"] Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.732326 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:16 crc kubenswrapper[4861]: E0219 13:12:16.732766 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:17.232743268 +0000 UTC m=+151.893846496 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.739090 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.758059 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4cdd9"] Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.763226 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525100-bq4kn"] Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.807029 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-szdk4"] Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.807798 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-89dlr" podStartSLOduration=123.807775503 podStartE2EDuration="2m3.807775503s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:16.805840915 +0000 UTC m=+151.466944143" watchObservedRunningTime="2026-02-19 13:12:16.807775503 +0000 UTC m=+151.468878731" Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.808077 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-ingress/router-default-5444994796-9zmzh" Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.814321 4861 patch_prober.go:28] interesting pod/router-default-5444994796-9zmzh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 13:12:16 crc kubenswrapper[4861]: [-]has-synced failed: reason withheld Feb 19 13:12:16 crc kubenswrapper[4861]: [+]process-running ok Feb 19 13:12:16 crc kubenswrapper[4861]: healthz check failed Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.814415 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9zmzh" podUID="ed3e037e-4dc7-4240-b54f-20931407f4a3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 13:12:16 crc kubenswrapper[4861]: W0219 13:12:16.816292 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2c3695d_6228_4722_8394_a31ec8e7333c.slice/crio-28f36d7d2bbea258ba438d9edaedde9c69f59ba1449f93b5a84e63712972884a WatchSource:0}: Error finding container 28f36d7d2bbea258ba438d9edaedde9c69f59ba1449f93b5a84e63712972884a: Status 404 returned error can't find the container with id 28f36d7d2bbea258ba438d9edaedde9c69f59ba1449f93b5a84e63712972884a Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.830172 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-d27vq"] Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.833924 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:16 crc kubenswrapper[4861]: E0219 13:12:16.834495 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:17.334484231 +0000 UTC m=+151.995587459 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:16 crc kubenswrapper[4861]: W0219 13:12:16.858397 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25018c53_6299_4fab_bf5e_819ba4f84596.slice/crio-fe714fd141abc5387909a03ac432a2e0ee725356e3a1d0e540f49899ecc41f3c WatchSource:0}: Error finding container fe714fd141abc5387909a03ac432a2e0ee725356e3a1d0e540f49899ecc41f3c: Status 404 returned error can't find the container with id fe714fd141abc5387909a03ac432a2e0ee725356e3a1d0e540f49899ecc41f3c Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.886985 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7wkxt"] Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.888327 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4cjxr"] Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.938538 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:16 crc kubenswrapper[4861]: E0219 13:12:16.938852 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:17.438838902 +0000 UTC m=+152.099942130 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.958161 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-lgkf4"] Feb 19 13:12:16 crc kubenswrapper[4861]: I0219 13:12:16.995618 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-znsxg"] Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.004057 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-dgcks"] Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.039610 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: 
\"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:17 crc kubenswrapper[4861]: E0219 13:12:17.040159 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:17.540142152 +0000 UTC m=+152.201245380 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:17 crc kubenswrapper[4861]: W0219 13:12:17.048101 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60256506_33fa_4551_bbb5_851b8679cf93.slice/crio-1d4bdf9656703c5ae44407f6668fd0ffa1837c228dd4387de7527abff8f4e006 WatchSource:0}: Error finding container 1d4bdf9656703c5ae44407f6668fd0ffa1837c228dd4387de7527abff8f4e006: Status 404 returned error can't find the container with id 1d4bdf9656703c5ae44407f6668fd0ffa1837c228dd4387de7527abff8f4e006 Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.051535 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qwjzc"] Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.058182 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fjhdc"] Feb 19 13:12:17 crc kubenswrapper[4861]: W0219 13:12:17.069788 4861 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c5fdc2b_06f7_416e_9c27_058bd2b8a7eb.slice/crio-7ea405ce5410153997cc0650cf29686f4cbd39c5520a9e65747c650763e0aa07 WatchSource:0}: Error finding container 7ea405ce5410153997cc0650cf29686f4cbd39c5520a9e65747c650763e0aa07: Status 404 returned error can't find the container with id 7ea405ce5410153997cc0650cf29686f4cbd39c5520a9e65747c650763e0aa07 Feb 19 13:12:17 crc kubenswrapper[4861]: W0219 13:12:17.087654 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa335a0a_4c08_4593_8c73_e0c2adeb76b7.slice/crio-597cb3ba1887f8128bd2f2d22157c79728bed152b35c4d1d8e563be98ae4eabd WatchSource:0}: Error finding container 597cb3ba1887f8128bd2f2d22157c79728bed152b35c4d1d8e563be98ae4eabd: Status 404 returned error can't find the container with id 597cb3ba1887f8128bd2f2d22157c79728bed152b35c4d1d8e563be98ae4eabd Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.107391 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rpbx6"] Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.107563 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-llq4c"] Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.117460 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-frkp7"] Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.125181 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-x2c2k" podStartSLOduration=5.12515656 podStartE2EDuration="5.12515656s" podCreationTimestamp="2026-02-19 13:12:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:17.116133668 +0000 UTC 
m=+151.777236916" watchObservedRunningTime="2026-02-19 13:12:17.12515656 +0000 UTC m=+151.786259788" Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.141767 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.142945 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:12:17 crc kubenswrapper[4861]: E0219 13:12:17.144395 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:17.644359601 +0000 UTC m=+152.305462829 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.144500 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.144707 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.144741 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.144776 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:12:17 crc kubenswrapper[4861]: E0219 13:12:17.147140 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:17.647128494 +0000 UTC m=+152.308231722 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.154877 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.159716 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 
13:12:17.160524 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.162516 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.212325 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.222610 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-4bs6h" podStartSLOduration=124.222587044 podStartE2EDuration="2m4.222587044s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:17.211918341 +0000 UTC m=+151.873021569" watchObservedRunningTime="2026-02-19 13:12:17.222587044 +0000 UTC m=+151.883690272" Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.223798 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-mvbrc"] Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.229637 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.237717 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.250668 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:17 crc kubenswrapper[4861]: E0219 13:12:17.251110 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:17.751096184 +0000 UTC m=+152.412199412 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.281433 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-9wmt2"] Feb 19 13:12:17 crc kubenswrapper[4861]: W0219 13:12:17.324301 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9dd0952_ba1e_4f66_aa15_dfa87db27fd8.slice/crio-f9f45274432e19119dc7ab9f03585ed045fda0d8af62648bb2d9e3ea5e8b1738 WatchSource:0}: Error finding container f9f45274432e19119dc7ab9f03585ed045fda0d8af62648bb2d9e3ea5e8b1738: Status 404 returned error can't find the container with id f9f45274432e19119dc7ab9f03585ed045fda0d8af62648bb2d9e3ea5e8b1738 Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.352249 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:17 crc kubenswrapper[4861]: E0219 13:12:17.354079 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:17.854057275 +0000 UTC m=+152.515160673 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.355374 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fn4fs" podStartSLOduration=124.355357494 podStartE2EDuration="2m4.355357494s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:17.335940838 +0000 UTC m=+151.997044066" watchObservedRunningTime="2026-02-19 13:12:17.355357494 +0000 UTC m=+152.016460722" Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.382912 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mzhlv" podStartSLOduration=124.382882575 podStartE2EDuration="2m4.382882575s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:17.354357354 +0000 UTC m=+152.015460582" watchObservedRunningTime="2026-02-19 13:12:17.382882575 +0000 UTC m=+152.043985793" Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.454199 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:17 crc kubenswrapper[4861]: E0219 13:12:17.454567 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:17.9545522 +0000 UTC m=+152.615655418 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.525840 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tf29c" podStartSLOduration=124.525814663 podStartE2EDuration="2m4.525814663s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:17.522177953 +0000 UTC m=+152.183281181" watchObservedRunningTime="2026-02-19 13:12:17.525814663 +0000 UTC m=+152.186917891" Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.557493 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-d27vq" event={"ID":"25efe6ba-5133-429c-9b89-63bdc857d930","Type":"ContainerStarted","Data":"47fb0edcc35dd72326d487c9ef62ed24534a0f7c063fbb2b2fe66f18cfea69e6"} Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.557845 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:17 crc kubenswrapper[4861]: E0219 13:12:17.558362 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:18.058345016 +0000 UTC m=+152.719448244 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.591567 4861 csr.go:261] certificate signing request csr-4wdp9 is approved, waiting to be issued Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.592875 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t62dj" event={"ID":"5a027437-d0c1-4d20-9209-ac4815006ed3","Type":"ContainerStarted","Data":"3c30525d27e612a7ba7c314a6d86ee29bf8fdde0be7a1d6913bab75075c1173c"} Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.593065 4861 csr.go:257] certificate signing request csr-4wdp9 is issued Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.613379 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mvbrc" 
event={"ID":"da029de4-0b02-44dd-ab8f-8b6f2f7f41af","Type":"ContainerStarted","Data":"1bf54a2243634c6016372c10123421ac1ce9fcc6a0796e0c7c1e172ef3b72ccb"} Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.620278 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-9zmzh" podStartSLOduration=124.620250616 podStartE2EDuration="2m4.620250616s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:17.606319535 +0000 UTC m=+152.267422763" watchObservedRunningTime="2026-02-19 13:12:17.620250616 +0000 UTC m=+152.281353844" Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.640613 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mk9tc" podStartSLOduration=124.64058931 podStartE2EDuration="2m4.64058931s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:17.640545168 +0000 UTC m=+152.301648396" watchObservedRunningTime="2026-02-19 13:12:17.64058931 +0000 UTC m=+152.301692538" Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.667316 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:17 crc kubenswrapper[4861]: E0219 13:12:17.668933 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-02-19 13:12:18.168909125 +0000 UTC m=+152.830012343 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.686283 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9wmt2" event={"ID":"f9dd0952-ba1e-4f66-aa15-dfa87db27fd8","Type":"ContainerStarted","Data":"f9f45274432e19119dc7ab9f03585ed045fda0d8af62648bb2d9e3ea5e8b1738"} Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.687470 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-pwhxn" podStartSLOduration=124.687456836 podStartE2EDuration="2m4.687456836s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:17.680328461 +0000 UTC m=+152.341431689" watchObservedRunningTime="2026-02-19 13:12:17.687456836 +0000 UTC m=+152.348560064" Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.726992 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-szdk4" event={"ID":"a90d3758-749b-4327-877f-ecb89c49b5e0","Type":"ContainerStarted","Data":"7318ece424d99f3f2f5813cac3866da4f8c222ed4fd33b8f5a3c35dcd45edb21"} Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.768792 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:17 crc kubenswrapper[4861]: E0219 13:12:17.769104 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:18.269079131 +0000 UTC m=+152.930182359 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.785895 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-br4gr" event={"ID":"aad5d49e-b605-4151-a30d-1db9ffa9b99b","Type":"ContainerStarted","Data":"cd0f8448cc432479ddb5791c4caea731b168c2ab6e3d6afa62744097f34f89f1"} Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.785964 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-br4gr" event={"ID":"aad5d49e-b605-4151-a30d-1db9ffa9b99b","Type":"ContainerStarted","Data":"5bbd28096847fce6999c1b84301c00225effa3d070d06ea08e8aa8c0e439ae83"} Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.786791 4861 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-br4gr" Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.798347 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m7bvc" podStartSLOduration=124.798332165 podStartE2EDuration="2m4.798332165s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:17.796919922 +0000 UTC m=+152.458023150" watchObservedRunningTime="2026-02-19 13:12:17.798332165 +0000 UTC m=+152.459435393" Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.810683 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-dgcks" event={"ID":"1c5fdc2b-06f7-416e-9c27-058bd2b8a7eb","Type":"ContainerStarted","Data":"7ea405ce5410153997cc0650cf29686f4cbd39c5520a9e65747c650763e0aa07"} Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.817709 4861 patch_prober.go:28] interesting pod/router-default-5444994796-9zmzh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 13:12:17 crc kubenswrapper[4861]: [-]has-synced failed: reason withheld Feb 19 13:12:17 crc kubenswrapper[4861]: [+]process-running ok Feb 19 13:12:17 crc kubenswrapper[4861]: healthz check failed Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.817778 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9zmzh" podUID="ed3e037e-4dc7-4240-b54f-20931407f4a3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.820830 4861 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4cjxr" event={"ID":"eb6e1d5f-e904-4931-9d35-e1b4b1a30361","Type":"ContainerStarted","Data":"17b16c9a7caba1066fa179544f7b711ad34c6524b8be346689d012d82be71393"} Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.835812 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cbzz2" event={"ID":"1d093c48-67cb-4e6d-99bb-3c8df32c6f15","Type":"ContainerStarted","Data":"2843d3f3b8c8582f9eedb0b23dfd15904f0851bc470beb61debaa480dd439ad7"} Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.850732 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-2d8vs" podStartSLOduration=124.850705277 podStartE2EDuration="2m4.850705277s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:17.84981572 +0000 UTC m=+152.510918948" watchObservedRunningTime="2026-02-19 13:12:17.850705277 +0000 UTC m=+152.511808505" Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.873256 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:17 crc kubenswrapper[4861]: E0219 13:12:17.875703 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:18.375683821 +0000 UTC m=+153.036787049 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.909730 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-frkp7" event={"ID":"05d18d0b-96e1-4df3-a714-c30515624398","Type":"ContainerStarted","Data":"1623ad812eb80ea843ce7954791bacec48770d31b511a04362b32db32e2010cf"} Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.940723 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jrlx" event={"ID":"089f2a06-1824-49c8-ad48-b119bf6a9d63","Type":"ContainerStarted","Data":"c81c99c8a44a06ba4df64c25e6916aec335d927f465005a0b02d04a37846ae6a"} Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.940777 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jrlx" event={"ID":"089f2a06-1824-49c8-ad48-b119bf6a9d63","Type":"ContainerStarted","Data":"67f601b5902e274d92b755f40481d8150cb0f051ecd7bb7c86082626fd071e10"} Feb 19 13:12:17 crc kubenswrapper[4861]: I0219 13:12:17.990263 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:17 crc kubenswrapper[4861]: E0219 
13:12:17.991398 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:18.491376656 +0000 UTC m=+153.152479884 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.033312 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7wkxt" event={"ID":"f3bb6cb9-0c94-47e3-a796-dca0ba8491ae","Type":"ContainerStarted","Data":"870703f0623f272af8961fd8d9db176fbcf7605ecba31a2dfbbcc7533b7a07c5"} Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.052204 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fn4fs" event={"ID":"1df1afb5-9ca0-4b18-93a7-80381e175ec4","Type":"ContainerStarted","Data":"6cb873644ca284ed31ad28df9c36b6ff6fdb07c2f57811cc3bca6ff6c8c6b511"} Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.084905 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5pg7p" event={"ID":"a1bd7de0-543b-45cf-8ca8-b647d17671eb","Type":"ContainerStarted","Data":"45a38e200c887e47095ceb6ba6eec4fbf9ab82b77a5dbcf72a4cdf7b45e16d6c"} Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.101128 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:18 crc kubenswrapper[4861]: E0219 13:12:18.101244 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:18.601218764 +0000 UTC m=+153.262321982 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.101330 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:18 crc kubenswrapper[4861]: E0219 13:12:18.101680 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:18.601668818 +0000 UTC m=+153.262772046 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.102632 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fjhdc" event={"ID":"4fb02ce1-52b1-451d-8e75-25a71830b204","Type":"ContainerStarted","Data":"86e98c819de5fc791ee8c014ad4d47f04f966035e14ff77f825e86a93edc80a5"} Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.117399 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fn4fs" Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.139397 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-llq4c" event={"ID":"22e25f65-4f88-4385-bfed-a95ab8a662cf","Type":"ContainerStarted","Data":"20091e1db4cf9561aa27b169880ee615249d5496df469219156a8963716af205"} Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.140805 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdpbv" event={"ID":"21cf889c-d5ea-4188-95cf-46ee6f626ff1","Type":"ContainerStarted","Data":"10707ef39ee91393db5de86c388e852d61965b344c37c4cfe8a2ec95fba137ed"} Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.140842 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdpbv" 
event={"ID":"21cf889c-d5ea-4188-95cf-46ee6f626ff1","Type":"ContainerStarted","Data":"2e1e14701629525520fb97f6fc07ce1d4e76bdcff8e18c1fb7b60d856e97f23b"} Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.141412 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-rw85s" podStartSLOduration=125.141377857 podStartE2EDuration="2m5.141377857s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:18.120092174 +0000 UTC m=+152.781195412" watchObservedRunningTime="2026-02-19 13:12:18.141377857 +0000 UTC m=+152.802481085" Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.142456 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-4hktn" event={"ID":"9106f3fb-f347-44c0-8343-c3096803e845","Type":"ContainerStarted","Data":"51dd51093296f3e3a6a9278bd8eedf4e3a4d0e618f22a5be7dc7b6b13c431055"} Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.182242 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gglfz" event={"ID":"91f8886f-babc-495e-86fb-475d8582d6ac","Type":"ContainerStarted","Data":"c88997fb14130d3080c79a05f6487d985ba6c2826a7dd8b35cf67ad9cce9a8e3"} Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.182960 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755" podStartSLOduration=125.182939673 podStartE2EDuration="2m5.182939673s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:18.181245211 +0000 UTC m=+152.842348439" watchObservedRunningTime="2026-02-19 13:12:18.182939673 +0000 UTC m=+152.844042901" Feb 19 13:12:18 
crc kubenswrapper[4861]: I0219 13:12:18.183157 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-gglfz" Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.184263 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qwjzc" event={"ID":"aa335a0a-4c08-4593-8c73-e0c2adeb76b7","Type":"ContainerStarted","Data":"597cb3ba1887f8128bd2f2d22157c79728bed152b35c4d1d8e563be98ae4eabd"} Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.184914 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-znsxg" event={"ID":"60256506-33fa-4551-bbb5-851b8679cf93","Type":"ContainerStarted","Data":"1d4bdf9656703c5ae44407f6668fd0ffa1837c228dd4387de7527abff8f4e006"} Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.203801 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hj79t" event={"ID":"3e563434-f9d2-4932-a791-cfffe2de6e5b","Type":"ContainerStarted","Data":"6be3027430379ab41714c03ef8e317275163344bd7f7a33c8b740095385e6900"} Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.203912 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hj79t" event={"ID":"3e563434-f9d2-4932-a791-cfffe2de6e5b","Type":"ContainerStarted","Data":"a6eb1b2d825f34fa70b98277bc0a3c5e8089cd37ce58eb7b444564a26be250d6"} Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.205157 4861 patch_prober.go:28] interesting pod/console-operator-58897d9998-gglfz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.205205 4861 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-gglfz" podUID="91f8886f-babc-495e-86fb-475d8582d6ac" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.205979 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:18 crc kubenswrapper[4861]: E0219 13:12:18.207090 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:18.707061801 +0000 UTC m=+153.368165029 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.238917 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-bq4kn" event={"ID":"25018c53-6299-4fab-bf5e-819ba4f84596","Type":"ContainerStarted","Data":"fe714fd141abc5387909a03ac432a2e0ee725356e3a1d0e540f49899ecc41f3c"} Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.239358 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-4hktn" podStartSLOduration=125.239349926 podStartE2EDuration="2m5.239349926s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:18.238649435 +0000 UTC m=+152.899752663" watchObservedRunningTime="2026-02-19 13:12:18.239349926 +0000 UTC m=+152.900453154" Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.256825 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwqfx" event={"ID":"2372d9bc-babb-4932-8adb-a138b6c0ec28","Type":"ContainerStarted","Data":"bb1165eb481f921bb0263a22832d528f2edf2bfd96f65f73fcad515cdcce46ca"} Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.272523 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-szdk4" podStartSLOduration=125.272501868 
podStartE2EDuration="2m5.272501868s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:18.271759675 +0000 UTC m=+152.932862903" watchObservedRunningTime="2026-02-19 13:12:18.272501868 +0000 UTC m=+152.933605096" Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.286734 4861 generic.go:334] "Generic (PLEG): container finished" podID="b2c3695d-6228-4722-8394-a31ec8e7333c" containerID="d9df87262dfce6a375ef58c52307d89103a02142a1aa83bf04fd86d7e2ecc563" exitCode=0 Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.286859 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4cdd9" event={"ID":"b2c3695d-6228-4722-8394-a31ec8e7333c","Type":"ContainerDied","Data":"d9df87262dfce6a375ef58c52307d89103a02142a1aa83bf04fd86d7e2ecc563"} Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.286904 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4cdd9" event={"ID":"b2c3695d-6228-4722-8394-a31ec8e7333c","Type":"ContainerStarted","Data":"28f36d7d2bbea258ba438d9edaedde9c69f59ba1449f93b5a84e63712972884a"} Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.310787 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rpbx6" event={"ID":"6502d829-f644-4207-823f-2ca6a0d682aa","Type":"ContainerStarted","Data":"087cd7c6205c4dc896739666a4111057d8630d3afaa9a1251a44ec2150f176aa"} Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.311912 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jrlx" podStartSLOduration=125.311894358 podStartE2EDuration="2m5.311894358s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:18.310792845 +0000 UTC m=+152.971896063" watchObservedRunningTime="2026-02-19 13:12:18.311894358 +0000 UTC m=+152.972997576" Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.323320 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:18 crc kubenswrapper[4861]: E0219 13:12:18.324112 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:18.824100506 +0000 UTC m=+153.485203734 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.328403 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" event={"ID":"52ec89ec-1759-416b-87c8-b90b4194a960","Type":"ContainerStarted","Data":"2d9cbb7675b1c312445a4ace6b9bfac1487ad103f1f099a095599446829c2313"} Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.333444 4861 patch_prober.go:28] interesting pod/downloads-7954f5f757-rw85s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.333486 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rw85s" podUID="59847917-735f-49c7-99b2-599facec7e03" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.343351 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-2d8vs" Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.365340 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-br4gr" podStartSLOduration=125.365309862 podStartE2EDuration="2m5.365309862s" 
podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:18.346005218 +0000 UTC m=+153.007108446" watchObservedRunningTime="2026-02-19 13:12:18.365309862 +0000 UTC m=+153.026413090" Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.388199 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdpbv" podStartSLOduration=125.388173972 podStartE2EDuration="2m5.388173972s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:18.38610939 +0000 UTC m=+153.047212618" watchObservedRunningTime="2026-02-19 13:12:18.388173972 +0000 UTC m=+153.049277210" Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.425550 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:18 crc kubenswrapper[4861]: E0219 13:12:18.429874 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:18.929847471 +0000 UTC m=+153.590950699 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.441093 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cbzz2" podStartSLOduration=125.44107163 podStartE2EDuration="2m5.44107163s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:18.439776691 +0000 UTC m=+153.100879919" watchObservedRunningTime="2026-02-19 13:12:18.44107163 +0000 UTC m=+153.102174858" Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.526630 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t62dj" podStartSLOduration=125.526608144 podStartE2EDuration="2m5.526608144s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:18.524152589 +0000 UTC m=+153.185255817" watchObservedRunningTime="2026-02-19 13:12:18.526608144 +0000 UTC m=+153.187711372" Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.533167 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:18 crc kubenswrapper[4861]: E0219 13:12:18.545729 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:19.045702561 +0000 UTC m=+153.706805779 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.591684 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-gglfz" podStartSLOduration=125.591652969 podStartE2EDuration="2m5.591652969s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:18.566243841 +0000 UTC m=+153.227347069" watchObservedRunningTime="2026-02-19 13:12:18.591652969 +0000 UTC m=+153.252756197" Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.596612 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-19 13:07:17 +0000 UTC, rotation deadline is 2027-01-06 02:52:51.935150449 +0000 UTC Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.596669 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7693h40m33.338484259s for next certificate rotation Feb 19 
13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.634624 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:18 crc kubenswrapper[4861]: E0219 13:12:18.635226 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:19.135202285 +0000 UTC m=+153.796305513 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.715410 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hj79t" podStartSLOduration=125.715371506 podStartE2EDuration="2m5.715371506s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:18.69099768 +0000 UTC m=+153.352100898" watchObservedRunningTime="2026-02-19 13:12:18.715371506 +0000 UTC m=+153.376474734" Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.737984 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:18 crc kubenswrapper[4861]: E0219 13:12:18.738342 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:19.238328819 +0000 UTC m=+153.899432047 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.820241 4861 patch_prober.go:28] interesting pod/router-default-5444994796-9zmzh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 13:12:18 crc kubenswrapper[4861]: [-]has-synced failed: reason withheld Feb 19 13:12:18 crc kubenswrapper[4861]: [+]process-running ok Feb 19 13:12:18 crc kubenswrapper[4861]: healthz check failed Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.820293 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9zmzh" podUID="ed3e037e-4dc7-4240-b54f-20931407f4a3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 13:12:18 crc kubenswrapper[4861]: 
I0219 13:12:18.837658 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5pg7p" podStartSLOduration=125.837474055 podStartE2EDuration="2m5.837474055s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:18.741034881 +0000 UTC m=+153.402138109" watchObservedRunningTime="2026-02-19 13:12:18.837474055 +0000 UTC m=+153.498577283" Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.839169 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:18 crc kubenswrapper[4861]: E0219 13:12:18.839560 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:19.339544418 +0000 UTC m=+154.000647646 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.924527 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-bq4kn" podStartSLOduration=125.924501813 podStartE2EDuration="2m5.924501813s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:18.912899813 +0000 UTC m=+153.574003041" watchObservedRunningTime="2026-02-19 13:12:18.924501813 +0000 UTC m=+153.585605061" Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.947593 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:18 crc kubenswrapper[4861]: E0219 13:12:18.947954 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:19.447942022 +0000 UTC m=+154.109045250 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:18 crc kubenswrapper[4861]: I0219 13:12:18.992624 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwqfx" podStartSLOduration=125.992609241 podStartE2EDuration="2m5.992609241s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:18.946835178 +0000 UTC m=+153.607938406" watchObservedRunningTime="2026-02-19 13:12:18.992609241 +0000 UTC m=+153.653712469" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.037854 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q5nqk"] Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.048538 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:19 crc kubenswrapper[4861]: E0219 13:12:19.048987 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 13:12:19.548958443 +0000 UTC m=+154.210061671 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.056642 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q5nqk"] Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.056804 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q5nqk" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.064308 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.150522 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e4630be-249d-4f67-bd5c-eafaf08b2705-utilities\") pod \"community-operators-q5nqk\" (UID: \"4e4630be-249d-4f67-bd5c-eafaf08b2705\") " pod="openshift-marketplace/community-operators-q5nqk" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.150565 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:19 crc 
kubenswrapper[4861]: I0219 13:12:19.150623 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e4630be-249d-4f67-bd5c-eafaf08b2705-catalog-content\") pod \"community-operators-q5nqk\" (UID: \"4e4630be-249d-4f67-bd5c-eafaf08b2705\") " pod="openshift-marketplace/community-operators-q5nqk" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.150641 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llxw4\" (UniqueName: \"kubernetes.io/projected/4e4630be-249d-4f67-bd5c-eafaf08b2705-kube-api-access-llxw4\") pod \"community-operators-q5nqk\" (UID: \"4e4630be-249d-4f67-bd5c-eafaf08b2705\") " pod="openshift-marketplace/community-operators-q5nqk" Feb 19 13:12:19 crc kubenswrapper[4861]: E0219 13:12:19.150920 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:19.650908563 +0000 UTC m=+154.312011791 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.263079 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.263204 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e4630be-249d-4f67-bd5c-eafaf08b2705-catalog-content\") pod \"community-operators-q5nqk\" (UID: \"4e4630be-249d-4f67-bd5c-eafaf08b2705\") " pod="openshift-marketplace/community-operators-q5nqk" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.263234 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llxw4\" (UniqueName: \"kubernetes.io/projected/4e4630be-249d-4f67-bd5c-eafaf08b2705-kube-api-access-llxw4\") pod \"community-operators-q5nqk\" (UID: \"4e4630be-249d-4f67-bd5c-eafaf08b2705\") " pod="openshift-marketplace/community-operators-q5nqk" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.263280 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e4630be-249d-4f67-bd5c-eafaf08b2705-utilities\") pod \"community-operators-q5nqk\" (UID: \"4e4630be-249d-4f67-bd5c-eafaf08b2705\") " 
pod="openshift-marketplace/community-operators-q5nqk" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.272057 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e4630be-249d-4f67-bd5c-eafaf08b2705-utilities\") pod \"community-operators-q5nqk\" (UID: \"4e4630be-249d-4f67-bd5c-eafaf08b2705\") " pod="openshift-marketplace/community-operators-q5nqk" Feb 19 13:12:19 crc kubenswrapper[4861]: E0219 13:12:19.272453 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:19.772438094 +0000 UTC m=+154.433541322 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.285434 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.289743 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e4630be-249d-4f67-bd5c-eafaf08b2705-catalog-content\") pod \"community-operators-q5nqk\" (UID: \"4e4630be-249d-4f67-bd5c-eafaf08b2705\") " pod="openshift-marketplace/community-operators-q5nqk" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.292634 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.340434 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llxw4\" (UniqueName: \"kubernetes.io/projected/4e4630be-249d-4f67-bd5c-eafaf08b2705-kube-api-access-llxw4\") pod \"community-operators-q5nqk\" (UID: \"4e4630be-249d-4f67-bd5c-eafaf08b2705\") " pod="openshift-marketplace/community-operators-q5nqk" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.369219 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:19 crc kubenswrapper[4861]: E0219 13:12:19.369510 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:19.869499355 +0000 UTC m=+154.530602583 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.369665 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.391056 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5pg7p" event={"ID":"a1bd7de0-543b-45cf-8ca8-b647d17671eb","Type":"ContainerStarted","Data":"559b819f09487285c9aa465fea1954dc0d61be544732c3498eef4be964cd254d"} Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.408563 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d8qvw"] Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.414694 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-llq4c" event={"ID":"22e25f65-4f88-4385-bfed-a95ab8a662cf","Type":"ContainerStarted","Data":"160dd873a1e7e71291007d17271b4696510d26b0753e014014b994aa147ca1eb"} Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.414841 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d8qvw" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.476717 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q5nqk" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.481476 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:19 crc kubenswrapper[4861]: E0219 13:12:19.481706 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:19.981688544 +0000 UTC m=+154.642791772 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.482065 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26c3ab3c-b007-48bf-9267-1be0df74a551-catalog-content\") pod \"community-operators-d8qvw\" (UID: \"26c3ab3c-b007-48bf-9267-1be0df74a551\") " pod="openshift-marketplace/community-operators-d8qvw" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.482146 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/26c3ab3c-b007-48bf-9267-1be0df74a551-utilities\") pod \"community-operators-d8qvw\" (UID: \"26c3ab3c-b007-48bf-9267-1be0df74a551\") " pod="openshift-marketplace/community-operators-d8qvw" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.482258 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.482382 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlvb7\" (UniqueName: \"kubernetes.io/projected/26c3ab3c-b007-48bf-9267-1be0df74a551-kube-api-access-hlvb7\") pod \"community-operators-d8qvw\" (UID: \"26c3ab3c-b007-48bf-9267-1be0df74a551\") " pod="openshift-marketplace/community-operators-d8qvw" Feb 19 13:12:19 crc kubenswrapper[4861]: E0219 13:12:19.484895 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:19.98487822 +0000 UTC m=+154.645981448 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.496243 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d8qvw"] Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.518704 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7wkxt" event={"ID":"f3bb6cb9-0c94-47e3-a796-dca0ba8491ae","Type":"ContainerStarted","Data":"c46de4979295a9cba92b1cb038fc19eedf288d0d755db912aef91533e6934644"} Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.518919 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7wkxt" event={"ID":"f3bb6cb9-0c94-47e3-a796-dca0ba8491ae","Type":"ContainerStarted","Data":"da2398b068cd50a7e6b1724504460c029bb77255c94e2273c7bc876561097a3e"} Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.525927 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qwjzc" event={"ID":"aa335a0a-4c08-4593-8c73-e0c2adeb76b7","Type":"ContainerStarted","Data":"c88eb6c5983f5719b7427630164b540b61a7cb5dc082b7a09881c8e2de81af41"} Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.528380 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-qwjzc" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.585757 4861 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qwjzc 
container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.585842 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-qwjzc" podUID="aa335a0a-4c08-4593-8c73-e0c2adeb76b7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.586288 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.586833 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlvb7\" (UniqueName: \"kubernetes.io/projected/26c3ab3c-b007-48bf-9267-1be0df74a551-kube-api-access-hlvb7\") pod \"community-operators-d8qvw\" (UID: \"26c3ab3c-b007-48bf-9267-1be0df74a551\") " pod="openshift-marketplace/community-operators-d8qvw" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.586975 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26c3ab3c-b007-48bf-9267-1be0df74a551-catalog-content\") pod \"community-operators-d8qvw\" (UID: \"26c3ab3c-b007-48bf-9267-1be0df74a551\") " pod="openshift-marketplace/community-operators-d8qvw" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.587013 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/26c3ab3c-b007-48bf-9267-1be0df74a551-utilities\") pod \"community-operators-d8qvw\" (UID: \"26c3ab3c-b007-48bf-9267-1be0df74a551\") " pod="openshift-marketplace/community-operators-d8qvw" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.587486 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26c3ab3c-b007-48bf-9267-1be0df74a551-utilities\") pod \"community-operators-d8qvw\" (UID: \"26c3ab3c-b007-48bf-9267-1be0df74a551\") " pod="openshift-marketplace/community-operators-d8qvw" Feb 19 13:12:19 crc kubenswrapper[4861]: E0219 13:12:19.587597 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:20.087579143 +0000 UTC m=+154.748682371 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.594111 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26c3ab3c-b007-48bf-9267-1be0df74a551-catalog-content\") pod \"community-operators-d8qvw\" (UID: \"26c3ab3c-b007-48bf-9267-1be0df74a551\") " pod="openshift-marketplace/community-operators-d8qvw" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.604384 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ingress-canary/ingress-canary-llq4c" podStartSLOduration=7.60434491 podStartE2EDuration="7.60434491s" podCreationTimestamp="2026-02-19 13:12:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:19.583092108 +0000 UTC m=+154.244195336" watchObservedRunningTime="2026-02-19 13:12:19.60434491 +0000 UTC m=+154.265448138" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.647283 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlvb7\" (UniqueName: \"kubernetes.io/projected/26c3ab3c-b007-48bf-9267-1be0df74a551-kube-api-access-hlvb7\") pod \"community-operators-d8qvw\" (UID: \"26c3ab3c-b007-48bf-9267-1be0df74a551\") " pod="openshift-marketplace/community-operators-d8qvw" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.652492 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-khrzg"] Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.654473 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-khrzg" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.657142 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.693681 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4cdd9" event={"ID":"b2c3695d-6228-4722-8394-a31ec8e7333c","Type":"ContainerStarted","Data":"737a18980634daacb152cdc5caa796d7641837cf3dfbedfd0b21d814b15dc4ef"} Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.694605 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4cdd9" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.695720 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:19 crc kubenswrapper[4861]: E0219 13:12:19.696094 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:20.19607666 +0000 UTC m=+154.857179888 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.702754 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-khrzg"] Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.706233 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-qwjzc" podStartSLOduration=126.706195986 podStartE2EDuration="2m6.706195986s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:19.638058858 +0000 UTC m=+154.299162086" watchObservedRunningTime="2026-02-19 13:12:19.706195986 +0000 UTC m=+154.367299214" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.733970 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7wkxt" podStartSLOduration=126.733932423 podStartE2EDuration="2m6.733932423s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:19.691687528 +0000 UTC m=+154.352790756" watchObservedRunningTime="2026-02-19 13:12:19.733932423 +0000 UTC m=+154.395035651" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.749947 4861 generic.go:334] "Generic (PLEG): container finished" podID="52ec89ec-1759-416b-87c8-b90b4194a960" 
containerID="0624f025c1b72e9df0ca69ce515b1c43b051eee2d0d51f43588c0e3036783c8a" exitCode=0 Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.750301 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" event={"ID":"52ec89ec-1759-416b-87c8-b90b4194a960","Type":"ContainerDied","Data":"0624f025c1b72e9df0ca69ce515b1c43b051eee2d0d51f43588c0e3036783c8a"} Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.773998 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d8qvw" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.774784 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-bq4kn" event={"ID":"25018c53-6299-4fab-bf5e-819ba4f84596","Type":"ContainerStarted","Data":"4f7f19be40a8bd68df3887367c20fd6ec222fa1b6b9e9fd45a86865339755802"} Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.789679 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4cdd9" podStartSLOduration=126.789662607 podStartE2EDuration="2m6.789662607s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:19.783432589 +0000 UTC m=+154.444535827" watchObservedRunningTime="2026-02-19 13:12:19.789662607 +0000 UTC m=+154.450765835" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.803175 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 
13:12:19.803683 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eb10abc-c209-4a6b-8fc8-39973ed75fd6-utilities\") pod \"certified-operators-khrzg\" (UID: \"0eb10abc-c209-4a6b-8fc8-39973ed75fd6\") " pod="openshift-marketplace/certified-operators-khrzg" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.803729 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm6n7\" (UniqueName: \"kubernetes.io/projected/0eb10abc-c209-4a6b-8fc8-39973ed75fd6-kube-api-access-pm6n7\") pod \"certified-operators-khrzg\" (UID: \"0eb10abc-c209-4a6b-8fc8-39973ed75fd6\") " pod="openshift-marketplace/certified-operators-khrzg" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.803793 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eb10abc-c209-4a6b-8fc8-39973ed75fd6-catalog-content\") pod \"certified-operators-khrzg\" (UID: \"0eb10abc-c209-4a6b-8fc8-39973ed75fd6\") " pod="openshift-marketplace/certified-operators-khrzg" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.804256 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j97fz"] Feb 19 13:12:19 crc kubenswrapper[4861]: E0219 13:12:19.804610 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:20.304580488 +0000 UTC m=+154.965683716 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.805608 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j97fz" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.822540 4861 patch_prober.go:28] interesting pod/router-default-5444994796-9zmzh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 13:12:19 crc kubenswrapper[4861]: [-]has-synced failed: reason withheld Feb 19 13:12:19 crc kubenswrapper[4861]: [+]process-running ok Feb 19 13:12:19 crc kubenswrapper[4861]: healthz check failed Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.823328 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9zmzh" podUID="ed3e037e-4dc7-4240-b54f-20931407f4a3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.827768 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-d27vq" event={"ID":"25efe6ba-5133-429c-9b89-63bdc857d930","Type":"ContainerStarted","Data":"1b51ba8321b6ef24c49b9c286b61e28cee10ac4fadf5153db0ff406e2fa3ac22"} Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.854973 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j97fz"] Feb 19 13:12:19 crc 
kubenswrapper[4861]: I0219 13:12:19.878292 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-szdk4" event={"ID":"a90d3758-749b-4327-877f-ecb89c49b5e0","Type":"ContainerStarted","Data":"7f18f82742716a5ceda777a94b552a6c7dfa048ddf75c3dee86facde8823f6dd"} Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.901081 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rpbx6" event={"ID":"6502d829-f644-4207-823f-2ca6a0d682aa","Type":"ContainerStarted","Data":"7347b0eb8f2a3cfe590a5ec2afd3893d962acff845aad6e69fd5809468bb09c3"} Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.910352 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4cjxr" event={"ID":"eb6e1d5f-e904-4931-9d35-e1b4b1a30361","Type":"ContainerStarted","Data":"7a6c7aa1e75c47b28f099eee9e8116ef4226526629a7b42b588cf4ca85cb0311"} Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.910403 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4cjxr" event={"ID":"eb6e1d5f-e904-4931-9d35-e1b4b1a30361","Type":"ContainerStarted","Data":"78986461a051a87d252383a08897da91505933ae683f28ad40128f8f672c42cb"} Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.911743 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eb10abc-c209-4a6b-8fc8-39973ed75fd6-catalog-content\") pod \"certified-operators-khrzg\" (UID: \"0eb10abc-c209-4a6b-8fc8-39973ed75fd6\") " pod="openshift-marketplace/certified-operators-khrzg" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.911805 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttspt\" 
(UniqueName: \"kubernetes.io/projected/da26b508-7a00-4494-afc8-3da8b16eeaa7-kube-api-access-ttspt\") pod \"certified-operators-j97fz\" (UID: \"da26b508-7a00-4494-afc8-3da8b16eeaa7\") " pod="openshift-marketplace/certified-operators-j97fz" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.911832 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da26b508-7a00-4494-afc8-3da8b16eeaa7-utilities\") pod \"certified-operators-j97fz\" (UID: \"da26b508-7a00-4494-afc8-3da8b16eeaa7\") " pod="openshift-marketplace/certified-operators-j97fz" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.911873 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da26b508-7a00-4494-afc8-3da8b16eeaa7-catalog-content\") pod \"certified-operators-j97fz\" (UID: \"da26b508-7a00-4494-afc8-3da8b16eeaa7\") " pod="openshift-marketplace/certified-operators-j97fz" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.911906 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.911934 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eb10abc-c209-4a6b-8fc8-39973ed75fd6-utilities\") pod \"certified-operators-khrzg\" (UID: \"0eb10abc-c209-4a6b-8fc8-39973ed75fd6\") " pod="openshift-marketplace/certified-operators-khrzg" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.911959 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pm6n7\" (UniqueName: \"kubernetes.io/projected/0eb10abc-c209-4a6b-8fc8-39973ed75fd6-kube-api-access-pm6n7\") pod \"certified-operators-khrzg\" (UID: \"0eb10abc-c209-4a6b-8fc8-39973ed75fd6\") " pod="openshift-marketplace/certified-operators-khrzg" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.913405 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eb10abc-c209-4a6b-8fc8-39973ed75fd6-catalog-content\") pod \"certified-operators-khrzg\" (UID: \"0eb10abc-c209-4a6b-8fc8-39973ed75fd6\") " pod="openshift-marketplace/certified-operators-khrzg" Feb 19 13:12:19 crc kubenswrapper[4861]: E0219 13:12:19.918986 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:20.418971264 +0000 UTC m=+155.080074492 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.919286 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eb10abc-c209-4a6b-8fc8-39973ed75fd6-utilities\") pod \"certified-operators-khrzg\" (UID: \"0eb10abc-c209-4a6b-8fc8-39973ed75fd6\") " pod="openshift-marketplace/certified-operators-khrzg" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.940077 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e58decb9959e4805f7b314b4ac651790c02b0cb557f0928a4f8a1d22d475df31"} Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.946384 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"cd40deba6f7d28a7deaabb4b334ab92adf3b34115ae392df23b72793bb134889"} Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.947062 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.948437 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-frkp7" 
event={"ID":"05d18d0b-96e1-4df3-a714-c30515624398","Type":"ContainerStarted","Data":"a0509c7c15fcdbcc83f08db7b78d197330a88bfde90145680ae63b5f19a7584f"} Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.950203 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"01327bb9539a10d4f31e2eb530c16143ac0ccaf6be5d3ed1548c3133e64aa268"} Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.951403 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fjhdc" event={"ID":"4fb02ce1-52b1-451d-8e75-25a71830b204","Type":"ContainerStarted","Data":"6855023b60dbf9d4d066c536075d0b84ea9e5e422c3ffe0ab66e972c22dddae1"} Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.952100 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fjhdc" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.953635 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-znsxg" event={"ID":"60256506-33fa-4551-bbb5-851b8679cf93","Type":"ContainerStarted","Data":"da40e463a989c47170175f175f9313fb8426e17be1f176177acf541034b010ac"} Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.954269 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-znsxg" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.960872 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4cjxr" podStartSLOduration=126.960854418 podStartE2EDuration="2m6.960854418s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:19.952402832 +0000 UTC m=+154.613506060" watchObservedRunningTime="2026-02-19 13:12:19.960854418 +0000 UTC m=+154.621957646" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.965833 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-dgcks" event={"ID":"1c5fdc2b-06f7-416e-9c27-058bd2b8a7eb","Type":"ContainerStarted","Data":"3c87ea19b8d544f3987a4efa2ccf20a016830c2c46f6ef1885d9eb7994c60379"} Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.981135 4861 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-znsxg container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.981192 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-znsxg" podUID="60256506-33fa-4551-bbb5-851b8679cf93" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.981470 4861 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-fjhdc container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.981487 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fjhdc" podUID="4fb02ce1-52b1-451d-8e75-25a71830b204" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 
10.217.0.31:8443: connect: connection refused" Feb 19 13:12:19 crc kubenswrapper[4861]: I0219 13:12:19.989823 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm6n7\" (UniqueName: \"kubernetes.io/projected/0eb10abc-c209-4a6b-8fc8-39973ed75fd6-kube-api-access-pm6n7\") pod \"certified-operators-khrzg\" (UID: \"0eb10abc-c209-4a6b-8fc8-39973ed75fd6\") " pod="openshift-marketplace/certified-operators-khrzg" Feb 19 13:12:20 crc kubenswrapper[4861]: I0219 13:12:20.019171 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-48755" Feb 19 13:12:20 crc kubenswrapper[4861]: I0219 13:12:20.021911 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:20 crc kubenswrapper[4861]: I0219 13:12:20.022222 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da26b508-7a00-4494-afc8-3da8b16eeaa7-catalog-content\") pod \"certified-operators-j97fz\" (UID: \"da26b508-7a00-4494-afc8-3da8b16eeaa7\") " pod="openshift-marketplace/certified-operators-j97fz" Feb 19 13:12:20 crc kubenswrapper[4861]: I0219 13:12:20.022365 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttspt\" (UniqueName: \"kubernetes.io/projected/da26b508-7a00-4494-afc8-3da8b16eeaa7-kube-api-access-ttspt\") pod \"certified-operators-j97fz\" (UID: \"da26b508-7a00-4494-afc8-3da8b16eeaa7\") " pod="openshift-marketplace/certified-operators-j97fz" Feb 19 13:12:20 crc kubenswrapper[4861]: I0219 13:12:20.022443 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da26b508-7a00-4494-afc8-3da8b16eeaa7-utilities\") pod \"certified-operators-j97fz\" (UID: \"da26b508-7a00-4494-afc8-3da8b16eeaa7\") " pod="openshift-marketplace/certified-operators-j97fz" Feb 19 13:12:20 crc kubenswrapper[4861]: E0219 13:12:20.023506 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:20.5234875 +0000 UTC m=+155.184590728 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:20 crc kubenswrapper[4861]: I0219 13:12:20.024211 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da26b508-7a00-4494-afc8-3da8b16eeaa7-catalog-content\") pod \"certified-operators-j97fz\" (UID: \"da26b508-7a00-4494-afc8-3da8b16eeaa7\") " pod="openshift-marketplace/certified-operators-j97fz" Feb 19 13:12:20 crc kubenswrapper[4861]: I0219 13:12:20.024327 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da26b508-7a00-4494-afc8-3da8b16eeaa7-utilities\") pod \"certified-operators-j97fz\" (UID: \"da26b508-7a00-4494-afc8-3da8b16eeaa7\") " pod="openshift-marketplace/certified-operators-j97fz" Feb 19 13:12:20 crc kubenswrapper[4861]: I0219 13:12:20.043317 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-khrzg" Feb 19 13:12:20 crc kubenswrapper[4861]: I0219 13:12:20.072232 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttspt\" (UniqueName: \"kubernetes.io/projected/da26b508-7a00-4494-afc8-3da8b16eeaa7-kube-api-access-ttspt\") pod \"certified-operators-j97fz\" (UID: \"da26b508-7a00-4494-afc8-3da8b16eeaa7\") " pod="openshift-marketplace/certified-operators-j97fz" Feb 19 13:12:20 crc kubenswrapper[4861]: I0219 13:12:20.080911 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fjhdc" podStartSLOduration=127.080890504 podStartE2EDuration="2m7.080890504s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:20.022399947 +0000 UTC m=+154.683503175" watchObservedRunningTime="2026-02-19 13:12:20.080890504 +0000 UTC m=+154.741993732" Feb 19 13:12:20 crc kubenswrapper[4861]: I0219 13:12:20.131016 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:20 crc kubenswrapper[4861]: E0219 13:12:20.141728 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:20.641709531 +0000 UTC m=+155.302812829 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:20 crc kubenswrapper[4861]: I0219 13:12:20.151502 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j97fz" Feb 19 13:12:20 crc kubenswrapper[4861]: I0219 13:12:20.162883 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-frkp7" podStartSLOduration=127.16285749 podStartE2EDuration="2m7.16285749s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:20.118064717 +0000 UTC m=+154.779167935" watchObservedRunningTime="2026-02-19 13:12:20.16285749 +0000 UTC m=+154.823960718" Feb 19 13:12:20 crc kubenswrapper[4861]: I0219 13:12:20.235370 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:20 crc kubenswrapper[4861]: E0219 13:12:20.235702 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 13:12:20.735651569 +0000 UTC m=+155.396754797 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:20 crc kubenswrapper[4861]: I0219 13:12:20.235772 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:20 crc kubenswrapper[4861]: E0219 13:12:20.236283 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:20.736264787 +0000 UTC m=+155.397368015 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:20 crc kubenswrapper[4861]: I0219 13:12:20.291727 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-gglfz" Feb 19 13:12:20 crc kubenswrapper[4861]: I0219 13:12:20.293561 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-znsxg" podStartSLOduration=127.293545107 podStartE2EDuration="2m7.293545107s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:20.254870129 +0000 UTC m=+154.915973357" watchObservedRunningTime="2026-02-19 13:12:20.293545107 +0000 UTC m=+154.954648335" Feb 19 13:12:20 crc kubenswrapper[4861]: I0219 13:12:20.353975 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-dgcks" podStartSLOduration=127.353953652 podStartE2EDuration="2m7.353953652s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:20.291962559 +0000 UTC m=+154.953065787" watchObservedRunningTime="2026-02-19 13:12:20.353953652 +0000 UTC m=+155.015056880" Feb 19 13:12:20 crc kubenswrapper[4861]: I0219 13:12:20.416787 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:20 crc kubenswrapper[4861]: E0219 13:12:20.417505 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:20.917484481 +0000 UTC m=+155.578587709 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:20 crc kubenswrapper[4861]: I0219 13:12:20.523475 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:20 crc kubenswrapper[4861]: E0219 13:12:20.523815 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:21.023804112 +0000 UTC m=+155.684907340 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:20 crc kubenswrapper[4861]: I0219 13:12:20.625960 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:20 crc kubenswrapper[4861]: E0219 13:12:20.626666 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:21.126650159 +0000 UTC m=+155.787753387 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:20 crc kubenswrapper[4861]: I0219 13:12:20.636595 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q5nqk"] Feb 19 13:12:20 crc kubenswrapper[4861]: I0219 13:12:20.733170 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:20 crc kubenswrapper[4861]: E0219 13:12:20.733492 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:21.233478906 +0000 UTC m=+155.894582134 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:20 crc kubenswrapper[4861]: I0219 13:12:20.825860 4861 patch_prober.go:28] interesting pod/router-default-5444994796-9zmzh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 13:12:20 crc kubenswrapper[4861]: [-]has-synced failed: reason withheld Feb 19 13:12:20 crc kubenswrapper[4861]: [+]process-running ok Feb 19 13:12:20 crc kubenswrapper[4861]: healthz check failed Feb 19 13:12:20 crc kubenswrapper[4861]: I0219 13:12:20.825912 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9zmzh" podUID="ed3e037e-4dc7-4240-b54f-20931407f4a3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 13:12:20 crc kubenswrapper[4861]: I0219 13:12:20.835967 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:20 crc kubenswrapper[4861]: E0219 13:12:20.836300 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 13:12:21.336281981 +0000 UTC m=+155.997385209 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:20.937575 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:21 crc kubenswrapper[4861]: E0219 13:12:20.938497 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:21.438479908 +0000 UTC m=+156.099583136 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.000023 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d8qvw"] Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.006715 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-d27vq" event={"ID":"25efe6ba-5133-429c-9b89-63bdc857d930","Type":"ContainerStarted","Data":"9540a5155b3727b9bc040559d3f6a0ea51aa6b0fc0afebe99112338906f4fdad"} Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.042106 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:21 crc kubenswrapper[4861]: E0219 13:12:21.042968 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:21.542939543 +0000 UTC m=+156.204042771 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.048097 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rpbx6" event={"ID":"6502d829-f644-4207-823f-2ca6a0d682aa","Type":"ContainerStarted","Data":"0394bd51a047f340806a14ec9512c3e4d72bf26a2b7396deb3e579a591e15659"} Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.069321 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9wmt2" event={"ID":"f9dd0952-ba1e-4f66-aa15-dfa87db27fd8","Type":"ContainerStarted","Data":"be0fbc8511bde48c345c3aaaa44e508426fbd7672904df8d72f1d9ca6c8e443b"} Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.069665 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-9wmt2" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.078137 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" event={"ID":"52ec89ec-1759-416b-87c8-b90b4194a960","Type":"ContainerStarted","Data":"329518ea02fc67a221d43f627c88d5809b7318229efd62e6d37f3fa21f46dd3a"} Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.085549 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"5bd2fd9bc1ad55dab07d704cf05a2cf5529a62a040fbed3bc714eaf9e5fa894d"} Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.098988 4861 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"44707d97c24ea7532f2a195efeae9bacaecd06103d3452c6d57ce559de8fa68f"} Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.108310 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5nqk" event={"ID":"4e4630be-249d-4f67-bd5c-eafaf08b2705","Type":"ContainerStarted","Data":"3782f300b7a3799e9277e49c30b61bb88ef1eddfeb472783a4930829364e7f5c"} Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.127281 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-d27vq" podStartSLOduration=128.12726563 podStartE2EDuration="2m8.12726563s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:21.126744925 +0000 UTC m=+155.787848153" watchObservedRunningTime="2026-02-19 13:12:21.12726563 +0000 UTC m=+155.788368858" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.127996 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"95bfc936678cb3cd161a41b9de247d53ccba4609af40df7270cebcdfa83ec41d"} Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.138578 4861 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qwjzc container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.138640 4861 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-marketplace/marketplace-operator-79b997595-qwjzc" podUID="aa335a0a-4c08-4593-8c73-e0c2adeb76b7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.154074 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.154443 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fjhdc" Feb 19 13:12:21 crc kubenswrapper[4861]: E0219 13:12:21.156237 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:21.656222225 +0000 UTC m=+156.317325453 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.258844 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:21 crc kubenswrapper[4861]: E0219 13:12:21.259659 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:21.759626879 +0000 UTC m=+156.420730097 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.260535 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:21 crc kubenswrapper[4861]: E0219 13:12:21.283250 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:21.783213701 +0000 UTC m=+156.444316929 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.329469 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-9wmt2" podStartSLOduration=9.329450588 podStartE2EDuration="9.329450588s" podCreationTimestamp="2026-02-19 13:12:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:21.306145484 +0000 UTC m=+155.967248712" watchObservedRunningTime="2026-02-19 13:12:21.329450588 +0000 UTC m=+155.990553816" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.344475 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.345214 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.348041 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.354516 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.354753 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.364928 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:21 crc kubenswrapper[4861]: E0219 13:12:21.365355 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:21.865339842 +0000 UTC m=+156.526443070 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.371083 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-rpbx6" podStartSLOduration=128.371044775 podStartE2EDuration="2m8.371044775s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:21.358943899 +0000 UTC m=+156.020047127" watchObservedRunningTime="2026-02-19 13:12:21.371044775 +0000 UTC m=+156.032148013" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.415924 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2lhzw"] Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.439704 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2lhzw" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.440527 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" podStartSLOduration=128.440505112 podStartE2EDuration="2m8.440505112s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:21.405080213 +0000 UTC m=+156.066183441" watchObservedRunningTime="2026-02-19 13:12:21.440505112 +0000 UTC m=+156.101608340" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.444082 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.461369 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-khrzg"] Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.466764 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.466816 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bafa1b1c-66a3-42f6-8a14-4a272b2ac176-utilities\") pod \"redhat-marketplace-2lhzw\" (UID: \"bafa1b1c-66a3-42f6-8a14-4a272b2ac176\") " pod="openshift-marketplace/redhat-marketplace-2lhzw" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.466871 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bafa1b1c-66a3-42f6-8a14-4a272b2ac176-catalog-content\") pod \"redhat-marketplace-2lhzw\" (UID: \"bafa1b1c-66a3-42f6-8a14-4a272b2ac176\") " pod="openshift-marketplace/redhat-marketplace-2lhzw" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.466894 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2f37e1d5-e255-444d-b854-c6032ee1e923-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2f37e1d5-e255-444d-b854-c6032ee1e923\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.466935 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtlzc\" (UniqueName: \"kubernetes.io/projected/bafa1b1c-66a3-42f6-8a14-4a272b2ac176-kube-api-access-wtlzc\") pod \"redhat-marketplace-2lhzw\" (UID: \"bafa1b1c-66a3-42f6-8a14-4a272b2ac176\") " pod="openshift-marketplace/redhat-marketplace-2lhzw" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.466965 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f37e1d5-e255-444d-b854-c6032ee1e923-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2f37e1d5-e255-444d-b854-c6032ee1e923\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 13:12:21 crc kubenswrapper[4861]: E0219 13:12:21.467310 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:21.967292841 +0000 UTC m=+156.628396069 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.475876 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2lhzw"] Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.573799 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.574062 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bafa1b1c-66a3-42f6-8a14-4a272b2ac176-utilities\") pod \"redhat-marketplace-2lhzw\" (UID: \"bafa1b1c-66a3-42f6-8a14-4a272b2ac176\") " pod="openshift-marketplace/redhat-marketplace-2lhzw" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.574128 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bafa1b1c-66a3-42f6-8a14-4a272b2ac176-catalog-content\") pod \"redhat-marketplace-2lhzw\" (UID: \"bafa1b1c-66a3-42f6-8a14-4a272b2ac176\") " pod="openshift-marketplace/redhat-marketplace-2lhzw" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.574158 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/2f37e1d5-e255-444d-b854-c6032ee1e923-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2f37e1d5-e255-444d-b854-c6032ee1e923\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.574193 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtlzc\" (UniqueName: \"kubernetes.io/projected/bafa1b1c-66a3-42f6-8a14-4a272b2ac176-kube-api-access-wtlzc\") pod \"redhat-marketplace-2lhzw\" (UID: \"bafa1b1c-66a3-42f6-8a14-4a272b2ac176\") " pod="openshift-marketplace/redhat-marketplace-2lhzw" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.574220 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f37e1d5-e255-444d-b854-c6032ee1e923-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2f37e1d5-e255-444d-b854-c6032ee1e923\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 13:12:21 crc kubenswrapper[4861]: E0219 13:12:21.574576 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:22.074561572 +0000 UTC m=+156.735664800 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.574604 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2f37e1d5-e255-444d-b854-c6032ee1e923-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2f37e1d5-e255-444d-b854-c6032ee1e923\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.574935 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bafa1b1c-66a3-42f6-8a14-4a272b2ac176-catalog-content\") pod \"redhat-marketplace-2lhzw\" (UID: \"bafa1b1c-66a3-42f6-8a14-4a272b2ac176\") " pod="openshift-marketplace/redhat-marketplace-2lhzw" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.575026 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bafa1b1c-66a3-42f6-8a14-4a272b2ac176-utilities\") pod \"redhat-marketplace-2lhzw\" (UID: \"bafa1b1c-66a3-42f6-8a14-4a272b2ac176\") " pod="openshift-marketplace/redhat-marketplace-2lhzw" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.606617 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-znsxg" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.613841 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/2f37e1d5-e255-444d-b854-c6032ee1e923-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2f37e1d5-e255-444d-b854-c6032ee1e923\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.644703 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtlzc\" (UniqueName: \"kubernetes.io/projected/bafa1b1c-66a3-42f6-8a14-4a272b2ac176-kube-api-access-wtlzc\") pod \"redhat-marketplace-2lhzw\" (UID: \"bafa1b1c-66a3-42f6-8a14-4a272b2ac176\") " pod="openshift-marketplace/redhat-marketplace-2lhzw" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.652678 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j97fz"] Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.676370 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:21 crc kubenswrapper[4861]: E0219 13:12:21.676683 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:22.176669316 +0000 UTC m=+156.837772544 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.697683 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.780734 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:21 crc kubenswrapper[4861]: E0219 13:12:21.781127 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:22.28111143 +0000 UTC m=+156.942214658 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.791756 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2lhzw" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.793588 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zv9zt"] Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.794567 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zv9zt" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.815052 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zv9zt"] Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.824127 4861 patch_prober.go:28] interesting pod/router-default-5444994796-9zmzh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 13:12:21 crc kubenswrapper[4861]: [-]has-synced failed: reason withheld Feb 19 13:12:21 crc kubenswrapper[4861]: [+]process-running ok Feb 19 13:12:21 crc kubenswrapper[4861]: healthz check failed Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.824176 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9zmzh" podUID="ed3e037e-4dc7-4240-b54f-20931407f4a3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.882017 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5db4ca60-2b1f-4d62-a217-890f7be1e863-utilities\") pod \"redhat-marketplace-zv9zt\" (UID: \"5db4ca60-2b1f-4d62-a217-890f7be1e863\") " pod="openshift-marketplace/redhat-marketplace-zv9zt" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.882281 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.882327 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qst6\" (UniqueName: \"kubernetes.io/projected/5db4ca60-2b1f-4d62-a217-890f7be1e863-kube-api-access-2qst6\") pod \"redhat-marketplace-zv9zt\" (UID: \"5db4ca60-2b1f-4d62-a217-890f7be1e863\") " pod="openshift-marketplace/redhat-marketplace-zv9zt" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.882350 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5db4ca60-2b1f-4d62-a217-890f7be1e863-catalog-content\") pod \"redhat-marketplace-zv9zt\" (UID: \"5db4ca60-2b1f-4d62-a217-890f7be1e863\") " pod="openshift-marketplace/redhat-marketplace-zv9zt" Feb 19 13:12:21 crc kubenswrapper[4861]: E0219 13:12:21.882626 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-19 13:12:22.382614697 +0000 UTC m=+157.043717925 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.982626 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:21 crc kubenswrapper[4861]: E0219 13:12:21.983106 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:22.483080142 +0000 UTC m=+157.144183370 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.983478 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5db4ca60-2b1f-4d62-a217-890f7be1e863-utilities\") pod \"redhat-marketplace-zv9zt\" (UID: \"5db4ca60-2b1f-4d62-a217-890f7be1e863\") " pod="openshift-marketplace/redhat-marketplace-zv9zt" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.983512 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.983566 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qst6\" (UniqueName: \"kubernetes.io/projected/5db4ca60-2b1f-4d62-a217-890f7be1e863-kube-api-access-2qst6\") pod \"redhat-marketplace-zv9zt\" (UID: \"5db4ca60-2b1f-4d62-a217-890f7be1e863\") " pod="openshift-marketplace/redhat-marketplace-zv9zt" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.983589 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5db4ca60-2b1f-4d62-a217-890f7be1e863-catalog-content\") pod \"redhat-marketplace-zv9zt\" (UID: 
\"5db4ca60-2b1f-4d62-a217-890f7be1e863\") " pod="openshift-marketplace/redhat-marketplace-zv9zt" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.983987 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5db4ca60-2b1f-4d62-a217-890f7be1e863-catalog-content\") pod \"redhat-marketplace-zv9zt\" (UID: \"5db4ca60-2b1f-4d62-a217-890f7be1e863\") " pod="openshift-marketplace/redhat-marketplace-zv9zt" Feb 19 13:12:21 crc kubenswrapper[4861]: I0219 13:12:21.985047 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5db4ca60-2b1f-4d62-a217-890f7be1e863-utilities\") pod \"redhat-marketplace-zv9zt\" (UID: \"5db4ca60-2b1f-4d62-a217-890f7be1e863\") " pod="openshift-marketplace/redhat-marketplace-zv9zt" Feb 19 13:12:21 crc kubenswrapper[4861]: E0219 13:12:21.985275 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:22.485265827 +0000 UTC m=+157.146369055 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.028537 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qst6\" (UniqueName: \"kubernetes.io/projected/5db4ca60-2b1f-4d62-a217-890f7be1e863-kube-api-access-2qst6\") pod \"redhat-marketplace-zv9zt\" (UID: \"5db4ca60-2b1f-4d62-a217-890f7be1e863\") " pod="openshift-marketplace/redhat-marketplace-zv9zt" Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.084233 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:22 crc kubenswrapper[4861]: E0219 13:12:22.084447 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:22.584390912 +0000 UTC m=+157.245494140 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.133515 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zv9zt" Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.139972 4861 generic.go:334] "Generic (PLEG): container finished" podID="4e4630be-249d-4f67-bd5c-eafaf08b2705" containerID="2746feb52f8ae30acbb30333b7a9ddbccd1aed44f689a5fb598be19a6b2d2c3a" exitCode=0 Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.140037 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5nqk" event={"ID":"4e4630be-249d-4f67-bd5c-eafaf08b2705","Type":"ContainerDied","Data":"2746feb52f8ae30acbb30333b7a9ddbccd1aed44f689a5fb598be19a6b2d2c3a"} Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.145993 4861 generic.go:334] "Generic (PLEG): container finished" podID="da26b508-7a00-4494-afc8-3da8b16eeaa7" containerID="e748939e43dfee1d91090d0517531985e03ec76c33f44c960b2a5ee9de7f2cd7" exitCode=0 Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.146101 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j97fz" event={"ID":"da26b508-7a00-4494-afc8-3da8b16eeaa7","Type":"ContainerDied","Data":"e748939e43dfee1d91090d0517531985e03ec76c33f44c960b2a5ee9de7f2cd7"} Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.146163 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j97fz" 
event={"ID":"da26b508-7a00-4494-afc8-3da8b16eeaa7","Type":"ContainerStarted","Data":"a292c9ed676c1c0dfa550bf1702507528b4378b85602cddeb76598fd10f56133"} Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.148146 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mvbrc" event={"ID":"da029de4-0b02-44dd-ab8f-8b6f2f7f41af","Type":"ContainerStarted","Data":"d0c43fb42fa48350b93100d9c6e5a5253403a0390dc2a2da9691095672a27328"} Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.150911 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.172274 4861 generic.go:334] "Generic (PLEG): container finished" podID="0eb10abc-c209-4a6b-8fc8-39973ed75fd6" containerID="d8d0ae7b4e56b8f3ef7f3055161b3cc096a202c59bac544ca0327ec7f71b5169" exitCode=0 Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.172411 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-khrzg" event={"ID":"0eb10abc-c209-4a6b-8fc8-39973ed75fd6","Type":"ContainerDied","Data":"d8d0ae7b4e56b8f3ef7f3055161b3cc096a202c59bac544ca0327ec7f71b5169"} Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.172494 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-khrzg" event={"ID":"0eb10abc-c209-4a6b-8fc8-39973ed75fd6","Type":"ContainerStarted","Data":"507378f21944f35c930e4c582caeac3832fc17c6b733e684748cd0e964be6123"} Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.191225 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 
13:12:22 crc kubenswrapper[4861]: E0219 13:12:22.191690 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:22.691671252 +0000 UTC m=+157.352774670 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.202040 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9wmt2" event={"ID":"f9dd0952-ba1e-4f66-aa15-dfa87db27fd8","Type":"ContainerStarted","Data":"ecab43067577dedf3a05ea768b118eaad6d5fd9c9fa958bb9ca714e5d8a22cd1"} Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.217661 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" event={"ID":"52ec89ec-1759-416b-87c8-b90b4194a960","Type":"ContainerStarted","Data":"5a0c83eecaa1e1d521eda8e177847c6d5bb40fb4847316c557c6f46b601af1f0"} Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.222412 4861 generic.go:334] "Generic (PLEG): container finished" podID="26c3ab3c-b007-48bf-9267-1be0df74a551" containerID="c59e6ced916cc91a660cb50515889de6c2090955d4200f5c5a2d93339d55fec8" exitCode=0 Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.222930 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8qvw" 
event={"ID":"26c3ab3c-b007-48bf-9267-1be0df74a551","Type":"ContainerDied","Data":"c59e6ced916cc91a660cb50515889de6c2090955d4200f5c5a2d93339d55fec8"} Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.222981 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8qvw" event={"ID":"26c3ab3c-b007-48bf-9267-1be0df74a551","Type":"ContainerStarted","Data":"3ffdb97756be56a0d453482a4f3bff1a7a029d6a52ef02d1e9834aa5f5300173"} Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.223816 4861 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qwjzc container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.223854 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-qwjzc" podUID="aa335a0a-4c08-4593-8c73-e0c2adeb76b7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.292342 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:22 crc kubenswrapper[4861]: E0219 13:12:22.292760 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 13:12:22.792738845 +0000 UTC m=+157.453842073 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.294239 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:22 crc kubenswrapper[4861]: E0219 13:12:22.302128 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:22.802111948 +0000 UTC m=+157.463215176 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.381532 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kcqf7"] Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.385858 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kcqf7" Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.389963 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.397073 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.397241 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/617be892-2391-43d5-94d0-c0600d0c66a0-catalog-content\") pod \"redhat-operators-kcqf7\" (UID: \"617be892-2391-43d5-94d0-c0600d0c66a0\") " pod="openshift-marketplace/redhat-operators-kcqf7" Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.397288 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2wbr\" 
(UniqueName: \"kubernetes.io/projected/617be892-2391-43d5-94d0-c0600d0c66a0-kube-api-access-c2wbr\") pod \"redhat-operators-kcqf7\" (UID: \"617be892-2391-43d5-94d0-c0600d0c66a0\") " pod="openshift-marketplace/redhat-operators-kcqf7" Feb 19 13:12:22 crc kubenswrapper[4861]: E0219 13:12:22.397328 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:22.897309544 +0000 UTC m=+157.558412782 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.397351 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/617be892-2391-43d5-94d0-c0600d0c66a0-utilities\") pod \"redhat-operators-kcqf7\" (UID: \"617be892-2391-43d5-94d0-c0600d0c66a0\") " pod="openshift-marketplace/redhat-operators-kcqf7" Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.397391 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:22 crc kubenswrapper[4861]: E0219 13:12:22.397688 4861 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:22.897677525 +0000 UTC m=+157.558780823 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.413796 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kcqf7"] Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.479571 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 13:12:22 crc kubenswrapper[4861]: W0219 13:12:22.489256 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod2f37e1d5_e255_444d_b854_c6032ee1e923.slice/crio-56cf6f7f6671310074cb1e396080f9c31b0874977d56839969f3d9f4518a5d8b WatchSource:0}: Error finding container 56cf6f7f6671310074cb1e396080f9c31b0874977d56839969f3d9f4518a5d8b: Status 404 returned error can't find the container with id 56cf6f7f6671310074cb1e396080f9c31b0874977d56839969f3d9f4518a5d8b Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.499182 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " 
Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.499446 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/617be892-2391-43d5-94d0-c0600d0c66a0-utilities\") pod \"redhat-operators-kcqf7\" (UID: \"617be892-2391-43d5-94d0-c0600d0c66a0\") " pod="openshift-marketplace/redhat-operators-kcqf7" Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.499504 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/617be892-2391-43d5-94d0-c0600d0c66a0-catalog-content\") pod \"redhat-operators-kcqf7\" (UID: \"617be892-2391-43d5-94d0-c0600d0c66a0\") " pod="openshift-marketplace/redhat-operators-kcqf7" Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.499535 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2wbr\" (UniqueName: \"kubernetes.io/projected/617be892-2391-43d5-94d0-c0600d0c66a0-kube-api-access-c2wbr\") pod \"redhat-operators-kcqf7\" (UID: \"617be892-2391-43d5-94d0-c0600d0c66a0\") " pod="openshift-marketplace/redhat-operators-kcqf7" Feb 19 13:12:22 crc kubenswrapper[4861]: E0219 13:12:22.500993 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:23.000964725 +0000 UTC m=+157.662067993 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.504334 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/617be892-2391-43d5-94d0-c0600d0c66a0-utilities\") pod \"redhat-operators-kcqf7\" (UID: \"617be892-2391-43d5-94d0-c0600d0c66a0\") " pod="openshift-marketplace/redhat-operators-kcqf7" Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.504601 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/617be892-2391-43d5-94d0-c0600d0c66a0-catalog-content\") pod \"redhat-operators-kcqf7\" (UID: \"617be892-2391-43d5-94d0-c0600d0c66a0\") " pod="openshift-marketplace/redhat-operators-kcqf7" Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.528886 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2wbr\" (UniqueName: \"kubernetes.io/projected/617be892-2391-43d5-94d0-c0600d0c66a0-kube-api-access-c2wbr\") pod \"redhat-operators-kcqf7\" (UID: \"617be892-2391-43d5-94d0-c0600d0c66a0\") " pod="openshift-marketplace/redhat-operators-kcqf7" Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.569068 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2lhzw"] Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.601116 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:22 crc kubenswrapper[4861]: E0219 13:12:22.601440 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:23.101411458 +0000 UTC m=+157.762514686 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.632379 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4cdd9" Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.702289 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:22 crc kubenswrapper[4861]: E0219 13:12:22.702625 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 13:12:23.202601665 +0000 UTC m=+157.863704893 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.718694 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zv9zt"] Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.745529 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kcqf7" Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.784707 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-plx4l"] Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.786342 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-plx4l" Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.803809 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-plx4l"] Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.804591 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:22 crc kubenswrapper[4861]: E0219 13:12:22.805026 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:23.305013769 +0000 UTC m=+157.966116997 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.818984 4861 patch_prober.go:28] interesting pod/router-default-5444994796-9zmzh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 13:12:22 crc kubenswrapper[4861]: [-]has-synced failed: reason withheld Feb 19 13:12:22 crc kubenswrapper[4861]: [+]process-running ok Feb 19 13:12:22 crc kubenswrapper[4861]: healthz check failed Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.819279 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9zmzh" podUID="ed3e037e-4dc7-4240-b54f-20931407f4a3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.907840 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.908195 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/685e7fb8-e8da-4f5a-87c5-424d1e12a6be-utilities\") pod \"redhat-operators-plx4l\" (UID: 
\"685e7fb8-e8da-4f5a-87c5-424d1e12a6be\") " pod="openshift-marketplace/redhat-operators-plx4l" Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.908230 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmmql\" (UniqueName: \"kubernetes.io/projected/685e7fb8-e8da-4f5a-87c5-424d1e12a6be-kube-api-access-mmmql\") pod \"redhat-operators-plx4l\" (UID: \"685e7fb8-e8da-4f5a-87c5-424d1e12a6be\") " pod="openshift-marketplace/redhat-operators-plx4l" Feb 19 13:12:22 crc kubenswrapper[4861]: I0219 13:12:22.908266 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/685e7fb8-e8da-4f5a-87c5-424d1e12a6be-catalog-content\") pod \"redhat-operators-plx4l\" (UID: \"685e7fb8-e8da-4f5a-87c5-424d1e12a6be\") " pod="openshift-marketplace/redhat-operators-plx4l" Feb 19 13:12:22 crc kubenswrapper[4861]: E0219 13:12:22.908408 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:23.408395242 +0000 UTC m=+158.069498470 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.019647 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kcqf7"] Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.020275 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/685e7fb8-e8da-4f5a-87c5-424d1e12a6be-utilities\") pod \"redhat-operators-plx4l\" (UID: \"685e7fb8-e8da-4f5a-87c5-424d1e12a6be\") " pod="openshift-marketplace/redhat-operators-plx4l" Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.020401 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmmql\" (UniqueName: \"kubernetes.io/projected/685e7fb8-e8da-4f5a-87c5-424d1e12a6be-kube-api-access-mmmql\") pod \"redhat-operators-plx4l\" (UID: \"685e7fb8-e8da-4f5a-87c5-424d1e12a6be\") " pod="openshift-marketplace/redhat-operators-plx4l" Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.020529 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/685e7fb8-e8da-4f5a-87c5-424d1e12a6be-catalog-content\") pod \"redhat-operators-plx4l\" (UID: \"685e7fb8-e8da-4f5a-87c5-424d1e12a6be\") " pod="openshift-marketplace/redhat-operators-plx4l" Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.020620 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:23 crc kubenswrapper[4861]: E0219 13:12:23.022006 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:23.521991283 +0000 UTC m=+158.183094511 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.022693 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/685e7fb8-e8da-4f5a-87c5-424d1e12a6be-catalog-content\") pod \"redhat-operators-plx4l\" (UID: \"685e7fb8-e8da-4f5a-87c5-424d1e12a6be\") " pod="openshift-marketplace/redhat-operators-plx4l" Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.025624 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/685e7fb8-e8da-4f5a-87c5-424d1e12a6be-utilities\") pod \"redhat-operators-plx4l\" (UID: \"685e7fb8-e8da-4f5a-87c5-424d1e12a6be\") " pod="openshift-marketplace/redhat-operators-plx4l" Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.072616 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmmql\" (UniqueName: 
\"kubernetes.io/projected/685e7fb8-e8da-4f5a-87c5-424d1e12a6be-kube-api-access-mmmql\") pod \"redhat-operators-plx4l\" (UID: \"685e7fb8-e8da-4f5a-87c5-424d1e12a6be\") " pod="openshift-marketplace/redhat-operators-plx4l" Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.112567 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-plx4l" Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.121843 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:23 crc kubenswrapper[4861]: E0219 13:12:23.122290 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:23.622274461 +0000 UTC m=+158.283377689 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.224548 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:23 crc kubenswrapper[4861]: E0219 13:12:23.225006 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:23.724972914 +0000 UTC m=+158.386076142 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.248385 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mvbrc" event={"ID":"da029de4-0b02-44dd-ab8f-8b6f2f7f41af","Type":"ContainerStarted","Data":"dd1376f243ff67f972f94bd4c3afdb159a8f7610ae1558948efd8228dd9739ee"} Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.257324 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kcqf7" event={"ID":"617be892-2391-43d5-94d0-c0600d0c66a0","Type":"ContainerStarted","Data":"599297806b3e8d5004dc2159028d53fd2253b1d7ed82de00caa594193406cbf2"} Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.263973 4861 generic.go:334] "Generic (PLEG): container finished" podID="5db4ca60-2b1f-4d62-a217-890f7be1e863" containerID="74efef3dc74f7c26e83083cf08d8e29581cd14cd07aed16f42bc0c2b04d926e0" exitCode=0 Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.264061 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zv9zt" event={"ID":"5db4ca60-2b1f-4d62-a217-890f7be1e863","Type":"ContainerDied","Data":"74efef3dc74f7c26e83083cf08d8e29581cd14cd07aed16f42bc0c2b04d926e0"} Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.264095 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zv9zt" event={"ID":"5db4ca60-2b1f-4d62-a217-890f7be1e863","Type":"ContainerStarted","Data":"2456017c112249437718dd4f13bd8dc04385bdd1239ff75ee2f66af909c3ba24"} 
Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.266212 4861 generic.go:334] "Generic (PLEG): container finished" podID="bafa1b1c-66a3-42f6-8a14-4a272b2ac176" containerID="7c41808a278dac682d727e34db5d7a6f40d4c10eadeea5c09ce79145019c043f" exitCode=0 Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.266305 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2lhzw" event={"ID":"bafa1b1c-66a3-42f6-8a14-4a272b2ac176","Type":"ContainerDied","Data":"7c41808a278dac682d727e34db5d7a6f40d4c10eadeea5c09ce79145019c043f"} Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.266356 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2lhzw" event={"ID":"bafa1b1c-66a3-42f6-8a14-4a272b2ac176","Type":"ContainerStarted","Data":"050df1f70a5de4022c4651921835fbb1a7335a818fc23bd7336b959ca4cc9cd3"} Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.281916 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2f37e1d5-e255-444d-b854-c6032ee1e923","Type":"ContainerStarted","Data":"acc2003f1b808c153fa6d34774d257edac845a6bf535e4ae5ec28bc0f9b3fd98"} Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.282000 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2f37e1d5-e255-444d-b854-c6032ee1e923","Type":"ContainerStarted","Data":"56cf6f7f6671310074cb1e396080f9c31b0874977d56839969f3d9f4518a5d8b"} Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.337968 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:23 crc kubenswrapper[4861]: 
E0219 13:12:23.340992 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:23.840974648 +0000 UTC m=+158.502077876 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.341644 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:23 crc kubenswrapper[4861]: E0219 13:12:23.346019 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:23.84600803 +0000 UTC m=+158.507111258 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.391188 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.391169874 podStartE2EDuration="2.391169874s" podCreationTimestamp="2026-02-19 13:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:23.333701268 +0000 UTC m=+157.994804496" watchObservedRunningTime="2026-02-19 13:12:23.391169874 +0000 UTC m=+158.052273102" Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.445446 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:23 crc kubenswrapper[4861]: E0219 13:12:23.447010 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:23.94698889 +0000 UTC m=+158.608092128 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.447564 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:23 crc kubenswrapper[4861]: E0219 13:12:23.448392 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:23.948381342 +0000 UTC m=+158.609484580 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.459195 4861 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.548166 4861 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-19T13:12:23.4592279Z","Handler":null,"Name":""} Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.548731 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:23 crc kubenswrapper[4861]: E0219 13:12:23.549225 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 13:12:24.049208997 +0000 UTC m=+158.710312225 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.549558 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:23 crc kubenswrapper[4861]: E0219 13:12:23.549979 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 13:12:24.049968211 +0000 UTC m=+158.711071439 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6mmf" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.555120 4861 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.555158 4861 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.653753 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.663004 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.665552 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-plx4l"] Feb 19 13:12:23 crc kubenswrapper[4861]: W0219 13:12:23.701473 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod685e7fb8_e8da_4f5a_87c5_424d1e12a6be.slice/crio-f467fe7c8030d2e6118c230ac89cc9c9d8ef114e609b49c6f8f793d21de04fb5 WatchSource:0}: Error finding container f467fe7c8030d2e6118c230ac89cc9c9d8ef114e609b49c6f8f793d21de04fb5: Status 404 returned error can't find the container with id f467fe7c8030d2e6118c230ac89cc9c9d8ef114e609b49c6f8f793d21de04fb5 Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.761055 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.764122 4861 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.764158 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.812440 4861 patch_prober.go:28] interesting pod/router-default-5444994796-9zmzh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 13:12:23 crc kubenswrapper[4861]: [-]has-synced failed: reason withheld Feb 19 13:12:23 crc kubenswrapper[4861]: [+]process-running ok Feb 19 13:12:23 crc kubenswrapper[4861]: healthz check failed Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.812486 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9zmzh" podUID="ed3e037e-4dc7-4240-b54f-20931407f4a3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 13:12:23 crc kubenswrapper[4861]: I0219 13:12:23.817320 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6mmf\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:24 crc kubenswrapper[4861]: I0219 13:12:24.004256 4861 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 19 13:12:24 crc kubenswrapper[4861]: I0219 13:12:24.046875 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:24 crc kubenswrapper[4861]: I0219 13:12:24.290979 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-4bs6h" Feb 19 13:12:24 crc kubenswrapper[4861]: I0219 13:12:24.291400 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-4bs6h" Feb 19 13:12:24 crc kubenswrapper[4861]: I0219 13:12:24.295231 4861 patch_prober.go:28] interesting pod/console-f9d7485db-4bs6h container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 19 13:12:24 crc kubenswrapper[4861]: I0219 13:12:24.295291 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-4bs6h" podUID="e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 19 13:12:24 crc kubenswrapper[4861]: I0219 13:12:24.322774 4861 generic.go:334] "Generic (PLEG): container finished" podID="685e7fb8-e8da-4f5a-87c5-424d1e12a6be" containerID="a3a8e00596a3e550d25ef8b63d680aa05e278240c7020463f663a09d37ccdda5" exitCode=0 Feb 19 13:12:24 crc kubenswrapper[4861]: I0219 13:12:24.322918 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-plx4l" event={"ID":"685e7fb8-e8da-4f5a-87c5-424d1e12a6be","Type":"ContainerDied","Data":"a3a8e00596a3e550d25ef8b63d680aa05e278240c7020463f663a09d37ccdda5"} Feb 19 13:12:24 crc 
kubenswrapper[4861]: I0219 13:12:24.323402 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-plx4l" event={"ID":"685e7fb8-e8da-4f5a-87c5-424d1e12a6be","Type":"ContainerStarted","Data":"f467fe7c8030d2e6118c230ac89cc9c9d8ef114e609b49c6f8f793d21de04fb5"} Feb 19 13:12:24 crc kubenswrapper[4861]: I0219 13:12:24.345127 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mvbrc" event={"ID":"da029de4-0b02-44dd-ab8f-8b6f2f7f41af","Type":"ContainerStarted","Data":"81978895102b741bcce640289397c9c0e1a49dde44a4f8767d44c11cc6559232"} Feb 19 13:12:24 crc kubenswrapper[4861]: I0219 13:12:24.349272 4861 generic.go:334] "Generic (PLEG): container finished" podID="2f37e1d5-e255-444d-b854-c6032ee1e923" containerID="acc2003f1b808c153fa6d34774d257edac845a6bf535e4ae5ec28bc0f9b3fd98" exitCode=0 Feb 19 13:12:24 crc kubenswrapper[4861]: I0219 13:12:24.349342 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2f37e1d5-e255-444d-b854-c6032ee1e923","Type":"ContainerDied","Data":"acc2003f1b808c153fa6d34774d257edac845a6bf535e4ae5ec28bc0f9b3fd98"} Feb 19 13:12:24 crc kubenswrapper[4861]: I0219 13:12:24.397096 4861 generic.go:334] "Generic (PLEG): container finished" podID="25018c53-6299-4fab-bf5e-819ba4f84596" containerID="4f7f19be40a8bd68df3887367c20fd6ec222fa1b6b9e9fd45a86865339755802" exitCode=0 Feb 19 13:12:24 crc kubenswrapper[4861]: I0219 13:12:24.397190 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-bq4kn" event={"ID":"25018c53-6299-4fab-bf5e-819ba4f84596","Type":"ContainerDied","Data":"4f7f19be40a8bd68df3887367c20fd6ec222fa1b6b9e9fd45a86865339755802"} Feb 19 13:12:24 crc kubenswrapper[4861]: I0219 13:12:24.419387 4861 generic.go:334] "Generic (PLEG): container finished" podID="617be892-2391-43d5-94d0-c0600d0c66a0" 
containerID="69554819c4587f2da76dad4b98850802cf2d6105002b88325d545bf45cc00c38" exitCode=0 Feb 19 13:12:24 crc kubenswrapper[4861]: I0219 13:12:24.419586 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kcqf7" event={"ID":"617be892-2391-43d5-94d0-c0600d0c66a0","Type":"ContainerDied","Data":"69554819c4587f2da76dad4b98850802cf2d6105002b88325d545bf45cc00c38"} Feb 19 13:12:24 crc kubenswrapper[4861]: I0219 13:12:24.571124 4861 patch_prober.go:28] interesting pod/downloads-7954f5f757-rw85s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 19 13:12:24 crc kubenswrapper[4861]: I0219 13:12:24.571529 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rw85s" podUID="59847917-735f-49c7-99b2-599facec7e03" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 19 13:12:24 crc kubenswrapper[4861]: I0219 13:12:24.571185 4861 patch_prober.go:28] interesting pod/downloads-7954f5f757-rw85s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 19 13:12:24 crc kubenswrapper[4861]: I0219 13:12:24.571654 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-rw85s" podUID="59847917-735f-49c7-99b2-599facec7e03" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 19 13:12:24 crc kubenswrapper[4861]: I0219 13:12:24.685024 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x6mmf"] Feb 19 
13:12:24 crc kubenswrapper[4861]: W0219 13:12:24.703223 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7425897a_821a_4293_8d9c_3b0c5744bbc9.slice/crio-7182a4cdd379db0322309694c791818fe73f7d80d412cb0109eec815221b874c WatchSource:0}: Error finding container 7182a4cdd379db0322309694c791818fe73f7d80d412cb0109eec815221b874c: Status 404 returned error can't find the container with id 7182a4cdd379db0322309694c791818fe73f7d80d412cb0109eec815221b874c Feb 19 13:12:24 crc kubenswrapper[4861]: I0219 13:12:24.811179 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-9zmzh" Feb 19 13:12:24 crc kubenswrapper[4861]: I0219 13:12:24.815223 4861 patch_prober.go:28] interesting pod/router-default-5444994796-9zmzh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 13:12:24 crc kubenswrapper[4861]: [-]has-synced failed: reason withheld Feb 19 13:12:24 crc kubenswrapper[4861]: [+]process-running ok Feb 19 13:12:24 crc kubenswrapper[4861]: healthz check failed Feb 19 13:12:24 crc kubenswrapper[4861]: I0219 13:12:24.815272 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9zmzh" podUID="ed3e037e-4dc7-4240-b54f-20931407f4a3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 13:12:24 crc kubenswrapper[4861]: I0219 13:12:24.998836 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:24 crc kubenswrapper[4861]: I0219 13:12:24.998876 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:25 crc kubenswrapper[4861]: I0219 13:12:25.009953 4861 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:25 crc kubenswrapper[4861]: I0219 13:12:25.438846 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mvbrc" event={"ID":"da029de4-0b02-44dd-ab8f-8b6f2f7f41af","Type":"ContainerStarted","Data":"89ce4ea9f00944a56afc821cba6d36141d24487f095e36702ed503478e2adb9b"} Feb 19 13:12:25 crc kubenswrapper[4861]: I0219 13:12:25.479147 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" event={"ID":"7425897a-821a-4293-8d9c-3b0c5744bbc9","Type":"ContainerStarted","Data":"fab3ba2b5f7ef9a385a26da79f068f830d295f100e34a82ae6fe6aa46a521117"} Feb 19 13:12:25 crc kubenswrapper[4861]: I0219 13:12:25.479203 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" event={"ID":"7425897a-821a-4293-8d9c-3b0c5744bbc9","Type":"ContainerStarted","Data":"7182a4cdd379db0322309694c791818fe73f7d80d412cb0109eec815221b874c"} Feb 19 13:12:25 crc kubenswrapper[4861]: I0219 13:12:25.479455 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-qwjzc" Feb 19 13:12:25 crc kubenswrapper[4861]: I0219 13:12:25.480272 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:25 crc kubenswrapper[4861]: I0219 13:12:25.496547 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-lgkf4" Feb 19 13:12:25 crc kubenswrapper[4861]: I0219 13:12:25.504254 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-mvbrc" podStartSLOduration=13.504224411 podStartE2EDuration="13.504224411s" podCreationTimestamp="2026-02-19 13:12:12 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:25.465265774 +0000 UTC m=+160.126369002" watchObservedRunningTime="2026-02-19 13:12:25.504224411 +0000 UTC m=+160.165327639" Feb 19 13:12:25 crc kubenswrapper[4861]: I0219 13:12:25.507473 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" podStartSLOduration=132.507460968 podStartE2EDuration="2m12.507460968s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:25.502828649 +0000 UTC m=+160.163931877" watchObservedRunningTime="2026-02-19 13:12:25.507460968 +0000 UTC m=+160.168564196" Feb 19 13:12:25 crc kubenswrapper[4861]: I0219 13:12:25.847555 4861 patch_prober.go:28] interesting pod/router-default-5444994796-9zmzh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 13:12:25 crc kubenswrapper[4861]: [-]has-synced failed: reason withheld Feb 19 13:12:25 crc kubenswrapper[4861]: [+]process-running ok Feb 19 13:12:25 crc kubenswrapper[4861]: healthz check failed Feb 19 13:12:25 crc kubenswrapper[4861]: I0219 13:12:25.847732 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9zmzh" podUID="ed3e037e-4dc7-4240-b54f-20931407f4a3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.109205 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.110437 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.119853 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.120398 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.131234 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.146232 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8f68a8a-1b7c-44c0-8437-0c811fc04950-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c8f68a8a-1b7c-44c0-8437-0c811fc04950\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.146302 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8f68a8a-1b7c-44c0-8437-0c811fc04950-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c8f68a8a-1b7c-44c0-8437-0c811fc04950\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.148197 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.163959 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-bq4kn" Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.248579 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8f68a8a-1b7c-44c0-8437-0c811fc04950-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c8f68a8a-1b7c-44c0-8437-0c811fc04950\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.248840 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8f68a8a-1b7c-44c0-8437-0c811fc04950-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c8f68a8a-1b7c-44c0-8437-0c811fc04950\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.252060 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8f68a8a-1b7c-44c0-8437-0c811fc04950-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c8f68a8a-1b7c-44c0-8437-0c811fc04950\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.272213 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8f68a8a-1b7c-44c0-8437-0c811fc04950-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c8f68a8a-1b7c-44c0-8437-0c811fc04950\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.349707 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25018c53-6299-4fab-bf5e-819ba4f84596-secret-volume\") pod \"25018c53-6299-4fab-bf5e-819ba4f84596\" (UID: \"25018c53-6299-4fab-bf5e-819ba4f84596\") " Feb 19 
13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.350821 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25018c53-6299-4fab-bf5e-819ba4f84596-config-volume\") pod \"25018c53-6299-4fab-bf5e-819ba4f84596\" (UID: \"25018c53-6299-4fab-bf5e-819ba4f84596\") " Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.350899 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f37e1d5-e255-444d-b854-c6032ee1e923-kube-api-access\") pod \"2f37e1d5-e255-444d-b854-c6032ee1e923\" (UID: \"2f37e1d5-e255-444d-b854-c6032ee1e923\") " Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.351535 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2f37e1d5-e255-444d-b854-c6032ee1e923-kubelet-dir\") pod \"2f37e1d5-e255-444d-b854-c6032ee1e923\" (UID: \"2f37e1d5-e255-444d-b854-c6032ee1e923\") " Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.351624 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfmpq\" (UniqueName: \"kubernetes.io/projected/25018c53-6299-4fab-bf5e-819ba4f84596-kube-api-access-gfmpq\") pod \"25018c53-6299-4fab-bf5e-819ba4f84596\" (UID: \"25018c53-6299-4fab-bf5e-819ba4f84596\") " Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.351954 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f37e1d5-e255-444d-b854-c6032ee1e923-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2f37e1d5-e255-444d-b854-c6032ee1e923" (UID: "2f37e1d5-e255-444d-b854-c6032ee1e923"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.352254 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25018c53-6299-4fab-bf5e-819ba4f84596-config-volume" (OuterVolumeSpecName: "config-volume") pod "25018c53-6299-4fab-bf5e-819ba4f84596" (UID: "25018c53-6299-4fab-bf5e-819ba4f84596"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.353184 4861 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25018c53-6299-4fab-bf5e-819ba4f84596-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.353212 4861 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2f37e1d5-e255-444d-b854-c6032ee1e923-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.356434 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f37e1d5-e255-444d-b854-c6032ee1e923-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2f37e1d5-e255-444d-b854-c6032ee1e923" (UID: "2f37e1d5-e255-444d-b854-c6032ee1e923"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.356849 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25018c53-6299-4fab-bf5e-819ba4f84596-kube-api-access-gfmpq" (OuterVolumeSpecName: "kube-api-access-gfmpq") pod "25018c53-6299-4fab-bf5e-819ba4f84596" (UID: "25018c53-6299-4fab-bf5e-819ba4f84596"). InnerVolumeSpecName "kube-api-access-gfmpq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.357342 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25018c53-6299-4fab-bf5e-819ba4f84596-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "25018c53-6299-4fab-bf5e-819ba4f84596" (UID: "25018c53-6299-4fab-bf5e-819ba4f84596"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.456272 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f37e1d5-e255-444d-b854-c6032ee1e923-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.456324 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfmpq\" (UniqueName: \"kubernetes.io/projected/25018c53-6299-4fab-bf5e-819ba4f84596-kube-api-access-gfmpq\") on node \"crc\" DevicePath \"\"" Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.456340 4861 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25018c53-6299-4fab-bf5e-819ba4f84596-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.462067 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.492107 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2f37e1d5-e255-444d-b854-c6032ee1e923","Type":"ContainerDied","Data":"56cf6f7f6671310074cb1e396080f9c31b0874977d56839969f3d9f4518a5d8b"} Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.492156 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56cf6f7f6671310074cb1e396080f9c31b0874977d56839969f3d9f4518a5d8b" Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.492228 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.498779 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-bq4kn" event={"ID":"25018c53-6299-4fab-bf5e-819ba4f84596","Type":"ContainerDied","Data":"fe714fd141abc5387909a03ac432a2e0ee725356e3a1d0e540f49899ecc41f3c"} Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.498839 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-bq4kn" Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.498865 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe714fd141abc5387909a03ac432a2e0ee725356e3a1d0e540f49899ecc41f3c" Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.825600 4861 patch_prober.go:28] interesting pod/router-default-5444994796-9zmzh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 13:12:26 crc kubenswrapper[4861]: [-]has-synced failed: reason withheld Feb 19 13:12:26 crc kubenswrapper[4861]: [+]process-running ok Feb 19 13:12:26 crc kubenswrapper[4861]: healthz check failed Feb 19 13:12:26 crc kubenswrapper[4861]: I0219 13:12:26.825715 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9zmzh" podUID="ed3e037e-4dc7-4240-b54f-20931407f4a3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 13:12:27 crc kubenswrapper[4861]: I0219 13:12:27.164393 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 13:12:27 crc kubenswrapper[4861]: I0219 13:12:27.520466 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c8f68a8a-1b7c-44c0-8437-0c811fc04950","Type":"ContainerStarted","Data":"2a5ab9ac91d4fdbeb4d564f716a109a0bd0878c6afc9332752e4e3acaf6d4209"} Feb 19 13:12:27 crc kubenswrapper[4861]: I0219 13:12:27.811667 4861 patch_prober.go:28] interesting pod/router-default-5444994796-9zmzh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 13:12:27 crc kubenswrapper[4861]: 
[-]has-synced failed: reason withheld Feb 19 13:12:27 crc kubenswrapper[4861]: [+]process-running ok Feb 19 13:12:27 crc kubenswrapper[4861]: healthz check failed Feb 19 13:12:27 crc kubenswrapper[4861]: I0219 13:12:27.811751 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9zmzh" podUID="ed3e037e-4dc7-4240-b54f-20931407f4a3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 13:12:28 crc kubenswrapper[4861]: I0219 13:12:28.544260 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c8f68a8a-1b7c-44c0-8437-0c811fc04950","Type":"ContainerStarted","Data":"ecca1ee32dcf7833fe80dc7bf8e986c05093aaab5da57faf52ac34b7edf37aa5"} Feb 19 13:12:28 crc kubenswrapper[4861]: I0219 13:12:28.562688 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.562665693 podStartE2EDuration="2.562665693s" podCreationTimestamp="2026-02-19 13:12:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:28.562186719 +0000 UTC m=+163.223289947" watchObservedRunningTime="2026-02-19 13:12:28.562665693 +0000 UTC m=+163.223768931" Feb 19 13:12:28 crc kubenswrapper[4861]: I0219 13:12:28.811686 4861 patch_prober.go:28] interesting pod/router-default-5444994796-9zmzh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 13:12:28 crc kubenswrapper[4861]: [-]has-synced failed: reason withheld Feb 19 13:12:28 crc kubenswrapper[4861]: [+]process-running ok Feb 19 13:12:28 crc kubenswrapper[4861]: healthz check failed Feb 19 13:12:28 crc kubenswrapper[4861]: I0219 13:12:28.811752 4861 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5444994796-9zmzh" podUID="ed3e037e-4dc7-4240-b54f-20931407f4a3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 13:12:29 crc kubenswrapper[4861]: I0219 13:12:29.564618 4861 generic.go:334] "Generic (PLEG): container finished" podID="c8f68a8a-1b7c-44c0-8437-0c811fc04950" containerID="ecca1ee32dcf7833fe80dc7bf8e986c05093aaab5da57faf52ac34b7edf37aa5" exitCode=0 Feb 19 13:12:29 crc kubenswrapper[4861]: I0219 13:12:29.564749 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c8f68a8a-1b7c-44c0-8437-0c811fc04950","Type":"ContainerDied","Data":"ecca1ee32dcf7833fe80dc7bf8e986c05093aaab5da57faf52ac34b7edf37aa5"} Feb 19 13:12:29 crc kubenswrapper[4861]: I0219 13:12:29.817924 4861 patch_prober.go:28] interesting pod/router-default-5444994796-9zmzh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 13:12:29 crc kubenswrapper[4861]: [-]has-synced failed: reason withheld Feb 19 13:12:29 crc kubenswrapper[4861]: [+]process-running ok Feb 19 13:12:29 crc kubenswrapper[4861]: healthz check failed Feb 19 13:12:29 crc kubenswrapper[4861]: I0219 13:12:29.818339 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9zmzh" podUID="ed3e037e-4dc7-4240-b54f-20931407f4a3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 13:12:30 crc kubenswrapper[4861]: I0219 13:12:30.522852 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-9wmt2" Feb 19 13:12:30 crc kubenswrapper[4861]: I0219 13:12:30.810924 4861 patch_prober.go:28] interesting pod/router-default-5444994796-9zmzh container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 13:12:30 crc kubenswrapper[4861]: [+]has-synced ok Feb 19 13:12:30 crc kubenswrapper[4861]: [+]process-running ok Feb 19 13:12:30 crc kubenswrapper[4861]: healthz check failed Feb 19 13:12:30 crc kubenswrapper[4861]: I0219 13:12:30.810984 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9zmzh" podUID="ed3e037e-4dc7-4240-b54f-20931407f4a3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 13:12:31 crc kubenswrapper[4861]: I0219 13:12:31.811080 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-9zmzh" Feb 19 13:12:31 crc kubenswrapper[4861]: I0219 13:12:31.816224 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-9zmzh" Feb 19 13:12:33 crc kubenswrapper[4861]: I0219 13:12:33.835066 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:12:33 crc kubenswrapper[4861]: I0219 13:12:33.835132 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:12:34 crc kubenswrapper[4861]: I0219 13:12:34.306440 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-4bs6h" Feb 19 13:12:34 crc kubenswrapper[4861]: I0219 13:12:34.316649 4861 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-4bs6h" Feb 19 13:12:34 crc kubenswrapper[4861]: I0219 13:12:34.580877 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-rw85s" Feb 19 13:12:35 crc kubenswrapper[4861]: I0219 13:12:35.870927 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/163fc0e2-f792-4062-88a7-3ed764a08103-metrics-certs\") pod \"network-metrics-daemon-kjwt5\" (UID: \"163fc0e2-f792-4062-88a7-3ed764a08103\") " pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:12:35 crc kubenswrapper[4861]: I0219 13:12:35.879374 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/163fc0e2-f792-4062-88a7-3ed764a08103-metrics-certs\") pod \"network-metrics-daemon-kjwt5\" (UID: \"163fc0e2-f792-4062-88a7-3ed764a08103\") " pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:12:35 crc kubenswrapper[4861]: I0219 13:12:35.904650 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kjwt5" Feb 19 13:12:36 crc kubenswrapper[4861]: I0219 13:12:36.334655 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 13:12:36 crc kubenswrapper[4861]: I0219 13:12:36.377221 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8f68a8a-1b7c-44c0-8437-0c811fc04950-kubelet-dir\") pod \"c8f68a8a-1b7c-44c0-8437-0c811fc04950\" (UID: \"c8f68a8a-1b7c-44c0-8437-0c811fc04950\") " Feb 19 13:12:36 crc kubenswrapper[4861]: I0219 13:12:36.377374 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8f68a8a-1b7c-44c0-8437-0c811fc04950-kube-api-access\") pod \"c8f68a8a-1b7c-44c0-8437-0c811fc04950\" (UID: \"c8f68a8a-1b7c-44c0-8437-0c811fc04950\") " Feb 19 13:12:36 crc kubenswrapper[4861]: I0219 13:12:36.377384 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8f68a8a-1b7c-44c0-8437-0c811fc04950-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c8f68a8a-1b7c-44c0-8437-0c811fc04950" (UID: "c8f68a8a-1b7c-44c0-8437-0c811fc04950"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:12:36 crc kubenswrapper[4861]: I0219 13:12:36.377963 4861 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8f68a8a-1b7c-44c0-8437-0c811fc04950-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 13:12:36 crc kubenswrapper[4861]: I0219 13:12:36.381211 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8f68a8a-1b7c-44c0-8437-0c811fc04950-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c8f68a8a-1b7c-44c0-8437-0c811fc04950" (UID: "c8f68a8a-1b7c-44c0-8437-0c811fc04950"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:12:36 crc kubenswrapper[4861]: I0219 13:12:36.478922 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8f68a8a-1b7c-44c0-8437-0c811fc04950-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 13:12:36 crc kubenswrapper[4861]: I0219 13:12:36.649324 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c8f68a8a-1b7c-44c0-8437-0c811fc04950","Type":"ContainerDied","Data":"2a5ab9ac91d4fdbeb4d564f716a109a0bd0878c6afc9332752e4e3acaf6d4209"} Feb 19 13:12:36 crc kubenswrapper[4861]: I0219 13:12:36.649755 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 13:12:36 crc kubenswrapper[4861]: I0219 13:12:36.649790 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a5ab9ac91d4fdbeb4d564f716a109a0bd0878c6afc9332752e4e3acaf6d4209" Feb 19 13:12:37 crc kubenswrapper[4861]: I0219 13:12:37.082783 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2d8vs"] Feb 19 13:12:37 crc kubenswrapper[4861]: I0219 13:12:37.083119 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-2d8vs" podUID="db2cfad6-b1c8-46ee-8f79-6072ffb59471" containerName="controller-manager" containerID="cri-o://b32e443302f57964b8b4f51d67f5d610ecb7bb4396e50d00d725e3e75e233905" gracePeriod=30 Feb 19 13:12:37 crc kubenswrapper[4861]: I0219 13:12:37.086030 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-45sw8"] Feb 19 13:12:37 crc kubenswrapper[4861]: I0219 13:12:37.086368 4861 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-45sw8" podUID="186eb49d-44fd-4ea5-bd44-c6cf83a03fc5" containerName="route-controller-manager" containerID="cri-o://5001fa248991aa88551b064bc544b7fcbded844ee0b56168dac1685d46b4716c" gracePeriod=30 Feb 19 13:12:37 crc kubenswrapper[4861]: I0219 13:12:37.655665 4861 generic.go:334] "Generic (PLEG): container finished" podID="186eb49d-44fd-4ea5-bd44-c6cf83a03fc5" containerID="5001fa248991aa88551b064bc544b7fcbded844ee0b56168dac1685d46b4716c" exitCode=0 Feb 19 13:12:37 crc kubenswrapper[4861]: I0219 13:12:37.655734 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-45sw8" event={"ID":"186eb49d-44fd-4ea5-bd44-c6cf83a03fc5","Type":"ContainerDied","Data":"5001fa248991aa88551b064bc544b7fcbded844ee0b56168dac1685d46b4716c"} Feb 19 13:12:37 crc kubenswrapper[4861]: I0219 13:12:37.659064 4861 generic.go:334] "Generic (PLEG): container finished" podID="db2cfad6-b1c8-46ee-8f79-6072ffb59471" containerID="b32e443302f57964b8b4f51d67f5d610ecb7bb4396e50d00d725e3e75e233905" exitCode=0 Feb 19 13:12:37 crc kubenswrapper[4861]: I0219 13:12:37.659144 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2d8vs" event={"ID":"db2cfad6-b1c8-46ee-8f79-6072ffb59471","Type":"ContainerDied","Data":"b32e443302f57964b8b4f51d67f5d610ecb7bb4396e50d00d725e3e75e233905"} Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.122631 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kjwt5"] Feb 19 13:12:38 crc kubenswrapper[4861]: W0219 13:12:38.135112 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod163fc0e2_f792_4062_88a7_3ed764a08103.slice/crio-7c66170af15144fa0d981e6e317582ac9cdbab366aaefdac3821fd7da7ea8dba WatchSource:0}: Error finding container 
7c66170af15144fa0d981e6e317582ac9cdbab366aaefdac3821fd7da7ea8dba: Status 404 returned error can't find the container with id 7c66170af15144fa0d981e6e317582ac9cdbab366aaefdac3821fd7da7ea8dba Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.560950 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-45sw8" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.608767 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5444d75749-bt6dq"] Feb 19 13:12:38 crc kubenswrapper[4861]: E0219 13:12:38.609103 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f37e1d5-e255-444d-b854-c6032ee1e923" containerName="pruner" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.609123 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f37e1d5-e255-444d-b854-c6032ee1e923" containerName="pruner" Feb 19 13:12:38 crc kubenswrapper[4861]: E0219 13:12:38.609142 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f68a8a-1b7c-44c0-8437-0c811fc04950" containerName="pruner" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.610184 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f68a8a-1b7c-44c0-8437-0c811fc04950" containerName="pruner" Feb 19 13:12:38 crc kubenswrapper[4861]: E0219 13:12:38.610213 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="186eb49d-44fd-4ea5-bd44-c6cf83a03fc5" containerName="route-controller-manager" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.610224 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="186eb49d-44fd-4ea5-bd44-c6cf83a03fc5" containerName="route-controller-manager" Feb 19 13:12:38 crc kubenswrapper[4861]: E0219 13:12:38.610252 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25018c53-6299-4fab-bf5e-819ba4f84596" containerName="collect-profiles" Feb 
19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.611270 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="25018c53-6299-4fab-bf5e-819ba4f84596" containerName="collect-profiles" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.611805 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="25018c53-6299-4fab-bf5e-819ba4f84596" containerName="collect-profiles" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.611840 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8f68a8a-1b7c-44c0-8437-0c811fc04950" containerName="pruner" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.611852 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="186eb49d-44fd-4ea5-bd44-c6cf83a03fc5" containerName="route-controller-manager" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.611870 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f37e1d5-e255-444d-b854-c6032ee1e923" containerName="pruner" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.612476 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5444d75749-bt6dq" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.619763 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/186eb49d-44fd-4ea5-bd44-c6cf83a03fc5-serving-cert\") pod \"186eb49d-44fd-4ea5-bd44-c6cf83a03fc5\" (UID: \"186eb49d-44fd-4ea5-bd44-c6cf83a03fc5\") " Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.619854 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dklk\" (UniqueName: \"kubernetes.io/projected/186eb49d-44fd-4ea5-bd44-c6cf83a03fc5-kube-api-access-8dklk\") pod \"186eb49d-44fd-4ea5-bd44-c6cf83a03fc5\" (UID: \"186eb49d-44fd-4ea5-bd44-c6cf83a03fc5\") " Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.619889 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/186eb49d-44fd-4ea5-bd44-c6cf83a03fc5-client-ca\") pod \"186eb49d-44fd-4ea5-bd44-c6cf83a03fc5\" (UID: \"186eb49d-44fd-4ea5-bd44-c6cf83a03fc5\") " Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.620023 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/186eb49d-44fd-4ea5-bd44-c6cf83a03fc5-config\") pod \"186eb49d-44fd-4ea5-bd44-c6cf83a03fc5\" (UID: \"186eb49d-44fd-4ea5-bd44-c6cf83a03fc5\") " Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.623551 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/186eb49d-44fd-4ea5-bd44-c6cf83a03fc5-client-ca" (OuterVolumeSpecName: "client-ca") pod "186eb49d-44fd-4ea5-bd44-c6cf83a03fc5" (UID: "186eb49d-44fd-4ea5-bd44-c6cf83a03fc5"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.624544 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/186eb49d-44fd-4ea5-bd44-c6cf83a03fc5-config" (OuterVolumeSpecName: "config") pod "186eb49d-44fd-4ea5-bd44-c6cf83a03fc5" (UID: "186eb49d-44fd-4ea5-bd44-c6cf83a03fc5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.629317 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9852d40c-c1c2-4fee-91ae-fc3b521186e3-config\") pod \"route-controller-manager-5444d75749-bt6dq\" (UID: \"9852d40c-c1c2-4fee-91ae-fc3b521186e3\") " pod="openshift-route-controller-manager/route-controller-manager-5444d75749-bt6dq" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.629734 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt8qt\" (UniqueName: \"kubernetes.io/projected/9852d40c-c1c2-4fee-91ae-fc3b521186e3-kube-api-access-dt8qt\") pod \"route-controller-manager-5444d75749-bt6dq\" (UID: \"9852d40c-c1c2-4fee-91ae-fc3b521186e3\") " pod="openshift-route-controller-manager/route-controller-manager-5444d75749-bt6dq" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.629727 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/186eb49d-44fd-4ea5-bd44-c6cf83a03fc5-kube-api-access-8dklk" (OuterVolumeSpecName: "kube-api-access-8dklk") pod "186eb49d-44fd-4ea5-bd44-c6cf83a03fc5" (UID: "186eb49d-44fd-4ea5-bd44-c6cf83a03fc5"). InnerVolumeSpecName "kube-api-access-8dklk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.629954 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9852d40c-c1c2-4fee-91ae-fc3b521186e3-client-ca\") pod \"route-controller-manager-5444d75749-bt6dq\" (UID: \"9852d40c-c1c2-4fee-91ae-fc3b521186e3\") " pod="openshift-route-controller-manager/route-controller-manager-5444d75749-bt6dq" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.630034 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9852d40c-c1c2-4fee-91ae-fc3b521186e3-serving-cert\") pod \"route-controller-manager-5444d75749-bt6dq\" (UID: \"9852d40c-c1c2-4fee-91ae-fc3b521186e3\") " pod="openshift-route-controller-manager/route-controller-manager-5444d75749-bt6dq" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.630132 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dklk\" (UniqueName: \"kubernetes.io/projected/186eb49d-44fd-4ea5-bd44-c6cf83a03fc5-kube-api-access-8dklk\") on node \"crc\" DevicePath \"\"" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.630150 4861 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/186eb49d-44fd-4ea5-bd44-c6cf83a03fc5-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.630160 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/186eb49d-44fd-4ea5-bd44-c6cf83a03fc5-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.632210 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5444d75749-bt6dq"] Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 
13:12:38.635810 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/186eb49d-44fd-4ea5-bd44-c6cf83a03fc5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "186eb49d-44fd-4ea5-bd44-c6cf83a03fc5" (UID: "186eb49d-44fd-4ea5-bd44-c6cf83a03fc5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.666693 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-45sw8" event={"ID":"186eb49d-44fd-4ea5-bd44-c6cf83a03fc5","Type":"ContainerDied","Data":"db2def281b1a90bf549feb13e850c9ac987aeaceb93ee34bac0738d9a528600f"} Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.666740 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-45sw8" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.666767 4861 scope.go:117] "RemoveContainer" containerID="5001fa248991aa88551b064bc544b7fcbded844ee0b56168dac1685d46b4716c" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.668174 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kjwt5" event={"ID":"163fc0e2-f792-4062-88a7-3ed764a08103","Type":"ContainerStarted","Data":"7c66170af15144fa0d981e6e317582ac9cdbab366aaefdac3821fd7da7ea8dba"} Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.680801 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2d8vs" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.730936 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db2cfad6-b1c8-46ee-8f79-6072ffb59471-proxy-ca-bundles\") pod \"db2cfad6-b1c8-46ee-8f79-6072ffb59471\" (UID: \"db2cfad6-b1c8-46ee-8f79-6072ffb59471\") " Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.731012 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db2cfad6-b1c8-46ee-8f79-6072ffb59471-client-ca\") pod \"db2cfad6-b1c8-46ee-8f79-6072ffb59471\" (UID: \"db2cfad6-b1c8-46ee-8f79-6072ffb59471\") " Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.731031 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db2cfad6-b1c8-46ee-8f79-6072ffb59471-config\") pod \"db2cfad6-b1c8-46ee-8f79-6072ffb59471\" (UID: \"db2cfad6-b1c8-46ee-8f79-6072ffb59471\") " Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.731057 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n48gh\" (UniqueName: \"kubernetes.io/projected/db2cfad6-b1c8-46ee-8f79-6072ffb59471-kube-api-access-n48gh\") pod \"db2cfad6-b1c8-46ee-8f79-6072ffb59471\" (UID: \"db2cfad6-b1c8-46ee-8f79-6072ffb59471\") " Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.731086 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db2cfad6-b1c8-46ee-8f79-6072ffb59471-serving-cert\") pod \"db2cfad6-b1c8-46ee-8f79-6072ffb59471\" (UID: \"db2cfad6-b1c8-46ee-8f79-6072ffb59471\") " Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.731219 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-dt8qt\" (UniqueName: \"kubernetes.io/projected/9852d40c-c1c2-4fee-91ae-fc3b521186e3-kube-api-access-dt8qt\") pod \"route-controller-manager-5444d75749-bt6dq\" (UID: \"9852d40c-c1c2-4fee-91ae-fc3b521186e3\") " pod="openshift-route-controller-manager/route-controller-manager-5444d75749-bt6dq" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.731245 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9852d40c-c1c2-4fee-91ae-fc3b521186e3-client-ca\") pod \"route-controller-manager-5444d75749-bt6dq\" (UID: \"9852d40c-c1c2-4fee-91ae-fc3b521186e3\") " pod="openshift-route-controller-manager/route-controller-manager-5444d75749-bt6dq" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.731266 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9852d40c-c1c2-4fee-91ae-fc3b521186e3-serving-cert\") pod \"route-controller-manager-5444d75749-bt6dq\" (UID: \"9852d40c-c1c2-4fee-91ae-fc3b521186e3\") " pod="openshift-route-controller-manager/route-controller-manager-5444d75749-bt6dq" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.731316 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9852d40c-c1c2-4fee-91ae-fc3b521186e3-config\") pod \"route-controller-manager-5444d75749-bt6dq\" (UID: \"9852d40c-c1c2-4fee-91ae-fc3b521186e3\") " pod="openshift-route-controller-manager/route-controller-manager-5444d75749-bt6dq" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.731354 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/186eb49d-44fd-4ea5-bd44-c6cf83a03fc5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.732669 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/9852d40c-c1c2-4fee-91ae-fc3b521186e3-config\") pod \"route-controller-manager-5444d75749-bt6dq\" (UID: \"9852d40c-c1c2-4fee-91ae-fc3b521186e3\") " pod="openshift-route-controller-manager/route-controller-manager-5444d75749-bt6dq" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.733259 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db2cfad6-b1c8-46ee-8f79-6072ffb59471-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "db2cfad6-b1c8-46ee-8f79-6072ffb59471" (UID: "db2cfad6-b1c8-46ee-8f79-6072ffb59471"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.733344 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db2cfad6-b1c8-46ee-8f79-6072ffb59471-client-ca" (OuterVolumeSpecName: "client-ca") pod "db2cfad6-b1c8-46ee-8f79-6072ffb59471" (UID: "db2cfad6-b1c8-46ee-8f79-6072ffb59471"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.733880 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db2cfad6-b1c8-46ee-8f79-6072ffb59471-config" (OuterVolumeSpecName: "config") pod "db2cfad6-b1c8-46ee-8f79-6072ffb59471" (UID: "db2cfad6-b1c8-46ee-8f79-6072ffb59471"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.734487 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9852d40c-c1c2-4fee-91ae-fc3b521186e3-client-ca\") pod \"route-controller-manager-5444d75749-bt6dq\" (UID: \"9852d40c-c1c2-4fee-91ae-fc3b521186e3\") " pod="openshift-route-controller-manager/route-controller-manager-5444d75749-bt6dq" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.742118 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-45sw8"] Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.743085 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9852d40c-c1c2-4fee-91ae-fc3b521186e3-serving-cert\") pod \"route-controller-manager-5444d75749-bt6dq\" (UID: \"9852d40c-c1c2-4fee-91ae-fc3b521186e3\") " pod="openshift-route-controller-manager/route-controller-manager-5444d75749-bt6dq" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.745740 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-45sw8"] Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.753527 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt8qt\" (UniqueName: \"kubernetes.io/projected/9852d40c-c1c2-4fee-91ae-fc3b521186e3-kube-api-access-dt8qt\") pod \"route-controller-manager-5444d75749-bt6dq\" (UID: \"9852d40c-c1c2-4fee-91ae-fc3b521186e3\") " pod="openshift-route-controller-manager/route-controller-manager-5444d75749-bt6dq" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.765582 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db2cfad6-b1c8-46ee-8f79-6072ffb59471-kube-api-access-n48gh" (OuterVolumeSpecName: 
"kube-api-access-n48gh") pod "db2cfad6-b1c8-46ee-8f79-6072ffb59471" (UID: "db2cfad6-b1c8-46ee-8f79-6072ffb59471"). InnerVolumeSpecName "kube-api-access-n48gh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.765587 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db2cfad6-b1c8-46ee-8f79-6072ffb59471-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "db2cfad6-b1c8-46ee-8f79-6072ffb59471" (UID: "db2cfad6-b1c8-46ee-8f79-6072ffb59471"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.832366 4861 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db2cfad6-b1c8-46ee-8f79-6072ffb59471-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.832433 4861 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db2cfad6-b1c8-46ee-8f79-6072ffb59471-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.832449 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db2cfad6-b1c8-46ee-8f79-6072ffb59471-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.832464 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n48gh\" (UniqueName: \"kubernetes.io/projected/db2cfad6-b1c8-46ee-8f79-6072ffb59471-kube-api-access-n48gh\") on node \"crc\" DevicePath \"\"" Feb 19 13:12:38 crc kubenswrapper[4861]: I0219 13:12:38.832480 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db2cfad6-b1c8-46ee-8f79-6072ffb59471-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:12:38 crc 
kubenswrapper[4861]: I0219 13:12:38.928676 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5444d75749-bt6dq" Feb 19 13:12:39 crc kubenswrapper[4861]: I0219 13:12:39.142186 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5444d75749-bt6dq"] Feb 19 13:12:39 crc kubenswrapper[4861]: W0219 13:12:39.147637 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9852d40c_c1c2_4fee_91ae_fc3b521186e3.slice/crio-8d9254779d8c3b114455a617fae0091e7a81d3280d86158c6802de7159c50592 WatchSource:0}: Error finding container 8d9254779d8c3b114455a617fae0091e7a81d3280d86158c6802de7159c50592: Status 404 returned error can't find the container with id 8d9254779d8c3b114455a617fae0091e7a81d3280d86158c6802de7159c50592 Feb 19 13:12:39 crc kubenswrapper[4861]: I0219 13:12:39.675884 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kjwt5" event={"ID":"163fc0e2-f792-4062-88a7-3ed764a08103","Type":"ContainerStarted","Data":"e11e933a73e317aef577e863309eac081dbda942fbb775d46c18b5734cbaa45c"} Feb 19 13:12:39 crc kubenswrapper[4861]: I0219 13:12:39.677837 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5444d75749-bt6dq" event={"ID":"9852d40c-c1c2-4fee-91ae-fc3b521186e3","Type":"ContainerStarted","Data":"4157ae7e6bedd69b959d73425042ba94fc4fcdfb034b26ed52063e67e31f3b5b"} Feb 19 13:12:39 crc kubenswrapper[4861]: I0219 13:12:39.677865 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5444d75749-bt6dq" event={"ID":"9852d40c-c1c2-4fee-91ae-fc3b521186e3","Type":"ContainerStarted","Data":"8d9254779d8c3b114455a617fae0091e7a81d3280d86158c6802de7159c50592"} Feb 19 13:12:39 crc 
kubenswrapper[4861]: I0219 13:12:39.681501 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2d8vs" event={"ID":"db2cfad6-b1c8-46ee-8f79-6072ffb59471","Type":"ContainerDied","Data":"f5e204a03987a45fca802c9742ec0ef43bb8a949288d7ee05f37c3f4e4774e5a"} Feb 19 13:12:39 crc kubenswrapper[4861]: I0219 13:12:39.681646 4861 scope.go:117] "RemoveContainer" containerID="b32e443302f57964b8b4f51d67f5d610ecb7bb4396e50d00d725e3e75e233905" Feb 19 13:12:39 crc kubenswrapper[4861]: I0219 13:12:39.681598 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2d8vs" Feb 19 13:12:39 crc kubenswrapper[4861]: I0219 13:12:39.733096 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2d8vs"] Feb 19 13:12:39 crc kubenswrapper[4861]: I0219 13:12:39.743480 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2d8vs"] Feb 19 13:12:39 crc kubenswrapper[4861]: I0219 13:12:39.984081 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="186eb49d-44fd-4ea5-bd44-c6cf83a03fc5" path="/var/lib/kubelet/pods/186eb49d-44fd-4ea5-bd44-c6cf83a03fc5/volumes" Feb 19 13:12:39 crc kubenswrapper[4861]: I0219 13:12:39.985007 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db2cfad6-b1c8-46ee-8f79-6072ffb59471" path="/var/lib/kubelet/pods/db2cfad6-b1c8-46ee-8f79-6072ffb59471/volumes" Feb 19 13:12:40 crc kubenswrapper[4861]: I0219 13:12:40.610070 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-58d679fb6c-sbvgv"] Feb 19 13:12:40 crc kubenswrapper[4861]: E0219 13:12:40.610355 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db2cfad6-b1c8-46ee-8f79-6072ffb59471" containerName="controller-manager" Feb 19 13:12:40 
crc kubenswrapper[4861]: I0219 13:12:40.610368 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="db2cfad6-b1c8-46ee-8f79-6072ffb59471" containerName="controller-manager" Feb 19 13:12:40 crc kubenswrapper[4861]: I0219 13:12:40.610533 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="db2cfad6-b1c8-46ee-8f79-6072ffb59471" containerName="controller-manager" Feb 19 13:12:40 crc kubenswrapper[4861]: I0219 13:12:40.610988 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58d679fb6c-sbvgv" Feb 19 13:12:40 crc kubenswrapper[4861]: I0219 13:12:40.614842 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 13:12:40 crc kubenswrapper[4861]: I0219 13:12:40.616244 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 13:12:40 crc kubenswrapper[4861]: I0219 13:12:40.617594 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 13:12:40 crc kubenswrapper[4861]: I0219 13:12:40.617803 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 13:12:40 crc kubenswrapper[4861]: I0219 13:12:40.618397 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 13:12:40 crc kubenswrapper[4861]: I0219 13:12:40.620651 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 13:12:40 crc kubenswrapper[4861]: I0219 13:12:40.624785 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58d679fb6c-sbvgv"] Feb 19 13:12:40 crc kubenswrapper[4861]: I0219 13:12:40.625277 4861 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 13:12:40 crc kubenswrapper[4861]: I0219 13:12:40.668780 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0a3e6f5a-8278-42b6-987f-9a422d130d90-proxy-ca-bundles\") pod \"controller-manager-58d679fb6c-sbvgv\" (UID: \"0a3e6f5a-8278-42b6-987f-9a422d130d90\") " pod="openshift-controller-manager/controller-manager-58d679fb6c-sbvgv" Feb 19 13:12:40 crc kubenswrapper[4861]: I0219 13:12:40.668836 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a3e6f5a-8278-42b6-987f-9a422d130d90-serving-cert\") pod \"controller-manager-58d679fb6c-sbvgv\" (UID: \"0a3e6f5a-8278-42b6-987f-9a422d130d90\") " pod="openshift-controller-manager/controller-manager-58d679fb6c-sbvgv" Feb 19 13:12:40 crc kubenswrapper[4861]: I0219 13:12:40.668870 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a3e6f5a-8278-42b6-987f-9a422d130d90-config\") pod \"controller-manager-58d679fb6c-sbvgv\" (UID: \"0a3e6f5a-8278-42b6-987f-9a422d130d90\") " pod="openshift-controller-manager/controller-manager-58d679fb6c-sbvgv" Feb 19 13:12:40 crc kubenswrapper[4861]: I0219 13:12:40.668913 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdf5f\" (UniqueName: \"kubernetes.io/projected/0a3e6f5a-8278-42b6-987f-9a422d130d90-kube-api-access-cdf5f\") pod \"controller-manager-58d679fb6c-sbvgv\" (UID: \"0a3e6f5a-8278-42b6-987f-9a422d130d90\") " pod="openshift-controller-manager/controller-manager-58d679fb6c-sbvgv" Feb 19 13:12:40 crc kubenswrapper[4861]: I0219 13:12:40.668934 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a3e6f5a-8278-42b6-987f-9a422d130d90-client-ca\") pod \"controller-manager-58d679fb6c-sbvgv\" (UID: \"0a3e6f5a-8278-42b6-987f-9a422d130d90\") " pod="openshift-controller-manager/controller-manager-58d679fb6c-sbvgv" Feb 19 13:12:40 crc kubenswrapper[4861]: I0219 13:12:40.690606 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kjwt5" event={"ID":"163fc0e2-f792-4062-88a7-3ed764a08103","Type":"ContainerStarted","Data":"bf47bedb38981e3a7b38a26e16fcab61328b95c9f0e7dd0e88106e9b4cdec3c8"} Feb 19 13:12:40 crc kubenswrapper[4861]: I0219 13:12:40.690893 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5444d75749-bt6dq" Feb 19 13:12:40 crc kubenswrapper[4861]: I0219 13:12:40.699566 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5444d75749-bt6dq" Feb 19 13:12:40 crc kubenswrapper[4861]: I0219 13:12:40.705703 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5444d75749-bt6dq" podStartSLOduration=3.705685474 podStartE2EDuration="3.705685474s" podCreationTimestamp="2026-02-19 13:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:40.704193989 +0000 UTC m=+175.365297237" watchObservedRunningTime="2026-02-19 13:12:40.705685474 +0000 UTC m=+175.366788702" Feb 19 13:12:40 crc kubenswrapper[4861]: I0219 13:12:40.736531 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-kjwt5" podStartSLOduration=147.736511295 podStartE2EDuration="2m27.736511295s" podCreationTimestamp="2026-02-19 13:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:40.733594637 +0000 UTC m=+175.394697865" watchObservedRunningTime="2026-02-19 13:12:40.736511295 +0000 UTC m=+175.397614523" Feb 19 13:12:40 crc kubenswrapper[4861]: I0219 13:12:40.769975 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdf5f\" (UniqueName: \"kubernetes.io/projected/0a3e6f5a-8278-42b6-987f-9a422d130d90-kube-api-access-cdf5f\") pod \"controller-manager-58d679fb6c-sbvgv\" (UID: \"0a3e6f5a-8278-42b6-987f-9a422d130d90\") " pod="openshift-controller-manager/controller-manager-58d679fb6c-sbvgv" Feb 19 13:12:40 crc kubenswrapper[4861]: I0219 13:12:40.770045 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a3e6f5a-8278-42b6-987f-9a422d130d90-client-ca\") pod \"controller-manager-58d679fb6c-sbvgv\" (UID: \"0a3e6f5a-8278-42b6-987f-9a422d130d90\") " pod="openshift-controller-manager/controller-manager-58d679fb6c-sbvgv" Feb 19 13:12:40 crc kubenswrapper[4861]: I0219 13:12:40.770090 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0a3e6f5a-8278-42b6-987f-9a422d130d90-proxy-ca-bundles\") pod \"controller-manager-58d679fb6c-sbvgv\" (UID: \"0a3e6f5a-8278-42b6-987f-9a422d130d90\") " pod="openshift-controller-manager/controller-manager-58d679fb6c-sbvgv" Feb 19 13:12:40 crc kubenswrapper[4861]: I0219 13:12:40.770140 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a3e6f5a-8278-42b6-987f-9a422d130d90-serving-cert\") pod \"controller-manager-58d679fb6c-sbvgv\" (UID: \"0a3e6f5a-8278-42b6-987f-9a422d130d90\") " pod="openshift-controller-manager/controller-manager-58d679fb6c-sbvgv" Feb 19 13:12:40 crc kubenswrapper[4861]: I0219 13:12:40.770189 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a3e6f5a-8278-42b6-987f-9a422d130d90-config\") pod \"controller-manager-58d679fb6c-sbvgv\" (UID: \"0a3e6f5a-8278-42b6-987f-9a422d130d90\") " pod="openshift-controller-manager/controller-manager-58d679fb6c-sbvgv" Feb 19 13:12:40 crc kubenswrapper[4861]: I0219 13:12:40.771919 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a3e6f5a-8278-42b6-987f-9a422d130d90-client-ca\") pod \"controller-manager-58d679fb6c-sbvgv\" (UID: \"0a3e6f5a-8278-42b6-987f-9a422d130d90\") " pod="openshift-controller-manager/controller-manager-58d679fb6c-sbvgv" Feb 19 13:12:40 crc kubenswrapper[4861]: I0219 13:12:40.773355 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a3e6f5a-8278-42b6-987f-9a422d130d90-config\") pod \"controller-manager-58d679fb6c-sbvgv\" (UID: \"0a3e6f5a-8278-42b6-987f-9a422d130d90\") " pod="openshift-controller-manager/controller-manager-58d679fb6c-sbvgv" Feb 19 13:12:40 crc kubenswrapper[4861]: I0219 13:12:40.773367 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0a3e6f5a-8278-42b6-987f-9a422d130d90-proxy-ca-bundles\") pod \"controller-manager-58d679fb6c-sbvgv\" (UID: \"0a3e6f5a-8278-42b6-987f-9a422d130d90\") " pod="openshift-controller-manager/controller-manager-58d679fb6c-sbvgv" Feb 19 13:12:40 crc kubenswrapper[4861]: I0219 13:12:40.777213 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a3e6f5a-8278-42b6-987f-9a422d130d90-serving-cert\") pod \"controller-manager-58d679fb6c-sbvgv\" (UID: \"0a3e6f5a-8278-42b6-987f-9a422d130d90\") " pod="openshift-controller-manager/controller-manager-58d679fb6c-sbvgv" Feb 19 13:12:40 crc 
kubenswrapper[4861]: I0219 13:12:40.786286 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdf5f\" (UniqueName: \"kubernetes.io/projected/0a3e6f5a-8278-42b6-987f-9a422d130d90-kube-api-access-cdf5f\") pod \"controller-manager-58d679fb6c-sbvgv\" (UID: \"0a3e6f5a-8278-42b6-987f-9a422d130d90\") " pod="openshift-controller-manager/controller-manager-58d679fb6c-sbvgv" Feb 19 13:12:40 crc kubenswrapper[4861]: I0219 13:12:40.973005 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58d679fb6c-sbvgv" Feb 19 13:12:44 crc kubenswrapper[4861]: I0219 13:12:44.052135 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" Feb 19 13:12:51 crc kubenswrapper[4861]: E0219 13:12:51.880646 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 19 13:12:51 crc kubenswrapper[4861]: E0219 13:12:51.881293 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mmmql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-plx4l_openshift-marketplace(685e7fb8-e8da-4f5a-87c5-424d1e12a6be): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 13:12:51 crc kubenswrapper[4861]: E0219 13:12:51.882521 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-plx4l" podUID="685e7fb8-e8da-4f5a-87c5-424d1e12a6be" Feb 19 13:12:51 crc 
kubenswrapper[4861]: E0219 13:12:51.884370 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 19 13:12:51 crc kubenswrapper[4861]: E0219 13:12:51.884477 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2wbr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-kcqf7_openshift-marketplace(617be892-2391-43d5-94d0-c0600d0c66a0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 13:12:51 crc kubenswrapper[4861]: E0219 13:12:51.885891 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-kcqf7" podUID="617be892-2391-43d5-94d0-c0600d0c66a0" Feb 19 13:12:53 crc kubenswrapper[4861]: E0219 13:12:53.241980 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-plx4l" podUID="685e7fb8-e8da-4f5a-87c5-424d1e12a6be" Feb 19 13:12:53 crc kubenswrapper[4861]: E0219 13:12:53.242099 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-kcqf7" podUID="617be892-2391-43d5-94d0-c0600d0c66a0" Feb 19 13:12:53 crc kubenswrapper[4861]: E0219 13:12:53.336077 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 19 13:12:53 crc kubenswrapper[4861]: E0219 13:12:53.336241 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ttspt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-j97fz_openshift-marketplace(da26b508-7a00-4494-afc8-3da8b16eeaa7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 13:12:53 crc kubenswrapper[4861]: E0219 13:12:53.337747 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-j97fz" podUID="da26b508-7a00-4494-afc8-3da8b16eeaa7" Feb 19 13:12:53 crc kubenswrapper[4861]: E0219 13:12:53.397144 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 19 13:12:53 crc kubenswrapper[4861]: E0219 13:12:53.397773 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pm6n7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFro
mSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-khrzg_openshift-marketplace(0eb10abc-c209-4a6b-8fc8-39973ed75fd6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 13:12:53 crc kubenswrapper[4861]: E0219 13:12:53.400794 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-khrzg" podUID="0eb10abc-c209-4a6b-8fc8-39973ed75fd6" Feb 19 13:12:53 crc kubenswrapper[4861]: I0219 13:12:53.490279 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58d679fb6c-sbvgv"] Feb 19 13:12:53 crc kubenswrapper[4861]: I0219 13:12:53.760739 4861 generic.go:334] "Generic (PLEG): container finished" podID="26c3ab3c-b007-48bf-9267-1be0df74a551" containerID="f860e6910ac408dbb0e154004cd18188e1afb8db5875907b7292cf4a1e663aa9" exitCode=0 Feb 19 13:12:53 crc kubenswrapper[4861]: I0219 13:12:53.761077 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8qvw" event={"ID":"26c3ab3c-b007-48bf-9267-1be0df74a551","Type":"ContainerDied","Data":"f860e6910ac408dbb0e154004cd18188e1afb8db5875907b7292cf4a1e663aa9"} Feb 19 13:12:53 crc kubenswrapper[4861]: I0219 13:12:53.768520 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58d679fb6c-sbvgv" event={"ID":"0a3e6f5a-8278-42b6-987f-9a422d130d90","Type":"ContainerStarted","Data":"ebd75984f18f3e9e475bf5a5699059f32c278b26e0066f8e2287265dd7829d11"} Feb 19 13:12:53 crc kubenswrapper[4861]: I0219 13:12:53.768565 
4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58d679fb6c-sbvgv" event={"ID":"0a3e6f5a-8278-42b6-987f-9a422d130d90","Type":"ContainerStarted","Data":"d612d26da648037c3bc144183d5a4b6adcf43ac6fa64d017b415eb41646bf72e"} Feb 19 13:12:53 crc kubenswrapper[4861]: I0219 13:12:53.768960 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-58d679fb6c-sbvgv" Feb 19 13:12:53 crc kubenswrapper[4861]: I0219 13:12:53.770182 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5nqk" event={"ID":"4e4630be-249d-4f67-bd5c-eafaf08b2705","Type":"ContainerStarted","Data":"c6b780148cd892dd5ce810b54b251139570de1393b9af5852503812099a8ea0b"} Feb 19 13:12:53 crc kubenswrapper[4861]: I0219 13:12:53.775460 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-58d679fb6c-sbvgv" Feb 19 13:12:53 crc kubenswrapper[4861]: I0219 13:12:53.778279 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zv9zt" event={"ID":"5db4ca60-2b1f-4d62-a217-890f7be1e863","Type":"ContainerStarted","Data":"7f607e2c1840e713202bb23880e5cc2d1d09b1dca61326e27ed221189ffc512a"} Feb 19 13:12:53 crc kubenswrapper[4861]: I0219 13:12:53.780600 4861 generic.go:334] "Generic (PLEG): container finished" podID="bafa1b1c-66a3-42f6-8a14-4a272b2ac176" containerID="306cea667c9ac54f5998af75f11d468a981c16a6907c351fdd8f41f0326518e9" exitCode=0 Feb 19 13:12:53 crc kubenswrapper[4861]: I0219 13:12:53.780768 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2lhzw" event={"ID":"bafa1b1c-66a3-42f6-8a14-4a272b2ac176","Type":"ContainerDied","Data":"306cea667c9ac54f5998af75f11d468a981c16a6907c351fdd8f41f0326518e9"} Feb 19 13:12:53 crc kubenswrapper[4861]: E0219 13:12:53.784437 4861 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-khrzg" podUID="0eb10abc-c209-4a6b-8fc8-39973ed75fd6" Feb 19 13:12:53 crc kubenswrapper[4861]: E0219 13:12:53.784506 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-j97fz" podUID="da26b508-7a00-4494-afc8-3da8b16eeaa7" Feb 19 13:12:53 crc kubenswrapper[4861]: I0219 13:12:53.911249 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-58d679fb6c-sbvgv" podStartSLOduration=16.911199527 podStartE2EDuration="16.911199527s" podCreationTimestamp="2026-02-19 13:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:12:53.909136495 +0000 UTC m=+188.570239723" watchObservedRunningTime="2026-02-19 13:12:53.911199527 +0000 UTC m=+188.572302775" Feb 19 13:12:54 crc kubenswrapper[4861]: I0219 13:12:54.787672 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2lhzw" event={"ID":"bafa1b1c-66a3-42f6-8a14-4a272b2ac176","Type":"ContainerStarted","Data":"daae03f6dc24fc0c8eeb6e03211ba508e6c872311e0922a9d32d2049c5962d8f"} Feb 19 13:12:54 crc kubenswrapper[4861]: I0219 13:12:54.790773 4861 generic.go:334] "Generic (PLEG): container finished" podID="5db4ca60-2b1f-4d62-a217-890f7be1e863" containerID="7f607e2c1840e713202bb23880e5cc2d1d09b1dca61326e27ed221189ffc512a" exitCode=0 Feb 19 13:12:54 crc kubenswrapper[4861]: I0219 13:12:54.790896 4861 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-zv9zt" event={"ID":"5db4ca60-2b1f-4d62-a217-890f7be1e863","Type":"ContainerDied","Data":"7f607e2c1840e713202bb23880e5cc2d1d09b1dca61326e27ed221189ffc512a"} Feb 19 13:12:54 crc kubenswrapper[4861]: I0219 13:12:54.790933 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zv9zt" event={"ID":"5db4ca60-2b1f-4d62-a217-890f7be1e863","Type":"ContainerStarted","Data":"64c3f0b8e3123574939b93b7229727c3f5ee432de1dfe0062a309b6f96e44dc3"} Feb 19 13:12:54 crc kubenswrapper[4861]: I0219 13:12:54.794964 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8qvw" event={"ID":"26c3ab3c-b007-48bf-9267-1be0df74a551","Type":"ContainerStarted","Data":"f20235f5b6af16f60727be69cadaf235b7e1f580de1a84945f94eeb4e258a7af"} Feb 19 13:12:54 crc kubenswrapper[4861]: I0219 13:12:54.796699 4861 generic.go:334] "Generic (PLEG): container finished" podID="4e4630be-249d-4f67-bd5c-eafaf08b2705" containerID="c6b780148cd892dd5ce810b54b251139570de1393b9af5852503812099a8ea0b" exitCode=0 Feb 19 13:12:54 crc kubenswrapper[4861]: I0219 13:12:54.796800 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5nqk" event={"ID":"4e4630be-249d-4f67-bd5c-eafaf08b2705","Type":"ContainerDied","Data":"c6b780148cd892dd5ce810b54b251139570de1393b9af5852503812099a8ea0b"} Feb 19 13:12:54 crc kubenswrapper[4861]: I0219 13:12:54.808407 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2lhzw" podStartSLOduration=2.5273746580000003 podStartE2EDuration="33.808385267s" podCreationTimestamp="2026-02-19 13:12:21 +0000 UTC" firstStartedPulling="2026-02-19 13:12:23.280113979 +0000 UTC m=+157.941217207" lastFinishedPulling="2026-02-19 13:12:54.561124598 +0000 UTC m=+189.222227816" observedRunningTime="2026-02-19 13:12:54.807252983 +0000 UTC m=+189.468356221" 
watchObservedRunningTime="2026-02-19 13:12:54.808385267 +0000 UTC m=+189.469488485" Feb 19 13:12:54 crc kubenswrapper[4861]: I0219 13:12:54.827924 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zv9zt" podStartSLOduration=2.7762230839999997 podStartE2EDuration="33.827907337s" podCreationTimestamp="2026-02-19 13:12:21 +0000 UTC" firstStartedPulling="2026-02-19 13:12:23.266313232 +0000 UTC m=+157.927416460" lastFinishedPulling="2026-02-19 13:12:54.317997485 +0000 UTC m=+188.979100713" observedRunningTime="2026-02-19 13:12:54.825054501 +0000 UTC m=+189.486157739" watchObservedRunningTime="2026-02-19 13:12:54.827907337 +0000 UTC m=+189.489010555" Feb 19 13:12:54 crc kubenswrapper[4861]: I0219 13:12:54.839821 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-br4gr" Feb 19 13:12:54 crc kubenswrapper[4861]: I0219 13:12:54.846938 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d8qvw" podStartSLOduration=3.910735522 podStartE2EDuration="35.846915551s" podCreationTimestamp="2026-02-19 13:12:19 +0000 UTC" firstStartedPulling="2026-02-19 13:12:22.226970558 +0000 UTC m=+156.888073786" lastFinishedPulling="2026-02-19 13:12:54.163150567 +0000 UTC m=+188.824253815" observedRunningTime="2026-02-19 13:12:54.843797227 +0000 UTC m=+189.504900455" watchObservedRunningTime="2026-02-19 13:12:54.846915551 +0000 UTC m=+189.508018779" Feb 19 13:12:55 crc kubenswrapper[4861]: I0219 13:12:55.804007 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5nqk" event={"ID":"4e4630be-249d-4f67-bd5c-eafaf08b2705","Type":"ContainerStarted","Data":"74d44eb84b06eada739b94e48d6b630dad7157b6dde6cba33603f52e68ffc8fe"} Feb 19 13:12:55 crc kubenswrapper[4861]: I0219 13:12:55.830219 4861 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q5nqk" podStartSLOduration=4.765435123 podStartE2EDuration="37.830198352s" podCreationTimestamp="2026-02-19 13:12:18 +0000 UTC" firstStartedPulling="2026-02-19 13:12:22.150533309 +0000 UTC m=+156.811636537" lastFinishedPulling="2026-02-19 13:12:55.215296548 +0000 UTC m=+189.876399766" observedRunningTime="2026-02-19 13:12:55.82583007 +0000 UTC m=+190.486933298" watchObservedRunningTime="2026-02-19 13:12:55.830198352 +0000 UTC m=+190.491301600" Feb 19 13:12:56 crc kubenswrapper[4861]: I0219 13:12:56.992792 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58d679fb6c-sbvgv"] Feb 19 13:12:56 crc kubenswrapper[4861]: I0219 13:12:56.993306 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-58d679fb6c-sbvgv" podUID="0a3e6f5a-8278-42b6-987f-9a422d130d90" containerName="controller-manager" containerID="cri-o://ebd75984f18f3e9e475bf5a5699059f32c278b26e0066f8e2287265dd7829d11" gracePeriod=30 Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.101720 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5444d75749-bt6dq"] Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.102394 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5444d75749-bt6dq" podUID="9852d40c-c1c2-4fee-91ae-fc3b521186e3" containerName="route-controller-manager" containerID="cri-o://4157ae7e6bedd69b959d73425042ba94fc4fcdfb034b26ed52063e67e31f3b5b" gracePeriod=30 Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.217244 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 13:12:57 crc kubenswrapper[4861]: 
I0219 13:12:57.550610 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5444d75749-bt6dq" Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.621591 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58d679fb6c-sbvgv" Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.723906 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdf5f\" (UniqueName: \"kubernetes.io/projected/0a3e6f5a-8278-42b6-987f-9a422d130d90-kube-api-access-cdf5f\") pod \"0a3e6f5a-8278-42b6-987f-9a422d130d90\" (UID: \"0a3e6f5a-8278-42b6-987f-9a422d130d90\") " Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.723978 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9852d40c-c1c2-4fee-91ae-fc3b521186e3-serving-cert\") pod \"9852d40c-c1c2-4fee-91ae-fc3b521186e3\" (UID: \"9852d40c-c1c2-4fee-91ae-fc3b521186e3\") " Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.724014 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9852d40c-c1c2-4fee-91ae-fc3b521186e3-client-ca\") pod \"9852d40c-c1c2-4fee-91ae-fc3b521186e3\" (UID: \"9852d40c-c1c2-4fee-91ae-fc3b521186e3\") " Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.724044 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9852d40c-c1c2-4fee-91ae-fc3b521186e3-config\") pod \"9852d40c-c1c2-4fee-91ae-fc3b521186e3\" (UID: \"9852d40c-c1c2-4fee-91ae-fc3b521186e3\") " Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.724077 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dt8qt\" (UniqueName: 
\"kubernetes.io/projected/9852d40c-c1c2-4fee-91ae-fc3b521186e3-kube-api-access-dt8qt\") pod \"9852d40c-c1c2-4fee-91ae-fc3b521186e3\" (UID: \"9852d40c-c1c2-4fee-91ae-fc3b521186e3\") " Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.724155 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a3e6f5a-8278-42b6-987f-9a422d130d90-config\") pod \"0a3e6f5a-8278-42b6-987f-9a422d130d90\" (UID: \"0a3e6f5a-8278-42b6-987f-9a422d130d90\") " Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.724192 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a3e6f5a-8278-42b6-987f-9a422d130d90-client-ca\") pod \"0a3e6f5a-8278-42b6-987f-9a422d130d90\" (UID: \"0a3e6f5a-8278-42b6-987f-9a422d130d90\") " Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.724213 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a3e6f5a-8278-42b6-987f-9a422d130d90-serving-cert\") pod \"0a3e6f5a-8278-42b6-987f-9a422d130d90\" (UID: \"0a3e6f5a-8278-42b6-987f-9a422d130d90\") " Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.724250 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0a3e6f5a-8278-42b6-987f-9a422d130d90-proxy-ca-bundles\") pod \"0a3e6f5a-8278-42b6-987f-9a422d130d90\" (UID: \"0a3e6f5a-8278-42b6-987f-9a422d130d90\") " Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.725124 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9852d40c-c1c2-4fee-91ae-fc3b521186e3-client-ca" (OuterVolumeSpecName: "client-ca") pod "9852d40c-c1c2-4fee-91ae-fc3b521186e3" (UID: "9852d40c-c1c2-4fee-91ae-fc3b521186e3"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.725134 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9852d40c-c1c2-4fee-91ae-fc3b521186e3-config" (OuterVolumeSpecName: "config") pod "9852d40c-c1c2-4fee-91ae-fc3b521186e3" (UID: "9852d40c-c1c2-4fee-91ae-fc3b521186e3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.726059 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a3e6f5a-8278-42b6-987f-9a422d130d90-client-ca" (OuterVolumeSpecName: "client-ca") pod "0a3e6f5a-8278-42b6-987f-9a422d130d90" (UID: "0a3e6f5a-8278-42b6-987f-9a422d130d90"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.726080 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a3e6f5a-8278-42b6-987f-9a422d130d90-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0a3e6f5a-8278-42b6-987f-9a422d130d90" (UID: "0a3e6f5a-8278-42b6-987f-9a422d130d90"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.726134 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a3e6f5a-8278-42b6-987f-9a422d130d90-config" (OuterVolumeSpecName: "config") pod "0a3e6f5a-8278-42b6-987f-9a422d130d90" (UID: "0a3e6f5a-8278-42b6-987f-9a422d130d90"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.730915 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9852d40c-c1c2-4fee-91ae-fc3b521186e3-kube-api-access-dt8qt" (OuterVolumeSpecName: "kube-api-access-dt8qt") pod "9852d40c-c1c2-4fee-91ae-fc3b521186e3" (UID: "9852d40c-c1c2-4fee-91ae-fc3b521186e3"). InnerVolumeSpecName "kube-api-access-dt8qt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.731070 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9852d40c-c1c2-4fee-91ae-fc3b521186e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9852d40c-c1c2-4fee-91ae-fc3b521186e3" (UID: "9852d40c-c1c2-4fee-91ae-fc3b521186e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.731131 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a3e6f5a-8278-42b6-987f-9a422d130d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0a3e6f5a-8278-42b6-987f-9a422d130d90" (UID: "0a3e6f5a-8278-42b6-987f-9a422d130d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.731590 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a3e6f5a-8278-42b6-987f-9a422d130d90-kube-api-access-cdf5f" (OuterVolumeSpecName: "kube-api-access-cdf5f") pod "0a3e6f5a-8278-42b6-987f-9a422d130d90" (UID: "0a3e6f5a-8278-42b6-987f-9a422d130d90"). InnerVolumeSpecName "kube-api-access-cdf5f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.816650 4861 generic.go:334] "Generic (PLEG): container finished" podID="9852d40c-c1c2-4fee-91ae-fc3b521186e3" containerID="4157ae7e6bedd69b959d73425042ba94fc4fcdfb034b26ed52063e67e31f3b5b" exitCode=0 Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.816732 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5444d75749-bt6dq" event={"ID":"9852d40c-c1c2-4fee-91ae-fc3b521186e3","Type":"ContainerDied","Data":"4157ae7e6bedd69b959d73425042ba94fc4fcdfb034b26ed52063e67e31f3b5b"} Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.816742 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5444d75749-bt6dq" Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.817045 4861 scope.go:117] "RemoveContainer" containerID="4157ae7e6bedd69b959d73425042ba94fc4fcdfb034b26ed52063e67e31f3b5b" Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.817022 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5444d75749-bt6dq" event={"ID":"9852d40c-c1c2-4fee-91ae-fc3b521186e3","Type":"ContainerDied","Data":"8d9254779d8c3b114455a617fae0091e7a81d3280d86158c6802de7159c50592"} Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.818965 4861 generic.go:334] "Generic (PLEG): container finished" podID="0a3e6f5a-8278-42b6-987f-9a422d130d90" containerID="ebd75984f18f3e9e475bf5a5699059f32c278b26e0066f8e2287265dd7829d11" exitCode=0 Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.819011 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58d679fb6c-sbvgv" 
event={"ID":"0a3e6f5a-8278-42b6-987f-9a422d130d90","Type":"ContainerDied","Data":"ebd75984f18f3e9e475bf5a5699059f32c278b26e0066f8e2287265dd7829d11"} Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.819042 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58d679fb6c-sbvgv" event={"ID":"0a3e6f5a-8278-42b6-987f-9a422d130d90","Type":"ContainerDied","Data":"d612d26da648037c3bc144183d5a4b6adcf43ac6fa64d017b415eb41646bf72e"} Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.819095 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58d679fb6c-sbvgv" Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.825287 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdf5f\" (UniqueName: \"kubernetes.io/projected/0a3e6f5a-8278-42b6-987f-9a422d130d90-kube-api-access-cdf5f\") on node \"crc\" DevicePath \"\"" Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.825314 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9852d40c-c1c2-4fee-91ae-fc3b521186e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.825324 4861 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9852d40c-c1c2-4fee-91ae-fc3b521186e3-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.825333 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9852d40c-c1c2-4fee-91ae-fc3b521186e3-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.825342 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dt8qt\" (UniqueName: 
\"kubernetes.io/projected/9852d40c-c1c2-4fee-91ae-fc3b521186e3-kube-api-access-dt8qt\") on node \"crc\" DevicePath \"\"" Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.825351 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a3e6f5a-8278-42b6-987f-9a422d130d90-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.825360 4861 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a3e6f5a-8278-42b6-987f-9a422d130d90-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.825368 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a3e6f5a-8278-42b6-987f-9a422d130d90-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.825377 4861 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0a3e6f5a-8278-42b6-987f-9a422d130d90-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.848067 4861 scope.go:117] "RemoveContainer" containerID="4157ae7e6bedd69b959d73425042ba94fc4fcdfb034b26ed52063e67e31f3b5b" Feb 19 13:12:57 crc kubenswrapper[4861]: E0219 13:12:57.848579 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4157ae7e6bedd69b959d73425042ba94fc4fcdfb034b26ed52063e67e31f3b5b\": container with ID starting with 4157ae7e6bedd69b959d73425042ba94fc4fcdfb034b26ed52063e67e31f3b5b not found: ID does not exist" containerID="4157ae7e6bedd69b959d73425042ba94fc4fcdfb034b26ed52063e67e31f3b5b" Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.848638 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4157ae7e6bedd69b959d73425042ba94fc4fcdfb034b26ed52063e67e31f3b5b"} err="failed to get container status \"4157ae7e6bedd69b959d73425042ba94fc4fcdfb034b26ed52063e67e31f3b5b\": rpc error: code = NotFound desc = could not find container \"4157ae7e6bedd69b959d73425042ba94fc4fcdfb034b26ed52063e67e31f3b5b\": container with ID starting with 4157ae7e6bedd69b959d73425042ba94fc4fcdfb034b26ed52063e67e31f3b5b not found: ID does not exist" Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.848726 4861 scope.go:117] "RemoveContainer" containerID="ebd75984f18f3e9e475bf5a5699059f32c278b26e0066f8e2287265dd7829d11" Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.848887 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58d679fb6c-sbvgv"] Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.851964 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-58d679fb6c-sbvgv"] Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.859325 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5444d75749-bt6dq"] Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.862271 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5444d75749-bt6dq"] Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.868356 4861 scope.go:117] "RemoveContainer" containerID="ebd75984f18f3e9e475bf5a5699059f32c278b26e0066f8e2287265dd7829d11" Feb 19 13:12:57 crc kubenswrapper[4861]: E0219 13:12:57.868791 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebd75984f18f3e9e475bf5a5699059f32c278b26e0066f8e2287265dd7829d11\": container with ID starting with ebd75984f18f3e9e475bf5a5699059f32c278b26e0066f8e2287265dd7829d11 not found: ID does not exist" 
containerID="ebd75984f18f3e9e475bf5a5699059f32c278b26e0066f8e2287265dd7829d11" Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.868827 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebd75984f18f3e9e475bf5a5699059f32c278b26e0066f8e2287265dd7829d11"} err="failed to get container status \"ebd75984f18f3e9e475bf5a5699059f32c278b26e0066f8e2287265dd7829d11\": rpc error: code = NotFound desc = could not find container \"ebd75984f18f3e9e475bf5a5699059f32c278b26e0066f8e2287265dd7829d11\": container with ID starting with ebd75984f18f3e9e475bf5a5699059f32c278b26e0066f8e2287265dd7829d11 not found: ID does not exist" Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.983401 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a3e6f5a-8278-42b6-987f-9a422d130d90" path="/var/lib/kubelet/pods/0a3e6f5a-8278-42b6-987f-9a422d130d90/volumes" Feb 19 13:12:57 crc kubenswrapper[4861]: I0219 13:12:57.983932 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9852d40c-c1c2-4fee-91ae-fc3b521186e3" path="/var/lib/kubelet/pods/9852d40c-c1c2-4fee-91ae-fc3b521186e3/volumes" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.618498 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-54d4bc985c-r2gk6"] Feb 19 13:12:58 crc kubenswrapper[4861]: E0219 13:12:58.619255 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9852d40c-c1c2-4fee-91ae-fc3b521186e3" containerName="route-controller-manager" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.619272 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9852d40c-c1c2-4fee-91ae-fc3b521186e3" containerName="route-controller-manager" Feb 19 13:12:58 crc kubenswrapper[4861]: E0219 13:12:58.619290 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a3e6f5a-8278-42b6-987f-9a422d130d90" containerName="controller-manager" Feb 19 13:12:58 
crc kubenswrapper[4861]: I0219 13:12:58.619299 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a3e6f5a-8278-42b6-987f-9a422d130d90" containerName="controller-manager" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.619524 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="9852d40c-c1c2-4fee-91ae-fc3b521186e3" containerName="route-controller-manager" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.619541 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a3e6f5a-8278-42b6-987f-9a422d130d90" containerName="controller-manager" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.620669 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54d4bc985c-r2gk6" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.627486 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.627779 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.628099 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.628487 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d96d659d7-jf6vq"] Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.628805 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.629015 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.629224 4861 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.629475 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d96d659d7-jf6vq" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.634466 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d96d659d7-jf6vq"] Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.634872 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.636755 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.637148 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.637447 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.637679 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.638653 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.640015 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.642095 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-54d4bc985c-r2gk6"] Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.735820 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0b10e17-9de8-4648-8cc6-db7f48ebabd4-serving-cert\") pod \"route-controller-manager-6d96d659d7-jf6vq\" (UID: \"b0b10e17-9de8-4648-8cc6-db7f48ebabd4\") " pod="openshift-route-controller-manager/route-controller-manager-6d96d659d7-jf6vq" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.736323 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6cd5e45-1b9c-4c11-ae45-315de2805446-config\") pod \"controller-manager-54d4bc985c-r2gk6\" (UID: \"e6cd5e45-1b9c-4c11-ae45-315de2805446\") " pod="openshift-controller-manager/controller-manager-54d4bc985c-r2gk6" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.736543 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xqv8\" (UniqueName: \"kubernetes.io/projected/b0b10e17-9de8-4648-8cc6-db7f48ebabd4-kube-api-access-5xqv8\") pod \"route-controller-manager-6d96d659d7-jf6vq\" (UID: \"b0b10e17-9de8-4648-8cc6-db7f48ebabd4\") " pod="openshift-route-controller-manager/route-controller-manager-6d96d659d7-jf6vq" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.736639 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6cd5e45-1b9c-4c11-ae45-315de2805446-proxy-ca-bundles\") pod \"controller-manager-54d4bc985c-r2gk6\" (UID: \"e6cd5e45-1b9c-4c11-ae45-315de2805446\") " pod="openshift-controller-manager/controller-manager-54d4bc985c-r2gk6" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.736711 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6cd5e45-1b9c-4c11-ae45-315de2805446-client-ca\") pod \"controller-manager-54d4bc985c-r2gk6\" (UID: \"e6cd5e45-1b9c-4c11-ae45-315de2805446\") " pod="openshift-controller-manager/controller-manager-54d4bc985c-r2gk6" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.736833 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0b10e17-9de8-4648-8cc6-db7f48ebabd4-client-ca\") pod \"route-controller-manager-6d96d659d7-jf6vq\" (UID: \"b0b10e17-9de8-4648-8cc6-db7f48ebabd4\") " pod="openshift-route-controller-manager/route-controller-manager-6d96d659d7-jf6vq" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.736956 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcvxq\" (UniqueName: \"kubernetes.io/projected/e6cd5e45-1b9c-4c11-ae45-315de2805446-kube-api-access-bcvxq\") pod \"controller-manager-54d4bc985c-r2gk6\" (UID: \"e6cd5e45-1b9c-4c11-ae45-315de2805446\") " pod="openshift-controller-manager/controller-manager-54d4bc985c-r2gk6" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.737033 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0b10e17-9de8-4648-8cc6-db7f48ebabd4-config\") pod \"route-controller-manager-6d96d659d7-jf6vq\" (UID: \"b0b10e17-9de8-4648-8cc6-db7f48ebabd4\") " pod="openshift-route-controller-manager/route-controller-manager-6d96d659d7-jf6vq" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.737111 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6cd5e45-1b9c-4c11-ae45-315de2805446-serving-cert\") pod \"controller-manager-54d4bc985c-r2gk6\" 
(UID: \"e6cd5e45-1b9c-4c11-ae45-315de2805446\") " pod="openshift-controller-manager/controller-manager-54d4bc985c-r2gk6" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.838441 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0b10e17-9de8-4648-8cc6-db7f48ebabd4-client-ca\") pod \"route-controller-manager-6d96d659d7-jf6vq\" (UID: \"b0b10e17-9de8-4648-8cc6-db7f48ebabd4\") " pod="openshift-route-controller-manager/route-controller-manager-6d96d659d7-jf6vq" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.838529 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcvxq\" (UniqueName: \"kubernetes.io/projected/e6cd5e45-1b9c-4c11-ae45-315de2805446-kube-api-access-bcvxq\") pod \"controller-manager-54d4bc985c-r2gk6\" (UID: \"e6cd5e45-1b9c-4c11-ae45-315de2805446\") " pod="openshift-controller-manager/controller-manager-54d4bc985c-r2gk6" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.838554 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0b10e17-9de8-4648-8cc6-db7f48ebabd4-config\") pod \"route-controller-manager-6d96d659d7-jf6vq\" (UID: \"b0b10e17-9de8-4648-8cc6-db7f48ebabd4\") " pod="openshift-route-controller-manager/route-controller-manager-6d96d659d7-jf6vq" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.838580 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6cd5e45-1b9c-4c11-ae45-315de2805446-serving-cert\") pod \"controller-manager-54d4bc985c-r2gk6\" (UID: \"e6cd5e45-1b9c-4c11-ae45-315de2805446\") " pod="openshift-controller-manager/controller-manager-54d4bc985c-r2gk6" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.838648 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b0b10e17-9de8-4648-8cc6-db7f48ebabd4-serving-cert\") pod \"route-controller-manager-6d96d659d7-jf6vq\" (UID: \"b0b10e17-9de8-4648-8cc6-db7f48ebabd4\") " pod="openshift-route-controller-manager/route-controller-manager-6d96d659d7-jf6vq" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.838664 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6cd5e45-1b9c-4c11-ae45-315de2805446-config\") pod \"controller-manager-54d4bc985c-r2gk6\" (UID: \"e6cd5e45-1b9c-4c11-ae45-315de2805446\") " pod="openshift-controller-manager/controller-manager-54d4bc985c-r2gk6" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.838683 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xqv8\" (UniqueName: \"kubernetes.io/projected/b0b10e17-9de8-4648-8cc6-db7f48ebabd4-kube-api-access-5xqv8\") pod \"route-controller-manager-6d96d659d7-jf6vq\" (UID: \"b0b10e17-9de8-4648-8cc6-db7f48ebabd4\") " pod="openshift-route-controller-manager/route-controller-manager-6d96d659d7-jf6vq" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.838704 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6cd5e45-1b9c-4c11-ae45-315de2805446-proxy-ca-bundles\") pod \"controller-manager-54d4bc985c-r2gk6\" (UID: \"e6cd5e45-1b9c-4c11-ae45-315de2805446\") " pod="openshift-controller-manager/controller-manager-54d4bc985c-r2gk6" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.838720 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6cd5e45-1b9c-4c11-ae45-315de2805446-client-ca\") pod \"controller-manager-54d4bc985c-r2gk6\" (UID: \"e6cd5e45-1b9c-4c11-ae45-315de2805446\") " pod="openshift-controller-manager/controller-manager-54d4bc985c-r2gk6" Feb 19 13:12:58 crc kubenswrapper[4861]: 
I0219 13:12:58.840055 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0b10e17-9de8-4648-8cc6-db7f48ebabd4-config\") pod \"route-controller-manager-6d96d659d7-jf6vq\" (UID: \"b0b10e17-9de8-4648-8cc6-db7f48ebabd4\") " pod="openshift-route-controller-manager/route-controller-manager-6d96d659d7-jf6vq" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.840068 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6cd5e45-1b9c-4c11-ae45-315de2805446-config\") pod \"controller-manager-54d4bc985c-r2gk6\" (UID: \"e6cd5e45-1b9c-4c11-ae45-315de2805446\") " pod="openshift-controller-manager/controller-manager-54d4bc985c-r2gk6" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.840757 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6cd5e45-1b9c-4c11-ae45-315de2805446-proxy-ca-bundles\") pod \"controller-manager-54d4bc985c-r2gk6\" (UID: \"e6cd5e45-1b9c-4c11-ae45-315de2805446\") " pod="openshift-controller-manager/controller-manager-54d4bc985c-r2gk6" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.841961 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0b10e17-9de8-4648-8cc6-db7f48ebabd4-client-ca\") pod \"route-controller-manager-6d96d659d7-jf6vq\" (UID: \"b0b10e17-9de8-4648-8cc6-db7f48ebabd4\") " pod="openshift-route-controller-manager/route-controller-manager-6d96d659d7-jf6vq" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.845755 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0b10e17-9de8-4648-8cc6-db7f48ebabd4-serving-cert\") pod \"route-controller-manager-6d96d659d7-jf6vq\" (UID: \"b0b10e17-9de8-4648-8cc6-db7f48ebabd4\") " 
pod="openshift-route-controller-manager/route-controller-manager-6d96d659d7-jf6vq" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.848457 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6cd5e45-1b9c-4c11-ae45-315de2805446-serving-cert\") pod \"controller-manager-54d4bc985c-r2gk6\" (UID: \"e6cd5e45-1b9c-4c11-ae45-315de2805446\") " pod="openshift-controller-manager/controller-manager-54d4bc985c-r2gk6" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.855004 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcvxq\" (UniqueName: \"kubernetes.io/projected/e6cd5e45-1b9c-4c11-ae45-315de2805446-kube-api-access-bcvxq\") pod \"controller-manager-54d4bc985c-r2gk6\" (UID: \"e6cd5e45-1b9c-4c11-ae45-315de2805446\") " pod="openshift-controller-manager/controller-manager-54d4bc985c-r2gk6" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.859989 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xqv8\" (UniqueName: \"kubernetes.io/projected/b0b10e17-9de8-4648-8cc6-db7f48ebabd4-kube-api-access-5xqv8\") pod \"route-controller-manager-6d96d659d7-jf6vq\" (UID: \"b0b10e17-9de8-4648-8cc6-db7f48ebabd4\") " pod="openshift-route-controller-manager/route-controller-manager-6d96d659d7-jf6vq" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.924182 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6cd5e45-1b9c-4c11-ae45-315de2805446-client-ca\") pod \"controller-manager-54d4bc985c-r2gk6\" (UID: \"e6cd5e45-1b9c-4c11-ae45-315de2805446\") " pod="openshift-controller-manager/controller-manager-54d4bc985c-r2gk6" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.956337 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-54d4bc985c-r2gk6" Feb 19 13:12:58 crc kubenswrapper[4861]: I0219 13:12:58.965449 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d96d659d7-jf6vq" Feb 19 13:12:59 crc kubenswrapper[4861]: I0219 13:12:59.191410 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54d4bc985c-r2gk6"] Feb 19 13:12:59 crc kubenswrapper[4861]: I0219 13:12:59.240354 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d96d659d7-jf6vq"] Feb 19 13:12:59 crc kubenswrapper[4861]: W0219 13:12:59.250743 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0b10e17_9de8_4648_8cc6_db7f48ebabd4.slice/crio-a9f8b5ff5ba87328a3031293277deddfcd587e4c44626f97448ad1d090b1a7a5 WatchSource:0}: Error finding container a9f8b5ff5ba87328a3031293277deddfcd587e4c44626f97448ad1d090b1a7a5: Status 404 returned error can't find the container with id a9f8b5ff5ba87328a3031293277deddfcd587e4c44626f97448ad1d090b1a7a5 Feb 19 13:12:59 crc kubenswrapper[4861]: I0219 13:12:59.477789 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q5nqk" Feb 19 13:12:59 crc kubenswrapper[4861]: I0219 13:12:59.478178 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q5nqk" Feb 19 13:12:59 crc kubenswrapper[4861]: I0219 13:12:59.773882 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d8qvw" Feb 19 13:12:59 crc kubenswrapper[4861]: I0219 13:12:59.775018 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-d8qvw" Feb 19 13:12:59 crc kubenswrapper[4861]: I0219 13:12:59.832812 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d96d659d7-jf6vq" event={"ID":"b0b10e17-9de8-4648-8cc6-db7f48ebabd4","Type":"ContainerStarted","Data":"0abe651ab9ecd7379c51dca2f34219a3eee3fdcaa4ee9d9a613ed81a144018f6"} Feb 19 13:12:59 crc kubenswrapper[4861]: I0219 13:12:59.832858 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d96d659d7-jf6vq" event={"ID":"b0b10e17-9de8-4648-8cc6-db7f48ebabd4","Type":"ContainerStarted","Data":"a9f8b5ff5ba87328a3031293277deddfcd587e4c44626f97448ad1d090b1a7a5"} Feb 19 13:12:59 crc kubenswrapper[4861]: I0219 13:12:59.835853 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54d4bc985c-r2gk6" event={"ID":"e6cd5e45-1b9c-4c11-ae45-315de2805446","Type":"ContainerStarted","Data":"d09edfbd54f6a56d8e56d587e96b223848b17799bfa8ce5075cef4bdbb7dd8e9"} Feb 19 13:12:59 crc kubenswrapper[4861]: I0219 13:12:59.835874 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54d4bc985c-r2gk6" event={"ID":"e6cd5e45-1b9c-4c11-ae45-315de2805446","Type":"ContainerStarted","Data":"b11d48817d69d39c27ec16ad2a113c70dda4f1705d9c3ab2dc55312b26533dc9"} Feb 19 13:13:00 crc kubenswrapper[4861]: I0219 13:13:00.078360 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q5nqk" Feb 19 13:13:00 crc kubenswrapper[4861]: I0219 13:13:00.080017 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d8qvw" Feb 19 13:13:00 crc kubenswrapper[4861]: I0219 13:13:00.859787 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-54d4bc985c-r2gk6" podStartSLOduration=3.859773015 podStartE2EDuration="3.859773015s" podCreationTimestamp="2026-02-19 13:12:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:13:00.856475967 +0000 UTC m=+195.517579205" watchObservedRunningTime="2026-02-19 13:13:00.859773015 +0000 UTC m=+195.520876243" Feb 19 13:13:00 crc kubenswrapper[4861]: I0219 13:13:00.875262 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6d96d659d7-jf6vq" podStartSLOduration=3.875246581 podStartE2EDuration="3.875246581s" podCreationTimestamp="2026-02-19 13:12:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:13:00.871355846 +0000 UTC m=+195.532459074" watchObservedRunningTime="2026-02-19 13:13:00.875246581 +0000 UTC m=+195.536349799" Feb 19 13:13:00 crc kubenswrapper[4861]: I0219 13:13:00.884625 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d8qvw" Feb 19 13:13:00 crc kubenswrapper[4861]: I0219 13:13:00.888654 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q5nqk" Feb 19 13:13:01 crc kubenswrapper[4861]: I0219 13:13:01.095282 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 13:13:01 crc kubenswrapper[4861]: I0219 13:13:01.095944 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 13:13:01 crc kubenswrapper[4861]: I0219 13:13:01.099079 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 13:13:01 crc kubenswrapper[4861]: I0219 13:13:01.099243 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 13:13:01 crc kubenswrapper[4861]: I0219 13:13:01.105856 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 13:13:01 crc kubenswrapper[4861]: I0219 13:13:01.273144 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0d0ca58-8fb8-451f-86d2-1e8e510aefe5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e0d0ca58-8fb8-451f-86d2-1e8e510aefe5\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 13:13:01 crc kubenswrapper[4861]: I0219 13:13:01.273241 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0d0ca58-8fb8-451f-86d2-1e8e510aefe5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e0d0ca58-8fb8-451f-86d2-1e8e510aefe5\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 13:13:01 crc kubenswrapper[4861]: I0219 13:13:01.374129 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0d0ca58-8fb8-451f-86d2-1e8e510aefe5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e0d0ca58-8fb8-451f-86d2-1e8e510aefe5\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 13:13:01 crc kubenswrapper[4861]: I0219 13:13:01.374177 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/e0d0ca58-8fb8-451f-86d2-1e8e510aefe5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e0d0ca58-8fb8-451f-86d2-1e8e510aefe5\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 13:13:01 crc kubenswrapper[4861]: I0219 13:13:01.374297 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0d0ca58-8fb8-451f-86d2-1e8e510aefe5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e0d0ca58-8fb8-451f-86d2-1e8e510aefe5\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 13:13:01 crc kubenswrapper[4861]: I0219 13:13:01.405059 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0d0ca58-8fb8-451f-86d2-1e8e510aefe5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e0d0ca58-8fb8-451f-86d2-1e8e510aefe5\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 13:13:01 crc kubenswrapper[4861]: I0219 13:13:01.424879 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 13:13:01 crc kubenswrapper[4861]: I0219 13:13:01.793352 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2lhzw" Feb 19 13:13:01 crc kubenswrapper[4861]: I0219 13:13:01.793699 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2lhzw" Feb 19 13:13:01 crc kubenswrapper[4861]: I0219 13:13:01.835236 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2lhzw" Feb 19 13:13:01 crc kubenswrapper[4861]: I0219 13:13:01.845992 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 13:13:01 crc kubenswrapper[4861]: W0219 13:13:01.848353 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode0d0ca58_8fb8_451f_86d2_1e8e510aefe5.slice/crio-5619a3077ad677770b3bcaf9ab299380ce3b89a46f278d31e19c3a0828084939 WatchSource:0}: Error finding container 5619a3077ad677770b3bcaf9ab299380ce3b89a46f278d31e19c3a0828084939: Status 404 returned error can't find the container with id 5619a3077ad677770b3bcaf9ab299380ce3b89a46f278d31e19c3a0828084939 Feb 19 13:13:01 crc kubenswrapper[4861]: I0219 13:13:01.890972 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2lhzw" Feb 19 13:13:02 crc kubenswrapper[4861]: I0219 13:13:02.010282 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d8qvw"] Feb 19 13:13:02 crc kubenswrapper[4861]: I0219 13:13:02.133928 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zv9zt" Feb 19 13:13:02 crc kubenswrapper[4861]: I0219 13:13:02.133986 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zv9zt" Feb 19 13:13:02 crc kubenswrapper[4861]: I0219 13:13:02.180549 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zv9zt" Feb 19 13:13:02 crc kubenswrapper[4861]: I0219 13:13:02.852133 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e0d0ca58-8fb8-451f-86d2-1e8e510aefe5","Type":"ContainerStarted","Data":"cd80ea217832bce62cd600ade352643ecd2e29c5c47cc2d1b6060dbbd606ee9f"} Feb 19 13:13:02 crc kubenswrapper[4861]: I0219 13:13:02.852196 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e0d0ca58-8fb8-451f-86d2-1e8e510aefe5","Type":"ContainerStarted","Data":"5619a3077ad677770b3bcaf9ab299380ce3b89a46f278d31e19c3a0828084939"} Feb 19 13:13:02 crc kubenswrapper[4861]: I0219 13:13:02.852751 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d8qvw" podUID="26c3ab3c-b007-48bf-9267-1be0df74a551" containerName="registry-server" containerID="cri-o://f20235f5b6af16f60727be69cadaf235b7e1f580de1a84945f94eeb4e258a7af" gracePeriod=2 Feb 19 13:13:02 crc kubenswrapper[4861]: I0219 13:13:02.870371 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.8703542469999999 podStartE2EDuration="1.870354247s" podCreationTimestamp="2026-02-19 13:13:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:13:02.867716159 +0000 UTC m=+197.528819387" watchObservedRunningTime="2026-02-19 13:13:02.870354247 +0000 UTC m=+197.531457475" Feb 19 13:13:02 crc kubenswrapper[4861]: I0219 13:13:02.892920 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-zv9zt" Feb 19 13:13:03 crc kubenswrapper[4861]: I0219 13:13:03.372225 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6txts"] Feb 19 13:13:03 crc kubenswrapper[4861]: I0219 13:13:03.494849 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d8qvw" Feb 19 13:13:03 crc kubenswrapper[4861]: I0219 13:13:03.608447 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26c3ab3c-b007-48bf-9267-1be0df74a551-catalog-content\") pod \"26c3ab3c-b007-48bf-9267-1be0df74a551\" (UID: \"26c3ab3c-b007-48bf-9267-1be0df74a551\") " Feb 19 13:13:03 crc kubenswrapper[4861]: I0219 13:13:03.608498 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26c3ab3c-b007-48bf-9267-1be0df74a551-utilities\") pod \"26c3ab3c-b007-48bf-9267-1be0df74a551\" (UID: \"26c3ab3c-b007-48bf-9267-1be0df74a551\") " Feb 19 13:13:03 crc kubenswrapper[4861]: I0219 13:13:03.608627 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlvb7\" (UniqueName: \"kubernetes.io/projected/26c3ab3c-b007-48bf-9267-1be0df74a551-kube-api-access-hlvb7\") pod \"26c3ab3c-b007-48bf-9267-1be0df74a551\" (UID: \"26c3ab3c-b007-48bf-9267-1be0df74a551\") " Feb 19 13:13:03 crc kubenswrapper[4861]: I0219 13:13:03.609323 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26c3ab3c-b007-48bf-9267-1be0df74a551-utilities" (OuterVolumeSpecName: "utilities") pod "26c3ab3c-b007-48bf-9267-1be0df74a551" (UID: "26c3ab3c-b007-48bf-9267-1be0df74a551"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:13:03 crc kubenswrapper[4861]: I0219 13:13:03.624662 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26c3ab3c-b007-48bf-9267-1be0df74a551-kube-api-access-hlvb7" (OuterVolumeSpecName: "kube-api-access-hlvb7") pod "26c3ab3c-b007-48bf-9267-1be0df74a551" (UID: "26c3ab3c-b007-48bf-9267-1be0df74a551"). InnerVolumeSpecName "kube-api-access-hlvb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:13:03 crc kubenswrapper[4861]: I0219 13:13:03.710103 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlvb7\" (UniqueName: \"kubernetes.io/projected/26c3ab3c-b007-48bf-9267-1be0df74a551-kube-api-access-hlvb7\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:03 crc kubenswrapper[4861]: I0219 13:13:03.710467 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26c3ab3c-b007-48bf-9267-1be0df74a551-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:03 crc kubenswrapper[4861]: I0219 13:13:03.711503 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26c3ab3c-b007-48bf-9267-1be0df74a551-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26c3ab3c-b007-48bf-9267-1be0df74a551" (UID: "26c3ab3c-b007-48bf-9267-1be0df74a551"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:13:03 crc kubenswrapper[4861]: I0219 13:13:03.811472 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26c3ab3c-b007-48bf-9267-1be0df74a551-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:03 crc kubenswrapper[4861]: I0219 13:13:03.834485 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:13:03 crc kubenswrapper[4861]: I0219 13:13:03.834547 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:13:03 crc kubenswrapper[4861]: I0219 13:13:03.857711 4861 generic.go:334] "Generic (PLEG): container finished" podID="e0d0ca58-8fb8-451f-86d2-1e8e510aefe5" containerID="cd80ea217832bce62cd600ade352643ecd2e29c5c47cc2d1b6060dbbd606ee9f" exitCode=0 Feb 19 13:13:03 crc kubenswrapper[4861]: I0219 13:13:03.857783 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e0d0ca58-8fb8-451f-86d2-1e8e510aefe5","Type":"ContainerDied","Data":"cd80ea217832bce62cd600ade352643ecd2e29c5c47cc2d1b6060dbbd606ee9f"} Feb 19 13:13:03 crc kubenswrapper[4861]: I0219 13:13:03.860179 4861 generic.go:334] "Generic (PLEG): container finished" podID="26c3ab3c-b007-48bf-9267-1be0df74a551" containerID="f20235f5b6af16f60727be69cadaf235b7e1f580de1a84945f94eeb4e258a7af" exitCode=0 Feb 19 13:13:03 crc kubenswrapper[4861]: I0219 13:13:03.860259 4861 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d8qvw" Feb 19 13:13:03 crc kubenswrapper[4861]: I0219 13:13:03.860251 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8qvw" event={"ID":"26c3ab3c-b007-48bf-9267-1be0df74a551","Type":"ContainerDied","Data":"f20235f5b6af16f60727be69cadaf235b7e1f580de1a84945f94eeb4e258a7af"} Feb 19 13:13:03 crc kubenswrapper[4861]: I0219 13:13:03.860312 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8qvw" event={"ID":"26c3ab3c-b007-48bf-9267-1be0df74a551","Type":"ContainerDied","Data":"3ffdb97756be56a0d453482a4f3bff1a7a029d6a52ef02d1e9834aa5f5300173"} Feb 19 13:13:03 crc kubenswrapper[4861]: I0219 13:13:03.860337 4861 scope.go:117] "RemoveContainer" containerID="f20235f5b6af16f60727be69cadaf235b7e1f580de1a84945f94eeb4e258a7af" Feb 19 13:13:03 crc kubenswrapper[4861]: I0219 13:13:03.877450 4861 scope.go:117] "RemoveContainer" containerID="f860e6910ac408dbb0e154004cd18188e1afb8db5875907b7292cf4a1e663aa9" Feb 19 13:13:03 crc kubenswrapper[4861]: I0219 13:13:03.890300 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d8qvw"] Feb 19 13:13:03 crc kubenswrapper[4861]: I0219 13:13:03.890345 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d8qvw"] Feb 19 13:13:03 crc kubenswrapper[4861]: I0219 13:13:03.906570 4861 scope.go:117] "RemoveContainer" containerID="c59e6ced916cc91a660cb50515889de6c2090955d4200f5c5a2d93339d55fec8" Feb 19 13:13:03 crc kubenswrapper[4861]: I0219 13:13:03.922637 4861 scope.go:117] "RemoveContainer" containerID="f20235f5b6af16f60727be69cadaf235b7e1f580de1a84945f94eeb4e258a7af" Feb 19 13:13:03 crc kubenswrapper[4861]: E0219 13:13:03.923104 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"f20235f5b6af16f60727be69cadaf235b7e1f580de1a84945f94eeb4e258a7af\": container with ID starting with f20235f5b6af16f60727be69cadaf235b7e1f580de1a84945f94eeb4e258a7af not found: ID does not exist" containerID="f20235f5b6af16f60727be69cadaf235b7e1f580de1a84945f94eeb4e258a7af" Feb 19 13:13:03 crc kubenswrapper[4861]: I0219 13:13:03.923153 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f20235f5b6af16f60727be69cadaf235b7e1f580de1a84945f94eeb4e258a7af"} err="failed to get container status \"f20235f5b6af16f60727be69cadaf235b7e1f580de1a84945f94eeb4e258a7af\": rpc error: code = NotFound desc = could not find container \"f20235f5b6af16f60727be69cadaf235b7e1f580de1a84945f94eeb4e258a7af\": container with ID starting with f20235f5b6af16f60727be69cadaf235b7e1f580de1a84945f94eeb4e258a7af not found: ID does not exist" Feb 19 13:13:03 crc kubenswrapper[4861]: I0219 13:13:03.923175 4861 scope.go:117] "RemoveContainer" containerID="f860e6910ac408dbb0e154004cd18188e1afb8db5875907b7292cf4a1e663aa9" Feb 19 13:13:03 crc kubenswrapper[4861]: E0219 13:13:03.923535 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f860e6910ac408dbb0e154004cd18188e1afb8db5875907b7292cf4a1e663aa9\": container with ID starting with f860e6910ac408dbb0e154004cd18188e1afb8db5875907b7292cf4a1e663aa9 not found: ID does not exist" containerID="f860e6910ac408dbb0e154004cd18188e1afb8db5875907b7292cf4a1e663aa9" Feb 19 13:13:03 crc kubenswrapper[4861]: I0219 13:13:03.923555 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f860e6910ac408dbb0e154004cd18188e1afb8db5875907b7292cf4a1e663aa9"} err="failed to get container status \"f860e6910ac408dbb0e154004cd18188e1afb8db5875907b7292cf4a1e663aa9\": rpc error: code = NotFound desc = could not find container \"f860e6910ac408dbb0e154004cd18188e1afb8db5875907b7292cf4a1e663aa9\": container 
with ID starting with f860e6910ac408dbb0e154004cd18188e1afb8db5875907b7292cf4a1e663aa9 not found: ID does not exist" Feb 19 13:13:03 crc kubenswrapper[4861]: I0219 13:13:03.923574 4861 scope.go:117] "RemoveContainer" containerID="c59e6ced916cc91a660cb50515889de6c2090955d4200f5c5a2d93339d55fec8" Feb 19 13:13:03 crc kubenswrapper[4861]: E0219 13:13:03.923815 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c59e6ced916cc91a660cb50515889de6c2090955d4200f5c5a2d93339d55fec8\": container with ID starting with c59e6ced916cc91a660cb50515889de6c2090955d4200f5c5a2d93339d55fec8 not found: ID does not exist" containerID="c59e6ced916cc91a660cb50515889de6c2090955d4200f5c5a2d93339d55fec8" Feb 19 13:13:03 crc kubenswrapper[4861]: I0219 13:13:03.923839 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c59e6ced916cc91a660cb50515889de6c2090955d4200f5c5a2d93339d55fec8"} err="failed to get container status \"c59e6ced916cc91a660cb50515889de6c2090955d4200f5c5a2d93339d55fec8\": rpc error: code = NotFound desc = could not find container \"c59e6ced916cc91a660cb50515889de6c2090955d4200f5c5a2d93339d55fec8\": container with ID starting with c59e6ced916cc91a660cb50515889de6c2090955d4200f5c5a2d93339d55fec8 not found: ID does not exist" Feb 19 13:13:03 crc kubenswrapper[4861]: I0219 13:13:03.983992 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26c3ab3c-b007-48bf-9267-1be0df74a551" path="/var/lib/kubelet/pods/26c3ab3c-b007-48bf-9267-1be0df74a551/volumes" Feb 19 13:13:04 crc kubenswrapper[4861]: I0219 13:13:04.198041 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zv9zt"] Feb 19 13:13:04 crc kubenswrapper[4861]: I0219 13:13:04.867489 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zv9zt" 
podUID="5db4ca60-2b1f-4d62-a217-890f7be1e863" containerName="registry-server" containerID="cri-o://64c3f0b8e3123574939b93b7229727c3f5ee432de1dfe0062a309b6f96e44dc3" gracePeriod=2 Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.218499 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.330027 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0d0ca58-8fb8-451f-86d2-1e8e510aefe5-kube-api-access\") pod \"e0d0ca58-8fb8-451f-86d2-1e8e510aefe5\" (UID: \"e0d0ca58-8fb8-451f-86d2-1e8e510aefe5\") " Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.330122 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0d0ca58-8fb8-451f-86d2-1e8e510aefe5-kubelet-dir\") pod \"e0d0ca58-8fb8-451f-86d2-1e8e510aefe5\" (UID: \"e0d0ca58-8fb8-451f-86d2-1e8e510aefe5\") " Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.330205 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0d0ca58-8fb8-451f-86d2-1e8e510aefe5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e0d0ca58-8fb8-451f-86d2-1e8e510aefe5" (UID: "e0d0ca58-8fb8-451f-86d2-1e8e510aefe5"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.330390 4861 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0d0ca58-8fb8-451f-86d2-1e8e510aefe5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.336613 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0d0ca58-8fb8-451f-86d2-1e8e510aefe5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e0d0ca58-8fb8-451f-86d2-1e8e510aefe5" (UID: "e0d0ca58-8fb8-451f-86d2-1e8e510aefe5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.336850 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zv9zt" Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.431297 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5db4ca60-2b1f-4d62-a217-890f7be1e863-catalog-content\") pod \"5db4ca60-2b1f-4d62-a217-890f7be1e863\" (UID: \"5db4ca60-2b1f-4d62-a217-890f7be1e863\") " Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.431428 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5db4ca60-2b1f-4d62-a217-890f7be1e863-utilities\") pod \"5db4ca60-2b1f-4d62-a217-890f7be1e863\" (UID: \"5db4ca60-2b1f-4d62-a217-890f7be1e863\") " Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.431472 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qst6\" (UniqueName: \"kubernetes.io/projected/5db4ca60-2b1f-4d62-a217-890f7be1e863-kube-api-access-2qst6\") pod \"5db4ca60-2b1f-4d62-a217-890f7be1e863\" (UID: 
\"5db4ca60-2b1f-4d62-a217-890f7be1e863\") " Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.431719 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0d0ca58-8fb8-451f-86d2-1e8e510aefe5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.432549 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5db4ca60-2b1f-4d62-a217-890f7be1e863-utilities" (OuterVolumeSpecName: "utilities") pod "5db4ca60-2b1f-4d62-a217-890f7be1e863" (UID: "5db4ca60-2b1f-4d62-a217-890f7be1e863"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.434384 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5db4ca60-2b1f-4d62-a217-890f7be1e863-kube-api-access-2qst6" (OuterVolumeSpecName: "kube-api-access-2qst6") pod "5db4ca60-2b1f-4d62-a217-890f7be1e863" (UID: "5db4ca60-2b1f-4d62-a217-890f7be1e863"). InnerVolumeSpecName "kube-api-access-2qst6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.459033 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5db4ca60-2b1f-4d62-a217-890f7be1e863-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5db4ca60-2b1f-4d62-a217-890f7be1e863" (UID: "5db4ca60-2b1f-4d62-a217-890f7be1e863"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.532893 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5db4ca60-2b1f-4d62-a217-890f7be1e863-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.532939 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qst6\" (UniqueName: \"kubernetes.io/projected/5db4ca60-2b1f-4d62-a217-890f7be1e863-kube-api-access-2qst6\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.532956 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5db4ca60-2b1f-4d62-a217-890f7be1e863-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.873853 4861 generic.go:334] "Generic (PLEG): container finished" podID="5db4ca60-2b1f-4d62-a217-890f7be1e863" containerID="64c3f0b8e3123574939b93b7229727c3f5ee432de1dfe0062a309b6f96e44dc3" exitCode=0 Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.873919 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zv9zt" Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.873940 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zv9zt" event={"ID":"5db4ca60-2b1f-4d62-a217-890f7be1e863","Type":"ContainerDied","Data":"64c3f0b8e3123574939b93b7229727c3f5ee432de1dfe0062a309b6f96e44dc3"} Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.873975 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zv9zt" event={"ID":"5db4ca60-2b1f-4d62-a217-890f7be1e863","Type":"ContainerDied","Data":"2456017c112249437718dd4f13bd8dc04385bdd1239ff75ee2f66af909c3ba24"} Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.874011 4861 scope.go:117] "RemoveContainer" containerID="64c3f0b8e3123574939b93b7229727c3f5ee432de1dfe0062a309b6f96e44dc3" Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.876774 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e0d0ca58-8fb8-451f-86d2-1e8e510aefe5","Type":"ContainerDied","Data":"5619a3077ad677770b3bcaf9ab299380ce3b89a46f278d31e19c3a0828084939"} Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.876801 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5619a3077ad677770b3bcaf9ab299380ce3b89a46f278d31e19c3a0828084939" Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.876838 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.903707 4861 scope.go:117] "RemoveContainer" containerID="7f607e2c1840e713202bb23880e5cc2d1d09b1dca61326e27ed221189ffc512a" Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.912774 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zv9zt"] Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.916767 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zv9zt"] Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.931515 4861 scope.go:117] "RemoveContainer" containerID="74efef3dc74f7c26e83083cf08d8e29581cd14cd07aed16f42bc0c2b04d926e0" Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.947325 4861 scope.go:117] "RemoveContainer" containerID="64c3f0b8e3123574939b93b7229727c3f5ee432de1dfe0062a309b6f96e44dc3" Feb 19 13:13:05 crc kubenswrapper[4861]: E0219 13:13:05.947694 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64c3f0b8e3123574939b93b7229727c3f5ee432de1dfe0062a309b6f96e44dc3\": container with ID starting with 64c3f0b8e3123574939b93b7229727c3f5ee432de1dfe0062a309b6f96e44dc3 not found: ID does not exist" containerID="64c3f0b8e3123574939b93b7229727c3f5ee432de1dfe0062a309b6f96e44dc3" Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.947755 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64c3f0b8e3123574939b93b7229727c3f5ee432de1dfe0062a309b6f96e44dc3"} err="failed to get container status \"64c3f0b8e3123574939b93b7229727c3f5ee432de1dfe0062a309b6f96e44dc3\": rpc error: code = NotFound desc = could not find container \"64c3f0b8e3123574939b93b7229727c3f5ee432de1dfe0062a309b6f96e44dc3\": container with ID starting with 64c3f0b8e3123574939b93b7229727c3f5ee432de1dfe0062a309b6f96e44dc3 not found: 
ID does not exist" Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.947777 4861 scope.go:117] "RemoveContainer" containerID="7f607e2c1840e713202bb23880e5cc2d1d09b1dca61326e27ed221189ffc512a" Feb 19 13:13:05 crc kubenswrapper[4861]: E0219 13:13:05.948150 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f607e2c1840e713202bb23880e5cc2d1d09b1dca61326e27ed221189ffc512a\": container with ID starting with 7f607e2c1840e713202bb23880e5cc2d1d09b1dca61326e27ed221189ffc512a not found: ID does not exist" containerID="7f607e2c1840e713202bb23880e5cc2d1d09b1dca61326e27ed221189ffc512a" Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.948190 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f607e2c1840e713202bb23880e5cc2d1d09b1dca61326e27ed221189ffc512a"} err="failed to get container status \"7f607e2c1840e713202bb23880e5cc2d1d09b1dca61326e27ed221189ffc512a\": rpc error: code = NotFound desc = could not find container \"7f607e2c1840e713202bb23880e5cc2d1d09b1dca61326e27ed221189ffc512a\": container with ID starting with 7f607e2c1840e713202bb23880e5cc2d1d09b1dca61326e27ed221189ffc512a not found: ID does not exist" Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.948217 4861 scope.go:117] "RemoveContainer" containerID="74efef3dc74f7c26e83083cf08d8e29581cd14cd07aed16f42bc0c2b04d926e0" Feb 19 13:13:05 crc kubenswrapper[4861]: E0219 13:13:05.948468 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74efef3dc74f7c26e83083cf08d8e29581cd14cd07aed16f42bc0c2b04d926e0\": container with ID starting with 74efef3dc74f7c26e83083cf08d8e29581cd14cd07aed16f42bc0c2b04d926e0 not found: ID does not exist" containerID="74efef3dc74f7c26e83083cf08d8e29581cd14cd07aed16f42bc0c2b04d926e0" Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.948494 4861 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74efef3dc74f7c26e83083cf08d8e29581cd14cd07aed16f42bc0c2b04d926e0"} err="failed to get container status \"74efef3dc74f7c26e83083cf08d8e29581cd14cd07aed16f42bc0c2b04d926e0\": rpc error: code = NotFound desc = could not find container \"74efef3dc74f7c26e83083cf08d8e29581cd14cd07aed16f42bc0c2b04d926e0\": container with ID starting with 74efef3dc74f7c26e83083cf08d8e29581cd14cd07aed16f42bc0c2b04d926e0 not found: ID does not exist" Feb 19 13:13:05 crc kubenswrapper[4861]: I0219 13:13:05.988120 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5db4ca60-2b1f-4d62-a217-890f7be1e863" path="/var/lib/kubelet/pods/5db4ca60-2b1f-4d62-a217-890f7be1e863/volumes" Feb 19 13:13:06 crc kubenswrapper[4861]: I0219 13:13:06.884297 4861 generic.go:334] "Generic (PLEG): container finished" podID="617be892-2391-43d5-94d0-c0600d0c66a0" containerID="0ffb2104b8170ce2a2c0822f15082cc7a61e07545e70dfc122963df9da49cdcf" exitCode=0 Feb 19 13:13:06 crc kubenswrapper[4861]: I0219 13:13:06.884351 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kcqf7" event={"ID":"617be892-2391-43d5-94d0-c0600d0c66a0","Type":"ContainerDied","Data":"0ffb2104b8170ce2a2c0822f15082cc7a61e07545e70dfc122963df9da49cdcf"} Feb 19 13:13:06 crc kubenswrapper[4861]: I0219 13:13:06.887107 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-plx4l" event={"ID":"685e7fb8-e8da-4f5a-87c5-424d1e12a6be","Type":"ContainerStarted","Data":"8370c5c4ad77c3e18295756bbd820d7c0f8f43d16ae111b133c85fb73d5eb8bf"} Feb 19 13:13:06 crc kubenswrapper[4861]: I0219 13:13:06.892148 4861 generic.go:334] "Generic (PLEG): container finished" podID="0eb10abc-c209-4a6b-8fc8-39973ed75fd6" containerID="68309d33a413c702fcb94934567359ad09b83cdf7d5a185cc0caf1370d403a92" exitCode=0 Feb 19 13:13:06 crc kubenswrapper[4861]: I0219 13:13:06.892180 4861 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-khrzg" event={"ID":"0eb10abc-c209-4a6b-8fc8-39973ed75fd6","Type":"ContainerDied","Data":"68309d33a413c702fcb94934567359ad09b83cdf7d5a185cc0caf1370d403a92"} Feb 19 13:13:07 crc kubenswrapper[4861]: I0219 13:13:07.898216 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kcqf7" event={"ID":"617be892-2391-43d5-94d0-c0600d0c66a0","Type":"ContainerStarted","Data":"8e84fa1060cb842d24672baa58b080884bd9f1a88e4ef83cddaf0df561d3812d"} Feb 19 13:13:07 crc kubenswrapper[4861]: I0219 13:13:07.900562 4861 generic.go:334] "Generic (PLEG): container finished" podID="685e7fb8-e8da-4f5a-87c5-424d1e12a6be" containerID="8370c5c4ad77c3e18295756bbd820d7c0f8f43d16ae111b133c85fb73d5eb8bf" exitCode=0 Feb 19 13:13:07 crc kubenswrapper[4861]: I0219 13:13:07.900599 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-plx4l" event={"ID":"685e7fb8-e8da-4f5a-87c5-424d1e12a6be","Type":"ContainerDied","Data":"8370c5c4ad77c3e18295756bbd820d7c0f8f43d16ae111b133c85fb73d5eb8bf"} Feb 19 13:13:07 crc kubenswrapper[4861]: I0219 13:13:07.902279 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-khrzg" event={"ID":"0eb10abc-c209-4a6b-8fc8-39973ed75fd6","Type":"ContainerStarted","Data":"22a59ee3c97524e651e79a531aaabe3b69d0c212ddc71ef5b37c5a3558371be0"} Feb 19 13:13:07 crc kubenswrapper[4861]: I0219 13:13:07.936845 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kcqf7" podStartSLOduration=3.118661266 podStartE2EDuration="45.936824589s" podCreationTimestamp="2026-02-19 13:12:22 +0000 UTC" firstStartedPulling="2026-02-19 13:12:24.445215422 +0000 UTC m=+159.106318650" lastFinishedPulling="2026-02-19 13:13:07.263378745 +0000 UTC m=+201.924481973" observedRunningTime="2026-02-19 13:13:07.919829387 +0000 UTC m=+202.580932615" 
watchObservedRunningTime="2026-02-19 13:13:07.936824589 +0000 UTC m=+202.597927817" Feb 19 13:13:07 crc kubenswrapper[4861]: I0219 13:13:07.999882 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-khrzg" podStartSLOduration=3.866991408 podStartE2EDuration="48.999867439s" podCreationTimestamp="2026-02-19 13:12:19 +0000 UTC" firstStartedPulling="2026-02-19 13:12:22.174566945 +0000 UTC m=+156.835670173" lastFinishedPulling="2026-02-19 13:13:07.307442976 +0000 UTC m=+201.968546204" observedRunningTime="2026-02-19 13:13:07.961195908 +0000 UTC m=+202.622299156" watchObservedRunningTime="2026-02-19 13:13:07.999867439 +0000 UTC m=+202.660970667" Feb 19 13:13:08 crc kubenswrapper[4861]: I0219 13:13:08.957380 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-54d4bc985c-r2gk6" Feb 19 13:13:08 crc kubenswrapper[4861]: I0219 13:13:08.961862 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-54d4bc985c-r2gk6" Feb 19 13:13:08 crc kubenswrapper[4861]: I0219 13:13:08.965791 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6d96d659d7-jf6vq" Feb 19 13:13:08 crc kubenswrapper[4861]: I0219 13:13:08.971964 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6d96d659d7-jf6vq" Feb 19 13:13:09 crc kubenswrapper[4861]: I0219 13:13:09.307140 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 13:13:09 crc kubenswrapper[4861]: E0219 13:13:09.307627 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0d0ca58-8fb8-451f-86d2-1e8e510aefe5" containerName="pruner" Feb 19 13:13:09 crc kubenswrapper[4861]: I0219 13:13:09.307650 4861 
state_mem.go:107] "Deleted CPUSet assignment" podUID="e0d0ca58-8fb8-451f-86d2-1e8e510aefe5" containerName="pruner" Feb 19 13:13:09 crc kubenswrapper[4861]: E0219 13:13:09.307663 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c3ab3c-b007-48bf-9267-1be0df74a551" containerName="extract-utilities" Feb 19 13:13:09 crc kubenswrapper[4861]: I0219 13:13:09.307673 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c3ab3c-b007-48bf-9267-1be0df74a551" containerName="extract-utilities" Feb 19 13:13:09 crc kubenswrapper[4861]: E0219 13:13:09.307705 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c3ab3c-b007-48bf-9267-1be0df74a551" containerName="registry-server" Feb 19 13:13:09 crc kubenswrapper[4861]: I0219 13:13:09.307717 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c3ab3c-b007-48bf-9267-1be0df74a551" containerName="registry-server" Feb 19 13:13:09 crc kubenswrapper[4861]: E0219 13:13:09.307730 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db4ca60-2b1f-4d62-a217-890f7be1e863" containerName="extract-content" Feb 19 13:13:09 crc kubenswrapper[4861]: I0219 13:13:09.307739 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db4ca60-2b1f-4d62-a217-890f7be1e863" containerName="extract-content" Feb 19 13:13:09 crc kubenswrapper[4861]: E0219 13:13:09.307761 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c3ab3c-b007-48bf-9267-1be0df74a551" containerName="extract-content" Feb 19 13:13:09 crc kubenswrapper[4861]: I0219 13:13:09.307770 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c3ab3c-b007-48bf-9267-1be0df74a551" containerName="extract-content" Feb 19 13:13:09 crc kubenswrapper[4861]: E0219 13:13:09.307815 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db4ca60-2b1f-4d62-a217-890f7be1e863" containerName="extract-utilities" Feb 19 13:13:09 crc kubenswrapper[4861]: I0219 13:13:09.307825 4861 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="5db4ca60-2b1f-4d62-a217-890f7be1e863" containerName="extract-utilities" Feb 19 13:13:09 crc kubenswrapper[4861]: E0219 13:13:09.307841 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db4ca60-2b1f-4d62-a217-890f7be1e863" containerName="registry-server" Feb 19 13:13:09 crc kubenswrapper[4861]: I0219 13:13:09.307850 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db4ca60-2b1f-4d62-a217-890f7be1e863" containerName="registry-server" Feb 19 13:13:09 crc kubenswrapper[4861]: I0219 13:13:09.308122 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0d0ca58-8fb8-451f-86d2-1e8e510aefe5" containerName="pruner" Feb 19 13:13:09 crc kubenswrapper[4861]: I0219 13:13:09.308147 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="26c3ab3c-b007-48bf-9267-1be0df74a551" containerName="registry-server" Feb 19 13:13:09 crc kubenswrapper[4861]: I0219 13:13:09.308158 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db4ca60-2b1f-4d62-a217-890f7be1e863" containerName="registry-server" Feb 19 13:13:09 crc kubenswrapper[4861]: I0219 13:13:09.309076 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 13:13:09 crc kubenswrapper[4861]: I0219 13:13:09.311783 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 13:13:09 crc kubenswrapper[4861]: I0219 13:13:09.312034 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 13:13:09 crc kubenswrapper[4861]: I0219 13:13:09.320556 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 13:13:09 crc kubenswrapper[4861]: I0219 13:13:09.378833 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/86e0ea74-d593-456c-9d4a-b5487351acaa-kubelet-dir\") pod \"installer-9-crc\" (UID: \"86e0ea74-d593-456c-9d4a-b5487351acaa\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 13:13:09 crc kubenswrapper[4861]: I0219 13:13:09.378903 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86e0ea74-d593-456c-9d4a-b5487351acaa-kube-api-access\") pod \"installer-9-crc\" (UID: \"86e0ea74-d593-456c-9d4a-b5487351acaa\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 13:13:09 crc kubenswrapper[4861]: I0219 13:13:09.378968 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/86e0ea74-d593-456c-9d4a-b5487351acaa-var-lock\") pod \"installer-9-crc\" (UID: \"86e0ea74-d593-456c-9d4a-b5487351acaa\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 13:13:09 crc kubenswrapper[4861]: I0219 13:13:09.480141 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/86e0ea74-d593-456c-9d4a-b5487351acaa-kubelet-dir\") pod \"installer-9-crc\" (UID: \"86e0ea74-d593-456c-9d4a-b5487351acaa\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 13:13:09 crc kubenswrapper[4861]: I0219 13:13:09.480498 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86e0ea74-d593-456c-9d4a-b5487351acaa-kube-api-access\") pod \"installer-9-crc\" (UID: \"86e0ea74-d593-456c-9d4a-b5487351acaa\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 13:13:09 crc kubenswrapper[4861]: I0219 13:13:09.480537 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/86e0ea74-d593-456c-9d4a-b5487351acaa-var-lock\") pod \"installer-9-crc\" (UID: \"86e0ea74-d593-456c-9d4a-b5487351acaa\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 13:13:09 crc kubenswrapper[4861]: I0219 13:13:09.480594 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/86e0ea74-d593-456c-9d4a-b5487351acaa-var-lock\") pod \"installer-9-crc\" (UID: \"86e0ea74-d593-456c-9d4a-b5487351acaa\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 13:13:09 crc kubenswrapper[4861]: I0219 13:13:09.480348 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/86e0ea74-d593-456c-9d4a-b5487351acaa-kubelet-dir\") pod \"installer-9-crc\" (UID: \"86e0ea74-d593-456c-9d4a-b5487351acaa\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 13:13:09 crc kubenswrapper[4861]: I0219 13:13:09.507489 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86e0ea74-d593-456c-9d4a-b5487351acaa-kube-api-access\") pod \"installer-9-crc\" (UID: \"86e0ea74-d593-456c-9d4a-b5487351acaa\") " 
pod="openshift-kube-apiserver/installer-9-crc" Feb 19 13:13:09 crc kubenswrapper[4861]: I0219 13:13:09.635891 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 13:13:09 crc kubenswrapper[4861]: I0219 13:13:09.919623 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-plx4l" event={"ID":"685e7fb8-e8da-4f5a-87c5-424d1e12a6be","Type":"ContainerStarted","Data":"ec9affbdf9ace6358a39022f4ff805f8f05516d1348f1a70a625f221461ea5c4"} Feb 19 13:13:09 crc kubenswrapper[4861]: I0219 13:13:09.922360 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j97fz" event={"ID":"da26b508-7a00-4494-afc8-3da8b16eeaa7","Type":"ContainerStarted","Data":"aa3eae0708a30026b1354065ccd2b78584b6063fcebc35cced07f80110d17ae8"} Feb 19 13:13:09 crc kubenswrapper[4861]: I0219 13:13:09.939851 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-plx4l" podStartSLOduration=3.358651626 podStartE2EDuration="47.939831268s" podCreationTimestamp="2026-02-19 13:12:22 +0000 UTC" firstStartedPulling="2026-02-19 13:12:24.341554151 +0000 UTC m=+159.002657379" lastFinishedPulling="2026-02-19 13:13:08.922733793 +0000 UTC m=+203.583837021" observedRunningTime="2026-02-19 13:13:09.936759568 +0000 UTC m=+204.597862796" watchObservedRunningTime="2026-02-19 13:13:09.939831268 +0000 UTC m=+204.600934496" Feb 19 13:13:10 crc kubenswrapper[4861]: I0219 13:13:10.044319 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-khrzg" Feb 19 13:13:10 crc kubenswrapper[4861]: I0219 13:13:10.044384 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-khrzg" Feb 19 13:13:10 crc kubenswrapper[4861]: I0219 13:13:10.092366 4861 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/certified-operators-khrzg" Feb 19 13:13:10 crc kubenswrapper[4861]: I0219 13:13:10.109327 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 13:13:10 crc kubenswrapper[4861]: I0219 13:13:10.927854 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"86e0ea74-d593-456c-9d4a-b5487351acaa","Type":"ContainerStarted","Data":"2b35708c900bf83cb490807a1d2413d8ff551369b85043b64f1d13cc42f66ec0"} Feb 19 13:13:10 crc kubenswrapper[4861]: I0219 13:13:10.928123 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"86e0ea74-d593-456c-9d4a-b5487351acaa","Type":"ContainerStarted","Data":"92bb2708ee5d175e3a2b836dff07a9ed4f0c1e3305dc4dc8d022e900fde844e1"} Feb 19 13:13:10 crc kubenswrapper[4861]: I0219 13:13:10.930379 4861 generic.go:334] "Generic (PLEG): container finished" podID="da26b508-7a00-4494-afc8-3da8b16eeaa7" containerID="aa3eae0708a30026b1354065ccd2b78584b6063fcebc35cced07f80110d17ae8" exitCode=0 Feb 19 13:13:10 crc kubenswrapper[4861]: I0219 13:13:10.930447 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j97fz" event={"ID":"da26b508-7a00-4494-afc8-3da8b16eeaa7","Type":"ContainerDied","Data":"aa3eae0708a30026b1354065ccd2b78584b6063fcebc35cced07f80110d17ae8"} Feb 19 13:13:10 crc kubenswrapper[4861]: I0219 13:13:10.947286 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.947272807 podStartE2EDuration="1.947272807s" podCreationTimestamp="2026-02-19 13:13:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:13:10.944119624 +0000 UTC m=+205.605222852" watchObservedRunningTime="2026-02-19 
13:13:10.947272807 +0000 UTC m=+205.608376035" Feb 19 13:13:12 crc kubenswrapper[4861]: I0219 13:13:12.745648 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kcqf7" Feb 19 13:13:12 crc kubenswrapper[4861]: I0219 13:13:12.745710 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kcqf7" Feb 19 13:13:13 crc kubenswrapper[4861]: I0219 13:13:13.113955 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-plx4l" Feb 19 13:13:13 crc kubenswrapper[4861]: I0219 13:13:13.114021 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-plx4l" Feb 19 13:13:13 crc kubenswrapper[4861]: I0219 13:13:13.788473 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kcqf7" podUID="617be892-2391-43d5-94d0-c0600d0c66a0" containerName="registry-server" probeResult="failure" output=< Feb 19 13:13:13 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Feb 19 13:13:13 crc kubenswrapper[4861]: > Feb 19 13:13:14 crc kubenswrapper[4861]: I0219 13:13:14.162726 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-plx4l" podUID="685e7fb8-e8da-4f5a-87c5-424d1e12a6be" containerName="registry-server" probeResult="failure" output=< Feb 19 13:13:14 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Feb 19 13:13:14 crc kubenswrapper[4861]: > Feb 19 13:13:14 crc kubenswrapper[4861]: I0219 13:13:14.951147 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j97fz" event={"ID":"da26b508-7a00-4494-afc8-3da8b16eeaa7","Type":"ContainerStarted","Data":"8ab331565c7702de59851ad150a711f871b6f6c51dee8637ea12b9917b970905"} Feb 19 13:13:14 crc kubenswrapper[4861]: I0219 
13:13:14.969843 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j97fz" podStartSLOduration=4.544453503 podStartE2EDuration="55.969826213s" podCreationTimestamp="2026-02-19 13:12:19 +0000 UTC" firstStartedPulling="2026-02-19 13:12:22.150526659 +0000 UTC m=+156.811629887" lastFinishedPulling="2026-02-19 13:13:13.575899369 +0000 UTC m=+208.237002597" observedRunningTime="2026-02-19 13:13:14.967393992 +0000 UTC m=+209.628497250" watchObservedRunningTime="2026-02-19 13:13:14.969826213 +0000 UTC m=+209.630929431" Feb 19 13:13:16 crc kubenswrapper[4861]: I0219 13:13:16.996773 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54d4bc985c-r2gk6"] Feb 19 13:13:16 crc kubenswrapper[4861]: I0219 13:13:16.997101 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-54d4bc985c-r2gk6" podUID="e6cd5e45-1b9c-4c11-ae45-315de2805446" containerName="controller-manager" containerID="cri-o://d09edfbd54f6a56d8e56d587e96b223848b17799bfa8ce5075cef4bdbb7dd8e9" gracePeriod=30 Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.028381 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d96d659d7-jf6vq"] Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.028818 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6d96d659d7-jf6vq" podUID="b0b10e17-9de8-4648-8cc6-db7f48ebabd4" containerName="route-controller-manager" containerID="cri-o://0abe651ab9ecd7379c51dca2f34219a3eee3fdcaa4ee9d9a613ed81a144018f6" gracePeriod=30 Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.566273 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d96d659d7-jf6vq" Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.603978 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54d4bc985c-r2gk6" Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.628201 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0b10e17-9de8-4648-8cc6-db7f48ebabd4-serving-cert\") pod \"b0b10e17-9de8-4648-8cc6-db7f48ebabd4\" (UID: \"b0b10e17-9de8-4648-8cc6-db7f48ebabd4\") " Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.628252 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcvxq\" (UniqueName: \"kubernetes.io/projected/e6cd5e45-1b9c-4c11-ae45-315de2805446-kube-api-access-bcvxq\") pod \"e6cd5e45-1b9c-4c11-ae45-315de2805446\" (UID: \"e6cd5e45-1b9c-4c11-ae45-315de2805446\") " Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.628351 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6cd5e45-1b9c-4c11-ae45-315de2805446-proxy-ca-bundles\") pod \"e6cd5e45-1b9c-4c11-ae45-315de2805446\" (UID: \"e6cd5e45-1b9c-4c11-ae45-315de2805446\") " Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.629874 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6cd5e45-1b9c-4c11-ae45-315de2805446-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e6cd5e45-1b9c-4c11-ae45-315de2805446" (UID: "e6cd5e45-1b9c-4c11-ae45-315de2805446"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.630406 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xqv8\" (UniqueName: \"kubernetes.io/projected/b0b10e17-9de8-4648-8cc6-db7f48ebabd4-kube-api-access-5xqv8\") pod \"b0b10e17-9de8-4648-8cc6-db7f48ebabd4\" (UID: \"b0b10e17-9de8-4648-8cc6-db7f48ebabd4\") " Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.630479 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6cd5e45-1b9c-4c11-ae45-315de2805446-serving-cert\") pod \"e6cd5e45-1b9c-4c11-ae45-315de2805446\" (UID: \"e6cd5e45-1b9c-4c11-ae45-315de2805446\") " Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.630567 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0b10e17-9de8-4648-8cc6-db7f48ebabd4-config\") pod \"b0b10e17-9de8-4648-8cc6-db7f48ebabd4\" (UID: \"b0b10e17-9de8-4648-8cc6-db7f48ebabd4\") " Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.630617 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6cd5e45-1b9c-4c11-ae45-315de2805446-config\") pod \"e6cd5e45-1b9c-4c11-ae45-315de2805446\" (UID: \"e6cd5e45-1b9c-4c11-ae45-315de2805446\") " Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.630646 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0b10e17-9de8-4648-8cc6-db7f48ebabd4-client-ca\") pod \"b0b10e17-9de8-4648-8cc6-db7f48ebabd4\" (UID: \"b0b10e17-9de8-4648-8cc6-db7f48ebabd4\") " Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.630815 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/e6cd5e45-1b9c-4c11-ae45-315de2805446-client-ca\") pod \"e6cd5e45-1b9c-4c11-ae45-315de2805446\" (UID: \"e6cd5e45-1b9c-4c11-ae45-315de2805446\") " Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.631329 4861 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6cd5e45-1b9c-4c11-ae45-315de2805446-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.631922 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6cd5e45-1b9c-4c11-ae45-315de2805446-client-ca" (OuterVolumeSpecName: "client-ca") pod "e6cd5e45-1b9c-4c11-ae45-315de2805446" (UID: "e6cd5e45-1b9c-4c11-ae45-315de2805446"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.632665 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0b10e17-9de8-4648-8cc6-db7f48ebabd4-config" (OuterVolumeSpecName: "config") pod "b0b10e17-9de8-4648-8cc6-db7f48ebabd4" (UID: "b0b10e17-9de8-4648-8cc6-db7f48ebabd4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.633600 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0b10e17-9de8-4648-8cc6-db7f48ebabd4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b0b10e17-9de8-4648-8cc6-db7f48ebabd4" (UID: "b0b10e17-9de8-4648-8cc6-db7f48ebabd4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.634198 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6cd5e45-1b9c-4c11-ae45-315de2805446-config" (OuterVolumeSpecName: "config") pod "e6cd5e45-1b9c-4c11-ae45-315de2805446" (UID: "e6cd5e45-1b9c-4c11-ae45-315de2805446"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.634761 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0b10e17-9de8-4648-8cc6-db7f48ebabd4-client-ca" (OuterVolumeSpecName: "client-ca") pod "b0b10e17-9de8-4648-8cc6-db7f48ebabd4" (UID: "b0b10e17-9de8-4648-8cc6-db7f48ebabd4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.634901 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6cd5e45-1b9c-4c11-ae45-315de2805446-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e6cd5e45-1b9c-4c11-ae45-315de2805446" (UID: "e6cd5e45-1b9c-4c11-ae45-315de2805446"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.635332 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0b10e17-9de8-4648-8cc6-db7f48ebabd4-kube-api-access-5xqv8" (OuterVolumeSpecName: "kube-api-access-5xqv8") pod "b0b10e17-9de8-4648-8cc6-db7f48ebabd4" (UID: "b0b10e17-9de8-4648-8cc6-db7f48ebabd4"). InnerVolumeSpecName "kube-api-access-5xqv8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.636304 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6cd5e45-1b9c-4c11-ae45-315de2805446-kube-api-access-bcvxq" (OuterVolumeSpecName: "kube-api-access-bcvxq") pod "e6cd5e45-1b9c-4c11-ae45-315de2805446" (UID: "e6cd5e45-1b9c-4c11-ae45-315de2805446"). InnerVolumeSpecName "kube-api-access-bcvxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.732476 4861 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6cd5e45-1b9c-4c11-ae45-315de2805446-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.732771 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0b10e17-9de8-4648-8cc6-db7f48ebabd4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.732807 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcvxq\" (UniqueName: \"kubernetes.io/projected/e6cd5e45-1b9c-4c11-ae45-315de2805446-kube-api-access-bcvxq\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.732820 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xqv8\" (UniqueName: \"kubernetes.io/projected/b0b10e17-9de8-4648-8cc6-db7f48ebabd4-kube-api-access-5xqv8\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.732829 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6cd5e45-1b9c-4c11-ae45-315de2805446-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.732837 4861 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/b0b10e17-9de8-4648-8cc6-db7f48ebabd4-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.732845 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6cd5e45-1b9c-4c11-ae45-315de2805446-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.732853 4861 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0b10e17-9de8-4648-8cc6-db7f48ebabd4-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.970464 4861 generic.go:334] "Generic (PLEG): container finished" podID="e6cd5e45-1b9c-4c11-ae45-315de2805446" containerID="d09edfbd54f6a56d8e56d587e96b223848b17799bfa8ce5075cef4bdbb7dd8e9" exitCode=0 Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.970536 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54d4bc985c-r2gk6" event={"ID":"e6cd5e45-1b9c-4c11-ae45-315de2805446","Type":"ContainerDied","Data":"d09edfbd54f6a56d8e56d587e96b223848b17799bfa8ce5075cef4bdbb7dd8e9"} Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.970570 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-54d4bc985c-r2gk6" Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.970616 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54d4bc985c-r2gk6" event={"ID":"e6cd5e45-1b9c-4c11-ae45-315de2805446","Type":"ContainerDied","Data":"b11d48817d69d39c27ec16ad2a113c70dda4f1705d9c3ab2dc55312b26533dc9"} Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.970873 4861 scope.go:117] "RemoveContainer" containerID="d09edfbd54f6a56d8e56d587e96b223848b17799bfa8ce5075cef4bdbb7dd8e9" Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.973480 4861 generic.go:334] "Generic (PLEG): container finished" podID="b0b10e17-9de8-4648-8cc6-db7f48ebabd4" containerID="0abe651ab9ecd7379c51dca2f34219a3eee3fdcaa4ee9d9a613ed81a144018f6" exitCode=0 Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.973527 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d96d659d7-jf6vq" event={"ID":"b0b10e17-9de8-4648-8cc6-db7f48ebabd4","Type":"ContainerDied","Data":"0abe651ab9ecd7379c51dca2f34219a3eee3fdcaa4ee9d9a613ed81a144018f6"} Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.973558 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d96d659d7-jf6vq" event={"ID":"b0b10e17-9de8-4648-8cc6-db7f48ebabd4","Type":"ContainerDied","Data":"a9f8b5ff5ba87328a3031293277deddfcd587e4c44626f97448ad1d090b1a7a5"} Feb 19 13:13:17 crc kubenswrapper[4861]: I0219 13:13:17.973597 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d96d659d7-jf6vq" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:17.999725 4861 scope.go:117] "RemoveContainer" containerID="d09edfbd54f6a56d8e56d587e96b223848b17799bfa8ce5075cef4bdbb7dd8e9" Feb 19 13:13:18 crc kubenswrapper[4861]: E0219 13:13:18.000275 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d09edfbd54f6a56d8e56d587e96b223848b17799bfa8ce5075cef4bdbb7dd8e9\": container with ID starting with d09edfbd54f6a56d8e56d587e96b223848b17799bfa8ce5075cef4bdbb7dd8e9 not found: ID does not exist" containerID="d09edfbd54f6a56d8e56d587e96b223848b17799bfa8ce5075cef4bdbb7dd8e9" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.000371 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d09edfbd54f6a56d8e56d587e96b223848b17799bfa8ce5075cef4bdbb7dd8e9"} err="failed to get container status \"d09edfbd54f6a56d8e56d587e96b223848b17799bfa8ce5075cef4bdbb7dd8e9\": rpc error: code = NotFound desc = could not find container \"d09edfbd54f6a56d8e56d587e96b223848b17799bfa8ce5075cef4bdbb7dd8e9\": container with ID starting with d09edfbd54f6a56d8e56d587e96b223848b17799bfa8ce5075cef4bdbb7dd8e9 not found: ID does not exist" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.000401 4861 scope.go:117] "RemoveContainer" containerID="0abe651ab9ecd7379c51dca2f34219a3eee3fdcaa4ee9d9a613ed81a144018f6" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.007982 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54d4bc985c-r2gk6"] Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.015481 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-54d4bc985c-r2gk6"] Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.017572 4861 scope.go:117] 
"RemoveContainer" containerID="0abe651ab9ecd7379c51dca2f34219a3eee3fdcaa4ee9d9a613ed81a144018f6" Feb 19 13:13:18 crc kubenswrapper[4861]: E0219 13:13:18.018098 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0abe651ab9ecd7379c51dca2f34219a3eee3fdcaa4ee9d9a613ed81a144018f6\": container with ID starting with 0abe651ab9ecd7379c51dca2f34219a3eee3fdcaa4ee9d9a613ed81a144018f6 not found: ID does not exist" containerID="0abe651ab9ecd7379c51dca2f34219a3eee3fdcaa4ee9d9a613ed81a144018f6" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.018158 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0abe651ab9ecd7379c51dca2f34219a3eee3fdcaa4ee9d9a613ed81a144018f6"} err="failed to get container status \"0abe651ab9ecd7379c51dca2f34219a3eee3fdcaa4ee9d9a613ed81a144018f6\": rpc error: code = NotFound desc = could not find container \"0abe651ab9ecd7379c51dca2f34219a3eee3fdcaa4ee9d9a613ed81a144018f6\": container with ID starting with 0abe651ab9ecd7379c51dca2f34219a3eee3fdcaa4ee9d9a613ed81a144018f6 not found: ID does not exist" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.022338 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d96d659d7-jf6vq"] Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.025276 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d96d659d7-jf6vq"] Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.631824 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd"] Feb 19 13:13:18 crc kubenswrapper[4861]: E0219 13:13:18.632650 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0b10e17-9de8-4648-8cc6-db7f48ebabd4" containerName="route-controller-manager" Feb 19 13:13:18 crc 
kubenswrapper[4861]: I0219 13:13:18.632688 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0b10e17-9de8-4648-8cc6-db7f48ebabd4" containerName="route-controller-manager" Feb 19 13:13:18 crc kubenswrapper[4861]: E0219 13:13:18.632716 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6cd5e45-1b9c-4c11-ae45-315de2805446" containerName="controller-manager" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.632729 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6cd5e45-1b9c-4c11-ae45-315de2805446" containerName="controller-manager" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.632920 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6cd5e45-1b9c-4c11-ae45-315de2805446" containerName="controller-manager" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.632953 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0b10e17-9de8-4648-8cc6-db7f48ebabd4" containerName="route-controller-manager" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.633550 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.635450 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-599767f8f5-vplgj"] Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.636417 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-599767f8f5-vplgj" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.640001 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.641227 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.642351 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.643015 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e9c1b134-7177-43e4-9f8a-d77e828d638b-proxy-ca-bundles\") pod \"controller-manager-69b4d5bcb5-hn4vd\" (UID: \"e9c1b134-7177-43e4-9f8a-d77e828d638b\") " pod="openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.643070 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a05c603c-8d91-4293-a4fe-39a0f0da7306-serving-cert\") pod \"route-controller-manager-599767f8f5-vplgj\" (UID: \"a05c603c-8d91-4293-a4fe-39a0f0da7306\") " pod="openshift-route-controller-manager/route-controller-manager-599767f8f5-vplgj" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.643121 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmfql\" (UniqueName: \"kubernetes.io/projected/a05c603c-8d91-4293-a4fe-39a0f0da7306-kube-api-access-jmfql\") pod \"route-controller-manager-599767f8f5-vplgj\" (UID: \"a05c603c-8d91-4293-a4fe-39a0f0da7306\") " 
pod="openshift-route-controller-manager/route-controller-manager-599767f8f5-vplgj" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.643231 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9c1b134-7177-43e4-9f8a-d77e828d638b-config\") pod \"controller-manager-69b4d5bcb5-hn4vd\" (UID: \"e9c1b134-7177-43e4-9f8a-d77e828d638b\") " pod="openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.643273 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a05c603c-8d91-4293-a4fe-39a0f0da7306-config\") pod \"route-controller-manager-599767f8f5-vplgj\" (UID: \"a05c603c-8d91-4293-a4fe-39a0f0da7306\") " pod="openshift-route-controller-manager/route-controller-manager-599767f8f5-vplgj" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.643321 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n47d\" (UniqueName: \"kubernetes.io/projected/e9c1b134-7177-43e4-9f8a-d77e828d638b-kube-api-access-4n47d\") pod \"controller-manager-69b4d5bcb5-hn4vd\" (UID: \"e9c1b134-7177-43e4-9f8a-d77e828d638b\") " pod="openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.643354 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9c1b134-7177-43e4-9f8a-d77e828d638b-serving-cert\") pod \"controller-manager-69b4d5bcb5-hn4vd\" (UID: \"e9c1b134-7177-43e4-9f8a-d77e828d638b\") " pod="openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.643410 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a05c603c-8d91-4293-a4fe-39a0f0da7306-client-ca\") pod \"route-controller-manager-599767f8f5-vplgj\" (UID: \"a05c603c-8d91-4293-a4fe-39a0f0da7306\") " pod="openshift-route-controller-manager/route-controller-manager-599767f8f5-vplgj" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.643468 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e9c1b134-7177-43e4-9f8a-d77e828d638b-client-ca\") pod \"controller-manager-69b4d5bcb5-hn4vd\" (UID: \"e9c1b134-7177-43e4-9f8a-d77e828d638b\") " pod="openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.643666 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.643881 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.644135 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.644242 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.646118 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.646259 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.646549 4861 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"serving-cert" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.649777 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-599767f8f5-vplgj"] Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.658390 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd"] Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.663921 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.664452 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.681746 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.743799 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n47d\" (UniqueName: \"kubernetes.io/projected/e9c1b134-7177-43e4-9f8a-d77e828d638b-kube-api-access-4n47d\") pod \"controller-manager-69b4d5bcb5-hn4vd\" (UID: \"e9c1b134-7177-43e4-9f8a-d77e828d638b\") " pod="openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.743843 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9c1b134-7177-43e4-9f8a-d77e828d638b-serving-cert\") pod \"controller-manager-69b4d5bcb5-hn4vd\" (UID: \"e9c1b134-7177-43e4-9f8a-d77e828d638b\") " pod="openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.743896 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/a05c603c-8d91-4293-a4fe-39a0f0da7306-client-ca\") pod \"route-controller-manager-599767f8f5-vplgj\" (UID: \"a05c603c-8d91-4293-a4fe-39a0f0da7306\") " pod="openshift-route-controller-manager/route-controller-manager-599767f8f5-vplgj" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.743910 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e9c1b134-7177-43e4-9f8a-d77e828d638b-client-ca\") pod \"controller-manager-69b4d5bcb5-hn4vd\" (UID: \"e9c1b134-7177-43e4-9f8a-d77e828d638b\") " pod="openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.744514 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e9c1b134-7177-43e4-9f8a-d77e828d638b-proxy-ca-bundles\") pod \"controller-manager-69b4d5bcb5-hn4vd\" (UID: \"e9c1b134-7177-43e4-9f8a-d77e828d638b\") " pod="openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.744586 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a05c603c-8d91-4293-a4fe-39a0f0da7306-serving-cert\") pod \"route-controller-manager-599767f8f5-vplgj\" (UID: \"a05c603c-8d91-4293-a4fe-39a0f0da7306\") " pod="openshift-route-controller-manager/route-controller-manager-599767f8f5-vplgj" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.744655 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmfql\" (UniqueName: \"kubernetes.io/projected/a05c603c-8d91-4293-a4fe-39a0f0da7306-kube-api-access-jmfql\") pod \"route-controller-manager-599767f8f5-vplgj\" (UID: \"a05c603c-8d91-4293-a4fe-39a0f0da7306\") " pod="openshift-route-controller-manager/route-controller-manager-599767f8f5-vplgj" 
Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.744795 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9c1b134-7177-43e4-9f8a-d77e828d638b-config\") pod \"controller-manager-69b4d5bcb5-hn4vd\" (UID: \"e9c1b134-7177-43e4-9f8a-d77e828d638b\") " pod="openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.744853 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a05c603c-8d91-4293-a4fe-39a0f0da7306-config\") pod \"route-controller-manager-599767f8f5-vplgj\" (UID: \"a05c603c-8d91-4293-a4fe-39a0f0da7306\") " pod="openshift-route-controller-manager/route-controller-manager-599767f8f5-vplgj" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.744987 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e9c1b134-7177-43e4-9f8a-d77e828d638b-client-ca\") pod \"controller-manager-69b4d5bcb5-hn4vd\" (UID: \"e9c1b134-7177-43e4-9f8a-d77e828d638b\") " pod="openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.745200 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a05c603c-8d91-4293-a4fe-39a0f0da7306-client-ca\") pod \"route-controller-manager-599767f8f5-vplgj\" (UID: \"a05c603c-8d91-4293-a4fe-39a0f0da7306\") " pod="openshift-route-controller-manager/route-controller-manager-599767f8f5-vplgj" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.746042 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a05c603c-8d91-4293-a4fe-39a0f0da7306-config\") pod \"route-controller-manager-599767f8f5-vplgj\" (UID: \"a05c603c-8d91-4293-a4fe-39a0f0da7306\") " 
pod="openshift-route-controller-manager/route-controller-manager-599767f8f5-vplgj" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.746117 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9c1b134-7177-43e4-9f8a-d77e828d638b-config\") pod \"controller-manager-69b4d5bcb5-hn4vd\" (UID: \"e9c1b134-7177-43e4-9f8a-d77e828d638b\") " pod="openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.746416 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e9c1b134-7177-43e4-9f8a-d77e828d638b-proxy-ca-bundles\") pod \"controller-manager-69b4d5bcb5-hn4vd\" (UID: \"e9c1b134-7177-43e4-9f8a-d77e828d638b\") " pod="openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.749814 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9c1b134-7177-43e4-9f8a-d77e828d638b-serving-cert\") pod \"controller-manager-69b4d5bcb5-hn4vd\" (UID: \"e9c1b134-7177-43e4-9f8a-d77e828d638b\") " pod="openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.749840 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a05c603c-8d91-4293-a4fe-39a0f0da7306-serving-cert\") pod \"route-controller-manager-599767f8f5-vplgj\" (UID: \"a05c603c-8d91-4293-a4fe-39a0f0da7306\") " pod="openshift-route-controller-manager/route-controller-manager-599767f8f5-vplgj" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.762818 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n47d\" (UniqueName: \"kubernetes.io/projected/e9c1b134-7177-43e4-9f8a-d77e828d638b-kube-api-access-4n47d\") pod 
\"controller-manager-69b4d5bcb5-hn4vd\" (UID: \"e9c1b134-7177-43e4-9f8a-d77e828d638b\") " pod="openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.763080 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmfql\" (UniqueName: \"kubernetes.io/projected/a05c603c-8d91-4293-a4fe-39a0f0da7306-kube-api-access-jmfql\") pod \"route-controller-manager-599767f8f5-vplgj\" (UID: \"a05c603c-8d91-4293-a4fe-39a0f0da7306\") " pod="openshift-route-controller-manager/route-controller-manager-599767f8f5-vplgj" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.968338 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd" Feb 19 13:13:18 crc kubenswrapper[4861]: I0219 13:13:18.977549 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-599767f8f5-vplgj" Feb 19 13:13:19 crc kubenswrapper[4861]: I0219 13:13:19.381017 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd"] Feb 19 13:13:19 crc kubenswrapper[4861]: W0219 13:13:19.389842 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9c1b134_7177_43e4_9f8a_d77e828d638b.slice/crio-65fee61b060e5a499ec36cbc12fe5c232e4cf8c049d16a71ff1101297a98d2c1 WatchSource:0}: Error finding container 65fee61b060e5a499ec36cbc12fe5c232e4cf8c049d16a71ff1101297a98d2c1: Status 404 returned error can't find the container with id 65fee61b060e5a499ec36cbc12fe5c232e4cf8c049d16a71ff1101297a98d2c1 Feb 19 13:13:19 crc kubenswrapper[4861]: I0219 13:13:19.433271 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-599767f8f5-vplgj"] Feb 19 13:13:19 crc 
kubenswrapper[4861]: W0219 13:13:19.457146 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda05c603c_8d91_4293_a4fe_39a0f0da7306.slice/crio-274c86e76dbb7b7878e488519f5078ba899c34d55909584650fb7064968c7190 WatchSource:0}: Error finding container 274c86e76dbb7b7878e488519f5078ba899c34d55909584650fb7064968c7190: Status 404 returned error can't find the container with id 274c86e76dbb7b7878e488519f5078ba899c34d55909584650fb7064968c7190 Feb 19 13:13:19 crc kubenswrapper[4861]: I0219 13:13:19.983576 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0b10e17-9de8-4648-8cc6-db7f48ebabd4" path="/var/lib/kubelet/pods/b0b10e17-9de8-4648-8cc6-db7f48ebabd4/volumes" Feb 19 13:13:19 crc kubenswrapper[4861]: I0219 13:13:19.984831 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6cd5e45-1b9c-4c11-ae45-315de2805446" path="/var/lib/kubelet/pods/e6cd5e45-1b9c-4c11-ae45-315de2805446/volumes" Feb 19 13:13:19 crc kubenswrapper[4861]: I0219 13:13:19.997853 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd" event={"ID":"e9c1b134-7177-43e4-9f8a-d77e828d638b","Type":"ContainerStarted","Data":"bb14e5256bfabec3c86590a25457f1c1f8909b213da55924abf5c2f704b1e3c5"} Feb 19 13:13:19 crc kubenswrapper[4861]: I0219 13:13:19.997909 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd" event={"ID":"e9c1b134-7177-43e4-9f8a-d77e828d638b","Type":"ContainerStarted","Data":"65fee61b060e5a499ec36cbc12fe5c232e4cf8c049d16a71ff1101297a98d2c1"} Feb 19 13:13:19 crc kubenswrapper[4861]: I0219 13:13:19.998066 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd" Feb 19 13:13:20 crc kubenswrapper[4861]: I0219 13:13:20.000041 4861 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-599767f8f5-vplgj" event={"ID":"a05c603c-8d91-4293-a4fe-39a0f0da7306","Type":"ContainerStarted","Data":"fce5bab870b6bf8be3c0dcbda0646a2cd99488088ed666be4a94c1928bc74f7c"} Feb 19 13:13:20 crc kubenswrapper[4861]: I0219 13:13:20.000084 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-599767f8f5-vplgj" event={"ID":"a05c603c-8d91-4293-a4fe-39a0f0da7306","Type":"ContainerStarted","Data":"274c86e76dbb7b7878e488519f5078ba899c34d55909584650fb7064968c7190"} Feb 19 13:13:20 crc kubenswrapper[4861]: I0219 13:13:20.000276 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-599767f8f5-vplgj" Feb 19 13:13:20 crc kubenswrapper[4861]: I0219 13:13:20.002974 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd" Feb 19 13:13:20 crc kubenswrapper[4861]: I0219 13:13:20.018601 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd" podStartSLOduration=3.018584223 podStartE2EDuration="3.018584223s" podCreationTimestamp="2026-02-19 13:13:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:13:20.018179391 +0000 UTC m=+214.679282639" watchObservedRunningTime="2026-02-19 13:13:20.018584223 +0000 UTC m=+214.679687451" Feb 19 13:13:20 crc kubenswrapper[4861]: I0219 13:13:20.060408 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-599767f8f5-vplgj" podStartSLOduration=3.060389077 podStartE2EDuration="3.060389077s" podCreationTimestamp="2026-02-19 13:13:17 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:13:20.060013946 +0000 UTC m=+214.721117184" watchObservedRunningTime="2026-02-19 13:13:20.060389077 +0000 UTC m=+214.721492305" Feb 19 13:13:20 crc kubenswrapper[4861]: I0219 13:13:20.099035 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-khrzg" Feb 19 13:13:20 crc kubenswrapper[4861]: I0219 13:13:20.152860 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j97fz" Feb 19 13:13:20 crc kubenswrapper[4861]: I0219 13:13:20.153196 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j97fz" Feb 19 13:13:20 crc kubenswrapper[4861]: I0219 13:13:20.183836 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-599767f8f5-vplgj" Feb 19 13:13:20 crc kubenswrapper[4861]: I0219 13:13:20.198007 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j97fz" Feb 19 13:13:21 crc kubenswrapper[4861]: I0219 13:13:21.045581 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j97fz" Feb 19 13:13:21 crc kubenswrapper[4861]: I0219 13:13:21.616008 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j97fz"] Feb 19 13:13:22 crc kubenswrapper[4861]: I0219 13:13:22.803500 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kcqf7" Feb 19 13:13:22 crc kubenswrapper[4861]: I0219 13:13:22.846318 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kcqf7" Feb 19 13:13:23 crc kubenswrapper[4861]: 
I0219 13:13:23.016507 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j97fz" podUID="da26b508-7a00-4494-afc8-3da8b16eeaa7" containerName="registry-server" containerID="cri-o://8ab331565c7702de59851ad150a711f871b6f6c51dee8637ea12b9917b970905" gracePeriod=2 Feb 19 13:13:23 crc kubenswrapper[4861]: I0219 13:13:23.185805 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-plx4l" Feb 19 13:13:23 crc kubenswrapper[4861]: I0219 13:13:23.243189 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-plx4l" Feb 19 13:13:23 crc kubenswrapper[4861]: I0219 13:13:23.441972 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j97fz" Feb 19 13:13:23 crc kubenswrapper[4861]: I0219 13:13:23.613786 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da26b508-7a00-4494-afc8-3da8b16eeaa7-catalog-content\") pod \"da26b508-7a00-4494-afc8-3da8b16eeaa7\" (UID: \"da26b508-7a00-4494-afc8-3da8b16eeaa7\") " Feb 19 13:13:23 crc kubenswrapper[4861]: I0219 13:13:23.613916 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttspt\" (UniqueName: \"kubernetes.io/projected/da26b508-7a00-4494-afc8-3da8b16eeaa7-kube-api-access-ttspt\") pod \"da26b508-7a00-4494-afc8-3da8b16eeaa7\" (UID: \"da26b508-7a00-4494-afc8-3da8b16eeaa7\") " Feb 19 13:13:23 crc kubenswrapper[4861]: I0219 13:13:23.613962 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da26b508-7a00-4494-afc8-3da8b16eeaa7-utilities\") pod \"da26b508-7a00-4494-afc8-3da8b16eeaa7\" (UID: \"da26b508-7a00-4494-afc8-3da8b16eeaa7\") " Feb 19 13:13:23 crc 
kubenswrapper[4861]: I0219 13:13:23.615131 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da26b508-7a00-4494-afc8-3da8b16eeaa7-utilities" (OuterVolumeSpecName: "utilities") pod "da26b508-7a00-4494-afc8-3da8b16eeaa7" (UID: "da26b508-7a00-4494-afc8-3da8b16eeaa7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:13:23 crc kubenswrapper[4861]: I0219 13:13:23.626633 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da26b508-7a00-4494-afc8-3da8b16eeaa7-kube-api-access-ttspt" (OuterVolumeSpecName: "kube-api-access-ttspt") pod "da26b508-7a00-4494-afc8-3da8b16eeaa7" (UID: "da26b508-7a00-4494-afc8-3da8b16eeaa7"). InnerVolumeSpecName "kube-api-access-ttspt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:13:23 crc kubenswrapper[4861]: I0219 13:13:23.665221 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da26b508-7a00-4494-afc8-3da8b16eeaa7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da26b508-7a00-4494-afc8-3da8b16eeaa7" (UID: "da26b508-7a00-4494-afc8-3da8b16eeaa7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:13:23 crc kubenswrapper[4861]: I0219 13:13:23.716024 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da26b508-7a00-4494-afc8-3da8b16eeaa7-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:23 crc kubenswrapper[4861]: I0219 13:13:23.716074 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da26b508-7a00-4494-afc8-3da8b16eeaa7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:23 crc kubenswrapper[4861]: I0219 13:13:23.716088 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttspt\" (UniqueName: \"kubernetes.io/projected/da26b508-7a00-4494-afc8-3da8b16eeaa7-kube-api-access-ttspt\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:24 crc kubenswrapper[4861]: I0219 13:13:24.024403 4861 generic.go:334] "Generic (PLEG): container finished" podID="da26b508-7a00-4494-afc8-3da8b16eeaa7" containerID="8ab331565c7702de59851ad150a711f871b6f6c51dee8637ea12b9917b970905" exitCode=0 Feb 19 13:13:24 crc kubenswrapper[4861]: I0219 13:13:24.024491 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j97fz" Feb 19 13:13:24 crc kubenswrapper[4861]: I0219 13:13:24.024534 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j97fz" event={"ID":"da26b508-7a00-4494-afc8-3da8b16eeaa7","Type":"ContainerDied","Data":"8ab331565c7702de59851ad150a711f871b6f6c51dee8637ea12b9917b970905"} Feb 19 13:13:24 crc kubenswrapper[4861]: I0219 13:13:24.024591 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j97fz" event={"ID":"da26b508-7a00-4494-afc8-3da8b16eeaa7","Type":"ContainerDied","Data":"a292c9ed676c1c0dfa550bf1702507528b4378b85602cddeb76598fd10f56133"} Feb 19 13:13:24 crc kubenswrapper[4861]: I0219 13:13:24.024615 4861 scope.go:117] "RemoveContainer" containerID="8ab331565c7702de59851ad150a711f871b6f6c51dee8637ea12b9917b970905" Feb 19 13:13:24 crc kubenswrapper[4861]: I0219 13:13:24.039710 4861 scope.go:117] "RemoveContainer" containerID="aa3eae0708a30026b1354065ccd2b78584b6063fcebc35cced07f80110d17ae8" Feb 19 13:13:24 crc kubenswrapper[4861]: I0219 13:13:24.046157 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j97fz"] Feb 19 13:13:24 crc kubenswrapper[4861]: I0219 13:13:24.053598 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j97fz"] Feb 19 13:13:24 crc kubenswrapper[4861]: I0219 13:13:24.070484 4861 scope.go:117] "RemoveContainer" containerID="e748939e43dfee1d91090d0517531985e03ec76c33f44c960b2a5ee9de7f2cd7" Feb 19 13:13:24 crc kubenswrapper[4861]: I0219 13:13:24.085742 4861 scope.go:117] "RemoveContainer" containerID="8ab331565c7702de59851ad150a711f871b6f6c51dee8637ea12b9917b970905" Feb 19 13:13:24 crc kubenswrapper[4861]: E0219 13:13:24.086181 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8ab331565c7702de59851ad150a711f871b6f6c51dee8637ea12b9917b970905\": container with ID starting with 8ab331565c7702de59851ad150a711f871b6f6c51dee8637ea12b9917b970905 not found: ID does not exist" containerID="8ab331565c7702de59851ad150a711f871b6f6c51dee8637ea12b9917b970905" Feb 19 13:13:24 crc kubenswrapper[4861]: I0219 13:13:24.086227 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ab331565c7702de59851ad150a711f871b6f6c51dee8637ea12b9917b970905"} err="failed to get container status \"8ab331565c7702de59851ad150a711f871b6f6c51dee8637ea12b9917b970905\": rpc error: code = NotFound desc = could not find container \"8ab331565c7702de59851ad150a711f871b6f6c51dee8637ea12b9917b970905\": container with ID starting with 8ab331565c7702de59851ad150a711f871b6f6c51dee8637ea12b9917b970905 not found: ID does not exist" Feb 19 13:13:24 crc kubenswrapper[4861]: I0219 13:13:24.086259 4861 scope.go:117] "RemoveContainer" containerID="aa3eae0708a30026b1354065ccd2b78584b6063fcebc35cced07f80110d17ae8" Feb 19 13:13:24 crc kubenswrapper[4861]: E0219 13:13:24.087181 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa3eae0708a30026b1354065ccd2b78584b6063fcebc35cced07f80110d17ae8\": container with ID starting with aa3eae0708a30026b1354065ccd2b78584b6063fcebc35cced07f80110d17ae8 not found: ID does not exist" containerID="aa3eae0708a30026b1354065ccd2b78584b6063fcebc35cced07f80110d17ae8" Feb 19 13:13:24 crc kubenswrapper[4861]: I0219 13:13:24.087251 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa3eae0708a30026b1354065ccd2b78584b6063fcebc35cced07f80110d17ae8"} err="failed to get container status \"aa3eae0708a30026b1354065ccd2b78584b6063fcebc35cced07f80110d17ae8\": rpc error: code = NotFound desc = could not find container \"aa3eae0708a30026b1354065ccd2b78584b6063fcebc35cced07f80110d17ae8\": container with ID 
starting with aa3eae0708a30026b1354065ccd2b78584b6063fcebc35cced07f80110d17ae8 not found: ID does not exist" Feb 19 13:13:24 crc kubenswrapper[4861]: I0219 13:13:24.087286 4861 scope.go:117] "RemoveContainer" containerID="e748939e43dfee1d91090d0517531985e03ec76c33f44c960b2a5ee9de7f2cd7" Feb 19 13:13:24 crc kubenswrapper[4861]: E0219 13:13:24.087700 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e748939e43dfee1d91090d0517531985e03ec76c33f44c960b2a5ee9de7f2cd7\": container with ID starting with e748939e43dfee1d91090d0517531985e03ec76c33f44c960b2a5ee9de7f2cd7 not found: ID does not exist" containerID="e748939e43dfee1d91090d0517531985e03ec76c33f44c960b2a5ee9de7f2cd7" Feb 19 13:13:24 crc kubenswrapper[4861]: I0219 13:13:24.087746 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e748939e43dfee1d91090d0517531985e03ec76c33f44c960b2a5ee9de7f2cd7"} err="failed to get container status \"e748939e43dfee1d91090d0517531985e03ec76c33f44c960b2a5ee9de7f2cd7\": rpc error: code = NotFound desc = could not find container \"e748939e43dfee1d91090d0517531985e03ec76c33f44c960b2a5ee9de7f2cd7\": container with ID starting with e748939e43dfee1d91090d0517531985e03ec76c33f44c960b2a5ee9de7f2cd7 not found: ID does not exist" Feb 19 13:13:25 crc kubenswrapper[4861]: I0219 13:13:25.989376 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da26b508-7a00-4494-afc8-3da8b16eeaa7" path="/var/lib/kubelet/pods/da26b508-7a00-4494-afc8-3da8b16eeaa7/volumes" Feb 19 13:13:26 crc kubenswrapper[4861]: I0219 13:13:26.018841 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-plx4l"] Feb 19 13:13:26 crc kubenswrapper[4861]: I0219 13:13:26.019098 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-plx4l" 
podUID="685e7fb8-e8da-4f5a-87c5-424d1e12a6be" containerName="registry-server" containerID="cri-o://ec9affbdf9ace6358a39022f4ff805f8f05516d1348f1a70a625f221461ea5c4" gracePeriod=2 Feb 19 13:13:26 crc kubenswrapper[4861]: I0219 13:13:26.475915 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-plx4l" Feb 19 13:13:26 crc kubenswrapper[4861]: I0219 13:13:26.660887 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/685e7fb8-e8da-4f5a-87c5-424d1e12a6be-utilities\") pod \"685e7fb8-e8da-4f5a-87c5-424d1e12a6be\" (UID: \"685e7fb8-e8da-4f5a-87c5-424d1e12a6be\") " Feb 19 13:13:26 crc kubenswrapper[4861]: I0219 13:13:26.661284 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmmql\" (UniqueName: \"kubernetes.io/projected/685e7fb8-e8da-4f5a-87c5-424d1e12a6be-kube-api-access-mmmql\") pod \"685e7fb8-e8da-4f5a-87c5-424d1e12a6be\" (UID: \"685e7fb8-e8da-4f5a-87c5-424d1e12a6be\") " Feb 19 13:13:26 crc kubenswrapper[4861]: I0219 13:13:26.661378 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/685e7fb8-e8da-4f5a-87c5-424d1e12a6be-catalog-content\") pod \"685e7fb8-e8da-4f5a-87c5-424d1e12a6be\" (UID: \"685e7fb8-e8da-4f5a-87c5-424d1e12a6be\") " Feb 19 13:13:26 crc kubenswrapper[4861]: I0219 13:13:26.662457 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/685e7fb8-e8da-4f5a-87c5-424d1e12a6be-utilities" (OuterVolumeSpecName: "utilities") pod "685e7fb8-e8da-4f5a-87c5-424d1e12a6be" (UID: "685e7fb8-e8da-4f5a-87c5-424d1e12a6be"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:13:26 crc kubenswrapper[4861]: I0219 13:13:26.666566 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/685e7fb8-e8da-4f5a-87c5-424d1e12a6be-kube-api-access-mmmql" (OuterVolumeSpecName: "kube-api-access-mmmql") pod "685e7fb8-e8da-4f5a-87c5-424d1e12a6be" (UID: "685e7fb8-e8da-4f5a-87c5-424d1e12a6be"). InnerVolumeSpecName "kube-api-access-mmmql". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:13:26 crc kubenswrapper[4861]: I0219 13:13:26.762531 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmmql\" (UniqueName: \"kubernetes.io/projected/685e7fb8-e8da-4f5a-87c5-424d1e12a6be-kube-api-access-mmmql\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:26 crc kubenswrapper[4861]: I0219 13:13:26.762580 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/685e7fb8-e8da-4f5a-87c5-424d1e12a6be-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:26 crc kubenswrapper[4861]: I0219 13:13:26.774937 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/685e7fb8-e8da-4f5a-87c5-424d1e12a6be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "685e7fb8-e8da-4f5a-87c5-424d1e12a6be" (UID: "685e7fb8-e8da-4f5a-87c5-424d1e12a6be"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:13:26 crc kubenswrapper[4861]: I0219 13:13:26.863408 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/685e7fb8-e8da-4f5a-87c5-424d1e12a6be-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:27 crc kubenswrapper[4861]: I0219 13:13:27.048440 4861 generic.go:334] "Generic (PLEG): container finished" podID="685e7fb8-e8da-4f5a-87c5-424d1e12a6be" containerID="ec9affbdf9ace6358a39022f4ff805f8f05516d1348f1a70a625f221461ea5c4" exitCode=0 Feb 19 13:13:27 crc kubenswrapper[4861]: I0219 13:13:27.048496 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-plx4l" event={"ID":"685e7fb8-e8da-4f5a-87c5-424d1e12a6be","Type":"ContainerDied","Data":"ec9affbdf9ace6358a39022f4ff805f8f05516d1348f1a70a625f221461ea5c4"} Feb 19 13:13:27 crc kubenswrapper[4861]: I0219 13:13:27.048508 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-plx4l" Feb 19 13:13:27 crc kubenswrapper[4861]: I0219 13:13:27.048530 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-plx4l" event={"ID":"685e7fb8-e8da-4f5a-87c5-424d1e12a6be","Type":"ContainerDied","Data":"f467fe7c8030d2e6118c230ac89cc9c9d8ef114e609b49c6f8f793d21de04fb5"} Feb 19 13:13:27 crc kubenswrapper[4861]: I0219 13:13:27.048553 4861 scope.go:117] "RemoveContainer" containerID="ec9affbdf9ace6358a39022f4ff805f8f05516d1348f1a70a625f221461ea5c4" Feb 19 13:13:27 crc kubenswrapper[4861]: I0219 13:13:27.065485 4861 scope.go:117] "RemoveContainer" containerID="8370c5c4ad77c3e18295756bbd820d7c0f8f43d16ae111b133c85fb73d5eb8bf" Feb 19 13:13:27 crc kubenswrapper[4861]: I0219 13:13:27.077858 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-plx4l"] Feb 19 13:13:27 crc kubenswrapper[4861]: I0219 13:13:27.082397 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-plx4l"] Feb 19 13:13:27 crc kubenswrapper[4861]: I0219 13:13:27.108534 4861 scope.go:117] "RemoveContainer" containerID="a3a8e00596a3e550d25ef8b63d680aa05e278240c7020463f663a09d37ccdda5" Feb 19 13:13:27 crc kubenswrapper[4861]: I0219 13:13:27.131130 4861 scope.go:117] "RemoveContainer" containerID="ec9affbdf9ace6358a39022f4ff805f8f05516d1348f1a70a625f221461ea5c4" Feb 19 13:13:27 crc kubenswrapper[4861]: E0219 13:13:27.131654 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec9affbdf9ace6358a39022f4ff805f8f05516d1348f1a70a625f221461ea5c4\": container with ID starting with ec9affbdf9ace6358a39022f4ff805f8f05516d1348f1a70a625f221461ea5c4 not found: ID does not exist" containerID="ec9affbdf9ace6358a39022f4ff805f8f05516d1348f1a70a625f221461ea5c4" Feb 19 13:13:27 crc kubenswrapper[4861]: I0219 13:13:27.131712 4861 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec9affbdf9ace6358a39022f4ff805f8f05516d1348f1a70a625f221461ea5c4"} err="failed to get container status \"ec9affbdf9ace6358a39022f4ff805f8f05516d1348f1a70a625f221461ea5c4\": rpc error: code = NotFound desc = could not find container \"ec9affbdf9ace6358a39022f4ff805f8f05516d1348f1a70a625f221461ea5c4\": container with ID starting with ec9affbdf9ace6358a39022f4ff805f8f05516d1348f1a70a625f221461ea5c4 not found: ID does not exist" Feb 19 13:13:27 crc kubenswrapper[4861]: I0219 13:13:27.131756 4861 scope.go:117] "RemoveContainer" containerID="8370c5c4ad77c3e18295756bbd820d7c0f8f43d16ae111b133c85fb73d5eb8bf" Feb 19 13:13:27 crc kubenswrapper[4861]: E0219 13:13:27.132067 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8370c5c4ad77c3e18295756bbd820d7c0f8f43d16ae111b133c85fb73d5eb8bf\": container with ID starting with 8370c5c4ad77c3e18295756bbd820d7c0f8f43d16ae111b133c85fb73d5eb8bf not found: ID does not exist" containerID="8370c5c4ad77c3e18295756bbd820d7c0f8f43d16ae111b133c85fb73d5eb8bf" Feb 19 13:13:27 crc kubenswrapper[4861]: I0219 13:13:27.132125 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8370c5c4ad77c3e18295756bbd820d7c0f8f43d16ae111b133c85fb73d5eb8bf"} err="failed to get container status \"8370c5c4ad77c3e18295756bbd820d7c0f8f43d16ae111b133c85fb73d5eb8bf\": rpc error: code = NotFound desc = could not find container \"8370c5c4ad77c3e18295756bbd820d7c0f8f43d16ae111b133c85fb73d5eb8bf\": container with ID starting with 8370c5c4ad77c3e18295756bbd820d7c0f8f43d16ae111b133c85fb73d5eb8bf not found: ID does not exist" Feb 19 13:13:27 crc kubenswrapper[4861]: I0219 13:13:27.132165 4861 scope.go:117] "RemoveContainer" containerID="a3a8e00596a3e550d25ef8b63d680aa05e278240c7020463f663a09d37ccdda5" Feb 19 13:13:27 crc kubenswrapper[4861]: E0219 
13:13:27.132676 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3a8e00596a3e550d25ef8b63d680aa05e278240c7020463f663a09d37ccdda5\": container with ID starting with a3a8e00596a3e550d25ef8b63d680aa05e278240c7020463f663a09d37ccdda5 not found: ID does not exist" containerID="a3a8e00596a3e550d25ef8b63d680aa05e278240c7020463f663a09d37ccdda5" Feb 19 13:13:27 crc kubenswrapper[4861]: I0219 13:13:27.132707 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3a8e00596a3e550d25ef8b63d680aa05e278240c7020463f663a09d37ccdda5"} err="failed to get container status \"a3a8e00596a3e550d25ef8b63d680aa05e278240c7020463f663a09d37ccdda5\": rpc error: code = NotFound desc = could not find container \"a3a8e00596a3e550d25ef8b63d680aa05e278240c7020463f663a09d37ccdda5\": container with ID starting with a3a8e00596a3e550d25ef8b63d680aa05e278240c7020463f663a09d37ccdda5 not found: ID does not exist" Feb 19 13:13:27 crc kubenswrapper[4861]: I0219 13:13:27.991470 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="685e7fb8-e8da-4f5a-87c5-424d1e12a6be" path="/var/lib/kubelet/pods/685e7fb8-e8da-4f5a-87c5-424d1e12a6be/volumes" Feb 19 13:13:28 crc kubenswrapper[4861]: I0219 13:13:28.400120 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-6txts" podUID="abc01b0c-a636-4dc5-bf6d-d2efde512ed5" containerName="oauth-openshift" containerID="cri-o://7380283201e05389c8d8eec1e52217966d73a8c949218658182d672c1cae64a7" gracePeriod=15 Feb 19 13:13:28 crc kubenswrapper[4861]: I0219 13:13:28.964976 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:13:28 crc kubenswrapper[4861]: I0219 13:13:28.995157 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-service-ca\") pod \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " Feb 19 13:13:28 crc kubenswrapper[4861]: I0219 13:13:28.995242 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-audit-dir\") pod \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " Feb 19 13:13:28 crc kubenswrapper[4861]: I0219 13:13:28.995324 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-user-template-error\") pod \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " Feb 19 13:13:28 crc kubenswrapper[4861]: I0219 13:13:28.995377 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-user-template-provider-selection\") pod \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " Feb 19 13:13:28 crc kubenswrapper[4861]: I0219 13:13:28.995461 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-audit-policies\") pod \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " Feb 19 13:13:28 crc 
kubenswrapper[4861]: I0219 13:13:28.995514 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-cliconfig\") pod \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " Feb 19 13:13:28 crc kubenswrapper[4861]: I0219 13:13:28.995556 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-session\") pod \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " Feb 19 13:13:28 crc kubenswrapper[4861]: I0219 13:13:28.995595 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95z78\" (UniqueName: \"kubernetes.io/projected/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-kube-api-access-95z78\") pod \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " Feb 19 13:13:28 crc kubenswrapper[4861]: I0219 13:13:28.995643 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-ocp-branding-template\") pod \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " Feb 19 13:13:28 crc kubenswrapper[4861]: I0219 13:13:28.995692 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-router-certs\") pod \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " Feb 19 13:13:28 crc kubenswrapper[4861]: I0219 13:13:28.995750 4861 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-serving-cert\") pod \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " Feb 19 13:13:28 crc kubenswrapper[4861]: I0219 13:13:28.995784 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-user-idp-0-file-data\") pod \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " Feb 19 13:13:28 crc kubenswrapper[4861]: I0219 13:13:28.995816 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-trusted-ca-bundle\") pod \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " Feb 19 13:13:28 crc kubenswrapper[4861]: I0219 13:13:28.995858 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-user-template-login\") pod \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\" (UID: \"abc01b0c-a636-4dc5-bf6d-d2efde512ed5\") " Feb 19 13:13:28 crc kubenswrapper[4861]: I0219 13:13:28.996524 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "abc01b0c-a636-4dc5-bf6d-d2efde512ed5" (UID: "abc01b0c-a636-4dc5-bf6d-d2efde512ed5"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:13:28 crc kubenswrapper[4861]: I0219 13:13:28.996538 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "abc01b0c-a636-4dc5-bf6d-d2efde512ed5" (UID: "abc01b0c-a636-4dc5-bf6d-d2efde512ed5"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:13:28 crc kubenswrapper[4861]: I0219 13:13:28.996605 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "abc01b0c-a636-4dc5-bf6d-d2efde512ed5" (UID: "abc01b0c-a636-4dc5-bf6d-d2efde512ed5"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:13:28 crc kubenswrapper[4861]: I0219 13:13:28.995828 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "abc01b0c-a636-4dc5-bf6d-d2efde512ed5" (UID: "abc01b0c-a636-4dc5-bf6d-d2efde512ed5"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:13:28 crc kubenswrapper[4861]: I0219 13:13:28.998556 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "abc01b0c-a636-4dc5-bf6d-d2efde512ed5" (UID: "abc01b0c-a636-4dc5-bf6d-d2efde512ed5"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.011559 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "abc01b0c-a636-4dc5-bf6d-d2efde512ed5" (UID: "abc01b0c-a636-4dc5-bf6d-d2efde512ed5"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.012039 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-kube-api-access-95z78" (OuterVolumeSpecName: "kube-api-access-95z78") pod "abc01b0c-a636-4dc5-bf6d-d2efde512ed5" (UID: "abc01b0c-a636-4dc5-bf6d-d2efde512ed5"). InnerVolumeSpecName "kube-api-access-95z78". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.013342 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "abc01b0c-a636-4dc5-bf6d-d2efde512ed5" (UID: "abc01b0c-a636-4dc5-bf6d-d2efde512ed5"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.015282 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "abc01b0c-a636-4dc5-bf6d-d2efde512ed5" (UID: "abc01b0c-a636-4dc5-bf6d-d2efde512ed5"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.020238 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "abc01b0c-a636-4dc5-bf6d-d2efde512ed5" (UID: "abc01b0c-a636-4dc5-bf6d-d2efde512ed5"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.021090 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "abc01b0c-a636-4dc5-bf6d-d2efde512ed5" (UID: "abc01b0c-a636-4dc5-bf6d-d2efde512ed5"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.021135 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "abc01b0c-a636-4dc5-bf6d-d2efde512ed5" (UID: "abc01b0c-a636-4dc5-bf6d-d2efde512ed5"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.033163 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "abc01b0c-a636-4dc5-bf6d-d2efde512ed5" (UID: "abc01b0c-a636-4dc5-bf6d-d2efde512ed5"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.033535 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "abc01b0c-a636-4dc5-bf6d-d2efde512ed5" (UID: "abc01b0c-a636-4dc5-bf6d-d2efde512ed5"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.065943 4861 generic.go:334] "Generic (PLEG): container finished" podID="abc01b0c-a636-4dc5-bf6d-d2efde512ed5" containerID="7380283201e05389c8d8eec1e52217966d73a8c949218658182d672c1cae64a7" exitCode=0 Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.066054 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6txts" Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.066077 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6txts" event={"ID":"abc01b0c-a636-4dc5-bf6d-d2efde512ed5","Type":"ContainerDied","Data":"7380283201e05389c8d8eec1e52217966d73a8c949218658182d672c1cae64a7"} Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.066653 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6txts" event={"ID":"abc01b0c-a636-4dc5-bf6d-d2efde512ed5","Type":"ContainerDied","Data":"35f478897c5f47851ae7f9bb19f7b827a062c13015d12cdd61603e8ea6e0e3a9"} Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.066687 4861 scope.go:117] "RemoveContainer" containerID="7380283201e05389c8d8eec1e52217966d73a8c949218658182d672c1cae64a7" Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.094068 4861 scope.go:117] "RemoveContainer" 
containerID="7380283201e05389c8d8eec1e52217966d73a8c949218658182d672c1cae64a7" Feb 19 13:13:29 crc kubenswrapper[4861]: E0219 13:13:29.094857 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7380283201e05389c8d8eec1e52217966d73a8c949218658182d672c1cae64a7\": container with ID starting with 7380283201e05389c8d8eec1e52217966d73a8c949218658182d672c1cae64a7 not found: ID does not exist" containerID="7380283201e05389c8d8eec1e52217966d73a8c949218658182d672c1cae64a7" Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.094986 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7380283201e05389c8d8eec1e52217966d73a8c949218658182d672c1cae64a7"} err="failed to get container status \"7380283201e05389c8d8eec1e52217966d73a8c949218658182d672c1cae64a7\": rpc error: code = NotFound desc = could not find container \"7380283201e05389c8d8eec1e52217966d73a8c949218658182d672c1cae64a7\": container with ID starting with 7380283201e05389c8d8eec1e52217966d73a8c949218658182d672c1cae64a7 not found: ID does not exist" Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.096985 4861 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.097109 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.097215 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-user-template-provider-selection\") on node 
\"crc\" DevicePath \"\"" Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.097365 4861 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.097673 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.097794 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.097916 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95z78\" (UniqueName: \"kubernetes.io/projected/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-kube-api-access-95z78\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.098035 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.098147 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.098268 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.098382 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.098535 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.098651 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.098795 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/abc01b0c-a636-4dc5-bf6d-d2efde512ed5-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.117942 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6txts"] Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.123694 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6txts"] Feb 19 13:13:29 crc kubenswrapper[4861]: I0219 13:13:29.991740 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abc01b0c-a636-4dc5-bf6d-d2efde512ed5" path="/var/lib/kubelet/pods/abc01b0c-a636-4dc5-bf6d-d2efde512ed5/volumes" Feb 19 13:13:33 crc kubenswrapper[4861]: I0219 
13:13:33.834727 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:13:33 crc kubenswrapper[4861]: I0219 13:13:33.834847 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:13:33 crc kubenswrapper[4861]: I0219 13:13:33.834944 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 13:13:33 crc kubenswrapper[4861]: I0219 13:13:33.836160 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075"} pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 13:13:33 crc kubenswrapper[4861]: I0219 13:13:33.836298 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" containerID="cri-o://53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075" gracePeriod=600 Feb 19 13:13:34 crc kubenswrapper[4861]: I0219 13:13:34.096555 4861 generic.go:334] "Generic (PLEG): container finished" podID="478e6971-05ac-43f2-99a2-cd93644c6227" containerID="53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075" exitCode=0 Feb 19 
13:13:34 crc kubenswrapper[4861]: I0219 13:13:34.096636 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerDied","Data":"53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075"} Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.105315 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"b72bfeb4edac1369a1ca8bb2f270a11b14c4524bceae05c7740438dfa2d9f288"} Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.648945 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6f89645989-7gn7w"] Feb 19 13:13:35 crc kubenswrapper[4861]: E0219 13:13:35.649197 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="685e7fb8-e8da-4f5a-87c5-424d1e12a6be" containerName="registry-server" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.649212 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="685e7fb8-e8da-4f5a-87c5-424d1e12a6be" containerName="registry-server" Feb 19 13:13:35 crc kubenswrapper[4861]: E0219 13:13:35.649223 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da26b508-7a00-4494-afc8-3da8b16eeaa7" containerName="registry-server" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.649231 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="da26b508-7a00-4494-afc8-3da8b16eeaa7" containerName="registry-server" Feb 19 13:13:35 crc kubenswrapper[4861]: E0219 13:13:35.649250 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da26b508-7a00-4494-afc8-3da8b16eeaa7" containerName="extract-utilities" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.649259 4861 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="da26b508-7a00-4494-afc8-3da8b16eeaa7" containerName="extract-utilities" Feb 19 13:13:35 crc kubenswrapper[4861]: E0219 13:13:35.649270 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da26b508-7a00-4494-afc8-3da8b16eeaa7" containerName="extract-content" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.649279 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="da26b508-7a00-4494-afc8-3da8b16eeaa7" containerName="extract-content" Feb 19 13:13:35 crc kubenswrapper[4861]: E0219 13:13:35.649295 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="685e7fb8-e8da-4f5a-87c5-424d1e12a6be" containerName="extract-utilities" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.649305 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="685e7fb8-e8da-4f5a-87c5-424d1e12a6be" containerName="extract-utilities" Feb 19 13:13:35 crc kubenswrapper[4861]: E0219 13:13:35.649320 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abc01b0c-a636-4dc5-bf6d-d2efde512ed5" containerName="oauth-openshift" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.649329 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="abc01b0c-a636-4dc5-bf6d-d2efde512ed5" containerName="oauth-openshift" Feb 19 13:13:35 crc kubenswrapper[4861]: E0219 13:13:35.649340 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="685e7fb8-e8da-4f5a-87c5-424d1e12a6be" containerName="extract-content" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.649348 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="685e7fb8-e8da-4f5a-87c5-424d1e12a6be" containerName="extract-content" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.649495 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="685e7fb8-e8da-4f5a-87c5-424d1e12a6be" containerName="registry-server" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.649511 4861 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="abc01b0c-a636-4dc5-bf6d-d2efde512ed5" containerName="oauth-openshift" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.649522 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="da26b508-7a00-4494-afc8-3da8b16eeaa7" containerName="registry-server" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.650009 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.653127 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.653469 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.653893 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.654504 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.656040 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.656205 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.656401 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.656671 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 13:13:35 crc 
kubenswrapper[4861]: I0219 13:13:35.656830 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.656984 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.657498 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.657693 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.665219 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6f89645989-7gn7w"] Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.669107 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.676669 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.686577 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.686635 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.686663 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.686687 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.686713 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/11370fb9-ceec-4151-aa12-a966957ff294-audit-policies\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.686750 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-user-template-login\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: 
\"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.686784 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.686819 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-system-session\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.686850 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/11370fb9-ceec-4151-aa12-a966957ff294-audit-dir\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.686872 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-user-template-error\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.686905 4861 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.686942 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.686976 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.687012 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xrgd\" (UniqueName: \"kubernetes.io/projected/11370fb9-ceec-4151-aa12-a966957ff294-kube-api-access-7xrgd\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.694912 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 13:13:35 
crc kubenswrapper[4861]: I0219 13:13:35.788549 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-system-session\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.788603 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/11370fb9-ceec-4151-aa12-a966957ff294-audit-dir\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.788621 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-user-template-error\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.788638 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.788657 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-system-router-certs\") pod 
\"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.788679 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.788702 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xrgd\" (UniqueName: \"kubernetes.io/projected/11370fb9-ceec-4151-aa12-a966957ff294-kube-api-access-7xrgd\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.788727 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.788745 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 
13:13:35.788762 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.788782 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.788800 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/11370fb9-ceec-4151-aa12-a966957ff294-audit-policies\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.788835 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-user-template-login\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.788839 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/11370fb9-ceec-4151-aa12-a966957ff294-audit-dir\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: 
\"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.788861 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.789736 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.790278 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.790474 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.791133 4861 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/11370fb9-ceec-4151-aa12-a966957ff294-audit-policies\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.795263 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.795775 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-user-template-error\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.795786 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.795926 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") 
" pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.796539 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.796993 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.798251 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-system-session\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.801776 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/11370fb9-ceec-4151-aa12-a966957ff294-v4-0-config-user-template-login\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.806066 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xrgd\" (UniqueName: 
\"kubernetes.io/projected/11370fb9-ceec-4151-aa12-a966957ff294-kube-api-access-7xrgd\") pod \"oauth-openshift-6f89645989-7gn7w\" (UID: \"11370fb9-ceec-4151-aa12-a966957ff294\") " pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:35 crc kubenswrapper[4861]: I0219 13:13:35.971973 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:36 crc kubenswrapper[4861]: I0219 13:13:36.399346 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6f89645989-7gn7w"] Feb 19 13:13:36 crc kubenswrapper[4861]: W0219 13:13:36.406562 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11370fb9_ceec_4151_aa12_a966957ff294.slice/crio-c952062b6cf033d95221b3f544d479db71d0e3a3fc93b8181a3ce4009c445bfc WatchSource:0}: Error finding container c952062b6cf033d95221b3f544d479db71d0e3a3fc93b8181a3ce4009c445bfc: Status 404 returned error can't find the container with id c952062b6cf033d95221b3f544d479db71d0e3a3fc93b8181a3ce4009c445bfc Feb 19 13:13:36 crc kubenswrapper[4861]: I0219 13:13:36.992999 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd"] Feb 19 13:13:36 crc kubenswrapper[4861]: I0219 13:13:36.993244 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd" podUID="e9c1b134-7177-43e4-9f8a-d77e828d638b" containerName="controller-manager" containerID="cri-o://bb14e5256bfabec3c86590a25457f1c1f8909b213da55924abf5c2f704b1e3c5" gracePeriod=30 Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 13:13:37.091810 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-599767f8f5-vplgj"] Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 
13:13:37.092409 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-599767f8f5-vplgj" podUID="a05c603c-8d91-4293-a4fe-39a0f0da7306" containerName="route-controller-manager" containerID="cri-o://fce5bab870b6bf8be3c0dcbda0646a2cd99488088ed666be4a94c1928bc74f7c" gracePeriod=30 Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 13:13:37.123304 4861 generic.go:334] "Generic (PLEG): container finished" podID="e9c1b134-7177-43e4-9f8a-d77e828d638b" containerID="bb14e5256bfabec3c86590a25457f1c1f8909b213da55924abf5c2f704b1e3c5" exitCode=0 Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 13:13:37.123390 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd" event={"ID":"e9c1b134-7177-43e4-9f8a-d77e828d638b","Type":"ContainerDied","Data":"bb14e5256bfabec3c86590a25457f1c1f8909b213da55924abf5c2f704b1e3c5"} Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 13:13:37.129801 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" event={"ID":"11370fb9-ceec-4151-aa12-a966957ff294","Type":"ContainerStarted","Data":"a9a1b7c9ec8fe3c8bd646368e3b2be393c8f898a8264bceafa9f41f83cbf6683"} Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 13:13:37.129850 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" event={"ID":"11370fb9-ceec-4151-aa12-a966957ff294","Type":"ContainerStarted","Data":"c952062b6cf033d95221b3f544d479db71d0e3a3fc93b8181a3ce4009c445bfc"} Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 13:13:37.130198 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 13:13:37.135308 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 13:13:37.150888 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6f89645989-7gn7w" podStartSLOduration=34.150869626 podStartE2EDuration="34.150869626s" podCreationTimestamp="2026-02-19 13:13:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:13:37.149078023 +0000 UTC m=+231.810181261" watchObservedRunningTime="2026-02-19 13:13:37.150869626 +0000 UTC m=+231.811972854" Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 13:13:37.712730 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-599767f8f5-vplgj" Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 13:13:37.738234 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd" Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 13:13:37.834221 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a05c603c-8d91-4293-a4fe-39a0f0da7306-config\") pod \"a05c603c-8d91-4293-a4fe-39a0f0da7306\" (UID: \"a05c603c-8d91-4293-a4fe-39a0f0da7306\") " Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 13:13:37.834397 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a05c603c-8d91-4293-a4fe-39a0f0da7306-serving-cert\") pod \"a05c603c-8d91-4293-a4fe-39a0f0da7306\" (UID: \"a05c603c-8d91-4293-a4fe-39a0f0da7306\") " Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 13:13:37.834462 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/e9c1b134-7177-43e4-9f8a-d77e828d638b-client-ca\") pod \"e9c1b134-7177-43e4-9f8a-d77e828d638b\" (UID: \"e9c1b134-7177-43e4-9f8a-d77e828d638b\") " Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 13:13:37.834506 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a05c603c-8d91-4293-a4fe-39a0f0da7306-client-ca\") pod \"a05c603c-8d91-4293-a4fe-39a0f0da7306\" (UID: \"a05c603c-8d91-4293-a4fe-39a0f0da7306\") " Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 13:13:37.834563 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmfql\" (UniqueName: \"kubernetes.io/projected/a05c603c-8d91-4293-a4fe-39a0f0da7306-kube-api-access-jmfql\") pod \"a05c603c-8d91-4293-a4fe-39a0f0da7306\" (UID: \"a05c603c-8d91-4293-a4fe-39a0f0da7306\") " Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 13:13:37.835095 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a05c603c-8d91-4293-a4fe-39a0f0da7306-config" (OuterVolumeSpecName: "config") pod "a05c603c-8d91-4293-a4fe-39a0f0da7306" (UID: "a05c603c-8d91-4293-a4fe-39a0f0da7306"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 13:13:37.835162 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a05c603c-8d91-4293-a4fe-39a0f0da7306-client-ca" (OuterVolumeSpecName: "client-ca") pod "a05c603c-8d91-4293-a4fe-39a0f0da7306" (UID: "a05c603c-8d91-4293-a4fe-39a0f0da7306"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 13:13:37.835715 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9c1b134-7177-43e4-9f8a-d77e828d638b-client-ca" (OuterVolumeSpecName: "client-ca") pod "e9c1b134-7177-43e4-9f8a-d77e828d638b" (UID: "e9c1b134-7177-43e4-9f8a-d77e828d638b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 13:13:37.839183 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a05c603c-8d91-4293-a4fe-39a0f0da7306-kube-api-access-jmfql" (OuterVolumeSpecName: "kube-api-access-jmfql") pod "a05c603c-8d91-4293-a4fe-39a0f0da7306" (UID: "a05c603c-8d91-4293-a4fe-39a0f0da7306"). InnerVolumeSpecName "kube-api-access-jmfql". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 13:13:37.840992 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a05c603c-8d91-4293-a4fe-39a0f0da7306-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a05c603c-8d91-4293-a4fe-39a0f0da7306" (UID: "a05c603c-8d91-4293-a4fe-39a0f0da7306"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 13:13:37.935182 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9c1b134-7177-43e4-9f8a-d77e828d638b-serving-cert\") pod \"e9c1b134-7177-43e4-9f8a-d77e828d638b\" (UID: \"e9c1b134-7177-43e4-9f8a-d77e828d638b\") " Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 13:13:37.935226 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e9c1b134-7177-43e4-9f8a-d77e828d638b-proxy-ca-bundles\") pod \"e9c1b134-7177-43e4-9f8a-d77e828d638b\" (UID: \"e9c1b134-7177-43e4-9f8a-d77e828d638b\") " Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 13:13:37.935278 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9c1b134-7177-43e4-9f8a-d77e828d638b-config\") pod \"e9c1b134-7177-43e4-9f8a-d77e828d638b\" (UID: \"e9c1b134-7177-43e4-9f8a-d77e828d638b\") " Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 13:13:37.935299 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n47d\" (UniqueName: \"kubernetes.io/projected/e9c1b134-7177-43e4-9f8a-d77e828d638b-kube-api-access-4n47d\") pod \"e9c1b134-7177-43e4-9f8a-d77e828d638b\" (UID: \"e9c1b134-7177-43e4-9f8a-d77e828d638b\") " Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 13:13:37.935627 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a05c603c-8d91-4293-a4fe-39a0f0da7306-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 13:13:37.935641 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a05c603c-8d91-4293-a4fe-39a0f0da7306-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:37 crc 
kubenswrapper[4861]: I0219 13:13:37.935649 4861 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e9c1b134-7177-43e4-9f8a-d77e828d638b-client-ca\") on node \"crc\" DevicePath \"\""
Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 13:13:37.935657 4861 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a05c603c-8d91-4293-a4fe-39a0f0da7306-client-ca\") on node \"crc\" DevicePath \"\""
Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 13:13:37.935665 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmfql\" (UniqueName: \"kubernetes.io/projected/a05c603c-8d91-4293-a4fe-39a0f0da7306-kube-api-access-jmfql\") on node \"crc\" DevicePath \"\""
Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 13:13:37.936493 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9c1b134-7177-43e4-9f8a-d77e828d638b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e9c1b134-7177-43e4-9f8a-d77e828d638b" (UID: "e9c1b134-7177-43e4-9f8a-d77e828d638b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 13:13:37.938015 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9c1b134-7177-43e4-9f8a-d77e828d638b-config" (OuterVolumeSpecName: "config") pod "e9c1b134-7177-43e4-9f8a-d77e828d638b" (UID: "e9c1b134-7177-43e4-9f8a-d77e828d638b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 13:13:37.939561 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9c1b134-7177-43e4-9f8a-d77e828d638b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e9c1b134-7177-43e4-9f8a-d77e828d638b" (UID: "e9c1b134-7177-43e4-9f8a-d77e828d638b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:13:37 crc kubenswrapper[4861]: I0219 13:13:37.940162 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9c1b134-7177-43e4-9f8a-d77e828d638b-kube-api-access-4n47d" (OuterVolumeSpecName: "kube-api-access-4n47d") pod "e9c1b134-7177-43e4-9f8a-d77e828d638b" (UID: "e9c1b134-7177-43e4-9f8a-d77e828d638b"). InnerVolumeSpecName "kube-api-access-4n47d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.036297 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9c1b134-7177-43e4-9f8a-d77e828d638b-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.036333 4861 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e9c1b134-7177-43e4-9f8a-d77e828d638b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.036343 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9c1b134-7177-43e4-9f8a-d77e828d638b-config\") on node \"crc\" DevicePath \"\""
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.036352 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n47d\" (UniqueName: \"kubernetes.io/projected/e9c1b134-7177-43e4-9f8a-d77e828d638b-kube-api-access-4n47d\") on node \"crc\" DevicePath \"\""
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.138650 4861 generic.go:334] "Generic (PLEG): container finished" podID="a05c603c-8d91-4293-a4fe-39a0f0da7306" containerID="fce5bab870b6bf8be3c0dcbda0646a2cd99488088ed666be4a94c1928bc74f7c" exitCode=0
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.138777 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-599767f8f5-vplgj" event={"ID":"a05c603c-8d91-4293-a4fe-39a0f0da7306","Type":"ContainerDied","Data":"fce5bab870b6bf8be3c0dcbda0646a2cd99488088ed666be4a94c1928bc74f7c"}
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.138842 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-599767f8f5-vplgj" event={"ID":"a05c603c-8d91-4293-a4fe-39a0f0da7306","Type":"ContainerDied","Data":"274c86e76dbb7b7878e488519f5078ba899c34d55909584650fb7064968c7190"}
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.138873 4861 scope.go:117] "RemoveContainer" containerID="fce5bab870b6bf8be3c0dcbda0646a2cd99488088ed666be4a94c1928bc74f7c"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.139732 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-599767f8f5-vplgj"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.141839 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.141964 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd" event={"ID":"e9c1b134-7177-43e4-9f8a-d77e828d638b","Type":"ContainerDied","Data":"65fee61b060e5a499ec36cbc12fe5c232e4cf8c049d16a71ff1101297a98d2c1"}
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.162153 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd"]
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.166380 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-69b4d5bcb5-hn4vd"]
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.172530 4861 scope.go:117] "RemoveContainer" containerID="fce5bab870b6bf8be3c0dcbda0646a2cd99488088ed666be4a94c1928bc74f7c"
Feb 19 13:13:38 crc kubenswrapper[4861]: E0219 13:13:38.173144 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fce5bab870b6bf8be3c0dcbda0646a2cd99488088ed666be4a94c1928bc74f7c\": container with ID starting with fce5bab870b6bf8be3c0dcbda0646a2cd99488088ed666be4a94c1928bc74f7c not found: ID does not exist" containerID="fce5bab870b6bf8be3c0dcbda0646a2cd99488088ed666be4a94c1928bc74f7c"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.173206 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fce5bab870b6bf8be3c0dcbda0646a2cd99488088ed666be4a94c1928bc74f7c"} err="failed to get container status \"fce5bab870b6bf8be3c0dcbda0646a2cd99488088ed666be4a94c1928bc74f7c\": rpc error: code = NotFound desc = could not find container \"fce5bab870b6bf8be3c0dcbda0646a2cd99488088ed666be4a94c1928bc74f7c\": container with ID starting with fce5bab870b6bf8be3c0dcbda0646a2cd99488088ed666be4a94c1928bc74f7c not found: ID does not exist"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.173248 4861 scope.go:117] "RemoveContainer" containerID="bb14e5256bfabec3c86590a25457f1c1f8909b213da55924abf5c2f704b1e3c5"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.179273 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-599767f8f5-vplgj"]
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.184614 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-599767f8f5-vplgj"]
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.650734 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c96b8cc89-798fz"]
Feb 19 13:13:38 crc kubenswrapper[4861]: E0219 13:13:38.651446 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9c1b134-7177-43e4-9f8a-d77e828d638b" containerName="controller-manager"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.651465 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c1b134-7177-43e4-9f8a-d77e828d638b" containerName="controller-manager"
Feb 19 13:13:38 crc kubenswrapper[4861]: E0219 13:13:38.651490 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a05c603c-8d91-4293-a4fe-39a0f0da7306" containerName="route-controller-manager"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.651498 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a05c603c-8d91-4293-a4fe-39a0f0da7306" containerName="route-controller-manager"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.651630 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9c1b134-7177-43e4-9f8a-d77e828d638b" containerName="controller-manager"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.651649 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a05c603c-8d91-4293-a4fe-39a0f0da7306" containerName="route-controller-manager"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.652175 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c96b8cc89-798fz"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.655731 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.655889 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.655990 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.656105 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.656481 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.658517 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.659217 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5476965559-56kfg"]
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.660048 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5476965559-56kfg"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.664237 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.664963 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.665572 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.665994 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.666792 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.667047 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.670069 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c96b8cc89-798fz"]
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.675475 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.680934 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5476965559-56kfg"]
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.751276 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwbfd\" (UniqueName: \"kubernetes.io/projected/11060646-a073-4f0a-994d-d188842939f0-kube-api-access-kwbfd\") pod \"route-controller-manager-5c96b8cc89-798fz\" (UID: \"11060646-a073-4f0a-994d-d188842939f0\") " pod="openshift-route-controller-manager/route-controller-manager-5c96b8cc89-798fz"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.751344 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55c9j\" (UniqueName: \"kubernetes.io/projected/7ae4d4b2-ee1c-4150-84d7-57191d5cc4fc-kube-api-access-55c9j\") pod \"controller-manager-5476965559-56kfg\" (UID: \"7ae4d4b2-ee1c-4150-84d7-57191d5cc4fc\") " pod="openshift-controller-manager/controller-manager-5476965559-56kfg"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.751395 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ae4d4b2-ee1c-4150-84d7-57191d5cc4fc-config\") pod \"controller-manager-5476965559-56kfg\" (UID: \"7ae4d4b2-ee1c-4150-84d7-57191d5cc4fc\") " pod="openshift-controller-manager/controller-manager-5476965559-56kfg"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.751445 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ae4d4b2-ee1c-4150-84d7-57191d5cc4fc-proxy-ca-bundles\") pod \"controller-manager-5476965559-56kfg\" (UID: \"7ae4d4b2-ee1c-4150-84d7-57191d5cc4fc\") " pod="openshift-controller-manager/controller-manager-5476965559-56kfg"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.751477 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ae4d4b2-ee1c-4150-84d7-57191d5cc4fc-client-ca\") pod \"controller-manager-5476965559-56kfg\" (UID: \"7ae4d4b2-ee1c-4150-84d7-57191d5cc4fc\") " pod="openshift-controller-manager/controller-manager-5476965559-56kfg"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.751497 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11060646-a073-4f0a-994d-d188842939f0-config\") pod \"route-controller-manager-5c96b8cc89-798fz\" (UID: \"11060646-a073-4f0a-994d-d188842939f0\") " pod="openshift-route-controller-manager/route-controller-manager-5c96b8cc89-798fz"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.751729 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11060646-a073-4f0a-994d-d188842939f0-client-ca\") pod \"route-controller-manager-5c96b8cc89-798fz\" (UID: \"11060646-a073-4f0a-994d-d188842939f0\") " pod="openshift-route-controller-manager/route-controller-manager-5c96b8cc89-798fz"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.751828 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ae4d4b2-ee1c-4150-84d7-57191d5cc4fc-serving-cert\") pod \"controller-manager-5476965559-56kfg\" (UID: \"7ae4d4b2-ee1c-4150-84d7-57191d5cc4fc\") " pod="openshift-controller-manager/controller-manager-5476965559-56kfg"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.751883 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11060646-a073-4f0a-994d-d188842939f0-serving-cert\") pod \"route-controller-manager-5c96b8cc89-798fz\" (UID: \"11060646-a073-4f0a-994d-d188842939f0\") " pod="openshift-route-controller-manager/route-controller-manager-5c96b8cc89-798fz"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.853204 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ae4d4b2-ee1c-4150-84d7-57191d5cc4fc-client-ca\") pod \"controller-manager-5476965559-56kfg\" (UID: \"7ae4d4b2-ee1c-4150-84d7-57191d5cc4fc\") " pod="openshift-controller-manager/controller-manager-5476965559-56kfg"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.853262 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11060646-a073-4f0a-994d-d188842939f0-config\") pod \"route-controller-manager-5c96b8cc89-798fz\" (UID: \"11060646-a073-4f0a-994d-d188842939f0\") " pod="openshift-route-controller-manager/route-controller-manager-5c96b8cc89-798fz"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.853303 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11060646-a073-4f0a-994d-d188842939f0-client-ca\") pod \"route-controller-manager-5c96b8cc89-798fz\" (UID: \"11060646-a073-4f0a-994d-d188842939f0\") " pod="openshift-route-controller-manager/route-controller-manager-5c96b8cc89-798fz"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.853335 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ae4d4b2-ee1c-4150-84d7-57191d5cc4fc-serving-cert\") pod \"controller-manager-5476965559-56kfg\" (UID: \"7ae4d4b2-ee1c-4150-84d7-57191d5cc4fc\") " pod="openshift-controller-manager/controller-manager-5476965559-56kfg"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.853362 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11060646-a073-4f0a-994d-d188842939f0-serving-cert\") pod \"route-controller-manager-5c96b8cc89-798fz\" (UID: \"11060646-a073-4f0a-994d-d188842939f0\") " pod="openshift-route-controller-manager/route-controller-manager-5c96b8cc89-798fz"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.853399 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwbfd\" (UniqueName: \"kubernetes.io/projected/11060646-a073-4f0a-994d-d188842939f0-kube-api-access-kwbfd\") pod \"route-controller-manager-5c96b8cc89-798fz\" (UID: \"11060646-a073-4f0a-994d-d188842939f0\") " pod="openshift-route-controller-manager/route-controller-manager-5c96b8cc89-798fz"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.853465 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55c9j\" (UniqueName: \"kubernetes.io/projected/7ae4d4b2-ee1c-4150-84d7-57191d5cc4fc-kube-api-access-55c9j\") pod \"controller-manager-5476965559-56kfg\" (UID: \"7ae4d4b2-ee1c-4150-84d7-57191d5cc4fc\") " pod="openshift-controller-manager/controller-manager-5476965559-56kfg"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.853506 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ae4d4b2-ee1c-4150-84d7-57191d5cc4fc-config\") pod \"controller-manager-5476965559-56kfg\" (UID: \"7ae4d4b2-ee1c-4150-84d7-57191d5cc4fc\") " pod="openshift-controller-manager/controller-manager-5476965559-56kfg"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.853528 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ae4d4b2-ee1c-4150-84d7-57191d5cc4fc-proxy-ca-bundles\") pod \"controller-manager-5476965559-56kfg\" (UID: \"7ae4d4b2-ee1c-4150-84d7-57191d5cc4fc\") " pod="openshift-controller-manager/controller-manager-5476965559-56kfg"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.855037 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ae4d4b2-ee1c-4150-84d7-57191d5cc4fc-client-ca\") pod \"controller-manager-5476965559-56kfg\" (UID: \"7ae4d4b2-ee1c-4150-84d7-57191d5cc4fc\") " pod="openshift-controller-manager/controller-manager-5476965559-56kfg"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.855256 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11060646-a073-4f0a-994d-d188842939f0-config\") pod \"route-controller-manager-5c96b8cc89-798fz\" (UID: \"11060646-a073-4f0a-994d-d188842939f0\") " pod="openshift-route-controller-manager/route-controller-manager-5c96b8cc89-798fz"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.855342 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ae4d4b2-ee1c-4150-84d7-57191d5cc4fc-config\") pod \"controller-manager-5476965559-56kfg\" (UID: \"7ae4d4b2-ee1c-4150-84d7-57191d5cc4fc\") " pod="openshift-controller-manager/controller-manager-5476965559-56kfg"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.855384 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11060646-a073-4f0a-994d-d188842939f0-client-ca\") pod \"route-controller-manager-5c96b8cc89-798fz\" (UID: \"11060646-a073-4f0a-994d-d188842939f0\") " pod="openshift-route-controller-manager/route-controller-manager-5c96b8cc89-798fz"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.855520 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ae4d4b2-ee1c-4150-84d7-57191d5cc4fc-proxy-ca-bundles\") pod \"controller-manager-5476965559-56kfg\" (UID: \"7ae4d4b2-ee1c-4150-84d7-57191d5cc4fc\") " pod="openshift-controller-manager/controller-manager-5476965559-56kfg"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.857703 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11060646-a073-4f0a-994d-d188842939f0-serving-cert\") pod \"route-controller-manager-5c96b8cc89-798fz\" (UID: \"11060646-a073-4f0a-994d-d188842939f0\") " pod="openshift-route-controller-manager/route-controller-manager-5c96b8cc89-798fz"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.859325 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ae4d4b2-ee1c-4150-84d7-57191d5cc4fc-serving-cert\") pod \"controller-manager-5476965559-56kfg\" (UID: \"7ae4d4b2-ee1c-4150-84d7-57191d5cc4fc\") " pod="openshift-controller-manager/controller-manager-5476965559-56kfg"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.874056 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55c9j\" (UniqueName: \"kubernetes.io/projected/7ae4d4b2-ee1c-4150-84d7-57191d5cc4fc-kube-api-access-55c9j\") pod \"controller-manager-5476965559-56kfg\" (UID: \"7ae4d4b2-ee1c-4150-84d7-57191d5cc4fc\") " pod="openshift-controller-manager/controller-manager-5476965559-56kfg"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.875955 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwbfd\" (UniqueName: \"kubernetes.io/projected/11060646-a073-4f0a-994d-d188842939f0-kube-api-access-kwbfd\") pod \"route-controller-manager-5c96b8cc89-798fz\" (UID: \"11060646-a073-4f0a-994d-d188842939f0\") " pod="openshift-route-controller-manager/route-controller-manager-5c96b8cc89-798fz"
Feb 19 13:13:38 crc kubenswrapper[4861]: I0219 13:13:38.988359 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c96b8cc89-798fz"
Feb 19 13:13:39 crc kubenswrapper[4861]: I0219 13:13:39.004329 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5476965559-56kfg"
Feb 19 13:13:39 crc kubenswrapper[4861]: I0219 13:13:39.280030 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5476965559-56kfg"]
Feb 19 13:13:39 crc kubenswrapper[4861]: I0219 13:13:39.437921 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c96b8cc89-798fz"]
Feb 19 13:13:39 crc kubenswrapper[4861]: W0219 13:13:39.438915 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11060646_a073_4f0a_994d_d188842939f0.slice/crio-dd87061639897a02b05c0692c2d3234053d5daf36607346e4f15a02db2009f1f WatchSource:0}: Error finding container dd87061639897a02b05c0692c2d3234053d5daf36607346e4f15a02db2009f1f: Status 404 returned error can't find the container with id dd87061639897a02b05c0692c2d3234053d5daf36607346e4f15a02db2009f1f
Feb 19 13:13:39 crc kubenswrapper[4861]: I0219 13:13:39.982837 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a05c603c-8d91-4293-a4fe-39a0f0da7306" path="/var/lib/kubelet/pods/a05c603c-8d91-4293-a4fe-39a0f0da7306/volumes"
Feb 19 13:13:39 crc kubenswrapper[4861]: I0219 13:13:39.983748 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9c1b134-7177-43e4-9f8a-d77e828d638b" path="/var/lib/kubelet/pods/e9c1b134-7177-43e4-9f8a-d77e828d638b/volumes"
Feb 19 13:13:40 crc kubenswrapper[4861]: I0219 13:13:40.158349 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5476965559-56kfg" event={"ID":"7ae4d4b2-ee1c-4150-84d7-57191d5cc4fc","Type":"ContainerStarted","Data":"429e6065f3e28e3e75bc8dd56393859b221e65ee7d1ae89923aa661913acec66"}
Feb 19 13:13:40 crc kubenswrapper[4861]: I0219 13:13:40.158396 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5476965559-56kfg" event={"ID":"7ae4d4b2-ee1c-4150-84d7-57191d5cc4fc","Type":"ContainerStarted","Data":"4148bd98b35bea2d91d04288f753f06bacf884b67f6658e875f08b12e00449f3"}
Feb 19 13:13:40 crc kubenswrapper[4861]: I0219 13:13:40.158741 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5476965559-56kfg"
Feb 19 13:13:40 crc kubenswrapper[4861]: I0219 13:13:40.159972 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c96b8cc89-798fz" event={"ID":"11060646-a073-4f0a-994d-d188842939f0","Type":"ContainerStarted","Data":"e31cb9d5f166301d46a67eac2ea7409d7592d136ebbfd51f065fdf38652423b4"}
Feb 19 13:13:40 crc kubenswrapper[4861]: I0219 13:13:40.160021 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c96b8cc89-798fz" event={"ID":"11060646-a073-4f0a-994d-d188842939f0","Type":"ContainerStarted","Data":"dd87061639897a02b05c0692c2d3234053d5daf36607346e4f15a02db2009f1f"}
Feb 19 13:13:40 crc kubenswrapper[4861]: I0219 13:13:40.160279 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5c96b8cc89-798fz"
Feb 19 13:13:40 crc kubenswrapper[4861]: I0219 13:13:40.163191 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5476965559-56kfg"
Feb 19 13:13:40 crc kubenswrapper[4861]: I0219 13:13:40.165115 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5c96b8cc89-798fz"
Feb 19 13:13:40 crc kubenswrapper[4861]: I0219 13:13:40.187646 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5476965559-56kfg" podStartSLOduration=3.187631101 podStartE2EDuration="3.187631101s" podCreationTimestamp="2026-02-19 13:13:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:13:40.181174941 +0000 UTC m=+234.842278179" watchObservedRunningTime="2026-02-19 13:13:40.187631101 +0000 UTC m=+234.848734329"
Feb 19 13:13:40 crc kubenswrapper[4861]: I0219 13:13:40.213387 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5c96b8cc89-798fz" podStartSLOduration=3.213369371 podStartE2EDuration="3.213369371s" podCreationTimestamp="2026-02-19 13:13:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:13:40.211628419 +0000 UTC m=+234.872731657" watchObservedRunningTime="2026-02-19 13:13:40.213369371 +0000 UTC m=+234.874472599"
Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.203530 4861 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.204619 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.204659 4861 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.205133 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d" gracePeriod=15
Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.205204 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c" gracePeriod=15
Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.205203 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f" gracePeriod=15
Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.205296 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca" gracePeriod=15
Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.205247 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b" gracePeriod=15
Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.205700 4861 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 19 13:13:48 crc kubenswrapper[4861]: E0219 13:13:48.205926 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.205941 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 19 13:13:48 crc kubenswrapper[4861]: E0219 13:13:48.205954 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.205963 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Feb 19 13:13:48 crc kubenswrapper[4861]: E0219 13:13:48.205975 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.205984 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Feb 19 13:13:48 crc kubenswrapper[4861]: E0219 13:13:48.205996 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.206004 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 19 13:13:48 crc kubenswrapper[4861]: E0219 13:13:48.206014 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.206022 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Feb 19 13:13:48 crc kubenswrapper[4861]: E0219 13:13:48.206034 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.206042 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 19 13:13:48 crc kubenswrapper[4861]: E0219 13:13:48.206054 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.206063 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Feb 19 13:13:48 crc kubenswrapper[4861]: E0219 13:13:48.206076 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.206086 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.206204 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.206218 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.206232 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.206242 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.206255 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.206270 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.206283 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Feb 19 13:13:48 crc kubenswrapper[4861]: E0219 13:13:48.251522 4861 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.177:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.284042 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.284095 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.284161 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.284187 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.284211 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.284250 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.284271 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.284298 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 13:13:48 crc kubenswrapper[4861]: E0219 13:13:48.303969 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-conmon-8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-conmon-fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b.scope\": RecentStats: unable to find data in memory cache],
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca.scope\": RecentStats: unable to find data in memory cache]" Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.384677 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.384986 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.385048 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.385074 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.385105 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.385125 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.385141 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.385190 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.385218 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.384992 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" 
(UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.385294 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.385397 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.385405 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.385443 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.385477 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.385558 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 13:13:48 crc kubenswrapper[4861]: I0219 13:13:48.553082 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 13:13:48 crc kubenswrapper[4861]: E0219 13:13:48.581163 4861 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1895a81322b9794e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 13:13:48.580583758 +0000 UTC m=+243.241686986,LastTimestamp:2026-02-19 13:13:48.580583758 +0000 UTC m=+243.241686986,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 13:13:49 crc kubenswrapper[4861]: I0219 13:13:49.217925 4861 generic.go:334] "Generic (PLEG): container finished" 
podID="86e0ea74-d593-456c-9d4a-b5487351acaa" containerID="2b35708c900bf83cb490807a1d2413d8ff551369b85043b64f1d13cc42f66ec0" exitCode=0 Feb 19 13:13:49 crc kubenswrapper[4861]: I0219 13:13:49.217991 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"86e0ea74-d593-456c-9d4a-b5487351acaa","Type":"ContainerDied","Data":"2b35708c900bf83cb490807a1d2413d8ff551369b85043b64f1d13cc42f66ec0"} Feb 19 13:13:49 crc kubenswrapper[4861]: I0219 13:13:49.218531 4861 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 19 13:13:49 crc kubenswrapper[4861]: I0219 13:13:49.218791 4861 status_manager.go:851] "Failed to get status for pod" podUID="86e0ea74-d593-456c-9d4a-b5487351acaa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 19 13:13:49 crc kubenswrapper[4861]: I0219 13:13:49.220779 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 19 13:13:49 crc kubenswrapper[4861]: I0219 13:13:49.222164 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 13:13:49 crc kubenswrapper[4861]: I0219 13:13:49.222702 4861 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca" exitCode=0 Feb 19 13:13:49 crc kubenswrapper[4861]: I0219 
13:13:49.222723 4861 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f" exitCode=0 Feb 19 13:13:49 crc kubenswrapper[4861]: I0219 13:13:49.222731 4861 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b" exitCode=0 Feb 19 13:13:49 crc kubenswrapper[4861]: I0219 13:13:49.222739 4861 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c" exitCode=2 Feb 19 13:13:49 crc kubenswrapper[4861]: I0219 13:13:49.222780 4861 scope.go:117] "RemoveContainer" containerID="68906f0a2cf74b107faf27357eff73dd70cb1f2657ab4404cfb58f74fbd25702" Feb 19 13:13:49 crc kubenswrapper[4861]: I0219 13:13:49.224144 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d2ed6e3a0c85cb8ac1bab6d0b612a8beb2c79ae52bde00d33de2ef64c8b6870d"} Feb 19 13:13:49 crc kubenswrapper[4861]: I0219 13:13:49.224164 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"4b46bb01b404bd39cfab5f001062f76861c5c37377631b6da377042400008c43"} Feb 19 13:13:49 crc kubenswrapper[4861]: E0219 13:13:49.225540 4861 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.177:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 13:13:49 crc kubenswrapper[4861]: I0219 13:13:49.225874 4861 status_manager.go:851] "Failed to get 
status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 19 13:13:49 crc kubenswrapper[4861]: I0219 13:13:49.226229 4861 status_manager.go:851] "Failed to get status for pod" podUID="86e0ea74-d593-456c-9d4a-b5487351acaa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 19 13:13:49 crc kubenswrapper[4861]: I0219 13:13:49.622129 4861 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 19 13:13:49 crc kubenswrapper[4861]: I0219 13:13:49.622630 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 19 13:13:50 crc kubenswrapper[4861]: I0219 13:13:50.238132 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 13:13:50 crc kubenswrapper[4861]: I0219 13:13:50.563269 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 13:13:50 crc kubenswrapper[4861]: I0219 13:13:50.564300 4861 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:13:50 crc kubenswrapper[4861]: I0219 13:13:50.564914 4861 status_manager.go:851] "Failed to get status for pod" podUID="86e0ea74-d593-456c-9d4a-b5487351acaa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 19 13:13:50 crc kubenswrapper[4861]: I0219 13:13:50.565201 4861 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 19 13:13:50 crc kubenswrapper[4861]: I0219 13:13:50.615910 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 13:13:50 crc kubenswrapper[4861]: I0219 13:13:50.616014 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 13:13:50 crc kubenswrapper[4861]: I0219 13:13:50.616020 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:13:50 crc kubenswrapper[4861]: I0219 13:13:50.616124 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:13:50 crc kubenswrapper[4861]: I0219 13:13:50.616181 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 13:13:50 crc kubenswrapper[4861]: I0219 13:13:50.616193 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:13:50 crc kubenswrapper[4861]: I0219 13:13:50.616617 4861 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:50 crc kubenswrapper[4861]: I0219 13:13:50.616704 4861 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:50 crc kubenswrapper[4861]: I0219 13:13:50.616729 4861 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:50 crc kubenswrapper[4861]: I0219 13:13:50.636874 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 13:13:50 crc kubenswrapper[4861]: I0219 13:13:50.637350 4861 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 19 13:13:50 crc kubenswrapper[4861]: I0219 13:13:50.637667 4861 status_manager.go:851] "Failed to get status for pod" podUID="86e0ea74-d593-456c-9d4a-b5487351acaa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 19 13:13:50 crc kubenswrapper[4861]: I0219 13:13:50.717432 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/86e0ea74-d593-456c-9d4a-b5487351acaa-kube-api-access\") pod \"86e0ea74-d593-456c-9d4a-b5487351acaa\" (UID: \"86e0ea74-d593-456c-9d4a-b5487351acaa\") " Feb 19 13:13:50 crc kubenswrapper[4861]: I0219 13:13:50.717478 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/86e0ea74-d593-456c-9d4a-b5487351acaa-kubelet-dir\") pod \"86e0ea74-d593-456c-9d4a-b5487351acaa\" (UID: \"86e0ea74-d593-456c-9d4a-b5487351acaa\") " Feb 19 13:13:50 crc kubenswrapper[4861]: I0219 13:13:50.717517 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/86e0ea74-d593-456c-9d4a-b5487351acaa-var-lock\") pod \"86e0ea74-d593-456c-9d4a-b5487351acaa\" (UID: \"86e0ea74-d593-456c-9d4a-b5487351acaa\") " Feb 19 13:13:50 crc kubenswrapper[4861]: I0219 13:13:50.717694 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86e0ea74-d593-456c-9d4a-b5487351acaa-var-lock" (OuterVolumeSpecName: "var-lock") pod "86e0ea74-d593-456c-9d4a-b5487351acaa" (UID: "86e0ea74-d593-456c-9d4a-b5487351acaa"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:13:50 crc kubenswrapper[4861]: I0219 13:13:50.717701 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86e0ea74-d593-456c-9d4a-b5487351acaa-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "86e0ea74-d593-456c-9d4a-b5487351acaa" (UID: "86e0ea74-d593-456c-9d4a-b5487351acaa"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:13:50 crc kubenswrapper[4861]: I0219 13:13:50.723304 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86e0ea74-d593-456c-9d4a-b5487351acaa-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "86e0ea74-d593-456c-9d4a-b5487351acaa" (UID: "86e0ea74-d593-456c-9d4a-b5487351acaa"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:13:50 crc kubenswrapper[4861]: I0219 13:13:50.818355 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86e0ea74-d593-456c-9d4a-b5487351acaa-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:50 crc kubenswrapper[4861]: I0219 13:13:50.818382 4861 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/86e0ea74-d593-456c-9d4a-b5487351acaa-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:50 crc kubenswrapper[4861]: I0219 13:13:50.818394 4861 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/86e0ea74-d593-456c-9d4a-b5487351acaa-var-lock\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:51 crc kubenswrapper[4861]: E0219 13:13:51.048329 4861 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1895a81322b9794e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container 
image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 13:13:48.580583758 +0000 UTC m=+243.241686986,LastTimestamp:2026-02-19 13:13:48.580583758 +0000 UTC m=+243.241686986,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 13:13:51 crc kubenswrapper[4861]: I0219 13:13:51.248173 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 13:13:51 crc kubenswrapper[4861]: I0219 13:13:51.249186 4861 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d" exitCode=0 Feb 19 13:13:51 crc kubenswrapper[4861]: I0219 13:13:51.249252 4861 scope.go:117] "RemoveContainer" containerID="8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca" Feb 19 13:13:51 crc kubenswrapper[4861]: I0219 13:13:51.249360 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:13:51 crc kubenswrapper[4861]: I0219 13:13:51.252726 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"86e0ea74-d593-456c-9d4a-b5487351acaa","Type":"ContainerDied","Data":"92bb2708ee5d175e3a2b836dff07a9ed4f0c1e3305dc4dc8d022e900fde844e1"} Feb 19 13:13:51 crc kubenswrapper[4861]: I0219 13:13:51.252750 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92bb2708ee5d175e3a2b836dff07a9ed4f0c1e3305dc4dc8d022e900fde844e1" Feb 19 13:13:51 crc kubenswrapper[4861]: I0219 13:13:51.252830 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 13:13:51 crc kubenswrapper[4861]: I0219 13:13:51.274144 4861 scope.go:117] "RemoveContainer" containerID="df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f" Feb 19 13:13:51 crc kubenswrapper[4861]: I0219 13:13:51.274987 4861 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 19 13:13:51 crc kubenswrapper[4861]: I0219 13:13:51.275470 4861 status_manager.go:851] "Failed to get status for pod" podUID="86e0ea74-d593-456c-9d4a-b5487351acaa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 19 13:13:51 crc kubenswrapper[4861]: I0219 13:13:51.275676 4861 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 19 13:13:51 crc kubenswrapper[4861]: I0219 13:13:51.275940 4861 status_manager.go:851] "Failed to get status for pod" podUID="86e0ea74-d593-456c-9d4a-b5487351acaa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 19 13:13:51 crc kubenswrapper[4861]: I0219 13:13:51.294190 4861 scope.go:117] "RemoveContainer" containerID="fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b" Feb 19 13:13:51 crc kubenswrapper[4861]: I0219 13:13:51.314390 4861 scope.go:117] "RemoveContainer" containerID="916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c" Feb 19 13:13:51 crc kubenswrapper[4861]: I0219 13:13:51.334107 4861 scope.go:117] "RemoveContainer" containerID="40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d" Feb 19 13:13:51 crc kubenswrapper[4861]: I0219 13:13:51.352759 4861 scope.go:117] "RemoveContainer" containerID="4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38" Feb 19 13:13:51 crc kubenswrapper[4861]: I0219 13:13:51.371230 4861 scope.go:117] "RemoveContainer" containerID="8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca" Feb 19 13:13:51 crc kubenswrapper[4861]: E0219 13:13:51.371649 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca\": container with ID starting with 8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca not found: ID does not exist" containerID="8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca" Feb 19 13:13:51 crc kubenswrapper[4861]: I0219 13:13:51.371685 4861 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca"} err="failed to get container status \"8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca\": rpc error: code = NotFound desc = could not find container \"8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca\": container with ID starting with 8765dbf3e3395fd37401a56e987d903e3f7af573db6962f9c404bfea8607f3ca not found: ID does not exist" Feb 19 13:13:51 crc kubenswrapper[4861]: I0219 13:13:51.371705 4861 scope.go:117] "RemoveContainer" containerID="df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f" Feb 19 13:13:51 crc kubenswrapper[4861]: E0219 13:13:51.372117 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\": container with ID starting with df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f not found: ID does not exist" containerID="df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f" Feb 19 13:13:51 crc kubenswrapper[4861]: I0219 13:13:51.372136 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f"} err="failed to get container status \"df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\": rpc error: code = NotFound desc = could not find container \"df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f\": container with ID starting with df86cece8ee04da6b77a38bcbed1ae8e711cb2670b794ff273a91152fdac177f not found: ID does not exist" Feb 19 13:13:51 crc kubenswrapper[4861]: I0219 13:13:51.372150 4861 scope.go:117] "RemoveContainer" containerID="fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b" Feb 19 13:13:51 crc kubenswrapper[4861]: E0219 
13:13:51.372317 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\": container with ID starting with fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b not found: ID does not exist" containerID="fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b" Feb 19 13:13:51 crc kubenswrapper[4861]: I0219 13:13:51.372330 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b"} err="failed to get container status \"fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\": rpc error: code = NotFound desc = could not find container \"fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b\": container with ID starting with fcc0f87c4307c68ff0f543c093bee443afb6b0bbf6b65ced580636f6d6d2351b not found: ID does not exist" Feb 19 13:13:51 crc kubenswrapper[4861]: I0219 13:13:51.372341 4861 scope.go:117] "RemoveContainer" containerID="916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c" Feb 19 13:13:51 crc kubenswrapper[4861]: E0219 13:13:51.372759 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\": container with ID starting with 916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c not found: ID does not exist" containerID="916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c" Feb 19 13:13:51 crc kubenswrapper[4861]: I0219 13:13:51.372790 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c"} err="failed to get container status \"916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\": rpc 
error: code = NotFound desc = could not find container \"916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c\": container with ID starting with 916f05a52a45d5f191d6321eb78d456cc6ea2d5c64d1a950226e8017eb24a34c not found: ID does not exist" Feb 19 13:13:51 crc kubenswrapper[4861]: I0219 13:13:51.372803 4861 scope.go:117] "RemoveContainer" containerID="40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d" Feb 19 13:13:51 crc kubenswrapper[4861]: E0219 13:13:51.373128 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\": container with ID starting with 40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d not found: ID does not exist" containerID="40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d" Feb 19 13:13:51 crc kubenswrapper[4861]: I0219 13:13:51.373182 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d"} err="failed to get container status \"40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\": rpc error: code = NotFound desc = could not find container \"40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d\": container with ID starting with 40eb63ce68f70b4fb0ede76815d60a74f54859d7385905e7c3b5f6b80f3a782d not found: ID does not exist" Feb 19 13:13:51 crc kubenswrapper[4861]: I0219 13:13:51.373215 4861 scope.go:117] "RemoveContainer" containerID="4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38" Feb 19 13:13:51 crc kubenswrapper[4861]: E0219 13:13:51.373471 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\": container with ID starting with 
4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38 not found: ID does not exist" containerID="4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38" Feb 19 13:13:51 crc kubenswrapper[4861]: I0219 13:13:51.373487 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38"} err="failed to get container status \"4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\": rpc error: code = NotFound desc = could not find container \"4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38\": container with ID starting with 4f7452eef2b0a252cdb02b9c8b755f0307b96d3d426c1322902637ec2f787e38 not found: ID does not exist" Feb 19 13:13:51 crc kubenswrapper[4861]: I0219 13:13:51.983006 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 19 13:13:53 crc kubenswrapper[4861]: E0219 13:13:53.009844 4861 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 19 13:13:53 crc kubenswrapper[4861]: E0219 13:13:53.010235 4861 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 19 13:13:53 crc kubenswrapper[4861]: E0219 13:13:53.010648 4861 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 19 13:13:53 crc kubenswrapper[4861]: E0219 13:13:53.011191 4861 controller.go:195] "Failed to update 
lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 19 13:13:53 crc kubenswrapper[4861]: E0219 13:13:53.011743 4861 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 19 13:13:53 crc kubenswrapper[4861]: I0219 13:13:53.011793 4861 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 19 13:13:53 crc kubenswrapper[4861]: E0219 13:13:53.012116 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="200ms" Feb 19 13:13:53 crc kubenswrapper[4861]: E0219 13:13:53.212857 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="400ms" Feb 19 13:13:53 crc kubenswrapper[4861]: E0219 13:13:53.613914 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="800ms" Feb 19 13:13:54 crc kubenswrapper[4861]: E0219 13:13:54.414463 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" 
interval="1.6s" Feb 19 13:13:55 crc kubenswrapper[4861]: I0219 13:13:55.982088 4861 status_manager.go:851] "Failed to get status for pod" podUID="86e0ea74-d593-456c-9d4a-b5487351acaa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 19 13:13:56 crc kubenswrapper[4861]: E0219 13:13:56.016107 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="3.2s" Feb 19 13:13:59 crc kubenswrapper[4861]: E0219 13:13:59.217053 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="6.4s" Feb 19 13:14:01 crc kubenswrapper[4861]: E0219 13:14:01.049548 4861 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1895a81322b9794e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 13:13:48.580583758 +0000 UTC m=+243.241686986,LastTimestamp:2026-02-19 13:13:48.580583758 +0000 UTC m=+243.241686986,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 13:14:01 crc kubenswrapper[4861]: I0219 13:14:01.317644 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 13:14:01 crc kubenswrapper[4861]: I0219 13:14:01.317979 4861 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e" exitCode=1 Feb 19 13:14:01 crc kubenswrapper[4861]: I0219 13:14:01.318247 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e"} Feb 19 13:14:01 crc kubenswrapper[4861]: I0219 13:14:01.318898 4861 scope.go:117] "RemoveContainer" containerID="9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e" Feb 19 13:14:01 crc kubenswrapper[4861]: I0219 13:14:01.319204 4861 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 19 13:14:01 crc kubenswrapper[4861]: I0219 13:14:01.319999 4861 status_manager.go:851] "Failed to get status for pod" podUID="86e0ea74-d593-456c-9d4a-b5487351acaa" pod="openshift-kube-apiserver/installer-9-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 19 13:14:01 crc kubenswrapper[4861]: I0219 13:14:01.978992 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:14:01 crc kubenswrapper[4861]: I0219 13:14:01.980453 4861 status_manager.go:851] "Failed to get status for pod" podUID="86e0ea74-d593-456c-9d4a-b5487351acaa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 19 13:14:01 crc kubenswrapper[4861]: I0219 13:14:01.980959 4861 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 19 13:14:01 crc kubenswrapper[4861]: I0219 13:14:01.995310 4861 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4599206a-22ea-4e74-acf8-fe2814bd0e7b" Feb 19 13:14:01 crc kubenswrapper[4861]: I0219 13:14:01.995357 4861 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4599206a-22ea-4e74-acf8-fe2814bd0e7b" Feb 19 13:14:01 crc kubenswrapper[4861]: E0219 13:14:01.995947 4861 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:14:01 crc kubenswrapper[4861]: I0219 13:14:01.996709 4861 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:14:02 crc kubenswrapper[4861]: W0219 13:14:02.018716 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-c08efa83faeeab8fa9587ce4fd973a9a505b1423d4aadb03072f53acc2171d12 WatchSource:0}: Error finding container c08efa83faeeab8fa9587ce4fd973a9a505b1423d4aadb03072f53acc2171d12: Status 404 returned error can't find the container with id c08efa83faeeab8fa9587ce4fd973a9a505b1423d4aadb03072f53acc2171d12 Feb 19 13:14:02 crc kubenswrapper[4861]: I0219 13:14:02.323953 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c08efa83faeeab8fa9587ce4fd973a9a505b1423d4aadb03072f53acc2171d12"} Feb 19 13:14:02 crc kubenswrapper[4861]: I0219 13:14:02.327010 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 13:14:02 crc kubenswrapper[4861]: I0219 13:14:02.327074 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0503e4c753479dd398c7ce00867b323d45c26754ee1f4d316e68de93ae3a15d8"} Feb 19 13:14:02 crc kubenswrapper[4861]: I0219 13:14:02.327966 4861 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 19 13:14:02 
crc kubenswrapper[4861]: I0219 13:14:02.328498 4861 status_manager.go:851] "Failed to get status for pod" podUID="86e0ea74-d593-456c-9d4a-b5487351acaa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 19 13:14:02 crc kubenswrapper[4861]: I0219 13:14:02.500585 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 13:14:02 crc kubenswrapper[4861]: I0219 13:14:02.500991 4861 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 19 13:14:02 crc kubenswrapper[4861]: I0219 13:14:02.501040 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 19 13:14:03 crc kubenswrapper[4861]: I0219 13:14:03.336679 4861 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="526c647e9a683e07c8df93fc2d44fd3dc1adaaab8d0601267c809e940dd18eef" exitCode=0 Feb 19 13:14:03 crc kubenswrapper[4861]: I0219 13:14:03.336784 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"526c647e9a683e07c8df93fc2d44fd3dc1adaaab8d0601267c809e940dd18eef"} Feb 19 13:14:03 crc kubenswrapper[4861]: I0219 13:14:03.337473 4861 kubelet.go:1909] 
"Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4599206a-22ea-4e74-acf8-fe2814bd0e7b" Feb 19 13:14:03 crc kubenswrapper[4861]: I0219 13:14:03.337555 4861 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4599206a-22ea-4e74-acf8-fe2814bd0e7b" Feb 19 13:14:03 crc kubenswrapper[4861]: I0219 13:14:03.337858 4861 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 19 13:14:03 crc kubenswrapper[4861]: I0219 13:14:03.338171 4861 status_manager.go:851] "Failed to get status for pod" podUID="86e0ea74-d593-456c-9d4a-b5487351acaa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 19 13:14:03 crc kubenswrapper[4861]: E0219 13:14:03.338373 4861 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:14:04 crc kubenswrapper[4861]: I0219 13:14:04.348720 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f91b0753a62d255213d7c82ec8b636ecc1e26ee09ce87ece134ae92c5915bb13"} Feb 19 13:14:04 crc kubenswrapper[4861]: I0219 13:14:04.349077 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2c0078f0dca4ec34e4faa52f034dcc105540e84c4fddbaf6922d5725b3680d2a"} Feb 19 13:14:04 crc kubenswrapper[4861]: I0219 13:14:04.349092 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3b3c5b4d8bf4efe841f89e13ef094f0ede28cd89001ae46a10ff1d765748e58f"} Feb 19 13:14:05 crc kubenswrapper[4861]: I0219 13:14:05.357525 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7bf50a5ca5ea5a21235b97e20c23f7cbb4f99bc510166566a42d976315190c00"} Feb 19 13:14:05 crc kubenswrapper[4861]: I0219 13:14:05.357860 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:14:05 crc kubenswrapper[4861]: I0219 13:14:05.357876 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fa60eac7eb05ea26bc4af608d8056316e526d78ffec37eb5250e5a52565fe238"} Feb 19 13:14:05 crc kubenswrapper[4861]: I0219 13:14:05.357805 4861 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4599206a-22ea-4e74-acf8-fe2814bd0e7b" Feb 19 13:14:05 crc kubenswrapper[4861]: I0219 13:14:05.357898 4861 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4599206a-22ea-4e74-acf8-fe2814bd0e7b" Feb 19 13:14:06 crc kubenswrapper[4861]: I0219 13:14:06.098568 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 13:14:06 crc kubenswrapper[4861]: I0219 13:14:06.997469 4861 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:14:06 crc kubenswrapper[4861]: I0219 13:14:06.997517 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:14:07 crc kubenswrapper[4861]: I0219 13:14:07.001812 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:14:10 crc kubenswrapper[4861]: I0219 13:14:10.368486 4861 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:14:10 crc kubenswrapper[4861]: I0219 13:14:10.418129 4861 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="7d9bdf7b-f312-4a8b-8fbc-30608d5ace46" Feb 19 13:14:11 crc kubenswrapper[4861]: I0219 13:14:11.387892 4861 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4599206a-22ea-4e74-acf8-fe2814bd0e7b" Feb 19 13:14:11 crc kubenswrapper[4861]: I0219 13:14:11.388208 4861 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4599206a-22ea-4e74-acf8-fe2814bd0e7b" Feb 19 13:14:11 crc kubenswrapper[4861]: I0219 13:14:11.390099 4861 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="7d9bdf7b-f312-4a8b-8fbc-30608d5ace46" Feb 19 13:14:12 crc kubenswrapper[4861]: I0219 13:14:12.500925 4861 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" 
start-of-body= Feb 19 13:14:12 crc kubenswrapper[4861]: I0219 13:14:12.501016 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 19 13:14:20 crc kubenswrapper[4861]: I0219 13:14:20.018885 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 19 13:14:20 crc kubenswrapper[4861]: I0219 13:14:20.427494 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 19 13:14:20 crc kubenswrapper[4861]: I0219 13:14:20.931587 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 19 13:14:21 crc kubenswrapper[4861]: I0219 13:14:21.401358 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 19 13:14:21 crc kubenswrapper[4861]: I0219 13:14:21.407868 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 13:14:21 crc kubenswrapper[4861]: I0219 13:14:21.510877 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 19 13:14:21 crc kubenswrapper[4861]: I0219 13:14:21.651863 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 19 13:14:21 crc kubenswrapper[4861]: I0219 13:14:21.714772 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 19 13:14:21 crc 
kubenswrapper[4861]: I0219 13:14:21.853918 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 19 13:14:21 crc kubenswrapper[4861]: I0219 13:14:21.909213 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 19 13:14:21 crc kubenswrapper[4861]: I0219 13:14:21.927497 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 19 13:14:22 crc kubenswrapper[4861]: I0219 13:14:22.056330 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 13:14:22 crc kubenswrapper[4861]: I0219 13:14:22.231082 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 19 13:14:22 crc kubenswrapper[4861]: I0219 13:14:22.254708 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 19 13:14:22 crc kubenswrapper[4861]: I0219 13:14:22.346200 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 19 13:14:22 crc kubenswrapper[4861]: I0219 13:14:22.373133 4861 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 19 13:14:22 crc kubenswrapper[4861]: I0219 13:14:22.435206 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 19 13:14:22 crc kubenswrapper[4861]: I0219 13:14:22.500794 4861 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 19 13:14:22 crc 
kubenswrapper[4861]: I0219 13:14:22.500868 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 19 13:14:22 crc kubenswrapper[4861]: I0219 13:14:22.500936 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 13:14:22 crc kubenswrapper[4861]: I0219 13:14:22.501975 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"0503e4c753479dd398c7ce00867b323d45c26754ee1f4d316e68de93ae3a15d8"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 19 13:14:22 crc kubenswrapper[4861]: I0219 13:14:22.502168 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://0503e4c753479dd398c7ce00867b323d45c26754ee1f4d316e68de93ae3a15d8" gracePeriod=30 Feb 19 13:14:22 crc kubenswrapper[4861]: I0219 13:14:22.754846 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 19 13:14:22 crc kubenswrapper[4861]: I0219 13:14:22.988007 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 19 13:14:23 crc kubenswrapper[4861]: I0219 13:14:22.994712 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 19 13:14:23 crc 
kubenswrapper[4861]: I0219 13:14:23.051441 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 13:14:23 crc kubenswrapper[4861]: I0219 13:14:23.275402 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 19 13:14:23 crc kubenswrapper[4861]: I0219 13:14:23.301203 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 19 13:14:23 crc kubenswrapper[4861]: I0219 13:14:23.336122 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 13:14:23 crc kubenswrapper[4861]: I0219 13:14:23.517141 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 19 13:14:23 crc kubenswrapper[4861]: I0219 13:14:23.574110 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 13:14:23 crc kubenswrapper[4861]: I0219 13:14:23.613348 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 13:14:23 crc kubenswrapper[4861]: I0219 13:14:23.768646 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 19 13:14:23 crc kubenswrapper[4861]: I0219 13:14:23.817952 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 19 13:14:23 crc kubenswrapper[4861]: I0219 13:14:23.948775 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 19 13:14:23 crc kubenswrapper[4861]: I0219 13:14:23.983609 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" 
Feb 19 13:14:23 crc kubenswrapper[4861]: I0219 13:14:23.993833 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 19 13:14:24 crc kubenswrapper[4861]: I0219 13:14:24.187222 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 19 13:14:24 crc kubenswrapper[4861]: I0219 13:14:24.276945 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 13:14:24 crc kubenswrapper[4861]: I0219 13:14:24.342822 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 19 13:14:24 crc kubenswrapper[4861]: I0219 13:14:24.391279 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 19 13:14:24 crc kubenswrapper[4861]: I0219 13:14:24.483363 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 19 13:14:24 crc kubenswrapper[4861]: I0219 13:14:24.535283 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 19 13:14:24 crc kubenswrapper[4861]: I0219 13:14:24.538337 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 19 13:14:24 crc kubenswrapper[4861]: I0219 13:14:24.631320 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 19 13:14:24 crc kubenswrapper[4861]: I0219 13:14:24.703993 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 19 13:14:24 crc kubenswrapper[4861]: I0219 13:14:24.722157 4861 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 13:14:24 crc kubenswrapper[4861]: I0219 13:14:24.739766 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 19 13:14:24 crc kubenswrapper[4861]: I0219 13:14:24.741527 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 19 13:14:24 crc kubenswrapper[4861]: I0219 13:14:24.772137 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 19 13:14:24 crc kubenswrapper[4861]: I0219 13:14:24.864772 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 13:14:24 crc kubenswrapper[4861]: I0219 13:14:24.918484 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 19 13:14:25 crc kubenswrapper[4861]: I0219 13:14:25.006680 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 19 13:14:25 crc kubenswrapper[4861]: I0219 13:14:25.037548 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 13:14:25 crc kubenswrapper[4861]: I0219 13:14:25.136417 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 19 13:14:25 crc kubenswrapper[4861]: I0219 13:14:25.161256 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 19 13:14:25 crc kubenswrapper[4861]: I0219 13:14:25.249475 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 19 13:14:25 crc kubenswrapper[4861]: I0219 13:14:25.330832 4861 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 19 13:14:25 crc kubenswrapper[4861]: I0219 13:14:25.352056 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 19 13:14:25 crc kubenswrapper[4861]: I0219 13:14:25.352859 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 19 13:14:25 crc kubenswrapper[4861]: I0219 13:14:25.367162 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 19 13:14:25 crc kubenswrapper[4861]: I0219 13:14:25.384498 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 19 13:14:25 crc kubenswrapper[4861]: I0219 13:14:25.446771 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 13:14:25 crc kubenswrapper[4861]: I0219 13:14:25.457023 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 13:14:25 crc kubenswrapper[4861]: I0219 13:14:25.479168 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 19 13:14:25 crc kubenswrapper[4861]: I0219 13:14:25.489105 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 19 13:14:25 crc kubenswrapper[4861]: I0219 13:14:25.499563 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 19 13:14:25 crc kubenswrapper[4861]: I0219 13:14:25.549067 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 13:14:25 crc 
kubenswrapper[4861]: I0219 13:14:25.597547 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 19 13:14:25 crc kubenswrapper[4861]: I0219 13:14:25.637916 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 19 13:14:25 crc kubenswrapper[4861]: I0219 13:14:25.686817 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 19 13:14:25 crc kubenswrapper[4861]: I0219 13:14:25.712260 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 19 13:14:25 crc kubenswrapper[4861]: I0219 13:14:25.718232 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 19 13:14:25 crc kubenswrapper[4861]: I0219 13:14:25.831601 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 19 13:14:25 crc kubenswrapper[4861]: I0219 13:14:25.909036 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 19 13:14:25 crc kubenswrapper[4861]: I0219 13:14:25.973925 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 19 13:14:26 crc kubenswrapper[4861]: I0219 13:14:26.076752 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 19 13:14:26 crc kubenswrapper[4861]: I0219 13:14:26.098519 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 19 13:14:26 crc kubenswrapper[4861]: I0219 13:14:26.110544 4861 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 19 13:14:26 crc kubenswrapper[4861]: I0219 13:14:26.148009 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 19 13:14:26 crc kubenswrapper[4861]: I0219 13:14:26.162906 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 19 13:14:26 crc kubenswrapper[4861]: I0219 13:14:26.197981 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 19 13:14:26 crc kubenswrapper[4861]: I0219 13:14:26.237685 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 13:14:26 crc kubenswrapper[4861]: I0219 13:14:26.262252 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 19 13:14:26 crc kubenswrapper[4861]: I0219 13:14:26.269948 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 19 13:14:26 crc kubenswrapper[4861]: I0219 13:14:26.270192 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 19 13:14:26 crc kubenswrapper[4861]: I0219 13:14:26.291155 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 19 13:14:26 crc kubenswrapper[4861]: I0219 13:14:26.345173 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 19 13:14:26 crc kubenswrapper[4861]: I0219 13:14:26.391336 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 19 
13:14:26 crc kubenswrapper[4861]: I0219 13:14:26.446750 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 19 13:14:26 crc kubenswrapper[4861]: I0219 13:14:26.627618 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 19 13:14:26 crc kubenswrapper[4861]: I0219 13:14:26.632620 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 19 13:14:26 crc kubenswrapper[4861]: I0219 13:14:26.843867 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 19 13:14:26 crc kubenswrapper[4861]: I0219 13:14:26.847367 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 19 13:14:26 crc kubenswrapper[4861]: I0219 13:14:26.924248 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 19 13:14:27 crc kubenswrapper[4861]: I0219 13:14:27.003409 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 13:14:27 crc kubenswrapper[4861]: I0219 13:14:27.037976 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 19 13:14:27 crc kubenswrapper[4861]: I0219 13:14:27.049072 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 19 13:14:27 crc kubenswrapper[4861]: I0219 13:14:27.083073 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 19 13:14:27 crc kubenswrapper[4861]: I0219 13:14:27.110270 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" 
Feb 19 13:14:27 crc kubenswrapper[4861]: I0219 13:14:27.113079 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 13:14:27 crc kubenswrapper[4861]: I0219 13:14:27.131755 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 19 13:14:27 crc kubenswrapper[4861]: I0219 13:14:27.138777 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 19 13:14:27 crc kubenswrapper[4861]: I0219 13:14:27.185967 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 13:14:27 crc kubenswrapper[4861]: I0219 13:14:27.211583 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 19 13:14:27 crc kubenswrapper[4861]: I0219 13:14:27.226092 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 19 13:14:27 crc kubenswrapper[4861]: I0219 13:14:27.240288 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 19 13:14:27 crc kubenswrapper[4861]: I0219 13:14:27.284884 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 19 13:14:27 crc kubenswrapper[4861]: I0219 13:14:27.288331 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 19 13:14:27 crc kubenswrapper[4861]: I0219 13:14:27.307085 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 19 13:14:27 crc kubenswrapper[4861]: I0219 13:14:27.330269 4861 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 19 13:14:27 crc kubenswrapper[4861]: I0219 13:14:27.335497 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 19 13:14:27 crc kubenswrapper[4861]: I0219 13:14:27.370083 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 19 13:14:27 crc kubenswrapper[4861]: I0219 13:14:27.491535 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 19 13:14:27 crc kubenswrapper[4861]: I0219 13:14:27.542204 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 19 13:14:27 crc kubenswrapper[4861]: I0219 13:14:27.562066 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 19 13:14:27 crc kubenswrapper[4861]: I0219 13:14:27.625128 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 19 13:14:27 crc kubenswrapper[4861]: I0219 13:14:27.627134 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 19 13:14:27 crc kubenswrapper[4861]: I0219 13:14:27.682932 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 19 13:14:27 crc kubenswrapper[4861]: I0219 13:14:27.691907 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 19 13:14:27 crc kubenswrapper[4861]: I0219 13:14:27.745309 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 19 13:14:27 crc 
kubenswrapper[4861]: I0219 13:14:27.934546 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 19 13:14:27 crc kubenswrapper[4861]: I0219 13:14:27.945106 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 13:14:27 crc kubenswrapper[4861]: I0219 13:14:27.948482 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 19 13:14:27 crc kubenswrapper[4861]: I0219 13:14:27.951506 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 19 13:14:28 crc kubenswrapper[4861]: I0219 13:14:28.093119 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 19 13:14:28 crc kubenswrapper[4861]: I0219 13:14:28.138259 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 19 13:14:28 crc kubenswrapper[4861]: I0219 13:14:28.253085 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 19 13:14:28 crc kubenswrapper[4861]: I0219 13:14:28.325217 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 19 13:14:28 crc kubenswrapper[4861]: I0219 13:14:28.392469 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 19 13:14:28 crc kubenswrapper[4861]: I0219 13:14:28.404893 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 13:14:28 crc kubenswrapper[4861]: I0219 13:14:28.508544 4861 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 19 13:14:28 crc kubenswrapper[4861]: I0219 13:14:28.547533 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 19 13:14:28 crc kubenswrapper[4861]: I0219 13:14:28.556559 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 19 13:14:28 crc kubenswrapper[4861]: I0219 13:14:28.593820 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 19 13:14:28 crc kubenswrapper[4861]: I0219 13:14:28.637848 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 19 13:14:28 crc kubenswrapper[4861]: I0219 13:14:28.652089 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 19 13:14:28 crc kubenswrapper[4861]: I0219 13:14:28.713466 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 19 13:14:28 crc kubenswrapper[4861]: I0219 13:14:28.785124 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 19 13:14:28 crc kubenswrapper[4861]: I0219 13:14:28.812844 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 19 13:14:28 crc kubenswrapper[4861]: I0219 13:14:28.832158 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 19 13:14:28 crc kubenswrapper[4861]: I0219 13:14:28.886929 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 19 13:14:28 crc kubenswrapper[4861]: I0219 
13:14:28.914182 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 13:14:28 crc kubenswrapper[4861]: I0219 13:14:28.946406 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 19 13:14:28 crc kubenswrapper[4861]: I0219 13:14:28.947982 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 19 13:14:28 crc kubenswrapper[4861]: I0219 13:14:28.968225 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 19 13:14:28 crc kubenswrapper[4861]: I0219 13:14:28.993321 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 13:14:29 crc kubenswrapper[4861]: I0219 13:14:29.012986 4861 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 19 13:14:29 crc kubenswrapper[4861]: I0219 13:14:29.110946 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 13:14:29 crc kubenswrapper[4861]: I0219 13:14:29.272857 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 19 13:14:29 crc kubenswrapper[4861]: I0219 13:14:29.274254 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 19 13:14:29 crc kubenswrapper[4861]: I0219 13:14:29.358919 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 13:14:29 crc kubenswrapper[4861]: I0219 13:14:29.468008 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 
13:14:29 crc kubenswrapper[4861]: I0219 13:14:29.519737 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 19 13:14:29 crc kubenswrapper[4861]: I0219 13:14:29.550636 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 19 13:14:29 crc kubenswrapper[4861]: I0219 13:14:29.635312 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 13:14:29 crc kubenswrapper[4861]: I0219 13:14:29.670655 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 19 13:14:29 crc kubenswrapper[4861]: I0219 13:14:29.722070 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 19 13:14:29 crc kubenswrapper[4861]: I0219 13:14:29.773575 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 19 13:14:29 crc kubenswrapper[4861]: I0219 13:14:29.833966 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 19 13:14:29 crc kubenswrapper[4861]: I0219 13:14:29.873852 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 19 13:14:29 crc kubenswrapper[4861]: I0219 13:14:29.897044 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 19 13:14:29 crc kubenswrapper[4861]: I0219 13:14:29.955153 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 19 13:14:29 crc kubenswrapper[4861]: I0219 13:14:29.968122 4861 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 19 13:14:30 crc kubenswrapper[4861]: I0219 13:14:30.308410 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 13:14:30 crc kubenswrapper[4861]: I0219 13:14:30.310501 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 13:14:30 crc kubenswrapper[4861]: I0219 13:14:30.318709 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 19 13:14:30 crc kubenswrapper[4861]: I0219 13:14:30.367140 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 19 13:14:30 crc kubenswrapper[4861]: I0219 13:14:30.392947 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 19 13:14:30 crc kubenswrapper[4861]: I0219 13:14:30.407718 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 19 13:14:30 crc kubenswrapper[4861]: I0219 13:14:30.408020 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 19 13:14:30 crc kubenswrapper[4861]: I0219 13:14:30.502290 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 19 13:14:30 crc kubenswrapper[4861]: I0219 13:14:30.582343 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 19 13:14:30 crc kubenswrapper[4861]: I0219 13:14:30.591812 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 19 13:14:30 crc 
kubenswrapper[4861]: I0219 13:14:30.723631 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 19 13:14:30 crc kubenswrapper[4861]: I0219 13:14:30.725394 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 19 13:14:30 crc kubenswrapper[4861]: I0219 13:14:30.726027 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 19 13:14:30 crc kubenswrapper[4861]: I0219 13:14:30.766837 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 19 13:14:30 crc kubenswrapper[4861]: I0219 13:14:30.844767 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 13:14:30 crc kubenswrapper[4861]: I0219 13:14:30.939236 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 19 13:14:30 crc kubenswrapper[4861]: I0219 13:14:30.984642 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 19 13:14:31 crc kubenswrapper[4861]: I0219 13:14:31.074710 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 19 13:14:31 crc kubenswrapper[4861]: I0219 13:14:31.116298 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 13:14:31 crc kubenswrapper[4861]: I0219 13:14:31.152497 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 19 13:14:31 crc kubenswrapper[4861]: I0219 13:14:31.206329 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 19 13:14:31 crc 
kubenswrapper[4861]: I0219 13:14:31.249159 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 19 13:14:31 crc kubenswrapper[4861]: I0219 13:14:31.274799 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 19 13:14:31 crc kubenswrapper[4861]: I0219 13:14:31.324402 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 19 13:14:31 crc kubenswrapper[4861]: I0219 13:14:31.324913 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 19 13:14:31 crc kubenswrapper[4861]: I0219 13:14:31.361330 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 13:14:31 crc kubenswrapper[4861]: I0219 13:14:31.389816 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 19 13:14:31 crc kubenswrapper[4861]: I0219 13:14:31.449126 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 19 13:14:31 crc kubenswrapper[4861]: I0219 13:14:31.453384 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 19 13:14:31 crc kubenswrapper[4861]: I0219 13:14:31.475898 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 19 13:14:31 crc kubenswrapper[4861]: I0219 13:14:31.670056 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 13:14:31 crc kubenswrapper[4861]: I0219 13:14:31.682051 4861 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 19 13:14:31 crc kubenswrapper[4861]: I0219 13:14:31.812998 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 19 13:14:31 crc kubenswrapper[4861]: I0219 13:14:31.842364 4861 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 19 13:14:31 crc kubenswrapper[4861]: I0219 13:14:31.850205 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 13:14:31 crc kubenswrapper[4861]: I0219 13:14:31.850298 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 13:14:31 crc kubenswrapper[4861]: I0219 13:14:31.850968 4861 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4599206a-22ea-4e74-acf8-fe2814bd0e7b" Feb 19 13:14:31 crc kubenswrapper[4861]: I0219 13:14:31.851015 4861 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4599206a-22ea-4e74-acf8-fe2814bd0e7b" Feb 19 13:14:31 crc kubenswrapper[4861]: I0219 13:14:31.856897 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:14:31 crc kubenswrapper[4861]: I0219 13:14:31.860655 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 13:14:31 crc kubenswrapper[4861]: I0219 13:14:31.874026 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.87400611 podStartE2EDuration="21.87400611s" podCreationTimestamp="2026-02-19 13:14:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 13:14:31.871529016 +0000 UTC m=+286.532632254" watchObservedRunningTime="2026-02-19 13:14:31.87400611 +0000 UTC m=+286.535109338" Feb 19 13:14:31 crc kubenswrapper[4861]: I0219 13:14:31.909709 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 19 13:14:31 crc kubenswrapper[4861]: I0219 13:14:31.958708 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 19 13:14:31 crc kubenswrapper[4861]: I0219 13:14:31.965077 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 19 13:14:31 crc kubenswrapper[4861]: I0219 13:14:31.988208 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 19 13:14:32 crc kubenswrapper[4861]: I0219 13:14:32.045023 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 19 13:14:32 crc kubenswrapper[4861]: I0219 13:14:32.046872 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 19 13:14:32 crc kubenswrapper[4861]: I0219 13:14:32.069202 4861 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 19 13:14:32 crc kubenswrapper[4861]: I0219 13:14:32.112993 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 19 13:14:32 crc kubenswrapper[4861]: I0219 13:14:32.146071 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 19 13:14:32 crc kubenswrapper[4861]: I0219 13:14:32.301690 4861 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 13:14:32 crc kubenswrapper[4861]: I0219 13:14:32.333039 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 13:14:32 crc kubenswrapper[4861]: I0219 13:14:32.339282 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 19 13:14:32 crc kubenswrapper[4861]: I0219 13:14:32.371241 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 19 13:14:32 crc kubenswrapper[4861]: I0219 13:14:32.440549 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 13:14:32 crc kubenswrapper[4861]: I0219 13:14:32.566072 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 19 13:14:32 crc kubenswrapper[4861]: I0219 13:14:32.610366 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 19 13:14:32 crc kubenswrapper[4861]: I0219 13:14:32.610771 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 19 13:14:32 crc kubenswrapper[4861]: I0219 13:14:32.632764 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 19 13:14:32 crc kubenswrapper[4861]: I0219 13:14:32.634068 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 19 13:14:32 crc kubenswrapper[4861]: I0219 13:14:32.698815 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 19 13:14:32 crc 
kubenswrapper[4861]: I0219 13:14:32.841349 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 19 13:14:32 crc kubenswrapper[4861]: I0219 13:14:32.853186 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 19 13:14:32 crc kubenswrapper[4861]: I0219 13:14:32.899145 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 19 13:14:32 crc kubenswrapper[4861]: I0219 13:14:32.944713 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 19 13:14:33 crc kubenswrapper[4861]: I0219 13:14:33.000628 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 19 13:14:33 crc kubenswrapper[4861]: I0219 13:14:33.028661 4861 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 13:14:33 crc kubenswrapper[4861]: I0219 13:14:33.028962 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://d2ed6e3a0c85cb8ac1bab6d0b612a8beb2c79ae52bde00d33de2ef64c8b6870d" gracePeriod=5 Feb 19 13:14:33 crc kubenswrapper[4861]: I0219 13:14:33.037053 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 19 13:14:33 crc kubenswrapper[4861]: I0219 13:14:33.051989 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 19 13:14:33 crc kubenswrapper[4861]: I0219 13:14:33.070265 4861 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 13:14:33 crc kubenswrapper[4861]: I0219 13:14:33.109659 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 19 13:14:33 crc kubenswrapper[4861]: I0219 13:14:33.170666 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 19 13:14:33 crc kubenswrapper[4861]: I0219 13:14:33.223520 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 19 13:14:33 crc kubenswrapper[4861]: I0219 13:14:33.331052 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 13:14:33 crc kubenswrapper[4861]: I0219 13:14:33.338905 4861 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 13:14:33 crc kubenswrapper[4861]: I0219 13:14:33.410648 4861 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 13:14:33 crc kubenswrapper[4861]: I0219 13:14:33.414088 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 19 13:14:33 crc kubenswrapper[4861]: I0219 13:14:33.585643 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 19 13:14:33 crc kubenswrapper[4861]: I0219 13:14:33.589144 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 19 13:14:33 crc kubenswrapper[4861]: I0219 13:14:33.669459 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 19 13:14:33 crc kubenswrapper[4861]: I0219 13:14:33.729391 4861 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 19 13:14:33 crc kubenswrapper[4861]: I0219 13:14:33.744905 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 19 13:14:33 crc kubenswrapper[4861]: I0219 13:14:33.826224 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 19 13:14:33 crc kubenswrapper[4861]: I0219 13:14:33.832784 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 19 13:14:33 crc kubenswrapper[4861]: I0219 13:14:33.851829 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 19 13:14:33 crc kubenswrapper[4861]: I0219 13:14:33.917805 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 19 13:14:34 crc kubenswrapper[4861]: I0219 13:14:34.005865 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 19 13:14:34 crc kubenswrapper[4861]: I0219 13:14:34.010408 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 19 13:14:34 crc kubenswrapper[4861]: I0219 13:14:34.229117 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 13:14:34 crc kubenswrapper[4861]: I0219 13:14:34.629534 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 19 13:14:34 crc kubenswrapper[4861]: I0219 13:14:34.652883 4861 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 19 13:14:34 crc kubenswrapper[4861]: I0219 13:14:34.870743 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 13:14:35 crc kubenswrapper[4861]: I0219 13:14:35.307534 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 19 13:14:35 crc kubenswrapper[4861]: I0219 13:14:35.469779 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 13:14:35 crc kubenswrapper[4861]: I0219 13:14:35.761853 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 19 13:14:36 crc kubenswrapper[4861]: I0219 13:14:36.228239 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 19 13:14:36 crc kubenswrapper[4861]: I0219 13:14:36.729827 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 13:14:36 crc kubenswrapper[4861]: I0219 13:14:36.750798 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 19 13:14:36 crc kubenswrapper[4861]: I0219 13:14:36.982792 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 13:14:38 crc kubenswrapper[4861]: I0219 13:14:38.558505 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 19 13:14:38 crc kubenswrapper[4861]: I0219 13:14:38.558630 4861 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" 
containerID="d2ed6e3a0c85cb8ac1bab6d0b612a8beb2c79ae52bde00d33de2ef64c8b6870d" exitCode=137 Feb 19 13:14:38 crc kubenswrapper[4861]: I0219 13:14:38.626068 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 19 13:14:38 crc kubenswrapper[4861]: I0219 13:14:38.626190 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 13:14:38 crc kubenswrapper[4861]: I0219 13:14:38.754558 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 13:14:38 crc kubenswrapper[4861]: I0219 13:14:38.754667 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 13:14:38 crc kubenswrapper[4861]: I0219 13:14:38.754723 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:14:38 crc kubenswrapper[4861]: I0219 13:14:38.754766 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 13:14:38 crc kubenswrapper[4861]: I0219 13:14:38.754848 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:14:38 crc kubenswrapper[4861]: I0219 13:14:38.754926 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 13:14:38 crc kubenswrapper[4861]: I0219 13:14:38.755000 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 13:14:38 crc kubenswrapper[4861]: I0219 13:14:38.754936 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:14:38 crc kubenswrapper[4861]: I0219 13:14:38.755055 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:14:38 crc kubenswrapper[4861]: I0219 13:14:38.755675 4861 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 19 13:14:38 crc kubenswrapper[4861]: I0219 13:14:38.755706 4861 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 19 13:14:38 crc kubenswrapper[4861]: I0219 13:14:38.755730 4861 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 13:14:38 crc kubenswrapper[4861]: I0219 13:14:38.755755 4861 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 19 13:14:38 crc kubenswrapper[4861]: I0219 13:14:38.768985 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:14:38 crc kubenswrapper[4861]: I0219 13:14:38.857930 4861 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 13:14:39 crc kubenswrapper[4861]: I0219 13:14:39.565924 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 19 13:14:39 crc kubenswrapper[4861]: I0219 13:14:39.566000 4861 scope.go:117] "RemoveContainer" containerID="d2ed6e3a0c85cb8ac1bab6d0b612a8beb2c79ae52bde00d33de2ef64c8b6870d" Feb 19 13:14:39 crc kubenswrapper[4861]: I0219 13:14:39.566034 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 13:14:39 crc kubenswrapper[4861]: I0219 13:14:39.988517 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 19 13:14:45 crc kubenswrapper[4861]: I0219 13:14:45.761682 4861 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 19 13:14:52 crc kubenswrapper[4861]: I0219 13:14:52.650185 4861 generic.go:334] "Generic (PLEG): container finished" podID="aa335a0a-4c08-4593-8c73-e0c2adeb76b7" containerID="c88eb6c5983f5719b7427630164b540b61a7cb5dc082b7a09881c8e2de81af41" exitCode=0 Feb 19 13:14:52 crc kubenswrapper[4861]: I0219 13:14:52.650256 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qwjzc" event={"ID":"aa335a0a-4c08-4593-8c73-e0c2adeb76b7","Type":"ContainerDied","Data":"c88eb6c5983f5719b7427630164b540b61a7cb5dc082b7a09881c8e2de81af41"} Feb 19 13:14:52 crc 
kubenswrapper[4861]: I0219 13:14:52.652954 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 19 13:14:52 crc kubenswrapper[4861]: I0219 13:14:52.653522 4861 scope.go:117] "RemoveContainer" containerID="c88eb6c5983f5719b7427630164b540b61a7cb5dc082b7a09881c8e2de81af41" Feb 19 13:14:52 crc kubenswrapper[4861]: I0219 13:14:52.655122 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 13:14:52 crc kubenswrapper[4861]: I0219 13:14:52.655157 4861 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="0503e4c753479dd398c7ce00867b323d45c26754ee1f4d316e68de93ae3a15d8" exitCode=137 Feb 19 13:14:52 crc kubenswrapper[4861]: I0219 13:14:52.655195 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"0503e4c753479dd398c7ce00867b323d45c26754ee1f4d316e68de93ae3a15d8"} Feb 19 13:14:52 crc kubenswrapper[4861]: I0219 13:14:52.655254 4861 scope.go:117] "RemoveContainer" containerID="9c5fd8410b11786f5a73891467365a4c0d16f42b023a57afa36988304491172e" Feb 19 13:14:53 crc kubenswrapper[4861]: I0219 13:14:53.583827 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 19 13:14:53 crc kubenswrapper[4861]: I0219 13:14:53.663984 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 19 13:14:53 crc kubenswrapper[4861]: I0219 13:14:53.665726 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a181a79d3228822ff957ecea0349ad27e5c821d331fd3d2f15edbe60cc24322b"} Feb 19 13:14:53 crc kubenswrapper[4861]: I0219 13:14:53.669657 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qwjzc" event={"ID":"aa335a0a-4c08-4593-8c73-e0c2adeb76b7","Type":"ContainerStarted","Data":"e608b4d982b6cbbe8129d7a6b47d0dca529b40a5d926d2ac3bb346b02944696a"} Feb 19 13:14:53 crc kubenswrapper[4861]: I0219 13:14:53.670024 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-qwjzc" Feb 19 13:14:53 crc kubenswrapper[4861]: I0219 13:14:53.673769 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-qwjzc" Feb 19 13:14:56 crc kubenswrapper[4861]: I0219 13:14:56.098877 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 13:14:57 crc kubenswrapper[4861]: I0219 13:14:57.762647 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 19 13:15:02 crc kubenswrapper[4861]: I0219 13:15:02.501188 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 13:15:02 crc kubenswrapper[4861]: I0219 13:15:02.507642 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 13:15:03 crc kubenswrapper[4861]: I0219 13:15:03.225376 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 13:15:18 crc kubenswrapper[4861]: I0219 
13:15:18.211857 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525115-zjkrp"] Feb 19 13:15:18 crc kubenswrapper[4861]: E0219 13:15:18.212610 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 19 13:15:18 crc kubenswrapper[4861]: I0219 13:15:18.212626 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 19 13:15:18 crc kubenswrapper[4861]: E0219 13:15:18.212651 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e0ea74-d593-456c-9d4a-b5487351acaa" containerName="installer" Feb 19 13:15:18 crc kubenswrapper[4861]: I0219 13:15:18.212660 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e0ea74-d593-456c-9d4a-b5487351acaa" containerName="installer" Feb 19 13:15:18 crc kubenswrapper[4861]: I0219 13:15:18.212774 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="86e0ea74-d593-456c-9d4a-b5487351acaa" containerName="installer" Feb 19 13:15:18 crc kubenswrapper[4861]: I0219 13:15:18.212789 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 19 13:15:18 crc kubenswrapper[4861]: I0219 13:15:18.213201 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525115-zjkrp" Feb 19 13:15:18 crc kubenswrapper[4861]: I0219 13:15:18.215785 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 13:15:18 crc kubenswrapper[4861]: I0219 13:15:18.216272 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 13:15:18 crc kubenswrapper[4861]: I0219 13:15:18.242068 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525115-zjkrp"] Feb 19 13:15:18 crc kubenswrapper[4861]: I0219 13:15:18.285336 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxhn7\" (UniqueName: \"kubernetes.io/projected/e5761329-3f84-46d6-a0de-ca8addac06ec-kube-api-access-pxhn7\") pod \"collect-profiles-29525115-zjkrp\" (UID: \"e5761329-3f84-46d6-a0de-ca8addac06ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525115-zjkrp" Feb 19 13:15:18 crc kubenswrapper[4861]: I0219 13:15:18.285452 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5761329-3f84-46d6-a0de-ca8addac06ec-secret-volume\") pod \"collect-profiles-29525115-zjkrp\" (UID: \"e5761329-3f84-46d6-a0de-ca8addac06ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525115-zjkrp" Feb 19 13:15:18 crc kubenswrapper[4861]: I0219 13:15:18.285478 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5761329-3f84-46d6-a0de-ca8addac06ec-config-volume\") pod \"collect-profiles-29525115-zjkrp\" (UID: \"e5761329-3f84-46d6-a0de-ca8addac06ec\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525115-zjkrp" Feb 19 13:15:18 crc kubenswrapper[4861]: I0219 13:15:18.386171 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxhn7\" (UniqueName: \"kubernetes.io/projected/e5761329-3f84-46d6-a0de-ca8addac06ec-kube-api-access-pxhn7\") pod \"collect-profiles-29525115-zjkrp\" (UID: \"e5761329-3f84-46d6-a0de-ca8addac06ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525115-zjkrp" Feb 19 13:15:18 crc kubenswrapper[4861]: I0219 13:15:18.386275 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5761329-3f84-46d6-a0de-ca8addac06ec-secret-volume\") pod \"collect-profiles-29525115-zjkrp\" (UID: \"e5761329-3f84-46d6-a0de-ca8addac06ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525115-zjkrp" Feb 19 13:15:18 crc kubenswrapper[4861]: I0219 13:15:18.386302 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5761329-3f84-46d6-a0de-ca8addac06ec-config-volume\") pod \"collect-profiles-29525115-zjkrp\" (UID: \"e5761329-3f84-46d6-a0de-ca8addac06ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525115-zjkrp" Feb 19 13:15:18 crc kubenswrapper[4861]: I0219 13:15:18.387394 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5761329-3f84-46d6-a0de-ca8addac06ec-config-volume\") pod \"collect-profiles-29525115-zjkrp\" (UID: \"e5761329-3f84-46d6-a0de-ca8addac06ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525115-zjkrp" Feb 19 13:15:18 crc kubenswrapper[4861]: I0219 13:15:18.397376 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/e5761329-3f84-46d6-a0de-ca8addac06ec-secret-volume\") pod \"collect-profiles-29525115-zjkrp\" (UID: \"e5761329-3f84-46d6-a0de-ca8addac06ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525115-zjkrp" Feb 19 13:15:18 crc kubenswrapper[4861]: I0219 13:15:18.423340 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxhn7\" (UniqueName: \"kubernetes.io/projected/e5761329-3f84-46d6-a0de-ca8addac06ec-kube-api-access-pxhn7\") pod \"collect-profiles-29525115-zjkrp\" (UID: \"e5761329-3f84-46d6-a0de-ca8addac06ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525115-zjkrp" Feb 19 13:15:18 crc kubenswrapper[4861]: I0219 13:15:18.542433 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525115-zjkrp" Feb 19 13:15:18 crc kubenswrapper[4861]: I0219 13:15:18.943064 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525115-zjkrp"] Feb 19 13:15:19 crc kubenswrapper[4861]: I0219 13:15:19.335284 4861 generic.go:334] "Generic (PLEG): container finished" podID="e5761329-3f84-46d6-a0de-ca8addac06ec" containerID="35a0405bec97e5b45fc6f77e5f05f5fcd2997c41954e61034249a3429e2b1db4" exitCode=0 Feb 19 13:15:19 crc kubenswrapper[4861]: I0219 13:15:19.335337 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525115-zjkrp" event={"ID":"e5761329-3f84-46d6-a0de-ca8addac06ec","Type":"ContainerDied","Data":"35a0405bec97e5b45fc6f77e5f05f5fcd2997c41954e61034249a3429e2b1db4"} Feb 19 13:15:19 crc kubenswrapper[4861]: I0219 13:15:19.335373 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525115-zjkrp" 
event={"ID":"e5761329-3f84-46d6-a0de-ca8addac06ec","Type":"ContainerStarted","Data":"b885d45a3641095af6262ea5c1909297b7aae4bd457c41fc71c0b5a27471a801"} Feb 19 13:15:20 crc kubenswrapper[4861]: I0219 13:15:20.682288 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525115-zjkrp" Feb 19 13:15:20 crc kubenswrapper[4861]: I0219 13:15:20.721482 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5761329-3f84-46d6-a0de-ca8addac06ec-secret-volume\") pod \"e5761329-3f84-46d6-a0de-ca8addac06ec\" (UID: \"e5761329-3f84-46d6-a0de-ca8addac06ec\") " Feb 19 13:15:20 crc kubenswrapper[4861]: I0219 13:15:20.721557 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxhn7\" (UniqueName: \"kubernetes.io/projected/e5761329-3f84-46d6-a0de-ca8addac06ec-kube-api-access-pxhn7\") pod \"e5761329-3f84-46d6-a0de-ca8addac06ec\" (UID: \"e5761329-3f84-46d6-a0de-ca8addac06ec\") " Feb 19 13:15:20 crc kubenswrapper[4861]: I0219 13:15:20.721740 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5761329-3f84-46d6-a0de-ca8addac06ec-config-volume\") pod \"e5761329-3f84-46d6-a0de-ca8addac06ec\" (UID: \"e5761329-3f84-46d6-a0de-ca8addac06ec\") " Feb 19 13:15:20 crc kubenswrapper[4861]: I0219 13:15:20.722826 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5761329-3f84-46d6-a0de-ca8addac06ec-config-volume" (OuterVolumeSpecName: "config-volume") pod "e5761329-3f84-46d6-a0de-ca8addac06ec" (UID: "e5761329-3f84-46d6-a0de-ca8addac06ec"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:15:20 crc kubenswrapper[4861]: I0219 13:15:20.729668 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5761329-3f84-46d6-a0de-ca8addac06ec-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e5761329-3f84-46d6-a0de-ca8addac06ec" (UID: "e5761329-3f84-46d6-a0de-ca8addac06ec"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:15:20 crc kubenswrapper[4861]: I0219 13:15:20.729769 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5761329-3f84-46d6-a0de-ca8addac06ec-kube-api-access-pxhn7" (OuterVolumeSpecName: "kube-api-access-pxhn7") pod "e5761329-3f84-46d6-a0de-ca8addac06ec" (UID: "e5761329-3f84-46d6-a0de-ca8addac06ec"). InnerVolumeSpecName "kube-api-access-pxhn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:15:20 crc kubenswrapper[4861]: I0219 13:15:20.824999 4861 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5761329-3f84-46d6-a0de-ca8addac06ec-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 13:15:20 crc kubenswrapper[4861]: I0219 13:15:20.825074 4861 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5761329-3f84-46d6-a0de-ca8addac06ec-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 13:15:20 crc kubenswrapper[4861]: I0219 13:15:20.825101 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxhn7\" (UniqueName: \"kubernetes.io/projected/e5761329-3f84-46d6-a0de-ca8addac06ec-kube-api-access-pxhn7\") on node \"crc\" DevicePath \"\"" Feb 19 13:15:21 crc kubenswrapper[4861]: I0219 13:15:21.354035 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525115-zjkrp" 
event={"ID":"e5761329-3f84-46d6-a0de-ca8addac06ec","Type":"ContainerDied","Data":"b885d45a3641095af6262ea5c1909297b7aae4bd457c41fc71c0b5a27471a801"} Feb 19 13:15:21 crc kubenswrapper[4861]: I0219 13:15:21.354123 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b885d45a3641095af6262ea5c1909297b7aae4bd457c41fc71c0b5a27471a801" Feb 19 13:15:21 crc kubenswrapper[4861]: I0219 13:15:21.354263 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525115-zjkrp" Feb 19 13:16:02 crc kubenswrapper[4861]: I0219 13:16:02.913733 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hmr4j"] Feb 19 13:16:02 crc kubenswrapper[4861]: E0219 13:16:02.915048 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5761329-3f84-46d6-a0de-ca8addac06ec" containerName="collect-profiles" Feb 19 13:16:02 crc kubenswrapper[4861]: I0219 13:16:02.915070 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5761329-3f84-46d6-a0de-ca8addac06ec" containerName="collect-profiles" Feb 19 13:16:02 crc kubenswrapper[4861]: I0219 13:16:02.915215 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5761329-3f84-46d6-a0de-ca8addac06ec" containerName="collect-profiles" Feb 19 13:16:02 crc kubenswrapper[4861]: I0219 13:16:02.915937 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-hmr4j" Feb 19 13:16:02 crc kubenswrapper[4861]: I0219 13:16:02.935819 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hmr4j"] Feb 19 13:16:03 crc kubenswrapper[4861]: I0219 13:16:03.066806 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48lrc\" (UniqueName: \"kubernetes.io/projected/5c35ebdf-b776-44b1-b29f-94c52011a838-kube-api-access-48lrc\") pod \"image-registry-66df7c8f76-hmr4j\" (UID: \"5c35ebdf-b776-44b1-b29f-94c52011a838\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmr4j" Feb 19 13:16:03 crc kubenswrapper[4861]: I0219 13:16:03.066890 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5c35ebdf-b776-44b1-b29f-94c52011a838-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hmr4j\" (UID: \"5c35ebdf-b776-44b1-b29f-94c52011a838\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmr4j" Feb 19 13:16:03 crc kubenswrapper[4861]: I0219 13:16:03.066921 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5c35ebdf-b776-44b1-b29f-94c52011a838-registry-certificates\") pod \"image-registry-66df7c8f76-hmr4j\" (UID: \"5c35ebdf-b776-44b1-b29f-94c52011a838\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmr4j" Feb 19 13:16:03 crc kubenswrapper[4861]: I0219 13:16:03.066975 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5c35ebdf-b776-44b1-b29f-94c52011a838-registry-tls\") pod \"image-registry-66df7c8f76-hmr4j\" (UID: \"5c35ebdf-b776-44b1-b29f-94c52011a838\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-hmr4j" Feb 19 13:16:03 crc kubenswrapper[4861]: I0219 13:16:03.067011 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c35ebdf-b776-44b1-b29f-94c52011a838-trusted-ca\") pod \"image-registry-66df7c8f76-hmr4j\" (UID: \"5c35ebdf-b776-44b1-b29f-94c52011a838\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmr4j" Feb 19 13:16:03 crc kubenswrapper[4861]: I0219 13:16:03.067052 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5c35ebdf-b776-44b1-b29f-94c52011a838-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hmr4j\" (UID: \"5c35ebdf-b776-44b1-b29f-94c52011a838\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmr4j" Feb 19 13:16:03 crc kubenswrapper[4861]: I0219 13:16:03.067108 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-hmr4j\" (UID: \"5c35ebdf-b776-44b1-b29f-94c52011a838\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmr4j" Feb 19 13:16:03 crc kubenswrapper[4861]: I0219 13:16:03.067173 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5c35ebdf-b776-44b1-b29f-94c52011a838-bound-sa-token\") pod \"image-registry-66df7c8f76-hmr4j\" (UID: \"5c35ebdf-b776-44b1-b29f-94c52011a838\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmr4j" Feb 19 13:16:03 crc kubenswrapper[4861]: I0219 13:16:03.097144 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-hmr4j\" (UID: \"5c35ebdf-b776-44b1-b29f-94c52011a838\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmr4j" Feb 19 13:16:03 crc kubenswrapper[4861]: I0219 13:16:03.169006 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5c35ebdf-b776-44b1-b29f-94c52011a838-registry-tls\") pod \"image-registry-66df7c8f76-hmr4j\" (UID: \"5c35ebdf-b776-44b1-b29f-94c52011a838\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmr4j" Feb 19 13:16:03 crc kubenswrapper[4861]: I0219 13:16:03.169062 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c35ebdf-b776-44b1-b29f-94c52011a838-trusted-ca\") pod \"image-registry-66df7c8f76-hmr4j\" (UID: \"5c35ebdf-b776-44b1-b29f-94c52011a838\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmr4j" Feb 19 13:16:03 crc kubenswrapper[4861]: I0219 13:16:03.169088 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5c35ebdf-b776-44b1-b29f-94c52011a838-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hmr4j\" (UID: \"5c35ebdf-b776-44b1-b29f-94c52011a838\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmr4j" Feb 19 13:16:03 crc kubenswrapper[4861]: I0219 13:16:03.169146 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5c35ebdf-b776-44b1-b29f-94c52011a838-bound-sa-token\") pod \"image-registry-66df7c8f76-hmr4j\" (UID: \"5c35ebdf-b776-44b1-b29f-94c52011a838\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmr4j" Feb 19 13:16:03 crc kubenswrapper[4861]: I0219 13:16:03.169179 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48lrc\" (UniqueName: \"kubernetes.io/projected/5c35ebdf-b776-44b1-b29f-94c52011a838-kube-api-access-48lrc\") pod \"image-registry-66df7c8f76-hmr4j\" (UID: \"5c35ebdf-b776-44b1-b29f-94c52011a838\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmr4j" Feb 19 13:16:03 crc kubenswrapper[4861]: I0219 13:16:03.169203 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5c35ebdf-b776-44b1-b29f-94c52011a838-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hmr4j\" (UID: \"5c35ebdf-b776-44b1-b29f-94c52011a838\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmr4j" Feb 19 13:16:03 crc kubenswrapper[4861]: I0219 13:16:03.169224 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5c35ebdf-b776-44b1-b29f-94c52011a838-registry-certificates\") pod \"image-registry-66df7c8f76-hmr4j\" (UID: \"5c35ebdf-b776-44b1-b29f-94c52011a838\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmr4j" Feb 19 13:16:03 crc kubenswrapper[4861]: I0219 13:16:03.170719 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c35ebdf-b776-44b1-b29f-94c52011a838-trusted-ca\") pod \"image-registry-66df7c8f76-hmr4j\" (UID: \"5c35ebdf-b776-44b1-b29f-94c52011a838\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmr4j" Feb 19 13:16:03 crc kubenswrapper[4861]: I0219 13:16:03.170746 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5c35ebdf-b776-44b1-b29f-94c52011a838-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hmr4j\" (UID: \"5c35ebdf-b776-44b1-b29f-94c52011a838\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-hmr4j" Feb 19 13:16:03 crc kubenswrapper[4861]: I0219 13:16:03.171174 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5c35ebdf-b776-44b1-b29f-94c52011a838-registry-certificates\") pod \"image-registry-66df7c8f76-hmr4j\" (UID: \"5c35ebdf-b776-44b1-b29f-94c52011a838\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmr4j" Feb 19 13:16:03 crc kubenswrapper[4861]: I0219 13:16:03.178051 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5c35ebdf-b776-44b1-b29f-94c52011a838-registry-tls\") pod \"image-registry-66df7c8f76-hmr4j\" (UID: \"5c35ebdf-b776-44b1-b29f-94c52011a838\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmr4j" Feb 19 13:16:03 crc kubenswrapper[4861]: I0219 13:16:03.178270 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5c35ebdf-b776-44b1-b29f-94c52011a838-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hmr4j\" (UID: \"5c35ebdf-b776-44b1-b29f-94c52011a838\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmr4j" Feb 19 13:16:03 crc kubenswrapper[4861]: I0219 13:16:03.187147 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5c35ebdf-b776-44b1-b29f-94c52011a838-bound-sa-token\") pod \"image-registry-66df7c8f76-hmr4j\" (UID: \"5c35ebdf-b776-44b1-b29f-94c52011a838\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmr4j" Feb 19 13:16:03 crc kubenswrapper[4861]: I0219 13:16:03.199207 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48lrc\" (UniqueName: \"kubernetes.io/projected/5c35ebdf-b776-44b1-b29f-94c52011a838-kube-api-access-48lrc\") pod 
\"image-registry-66df7c8f76-hmr4j\" (UID: \"5c35ebdf-b776-44b1-b29f-94c52011a838\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmr4j" Feb 19 13:16:03 crc kubenswrapper[4861]: I0219 13:16:03.234266 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-hmr4j" Feb 19 13:16:03 crc kubenswrapper[4861]: I0219 13:16:03.743996 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hmr4j"] Feb 19 13:16:03 crc kubenswrapper[4861]: I0219 13:16:03.834636 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:16:03 crc kubenswrapper[4861]: I0219 13:16:03.834716 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:16:04 crc kubenswrapper[4861]: I0219 13:16:04.704606 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-hmr4j" event={"ID":"5c35ebdf-b776-44b1-b29f-94c52011a838","Type":"ContainerStarted","Data":"5bca6980c41919b06c62270ce9a572ac6fbc6c44277118642da83c679ba52044"} Feb 19 13:16:04 crc kubenswrapper[4861]: I0219 13:16:04.705105 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-hmr4j" event={"ID":"5c35ebdf-b776-44b1-b29f-94c52011a838","Type":"ContainerStarted","Data":"01c8f7c1552784ab1842e106f8b508dda319a2618bc98d2ea24b6df6fa569048"} Feb 19 13:16:04 crc kubenswrapper[4861]: I0219 
13:16:04.705132 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-hmr4j" Feb 19 13:16:04 crc kubenswrapper[4861]: I0219 13:16:04.771874 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-hmr4j" podStartSLOduration=2.77185208 podStartE2EDuration="2.77185208s" podCreationTimestamp="2026-02-19 13:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:16:04.76608173 +0000 UTC m=+379.427184958" watchObservedRunningTime="2026-02-19 13:16:04.77185208 +0000 UTC m=+379.432955308" Feb 19 13:16:08 crc kubenswrapper[4861]: I0219 13:16:08.835249 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-khrzg"] Feb 19 13:16:08 crc kubenswrapper[4861]: I0219 13:16:08.836464 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-khrzg" podUID="0eb10abc-c209-4a6b-8fc8-39973ed75fd6" containerName="registry-server" containerID="cri-o://22a59ee3c97524e651e79a531aaabe3b69d0c212ddc71ef5b37c5a3558371be0" gracePeriod=30 Feb 19 13:16:08 crc kubenswrapper[4861]: I0219 13:16:08.860973 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q5nqk"] Feb 19 13:16:08 crc kubenswrapper[4861]: I0219 13:16:08.861367 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q5nqk" podUID="4e4630be-249d-4f67-bd5c-eafaf08b2705" containerName="registry-server" containerID="cri-o://74d44eb84b06eada739b94e48d6b630dad7157b6dde6cba33603f52e68ffc8fe" gracePeriod=30 Feb 19 13:16:08 crc kubenswrapper[4861]: I0219 13:16:08.867320 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-qwjzc"] Feb 19 13:16:08 crc kubenswrapper[4861]: I0219 13:16:08.867614 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-qwjzc" podUID="aa335a0a-4c08-4593-8c73-e0c2adeb76b7" containerName="marketplace-operator" containerID="cri-o://e608b4d982b6cbbe8129d7a6b47d0dca529b40a5d926d2ac3bb346b02944696a" gracePeriod=30 Feb 19 13:16:08 crc kubenswrapper[4861]: I0219 13:16:08.877230 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2lhzw"] Feb 19 13:16:08 crc kubenswrapper[4861]: I0219 13:16:08.877821 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2lhzw" podUID="bafa1b1c-66a3-42f6-8a14-4a272b2ac176" containerName="registry-server" containerID="cri-o://daae03f6dc24fc0c8eeb6e03211ba508e6c872311e0922a9d32d2049c5962d8f" gracePeriod=30 Feb 19 13:16:08 crc kubenswrapper[4861]: I0219 13:16:08.879773 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kcqf7"] Feb 19 13:16:08 crc kubenswrapper[4861]: I0219 13:16:08.880206 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kcqf7" podUID="617be892-2391-43d5-94d0-c0600d0c66a0" containerName="registry-server" containerID="cri-o://8e84fa1060cb842d24672baa58b080884bd9f1a88e4ef83cddaf0df561d3812d" gracePeriod=30 Feb 19 13:16:08 crc kubenswrapper[4861]: I0219 13:16:08.899474 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wbs67"] Feb 19 13:16:08 crc kubenswrapper[4861]: I0219 13:16:08.900206 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wbs67" Feb 19 13:16:08 crc kubenswrapper[4861]: I0219 13:16:08.915809 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wbs67"] Feb 19 13:16:08 crc kubenswrapper[4861]: I0219 13:16:08.956377 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39ba35a5-cc11-42a3-ab71-d4744a6d5cf0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wbs67\" (UID: \"39ba35a5-cc11-42a3-ab71-d4744a6d5cf0\") " pod="openshift-marketplace/marketplace-operator-79b997595-wbs67" Feb 19 13:16:08 crc kubenswrapper[4861]: I0219 13:16:08.956979 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbcp9\" (UniqueName: \"kubernetes.io/projected/39ba35a5-cc11-42a3-ab71-d4744a6d5cf0-kube-api-access-cbcp9\") pod \"marketplace-operator-79b997595-wbs67\" (UID: \"39ba35a5-cc11-42a3-ab71-d4744a6d5cf0\") " pod="openshift-marketplace/marketplace-operator-79b997595-wbs67" Feb 19 13:16:08 crc kubenswrapper[4861]: I0219 13:16:08.957007 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/39ba35a5-cc11-42a3-ab71-d4744a6d5cf0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wbs67\" (UID: \"39ba35a5-cc11-42a3-ab71-d4744a6d5cf0\") " pod="openshift-marketplace/marketplace-operator-79b997595-wbs67" Feb 19 13:16:09 crc kubenswrapper[4861]: I0219 13:16:09.058251 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39ba35a5-cc11-42a3-ab71-d4744a6d5cf0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wbs67\" (UID: 
\"39ba35a5-cc11-42a3-ab71-d4744a6d5cf0\") " pod="openshift-marketplace/marketplace-operator-79b997595-wbs67" Feb 19 13:16:09 crc kubenswrapper[4861]: I0219 13:16:09.058494 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbcp9\" (UniqueName: \"kubernetes.io/projected/39ba35a5-cc11-42a3-ab71-d4744a6d5cf0-kube-api-access-cbcp9\") pod \"marketplace-operator-79b997595-wbs67\" (UID: \"39ba35a5-cc11-42a3-ab71-d4744a6d5cf0\") " pod="openshift-marketplace/marketplace-operator-79b997595-wbs67" Feb 19 13:16:09 crc kubenswrapper[4861]: I0219 13:16:09.058532 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/39ba35a5-cc11-42a3-ab71-d4744a6d5cf0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wbs67\" (UID: \"39ba35a5-cc11-42a3-ab71-d4744a6d5cf0\") " pod="openshift-marketplace/marketplace-operator-79b997595-wbs67" Feb 19 13:16:09 crc kubenswrapper[4861]: I0219 13:16:09.060915 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39ba35a5-cc11-42a3-ab71-d4744a6d5cf0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wbs67\" (UID: \"39ba35a5-cc11-42a3-ab71-d4744a6d5cf0\") " pod="openshift-marketplace/marketplace-operator-79b997595-wbs67" Feb 19 13:16:09 crc kubenswrapper[4861]: I0219 13:16:09.067579 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/39ba35a5-cc11-42a3-ab71-d4744a6d5cf0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wbs67\" (UID: \"39ba35a5-cc11-42a3-ab71-d4744a6d5cf0\") " pod="openshift-marketplace/marketplace-operator-79b997595-wbs67" Feb 19 13:16:09 crc kubenswrapper[4861]: I0219 13:16:09.078220 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cbcp9\" (UniqueName: \"kubernetes.io/projected/39ba35a5-cc11-42a3-ab71-d4744a6d5cf0-kube-api-access-cbcp9\") pod \"marketplace-operator-79b997595-wbs67\" (UID: \"39ba35a5-cc11-42a3-ab71-d4744a6d5cf0\") " pod="openshift-marketplace/marketplace-operator-79b997595-wbs67" Feb 19 13:16:09 crc kubenswrapper[4861]: I0219 13:16:09.228987 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wbs67" Feb 19 13:16:09 crc kubenswrapper[4861]: E0219 13:16:09.480161 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="74d44eb84b06eada739b94e48d6b630dad7157b6dde6cba33603f52e68ffc8fe" cmd=["grpc_health_probe","-addr=:50051"] Feb 19 13:16:09 crc kubenswrapper[4861]: E0219 13:16:09.481684 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="74d44eb84b06eada739b94e48d6b630dad7157b6dde6cba33603f52e68ffc8fe" cmd=["grpc_health_probe","-addr=:50051"] Feb 19 13:16:09 crc kubenswrapper[4861]: E0219 13:16:09.483941 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="74d44eb84b06eada739b94e48d6b630dad7157b6dde6cba33603f52e68ffc8fe" cmd=["grpc_health_probe","-addr=:50051"] Feb 19 13:16:09 crc kubenswrapper[4861]: E0219 13:16:09.483998 4861 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/community-operators-q5nqk" podUID="4e4630be-249d-4f67-bd5c-eafaf08b2705" 
containerName="registry-server" Feb 19 13:16:10 crc kubenswrapper[4861]: E0219 13:16:10.047842 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 22a59ee3c97524e651e79a531aaabe3b69d0c212ddc71ef5b37c5a3558371be0 is running failed: container process not found" containerID="22a59ee3c97524e651e79a531aaabe3b69d0c212ddc71ef5b37c5a3558371be0" cmd=["grpc_health_probe","-addr=:50051"] Feb 19 13:16:10 crc kubenswrapper[4861]: E0219 13:16:10.048860 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 22a59ee3c97524e651e79a531aaabe3b69d0c212ddc71ef5b37c5a3558371be0 is running failed: container process not found" containerID="22a59ee3c97524e651e79a531aaabe3b69d0c212ddc71ef5b37c5a3558371be0" cmd=["grpc_health_probe","-addr=:50051"] Feb 19 13:16:10 crc kubenswrapper[4861]: E0219 13:16:10.049294 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 22a59ee3c97524e651e79a531aaabe3b69d0c212ddc71ef5b37c5a3558371be0 is running failed: container process not found" containerID="22a59ee3c97524e651e79a531aaabe3b69d0c212ddc71ef5b37c5a3558371be0" cmd=["grpc_health_probe","-addr=:50051"] Feb 19 13:16:10 crc kubenswrapper[4861]: E0219 13:16:10.049330 4861 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 22a59ee3c97524e651e79a531aaabe3b69d0c212ddc71ef5b37c5a3558371be0 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-khrzg" podUID="0eb10abc-c209-4a6b-8fc8-39973ed75fd6" containerName="registry-server" Feb 19 13:16:10 crc kubenswrapper[4861]: I0219 13:16:10.157467 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-wbs67"] Feb 19 13:16:10 crc kubenswrapper[4861]: I0219 13:16:10.770338 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wbs67" event={"ID":"39ba35a5-cc11-42a3-ab71-d4744a6d5cf0","Type":"ContainerStarted","Data":"251874289ca209411f8cabad13ca3e367b8e8963ed0f6b6e8606c6bb86d65b7b"} Feb 19 13:16:11 crc kubenswrapper[4861]: I0219 13:16:11.782064 4861 generic.go:334] "Generic (PLEG): container finished" podID="617be892-2391-43d5-94d0-c0600d0c66a0" containerID="8e84fa1060cb842d24672baa58b080884bd9f1a88e4ef83cddaf0df561d3812d" exitCode=0 Feb 19 13:16:11 crc kubenswrapper[4861]: I0219 13:16:11.782167 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kcqf7" event={"ID":"617be892-2391-43d5-94d0-c0600d0c66a0","Type":"ContainerDied","Data":"8e84fa1060cb842d24672baa58b080884bd9f1a88e4ef83cddaf0df561d3812d"} Feb 19 13:16:11 crc kubenswrapper[4861]: I0219 13:16:11.786761 4861 generic.go:334] "Generic (PLEG): container finished" podID="bafa1b1c-66a3-42f6-8a14-4a272b2ac176" containerID="daae03f6dc24fc0c8eeb6e03211ba508e6c872311e0922a9d32d2049c5962d8f" exitCode=0 Feb 19 13:16:11 crc kubenswrapper[4861]: I0219 13:16:11.786969 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2lhzw" event={"ID":"bafa1b1c-66a3-42f6-8a14-4a272b2ac176","Type":"ContainerDied","Data":"daae03f6dc24fc0c8eeb6e03211ba508e6c872311e0922a9d32d2049c5962d8f"} Feb 19 13:16:11 crc kubenswrapper[4861]: I0219 13:16:11.789751 4861 generic.go:334] "Generic (PLEG): container finished" podID="aa335a0a-4c08-4593-8c73-e0c2adeb76b7" containerID="e608b4d982b6cbbe8129d7a6b47d0dca529b40a5d926d2ac3bb346b02944696a" exitCode=0 Feb 19 13:16:11 crc kubenswrapper[4861]: I0219 13:16:11.789851 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qwjzc" 
event={"ID":"aa335a0a-4c08-4593-8c73-e0c2adeb76b7","Type":"ContainerDied","Data":"e608b4d982b6cbbe8129d7a6b47d0dca529b40a5d926d2ac3bb346b02944696a"} Feb 19 13:16:11 crc kubenswrapper[4861]: I0219 13:16:11.789885 4861 scope.go:117] "RemoveContainer" containerID="c88eb6c5983f5719b7427630164b540b61a7cb5dc082b7a09881c8e2de81af41" Feb 19 13:16:11 crc kubenswrapper[4861]: I0219 13:16:11.794394 4861 generic.go:334] "Generic (PLEG): container finished" podID="0eb10abc-c209-4a6b-8fc8-39973ed75fd6" containerID="22a59ee3c97524e651e79a531aaabe3b69d0c212ddc71ef5b37c5a3558371be0" exitCode=0 Feb 19 13:16:11 crc kubenswrapper[4861]: I0219 13:16:11.794636 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-khrzg" event={"ID":"0eb10abc-c209-4a6b-8fc8-39973ed75fd6","Type":"ContainerDied","Data":"22a59ee3c97524e651e79a531aaabe3b69d0c212ddc71ef5b37c5a3558371be0"} Feb 19 13:16:11 crc kubenswrapper[4861]: E0219 13:16:11.794642 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of daae03f6dc24fc0c8eeb6e03211ba508e6c872311e0922a9d32d2049c5962d8f is running failed: container process not found" containerID="daae03f6dc24fc0c8eeb6e03211ba508e6c872311e0922a9d32d2049c5962d8f" cmd=["grpc_health_probe","-addr=:50051"] Feb 19 13:16:11 crc kubenswrapper[4861]: E0219 13:16:11.795505 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of daae03f6dc24fc0c8eeb6e03211ba508e6c872311e0922a9d32d2049c5962d8f is running failed: container process not found" containerID="daae03f6dc24fc0c8eeb6e03211ba508e6c872311e0922a9d32d2049c5962d8f" cmd=["grpc_health_probe","-addr=:50051"] Feb 19 13:16:11 crc kubenswrapper[4861]: E0219 13:16:11.795845 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or 
running: checking if PID of daae03f6dc24fc0c8eeb6e03211ba508e6c872311e0922a9d32d2049c5962d8f is running failed: container process not found" containerID="daae03f6dc24fc0c8eeb6e03211ba508e6c872311e0922a9d32d2049c5962d8f" cmd=["grpc_health_probe","-addr=:50051"] Feb 19 13:16:11 crc kubenswrapper[4861]: E0219 13:16:11.795883 4861 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of daae03f6dc24fc0c8eeb6e03211ba508e6c872311e0922a9d32d2049c5962d8f is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-2lhzw" podUID="bafa1b1c-66a3-42f6-8a14-4a272b2ac176" containerName="registry-server" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.509228 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-khrzg" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.527531 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qwjzc" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.587882 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kcqf7" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.600117 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2lhzw" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.617570 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdx8j\" (UniqueName: \"kubernetes.io/projected/aa335a0a-4c08-4593-8c73-e0c2adeb76b7-kube-api-access-wdx8j\") pod \"aa335a0a-4c08-4593-8c73-e0c2adeb76b7\" (UID: \"aa335a0a-4c08-4593-8c73-e0c2adeb76b7\") " Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.617611 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm6n7\" (UniqueName: \"kubernetes.io/projected/0eb10abc-c209-4a6b-8fc8-39973ed75fd6-kube-api-access-pm6n7\") pod \"0eb10abc-c209-4a6b-8fc8-39973ed75fd6\" (UID: \"0eb10abc-c209-4a6b-8fc8-39973ed75fd6\") " Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.617640 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eb10abc-c209-4a6b-8fc8-39973ed75fd6-utilities\") pod \"0eb10abc-c209-4a6b-8fc8-39973ed75fd6\" (UID: \"0eb10abc-c209-4a6b-8fc8-39973ed75fd6\") " Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.617665 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2wbr\" (UniqueName: \"kubernetes.io/projected/617be892-2391-43d5-94d0-c0600d0c66a0-kube-api-access-c2wbr\") pod \"617be892-2391-43d5-94d0-c0600d0c66a0\" (UID: \"617be892-2391-43d5-94d0-c0600d0c66a0\") " Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.617710 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/617be892-2391-43d5-94d0-c0600d0c66a0-catalog-content\") pod \"617be892-2391-43d5-94d0-c0600d0c66a0\" (UID: \"617be892-2391-43d5-94d0-c0600d0c66a0\") " Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.617758 4861 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aa335a0a-4c08-4593-8c73-e0c2adeb76b7-marketplace-operator-metrics\") pod \"aa335a0a-4c08-4593-8c73-e0c2adeb76b7\" (UID: \"aa335a0a-4c08-4593-8c73-e0c2adeb76b7\") " Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.617795 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa335a0a-4c08-4593-8c73-e0c2adeb76b7-marketplace-trusted-ca\") pod \"aa335a0a-4c08-4593-8c73-e0c2adeb76b7\" (UID: \"aa335a0a-4c08-4593-8c73-e0c2adeb76b7\") " Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.617812 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/617be892-2391-43d5-94d0-c0600d0c66a0-utilities\") pod \"617be892-2391-43d5-94d0-c0600d0c66a0\" (UID: \"617be892-2391-43d5-94d0-c0600d0c66a0\") " Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.617833 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eb10abc-c209-4a6b-8fc8-39973ed75fd6-catalog-content\") pod \"0eb10abc-c209-4a6b-8fc8-39973ed75fd6\" (UID: \"0eb10abc-c209-4a6b-8fc8-39973ed75fd6\") " Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.625619 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/617be892-2391-43d5-94d0-c0600d0c66a0-utilities" (OuterVolumeSpecName: "utilities") pod "617be892-2391-43d5-94d0-c0600d0c66a0" (UID: "617be892-2391-43d5-94d0-c0600d0c66a0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.625853 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa335a0a-4c08-4593-8c73-e0c2adeb76b7-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "aa335a0a-4c08-4593-8c73-e0c2adeb76b7" (UID: "aa335a0a-4c08-4593-8c73-e0c2adeb76b7"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.626017 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0eb10abc-c209-4a6b-8fc8-39973ed75fd6-utilities" (OuterVolumeSpecName: "utilities") pod "0eb10abc-c209-4a6b-8fc8-39973ed75fd6" (UID: "0eb10abc-c209-4a6b-8fc8-39973ed75fd6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.645230 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/617be892-2391-43d5-94d0-c0600d0c66a0-kube-api-access-c2wbr" (OuterVolumeSpecName: "kube-api-access-c2wbr") pod "617be892-2391-43d5-94d0-c0600d0c66a0" (UID: "617be892-2391-43d5-94d0-c0600d0c66a0"). InnerVolumeSpecName "kube-api-access-c2wbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.645664 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa335a0a-4c08-4593-8c73-e0c2adeb76b7-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "aa335a0a-4c08-4593-8c73-e0c2adeb76b7" (UID: "aa335a0a-4c08-4593-8c73-e0c2adeb76b7"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.654768 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eb10abc-c209-4a6b-8fc8-39973ed75fd6-kube-api-access-pm6n7" (OuterVolumeSpecName: "kube-api-access-pm6n7") pod "0eb10abc-c209-4a6b-8fc8-39973ed75fd6" (UID: "0eb10abc-c209-4a6b-8fc8-39973ed75fd6"). InnerVolumeSpecName "kube-api-access-pm6n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.655268 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa335a0a-4c08-4593-8c73-e0c2adeb76b7-kube-api-access-wdx8j" (OuterVolumeSpecName: "kube-api-access-wdx8j") pod "aa335a0a-4c08-4593-8c73-e0c2adeb76b7" (UID: "aa335a0a-4c08-4593-8c73-e0c2adeb76b7"). InnerVolumeSpecName "kube-api-access-wdx8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.683488 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0eb10abc-c209-4a6b-8fc8-39973ed75fd6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0eb10abc-c209-4a6b-8fc8-39973ed75fd6" (UID: "0eb10abc-c209-4a6b-8fc8-39973ed75fd6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.718700 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bafa1b1c-66a3-42f6-8a14-4a272b2ac176-utilities\") pod \"bafa1b1c-66a3-42f6-8a14-4a272b2ac176\" (UID: \"bafa1b1c-66a3-42f6-8a14-4a272b2ac176\") " Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.718749 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bafa1b1c-66a3-42f6-8a14-4a272b2ac176-catalog-content\") pod \"bafa1b1c-66a3-42f6-8a14-4a272b2ac176\" (UID: \"bafa1b1c-66a3-42f6-8a14-4a272b2ac176\") " Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.718778 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtlzc\" (UniqueName: \"kubernetes.io/projected/bafa1b1c-66a3-42f6-8a14-4a272b2ac176-kube-api-access-wtlzc\") pod \"bafa1b1c-66a3-42f6-8a14-4a272b2ac176\" (UID: \"bafa1b1c-66a3-42f6-8a14-4a272b2ac176\") " Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.719017 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdx8j\" (UniqueName: \"kubernetes.io/projected/aa335a0a-4c08-4593-8c73-e0c2adeb76b7-kube-api-access-wdx8j\") on node \"crc\" DevicePath \"\"" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.719030 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm6n7\" (UniqueName: \"kubernetes.io/projected/0eb10abc-c209-4a6b-8fc8-39973ed75fd6-kube-api-access-pm6n7\") on node \"crc\" DevicePath \"\"" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.719041 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eb10abc-c209-4a6b-8fc8-39973ed75fd6-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 
13:16:12.719051 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2wbr\" (UniqueName: \"kubernetes.io/projected/617be892-2391-43d5-94d0-c0600d0c66a0-kube-api-access-c2wbr\") on node \"crc\" DevicePath \"\"" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.719061 4861 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aa335a0a-4c08-4593-8c73-e0c2adeb76b7-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.719071 4861 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa335a0a-4c08-4593-8c73-e0c2adeb76b7-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.719079 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/617be892-2391-43d5-94d0-c0600d0c66a0-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.719087 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eb10abc-c209-4a6b-8fc8-39973ed75fd6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.720323 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bafa1b1c-66a3-42f6-8a14-4a272b2ac176-utilities" (OuterVolumeSpecName: "utilities") pod "bafa1b1c-66a3-42f6-8a14-4a272b2ac176" (UID: "bafa1b1c-66a3-42f6-8a14-4a272b2ac176"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.722087 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bafa1b1c-66a3-42f6-8a14-4a272b2ac176-kube-api-access-wtlzc" (OuterVolumeSpecName: "kube-api-access-wtlzc") pod "bafa1b1c-66a3-42f6-8a14-4a272b2ac176" (UID: "bafa1b1c-66a3-42f6-8a14-4a272b2ac176"). InnerVolumeSpecName "kube-api-access-wtlzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.755359 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bafa1b1c-66a3-42f6-8a14-4a272b2ac176-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bafa1b1c-66a3-42f6-8a14-4a272b2ac176" (UID: "bafa1b1c-66a3-42f6-8a14-4a272b2ac176"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.778569 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/617be892-2391-43d5-94d0-c0600d0c66a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "617be892-2391-43d5-94d0-c0600d0c66a0" (UID: "617be892-2391-43d5-94d0-c0600d0c66a0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.802457 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qwjzc" event={"ID":"aa335a0a-4c08-4593-8c73-e0c2adeb76b7","Type":"ContainerDied","Data":"597cb3ba1887f8128bd2f2d22157c79728bed152b35c4d1d8e563be98ae4eabd"} Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.802537 4861 scope.go:117] "RemoveContainer" containerID="e608b4d982b6cbbe8129d7a6b47d0dca529b40a5d926d2ac3bb346b02944696a" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.802658 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qwjzc" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.811676 4861 generic.go:334] "Generic (PLEG): container finished" podID="4e4630be-249d-4f67-bd5c-eafaf08b2705" containerID="74d44eb84b06eada739b94e48d6b630dad7157b6dde6cba33603f52e68ffc8fe" exitCode=0 Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.811783 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5nqk" event={"ID":"4e4630be-249d-4f67-bd5c-eafaf08b2705","Type":"ContainerDied","Data":"74d44eb84b06eada739b94e48d6b630dad7157b6dde6cba33603f52e68ffc8fe"} Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.816226 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kcqf7" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.816251 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kcqf7" event={"ID":"617be892-2391-43d5-94d0-c0600d0c66a0","Type":"ContainerDied","Data":"599297806b3e8d5004dc2159028d53fd2253b1d7ed82de00caa594193406cbf2"} Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.819909 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/617be892-2391-43d5-94d0-c0600d0c66a0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.819938 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bafa1b1c-66a3-42f6-8a14-4a272b2ac176-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.819948 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bafa1b1c-66a3-42f6-8a14-4a272b2ac176-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.819957 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtlzc\" (UniqueName: \"kubernetes.io/projected/bafa1b1c-66a3-42f6-8a14-4a272b2ac176-kube-api-access-wtlzc\") on node \"crc\" DevicePath \"\"" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.820596 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2lhzw" event={"ID":"bafa1b1c-66a3-42f6-8a14-4a272b2ac176","Type":"ContainerDied","Data":"050df1f70a5de4022c4651921835fbb1a7335a818fc23bd7336b959ca4cc9cd3"} Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.820716 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2lhzw" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.823818 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wbs67" event={"ID":"39ba35a5-cc11-42a3-ab71-d4744a6d5cf0","Type":"ContainerStarted","Data":"de077e6f15d2d7066989b42a35689fc346a522b110c255784db70d4e414ab66c"} Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.828575 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-khrzg" event={"ID":"0eb10abc-c209-4a6b-8fc8-39973ed75fd6","Type":"ContainerDied","Data":"507378f21944f35c930e4c582caeac3832fc17c6b733e684748cd0e964be6123"} Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.828881 4861 scope.go:117] "RemoveContainer" containerID="8e84fa1060cb842d24672baa58b080884bd9f1a88e4ef83cddaf0df561d3812d" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.829347 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-khrzg" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.845812 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-wbs67" podStartSLOduration=4.845785962 podStartE2EDuration="4.845785962s" podCreationTimestamp="2026-02-19 13:16:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:16:12.845586906 +0000 UTC m=+387.506690144" watchObservedRunningTime="2026-02-19 13:16:12.845785962 +0000 UTC m=+387.506889180" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.861457 4861 scope.go:117] "RemoveContainer" containerID="0ffb2104b8170ce2a2c0822f15082cc7a61e07545e70dfc122963df9da49cdcf" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.863132 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qwjzc"] Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.866909 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qwjzc"] Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.878142 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2lhzw"] Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.890495 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2lhzw"] Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.893385 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-khrzg"] Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.896984 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-khrzg"] Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.901658 4861 scope.go:117] 
"RemoveContainer" containerID="69554819c4587f2da76dad4b98850802cf2d6105002b88325d545bf45cc00c38" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.904128 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kcqf7"] Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.908598 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kcqf7"] Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.917192 4861 scope.go:117] "RemoveContainer" containerID="daae03f6dc24fc0c8eeb6e03211ba508e6c872311e0922a9d32d2049c5962d8f" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.931169 4861 scope.go:117] "RemoveContainer" containerID="306cea667c9ac54f5998af75f11d468a981c16a6907c351fdd8f41f0326518e9" Feb 19 13:16:12 crc kubenswrapper[4861]: I0219 13:16:12.945463 4861 scope.go:117] "RemoveContainer" containerID="7c41808a278dac682d727e34db5d7a6f40d4c10eadeea5c09ce79145019c043f" Feb 19 13:16:13 crc kubenswrapper[4861]: I0219 13:16:13.000386 4861 scope.go:117] "RemoveContainer" containerID="22a59ee3c97524e651e79a531aaabe3b69d0c212ddc71ef5b37c5a3558371be0" Feb 19 13:16:13 crc kubenswrapper[4861]: I0219 13:16:13.077942 4861 scope.go:117] "RemoveContainer" containerID="68309d33a413c702fcb94934567359ad09b83cdf7d5a185cc0caf1370d403a92" Feb 19 13:16:13 crc kubenswrapper[4861]: I0219 13:16:13.119219 4861 scope.go:117] "RemoveContainer" containerID="d8d0ae7b4e56b8f3ef7f3055161b3cc096a202c59bac544ca0327ec7f71b5169" Feb 19 13:16:13 crc kubenswrapper[4861]: I0219 13:16:13.218275 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q5nqk" Feb 19 13:16:13 crc kubenswrapper[4861]: I0219 13:16:13.333799 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e4630be-249d-4f67-bd5c-eafaf08b2705-catalog-content\") pod \"4e4630be-249d-4f67-bd5c-eafaf08b2705\" (UID: \"4e4630be-249d-4f67-bd5c-eafaf08b2705\") " Feb 19 13:16:13 crc kubenswrapper[4861]: I0219 13:16:13.333911 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llxw4\" (UniqueName: \"kubernetes.io/projected/4e4630be-249d-4f67-bd5c-eafaf08b2705-kube-api-access-llxw4\") pod \"4e4630be-249d-4f67-bd5c-eafaf08b2705\" (UID: \"4e4630be-249d-4f67-bd5c-eafaf08b2705\") " Feb 19 13:16:13 crc kubenswrapper[4861]: I0219 13:16:13.334001 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e4630be-249d-4f67-bd5c-eafaf08b2705-utilities\") pod \"4e4630be-249d-4f67-bd5c-eafaf08b2705\" (UID: \"4e4630be-249d-4f67-bd5c-eafaf08b2705\") " Feb 19 13:16:13 crc kubenswrapper[4861]: I0219 13:16:13.335527 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e4630be-249d-4f67-bd5c-eafaf08b2705-utilities" (OuterVolumeSpecName: "utilities") pod "4e4630be-249d-4f67-bd5c-eafaf08b2705" (UID: "4e4630be-249d-4f67-bd5c-eafaf08b2705"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:16:13 crc kubenswrapper[4861]: I0219 13:16:13.337704 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e4630be-249d-4f67-bd5c-eafaf08b2705-kube-api-access-llxw4" (OuterVolumeSpecName: "kube-api-access-llxw4") pod "4e4630be-249d-4f67-bd5c-eafaf08b2705" (UID: "4e4630be-249d-4f67-bd5c-eafaf08b2705"). InnerVolumeSpecName "kube-api-access-llxw4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:16:13 crc kubenswrapper[4861]: I0219 13:16:13.400518 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e4630be-249d-4f67-bd5c-eafaf08b2705-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e4630be-249d-4f67-bd5c-eafaf08b2705" (UID: "4e4630be-249d-4f67-bd5c-eafaf08b2705"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:16:13 crc kubenswrapper[4861]: I0219 13:16:13.435799 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e4630be-249d-4f67-bd5c-eafaf08b2705-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:16:13 crc kubenswrapper[4861]: I0219 13:16:13.435860 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llxw4\" (UniqueName: \"kubernetes.io/projected/4e4630be-249d-4f67-bd5c-eafaf08b2705-kube-api-access-llxw4\") on node \"crc\" DevicePath \"\"" Feb 19 13:16:13 crc kubenswrapper[4861]: I0219 13:16:13.435878 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e4630be-249d-4f67-bd5c-eafaf08b2705-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:16:13 crc kubenswrapper[4861]: I0219 13:16:13.847314 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5nqk" event={"ID":"4e4630be-249d-4f67-bd5c-eafaf08b2705","Type":"ContainerDied","Data":"3782f300b7a3799e9277e49c30b61bb88ef1eddfeb472783a4930829364e7f5c"} Feb 19 13:16:13 crc kubenswrapper[4861]: I0219 13:16:13.847470 4861 scope.go:117] "RemoveContainer" containerID="74d44eb84b06eada739b94e48d6b630dad7157b6dde6cba33603f52e68ffc8fe" Feb 19 13:16:13 crc kubenswrapper[4861]: I0219 13:16:13.848026 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q5nqk" Feb 19 13:16:13 crc kubenswrapper[4861]: I0219 13:16:13.851929 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-wbs67" Feb 19 13:16:13 crc kubenswrapper[4861]: I0219 13:16:13.857592 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-wbs67" Feb 19 13:16:13 crc kubenswrapper[4861]: I0219 13:16:13.890192 4861 scope.go:117] "RemoveContainer" containerID="c6b780148cd892dd5ce810b54b251139570de1393b9af5852503812099a8ea0b" Feb 19 13:16:13 crc kubenswrapper[4861]: I0219 13:16:13.909342 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q5nqk"] Feb 19 13:16:13 crc kubenswrapper[4861]: I0219 13:16:13.915480 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q5nqk"] Feb 19 13:16:13 crc kubenswrapper[4861]: I0219 13:16:13.924526 4861 scope.go:117] "RemoveContainer" containerID="2746feb52f8ae30acbb30333b7a9ddbccd1aed44f689a5fb598be19a6b2d2c3a" Feb 19 13:16:13 crc kubenswrapper[4861]: I0219 13:16:13.984459 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eb10abc-c209-4a6b-8fc8-39973ed75fd6" path="/var/lib/kubelet/pods/0eb10abc-c209-4a6b-8fc8-39973ed75fd6/volumes" Feb 19 13:16:13 crc kubenswrapper[4861]: I0219 13:16:13.985595 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e4630be-249d-4f67-bd5c-eafaf08b2705" path="/var/lib/kubelet/pods/4e4630be-249d-4f67-bd5c-eafaf08b2705/volumes" Feb 19 13:16:13 crc kubenswrapper[4861]: I0219 13:16:13.986152 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="617be892-2391-43d5-94d0-c0600d0c66a0" path="/var/lib/kubelet/pods/617be892-2391-43d5-94d0-c0600d0c66a0/volumes" Feb 19 13:16:13 crc kubenswrapper[4861]: I0219 13:16:13.987227 4861 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa335a0a-4c08-4593-8c73-e0c2adeb76b7" path="/var/lib/kubelet/pods/aa335a0a-4c08-4593-8c73-e0c2adeb76b7/volumes" Feb 19 13:16:13 crc kubenswrapper[4861]: I0219 13:16:13.987811 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bafa1b1c-66a3-42f6-8a14-4a272b2ac176" path="/var/lib/kubelet/pods/bafa1b1c-66a3-42f6-8a14-4a272b2ac176/volumes" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.250901 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7kzn2"] Feb 19 13:16:15 crc kubenswrapper[4861]: E0219 13:16:15.251091 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eb10abc-c209-4a6b-8fc8-39973ed75fd6" containerName="extract-utilities" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.251104 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eb10abc-c209-4a6b-8fc8-39973ed75fd6" containerName="extract-utilities" Feb 19 13:16:15 crc kubenswrapper[4861]: E0219 13:16:15.251114 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="617be892-2391-43d5-94d0-c0600d0c66a0" containerName="registry-server" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.251123 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="617be892-2391-43d5-94d0-c0600d0c66a0" containerName="registry-server" Feb 19 13:16:15 crc kubenswrapper[4861]: E0219 13:16:15.251134 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e4630be-249d-4f67-bd5c-eafaf08b2705" containerName="extract-utilities" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.251141 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e4630be-249d-4f67-bd5c-eafaf08b2705" containerName="extract-utilities" Feb 19 13:16:15 crc kubenswrapper[4861]: E0219 13:16:15.251149 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bafa1b1c-66a3-42f6-8a14-4a272b2ac176" 
containerName="registry-server" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.251156 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="bafa1b1c-66a3-42f6-8a14-4a272b2ac176" containerName="registry-server" Feb 19 13:16:15 crc kubenswrapper[4861]: E0219 13:16:15.251166 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eb10abc-c209-4a6b-8fc8-39973ed75fd6" containerName="extract-content" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.251173 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eb10abc-c209-4a6b-8fc8-39973ed75fd6" containerName="extract-content" Feb 19 13:16:15 crc kubenswrapper[4861]: E0219 13:16:15.251182 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa335a0a-4c08-4593-8c73-e0c2adeb76b7" containerName="marketplace-operator" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.251188 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa335a0a-4c08-4593-8c73-e0c2adeb76b7" containerName="marketplace-operator" Feb 19 13:16:15 crc kubenswrapper[4861]: E0219 13:16:15.251196 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bafa1b1c-66a3-42f6-8a14-4a272b2ac176" containerName="extract-content" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.251201 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="bafa1b1c-66a3-42f6-8a14-4a272b2ac176" containerName="extract-content" Feb 19 13:16:15 crc kubenswrapper[4861]: E0219 13:16:15.251209 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e4630be-249d-4f67-bd5c-eafaf08b2705" containerName="registry-server" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.251214 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e4630be-249d-4f67-bd5c-eafaf08b2705" containerName="registry-server" Feb 19 13:16:15 crc kubenswrapper[4861]: E0219 13:16:15.251220 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eb10abc-c209-4a6b-8fc8-39973ed75fd6" 
containerName="registry-server" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.251225 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eb10abc-c209-4a6b-8fc8-39973ed75fd6" containerName="registry-server" Feb 19 13:16:15 crc kubenswrapper[4861]: E0219 13:16:15.251235 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa335a0a-4c08-4593-8c73-e0c2adeb76b7" containerName="marketplace-operator" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.251241 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa335a0a-4c08-4593-8c73-e0c2adeb76b7" containerName="marketplace-operator" Feb 19 13:16:15 crc kubenswrapper[4861]: E0219 13:16:15.251251 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="617be892-2391-43d5-94d0-c0600d0c66a0" containerName="extract-utilities" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.251257 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="617be892-2391-43d5-94d0-c0600d0c66a0" containerName="extract-utilities" Feb 19 13:16:15 crc kubenswrapper[4861]: E0219 13:16:15.251263 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="617be892-2391-43d5-94d0-c0600d0c66a0" containerName="extract-content" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.251269 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="617be892-2391-43d5-94d0-c0600d0c66a0" containerName="extract-content" Feb 19 13:16:15 crc kubenswrapper[4861]: E0219 13:16:15.251279 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bafa1b1c-66a3-42f6-8a14-4a272b2ac176" containerName="extract-utilities" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.251284 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="bafa1b1c-66a3-42f6-8a14-4a272b2ac176" containerName="extract-utilities" Feb 19 13:16:15 crc kubenswrapper[4861]: E0219 13:16:15.251291 4861 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4e4630be-249d-4f67-bd5c-eafaf08b2705" containerName="extract-content" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.251297 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e4630be-249d-4f67-bd5c-eafaf08b2705" containerName="extract-content" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.251447 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa335a0a-4c08-4593-8c73-e0c2adeb76b7" containerName="marketplace-operator" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.251462 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="bafa1b1c-66a3-42f6-8a14-4a272b2ac176" containerName="registry-server" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.251469 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e4630be-249d-4f67-bd5c-eafaf08b2705" containerName="registry-server" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.251481 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="617be892-2391-43d5-94d0-c0600d0c66a0" containerName="registry-server" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.251491 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eb10abc-c209-4a6b-8fc8-39973ed75fd6" containerName="registry-server" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.251678 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa335a0a-4c08-4593-8c73-e0c2adeb76b7" containerName="marketplace-operator" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.252227 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7kzn2" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.257830 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.262998 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7kzn2"] Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.366069 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16e3c2cc-204d-45e5-bef8-5dd819f69a20-catalog-content\") pod \"redhat-marketplace-7kzn2\" (UID: \"16e3c2cc-204d-45e5-bef8-5dd819f69a20\") " pod="openshift-marketplace/redhat-marketplace-7kzn2" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.366137 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2r8t\" (UniqueName: \"kubernetes.io/projected/16e3c2cc-204d-45e5-bef8-5dd819f69a20-kube-api-access-j2r8t\") pod \"redhat-marketplace-7kzn2\" (UID: \"16e3c2cc-204d-45e5-bef8-5dd819f69a20\") " pod="openshift-marketplace/redhat-marketplace-7kzn2" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.366188 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16e3c2cc-204d-45e5-bef8-5dd819f69a20-utilities\") pod \"redhat-marketplace-7kzn2\" (UID: \"16e3c2cc-204d-45e5-bef8-5dd819f69a20\") " pod="openshift-marketplace/redhat-marketplace-7kzn2" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.444113 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pvj85"] Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.445125 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pvj85" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.450335 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.458689 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pvj85"] Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.467688 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a41fdccb-e760-4654-b1b6-f3f31d71f474-catalog-content\") pod \"redhat-operators-pvj85\" (UID: \"a41fdccb-e760-4654-b1b6-f3f31d71f474\") " pod="openshift-marketplace/redhat-operators-pvj85" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.467736 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrmzg\" (UniqueName: \"kubernetes.io/projected/a41fdccb-e760-4654-b1b6-f3f31d71f474-kube-api-access-mrmzg\") pod \"redhat-operators-pvj85\" (UID: \"a41fdccb-e760-4654-b1b6-f3f31d71f474\") " pod="openshift-marketplace/redhat-operators-pvj85" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.467800 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16e3c2cc-204d-45e5-bef8-5dd819f69a20-catalog-content\") pod \"redhat-marketplace-7kzn2\" (UID: \"16e3c2cc-204d-45e5-bef8-5dd819f69a20\") " pod="openshift-marketplace/redhat-marketplace-7kzn2" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.467838 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2r8t\" (UniqueName: \"kubernetes.io/projected/16e3c2cc-204d-45e5-bef8-5dd819f69a20-kube-api-access-j2r8t\") pod \"redhat-marketplace-7kzn2\" (UID: 
\"16e3c2cc-204d-45e5-bef8-5dd819f69a20\") " pod="openshift-marketplace/redhat-marketplace-7kzn2" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.467873 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a41fdccb-e760-4654-b1b6-f3f31d71f474-utilities\") pod \"redhat-operators-pvj85\" (UID: \"a41fdccb-e760-4654-b1b6-f3f31d71f474\") " pod="openshift-marketplace/redhat-operators-pvj85" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.467896 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16e3c2cc-204d-45e5-bef8-5dd819f69a20-utilities\") pod \"redhat-marketplace-7kzn2\" (UID: \"16e3c2cc-204d-45e5-bef8-5dd819f69a20\") " pod="openshift-marketplace/redhat-marketplace-7kzn2" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.468559 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16e3c2cc-204d-45e5-bef8-5dd819f69a20-utilities\") pod \"redhat-marketplace-7kzn2\" (UID: \"16e3c2cc-204d-45e5-bef8-5dd819f69a20\") " pod="openshift-marketplace/redhat-marketplace-7kzn2" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.470330 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16e3c2cc-204d-45e5-bef8-5dd819f69a20-catalog-content\") pod \"redhat-marketplace-7kzn2\" (UID: \"16e3c2cc-204d-45e5-bef8-5dd819f69a20\") " pod="openshift-marketplace/redhat-marketplace-7kzn2" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.494791 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2r8t\" (UniqueName: \"kubernetes.io/projected/16e3c2cc-204d-45e5-bef8-5dd819f69a20-kube-api-access-j2r8t\") pod \"redhat-marketplace-7kzn2\" (UID: \"16e3c2cc-204d-45e5-bef8-5dd819f69a20\") " 
pod="openshift-marketplace/redhat-marketplace-7kzn2" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.569652 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a41fdccb-e760-4654-b1b6-f3f31d71f474-catalog-content\") pod \"redhat-operators-pvj85\" (UID: \"a41fdccb-e760-4654-b1b6-f3f31d71f474\") " pod="openshift-marketplace/redhat-operators-pvj85" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.569729 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrmzg\" (UniqueName: \"kubernetes.io/projected/a41fdccb-e760-4654-b1b6-f3f31d71f474-kube-api-access-mrmzg\") pod \"redhat-operators-pvj85\" (UID: \"a41fdccb-e760-4654-b1b6-f3f31d71f474\") " pod="openshift-marketplace/redhat-operators-pvj85" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.569775 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a41fdccb-e760-4654-b1b6-f3f31d71f474-utilities\") pod \"redhat-operators-pvj85\" (UID: \"a41fdccb-e760-4654-b1b6-f3f31d71f474\") " pod="openshift-marketplace/redhat-operators-pvj85" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.570319 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a41fdccb-e760-4654-b1b6-f3f31d71f474-catalog-content\") pod \"redhat-operators-pvj85\" (UID: \"a41fdccb-e760-4654-b1b6-f3f31d71f474\") " pod="openshift-marketplace/redhat-operators-pvj85" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.571945 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a41fdccb-e760-4654-b1b6-f3f31d71f474-utilities\") pod \"redhat-operators-pvj85\" (UID: \"a41fdccb-e760-4654-b1b6-f3f31d71f474\") " pod="openshift-marketplace/redhat-operators-pvj85" Feb 19 13:16:15 crc 
kubenswrapper[4861]: I0219 13:16:15.588165 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrmzg\" (UniqueName: \"kubernetes.io/projected/a41fdccb-e760-4654-b1b6-f3f31d71f474-kube-api-access-mrmzg\") pod \"redhat-operators-pvj85\" (UID: \"a41fdccb-e760-4654-b1b6-f3f31d71f474\") " pod="openshift-marketplace/redhat-operators-pvj85" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.621171 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7kzn2" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.769878 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pvj85" Feb 19 13:16:15 crc kubenswrapper[4861]: I0219 13:16:15.876768 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7kzn2"] Feb 19 13:16:15 crc kubenswrapper[4861]: W0219 13:16:15.893469 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16e3c2cc_204d_45e5_bef8_5dd819f69a20.slice/crio-70654b53a26a6ab8f541553c29f7e2340eab9dff2bebdb8cff22c261cd7c26ec WatchSource:0}: Error finding container 70654b53a26a6ab8f541553c29f7e2340eab9dff2bebdb8cff22c261cd7c26ec: Status 404 returned error can't find the container with id 70654b53a26a6ab8f541553c29f7e2340eab9dff2bebdb8cff22c261cd7c26ec Feb 19 13:16:16 crc kubenswrapper[4861]: I0219 13:16:16.002226 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pvj85"] Feb 19 13:16:16 crc kubenswrapper[4861]: I0219 13:16:16.887327 4861 generic.go:334] "Generic (PLEG): container finished" podID="16e3c2cc-204d-45e5-bef8-5dd819f69a20" containerID="af3aca309c33f9d9b2d96cf54e2b3c56ed48b052d8428072073c866d301252e4" exitCode=0 Feb 19 13:16:16 crc kubenswrapper[4861]: I0219 13:16:16.887847 4861 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-7kzn2" event={"ID":"16e3c2cc-204d-45e5-bef8-5dd819f69a20","Type":"ContainerDied","Data":"af3aca309c33f9d9b2d96cf54e2b3c56ed48b052d8428072073c866d301252e4"} Feb 19 13:16:16 crc kubenswrapper[4861]: I0219 13:16:16.887917 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kzn2" event={"ID":"16e3c2cc-204d-45e5-bef8-5dd819f69a20","Type":"ContainerStarted","Data":"70654b53a26a6ab8f541553c29f7e2340eab9dff2bebdb8cff22c261cd7c26ec"} Feb 19 13:16:16 crc kubenswrapper[4861]: I0219 13:16:16.890668 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvj85" event={"ID":"a41fdccb-e760-4654-b1b6-f3f31d71f474","Type":"ContainerDied","Data":"74ab401bfaa0784933e5a5a83aacaff67c411011fa7e66abef056b5568ed3378"} Feb 19 13:16:16 crc kubenswrapper[4861]: I0219 13:16:16.890515 4861 generic.go:334] "Generic (PLEG): container finished" podID="a41fdccb-e760-4654-b1b6-f3f31d71f474" containerID="74ab401bfaa0784933e5a5a83aacaff67c411011fa7e66abef056b5568ed3378" exitCode=0 Feb 19 13:16:16 crc kubenswrapper[4861]: I0219 13:16:16.891813 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvj85" event={"ID":"a41fdccb-e760-4654-b1b6-f3f31d71f474","Type":"ContainerStarted","Data":"b41ca388da06b1943b7a18a94a33d52547e637c18d8f4c892ab485032b5faa6d"} Feb 19 13:16:17 crc kubenswrapper[4861]: I0219 13:16:17.649057 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mkqzt"] Feb 19 13:16:17 crc kubenswrapper[4861]: I0219 13:16:17.651158 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mkqzt" Feb 19 13:16:17 crc kubenswrapper[4861]: I0219 13:16:17.654878 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 13:16:17 crc kubenswrapper[4861]: I0219 13:16:17.663055 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mkqzt"] Feb 19 13:16:17 crc kubenswrapper[4861]: I0219 13:16:17.732228 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/400ed650-9346-403d-a70f-27012222dc66-catalog-content\") pod \"certified-operators-mkqzt\" (UID: \"400ed650-9346-403d-a70f-27012222dc66\") " pod="openshift-marketplace/certified-operators-mkqzt" Feb 19 13:16:17 crc kubenswrapper[4861]: I0219 13:16:17.732322 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmnxb\" (UniqueName: \"kubernetes.io/projected/400ed650-9346-403d-a70f-27012222dc66-kube-api-access-zmnxb\") pod \"certified-operators-mkqzt\" (UID: \"400ed650-9346-403d-a70f-27012222dc66\") " pod="openshift-marketplace/certified-operators-mkqzt" Feb 19 13:16:17 crc kubenswrapper[4861]: I0219 13:16:17.732371 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/400ed650-9346-403d-a70f-27012222dc66-utilities\") pod \"certified-operators-mkqzt\" (UID: \"400ed650-9346-403d-a70f-27012222dc66\") " pod="openshift-marketplace/certified-operators-mkqzt" Feb 19 13:16:17 crc kubenswrapper[4861]: I0219 13:16:17.834183 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/400ed650-9346-403d-a70f-27012222dc66-utilities\") pod \"certified-operators-mkqzt\" (UID: 
\"400ed650-9346-403d-a70f-27012222dc66\") " pod="openshift-marketplace/certified-operators-mkqzt" Feb 19 13:16:17 crc kubenswrapper[4861]: I0219 13:16:17.834281 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/400ed650-9346-403d-a70f-27012222dc66-catalog-content\") pod \"certified-operators-mkqzt\" (UID: \"400ed650-9346-403d-a70f-27012222dc66\") " pod="openshift-marketplace/certified-operators-mkqzt" Feb 19 13:16:17 crc kubenswrapper[4861]: I0219 13:16:17.834324 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmnxb\" (UniqueName: \"kubernetes.io/projected/400ed650-9346-403d-a70f-27012222dc66-kube-api-access-zmnxb\") pod \"certified-operators-mkqzt\" (UID: \"400ed650-9346-403d-a70f-27012222dc66\") " pod="openshift-marketplace/certified-operators-mkqzt" Feb 19 13:16:17 crc kubenswrapper[4861]: I0219 13:16:17.835123 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/400ed650-9346-403d-a70f-27012222dc66-utilities\") pod \"certified-operators-mkqzt\" (UID: \"400ed650-9346-403d-a70f-27012222dc66\") " pod="openshift-marketplace/certified-operators-mkqzt" Feb 19 13:16:17 crc kubenswrapper[4861]: I0219 13:16:17.835360 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/400ed650-9346-403d-a70f-27012222dc66-catalog-content\") pod \"certified-operators-mkqzt\" (UID: \"400ed650-9346-403d-a70f-27012222dc66\") " pod="openshift-marketplace/certified-operators-mkqzt" Feb 19 13:16:17 crc kubenswrapper[4861]: I0219 13:16:17.851343 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6b6bs"] Feb 19 13:16:17 crc kubenswrapper[4861]: I0219 13:16:17.852738 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6b6bs" Feb 19 13:16:17 crc kubenswrapper[4861]: I0219 13:16:17.855395 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 13:16:17 crc kubenswrapper[4861]: I0219 13:16:17.860839 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6b6bs"] Feb 19 13:16:17 crc kubenswrapper[4861]: I0219 13:16:17.881702 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmnxb\" (UniqueName: \"kubernetes.io/projected/400ed650-9346-403d-a70f-27012222dc66-kube-api-access-zmnxb\") pod \"certified-operators-mkqzt\" (UID: \"400ed650-9346-403d-a70f-27012222dc66\") " pod="openshift-marketplace/certified-operators-mkqzt" Feb 19 13:16:17 crc kubenswrapper[4861]: I0219 13:16:17.900499 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kzn2" event={"ID":"16e3c2cc-204d-45e5-bef8-5dd819f69a20","Type":"ContainerStarted","Data":"1e609380fad03008572e7230ec879ca6a6e200509f319daf6873ccb98aa71b8e"} Feb 19 13:16:17 crc kubenswrapper[4861]: I0219 13:16:17.939659 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkqdz\" (UniqueName: \"kubernetes.io/projected/11627588-ac57-42e4-9fc1-e01ecfd7ccd8-kube-api-access-dkqdz\") pod \"community-operators-6b6bs\" (UID: \"11627588-ac57-42e4-9fc1-e01ecfd7ccd8\") " pod="openshift-marketplace/community-operators-6b6bs" Feb 19 13:16:17 crc kubenswrapper[4861]: I0219 13:16:17.939898 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11627588-ac57-42e4-9fc1-e01ecfd7ccd8-catalog-content\") pod \"community-operators-6b6bs\" (UID: \"11627588-ac57-42e4-9fc1-e01ecfd7ccd8\") " 
pod="openshift-marketplace/community-operators-6b6bs" Feb 19 13:16:17 crc kubenswrapper[4861]: I0219 13:16:17.940029 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11627588-ac57-42e4-9fc1-e01ecfd7ccd8-utilities\") pod \"community-operators-6b6bs\" (UID: \"11627588-ac57-42e4-9fc1-e01ecfd7ccd8\") " pod="openshift-marketplace/community-operators-6b6bs" Feb 19 13:16:18 crc kubenswrapper[4861]: I0219 13:16:18.041901 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkqdz\" (UniqueName: \"kubernetes.io/projected/11627588-ac57-42e4-9fc1-e01ecfd7ccd8-kube-api-access-dkqdz\") pod \"community-operators-6b6bs\" (UID: \"11627588-ac57-42e4-9fc1-e01ecfd7ccd8\") " pod="openshift-marketplace/community-operators-6b6bs" Feb 19 13:16:18 crc kubenswrapper[4861]: I0219 13:16:18.042020 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11627588-ac57-42e4-9fc1-e01ecfd7ccd8-catalog-content\") pod \"community-operators-6b6bs\" (UID: \"11627588-ac57-42e4-9fc1-e01ecfd7ccd8\") " pod="openshift-marketplace/community-operators-6b6bs" Feb 19 13:16:18 crc kubenswrapper[4861]: I0219 13:16:18.042070 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11627588-ac57-42e4-9fc1-e01ecfd7ccd8-utilities\") pod \"community-operators-6b6bs\" (UID: \"11627588-ac57-42e4-9fc1-e01ecfd7ccd8\") " pod="openshift-marketplace/community-operators-6b6bs" Feb 19 13:16:18 crc kubenswrapper[4861]: I0219 13:16:18.042633 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11627588-ac57-42e4-9fc1-e01ecfd7ccd8-utilities\") pod \"community-operators-6b6bs\" (UID: \"11627588-ac57-42e4-9fc1-e01ecfd7ccd8\") " 
pod="openshift-marketplace/community-operators-6b6bs" Feb 19 13:16:18 crc kubenswrapper[4861]: I0219 13:16:18.042667 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11627588-ac57-42e4-9fc1-e01ecfd7ccd8-catalog-content\") pod \"community-operators-6b6bs\" (UID: \"11627588-ac57-42e4-9fc1-e01ecfd7ccd8\") " pod="openshift-marketplace/community-operators-6b6bs" Feb 19 13:16:18 crc kubenswrapper[4861]: I0219 13:16:18.049271 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mkqzt" Feb 19 13:16:18 crc kubenswrapper[4861]: I0219 13:16:18.063391 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkqdz\" (UniqueName: \"kubernetes.io/projected/11627588-ac57-42e4-9fc1-e01ecfd7ccd8-kube-api-access-dkqdz\") pod \"community-operators-6b6bs\" (UID: \"11627588-ac57-42e4-9fc1-e01ecfd7ccd8\") " pod="openshift-marketplace/community-operators-6b6bs" Feb 19 13:16:18 crc kubenswrapper[4861]: I0219 13:16:18.169649 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6b6bs" Feb 19 13:16:18 crc kubenswrapper[4861]: I0219 13:16:18.244217 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mkqzt"] Feb 19 13:16:18 crc kubenswrapper[4861]: W0219 13:16:18.250210 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod400ed650_9346_403d_a70f_27012222dc66.slice/crio-af537c87900f130d267ba143cf0a2a90d8f43d18caa81611bc62badc935c37d5 WatchSource:0}: Error finding container af537c87900f130d267ba143cf0a2a90d8f43d18caa81611bc62badc935c37d5: Status 404 returned error can't find the container with id af537c87900f130d267ba143cf0a2a90d8f43d18caa81611bc62badc935c37d5 Feb 19 13:16:18 crc kubenswrapper[4861]: I0219 13:16:18.410367 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6b6bs"] Feb 19 13:16:18 crc kubenswrapper[4861]: W0219 13:16:18.495347 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11627588_ac57_42e4_9fc1_e01ecfd7ccd8.slice/crio-8c2b5b31b864c3c00ce77ceb7d541fdba8ebf2f07cb362af7410aebcabcd36d6 WatchSource:0}: Error finding container 8c2b5b31b864c3c00ce77ceb7d541fdba8ebf2f07cb362af7410aebcabcd36d6: Status 404 returned error can't find the container with id 8c2b5b31b864c3c00ce77ceb7d541fdba8ebf2f07cb362af7410aebcabcd36d6 Feb 19 13:16:18 crc kubenswrapper[4861]: I0219 13:16:18.913167 4861 generic.go:334] "Generic (PLEG): container finished" podID="11627588-ac57-42e4-9fc1-e01ecfd7ccd8" containerID="20e8ae5d09547fecc34352fa5db8ecc8999e07955c88212052a27787c7c42d44" exitCode=0 Feb 19 13:16:18 crc kubenswrapper[4861]: I0219 13:16:18.913237 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6b6bs" 
event={"ID":"11627588-ac57-42e4-9fc1-e01ecfd7ccd8","Type":"ContainerDied","Data":"20e8ae5d09547fecc34352fa5db8ecc8999e07955c88212052a27787c7c42d44"} Feb 19 13:16:18 crc kubenswrapper[4861]: I0219 13:16:18.913684 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6b6bs" event={"ID":"11627588-ac57-42e4-9fc1-e01ecfd7ccd8","Type":"ContainerStarted","Data":"8c2b5b31b864c3c00ce77ceb7d541fdba8ebf2f07cb362af7410aebcabcd36d6"} Feb 19 13:16:18 crc kubenswrapper[4861]: I0219 13:16:18.916130 4861 generic.go:334] "Generic (PLEG): container finished" podID="a41fdccb-e760-4654-b1b6-f3f31d71f474" containerID="8b832bac025a657461bda70f619af38377d199315ebbdbff014917ee43bf8af2" exitCode=0 Feb 19 13:16:18 crc kubenswrapper[4861]: I0219 13:16:18.916209 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvj85" event={"ID":"a41fdccb-e760-4654-b1b6-f3f31d71f474","Type":"ContainerDied","Data":"8b832bac025a657461bda70f619af38377d199315ebbdbff014917ee43bf8af2"} Feb 19 13:16:18 crc kubenswrapper[4861]: I0219 13:16:18.926776 4861 generic.go:334] "Generic (PLEG): container finished" podID="16e3c2cc-204d-45e5-bef8-5dd819f69a20" containerID="1e609380fad03008572e7230ec879ca6a6e200509f319daf6873ccb98aa71b8e" exitCode=0 Feb 19 13:16:18 crc kubenswrapper[4861]: I0219 13:16:18.927238 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kzn2" event={"ID":"16e3c2cc-204d-45e5-bef8-5dd819f69a20","Type":"ContainerDied","Data":"1e609380fad03008572e7230ec879ca6a6e200509f319daf6873ccb98aa71b8e"} Feb 19 13:16:18 crc kubenswrapper[4861]: I0219 13:16:18.929556 4861 generic.go:334] "Generic (PLEG): container finished" podID="400ed650-9346-403d-a70f-27012222dc66" containerID="37973ffde48904e40fdf448303b0ae96ae33377d00d7056cbf8f5105e0df644b" exitCode=0 Feb 19 13:16:18 crc kubenswrapper[4861]: I0219 13:16:18.929633 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-mkqzt" event={"ID":"400ed650-9346-403d-a70f-27012222dc66","Type":"ContainerDied","Data":"37973ffde48904e40fdf448303b0ae96ae33377d00d7056cbf8f5105e0df644b"} Feb 19 13:16:18 crc kubenswrapper[4861]: I0219 13:16:18.929667 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkqzt" event={"ID":"400ed650-9346-403d-a70f-27012222dc66","Type":"ContainerStarted","Data":"af537c87900f130d267ba143cf0a2a90d8f43d18caa81611bc62badc935c37d5"} Feb 19 13:16:19 crc kubenswrapper[4861]: I0219 13:16:19.936980 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kzn2" event={"ID":"16e3c2cc-204d-45e5-bef8-5dd819f69a20","Type":"ContainerStarted","Data":"8d3159685604a78c02c49e3b9f57a4f8ebc79e2b78835450ce43c7d1d967b264"} Feb 19 13:16:19 crc kubenswrapper[4861]: I0219 13:16:19.939387 4861 generic.go:334] "Generic (PLEG): container finished" podID="400ed650-9346-403d-a70f-27012222dc66" containerID="03577f0d8c95aa010ad3f914c5e40ccd7a68d0ccb0232757d982daebf9d80b90" exitCode=0 Feb 19 13:16:19 crc kubenswrapper[4861]: I0219 13:16:19.939465 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkqzt" event={"ID":"400ed650-9346-403d-a70f-27012222dc66","Type":"ContainerDied","Data":"03577f0d8c95aa010ad3f914c5e40ccd7a68d0ccb0232757d982daebf9d80b90"} Feb 19 13:16:19 crc kubenswrapper[4861]: I0219 13:16:19.943122 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvj85" event={"ID":"a41fdccb-e760-4654-b1b6-f3f31d71f474","Type":"ContainerStarted","Data":"281bbe2e96544d2cb59a3ecf3249684a5b0e1bd2dcacc143f4432e7644a98431"} Feb 19 13:16:19 crc kubenswrapper[4861]: I0219 13:16:19.982359 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7kzn2" podStartSLOduration=2.564521567 
podStartE2EDuration="4.982333406s" podCreationTimestamp="2026-02-19 13:16:15 +0000 UTC" firstStartedPulling="2026-02-19 13:16:16.893932576 +0000 UTC m=+391.555035814" lastFinishedPulling="2026-02-19 13:16:19.311744425 +0000 UTC m=+393.972847653" observedRunningTime="2026-02-19 13:16:19.965517356 +0000 UTC m=+394.626620604" watchObservedRunningTime="2026-02-19 13:16:19.982333406 +0000 UTC m=+394.643436634" Feb 19 13:16:20 crc kubenswrapper[4861]: I0219 13:16:20.009370 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pvj85" podStartSLOduration=2.602516094 podStartE2EDuration="5.009331653s" podCreationTimestamp="2026-02-19 13:16:15 +0000 UTC" firstStartedPulling="2026-02-19 13:16:16.893924756 +0000 UTC m=+391.555027994" lastFinishedPulling="2026-02-19 13:16:19.300740325 +0000 UTC m=+393.961843553" observedRunningTime="2026-02-19 13:16:19.998533996 +0000 UTC m=+394.659637224" watchObservedRunningTime="2026-02-19 13:16:20.009331653 +0000 UTC m=+394.670434891" Feb 19 13:16:20 crc kubenswrapper[4861]: I0219 13:16:20.953931 4861 generic.go:334] "Generic (PLEG): container finished" podID="11627588-ac57-42e4-9fc1-e01ecfd7ccd8" containerID="b7207693674238ca172c16b00d3ff0640e19439c9d990eab9138b7774f99972e" exitCode=0 Feb 19 13:16:20 crc kubenswrapper[4861]: I0219 13:16:20.954400 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6b6bs" event={"ID":"11627588-ac57-42e4-9fc1-e01ecfd7ccd8","Type":"ContainerDied","Data":"b7207693674238ca172c16b00d3ff0640e19439c9d990eab9138b7774f99972e"} Feb 19 13:16:20 crc kubenswrapper[4861]: I0219 13:16:20.959798 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkqzt" event={"ID":"400ed650-9346-403d-a70f-27012222dc66","Type":"ContainerStarted","Data":"0533eb6f5499487c0171cd677c8ec3bd5eb015e7e22adbd9d3c2f36b61836c1c"} Feb 19 13:16:20 crc kubenswrapper[4861]: I0219 
13:16:20.999913 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mkqzt" podStartSLOduration=2.601097348 podStartE2EDuration="3.999896765s" podCreationTimestamp="2026-02-19 13:16:17 +0000 UTC" firstStartedPulling="2026-02-19 13:16:18.934307623 +0000 UTC m=+393.595410891" lastFinishedPulling="2026-02-19 13:16:20.33310709 +0000 UTC m=+394.994210308" observedRunningTime="2026-02-19 13:16:20.997741287 +0000 UTC m=+395.658844515" watchObservedRunningTime="2026-02-19 13:16:20.999896765 +0000 UTC m=+395.660999993" Feb 19 13:16:21 crc kubenswrapper[4861]: I0219 13:16:21.969654 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6b6bs" event={"ID":"11627588-ac57-42e4-9fc1-e01ecfd7ccd8","Type":"ContainerStarted","Data":"4b6a44f16ef7db68d46820f6d0957e9caf3d72e0a0f139baf5705316ee589cb0"} Feb 19 13:16:22 crc kubenswrapper[4861]: I0219 13:16:22.000174 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6b6bs" podStartSLOduration=2.559900685 podStartE2EDuration="5.000157212s" podCreationTimestamp="2026-02-19 13:16:17 +0000 UTC" firstStartedPulling="2026-02-19 13:16:18.917071665 +0000 UTC m=+393.578174903" lastFinishedPulling="2026-02-19 13:16:21.357328202 +0000 UTC m=+396.018431430" observedRunningTime="2026-02-19 13:16:21.997573718 +0000 UTC m=+396.658676976" watchObservedRunningTime="2026-02-19 13:16:22.000157212 +0000 UTC m=+396.661260450" Feb 19 13:16:23 crc kubenswrapper[4861]: I0219 13:16:23.239298 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-hmr4j" Feb 19 13:16:23 crc kubenswrapper[4861]: I0219 13:16:23.310819 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x6mmf"] Feb 19 13:16:25 crc kubenswrapper[4861]: I0219 13:16:25.622268 4861 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7kzn2" Feb 19 13:16:25 crc kubenswrapper[4861]: I0219 13:16:25.622863 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7kzn2" Feb 19 13:16:25 crc kubenswrapper[4861]: I0219 13:16:25.682099 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7kzn2" Feb 19 13:16:25 crc kubenswrapper[4861]: I0219 13:16:25.770257 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pvj85" Feb 19 13:16:25 crc kubenswrapper[4861]: I0219 13:16:25.770359 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pvj85" Feb 19 13:16:25 crc kubenswrapper[4861]: I0219 13:16:25.826057 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pvj85" Feb 19 13:16:26 crc kubenswrapper[4861]: I0219 13:16:26.061519 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pvj85" Feb 19 13:16:26 crc kubenswrapper[4861]: I0219 13:16:26.063111 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7kzn2" Feb 19 13:16:28 crc kubenswrapper[4861]: I0219 13:16:28.050212 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mkqzt" Feb 19 13:16:28 crc kubenswrapper[4861]: I0219 13:16:28.050735 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mkqzt" Feb 19 13:16:28 crc kubenswrapper[4861]: I0219 13:16:28.118815 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-mkqzt" Feb 19 13:16:28 crc kubenswrapper[4861]: I0219 13:16:28.170594 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6b6bs" Feb 19 13:16:28 crc kubenswrapper[4861]: I0219 13:16:28.170684 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6b6bs" Feb 19 13:16:28 crc kubenswrapper[4861]: I0219 13:16:28.231311 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6b6bs" Feb 19 13:16:29 crc kubenswrapper[4861]: I0219 13:16:29.064906 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6b6bs" Feb 19 13:16:29 crc kubenswrapper[4861]: I0219 13:16:29.076530 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mkqzt" Feb 19 13:16:33 crc kubenswrapper[4861]: I0219 13:16:33.834890 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:16:33 crc kubenswrapper[4861]: I0219 13:16:33.835240 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:16:48 crc kubenswrapper[4861]: I0219 13:16:48.355554 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" podUID="7425897a-821a-4293-8d9c-3b0c5744bbc9" 
containerName="registry" containerID="cri-o://fab3ba2b5f7ef9a385a26da79f068f830d295f100e34a82ae6fe6aa46a521117" gracePeriod=30
Feb 19 13:16:48 crc kubenswrapper[4861]: I0219 13:16:48.815818 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf"
Feb 19 13:16:48 crc kubenswrapper[4861]: I0219 13:16:48.915368 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7425897a-821a-4293-8d9c-3b0c5744bbc9-ca-trust-extracted\") pod \"7425897a-821a-4293-8d9c-3b0c5744bbc9\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") "
Feb 19 13:16:48 crc kubenswrapper[4861]: I0219 13:16:48.915574 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7425897a-821a-4293-8d9c-3b0c5744bbc9-bound-sa-token\") pod \"7425897a-821a-4293-8d9c-3b0c5744bbc9\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") "
Feb 19 13:16:48 crc kubenswrapper[4861]: I0219 13:16:48.915679 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7425897a-821a-4293-8d9c-3b0c5744bbc9-registry-tls\") pod \"7425897a-821a-4293-8d9c-3b0c5744bbc9\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") "
Feb 19 13:16:48 crc kubenswrapper[4861]: I0219 13:16:48.915737 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7425897a-821a-4293-8d9c-3b0c5744bbc9-registry-certificates\") pod \"7425897a-821a-4293-8d9c-3b0c5744bbc9\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") "
Feb 19 13:16:48 crc kubenswrapper[4861]: I0219 13:16:48.915816 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7425897a-821a-4293-8d9c-3b0c5744bbc9-trusted-ca\") pod \"7425897a-821a-4293-8d9c-3b0c5744bbc9\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") "
Feb 19 13:16:48 crc kubenswrapper[4861]: I0219 13:16:48.915884 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7425897a-821a-4293-8d9c-3b0c5744bbc9-installation-pull-secrets\") pod \"7425897a-821a-4293-8d9c-3b0c5744bbc9\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") "
Feb 19 13:16:48 crc kubenswrapper[4861]: I0219 13:16:48.916151 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"7425897a-821a-4293-8d9c-3b0c5744bbc9\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") "
Feb 19 13:16:48 crc kubenswrapper[4861]: I0219 13:16:48.916272 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsknd\" (UniqueName: \"kubernetes.io/projected/7425897a-821a-4293-8d9c-3b0c5744bbc9-kube-api-access-jsknd\") pod \"7425897a-821a-4293-8d9c-3b0c5744bbc9\" (UID: \"7425897a-821a-4293-8d9c-3b0c5744bbc9\") "
Feb 19 13:16:48 crc kubenswrapper[4861]: I0219 13:16:48.917258 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7425897a-821a-4293-8d9c-3b0c5744bbc9-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "7425897a-821a-4293-8d9c-3b0c5744bbc9" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 13:16:48 crc kubenswrapper[4861]: I0219 13:16:48.917515 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7425897a-821a-4293-8d9c-3b0c5744bbc9-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "7425897a-821a-4293-8d9c-3b0c5744bbc9" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 13:16:48 crc kubenswrapper[4861]: I0219 13:16:48.925495 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7425897a-821a-4293-8d9c-3b0c5744bbc9-kube-api-access-jsknd" (OuterVolumeSpecName: "kube-api-access-jsknd") pod "7425897a-821a-4293-8d9c-3b0c5744bbc9" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9"). InnerVolumeSpecName "kube-api-access-jsknd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 13:16:48 crc kubenswrapper[4861]: I0219 13:16:48.926590 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7425897a-821a-4293-8d9c-3b0c5744bbc9-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "7425897a-821a-4293-8d9c-3b0c5744bbc9" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 13:16:48 crc kubenswrapper[4861]: I0219 13:16:48.928008 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7425897a-821a-4293-8d9c-3b0c5744bbc9-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "7425897a-821a-4293-8d9c-3b0c5744bbc9" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 13:16:48 crc kubenswrapper[4861]: I0219 13:16:48.928797 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7425897a-821a-4293-8d9c-3b0c5744bbc9-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "7425897a-821a-4293-8d9c-3b0c5744bbc9" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:16:48 crc kubenswrapper[4861]: I0219 13:16:48.936578 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "7425897a-821a-4293-8d9c-3b0c5744bbc9" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 19 13:16:48 crc kubenswrapper[4861]: I0219 13:16:48.955983 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7425897a-821a-4293-8d9c-3b0c5744bbc9-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "7425897a-821a-4293-8d9c-3b0c5744bbc9" (UID: "7425897a-821a-4293-8d9c-3b0c5744bbc9"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 13:16:49 crc kubenswrapper[4861]: I0219 13:16:49.018302 4861 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7425897a-821a-4293-8d9c-3b0c5744bbc9-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Feb 19 13:16:49 crc kubenswrapper[4861]: I0219 13:16:49.018340 4861 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7425897a-821a-4293-8d9c-3b0c5744bbc9-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 19 13:16:49 crc kubenswrapper[4861]: I0219 13:16:49.018353 4861 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7425897a-821a-4293-8d9c-3b0c5744bbc9-registry-tls\") on node \"crc\" DevicePath \"\""
Feb 19 13:16:49 crc kubenswrapper[4861]: I0219 13:16:49.018368 4861 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7425897a-821a-4293-8d9c-3b0c5744bbc9-registry-certificates\") on node \"crc\" DevicePath \"\""
Feb 19 13:16:49 crc kubenswrapper[4861]: I0219 13:16:49.018383 4861 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7425897a-821a-4293-8d9c-3b0c5744bbc9-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 19 13:16:49 crc kubenswrapper[4861]: I0219 13:16:49.018395 4861 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7425897a-821a-4293-8d9c-3b0c5744bbc9-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Feb 19 13:16:49 crc kubenswrapper[4861]: I0219 13:16:49.018407 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsknd\" (UniqueName: \"kubernetes.io/projected/7425897a-821a-4293-8d9c-3b0c5744bbc9-kube-api-access-jsknd\") on node \"crc\" DevicePath \"\""
Feb 19 13:16:49 crc kubenswrapper[4861]: I0219 13:16:49.178923 4861 generic.go:334] "Generic (PLEG): container finished" podID="7425897a-821a-4293-8d9c-3b0c5744bbc9" containerID="fab3ba2b5f7ef9a385a26da79f068f830d295f100e34a82ae6fe6aa46a521117" exitCode=0
Feb 19 13:16:49 crc kubenswrapper[4861]: I0219 13:16:49.179000 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" event={"ID":"7425897a-821a-4293-8d9c-3b0c5744bbc9","Type":"ContainerDied","Data":"fab3ba2b5f7ef9a385a26da79f068f830d295f100e34a82ae6fe6aa46a521117"}
Feb 19 13:16:49 crc kubenswrapper[4861]: I0219 13:16:49.179007 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf"
Feb 19 13:16:49 crc kubenswrapper[4861]: I0219 13:16:49.179067 4861 scope.go:117] "RemoveContainer" containerID="fab3ba2b5f7ef9a385a26da79f068f830d295f100e34a82ae6fe6aa46a521117"
Feb 19 13:16:49 crc kubenswrapper[4861]: I0219 13:16:49.179049 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-x6mmf" event={"ID":"7425897a-821a-4293-8d9c-3b0c5744bbc9","Type":"ContainerDied","Data":"7182a4cdd379db0322309694c791818fe73f7d80d412cb0109eec815221b874c"}
Feb 19 13:16:49 crc kubenswrapper[4861]: I0219 13:16:49.221922 4861 scope.go:117] "RemoveContainer" containerID="fab3ba2b5f7ef9a385a26da79f068f830d295f100e34a82ae6fe6aa46a521117"
Feb 19 13:16:49 crc kubenswrapper[4861]: E0219 13:16:49.226442 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fab3ba2b5f7ef9a385a26da79f068f830d295f100e34a82ae6fe6aa46a521117\": container with ID starting with fab3ba2b5f7ef9a385a26da79f068f830d295f100e34a82ae6fe6aa46a521117 not found: ID does not exist" containerID="fab3ba2b5f7ef9a385a26da79f068f830d295f100e34a82ae6fe6aa46a521117"
Feb 19 13:16:49 crc kubenswrapper[4861]: I0219 13:16:49.226522 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fab3ba2b5f7ef9a385a26da79f068f830d295f100e34a82ae6fe6aa46a521117"} err="failed to get container status \"fab3ba2b5f7ef9a385a26da79f068f830d295f100e34a82ae6fe6aa46a521117\": rpc error: code = NotFound desc = could not find container \"fab3ba2b5f7ef9a385a26da79f068f830d295f100e34a82ae6fe6aa46a521117\": container with ID starting with fab3ba2b5f7ef9a385a26da79f068f830d295f100e34a82ae6fe6aa46a521117 not found: ID does not exist"
Feb 19 13:16:49 crc kubenswrapper[4861]: I0219 13:16:49.233711 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x6mmf"]
Feb 19 13:16:49 crc kubenswrapper[4861]: I0219 13:16:49.245840 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x6mmf"]
Feb 19 13:16:49 crc kubenswrapper[4861]: I0219 13:16:49.990018 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7425897a-821a-4293-8d9c-3b0c5744bbc9" path="/var/lib/kubelet/pods/7425897a-821a-4293-8d9c-3b0c5744bbc9/volumes"
Feb 19 13:17:03 crc kubenswrapper[4861]: I0219 13:17:03.835017 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 13:17:03 crc kubenswrapper[4861]: I0219 13:17:03.835928 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 13:17:03 crc kubenswrapper[4861]: I0219 13:17:03.836011 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq"
Feb 19 13:17:03 crc kubenswrapper[4861]: I0219 13:17:03.837148 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b72bfeb4edac1369a1ca8bb2f270a11b14c4524bceae05c7740438dfa2d9f288"} pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 13:17:03 crc kubenswrapper[4861]: I0219 13:17:03.837331 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" containerID="cri-o://b72bfeb4edac1369a1ca8bb2f270a11b14c4524bceae05c7740438dfa2d9f288" gracePeriod=600
Feb 19 13:17:04 crc kubenswrapper[4861]: I0219 13:17:04.320324 4861 generic.go:334] "Generic (PLEG): container finished" podID="478e6971-05ac-43f2-99a2-cd93644c6227" containerID="b72bfeb4edac1369a1ca8bb2f270a11b14c4524bceae05c7740438dfa2d9f288" exitCode=0
Feb 19 13:17:04 crc kubenswrapper[4861]: I0219 13:17:04.320500 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerDied","Data":"b72bfeb4edac1369a1ca8bb2f270a11b14c4524bceae05c7740438dfa2d9f288"}
Feb 19 13:17:04 crc kubenswrapper[4861]: I0219 13:17:04.321073 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"25ef2498d1603371d170c7ba58d926ede3a215c63cff356671e923ef191e3ea4"}
Feb 19 13:17:04 crc kubenswrapper[4861]: I0219 13:17:04.321129 4861 scope.go:117] "RemoveContainer" containerID="53d3be8c7223de1afc6ea489198124bd0e8e42e993c89ce4084c8082aa6db075"
Feb 19 13:19:33 crc kubenswrapper[4861]: I0219 13:19:33.834867 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 13:19:33 crc kubenswrapper[4861]: I0219 13:19:33.835868 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 13:20:03 crc kubenswrapper[4861]: I0219 13:20:03.834142 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 13:20:03 crc kubenswrapper[4861]: I0219 13:20:03.834788 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 13:20:33 crc kubenswrapper[4861]: I0219 13:20:33.834500 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 13:20:33 crc kubenswrapper[4861]: I0219 13:20:33.835211 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 13:20:33 crc kubenswrapper[4861]: I0219 13:20:33.835297 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq"
Feb 19 13:20:33 crc kubenswrapper[4861]: I0219 13:20:33.836108 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"25ef2498d1603371d170c7ba58d926ede3a215c63cff356671e923ef191e3ea4"} pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 13:20:33 crc kubenswrapper[4861]: I0219 13:20:33.836203 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" containerID="cri-o://25ef2498d1603371d170c7ba58d926ede3a215c63cff356671e923ef191e3ea4" gracePeriod=600
Feb 19 13:20:33 crc kubenswrapper[4861]: I0219 13:20:33.993544 4861 generic.go:334] "Generic (PLEG): container finished" podID="478e6971-05ac-43f2-99a2-cd93644c6227" containerID="25ef2498d1603371d170c7ba58d926ede3a215c63cff356671e923ef191e3ea4" exitCode=0
Feb 19 13:20:33 crc kubenswrapper[4861]: I0219 13:20:33.993630 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerDied","Data":"25ef2498d1603371d170c7ba58d926ede3a215c63cff356671e923ef191e3ea4"}
Feb 19 13:20:33 crc kubenswrapper[4861]: I0219 13:20:33.993704 4861 scope.go:117] "RemoveContainer" containerID="b72bfeb4edac1369a1ca8bb2f270a11b14c4524bceae05c7740438dfa2d9f288"
Feb 19 13:20:35 crc kubenswrapper[4861]: I0219 13:20:35.003393 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"172ce433d46e388504efbd8038cf7a4f97b7e544c89545b0b9a675e189350528"}
Feb 19 13:22:19 crc kubenswrapper[4861]: I0219 13:22:19.700908 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wb9bn"]
Feb 19 13:22:19 crc kubenswrapper[4861]: I0219 13:22:19.702816 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="ovn-controller" containerID="cri-o://4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85" gracePeriod=30
Feb 19 13:22:19 crc kubenswrapper[4861]: I0219 13:22:19.702912 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="northd" containerID="cri-o://77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517" gracePeriod=30
Feb 19 13:22:19 crc kubenswrapper[4861]: I0219 13:22:19.702977 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="sbdb" containerID="cri-o://b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61" gracePeriod=30
Feb 19 13:22:19 crc kubenswrapper[4861]: I0219 13:22:19.703078 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="nbdb" containerID="cri-o://13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0" gracePeriod=30
Feb 19 13:22:19 crc kubenswrapper[4861]: I0219 13:22:19.703103 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d" gracePeriod=30
Feb 19 13:22:19 crc kubenswrapper[4861]: I0219 13:22:19.703183 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="ovn-acl-logging" containerID="cri-o://e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be" gracePeriod=30
Feb 19 13:22:19 crc kubenswrapper[4861]: I0219 13:22:19.702943 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="kube-rbac-proxy-node" containerID="cri-o://6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e" gracePeriod=30
Feb 19 13:22:19 crc kubenswrapper[4861]: I0219 13:22:19.766520 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="ovnkube-controller" containerID="cri-o://09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861" gracePeriod=30
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.058007 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wb9bn_2b4f740d-a1ca-450f-adad-afb42efe0c76/ovnkube-controller/3.log"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.061262 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wb9bn_2b4f740d-a1ca-450f-adad-afb42efe0c76/ovn-acl-logging/0.log"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.062027 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wb9bn_2b4f740d-a1ca-450f-adad-afb42efe0c76/ovn-controller/0.log"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.062847 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.148863 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kfk79"]
Feb 19 13:22:20 crc kubenswrapper[4861]: E0219 13:22:20.149192 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7425897a-821a-4293-8d9c-3b0c5744bbc9" containerName="registry"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.149217 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="7425897a-821a-4293-8d9c-3b0c5744bbc9" containerName="registry"
Feb 19 13:22:20 crc kubenswrapper[4861]: E0219 13:22:20.149233 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="kube-rbac-proxy-ovn-metrics"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.149246 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="kube-rbac-proxy-ovn-metrics"
Feb 19 13:22:20 crc kubenswrapper[4861]: E0219 13:22:20.149269 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="ovnkube-controller"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.149283 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="ovnkube-controller"
Feb 19 13:22:20 crc kubenswrapper[4861]: E0219 13:22:20.149298 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="ovnkube-controller"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.149309 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="ovnkube-controller"
Feb 19 13:22:20 crc kubenswrapper[4861]: E0219 13:22:20.149324 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="northd"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.149335 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="northd"
Feb 19 13:22:20 crc kubenswrapper[4861]: E0219 13:22:20.149352 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="kube-rbac-proxy-node"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.149756 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="kube-rbac-proxy-node"
Feb 19 13:22:20 crc kubenswrapper[4861]: E0219 13:22:20.149802 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="nbdb"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.149815 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="nbdb"
Feb 19 13:22:20 crc kubenswrapper[4861]: E0219 13:22:20.149833 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="ovnkube-controller"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.149848 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="ovnkube-controller"
Feb 19 13:22:20 crc kubenswrapper[4861]: E0219 13:22:20.149864 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="ovn-controller"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.149877 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="ovn-controller"
Feb 19 13:22:20 crc kubenswrapper[4861]: E0219 13:22:20.149896 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="sbdb"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.149907 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="sbdb"
Feb 19 13:22:20 crc kubenswrapper[4861]: E0219 13:22:20.149924 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="kubecfg-setup"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.149937 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="kubecfg-setup"
Feb 19 13:22:20 crc kubenswrapper[4861]: E0219 13:22:20.149951 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="ovn-acl-logging"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.149963 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="ovn-acl-logging"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.150169 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="ovn-acl-logging"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.150197 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="nbdb"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.150212 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="ovnkube-controller"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.150231 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="ovnkube-controller"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.150245 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="ovnkube-controller"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.150258 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="ovnkube-controller"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.150275 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="kube-rbac-proxy-node"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.150292 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="ovn-controller"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.150304 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="kube-rbac-proxy-ovn-metrics"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.150324 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="sbdb"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.150342 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="7425897a-821a-4293-8d9c-3b0c5744bbc9" containerName="registry"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.150359 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="northd"
Feb 19 13:22:20 crc kubenswrapper[4861]: E0219 13:22:20.150540 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="ovnkube-controller"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.150563 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="ovnkube-controller"
Feb 19 13:22:20 crc kubenswrapper[4861]: E0219 13:22:20.150586 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="ovnkube-controller"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.150599 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="ovnkube-controller"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.150766 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerName="ovnkube-controller"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.154092 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kfk79"
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.187511 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-run-ovn\") pod \"2b4f740d-a1ca-450f-adad-afb42efe0c76\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") "
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.187689 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "2b4f740d-a1ca-450f-adad-afb42efe0c76" (UID: "2b4f740d-a1ca-450f-adad-afb42efe0c76"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.187880 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-etc-openvswitch\") pod \"2b4f740d-a1ca-450f-adad-afb42efe0c76\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") "
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.187926 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-kubelet\") pod \"2b4f740d-a1ca-450f-adad-afb42efe0c76\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") "
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.187957 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2b4f740d-a1ca-450f-adad-afb42efe0c76-ovnkube-config\") pod \"2b4f740d-a1ca-450f-adad-afb42efe0c76\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") "
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.187975 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-var-lib-openvswitch\") pod \"2b4f740d-a1ca-450f-adad-afb42efe0c76\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") "
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188017 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2b4f740d-a1ca-450f-adad-afb42efe0c76-ovnkube-script-lib\") pod \"2b4f740d-a1ca-450f-adad-afb42efe0c76\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") "
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188038 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-cni-bin\") pod \"2b4f740d-a1ca-450f-adad-afb42efe0c76\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") "
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188055 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-run-netns\") pod \"2b4f740d-a1ca-450f-adad-afb42efe0c76\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") "
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188078 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-var-lib-cni-networks-ovn-kubernetes\") pod \"2b4f740d-a1ca-450f-adad-afb42efe0c76\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") "
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188098 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-systemd-units\") pod \"2b4f740d-a1ca-450f-adad-afb42efe0c76\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") "
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188128 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-run-openvswitch\") pod \"2b4f740d-a1ca-450f-adad-afb42efe0c76\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") "
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188143 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-slash\") pod \"2b4f740d-a1ca-450f-adad-afb42efe0c76\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") "
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188173 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2b4f740d-a1ca-450f-adad-afb42efe0c76-ovn-node-metrics-cert\") pod \"2b4f740d-a1ca-450f-adad-afb42efe0c76\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") "
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188195 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-run-systemd\") pod \"2b4f740d-a1ca-450f-adad-afb42efe0c76\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") "
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188217 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-log-socket\") pod \"2b4f740d-a1ca-450f-adad-afb42efe0c76\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") "
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188234 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f9jb\" (UniqueName: \"kubernetes.io/projected/2b4f740d-a1ca-450f-adad-afb42efe0c76-kube-api-access-4f9jb\") pod \"2b4f740d-a1ca-450f-adad-afb42efe0c76\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") "
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188272 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "2b4f740d-a1ca-450f-adad-afb42efe0c76" (UID: "2b4f740d-a1ca-450f-adad-afb42efe0c76"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188290 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-node-log\") pod \"2b4f740d-a1ca-450f-adad-afb42efe0c76\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") "
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188307 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "2b4f740d-a1ca-450f-adad-afb42efe0c76" (UID: "2b4f740d-a1ca-450f-adad-afb42efe0c76"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188345 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2b4f740d-a1ca-450f-adad-afb42efe0c76-env-overrides\") pod \"2b4f740d-a1ca-450f-adad-afb42efe0c76\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") "
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188369 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-run-ovn-kubernetes\") pod \"2b4f740d-a1ca-450f-adad-afb42efe0c76\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") "
Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188381 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "2b4f740d-a1ca-450f-adad-afb42efe0c76" (UID: "2b4f740d-a1ca-450f-adad-afb42efe0c76"). InnerVolumeSpecName "etc-openvswitch".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188404 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-cni-netd\") pod \"2b4f740d-a1ca-450f-adad-afb42efe0c76\" (UID: \"2b4f740d-a1ca-450f-adad-afb42efe0c76\") " Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188463 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "2b4f740d-a1ca-450f-adad-afb42efe0c76" (UID: "2b4f740d-a1ca-450f-adad-afb42efe0c76"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188460 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-slash" (OuterVolumeSpecName: "host-slash") pod "2b4f740d-a1ca-450f-adad-afb42efe0c76" (UID: "2b4f740d-a1ca-450f-adad-afb42efe0c76"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188508 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "2b4f740d-a1ca-450f-adad-afb42efe0c76" (UID: "2b4f740d-a1ca-450f-adad-afb42efe0c76"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188549 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "2b4f740d-a1ca-450f-adad-afb42efe0c76" (UID: "2b4f740d-a1ca-450f-adad-afb42efe0c76"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188560 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "2b4f740d-a1ca-450f-adad-afb42efe0c76" (UID: "2b4f740d-a1ca-450f-adad-afb42efe0c76"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188590 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "2b4f740d-a1ca-450f-adad-afb42efe0c76" (UID: "2b4f740d-a1ca-450f-adad-afb42efe0c76"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188618 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-run-openvswitch\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188680 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-node-log\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188727 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-host-run-ovn-kubernetes\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188749 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz2gc\" (UniqueName: \"kubernetes.io/projected/3f95f74b-44a5-44ed-9239-0a49c357fee9-kube-api-access-wz2gc\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188757 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-log-socket" (OuterVolumeSpecName: "log-socket") pod "2b4f740d-a1ca-450f-adad-afb42efe0c76" (UID: 
"2b4f740d-a1ca-450f-adad-afb42efe0c76"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188778 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3f95f74b-44a5-44ed-9239-0a49c357fee9-ovnkube-config\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188811 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3f95f74b-44a5-44ed-9239-0a49c357fee9-env-overrides\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188835 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-host-slash\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188856 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-host-cni-netd\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188918 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-node-log" (OuterVolumeSpecName: "node-log") pod 
"2b4f740d-a1ca-450f-adad-afb42efe0c76" (UID: "2b4f740d-a1ca-450f-adad-afb42efe0c76"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.188946 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "2b4f740d-a1ca-450f-adad-afb42efe0c76" (UID: "2b4f740d-a1ca-450f-adad-afb42efe0c76"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.189014 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-etc-openvswitch\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.189106 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3f95f74b-44a5-44ed-9239-0a49c357fee9-ovn-node-metrics-cert\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.189158 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-host-kubelet\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.189195 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-var-lib-openvswitch\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.189231 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-run-ovn\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.189257 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-host-cni-bin\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.189318 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-run-systemd\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.189353 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-systemd-units\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.189384 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3f95f74b-44a5-44ed-9239-0a49c357fee9-ovnkube-script-lib\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.189962 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b4f740d-a1ca-450f-adad-afb42efe0c76-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "2b4f740d-a1ca-450f-adad-afb42efe0c76" (UID: "2b4f740d-a1ca-450f-adad-afb42efe0c76"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.189984 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b4f740d-a1ca-450f-adad-afb42efe0c76-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "2b4f740d-a1ca-450f-adad-afb42efe0c76" (UID: "2b4f740d-a1ca-450f-adad-afb42efe0c76"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.190068 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "2b4f740d-a1ca-450f-adad-afb42efe0c76" (UID: "2b4f740d-a1ca-450f-adad-afb42efe0c76"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.192335 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b4f740d-a1ca-450f-adad-afb42efe0c76-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "2b4f740d-a1ca-450f-adad-afb42efe0c76" (UID: "2b4f740d-a1ca-450f-adad-afb42efe0c76"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.194597 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-host-run-netns\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.194748 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-log-socket\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.194791 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.194876 4861 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.194888 4861 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.194899 4861 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.194908 4861 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.194918 4861 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2b4f740d-a1ca-450f-adad-afb42efe0c76-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.194927 4861 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2b4f740d-a1ca-450f-adad-afb42efe0c76-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.194936 4861 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.194947 4861 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.194956 4861 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.194964 4861 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.194976 4861 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.194986 4861 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.194994 4861 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-slash\") on node \"crc\" DevicePath \"\"" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.195002 4861 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-log-socket\") on node \"crc\" DevicePath \"\"" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.195013 4861 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-node-log\") on node \"crc\" DevicePath \"\"" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.195023 4861 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2b4f740d-a1ca-450f-adad-afb42efe0c76-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.195031 4861 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-host-run-ovn-kubernetes\") on node \"crc\" DevicePath 
\"\"" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.202165 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b4f740d-a1ca-450f-adad-afb42efe0c76-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "2b4f740d-a1ca-450f-adad-afb42efe0c76" (UID: "2b4f740d-a1ca-450f-adad-afb42efe0c76"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.206776 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b4f740d-a1ca-450f-adad-afb42efe0c76-kube-api-access-4f9jb" (OuterVolumeSpecName: "kube-api-access-4f9jb") pod "2b4f740d-a1ca-450f-adad-afb42efe0c76" (UID: "2b4f740d-a1ca-450f-adad-afb42efe0c76"). InnerVolumeSpecName "kube-api-access-4f9jb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.208244 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "2b4f740d-a1ca-450f-adad-afb42efe0c76" (UID: "2b4f740d-a1ca-450f-adad-afb42efe0c76"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.296683 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.296739 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-run-openvswitch\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.296771 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-node-log\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.296803 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz2gc\" (UniqueName: \"kubernetes.io/projected/3f95f74b-44a5-44ed-9239-0a49c357fee9-kube-api-access-wz2gc\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.296826 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-host-run-ovn-kubernetes\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.296850 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3f95f74b-44a5-44ed-9239-0a49c357fee9-ovnkube-config\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.296871 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3f95f74b-44a5-44ed-9239-0a49c357fee9-env-overrides\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.296890 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-host-slash\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.296909 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-host-cni-netd\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.296901 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.296940 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-run-openvswitch\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.296963 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-host-run-ovn-kubernetes\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.296974 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-etc-openvswitch\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.297023 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-node-log\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.296999 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-host-cni-netd\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 
13:22:20.296933 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-etc-openvswitch\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.297071 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-host-slash\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.297299 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3f95f74b-44a5-44ed-9239-0a49c357fee9-ovn-node-metrics-cert\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.297372 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-host-kubelet\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.297442 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-var-lib-openvswitch\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.297493 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-host-cni-bin\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.297502 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-host-kubelet\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.297529 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-run-ovn\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.297574 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-run-ovn\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.297621 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-var-lib-openvswitch\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.297649 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-run-systemd\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.297672 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-host-cni-bin\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.297691 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-systemd-units\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.297713 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3f95f74b-44a5-44ed-9239-0a49c357fee9-ovnkube-script-lib\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.297718 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-run-systemd\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.297768 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-host-run-netns\") pod 
\"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.297817 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-log-socket\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.297934 4861 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2b4f740d-a1ca-450f-adad-afb42efe0c76-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.297945 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-host-run-netns\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.297977 4861 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2b4f740d-a1ca-450f-adad-afb42efe0c76-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.297980 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-log-socket\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.297954 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/3f95f74b-44a5-44ed-9239-0a49c357fee9-systemd-units\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.298019 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f9jb\" (UniqueName: \"kubernetes.io/projected/2b4f740d-a1ca-450f-adad-afb42efe0c76-kube-api-access-4f9jb\") on node \"crc\" DevicePath \"\"" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.298252 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3f95f74b-44a5-44ed-9239-0a49c357fee9-env-overrides\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.298273 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3f95f74b-44a5-44ed-9239-0a49c357fee9-ovnkube-config\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.298553 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3f95f74b-44a5-44ed-9239-0a49c357fee9-ovnkube-script-lib\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.303870 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3f95f74b-44a5-44ed-9239-0a49c357fee9-ovn-node-metrics-cert\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.328334 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz2gc\" (UniqueName: \"kubernetes.io/projected/3f95f74b-44a5-44ed-9239-0a49c357fee9-kube-api-access-wz2gc\") pod \"ovnkube-node-kfk79\" (UID: \"3f95f74b-44a5-44ed-9239-0a49c357fee9\") " pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.472067 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.808146 4861 generic.go:334] "Generic (PLEG): container finished" podID="3f95f74b-44a5-44ed-9239-0a49c357fee9" containerID="6061d3c3ffedf42e5405eac5df67e601699c71f6ce0d821d3860df2769fe7c26" exitCode=0 Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.808206 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" event={"ID":"3f95f74b-44a5-44ed-9239-0a49c357fee9","Type":"ContainerDied","Data":"6061d3c3ffedf42e5405eac5df67e601699c71f6ce0d821d3860df2769fe7c26"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.808256 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" event={"ID":"3f95f74b-44a5-44ed-9239-0a49c357fee9","Type":"ContainerStarted","Data":"5e59e84292a9d71255dc032eabddbef57abad1556c7e7c531de1f3f0a946195a"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.812335 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wb9bn_2b4f740d-a1ca-450f-adad-afb42efe0c76/ovnkube-controller/3.log" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.817609 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wb9bn_2b4f740d-a1ca-450f-adad-afb42efe0c76/ovn-acl-logging/0.log" Feb 19 
13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.818739 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wb9bn_2b4f740d-a1ca-450f-adad-afb42efe0c76/ovn-controller/0.log" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.819357 4861 generic.go:334] "Generic (PLEG): container finished" podID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerID="09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861" exitCode=0 Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.819411 4861 generic.go:334] "Generic (PLEG): container finished" podID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerID="b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61" exitCode=0 Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.819454 4861 generic.go:334] "Generic (PLEG): container finished" podID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerID="13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0" exitCode=0 Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.819466 4861 generic.go:334] "Generic (PLEG): container finished" podID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerID="77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517" exitCode=0 Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.819476 4861 generic.go:334] "Generic (PLEG): container finished" podID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerID="7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d" exitCode=0 Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.819486 4861 generic.go:334] "Generic (PLEG): container finished" podID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerID="6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e" exitCode=0 Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.819500 4861 generic.go:334] "Generic (PLEG): container finished" podID="2b4f740d-a1ca-450f-adad-afb42efe0c76" 
containerID="e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be" exitCode=143 Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.819511 4861 generic.go:334] "Generic (PLEG): container finished" podID="2b4f740d-a1ca-450f-adad-afb42efe0c76" containerID="4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85" exitCode=143 Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.819469 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" event={"ID":"2b4f740d-a1ca-450f-adad-afb42efe0c76","Type":"ContainerDied","Data":"09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.819584 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.819604 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" event={"ID":"2b4f740d-a1ca-450f-adad-afb42efe0c76","Type":"ContainerDied","Data":"b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.819635 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" event={"ID":"2b4f740d-a1ca-450f-adad-afb42efe0c76","Type":"ContainerDied","Data":"13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.819656 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" event={"ID":"2b4f740d-a1ca-450f-adad-afb42efe0c76","Type":"ContainerDied","Data":"77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.819673 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" 
event={"ID":"2b4f740d-a1ca-450f-adad-afb42efe0c76","Type":"ContainerDied","Data":"7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.820077 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" event={"ID":"2b4f740d-a1ca-450f-adad-afb42efe0c76","Type":"ContainerDied","Data":"6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.820174 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.820191 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.820199 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.820728 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.820743 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.820751 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e"} Feb 19 13:22:20 crc 
kubenswrapper[4861]: I0219 13:22:20.820759 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.820800 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.820931 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.820948 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" event={"ID":"2b4f740d-a1ca-450f-adad-afb42efe0c76","Type":"ContainerDied","Data":"e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.820966 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.820998 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.821006 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.821013 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.821020 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.819698 4861 scope.go:117] "RemoveContainer" containerID="09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.821030 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.821214 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.821246 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.821261 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.821274 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.821309 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" 
event={"ID":"2b4f740d-a1ca-450f-adad-afb42efe0c76","Type":"ContainerDied","Data":"4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.821348 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.821364 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.821376 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.821388 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.821398 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.821408 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.821440 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.821453 4861 pod_container_deletor.go:114] "Failed 
to issue the request to remove container" containerID={"Type":"cri-o","ID":"e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.821463 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.821473 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.821488 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wb9bn" event={"ID":"2b4f740d-a1ca-450f-adad-afb42efe0c76","Type":"ContainerDied","Data":"60835a65e091c0f93a09823014ee72c1964da4ba412855e5d2bcdd7f35b22871"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.821506 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.821520 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.821531 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.821541 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0"} Feb 19 13:22:20 crc kubenswrapper[4861]: 
I0219 13:22:20.821552 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.821563 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.821573 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.821585 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.821616 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.821628 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.823035 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ffskh_1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb/kube-multus/2.log" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.823767 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ffskh_1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb/kube-multus/1.log" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.823853 4861 generic.go:334] "Generic (PLEG): container finished" 
podID="1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb" containerID="1fb40728a849ae1a3b7a5f7e05b9b5e01c9049dfac4cd91ea977686bd736c79b" exitCode=2 Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.823906 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ffskh" event={"ID":"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb","Type":"ContainerDied","Data":"1fb40728a849ae1a3b7a5f7e05b9b5e01c9049dfac4cd91ea977686bd736c79b"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.823944 4861 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4cfa2d69da9a7c9b0d96ca3c642a204c152a426e9aba16467134005fe25c907a"} Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.824745 4861 scope.go:117] "RemoveContainer" containerID="1fb40728a849ae1a3b7a5f7e05b9b5e01c9049dfac4cd91ea977686bd736c79b" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.936157 4861 scope.go:117] "RemoveContainer" containerID="eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.969169 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wb9bn"] Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.973911 4861 scope.go:117] "RemoveContainer" containerID="b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61" Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.975023 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wb9bn"] Feb 19 13:22:20 crc kubenswrapper[4861]: I0219 13:22:20.999658 4861 scope.go:117] "RemoveContainer" containerID="13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.016533 4861 scope.go:117] "RemoveContainer" containerID="77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.031805 4861 
scope.go:117] "RemoveContainer" containerID="7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.057265 4861 scope.go:117] "RemoveContainer" containerID="6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.071514 4861 scope.go:117] "RemoveContainer" containerID="e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.090177 4861 scope.go:117] "RemoveContainer" containerID="4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.112882 4861 scope.go:117] "RemoveContainer" containerID="ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.140260 4861 scope.go:117] "RemoveContainer" containerID="09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861" Feb 19 13:22:21 crc kubenswrapper[4861]: E0219 13:22:21.140801 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861\": container with ID starting with 09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861 not found: ID does not exist" containerID="09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.140861 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861"} err="failed to get container status \"09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861\": rpc error: code = NotFound desc = could not find container \"09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861\": container with ID starting with 
09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861 not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.140894 4861 scope.go:117] "RemoveContainer" containerID="eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512" Feb 19 13:22:21 crc kubenswrapper[4861]: E0219 13:22:21.141318 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512\": container with ID starting with eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512 not found: ID does not exist" containerID="eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.141347 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512"} err="failed to get container status \"eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512\": rpc error: code = NotFound desc = could not find container \"eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512\": container with ID starting with eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512 not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.141368 4861 scope.go:117] "RemoveContainer" containerID="b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61" Feb 19 13:22:21 crc kubenswrapper[4861]: E0219 13:22:21.141947 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\": container with ID starting with b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61 not found: ID does not exist" containerID="b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61" Feb 19 13:22:21 crc 
kubenswrapper[4861]: I0219 13:22:21.141965 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61"} err="failed to get container status \"b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\": rpc error: code = NotFound desc = could not find container \"b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\": container with ID starting with b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61 not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.141991 4861 scope.go:117] "RemoveContainer" containerID="13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0" Feb 19 13:22:21 crc kubenswrapper[4861]: E0219 13:22:21.142222 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\": container with ID starting with 13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0 not found: ID does not exist" containerID="13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.142249 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0"} err="failed to get container status \"13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\": rpc error: code = NotFound desc = could not find container \"13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\": container with ID starting with 13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0 not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.142267 4861 scope.go:117] "RemoveContainer" containerID="77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517" Feb 19 
13:22:21 crc kubenswrapper[4861]: E0219 13:22:21.142509 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\": container with ID starting with 77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517 not found: ID does not exist" containerID="77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.142532 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517"} err="failed to get container status \"77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\": rpc error: code = NotFound desc = could not find container \"77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\": container with ID starting with 77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517 not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.142558 4861 scope.go:117] "RemoveContainer" containerID="7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d" Feb 19 13:22:21 crc kubenswrapper[4861]: E0219 13:22:21.142944 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\": container with ID starting with 7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d not found: ID does not exist" containerID="7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.142967 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d"} err="failed to get container status 
\"7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\": rpc error: code = NotFound desc = could not find container \"7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\": container with ID starting with 7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.142981 4861 scope.go:117] "RemoveContainer" containerID="6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e" Feb 19 13:22:21 crc kubenswrapper[4861]: E0219 13:22:21.143218 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\": container with ID starting with 6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e not found: ID does not exist" containerID="6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.143258 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e"} err="failed to get container status \"6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\": rpc error: code = NotFound desc = could not find container \"6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\": container with ID starting with 6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.143277 4861 scope.go:117] "RemoveContainer" containerID="e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be" Feb 19 13:22:21 crc kubenswrapper[4861]: E0219 13:22:21.143541 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\": container with ID starting with e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be not found: ID does not exist" containerID="e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.143563 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be"} err="failed to get container status \"e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\": rpc error: code = NotFound desc = could not find container \"e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\": container with ID starting with e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.143582 4861 scope.go:117] "RemoveContainer" containerID="4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85" Feb 19 13:22:21 crc kubenswrapper[4861]: E0219 13:22:21.144043 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\": container with ID starting with 4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85 not found: ID does not exist" containerID="4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.144091 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85"} err="failed to get container status \"4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\": rpc error: code = NotFound desc = could not find container \"4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\": container with ID 
starting with 4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85 not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.144110 4861 scope.go:117] "RemoveContainer" containerID="ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a" Feb 19 13:22:21 crc kubenswrapper[4861]: E0219 13:22:21.144452 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\": container with ID starting with ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a not found: ID does not exist" containerID="ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.144472 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a"} err="failed to get container status \"ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\": rpc error: code = NotFound desc = could not find container \"ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\": container with ID starting with ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.144485 4861 scope.go:117] "RemoveContainer" containerID="09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.144684 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861"} err="failed to get container status \"09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861\": rpc error: code = NotFound desc = could not find container \"09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861\": 
container with ID starting with 09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861 not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.144699 4861 scope.go:117] "RemoveContainer" containerID="eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.145345 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512"} err="failed to get container status \"eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512\": rpc error: code = NotFound desc = could not find container \"eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512\": container with ID starting with eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512 not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.145360 4861 scope.go:117] "RemoveContainer" containerID="b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.146286 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61"} err="failed to get container status \"b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\": rpc error: code = NotFound desc = could not find container \"b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\": container with ID starting with b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61 not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.146581 4861 scope.go:117] "RemoveContainer" containerID="13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.146944 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0"} err="failed to get container status \"13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\": rpc error: code = NotFound desc = could not find container \"13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\": container with ID starting with 13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0 not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.146986 4861 scope.go:117] "RemoveContainer" containerID="77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.147320 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517"} err="failed to get container status \"77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\": rpc error: code = NotFound desc = could not find container \"77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\": container with ID starting with 77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517 not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.147344 4861 scope.go:117] "RemoveContainer" containerID="7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.147949 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d"} err="failed to get container status \"7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\": rpc error: code = NotFound desc = could not find container \"7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\": container with ID starting with 7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d not found: ID does not 
exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.147973 4861 scope.go:117] "RemoveContainer" containerID="6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.148254 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e"} err="failed to get container status \"6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\": rpc error: code = NotFound desc = could not find container \"6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\": container with ID starting with 6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.148275 4861 scope.go:117] "RemoveContainer" containerID="e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.148559 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be"} err="failed to get container status \"e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\": rpc error: code = NotFound desc = could not find container \"e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\": container with ID starting with e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.148590 4861 scope.go:117] "RemoveContainer" containerID="4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.148868 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85"} err="failed to get container status 
\"4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\": rpc error: code = NotFound desc = could not find container \"4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\": container with ID starting with 4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85 not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.148893 4861 scope.go:117] "RemoveContainer" containerID="ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.149108 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a"} err="failed to get container status \"ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\": rpc error: code = NotFound desc = could not find container \"ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\": container with ID starting with ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.149132 4861 scope.go:117] "RemoveContainer" containerID="09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.149362 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861"} err="failed to get container status \"09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861\": rpc error: code = NotFound desc = could not find container \"09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861\": container with ID starting with 09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861 not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.149380 4861 scope.go:117] "RemoveContainer" 
containerID="eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.151724 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512"} err="failed to get container status \"eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512\": rpc error: code = NotFound desc = could not find container \"eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512\": container with ID starting with eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512 not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.151757 4861 scope.go:117] "RemoveContainer" containerID="b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.152012 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61"} err="failed to get container status \"b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\": rpc error: code = NotFound desc = could not find container \"b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\": container with ID starting with b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61 not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.152036 4861 scope.go:117] "RemoveContainer" containerID="13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.152370 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0"} err="failed to get container status \"13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\": rpc error: code = NotFound desc = could 
not find container \"13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\": container with ID starting with 13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0 not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.152400 4861 scope.go:117] "RemoveContainer" containerID="77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.152824 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517"} err="failed to get container status \"77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\": rpc error: code = NotFound desc = could not find container \"77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\": container with ID starting with 77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517 not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.152850 4861 scope.go:117] "RemoveContainer" containerID="7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.153085 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d"} err="failed to get container status \"7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\": rpc error: code = NotFound desc = could not find container \"7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\": container with ID starting with 7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.153107 4861 scope.go:117] "RemoveContainer" containerID="6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 
13:22:21.153568 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e"} err="failed to get container status \"6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\": rpc error: code = NotFound desc = could not find container \"6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\": container with ID starting with 6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.153588 4861 scope.go:117] "RemoveContainer" containerID="e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.153876 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be"} err="failed to get container status \"e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\": rpc error: code = NotFound desc = could not find container \"e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\": container with ID starting with e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.153901 4861 scope.go:117] "RemoveContainer" containerID="4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.154180 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85"} err="failed to get container status \"4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\": rpc error: code = NotFound desc = could not find container \"4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\": container with ID starting with 
4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85 not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.154199 4861 scope.go:117] "RemoveContainer" containerID="ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.154399 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a"} err="failed to get container status \"ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\": rpc error: code = NotFound desc = could not find container \"ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\": container with ID starting with ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.154438 4861 scope.go:117] "RemoveContainer" containerID="09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.155506 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861"} err="failed to get container status \"09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861\": rpc error: code = NotFound desc = could not find container \"09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861\": container with ID starting with 09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861 not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.155530 4861 scope.go:117] "RemoveContainer" containerID="eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.155773 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512"} err="failed to get container status \"eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512\": rpc error: code = NotFound desc = could not find container \"eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512\": container with ID starting with eb5d0a20f493aa7270add5605230a54b42e7b68de170451cef74590e9f5bc512 not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.155801 4861 scope.go:117] "RemoveContainer" containerID="b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.156213 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61"} err="failed to get container status \"b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\": rpc error: code = NotFound desc = could not find container \"b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61\": container with ID starting with b8891f879a56bc7e95b9ba9101255bfb1d55d4326e37fca90bdbbf058770bf61 not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.156228 4861 scope.go:117] "RemoveContainer" containerID="13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.156474 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0"} err="failed to get container status \"13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\": rpc error: code = NotFound desc = could not find container \"13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0\": container with ID starting with 13c8b558c349f33de89d3bfeb9f5471cc66f3177b5e4950778192f9eac2733c0 not found: ID does not 
exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.156490 4861 scope.go:117] "RemoveContainer" containerID="77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.156694 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517"} err="failed to get container status \"77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\": rpc error: code = NotFound desc = could not find container \"77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517\": container with ID starting with 77cddca96d089553627e891d0a3f0068e94921da8c2cf761f7bdc8005916a517 not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.156710 4861 scope.go:117] "RemoveContainer" containerID="7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.156928 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d"} err="failed to get container status \"7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\": rpc error: code = NotFound desc = could not find container \"7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d\": container with ID starting with 7f8c8807122a8fc4719570f3754b83b95fb26f7a8ed5e019734ba4cf074bd87d not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.156944 4861 scope.go:117] "RemoveContainer" containerID="6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.157286 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e"} err="failed to get container status 
\"6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\": rpc error: code = NotFound desc = could not find container \"6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e\": container with ID starting with 6a27a3c77dbf0d439058fc9264b4e432ebffb6ce62698469b90fcb3c7b91850e not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.157322 4861 scope.go:117] "RemoveContainer" containerID="e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.157617 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be"} err="failed to get container status \"e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\": rpc error: code = NotFound desc = could not find container \"e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be\": container with ID starting with e6f3f24df8d88cbc91579c0df371db1a5f08426a68f326de419f357aadc447be not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.157655 4861 scope.go:117] "RemoveContainer" containerID="4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.157975 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85"} err="failed to get container status \"4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\": rpc error: code = NotFound desc = could not find container \"4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85\": container with ID starting with 4f250491afb0f1e8face59cda2ddb067e4b86814927af8998d34d9f2994e9e85 not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.157994 4861 scope.go:117] "RemoveContainer" 
containerID="ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.158221 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a"} err="failed to get container status \"ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\": rpc error: code = NotFound desc = could not find container \"ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a\": container with ID starting with ee2fa94c4291811fa58e14e91010eff012a16fb24c4d630f79db565388b7a49a not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.158241 4861 scope.go:117] "RemoveContainer" containerID="09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.158483 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861"} err="failed to get container status \"09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861\": rpc error: code = NotFound desc = could not find container \"09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861\": container with ID starting with 09b2142166df739cb0360b744a075b4db426bb6950febd78929928a0b6a8a861 not found: ID does not exist" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.837214 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ffskh_1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb/kube-multus/2.log" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.838346 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ffskh_1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb/kube-multus/1.log" Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.838510 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-ffskh" event={"ID":"1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb","Type":"ContainerStarted","Data":"cdd613ac55db51e0e6134c068fbdc0177379f823c55f71195f1b5688e1d1c5da"} Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.844172 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" event={"ID":"3f95f74b-44a5-44ed-9239-0a49c357fee9","Type":"ContainerStarted","Data":"56788b0ea8e7ef2eea7483dd17e283a7a92e2b35297cb66d552f1161c24af60d"} Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.844207 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" event={"ID":"3f95f74b-44a5-44ed-9239-0a49c357fee9","Type":"ContainerStarted","Data":"5853e81308d454784f202d852958974fece12aba865e9f2ef1e329af7bbe0118"} Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.844225 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" event={"ID":"3f95f74b-44a5-44ed-9239-0a49c357fee9","Type":"ContainerStarted","Data":"ab66610495996c35cbe6de99b6c9ec2dd18181644da93aa76a909a9b0b321e76"} Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.844238 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" event={"ID":"3f95f74b-44a5-44ed-9239-0a49c357fee9","Type":"ContainerStarted","Data":"2c7ea9480731844f8db1bee9d8aea2bf373e8c7ddec629886c8165ea702e6211"} Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.844250 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" event={"ID":"3f95f74b-44a5-44ed-9239-0a49c357fee9","Type":"ContainerStarted","Data":"6ee0f07acb45c7235cd9f93d7687471dfc1ac47110867b5e1e2e716aa6636335"} Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.844264 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" 
event={"ID":"3f95f74b-44a5-44ed-9239-0a49c357fee9","Type":"ContainerStarted","Data":"aa4f0083b7efde46c8d7f848dc12cdbd495ac59713fffdafb85c4b3aa427f8ff"} Feb 19 13:22:21 crc kubenswrapper[4861]: I0219 13:22:21.990940 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b4f740d-a1ca-450f-adad-afb42efe0c76" path="/var/lib/kubelet/pods/2b4f740d-a1ca-450f-adad-afb42efe0c76/volumes" Feb 19 13:22:24 crc kubenswrapper[4861]: I0219 13:22:24.872220 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" event={"ID":"3f95f74b-44a5-44ed-9239-0a49c357fee9","Type":"ContainerStarted","Data":"5b3939eb37381935baa15307165356c1f267cba481486b9fd3471dcae9001747"} Feb 19 13:22:25 crc kubenswrapper[4861]: I0219 13:22:25.719645 4861 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 13:22:26 crc kubenswrapper[4861]: I0219 13:22:26.794430 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-kxkfl"] Feb 19 13:22:26 crc kubenswrapper[4861]: I0219 13:22:26.796800 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-kxkfl" Feb 19 13:22:26 crc kubenswrapper[4861]: I0219 13:22:26.799030 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Feb 19 13:22:26 crc kubenswrapper[4861]: I0219 13:22:26.799236 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Feb 19 13:22:26 crc kubenswrapper[4861]: I0219 13:22:26.799740 4861 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-j7gqd" Feb 19 13:22:26 crc kubenswrapper[4861]: I0219 13:22:26.800030 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Feb 19 13:22:26 crc kubenswrapper[4861]: I0219 13:22:26.887675 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" event={"ID":"3f95f74b-44a5-44ed-9239-0a49c357fee9","Type":"ContainerStarted","Data":"164f86531abae22070b6cb88423033cca6dc45bf80042f72c8c7311f6b8599a9"} Feb 19 13:22:26 crc kubenswrapper[4861]: I0219 13:22:26.889250 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:26 crc kubenswrapper[4861]: I0219 13:22:26.889359 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:26 crc kubenswrapper[4861]: I0219 13:22:26.889838 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:26 crc kubenswrapper[4861]: I0219 13:22:26.912757 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4897ab39-1e1b-4631-be09-9b89a965f415-crc-storage\") pod \"crc-storage-crc-kxkfl\" (UID: \"4897ab39-1e1b-4631-be09-9b89a965f415\") " pod="crc-storage/crc-storage-crc-kxkfl" Feb 
19 13:22:26 crc kubenswrapper[4861]: I0219 13:22:26.912828 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4897ab39-1e1b-4631-be09-9b89a965f415-node-mnt\") pod \"crc-storage-crc-kxkfl\" (UID: \"4897ab39-1e1b-4631-be09-9b89a965f415\") " pod="crc-storage/crc-storage-crc-kxkfl" Feb 19 13:22:26 crc kubenswrapper[4861]: I0219 13:22:26.912924 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjc66\" (UniqueName: \"kubernetes.io/projected/4897ab39-1e1b-4631-be09-9b89a965f415-kube-api-access-pjc66\") pod \"crc-storage-crc-kxkfl\" (UID: \"4897ab39-1e1b-4631-be09-9b89a965f415\") " pod="crc-storage/crc-storage-crc-kxkfl" Feb 19 13:22:26 crc kubenswrapper[4861]: I0219 13:22:26.916635 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" podStartSLOduration=6.9166115139999995 podStartE2EDuration="6.916611514s" podCreationTimestamp="2026-02-19 13:22:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:22:26.914880167 +0000 UTC m=+761.575983415" watchObservedRunningTime="2026-02-19 13:22:26.916611514 +0000 UTC m=+761.577714762" Feb 19 13:22:26 crc kubenswrapper[4861]: I0219 13:22:26.919856 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:26 crc kubenswrapper[4861]: I0219 13:22:26.921230 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:27 crc kubenswrapper[4861]: I0219 13:22:27.014524 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4897ab39-1e1b-4631-be09-9b89a965f415-crc-storage\") pod 
\"crc-storage-crc-kxkfl\" (UID: \"4897ab39-1e1b-4631-be09-9b89a965f415\") " pod="crc-storage/crc-storage-crc-kxkfl" Feb 19 13:22:27 crc kubenswrapper[4861]: I0219 13:22:27.014896 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4897ab39-1e1b-4631-be09-9b89a965f415-node-mnt\") pod \"crc-storage-crc-kxkfl\" (UID: \"4897ab39-1e1b-4631-be09-9b89a965f415\") " pod="crc-storage/crc-storage-crc-kxkfl" Feb 19 13:22:27 crc kubenswrapper[4861]: I0219 13:22:27.015021 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjc66\" (UniqueName: \"kubernetes.io/projected/4897ab39-1e1b-4631-be09-9b89a965f415-kube-api-access-pjc66\") pod \"crc-storage-crc-kxkfl\" (UID: \"4897ab39-1e1b-4631-be09-9b89a965f415\") " pod="crc-storage/crc-storage-crc-kxkfl" Feb 19 13:22:27 crc kubenswrapper[4861]: I0219 13:22:27.016193 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4897ab39-1e1b-4631-be09-9b89a965f415-crc-storage\") pod \"crc-storage-crc-kxkfl\" (UID: \"4897ab39-1e1b-4631-be09-9b89a965f415\") " pod="crc-storage/crc-storage-crc-kxkfl" Feb 19 13:22:27 crc kubenswrapper[4861]: I0219 13:22:27.016572 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4897ab39-1e1b-4631-be09-9b89a965f415-node-mnt\") pod \"crc-storage-crc-kxkfl\" (UID: \"4897ab39-1e1b-4631-be09-9b89a965f415\") " pod="crc-storage/crc-storage-crc-kxkfl" Feb 19 13:22:27 crc kubenswrapper[4861]: I0219 13:22:27.036649 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjc66\" (UniqueName: \"kubernetes.io/projected/4897ab39-1e1b-4631-be09-9b89a965f415-kube-api-access-pjc66\") pod \"crc-storage-crc-kxkfl\" (UID: \"4897ab39-1e1b-4631-be09-9b89a965f415\") " pod="crc-storage/crc-storage-crc-kxkfl" Feb 19 13:22:27 crc 
kubenswrapper[4861]: I0219 13:22:27.046341 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-kxkfl"] Feb 19 13:22:27 crc kubenswrapper[4861]: I0219 13:22:27.114542 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kxkfl" Feb 19 13:22:27 crc kubenswrapper[4861]: E0219 13:22:27.142675 4861 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-kxkfl_crc-storage_4897ab39-1e1b-4631-be09-9b89a965f415_0(c71f3f02ea658cdacc052a43fc6db505c8ed110b01c6456f756672e4da999b59): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 13:22:27 crc kubenswrapper[4861]: E0219 13:22:27.142769 4861 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-kxkfl_crc-storage_4897ab39-1e1b-4631-be09-9b89a965f415_0(c71f3f02ea658cdacc052a43fc6db505c8ed110b01c6456f756672e4da999b59): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-kxkfl" Feb 19 13:22:27 crc kubenswrapper[4861]: E0219 13:22:27.142802 4861 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-kxkfl_crc-storage_4897ab39-1e1b-4631-be09-9b89a965f415_0(c71f3f02ea658cdacc052a43fc6db505c8ed110b01c6456f756672e4da999b59): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-kxkfl" Feb 19 13:22:27 crc kubenswrapper[4861]: E0219 13:22:27.142863 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-kxkfl_crc-storage(4897ab39-1e1b-4631-be09-9b89a965f415)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-kxkfl_crc-storage(4897ab39-1e1b-4631-be09-9b89a965f415)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-kxkfl_crc-storage_4897ab39-1e1b-4631-be09-9b89a965f415_0(c71f3f02ea658cdacc052a43fc6db505c8ed110b01c6456f756672e4da999b59): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-kxkfl" podUID="4897ab39-1e1b-4631-be09-9b89a965f415" Feb 19 13:22:27 crc kubenswrapper[4861]: I0219 13:22:27.893716 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kxkfl" Feb 19 13:22:27 crc kubenswrapper[4861]: I0219 13:22:27.894876 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kxkfl" Feb 19 13:22:27 crc kubenswrapper[4861]: E0219 13:22:27.927522 4861 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-kxkfl_crc-storage_4897ab39-1e1b-4631-be09-9b89a965f415_0(f19235d885f6974b10cadba2702d55c1f291d29ab9b34a1711e057b580ce5b18): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 19 13:22:27 crc kubenswrapper[4861]: E0219 13:22:27.927619 4861 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-kxkfl_crc-storage_4897ab39-1e1b-4631-be09-9b89a965f415_0(f19235d885f6974b10cadba2702d55c1f291d29ab9b34a1711e057b580ce5b18): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-kxkfl" Feb 19 13:22:27 crc kubenswrapper[4861]: E0219 13:22:27.927658 4861 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-kxkfl_crc-storage_4897ab39-1e1b-4631-be09-9b89a965f415_0(f19235d885f6974b10cadba2702d55c1f291d29ab9b34a1711e057b580ce5b18): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-kxkfl" Feb 19 13:22:27 crc kubenswrapper[4861]: E0219 13:22:27.927739 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-kxkfl_crc-storage(4897ab39-1e1b-4631-be09-9b89a965f415)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-kxkfl_crc-storage(4897ab39-1e1b-4631-be09-9b89a965f415)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-kxkfl_crc-storage_4897ab39-1e1b-4631-be09-9b89a965f415_0(f19235d885f6974b10cadba2702d55c1f291d29ab9b34a1711e057b580ce5b18): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-kxkfl" podUID="4897ab39-1e1b-4631-be09-9b89a965f415" Feb 19 13:22:40 crc kubenswrapper[4861]: I0219 13:22:40.976669 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-kxkfl" Feb 19 13:22:40 crc kubenswrapper[4861]: I0219 13:22:40.978177 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kxkfl" Feb 19 13:22:41 crc kubenswrapper[4861]: I0219 13:22:41.279646 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-kxkfl"] Feb 19 13:22:41 crc kubenswrapper[4861]: W0219 13:22:41.291852 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4897ab39_1e1b_4631_be09_9b89a965f415.slice/crio-2383f1cc08f157062bbe8b1228998cd00369c777b68f8a8aea1bfd0cc4e60a4e WatchSource:0}: Error finding container 2383f1cc08f157062bbe8b1228998cd00369c777b68f8a8aea1bfd0cc4e60a4e: Status 404 returned error can't find the container with id 2383f1cc08f157062bbe8b1228998cd00369c777b68f8a8aea1bfd0cc4e60a4e Feb 19 13:22:41 crc kubenswrapper[4861]: I0219 13:22:41.294611 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 13:22:42 crc kubenswrapper[4861]: I0219 13:22:42.040557 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-kxkfl" event={"ID":"4897ab39-1e1b-4631-be09-9b89a965f415","Type":"ContainerStarted","Data":"2383f1cc08f157062bbe8b1228998cd00369c777b68f8a8aea1bfd0cc4e60a4e"} Feb 19 13:22:43 crc kubenswrapper[4861]: I0219 13:22:43.052034 4861 generic.go:334] "Generic (PLEG): container finished" podID="4897ab39-1e1b-4631-be09-9b89a965f415" containerID="2ffc7204b00d9bbf31a828b02efa1f14297b7b40801674c5cd5316926e17ce36" exitCode=0 Feb 19 13:22:43 crc kubenswrapper[4861]: I0219 13:22:43.052179 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-kxkfl" event={"ID":"4897ab39-1e1b-4631-be09-9b89a965f415","Type":"ContainerDied","Data":"2ffc7204b00d9bbf31a828b02efa1f14297b7b40801674c5cd5316926e17ce36"} Feb 19 13:22:44 
crc kubenswrapper[4861]: I0219 13:22:44.340853 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kxkfl" Feb 19 13:22:44 crc kubenswrapper[4861]: I0219 13:22:44.372122 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4897ab39-1e1b-4631-be09-9b89a965f415-crc-storage\") pod \"4897ab39-1e1b-4631-be09-9b89a965f415\" (UID: \"4897ab39-1e1b-4631-be09-9b89a965f415\") " Feb 19 13:22:44 crc kubenswrapper[4861]: I0219 13:22:44.372264 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjc66\" (UniqueName: \"kubernetes.io/projected/4897ab39-1e1b-4631-be09-9b89a965f415-kube-api-access-pjc66\") pod \"4897ab39-1e1b-4631-be09-9b89a965f415\" (UID: \"4897ab39-1e1b-4631-be09-9b89a965f415\") " Feb 19 13:22:44 crc kubenswrapper[4861]: I0219 13:22:44.372577 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4897ab39-1e1b-4631-be09-9b89a965f415-node-mnt\") pod \"4897ab39-1e1b-4631-be09-9b89a965f415\" (UID: \"4897ab39-1e1b-4631-be09-9b89a965f415\") " Feb 19 13:22:44 crc kubenswrapper[4861]: I0219 13:22:44.372720 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4897ab39-1e1b-4631-be09-9b89a965f415-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "4897ab39-1e1b-4631-be09-9b89a965f415" (UID: "4897ab39-1e1b-4631-be09-9b89a965f415"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:22:44 crc kubenswrapper[4861]: I0219 13:22:44.373273 4861 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4897ab39-1e1b-4631-be09-9b89a965f415-node-mnt\") on node \"crc\" DevicePath \"\"" Feb 19 13:22:44 crc kubenswrapper[4861]: I0219 13:22:44.378228 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4897ab39-1e1b-4631-be09-9b89a965f415-kube-api-access-pjc66" (OuterVolumeSpecName: "kube-api-access-pjc66") pod "4897ab39-1e1b-4631-be09-9b89a965f415" (UID: "4897ab39-1e1b-4631-be09-9b89a965f415"). InnerVolumeSpecName "kube-api-access-pjc66". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:22:44 crc kubenswrapper[4861]: I0219 13:22:44.395705 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4897ab39-1e1b-4631-be09-9b89a965f415-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "4897ab39-1e1b-4631-be09-9b89a965f415" (UID: "4897ab39-1e1b-4631-be09-9b89a965f415"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:22:44 crc kubenswrapper[4861]: I0219 13:22:44.474850 4861 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4897ab39-1e1b-4631-be09-9b89a965f415-crc-storage\") on node \"crc\" DevicePath \"\"" Feb 19 13:22:44 crc kubenswrapper[4861]: I0219 13:22:44.474979 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjc66\" (UniqueName: \"kubernetes.io/projected/4897ab39-1e1b-4631-be09-9b89a965f415-kube-api-access-pjc66\") on node \"crc\" DevicePath \"\"" Feb 19 13:22:45 crc kubenswrapper[4861]: I0219 13:22:45.068703 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-kxkfl" event={"ID":"4897ab39-1e1b-4631-be09-9b89a965f415","Type":"ContainerDied","Data":"2383f1cc08f157062bbe8b1228998cd00369c777b68f8a8aea1bfd0cc4e60a4e"} Feb 19 13:22:45 crc kubenswrapper[4861]: I0219 13:22:45.068761 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2383f1cc08f157062bbe8b1228998cd00369c777b68f8a8aea1bfd0cc4e60a4e" Feb 19 13:22:45 crc kubenswrapper[4861]: I0219 13:22:45.068818 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-kxkfl" Feb 19 13:22:50 crc kubenswrapper[4861]: I0219 13:22:50.507841 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kfk79" Feb 19 13:22:52 crc kubenswrapper[4861]: I0219 13:22:52.789413 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7"] Feb 19 13:22:52 crc kubenswrapper[4861]: E0219 13:22:52.789707 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4897ab39-1e1b-4631-be09-9b89a965f415" containerName="storage" Feb 19 13:22:52 crc kubenswrapper[4861]: I0219 13:22:52.789725 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="4897ab39-1e1b-4631-be09-9b89a965f415" containerName="storage" Feb 19 13:22:52 crc kubenswrapper[4861]: I0219 13:22:52.789865 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="4897ab39-1e1b-4631-be09-9b89a965f415" containerName="storage" Feb 19 13:22:52 crc kubenswrapper[4861]: I0219 13:22:52.790972 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7" Feb 19 13:22:52 crc kubenswrapper[4861]: I0219 13:22:52.793957 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 13:22:52 crc kubenswrapper[4861]: I0219 13:22:52.809315 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7"] Feb 19 13:22:52 crc kubenswrapper[4861]: I0219 13:22:52.893592 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a45ed361-a230-497f-8a42-60720cbb330b-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7\" (UID: \"a45ed361-a230-497f-8a42-60720cbb330b\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7" Feb 19 13:22:52 crc kubenswrapper[4861]: I0219 13:22:52.893690 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a45ed361-a230-497f-8a42-60720cbb330b-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7\" (UID: \"a45ed361-a230-497f-8a42-60720cbb330b\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7" Feb 19 13:22:52 crc kubenswrapper[4861]: I0219 13:22:52.893740 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjndh\" (UniqueName: \"kubernetes.io/projected/a45ed361-a230-497f-8a42-60720cbb330b-kube-api-access-gjndh\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7\" (UID: \"a45ed361-a230-497f-8a42-60720cbb330b\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7" Feb 19 13:22:52 crc kubenswrapper[4861]: 
I0219 13:22:52.995115 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a45ed361-a230-497f-8a42-60720cbb330b-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7\" (UID: \"a45ed361-a230-497f-8a42-60720cbb330b\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7" Feb 19 13:22:52 crc kubenswrapper[4861]: I0219 13:22:52.995189 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a45ed361-a230-497f-8a42-60720cbb330b-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7\" (UID: \"a45ed361-a230-497f-8a42-60720cbb330b\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7" Feb 19 13:22:52 crc kubenswrapper[4861]: I0219 13:22:52.995223 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjndh\" (UniqueName: \"kubernetes.io/projected/a45ed361-a230-497f-8a42-60720cbb330b-kube-api-access-gjndh\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7\" (UID: \"a45ed361-a230-497f-8a42-60720cbb330b\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7" Feb 19 13:22:52 crc kubenswrapper[4861]: I0219 13:22:52.995866 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a45ed361-a230-497f-8a42-60720cbb330b-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7\" (UID: \"a45ed361-a230-497f-8a42-60720cbb330b\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7" Feb 19 13:22:52 crc kubenswrapper[4861]: I0219 13:22:52.995890 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a45ed361-a230-497f-8a42-60720cbb330b-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7\" (UID: \"a45ed361-a230-497f-8a42-60720cbb330b\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7" Feb 19 13:22:53 crc kubenswrapper[4861]: I0219 13:22:53.016994 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjndh\" (UniqueName: \"kubernetes.io/projected/a45ed361-a230-497f-8a42-60720cbb330b-kube-api-access-gjndh\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7\" (UID: \"a45ed361-a230-497f-8a42-60720cbb330b\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7" Feb 19 13:22:53 crc kubenswrapper[4861]: I0219 13:22:53.150012 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7" Feb 19 13:22:53 crc kubenswrapper[4861]: I0219 13:22:53.400247 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7"] Feb 19 13:22:53 crc kubenswrapper[4861]: I0219 13:22:53.635063 4861 scope.go:117] "RemoveContainer" containerID="4cfa2d69da9a7c9b0d96ca3c642a204c152a426e9aba16467134005fe25c907a" Feb 19 13:22:54 crc kubenswrapper[4861]: I0219 13:22:54.144699 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ffskh_1bcbc137-1ece-4a28-b8f8-5e2f6aa1c3bb/kube-multus/2.log" Feb 19 13:22:54 crc kubenswrapper[4861]: I0219 13:22:54.150139 4861 generic.go:334] "Generic (PLEG): container finished" podID="a45ed361-a230-497f-8a42-60720cbb330b" containerID="067c2cb1130616a4432b79a2d308d44defe0501afc92c8611cc5a2d30db440c1" exitCode=0 Feb 19 13:22:54 crc kubenswrapper[4861]: I0219 13:22:54.150303 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7" event={"ID":"a45ed361-a230-497f-8a42-60720cbb330b","Type":"ContainerDied","Data":"067c2cb1130616a4432b79a2d308d44defe0501afc92c8611cc5a2d30db440c1"} Feb 19 13:22:54 crc kubenswrapper[4861]: I0219 13:22:54.150493 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7" event={"ID":"a45ed361-a230-497f-8a42-60720cbb330b","Type":"ContainerStarted","Data":"a980c01ef13c27a45837768582c986e52f1d5257315118dfa93ce936d02e3a92"} Feb 19 13:22:54 crc kubenswrapper[4861]: I0219 13:22:54.770467 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-slc5h"] Feb 19 13:22:54 crc kubenswrapper[4861]: I0219 13:22:54.772637 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-slc5h" Feb 19 13:22:54 crc kubenswrapper[4861]: I0219 13:22:54.786492 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-slc5h"] Feb 19 13:22:54 crc kubenswrapper[4861]: I0219 13:22:54.818308 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chc8k\" (UniqueName: \"kubernetes.io/projected/5d051557-63da-4285-a7fa-2a6e9eb626e3-kube-api-access-chc8k\") pod \"redhat-operators-slc5h\" (UID: \"5d051557-63da-4285-a7fa-2a6e9eb626e3\") " pod="openshift-marketplace/redhat-operators-slc5h" Feb 19 13:22:54 crc kubenswrapper[4861]: I0219 13:22:54.818406 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d051557-63da-4285-a7fa-2a6e9eb626e3-utilities\") pod \"redhat-operators-slc5h\" (UID: \"5d051557-63da-4285-a7fa-2a6e9eb626e3\") " pod="openshift-marketplace/redhat-operators-slc5h" Feb 19 13:22:54 crc kubenswrapper[4861]: I0219 
13:22:54.818544 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d051557-63da-4285-a7fa-2a6e9eb626e3-catalog-content\") pod \"redhat-operators-slc5h\" (UID: \"5d051557-63da-4285-a7fa-2a6e9eb626e3\") " pod="openshift-marketplace/redhat-operators-slc5h" Feb 19 13:22:54 crc kubenswrapper[4861]: I0219 13:22:54.919524 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chc8k\" (UniqueName: \"kubernetes.io/projected/5d051557-63da-4285-a7fa-2a6e9eb626e3-kube-api-access-chc8k\") pod \"redhat-operators-slc5h\" (UID: \"5d051557-63da-4285-a7fa-2a6e9eb626e3\") " pod="openshift-marketplace/redhat-operators-slc5h" Feb 19 13:22:54 crc kubenswrapper[4861]: I0219 13:22:54.919642 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d051557-63da-4285-a7fa-2a6e9eb626e3-utilities\") pod \"redhat-operators-slc5h\" (UID: \"5d051557-63da-4285-a7fa-2a6e9eb626e3\") " pod="openshift-marketplace/redhat-operators-slc5h" Feb 19 13:22:54 crc kubenswrapper[4861]: I0219 13:22:54.919720 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d051557-63da-4285-a7fa-2a6e9eb626e3-catalog-content\") pod \"redhat-operators-slc5h\" (UID: \"5d051557-63da-4285-a7fa-2a6e9eb626e3\") " pod="openshift-marketplace/redhat-operators-slc5h" Feb 19 13:22:54 crc kubenswrapper[4861]: I0219 13:22:54.920516 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d051557-63da-4285-a7fa-2a6e9eb626e3-utilities\") pod \"redhat-operators-slc5h\" (UID: \"5d051557-63da-4285-a7fa-2a6e9eb626e3\") " pod="openshift-marketplace/redhat-operators-slc5h" Feb 19 13:22:54 crc kubenswrapper[4861]: I0219 13:22:54.920639 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d051557-63da-4285-a7fa-2a6e9eb626e3-catalog-content\") pod \"redhat-operators-slc5h\" (UID: \"5d051557-63da-4285-a7fa-2a6e9eb626e3\") " pod="openshift-marketplace/redhat-operators-slc5h" Feb 19 13:22:54 crc kubenswrapper[4861]: I0219 13:22:54.951578 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chc8k\" (UniqueName: \"kubernetes.io/projected/5d051557-63da-4285-a7fa-2a6e9eb626e3-kube-api-access-chc8k\") pod \"redhat-operators-slc5h\" (UID: \"5d051557-63da-4285-a7fa-2a6e9eb626e3\") " pod="openshift-marketplace/redhat-operators-slc5h" Feb 19 13:22:55 crc kubenswrapper[4861]: I0219 13:22:55.116285 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-slc5h" Feb 19 13:22:55 crc kubenswrapper[4861]: I0219 13:22:55.418192 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-slc5h"] Feb 19 13:22:56 crc kubenswrapper[4861]: I0219 13:22:56.167276 4861 generic.go:334] "Generic (PLEG): container finished" podID="a45ed361-a230-497f-8a42-60720cbb330b" containerID="7663c94d4b8a637dba7d6aaef4a90a9da414a8d635d2558c938d70cc0a80caa1" exitCode=0 Feb 19 13:22:56 crc kubenswrapper[4861]: I0219 13:22:56.167381 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7" event={"ID":"a45ed361-a230-497f-8a42-60720cbb330b","Type":"ContainerDied","Data":"7663c94d4b8a637dba7d6aaef4a90a9da414a8d635d2558c938d70cc0a80caa1"} Feb 19 13:22:56 crc kubenswrapper[4861]: I0219 13:22:56.169455 4861 generic.go:334] "Generic (PLEG): container finished" podID="5d051557-63da-4285-a7fa-2a6e9eb626e3" containerID="1b599a2948150abc4df6a7341345224bba06f15849bcc1c86a8bf32d806b4c97" exitCode=0 Feb 19 13:22:56 crc kubenswrapper[4861]: I0219 
13:22:56.169501 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-slc5h" event={"ID":"5d051557-63da-4285-a7fa-2a6e9eb626e3","Type":"ContainerDied","Data":"1b599a2948150abc4df6a7341345224bba06f15849bcc1c86a8bf32d806b4c97"} Feb 19 13:22:56 crc kubenswrapper[4861]: I0219 13:22:56.169530 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-slc5h" event={"ID":"5d051557-63da-4285-a7fa-2a6e9eb626e3","Type":"ContainerStarted","Data":"2546819c55e793e62a9cb7611fe49598317bcab23c425039386c57db216d65b6"} Feb 19 13:22:57 crc kubenswrapper[4861]: I0219 13:22:57.179573 4861 generic.go:334] "Generic (PLEG): container finished" podID="a45ed361-a230-497f-8a42-60720cbb330b" containerID="9bc867d0bca07eefc635b6800446b8297150b6f6fc19057f26316c81d172393d" exitCode=0 Feb 19 13:22:57 crc kubenswrapper[4861]: I0219 13:22:57.179720 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7" event={"ID":"a45ed361-a230-497f-8a42-60720cbb330b","Type":"ContainerDied","Data":"9bc867d0bca07eefc635b6800446b8297150b6f6fc19057f26316c81d172393d"} Feb 19 13:22:58 crc kubenswrapper[4861]: I0219 13:22:58.191299 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-slc5h" event={"ID":"5d051557-63da-4285-a7fa-2a6e9eb626e3","Type":"ContainerStarted","Data":"fd56e7866567941bdd3ef7797c468d4b5ded35d06bc055c1a4255c26849f385d"} Feb 19 13:22:58 crc kubenswrapper[4861]: I0219 13:22:58.601064 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7" Feb 19 13:22:58 crc kubenswrapper[4861]: I0219 13:22:58.678133 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a45ed361-a230-497f-8a42-60720cbb330b-util\") pod \"a45ed361-a230-497f-8a42-60720cbb330b\" (UID: \"a45ed361-a230-497f-8a42-60720cbb330b\") " Feb 19 13:22:58 crc kubenswrapper[4861]: I0219 13:22:58.678221 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a45ed361-a230-497f-8a42-60720cbb330b-bundle\") pod \"a45ed361-a230-497f-8a42-60720cbb330b\" (UID: \"a45ed361-a230-497f-8a42-60720cbb330b\") " Feb 19 13:22:58 crc kubenswrapper[4861]: I0219 13:22:58.678297 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjndh\" (UniqueName: \"kubernetes.io/projected/a45ed361-a230-497f-8a42-60720cbb330b-kube-api-access-gjndh\") pod \"a45ed361-a230-497f-8a42-60720cbb330b\" (UID: \"a45ed361-a230-497f-8a42-60720cbb330b\") " Feb 19 13:22:58 crc kubenswrapper[4861]: I0219 13:22:58.679539 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a45ed361-a230-497f-8a42-60720cbb330b-bundle" (OuterVolumeSpecName: "bundle") pod "a45ed361-a230-497f-8a42-60720cbb330b" (UID: "a45ed361-a230-497f-8a42-60720cbb330b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:22:58 crc kubenswrapper[4861]: I0219 13:22:58.688688 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a45ed361-a230-497f-8a42-60720cbb330b-kube-api-access-gjndh" (OuterVolumeSpecName: "kube-api-access-gjndh") pod "a45ed361-a230-497f-8a42-60720cbb330b" (UID: "a45ed361-a230-497f-8a42-60720cbb330b"). InnerVolumeSpecName "kube-api-access-gjndh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:22:58 crc kubenswrapper[4861]: I0219 13:22:58.697266 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a45ed361-a230-497f-8a42-60720cbb330b-util" (OuterVolumeSpecName: "util") pod "a45ed361-a230-497f-8a42-60720cbb330b" (UID: "a45ed361-a230-497f-8a42-60720cbb330b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:22:58 crc kubenswrapper[4861]: I0219 13:22:58.779894 4861 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a45ed361-a230-497f-8a42-60720cbb330b-util\") on node \"crc\" DevicePath \"\"" Feb 19 13:22:58 crc kubenswrapper[4861]: I0219 13:22:58.779932 4861 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a45ed361-a230-497f-8a42-60720cbb330b-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:22:58 crc kubenswrapper[4861]: I0219 13:22:58.779947 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjndh\" (UniqueName: \"kubernetes.io/projected/a45ed361-a230-497f-8a42-60720cbb330b-kube-api-access-gjndh\") on node \"crc\" DevicePath \"\"" Feb 19 13:22:59 crc kubenswrapper[4861]: I0219 13:22:59.207984 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7" event={"ID":"a45ed361-a230-497f-8a42-60720cbb330b","Type":"ContainerDied","Data":"a980c01ef13c27a45837768582c986e52f1d5257315118dfa93ce936d02e3a92"} Feb 19 13:22:59 crc kubenswrapper[4861]: I0219 13:22:59.208531 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a980c01ef13c27a45837768582c986e52f1d5257315118dfa93ce936d02e3a92" Feb 19 13:22:59 crc kubenswrapper[4861]: I0219 13:22:59.208156 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7" Feb 19 13:22:59 crc kubenswrapper[4861]: I0219 13:22:59.211922 4861 generic.go:334] "Generic (PLEG): container finished" podID="5d051557-63da-4285-a7fa-2a6e9eb626e3" containerID="fd56e7866567941bdd3ef7797c468d4b5ded35d06bc055c1a4255c26849f385d" exitCode=0 Feb 19 13:22:59 crc kubenswrapper[4861]: I0219 13:22:59.211997 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-slc5h" event={"ID":"5d051557-63da-4285-a7fa-2a6e9eb626e3","Type":"ContainerDied","Data":"fd56e7866567941bdd3ef7797c468d4b5ded35d06bc055c1a4255c26849f385d"} Feb 19 13:23:00 crc kubenswrapper[4861]: I0219 13:23:00.226293 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-slc5h" event={"ID":"5d051557-63da-4285-a7fa-2a6e9eb626e3","Type":"ContainerStarted","Data":"2df9d64a12d55bd42a27e556a7a6bf06b44f9ef2598b853fd2fa33075a112d74"} Feb 19 13:23:00 crc kubenswrapper[4861]: I0219 13:23:00.259784 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-slc5h" podStartSLOduration=2.628507109 podStartE2EDuration="6.259751587s" podCreationTimestamp="2026-02-19 13:22:54 +0000 UTC" firstStartedPulling="2026-02-19 13:22:56.171618358 +0000 UTC m=+790.832721616" lastFinishedPulling="2026-02-19 13:22:59.802862826 +0000 UTC m=+794.463966094" observedRunningTime="2026-02-19 13:23:00.254988192 +0000 UTC m=+794.916091460" watchObservedRunningTime="2026-02-19 13:23:00.259751587 +0000 UTC m=+794.920854865" Feb 19 13:23:03 crc kubenswrapper[4861]: I0219 13:23:03.128403 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-gdfgs"] Feb 19 13:23:03 crc kubenswrapper[4861]: E0219 13:23:03.128998 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a45ed361-a230-497f-8a42-60720cbb330b" 
containerName="util" Feb 19 13:23:03 crc kubenswrapper[4861]: I0219 13:23:03.129016 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a45ed361-a230-497f-8a42-60720cbb330b" containerName="util" Feb 19 13:23:03 crc kubenswrapper[4861]: E0219 13:23:03.129034 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a45ed361-a230-497f-8a42-60720cbb330b" containerName="extract" Feb 19 13:23:03 crc kubenswrapper[4861]: I0219 13:23:03.129042 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a45ed361-a230-497f-8a42-60720cbb330b" containerName="extract" Feb 19 13:23:03 crc kubenswrapper[4861]: E0219 13:23:03.129058 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a45ed361-a230-497f-8a42-60720cbb330b" containerName="pull" Feb 19 13:23:03 crc kubenswrapper[4861]: I0219 13:23:03.129067 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a45ed361-a230-497f-8a42-60720cbb330b" containerName="pull" Feb 19 13:23:03 crc kubenswrapper[4861]: I0219 13:23:03.129183 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a45ed361-a230-497f-8a42-60720cbb330b" containerName="extract" Feb 19 13:23:03 crc kubenswrapper[4861]: I0219 13:23:03.129668 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-gdfgs" Feb 19 13:23:03 crc kubenswrapper[4861]: I0219 13:23:03.136226 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 19 13:23:03 crc kubenswrapper[4861]: I0219 13:23:03.136318 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 19 13:23:03 crc kubenswrapper[4861]: I0219 13:23:03.136985 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-sr796" Feb 19 13:23:03 crc kubenswrapper[4861]: I0219 13:23:03.157276 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-gdfgs"] Feb 19 13:23:03 crc kubenswrapper[4861]: I0219 13:23:03.245507 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjr9t\" (UniqueName: \"kubernetes.io/projected/722cdce9-f694-4bb3-ac19-7bc8ab5a34a7-kube-api-access-gjr9t\") pod \"nmstate-operator-694c9596b7-gdfgs\" (UID: \"722cdce9-f694-4bb3-ac19-7bc8ab5a34a7\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-gdfgs" Feb 19 13:23:03 crc kubenswrapper[4861]: I0219 13:23:03.347284 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjr9t\" (UniqueName: \"kubernetes.io/projected/722cdce9-f694-4bb3-ac19-7bc8ab5a34a7-kube-api-access-gjr9t\") pod \"nmstate-operator-694c9596b7-gdfgs\" (UID: \"722cdce9-f694-4bb3-ac19-7bc8ab5a34a7\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-gdfgs" Feb 19 13:23:03 crc kubenswrapper[4861]: I0219 13:23:03.370913 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjr9t\" (UniqueName: \"kubernetes.io/projected/722cdce9-f694-4bb3-ac19-7bc8ab5a34a7-kube-api-access-gjr9t\") pod \"nmstate-operator-694c9596b7-gdfgs\" (UID: 
\"722cdce9-f694-4bb3-ac19-7bc8ab5a34a7\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-gdfgs" Feb 19 13:23:03 crc kubenswrapper[4861]: I0219 13:23:03.448314 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-gdfgs" Feb 19 13:23:03 crc kubenswrapper[4861]: I0219 13:23:03.834819 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:23:03 crc kubenswrapper[4861]: I0219 13:23:03.835280 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:23:03 crc kubenswrapper[4861]: I0219 13:23:03.905117 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-gdfgs"] Feb 19 13:23:04 crc kubenswrapper[4861]: I0219 13:23:04.264317 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-gdfgs" event={"ID":"722cdce9-f694-4bb3-ac19-7bc8ab5a34a7","Type":"ContainerStarted","Data":"533d6edf130954257fd72f77178f9381eb79deeba70c18f296ac71d1b12d6cab"} Feb 19 13:23:05 crc kubenswrapper[4861]: I0219 13:23:05.121234 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-slc5h" Feb 19 13:23:05 crc kubenswrapper[4861]: I0219 13:23:05.121333 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-slc5h" Feb 19 13:23:06 crc kubenswrapper[4861]: I0219 13:23:06.186880 4861 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-slc5h" podUID="5d051557-63da-4285-a7fa-2a6e9eb626e3" containerName="registry-server" probeResult="failure" output=< Feb 19 13:23:06 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Feb 19 13:23:06 crc kubenswrapper[4861]: > Feb 19 13:23:07 crc kubenswrapper[4861]: I0219 13:23:07.289026 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-gdfgs" event={"ID":"722cdce9-f694-4bb3-ac19-7bc8ab5a34a7","Type":"ContainerStarted","Data":"85ac54c4fc0b9edc393a4a0d1511cd7570f52682d8cc2144d18e5881d5c5fc2b"} Feb 19 13:23:07 crc kubenswrapper[4861]: I0219 13:23:07.320928 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-gdfgs" podStartSLOduration=2.132159252 podStartE2EDuration="4.320907985s" podCreationTimestamp="2026-02-19 13:23:03 +0000 UTC" firstStartedPulling="2026-02-19 13:23:03.915180146 +0000 UTC m=+798.576283374" lastFinishedPulling="2026-02-19 13:23:06.103928869 +0000 UTC m=+800.765032107" observedRunningTime="2026-02-19 13:23:07.318517236 +0000 UTC m=+801.979620555" watchObservedRunningTime="2026-02-19 13:23:07.320907985 +0000 UTC m=+801.982011223" Feb 19 13:23:12 crc kubenswrapper[4861]: I0219 13:23:12.876173 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-c9q9b"] Feb 19 13:23:12 crc kubenswrapper[4861]: I0219 13:23:12.878595 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-c9q9b" Feb 19 13:23:12 crc kubenswrapper[4861]: I0219 13:23:12.885044 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-nrxsk" Feb 19 13:23:12 crc kubenswrapper[4861]: I0219 13:23:12.900612 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx8qk\" (UniqueName: \"kubernetes.io/projected/e4ae8c99-a3d9-40f2-9c52-67ca6ff8ec9c-kube-api-access-tx8qk\") pod \"nmstate-metrics-58c85c668d-c9q9b\" (UID: \"e4ae8c99-a3d9-40f2-9c52-67ca6ff8ec9c\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-c9q9b" Feb 19 13:23:12 crc kubenswrapper[4861]: I0219 13:23:12.901528 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-rz7x7"] Feb 19 13:23:12 crc kubenswrapper[4861]: I0219 13:23:12.903041 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-rz7x7" Feb 19 13:23:12 crc kubenswrapper[4861]: I0219 13:23:12.907243 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-c9q9b"] Feb 19 13:23:12 crc kubenswrapper[4861]: I0219 13:23:12.925971 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 19 13:23:12 crc kubenswrapper[4861]: I0219 13:23:12.936552 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-rz7x7"] Feb 19 13:23:12 crc kubenswrapper[4861]: I0219 13:23:12.941697 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-km6kf"] Feb 19 13:23:12 crc kubenswrapper[4861]: I0219 13:23:12.942615 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-km6kf" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.002086 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ead34d08-9b3b-4500-b146-907e75d3ae4c-ovs-socket\") pod \"nmstate-handler-km6kf\" (UID: \"ead34d08-9b3b-4500-b146-907e75d3ae4c\") " pod="openshift-nmstate/nmstate-handler-km6kf" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.002152 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ead34d08-9b3b-4500-b146-907e75d3ae4c-dbus-socket\") pod \"nmstate-handler-km6kf\" (UID: \"ead34d08-9b3b-4500-b146-907e75d3ae4c\") " pod="openshift-nmstate/nmstate-handler-km6kf" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.002188 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx8qk\" (UniqueName: \"kubernetes.io/projected/e4ae8c99-a3d9-40f2-9c52-67ca6ff8ec9c-kube-api-access-tx8qk\") pod \"nmstate-metrics-58c85c668d-c9q9b\" (UID: \"e4ae8c99-a3d9-40f2-9c52-67ca6ff8ec9c\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-c9q9b" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.002213 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ead34d08-9b3b-4500-b146-907e75d3ae4c-nmstate-lock\") pod \"nmstate-handler-km6kf\" (UID: \"ead34d08-9b3b-4500-b146-907e75d3ae4c\") " pod="openshift-nmstate/nmstate-handler-km6kf" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.002259 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt97v\" (UniqueName: \"kubernetes.io/projected/3a773c5d-b21b-4a8b-b1af-16c2258201d3-kube-api-access-mt97v\") pod 
\"nmstate-webhook-866bcb46dc-rz7x7\" (UID: \"3a773c5d-b21b-4a8b-b1af-16c2258201d3\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-rz7x7" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.002283 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsklj\" (UniqueName: \"kubernetes.io/projected/ead34d08-9b3b-4500-b146-907e75d3ae4c-kube-api-access-hsklj\") pod \"nmstate-handler-km6kf\" (UID: \"ead34d08-9b3b-4500-b146-907e75d3ae4c\") " pod="openshift-nmstate/nmstate-handler-km6kf" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.002306 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3a773c5d-b21b-4a8b-b1af-16c2258201d3-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-rz7x7\" (UID: \"3a773c5d-b21b-4a8b-b1af-16c2258201d3\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-rz7x7" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.023055 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx8qk\" (UniqueName: \"kubernetes.io/projected/e4ae8c99-a3d9-40f2-9c52-67ca6ff8ec9c-kube-api-access-tx8qk\") pod \"nmstate-metrics-58c85c668d-c9q9b\" (UID: \"e4ae8c99-a3d9-40f2-9c52-67ca6ff8ec9c\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-c9q9b" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.054550 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dpxkn"] Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.055826 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dpxkn" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.058021 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.058282 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.058721 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-96hqg" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.099854 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dpxkn"] Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.103496 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ead34d08-9b3b-4500-b146-907e75d3ae4c-nmstate-lock\") pod \"nmstate-handler-km6kf\" (UID: \"ead34d08-9b3b-4500-b146-907e75d3ae4c\") " pod="openshift-nmstate/nmstate-handler-km6kf" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.103573 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt97v\" (UniqueName: \"kubernetes.io/projected/3a773c5d-b21b-4a8b-b1af-16c2258201d3-kube-api-access-mt97v\") pod \"nmstate-webhook-866bcb46dc-rz7x7\" (UID: \"3a773c5d-b21b-4a8b-b1af-16c2258201d3\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-rz7x7" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.103602 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsklj\" (UniqueName: \"kubernetes.io/projected/ead34d08-9b3b-4500-b146-907e75d3ae4c-kube-api-access-hsklj\") pod \"nmstate-handler-km6kf\" (UID: \"ead34d08-9b3b-4500-b146-907e75d3ae4c\") " pod="openshift-nmstate/nmstate-handler-km6kf" Feb 19 13:23:13 
crc kubenswrapper[4861]: I0219 13:23:13.103631 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3a773c5d-b21b-4a8b-b1af-16c2258201d3-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-rz7x7\" (UID: \"3a773c5d-b21b-4a8b-b1af-16c2258201d3\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-rz7x7" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.103672 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ead34d08-9b3b-4500-b146-907e75d3ae4c-ovs-socket\") pod \"nmstate-handler-km6kf\" (UID: \"ead34d08-9b3b-4500-b146-907e75d3ae4c\") " pod="openshift-nmstate/nmstate-handler-km6kf" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.103703 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ead34d08-9b3b-4500-b146-907e75d3ae4c-dbus-socket\") pod \"nmstate-handler-km6kf\" (UID: \"ead34d08-9b3b-4500-b146-907e75d3ae4c\") " pod="openshift-nmstate/nmstate-handler-km6kf" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.103961 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ead34d08-9b3b-4500-b146-907e75d3ae4c-dbus-socket\") pod \"nmstate-handler-km6kf\" (UID: \"ead34d08-9b3b-4500-b146-907e75d3ae4c\") " pod="openshift-nmstate/nmstate-handler-km6kf" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.104012 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ead34d08-9b3b-4500-b146-907e75d3ae4c-nmstate-lock\") pod \"nmstate-handler-km6kf\" (UID: \"ead34d08-9b3b-4500-b146-907e75d3ae4c\") " pod="openshift-nmstate/nmstate-handler-km6kf" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.104042 4861 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ead34d08-9b3b-4500-b146-907e75d3ae4c-ovs-socket\") pod \"nmstate-handler-km6kf\" (UID: \"ead34d08-9b3b-4500-b146-907e75d3ae4c\") " pod="openshift-nmstate/nmstate-handler-km6kf" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.114300 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3a773c5d-b21b-4a8b-b1af-16c2258201d3-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-rz7x7\" (UID: \"3a773c5d-b21b-4a8b-b1af-16c2258201d3\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-rz7x7" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.126287 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt97v\" (UniqueName: \"kubernetes.io/projected/3a773c5d-b21b-4a8b-b1af-16c2258201d3-kube-api-access-mt97v\") pod \"nmstate-webhook-866bcb46dc-rz7x7\" (UID: \"3a773c5d-b21b-4a8b-b1af-16c2258201d3\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-rz7x7" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.132948 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsklj\" (UniqueName: \"kubernetes.io/projected/ead34d08-9b3b-4500-b146-907e75d3ae4c-kube-api-access-hsklj\") pod \"nmstate-handler-km6kf\" (UID: \"ead34d08-9b3b-4500-b146-907e75d3ae4c\") " pod="openshift-nmstate/nmstate-handler-km6kf" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.205693 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-978n2\" (UniqueName: \"kubernetes.io/projected/ebc43836-51c1-432a-89b7-a11307e4e246-kube-api-access-978n2\") pod \"nmstate-console-plugin-5c78fc5d65-dpxkn\" (UID: \"ebc43836-51c1-432a-89b7-a11307e4e246\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dpxkn" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.206040 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ebc43836-51c1-432a-89b7-a11307e4e246-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-dpxkn\" (UID: \"ebc43836-51c1-432a-89b7-a11307e4e246\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dpxkn" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.206180 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ebc43836-51c1-432a-89b7-a11307e4e246-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-dpxkn\" (UID: \"ebc43836-51c1-432a-89b7-a11307e4e246\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dpxkn" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.234797 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-c9q9b" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.242893 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-rz7x7" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.260304 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-km6kf" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.306637 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ebc43836-51c1-432a-89b7-a11307e4e246-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-dpxkn\" (UID: \"ebc43836-51c1-432a-89b7-a11307e4e246\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dpxkn" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.306715 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ebc43836-51c1-432a-89b7-a11307e4e246-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-dpxkn\" (UID: \"ebc43836-51c1-432a-89b7-a11307e4e246\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dpxkn" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.306750 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-978n2\" (UniqueName: \"kubernetes.io/projected/ebc43836-51c1-432a-89b7-a11307e4e246-kube-api-access-978n2\") pod \"nmstate-console-plugin-5c78fc5d65-dpxkn\" (UID: \"ebc43836-51c1-432a-89b7-a11307e4e246\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dpxkn" Feb 19 13:23:13 crc kubenswrapper[4861]: E0219 13:23:13.307123 4861 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.307966 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ebc43836-51c1-432a-89b7-a11307e4e246-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-dpxkn\" (UID: \"ebc43836-51c1-432a-89b7-a11307e4e246\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dpxkn" Feb 19 13:23:13 crc kubenswrapper[4861]: E0219 13:23:13.309317 4861 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebc43836-51c1-432a-89b7-a11307e4e246-plugin-serving-cert podName:ebc43836-51c1-432a-89b7-a11307e4e246 nodeName:}" failed. No retries permitted until 2026-02-19 13:23:13.807213123 +0000 UTC m=+808.468316351 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/ebc43836-51c1-432a-89b7-a11307e4e246-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-dpxkn" (UID: "ebc43836-51c1-432a-89b7-a11307e4e246") : secret "plugin-serving-cert" not found Feb 19 13:23:13 crc kubenswrapper[4861]: W0219 13:23:13.316053 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podead34d08_9b3b_4500_b146_907e75d3ae4c.slice/crio-43307f78ba74c91cb845e838a33a341015eadff2272f8d83640b016a497593a4 WatchSource:0}: Error finding container 43307f78ba74c91cb845e838a33a341015eadff2272f8d83640b016a497593a4: Status 404 returned error can't find the container with id 43307f78ba74c91cb845e838a33a341015eadff2272f8d83640b016a497593a4 Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.328919 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-978n2\" (UniqueName: \"kubernetes.io/projected/ebc43836-51c1-432a-89b7-a11307e4e246-kube-api-access-978n2\") pod \"nmstate-console-plugin-5c78fc5d65-dpxkn\" (UID: \"ebc43836-51c1-432a-89b7-a11307e4e246\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dpxkn" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.336203 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-km6kf" event={"ID":"ead34d08-9b3b-4500-b146-907e75d3ae4c","Type":"ContainerStarted","Data":"43307f78ba74c91cb845e838a33a341015eadff2272f8d83640b016a497593a4"} Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.363289 4861 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-console/console-8f8f988d8-frf2q"] Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.365306 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-8f8f988d8-frf2q" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.395937 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-8f8f988d8-frf2q"] Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.510218 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/06335bd1-ba91-4bf1-8e57-0d28bd7a7677-service-ca\") pod \"console-8f8f988d8-frf2q\" (UID: \"06335bd1-ba91-4bf1-8e57-0d28bd7a7677\") " pod="openshift-console/console-8f8f988d8-frf2q" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.510268 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06335bd1-ba91-4bf1-8e57-0d28bd7a7677-trusted-ca-bundle\") pod \"console-8f8f988d8-frf2q\" (UID: \"06335bd1-ba91-4bf1-8e57-0d28bd7a7677\") " pod="openshift-console/console-8f8f988d8-frf2q" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.510318 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/06335bd1-ba91-4bf1-8e57-0d28bd7a7677-oauth-serving-cert\") pod \"console-8f8f988d8-frf2q\" (UID: \"06335bd1-ba91-4bf1-8e57-0d28bd7a7677\") " pod="openshift-console/console-8f8f988d8-frf2q" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.510371 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/06335bd1-ba91-4bf1-8e57-0d28bd7a7677-console-config\") pod \"console-8f8f988d8-frf2q\" (UID: \"06335bd1-ba91-4bf1-8e57-0d28bd7a7677\") " 
pod="openshift-console/console-8f8f988d8-frf2q" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.510445 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/06335bd1-ba91-4bf1-8e57-0d28bd7a7677-console-oauth-config\") pod \"console-8f8f988d8-frf2q\" (UID: \"06335bd1-ba91-4bf1-8e57-0d28bd7a7677\") " pod="openshift-console/console-8f8f988d8-frf2q" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.510487 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/06335bd1-ba91-4bf1-8e57-0d28bd7a7677-console-serving-cert\") pod \"console-8f8f988d8-frf2q\" (UID: \"06335bd1-ba91-4bf1-8e57-0d28bd7a7677\") " pod="openshift-console/console-8f8f988d8-frf2q" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.510503 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnw24\" (UniqueName: \"kubernetes.io/projected/06335bd1-ba91-4bf1-8e57-0d28bd7a7677-kube-api-access-bnw24\") pod \"console-8f8f988d8-frf2q\" (UID: \"06335bd1-ba91-4bf1-8e57-0d28bd7a7677\") " pod="openshift-console/console-8f8f988d8-frf2q" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.583568 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-c9q9b"] Feb 19 13:23:13 crc kubenswrapper[4861]: W0219 13:23:13.589781 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4ae8c99_a3d9_40f2_9c52_67ca6ff8ec9c.slice/crio-71a620cc335b8d243f43e124b6944b7b0beadd42b1fccb07abcf0650e2a5a5ac WatchSource:0}: Error finding container 71a620cc335b8d243f43e124b6944b7b0beadd42b1fccb07abcf0650e2a5a5ac: Status 404 returned error can't find the container with id 
71a620cc335b8d243f43e124b6944b7b0beadd42b1fccb07abcf0650e2a5a5ac Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.611698 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/06335bd1-ba91-4bf1-8e57-0d28bd7a7677-console-config\") pod \"console-8f8f988d8-frf2q\" (UID: \"06335bd1-ba91-4bf1-8e57-0d28bd7a7677\") " pod="openshift-console/console-8f8f988d8-frf2q" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.611771 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/06335bd1-ba91-4bf1-8e57-0d28bd7a7677-console-oauth-config\") pod \"console-8f8f988d8-frf2q\" (UID: \"06335bd1-ba91-4bf1-8e57-0d28bd7a7677\") " pod="openshift-console/console-8f8f988d8-frf2q" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.612973 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/06335bd1-ba91-4bf1-8e57-0d28bd7a7677-console-serving-cert\") pod \"console-8f8f988d8-frf2q\" (UID: \"06335bd1-ba91-4bf1-8e57-0d28bd7a7677\") " pod="openshift-console/console-8f8f988d8-frf2q" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.612686 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/06335bd1-ba91-4bf1-8e57-0d28bd7a7677-console-config\") pod \"console-8f8f988d8-frf2q\" (UID: \"06335bd1-ba91-4bf1-8e57-0d28bd7a7677\") " pod="openshift-console/console-8f8f988d8-frf2q" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.613034 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnw24\" (UniqueName: \"kubernetes.io/projected/06335bd1-ba91-4bf1-8e57-0d28bd7a7677-kube-api-access-bnw24\") pod \"console-8f8f988d8-frf2q\" (UID: \"06335bd1-ba91-4bf1-8e57-0d28bd7a7677\") " 
pod="openshift-console/console-8f8f988d8-frf2q" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.613221 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/06335bd1-ba91-4bf1-8e57-0d28bd7a7677-service-ca\") pod \"console-8f8f988d8-frf2q\" (UID: \"06335bd1-ba91-4bf1-8e57-0d28bd7a7677\") " pod="openshift-console/console-8f8f988d8-frf2q" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.613255 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06335bd1-ba91-4bf1-8e57-0d28bd7a7677-trusted-ca-bundle\") pod \"console-8f8f988d8-frf2q\" (UID: \"06335bd1-ba91-4bf1-8e57-0d28bd7a7677\") " pod="openshift-console/console-8f8f988d8-frf2q" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.613302 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/06335bd1-ba91-4bf1-8e57-0d28bd7a7677-oauth-serving-cert\") pod \"console-8f8f988d8-frf2q\" (UID: \"06335bd1-ba91-4bf1-8e57-0d28bd7a7677\") " pod="openshift-console/console-8f8f988d8-frf2q" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.613827 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/06335bd1-ba91-4bf1-8e57-0d28bd7a7677-service-ca\") pod \"console-8f8f988d8-frf2q\" (UID: \"06335bd1-ba91-4bf1-8e57-0d28bd7a7677\") " pod="openshift-console/console-8f8f988d8-frf2q" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.614263 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/06335bd1-ba91-4bf1-8e57-0d28bd7a7677-oauth-serving-cert\") pod \"console-8f8f988d8-frf2q\" (UID: \"06335bd1-ba91-4bf1-8e57-0d28bd7a7677\") " pod="openshift-console/console-8f8f988d8-frf2q" Feb 19 13:23:13 crc 
kubenswrapper[4861]: I0219 13:23:13.615680 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06335bd1-ba91-4bf1-8e57-0d28bd7a7677-trusted-ca-bundle\") pod \"console-8f8f988d8-frf2q\" (UID: \"06335bd1-ba91-4bf1-8e57-0d28bd7a7677\") " pod="openshift-console/console-8f8f988d8-frf2q" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.616615 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/06335bd1-ba91-4bf1-8e57-0d28bd7a7677-console-oauth-config\") pod \"console-8f8f988d8-frf2q\" (UID: \"06335bd1-ba91-4bf1-8e57-0d28bd7a7677\") " pod="openshift-console/console-8f8f988d8-frf2q" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.617952 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/06335bd1-ba91-4bf1-8e57-0d28bd7a7677-console-serving-cert\") pod \"console-8f8f988d8-frf2q\" (UID: \"06335bd1-ba91-4bf1-8e57-0d28bd7a7677\") " pod="openshift-console/console-8f8f988d8-frf2q" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.629496 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnw24\" (UniqueName: \"kubernetes.io/projected/06335bd1-ba91-4bf1-8e57-0d28bd7a7677-kube-api-access-bnw24\") pod \"console-8f8f988d8-frf2q\" (UID: \"06335bd1-ba91-4bf1-8e57-0d28bd7a7677\") " pod="openshift-console/console-8f8f988d8-frf2q" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.724644 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-8f8f988d8-frf2q" Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.778183 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-rz7x7"] Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.816953 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ebc43836-51c1-432a-89b7-a11307e4e246-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-dpxkn\" (UID: \"ebc43836-51c1-432a-89b7-a11307e4e246\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dpxkn" Feb 19 13:23:13 crc kubenswrapper[4861]: E0219 13:23:13.817171 4861 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 19 13:23:13 crc kubenswrapper[4861]: E0219 13:23:13.817271 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebc43836-51c1-432a-89b7-a11307e4e246-plugin-serving-cert podName:ebc43836-51c1-432a-89b7-a11307e4e246 nodeName:}" failed. No retries permitted until 2026-02-19 13:23:14.817243219 +0000 UTC m=+809.478346477 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/ebc43836-51c1-432a-89b7-a11307e4e246-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-dpxkn" (UID: "ebc43836-51c1-432a-89b7-a11307e4e246") : secret "plugin-serving-cert" not found Feb 19 13:23:13 crc kubenswrapper[4861]: I0219 13:23:13.973905 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-8f8f988d8-frf2q"] Feb 19 13:23:13 crc kubenswrapper[4861]: W0219 13:23:13.986449 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06335bd1_ba91_4bf1_8e57_0d28bd7a7677.slice/crio-b5f6315557f87a0eec6a4679210045dbe7cacc3829a0130f1f05e1d6196fde76 WatchSource:0}: Error finding container b5f6315557f87a0eec6a4679210045dbe7cacc3829a0130f1f05e1d6196fde76: Status 404 returned error can't find the container with id b5f6315557f87a0eec6a4679210045dbe7cacc3829a0130f1f05e1d6196fde76 Feb 19 13:23:14 crc kubenswrapper[4861]: I0219 13:23:14.346745 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-rz7x7" event={"ID":"3a773c5d-b21b-4a8b-b1af-16c2258201d3","Type":"ContainerStarted","Data":"5c35a40714a421f2680920d3a782122b5575889822c82eb3d2eb394f311a0624"} Feb 19 13:23:14 crc kubenswrapper[4861]: I0219 13:23:14.350912 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8f8f988d8-frf2q" event={"ID":"06335bd1-ba91-4bf1-8e57-0d28bd7a7677","Type":"ContainerStarted","Data":"b493e1eca1eb21f3476a4c23593d7108c30d1499faf64b645cf00bd7b79a335f"} Feb 19 13:23:14 crc kubenswrapper[4861]: I0219 13:23:14.351040 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8f8f988d8-frf2q" event={"ID":"06335bd1-ba91-4bf1-8e57-0d28bd7a7677","Type":"ContainerStarted","Data":"b5f6315557f87a0eec6a4679210045dbe7cacc3829a0130f1f05e1d6196fde76"} Feb 19 13:23:14 crc 
kubenswrapper[4861]: I0219 13:23:14.353455 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-c9q9b" event={"ID":"e4ae8c99-a3d9-40f2-9c52-67ca6ff8ec9c","Type":"ContainerStarted","Data":"71a620cc335b8d243f43e124b6944b7b0beadd42b1fccb07abcf0650e2a5a5ac"} Feb 19 13:23:14 crc kubenswrapper[4861]: I0219 13:23:14.381811 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-8f8f988d8-frf2q" podStartSLOduration=1.381781996 podStartE2EDuration="1.381781996s" podCreationTimestamp="2026-02-19 13:23:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:23:14.377754317 +0000 UTC m=+809.038857555" watchObservedRunningTime="2026-02-19 13:23:14.381781996 +0000 UTC m=+809.042885234" Feb 19 13:23:14 crc kubenswrapper[4861]: I0219 13:23:14.841759 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ebc43836-51c1-432a-89b7-a11307e4e246-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-dpxkn\" (UID: \"ebc43836-51c1-432a-89b7-a11307e4e246\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dpxkn" Feb 19 13:23:14 crc kubenswrapper[4861]: I0219 13:23:14.850346 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ebc43836-51c1-432a-89b7-a11307e4e246-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-dpxkn\" (UID: \"ebc43836-51c1-432a-89b7-a11307e4e246\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dpxkn" Feb 19 13:23:14 crc kubenswrapper[4861]: I0219 13:23:14.871156 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dpxkn" Feb 19 13:23:15 crc kubenswrapper[4861]: I0219 13:23:15.117146 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dpxkn"] Feb 19 13:23:15 crc kubenswrapper[4861]: I0219 13:23:15.159711 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-slc5h" Feb 19 13:23:15 crc kubenswrapper[4861]: I0219 13:23:15.205942 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-slc5h" Feb 19 13:23:15 crc kubenswrapper[4861]: I0219 13:23:15.404109 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-slc5h"] Feb 19 13:23:15 crc kubenswrapper[4861]: W0219 13:23:15.827896 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebc43836_51c1_432a_89b7_a11307e4e246.slice/crio-a65e4a66cfca64a365c61a5d5cfa09d4ca57307f68a11fa769ca49d50d2f620f WatchSource:0}: Error finding container a65e4a66cfca64a365c61a5d5cfa09d4ca57307f68a11fa769ca49d50d2f620f: Status 404 returned error can't find the container with id a65e4a66cfca64a365c61a5d5cfa09d4ca57307f68a11fa769ca49d50d2f620f Feb 19 13:23:16 crc kubenswrapper[4861]: I0219 13:23:16.372546 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-km6kf" event={"ID":"ead34d08-9b3b-4500-b146-907e75d3ae4c","Type":"ContainerStarted","Data":"c0568a8facfbc7fec9f1c0b259bf208081dde7373852de5809c4ece437ada21c"} Feb 19 13:23:16 crc kubenswrapper[4861]: I0219 13:23:16.373078 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-km6kf" Feb 19 13:23:16 crc kubenswrapper[4861]: I0219 13:23:16.375378 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-webhook-866bcb46dc-rz7x7" event={"ID":"3a773c5d-b21b-4a8b-b1af-16c2258201d3","Type":"ContainerStarted","Data":"7ecbc89bfd8dc4ec6c996cb5d72d2c9d92a949cc974ef3a36cf263836ef14cf4"} Feb 19 13:23:16 crc kubenswrapper[4861]: I0219 13:23:16.376050 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-rz7x7" Feb 19 13:23:16 crc kubenswrapper[4861]: I0219 13:23:16.380783 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-c9q9b" event={"ID":"e4ae8c99-a3d9-40f2-9c52-67ca6ff8ec9c","Type":"ContainerStarted","Data":"9a04f8714f3fa9880af13e3ff5169af7ce3cf1c4df239b9a5ffbe70e3c2b4d0d"} Feb 19 13:23:16 crc kubenswrapper[4861]: I0219 13:23:16.384079 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-slc5h" podUID="5d051557-63da-4285-a7fa-2a6e9eb626e3" containerName="registry-server" containerID="cri-o://2df9d64a12d55bd42a27e556a7a6bf06b44f9ef2598b853fd2fa33075a112d74" gracePeriod=2 Feb 19 13:23:16 crc kubenswrapper[4861]: I0219 13:23:16.384077 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dpxkn" event={"ID":"ebc43836-51c1-432a-89b7-a11307e4e246","Type":"ContainerStarted","Data":"a65e4a66cfca64a365c61a5d5cfa09d4ca57307f68a11fa769ca49d50d2f620f"} Feb 19 13:23:16 crc kubenswrapper[4861]: I0219 13:23:16.394278 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-km6kf" podStartSLOduration=1.722353675 podStartE2EDuration="4.39425869s" podCreationTimestamp="2026-02-19 13:23:12 +0000 UTC" firstStartedPulling="2026-02-19 13:23:13.32392895 +0000 UTC m=+807.985032178" lastFinishedPulling="2026-02-19 13:23:15.995833955 +0000 UTC m=+810.656937193" observedRunningTime="2026-02-19 13:23:16.390389006 +0000 UTC m=+811.051492234" 
watchObservedRunningTime="2026-02-19 13:23:16.39425869 +0000 UTC m=+811.055361928" Feb 19 13:23:16 crc kubenswrapper[4861]: I0219 13:23:16.413480 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-rz7x7" podStartSLOduration=2.190308616 podStartE2EDuration="4.413458238s" podCreationTimestamp="2026-02-19 13:23:12 +0000 UTC" firstStartedPulling="2026-02-19 13:23:13.791084421 +0000 UTC m=+808.452187699" lastFinishedPulling="2026-02-19 13:23:16.014234093 +0000 UTC m=+810.675337321" observedRunningTime="2026-02-19 13:23:16.410895456 +0000 UTC m=+811.071998694" watchObservedRunningTime="2026-02-19 13:23:16.413458238 +0000 UTC m=+811.074561466" Feb 19 13:23:16 crc kubenswrapper[4861]: I0219 13:23:16.753466 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-slc5h" Feb 19 13:23:16 crc kubenswrapper[4861]: I0219 13:23:16.785271 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chc8k\" (UniqueName: \"kubernetes.io/projected/5d051557-63da-4285-a7fa-2a6e9eb626e3-kube-api-access-chc8k\") pod \"5d051557-63da-4285-a7fa-2a6e9eb626e3\" (UID: \"5d051557-63da-4285-a7fa-2a6e9eb626e3\") " Feb 19 13:23:16 crc kubenswrapper[4861]: I0219 13:23:16.785314 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d051557-63da-4285-a7fa-2a6e9eb626e3-utilities\") pod \"5d051557-63da-4285-a7fa-2a6e9eb626e3\" (UID: \"5d051557-63da-4285-a7fa-2a6e9eb626e3\") " Feb 19 13:23:16 crc kubenswrapper[4861]: I0219 13:23:16.785451 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d051557-63da-4285-a7fa-2a6e9eb626e3-catalog-content\") pod \"5d051557-63da-4285-a7fa-2a6e9eb626e3\" (UID: \"5d051557-63da-4285-a7fa-2a6e9eb626e3\") " Feb 19 13:23:16 
crc kubenswrapper[4861]: I0219 13:23:16.786258 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d051557-63da-4285-a7fa-2a6e9eb626e3-utilities" (OuterVolumeSpecName: "utilities") pod "5d051557-63da-4285-a7fa-2a6e9eb626e3" (UID: "5d051557-63da-4285-a7fa-2a6e9eb626e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:23:16 crc kubenswrapper[4861]: I0219 13:23:16.798014 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d051557-63da-4285-a7fa-2a6e9eb626e3-kube-api-access-chc8k" (OuterVolumeSpecName: "kube-api-access-chc8k") pod "5d051557-63da-4285-a7fa-2a6e9eb626e3" (UID: "5d051557-63da-4285-a7fa-2a6e9eb626e3"). InnerVolumeSpecName "kube-api-access-chc8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:23:16 crc kubenswrapper[4861]: I0219 13:23:16.886947 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chc8k\" (UniqueName: \"kubernetes.io/projected/5d051557-63da-4285-a7fa-2a6e9eb626e3-kube-api-access-chc8k\") on node \"crc\" DevicePath \"\"" Feb 19 13:23:16 crc kubenswrapper[4861]: I0219 13:23:16.886989 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d051557-63da-4285-a7fa-2a6e9eb626e3-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:23:16 crc kubenswrapper[4861]: I0219 13:23:16.912082 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d051557-63da-4285-a7fa-2a6e9eb626e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d051557-63da-4285-a7fa-2a6e9eb626e3" (UID: "5d051557-63da-4285-a7fa-2a6e9eb626e3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:23:16 crc kubenswrapper[4861]: I0219 13:23:16.988501 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d051557-63da-4285-a7fa-2a6e9eb626e3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:23:17 crc kubenswrapper[4861]: I0219 13:23:17.394047 4861 generic.go:334] "Generic (PLEG): container finished" podID="5d051557-63da-4285-a7fa-2a6e9eb626e3" containerID="2df9d64a12d55bd42a27e556a7a6bf06b44f9ef2598b853fd2fa33075a112d74" exitCode=0 Feb 19 13:23:17 crc kubenswrapper[4861]: I0219 13:23:17.394212 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-slc5h" event={"ID":"5d051557-63da-4285-a7fa-2a6e9eb626e3","Type":"ContainerDied","Data":"2df9d64a12d55bd42a27e556a7a6bf06b44f9ef2598b853fd2fa33075a112d74"} Feb 19 13:23:17 crc kubenswrapper[4861]: I0219 13:23:17.395534 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-slc5h" event={"ID":"5d051557-63da-4285-a7fa-2a6e9eb626e3","Type":"ContainerDied","Data":"2546819c55e793e62a9cb7611fe49598317bcab23c425039386c57db216d65b6"} Feb 19 13:23:17 crc kubenswrapper[4861]: I0219 13:23:17.395569 4861 scope.go:117] "RemoveContainer" containerID="2df9d64a12d55bd42a27e556a7a6bf06b44f9ef2598b853fd2fa33075a112d74" Feb 19 13:23:17 crc kubenswrapper[4861]: I0219 13:23:17.394328 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-slc5h" Feb 19 13:23:17 crc kubenswrapper[4861]: I0219 13:23:17.430150 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-slc5h"] Feb 19 13:23:17 crc kubenswrapper[4861]: I0219 13:23:17.437024 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-slc5h"] Feb 19 13:23:17 crc kubenswrapper[4861]: I0219 13:23:17.762829 4861 scope.go:117] "RemoveContainer" containerID="fd56e7866567941bdd3ef7797c468d4b5ded35d06bc055c1a4255c26849f385d" Feb 19 13:23:17 crc kubenswrapper[4861]: I0219 13:23:17.782658 4861 scope.go:117] "RemoveContainer" containerID="1b599a2948150abc4df6a7341345224bba06f15849bcc1c86a8bf32d806b4c97" Feb 19 13:23:17 crc kubenswrapper[4861]: I0219 13:23:17.834143 4861 scope.go:117] "RemoveContainer" containerID="2df9d64a12d55bd42a27e556a7a6bf06b44f9ef2598b853fd2fa33075a112d74" Feb 19 13:23:17 crc kubenswrapper[4861]: E0219 13:23:17.834665 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2df9d64a12d55bd42a27e556a7a6bf06b44f9ef2598b853fd2fa33075a112d74\": container with ID starting with 2df9d64a12d55bd42a27e556a7a6bf06b44f9ef2598b853fd2fa33075a112d74 not found: ID does not exist" containerID="2df9d64a12d55bd42a27e556a7a6bf06b44f9ef2598b853fd2fa33075a112d74" Feb 19 13:23:17 crc kubenswrapper[4861]: I0219 13:23:17.834703 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2df9d64a12d55bd42a27e556a7a6bf06b44f9ef2598b853fd2fa33075a112d74"} err="failed to get container status \"2df9d64a12d55bd42a27e556a7a6bf06b44f9ef2598b853fd2fa33075a112d74\": rpc error: code = NotFound desc = could not find container \"2df9d64a12d55bd42a27e556a7a6bf06b44f9ef2598b853fd2fa33075a112d74\": container with ID starting with 2df9d64a12d55bd42a27e556a7a6bf06b44f9ef2598b853fd2fa33075a112d74 not found: ID does 
not exist" Feb 19 13:23:17 crc kubenswrapper[4861]: I0219 13:23:17.834727 4861 scope.go:117] "RemoveContainer" containerID="fd56e7866567941bdd3ef7797c468d4b5ded35d06bc055c1a4255c26849f385d" Feb 19 13:23:17 crc kubenswrapper[4861]: E0219 13:23:17.835228 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd56e7866567941bdd3ef7797c468d4b5ded35d06bc055c1a4255c26849f385d\": container with ID starting with fd56e7866567941bdd3ef7797c468d4b5ded35d06bc055c1a4255c26849f385d not found: ID does not exist" containerID="fd56e7866567941bdd3ef7797c468d4b5ded35d06bc055c1a4255c26849f385d" Feb 19 13:23:17 crc kubenswrapper[4861]: I0219 13:23:17.835274 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd56e7866567941bdd3ef7797c468d4b5ded35d06bc055c1a4255c26849f385d"} err="failed to get container status \"fd56e7866567941bdd3ef7797c468d4b5ded35d06bc055c1a4255c26849f385d\": rpc error: code = NotFound desc = could not find container \"fd56e7866567941bdd3ef7797c468d4b5ded35d06bc055c1a4255c26849f385d\": container with ID starting with fd56e7866567941bdd3ef7797c468d4b5ded35d06bc055c1a4255c26849f385d not found: ID does not exist" Feb 19 13:23:17 crc kubenswrapper[4861]: I0219 13:23:17.835307 4861 scope.go:117] "RemoveContainer" containerID="1b599a2948150abc4df6a7341345224bba06f15849bcc1c86a8bf32d806b4c97" Feb 19 13:23:17 crc kubenswrapper[4861]: E0219 13:23:17.835697 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b599a2948150abc4df6a7341345224bba06f15849bcc1c86a8bf32d806b4c97\": container with ID starting with 1b599a2948150abc4df6a7341345224bba06f15849bcc1c86a8bf32d806b4c97 not found: ID does not exist" containerID="1b599a2948150abc4df6a7341345224bba06f15849bcc1c86a8bf32d806b4c97" Feb 19 13:23:17 crc kubenswrapper[4861]: I0219 13:23:17.835727 4861 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b599a2948150abc4df6a7341345224bba06f15849bcc1c86a8bf32d806b4c97"} err="failed to get container status \"1b599a2948150abc4df6a7341345224bba06f15849bcc1c86a8bf32d806b4c97\": rpc error: code = NotFound desc = could not find container \"1b599a2948150abc4df6a7341345224bba06f15849bcc1c86a8bf32d806b4c97\": container with ID starting with 1b599a2948150abc4df6a7341345224bba06f15849bcc1c86a8bf32d806b4c97 not found: ID does not exist" Feb 19 13:23:17 crc kubenswrapper[4861]: I0219 13:23:17.986689 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d051557-63da-4285-a7fa-2a6e9eb626e3" path="/var/lib/kubelet/pods/5d051557-63da-4285-a7fa-2a6e9eb626e3/volumes" Feb 19 13:23:18 crc kubenswrapper[4861]: I0219 13:23:18.401883 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dpxkn" event={"ID":"ebc43836-51c1-432a-89b7-a11307e4e246","Type":"ContainerStarted","Data":"8ec16e8143aa75a4225b217b85bd24e24439afb574d6d1a550bb3e4d70c737ff"} Feb 19 13:23:18 crc kubenswrapper[4861]: I0219 13:23:18.421334 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dpxkn" podStartSLOduration=3.414805631 podStartE2EDuration="5.42131162s" podCreationTimestamp="2026-02-19 13:23:13 +0000 UTC" firstStartedPulling="2026-02-19 13:23:15.832945583 +0000 UTC m=+810.494048821" lastFinishedPulling="2026-02-19 13:23:17.839451582 +0000 UTC m=+812.500554810" observedRunningTime="2026-02-19 13:23:18.418718267 +0000 UTC m=+813.079821495" watchObservedRunningTime="2026-02-19 13:23:18.42131162 +0000 UTC m=+813.082414848" Feb 19 13:23:19 crc kubenswrapper[4861]: I0219 13:23:19.412877 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-c9q9b" 
event={"ID":"e4ae8c99-a3d9-40f2-9c52-67ca6ff8ec9c","Type":"ContainerStarted","Data":"1766531fd17a6f0d41fa8c217cac5835759323d6fb4ecf293c4d34ed597a7022"} Feb 19 13:23:19 crc kubenswrapper[4861]: I0219 13:23:19.441249 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-c9q9b" podStartSLOduration=1.986516449 podStartE2EDuration="7.441214492s" podCreationTimestamp="2026-02-19 13:23:12 +0000 UTC" firstStartedPulling="2026-02-19 13:23:13.591475514 +0000 UTC m=+808.252578742" lastFinishedPulling="2026-02-19 13:23:19.046173517 +0000 UTC m=+813.707276785" observedRunningTime="2026-02-19 13:23:19.440446132 +0000 UTC m=+814.101549400" watchObservedRunningTime="2026-02-19 13:23:19.441214492 +0000 UTC m=+814.102317760" Feb 19 13:23:23 crc kubenswrapper[4861]: I0219 13:23:23.294094 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-km6kf" Feb 19 13:23:23 crc kubenswrapper[4861]: I0219 13:23:23.725374 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-8f8f988d8-frf2q" Feb 19 13:23:23 crc kubenswrapper[4861]: I0219 13:23:23.725501 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-8f8f988d8-frf2q" Feb 19 13:23:23 crc kubenswrapper[4861]: I0219 13:23:23.733962 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-8f8f988d8-frf2q" Feb 19 13:23:24 crc kubenswrapper[4861]: I0219 13:23:24.459704 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-8f8f988d8-frf2q" Feb 19 13:23:24 crc kubenswrapper[4861]: I0219 13:23:24.533287 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-4bs6h"] Feb 19 13:23:33 crc kubenswrapper[4861]: I0219 13:23:33.252498 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-rz7x7" Feb 19 13:23:33 crc kubenswrapper[4861]: I0219 13:23:33.834980 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:23:33 crc kubenswrapper[4861]: I0219 13:23:33.835462 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:23:43 crc kubenswrapper[4861]: I0219 13:23:43.643820 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2j29r"] Feb 19 13:23:43 crc kubenswrapper[4861]: E0219 13:23:43.645153 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d051557-63da-4285-a7fa-2a6e9eb626e3" containerName="extract-content" Feb 19 13:23:43 crc kubenswrapper[4861]: I0219 13:23:43.645178 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d051557-63da-4285-a7fa-2a6e9eb626e3" containerName="extract-content" Feb 19 13:23:43 crc kubenswrapper[4861]: E0219 13:23:43.645220 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d051557-63da-4285-a7fa-2a6e9eb626e3" containerName="extract-utilities" Feb 19 13:23:43 crc kubenswrapper[4861]: I0219 13:23:43.645232 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d051557-63da-4285-a7fa-2a6e9eb626e3" containerName="extract-utilities" Feb 19 13:23:43 crc kubenswrapper[4861]: E0219 13:23:43.645247 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d051557-63da-4285-a7fa-2a6e9eb626e3" containerName="registry-server" Feb 19 
13:23:43 crc kubenswrapper[4861]: I0219 13:23:43.645261 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d051557-63da-4285-a7fa-2a6e9eb626e3" containerName="registry-server" Feb 19 13:23:43 crc kubenswrapper[4861]: I0219 13:23:43.645609 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d051557-63da-4285-a7fa-2a6e9eb626e3" containerName="registry-server" Feb 19 13:23:43 crc kubenswrapper[4861]: I0219 13:23:43.647022 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2j29r" Feb 19 13:23:43 crc kubenswrapper[4861]: I0219 13:23:43.663763 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2j29r"] Feb 19 13:23:43 crc kubenswrapper[4861]: I0219 13:23:43.745568 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b991de0d-062c-4d05-9038-7330404ab19d-catalog-content\") pod \"community-operators-2j29r\" (UID: \"b991de0d-062c-4d05-9038-7330404ab19d\") " pod="openshift-marketplace/community-operators-2j29r" Feb 19 13:23:43 crc kubenswrapper[4861]: I0219 13:23:43.745688 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b991de0d-062c-4d05-9038-7330404ab19d-utilities\") pod \"community-operators-2j29r\" (UID: \"b991de0d-062c-4d05-9038-7330404ab19d\") " pod="openshift-marketplace/community-operators-2j29r" Feb 19 13:23:43 crc kubenswrapper[4861]: I0219 13:23:43.745724 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbbhk\" (UniqueName: \"kubernetes.io/projected/b991de0d-062c-4d05-9038-7330404ab19d-kube-api-access-sbbhk\") pod \"community-operators-2j29r\" (UID: \"b991de0d-062c-4d05-9038-7330404ab19d\") " 
pod="openshift-marketplace/community-operators-2j29r" Feb 19 13:23:43 crc kubenswrapper[4861]: I0219 13:23:43.846770 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b991de0d-062c-4d05-9038-7330404ab19d-utilities\") pod \"community-operators-2j29r\" (UID: \"b991de0d-062c-4d05-9038-7330404ab19d\") " pod="openshift-marketplace/community-operators-2j29r" Feb 19 13:23:43 crc kubenswrapper[4861]: I0219 13:23:43.847135 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbbhk\" (UniqueName: \"kubernetes.io/projected/b991de0d-062c-4d05-9038-7330404ab19d-kube-api-access-sbbhk\") pod \"community-operators-2j29r\" (UID: \"b991de0d-062c-4d05-9038-7330404ab19d\") " pod="openshift-marketplace/community-operators-2j29r" Feb 19 13:23:43 crc kubenswrapper[4861]: I0219 13:23:43.847204 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b991de0d-062c-4d05-9038-7330404ab19d-catalog-content\") pod \"community-operators-2j29r\" (UID: \"b991de0d-062c-4d05-9038-7330404ab19d\") " pod="openshift-marketplace/community-operators-2j29r" Feb 19 13:23:43 crc kubenswrapper[4861]: I0219 13:23:43.847479 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b991de0d-062c-4d05-9038-7330404ab19d-utilities\") pod \"community-operators-2j29r\" (UID: \"b991de0d-062c-4d05-9038-7330404ab19d\") " pod="openshift-marketplace/community-operators-2j29r" Feb 19 13:23:43 crc kubenswrapper[4861]: I0219 13:23:43.847608 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b991de0d-062c-4d05-9038-7330404ab19d-catalog-content\") pod \"community-operators-2j29r\" (UID: \"b991de0d-062c-4d05-9038-7330404ab19d\") " 
pod="openshift-marketplace/community-operators-2j29r" Feb 19 13:23:43 crc kubenswrapper[4861]: I0219 13:23:43.885183 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbbhk\" (UniqueName: \"kubernetes.io/projected/b991de0d-062c-4d05-9038-7330404ab19d-kube-api-access-sbbhk\") pod \"community-operators-2j29r\" (UID: \"b991de0d-062c-4d05-9038-7330404ab19d\") " pod="openshift-marketplace/community-operators-2j29r" Feb 19 13:23:44 crc kubenswrapper[4861]: I0219 13:23:44.005177 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2j29r" Feb 19 13:23:44 crc kubenswrapper[4861]: I0219 13:23:44.491453 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2j29r"] Feb 19 13:23:44 crc kubenswrapper[4861]: I0219 13:23:44.630195 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2j29r" event={"ID":"b991de0d-062c-4d05-9038-7330404ab19d","Type":"ContainerStarted","Data":"dd84123cbee9b59b21aad2723440caf8a40d519a81192f0eabebe628dd38b3c4"} Feb 19 13:23:45 crc kubenswrapper[4861]: I0219 13:23:45.638570 4861 generic.go:334] "Generic (PLEG): container finished" podID="b991de0d-062c-4d05-9038-7330404ab19d" containerID="ecd26e72b3cfb1533c1d4673082475e138253300292ae24d772331580ccbf2ad" exitCode=0 Feb 19 13:23:45 crc kubenswrapper[4861]: I0219 13:23:45.638694 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2j29r" event={"ID":"b991de0d-062c-4d05-9038-7330404ab19d","Type":"ContainerDied","Data":"ecd26e72b3cfb1533c1d4673082475e138253300292ae24d772331580ccbf2ad"} Feb 19 13:23:46 crc kubenswrapper[4861]: I0219 13:23:46.650702 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2j29r" 
event={"ID":"b991de0d-062c-4d05-9038-7330404ab19d","Type":"ContainerStarted","Data":"1ec595f29c7a34fdabb406b43eff061d46ae8abcf39543f119704cded5ca9588"} Feb 19 13:23:47 crc kubenswrapper[4861]: I0219 13:23:47.404208 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rb7gv"] Feb 19 13:23:47 crc kubenswrapper[4861]: I0219 13:23:47.409930 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rb7gv" Feb 19 13:23:47 crc kubenswrapper[4861]: I0219 13:23:47.421520 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rb7gv"] Feb 19 13:23:47 crc kubenswrapper[4861]: I0219 13:23:47.506820 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dafff9fe-7428-4995-a1a2-acebf52562b1-catalog-content\") pod \"redhat-marketplace-rb7gv\" (UID: \"dafff9fe-7428-4995-a1a2-acebf52562b1\") " pod="openshift-marketplace/redhat-marketplace-rb7gv" Feb 19 13:23:47 crc kubenswrapper[4861]: I0219 13:23:47.506937 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxgw4\" (UniqueName: \"kubernetes.io/projected/dafff9fe-7428-4995-a1a2-acebf52562b1-kube-api-access-fxgw4\") pod \"redhat-marketplace-rb7gv\" (UID: \"dafff9fe-7428-4995-a1a2-acebf52562b1\") " pod="openshift-marketplace/redhat-marketplace-rb7gv" Feb 19 13:23:47 crc kubenswrapper[4861]: I0219 13:23:47.507008 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dafff9fe-7428-4995-a1a2-acebf52562b1-utilities\") pod \"redhat-marketplace-rb7gv\" (UID: \"dafff9fe-7428-4995-a1a2-acebf52562b1\") " pod="openshift-marketplace/redhat-marketplace-rb7gv" Feb 19 13:23:47 crc kubenswrapper[4861]: I0219 13:23:47.607765 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dafff9fe-7428-4995-a1a2-acebf52562b1-utilities\") pod \"redhat-marketplace-rb7gv\" (UID: \"dafff9fe-7428-4995-a1a2-acebf52562b1\") " pod="openshift-marketplace/redhat-marketplace-rb7gv" Feb 19 13:23:47 crc kubenswrapper[4861]: I0219 13:23:47.607836 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dafff9fe-7428-4995-a1a2-acebf52562b1-catalog-content\") pod \"redhat-marketplace-rb7gv\" (UID: \"dafff9fe-7428-4995-a1a2-acebf52562b1\") " pod="openshift-marketplace/redhat-marketplace-rb7gv" Feb 19 13:23:47 crc kubenswrapper[4861]: I0219 13:23:47.607865 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxgw4\" (UniqueName: \"kubernetes.io/projected/dafff9fe-7428-4995-a1a2-acebf52562b1-kube-api-access-fxgw4\") pod \"redhat-marketplace-rb7gv\" (UID: \"dafff9fe-7428-4995-a1a2-acebf52562b1\") " pod="openshift-marketplace/redhat-marketplace-rb7gv" Feb 19 13:23:47 crc kubenswrapper[4861]: I0219 13:23:47.609126 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dafff9fe-7428-4995-a1a2-acebf52562b1-catalog-content\") pod \"redhat-marketplace-rb7gv\" (UID: \"dafff9fe-7428-4995-a1a2-acebf52562b1\") " pod="openshift-marketplace/redhat-marketplace-rb7gv" Feb 19 13:23:47 crc kubenswrapper[4861]: I0219 13:23:47.609330 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dafff9fe-7428-4995-a1a2-acebf52562b1-utilities\") pod \"redhat-marketplace-rb7gv\" (UID: \"dafff9fe-7428-4995-a1a2-acebf52562b1\") " pod="openshift-marketplace/redhat-marketplace-rb7gv" Feb 19 13:23:47 crc kubenswrapper[4861]: I0219 13:23:47.640889 4861 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-fxgw4\" (UniqueName: \"kubernetes.io/projected/dafff9fe-7428-4995-a1a2-acebf52562b1-kube-api-access-fxgw4\") pod \"redhat-marketplace-rb7gv\" (UID: \"dafff9fe-7428-4995-a1a2-acebf52562b1\") " pod="openshift-marketplace/redhat-marketplace-rb7gv" Feb 19 13:23:47 crc kubenswrapper[4861]: I0219 13:23:47.659814 4861 generic.go:334] "Generic (PLEG): container finished" podID="b991de0d-062c-4d05-9038-7330404ab19d" containerID="1ec595f29c7a34fdabb406b43eff061d46ae8abcf39543f119704cded5ca9588" exitCode=0 Feb 19 13:23:47 crc kubenswrapper[4861]: I0219 13:23:47.659866 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2j29r" event={"ID":"b991de0d-062c-4d05-9038-7330404ab19d","Type":"ContainerDied","Data":"1ec595f29c7a34fdabb406b43eff061d46ae8abcf39543f119704cded5ca9588"} Feb 19 13:23:47 crc kubenswrapper[4861]: I0219 13:23:47.772148 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rb7gv" Feb 19 13:23:47 crc kubenswrapper[4861]: I0219 13:23:47.997439 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rb7gv"] Feb 19 13:23:48 crc kubenswrapper[4861]: I0219 13:23:48.671064 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2j29r" event={"ID":"b991de0d-062c-4d05-9038-7330404ab19d","Type":"ContainerStarted","Data":"137143c339c6a461e88d56108b063240a10b8855a78dc11356463246008dc0bd"} Feb 19 13:23:48 crc kubenswrapper[4861]: I0219 13:23:48.677368 4861 generic.go:334] "Generic (PLEG): container finished" podID="dafff9fe-7428-4995-a1a2-acebf52562b1" containerID="fb9c18d777344a68489e242f706bb9481a1e76cba99c41add80cf0050be856d5" exitCode=0 Feb 19 13:23:48 crc kubenswrapper[4861]: I0219 13:23:48.677537 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rb7gv" 
event={"ID":"dafff9fe-7428-4995-a1a2-acebf52562b1","Type":"ContainerDied","Data":"fb9c18d777344a68489e242f706bb9481a1e76cba99c41add80cf0050be856d5"} Feb 19 13:23:48 crc kubenswrapper[4861]: I0219 13:23:48.677631 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rb7gv" event={"ID":"dafff9fe-7428-4995-a1a2-acebf52562b1","Type":"ContainerStarted","Data":"3766bb49f6925b977ef876cd1d1fe02ef60acb75d4aa84cafa12d088a5fac1a6"} Feb 19 13:23:48 crc kubenswrapper[4861]: I0219 13:23:48.697023 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2j29r" podStartSLOduration=3.036122902 podStartE2EDuration="5.69699665s" podCreationTimestamp="2026-02-19 13:23:43 +0000 UTC" firstStartedPulling="2026-02-19 13:23:45.644170281 +0000 UTC m=+840.305273529" lastFinishedPulling="2026-02-19 13:23:48.305044049 +0000 UTC m=+842.966147277" observedRunningTime="2026-02-19 13:23:48.692887668 +0000 UTC m=+843.353990906" watchObservedRunningTime="2026-02-19 13:23:48.69699665 +0000 UTC m=+843.358099888" Feb 19 13:23:49 crc kubenswrapper[4861]: I0219 13:23:49.595987 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-4bs6h" podUID="e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b" containerName="console" containerID="cri-o://0e45e3b22474265ff0ea5a36b8ba0e63e781274181c64731fa4890378da95ec3" gracePeriod=15 Feb 19 13:23:49 crc kubenswrapper[4861]: I0219 13:23:49.686161 4861 generic.go:334] "Generic (PLEG): container finished" podID="dafff9fe-7428-4995-a1a2-acebf52562b1" containerID="99a406e3c6d9d030518b4d1533ce5360882c46445753ac23544fafd4119554ac" exitCode=0 Feb 19 13:23:49 crc kubenswrapper[4861]: I0219 13:23:49.686225 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rb7gv" 
event={"ID":"dafff9fe-7428-4995-a1a2-acebf52562b1","Type":"ContainerDied","Data":"99a406e3c6d9d030518b4d1533ce5360882c46445753ac23544fafd4119554ac"} Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.002829 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-4bs6h_e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b/console/0.log" Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.002895 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4bs6h" Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.045954 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-console-serving-cert\") pod \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\" (UID: \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\") " Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.046006 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-console-oauth-config\") pod \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\" (UID: \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\") " Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.046050 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c59z2\" (UniqueName: \"kubernetes.io/projected/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-kube-api-access-c59z2\") pod \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\" (UID: \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\") " Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.046081 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-service-ca\") pod \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\" (UID: 
\"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\") " Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.046142 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-trusted-ca-bundle\") pod \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\" (UID: \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\") " Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.046159 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-console-config\") pod \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\" (UID: \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\") " Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.046175 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-oauth-serving-cert\") pod \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\" (UID: \"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b\") " Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.047062 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b" (UID: "e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.048181 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-service-ca" (OuterVolumeSpecName: "service-ca") pod "e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b" (UID: "e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.048308 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b" (UID: "e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.048398 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-console-config" (OuterVolumeSpecName: "console-config") pod "e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b" (UID: "e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.054011 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b" (UID: "e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.054016 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-kube-api-access-c59z2" (OuterVolumeSpecName: "kube-api-access-c59z2") pod "e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b" (UID: "e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b"). InnerVolumeSpecName "kube-api-access-c59z2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.054274 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b" (UID: "e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.147707 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c59z2\" (UniqueName: \"kubernetes.io/projected/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-kube-api-access-c59z2\") on node \"crc\" DevicePath \"\"" Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.147772 4861 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.147792 4861 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.147810 4861 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.147826 4861 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.147842 4861 reconciler_common.go:293] "Volume detached for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.147859 4861 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.696790 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-4bs6h_e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b/console/0.log" Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.696857 4861 generic.go:334] "Generic (PLEG): container finished" podID="e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b" containerID="0e45e3b22474265ff0ea5a36b8ba0e63e781274181c64731fa4890378da95ec3" exitCode=2 Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.696895 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4bs6h" event={"ID":"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b","Type":"ContainerDied","Data":"0e45e3b22474265ff0ea5a36b8ba0e63e781274181c64731fa4890378da95ec3"} Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.696927 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4bs6h" event={"ID":"e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b","Type":"ContainerDied","Data":"2a8432bb8a9eadcdce87c07656bc18e3212bf2c2a39484babde5f35eeec1e761"} Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.696952 4861 scope.go:117] "RemoveContainer" containerID="0e45e3b22474265ff0ea5a36b8ba0e63e781274181c64731fa4890378da95ec3" Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.696982 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-4bs6h" Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.731083 4861 scope.go:117] "RemoveContainer" containerID="0e45e3b22474265ff0ea5a36b8ba0e63e781274181c64731fa4890378da95ec3" Feb 19 13:23:50 crc kubenswrapper[4861]: E0219 13:23:50.731808 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e45e3b22474265ff0ea5a36b8ba0e63e781274181c64731fa4890378da95ec3\": container with ID starting with 0e45e3b22474265ff0ea5a36b8ba0e63e781274181c64731fa4890378da95ec3 not found: ID does not exist" containerID="0e45e3b22474265ff0ea5a36b8ba0e63e781274181c64731fa4890378da95ec3" Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.731851 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e45e3b22474265ff0ea5a36b8ba0e63e781274181c64731fa4890378da95ec3"} err="failed to get container status \"0e45e3b22474265ff0ea5a36b8ba0e63e781274181c64731fa4890378da95ec3\": rpc error: code = NotFound desc = could not find container \"0e45e3b22474265ff0ea5a36b8ba0e63e781274181c64731fa4890378da95ec3\": container with ID starting with 0e45e3b22474265ff0ea5a36b8ba0e63e781274181c64731fa4890378da95ec3 not found: ID does not exist" Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.741330 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-4bs6h"] Feb 19 13:23:50 crc kubenswrapper[4861]: I0219 13:23:50.745699 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-4bs6h"] Feb 19 13:23:51 crc kubenswrapper[4861]: I0219 13:23:51.733196 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rb7gv" event={"ID":"dafff9fe-7428-4995-a1a2-acebf52562b1","Type":"ContainerStarted","Data":"a0bc47ad3871dfc8e060c49ebbdea8930bb554c673b0f03de57bb17468ad4749"} Feb 19 13:23:51 crc 
kubenswrapper[4861]: I0219 13:23:51.762274 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rb7gv" podStartSLOduration=2.533391789 podStartE2EDuration="4.762247317s" podCreationTimestamp="2026-02-19 13:23:47 +0000 UTC" firstStartedPulling="2026-02-19 13:23:48.67981184 +0000 UTC m=+843.340915078" lastFinishedPulling="2026-02-19 13:23:50.908667338 +0000 UTC m=+845.569770606" observedRunningTime="2026-02-19 13:23:51.752057017 +0000 UTC m=+846.413160315" watchObservedRunningTime="2026-02-19 13:23:51.762247317 +0000 UTC m=+846.423350585" Feb 19 13:23:51 crc kubenswrapper[4861]: I0219 13:23:51.990207 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b" path="/var/lib/kubelet/pods/e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b/volumes" Feb 19 13:23:54 crc kubenswrapper[4861]: I0219 13:23:54.005485 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2j29r" Feb 19 13:23:54 crc kubenswrapper[4861]: I0219 13:23:54.005575 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2j29r" Feb 19 13:23:54 crc kubenswrapper[4861]: I0219 13:23:54.069236 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2j29r" Feb 19 13:23:54 crc kubenswrapper[4861]: I0219 13:23:54.827384 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2j29r" Feb 19 13:23:56 crc kubenswrapper[4861]: I0219 13:23:56.797078 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2j29r"] Feb 19 13:23:56 crc kubenswrapper[4861]: I0219 13:23:56.797314 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2j29r" 
podUID="b991de0d-062c-4d05-9038-7330404ab19d" containerName="registry-server" containerID="cri-o://137143c339c6a461e88d56108b063240a10b8855a78dc11356463246008dc0bd" gracePeriod=2 Feb 19 13:23:57 crc kubenswrapper[4861]: I0219 13:23:57.775756 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rb7gv" Feb 19 13:23:57 crc kubenswrapper[4861]: I0219 13:23:57.776508 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rb7gv" Feb 19 13:23:57 crc kubenswrapper[4861]: I0219 13:23:57.818593 4861 generic.go:334] "Generic (PLEG): container finished" podID="b991de0d-062c-4d05-9038-7330404ab19d" containerID="137143c339c6a461e88d56108b063240a10b8855a78dc11356463246008dc0bd" exitCode=0 Feb 19 13:23:57 crc kubenswrapper[4861]: I0219 13:23:57.818668 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2j29r" event={"ID":"b991de0d-062c-4d05-9038-7330404ab19d","Type":"ContainerDied","Data":"137143c339c6a461e88d56108b063240a10b8855a78dc11356463246008dc0bd"} Feb 19 13:23:57 crc kubenswrapper[4861]: I0219 13:23:57.870724 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rb7gv" Feb 19 13:23:57 crc kubenswrapper[4861]: I0219 13:23:57.961132 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2j29r" Feb 19 13:23:57 crc kubenswrapper[4861]: I0219 13:23:57.969983 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b991de0d-062c-4d05-9038-7330404ab19d-utilities\") pod \"b991de0d-062c-4d05-9038-7330404ab19d\" (UID: \"b991de0d-062c-4d05-9038-7330404ab19d\") " Feb 19 13:23:57 crc kubenswrapper[4861]: I0219 13:23:57.970050 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b991de0d-062c-4d05-9038-7330404ab19d-catalog-content\") pod \"b991de0d-062c-4d05-9038-7330404ab19d\" (UID: \"b991de0d-062c-4d05-9038-7330404ab19d\") " Feb 19 13:23:57 crc kubenswrapper[4861]: I0219 13:23:57.970149 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbbhk\" (UniqueName: \"kubernetes.io/projected/b991de0d-062c-4d05-9038-7330404ab19d-kube-api-access-sbbhk\") pod \"b991de0d-062c-4d05-9038-7330404ab19d\" (UID: \"b991de0d-062c-4d05-9038-7330404ab19d\") " Feb 19 13:23:57 crc kubenswrapper[4861]: I0219 13:23:57.971623 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b991de0d-062c-4d05-9038-7330404ab19d-utilities" (OuterVolumeSpecName: "utilities") pod "b991de0d-062c-4d05-9038-7330404ab19d" (UID: "b991de0d-062c-4d05-9038-7330404ab19d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:23:57 crc kubenswrapper[4861]: I0219 13:23:57.984077 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b991de0d-062c-4d05-9038-7330404ab19d-kube-api-access-sbbhk" (OuterVolumeSpecName: "kube-api-access-sbbhk") pod "b991de0d-062c-4d05-9038-7330404ab19d" (UID: "b991de0d-062c-4d05-9038-7330404ab19d"). InnerVolumeSpecName "kube-api-access-sbbhk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:23:58 crc kubenswrapper[4861]: I0219 13:23:58.044082 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b991de0d-062c-4d05-9038-7330404ab19d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b991de0d-062c-4d05-9038-7330404ab19d" (UID: "b991de0d-062c-4d05-9038-7330404ab19d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:23:58 crc kubenswrapper[4861]: I0219 13:23:58.071634 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbbhk\" (UniqueName: \"kubernetes.io/projected/b991de0d-062c-4d05-9038-7330404ab19d-kube-api-access-sbbhk\") on node \"crc\" DevicePath \"\"" Feb 19 13:23:58 crc kubenswrapper[4861]: I0219 13:23:58.071692 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b991de0d-062c-4d05-9038-7330404ab19d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:23:58 crc kubenswrapper[4861]: I0219 13:23:58.071715 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b991de0d-062c-4d05-9038-7330404ab19d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:23:58 crc kubenswrapper[4861]: I0219 13:23:58.673494 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts"] Feb 19 13:23:58 crc kubenswrapper[4861]: E0219 13:23:58.673929 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b" containerName="console" Feb 19 13:23:58 crc kubenswrapper[4861]: I0219 13:23:58.673971 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b" containerName="console" Feb 19 13:23:58 crc kubenswrapper[4861]: E0219 13:23:58.674010 4861 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="b991de0d-062c-4d05-9038-7330404ab19d" containerName="registry-server" Feb 19 13:23:58 crc kubenswrapper[4861]: I0219 13:23:58.674025 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b991de0d-062c-4d05-9038-7330404ab19d" containerName="registry-server" Feb 19 13:23:58 crc kubenswrapper[4861]: E0219 13:23:58.674052 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b991de0d-062c-4d05-9038-7330404ab19d" containerName="extract-utilities" Feb 19 13:23:58 crc kubenswrapper[4861]: I0219 13:23:58.674067 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b991de0d-062c-4d05-9038-7330404ab19d" containerName="extract-utilities" Feb 19 13:23:58 crc kubenswrapper[4861]: E0219 13:23:58.674099 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b991de0d-062c-4d05-9038-7330404ab19d" containerName="extract-content" Feb 19 13:23:58 crc kubenswrapper[4861]: I0219 13:23:58.674115 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b991de0d-062c-4d05-9038-7330404ab19d" containerName="extract-content" Feb 19 13:23:58 crc kubenswrapper[4861]: I0219 13:23:58.674367 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5f1cff3-40b4-43b3-8fb9-a7a34db35e7b" containerName="console" Feb 19 13:23:58 crc kubenswrapper[4861]: I0219 13:23:58.674414 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b991de0d-062c-4d05-9038-7330404ab19d" containerName="registry-server" Feb 19 13:23:58 crc kubenswrapper[4861]: I0219 13:23:58.676218 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts" Feb 19 13:23:58 crc kubenswrapper[4861]: I0219 13:23:58.679748 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 13:23:58 crc kubenswrapper[4861]: I0219 13:23:58.681840 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkd7s\" (UniqueName: \"kubernetes.io/projected/f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4-kube-api-access-qkd7s\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts\" (UID: \"f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts" Feb 19 13:23:58 crc kubenswrapper[4861]: I0219 13:23:58.681917 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts\" (UID: \"f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts" Feb 19 13:23:58 crc kubenswrapper[4861]: I0219 13:23:58.681963 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts\" (UID: \"f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts" Feb 19 13:23:58 crc kubenswrapper[4861]: I0219 13:23:58.686711 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts"] Feb 19 13:23:58 crc kubenswrapper[4861]: 
I0219 13:23:58.784356 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkd7s\" (UniqueName: \"kubernetes.io/projected/f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4-kube-api-access-qkd7s\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts\" (UID: \"f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts" Feb 19 13:23:58 crc kubenswrapper[4861]: I0219 13:23:58.785355 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts\" (UID: \"f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts" Feb 19 13:23:58 crc kubenswrapper[4861]: I0219 13:23:58.785660 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts\" (UID: \"f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts" Feb 19 13:23:58 crc kubenswrapper[4861]: I0219 13:23:58.786304 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts\" (UID: \"f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts" Feb 19 13:23:58 crc kubenswrapper[4861]: I0219 13:23:58.787960 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts\" (UID: \"f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts" Feb 19 13:23:58 crc kubenswrapper[4861]: I0219 13:23:58.817655 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkd7s\" (UniqueName: \"kubernetes.io/projected/f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4-kube-api-access-qkd7s\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts\" (UID: \"f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts" Feb 19 13:23:58 crc kubenswrapper[4861]: I0219 13:23:58.832048 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2j29r" event={"ID":"b991de0d-062c-4d05-9038-7330404ab19d","Type":"ContainerDied","Data":"dd84123cbee9b59b21aad2723440caf8a40d519a81192f0eabebe628dd38b3c4"} Feb 19 13:23:58 crc kubenswrapper[4861]: I0219 13:23:58.832114 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2j29r" Feb 19 13:23:58 crc kubenswrapper[4861]: I0219 13:23:58.833058 4861 scope.go:117] "RemoveContainer" containerID="137143c339c6a461e88d56108b063240a10b8855a78dc11356463246008dc0bd" Feb 19 13:23:58 crc kubenswrapper[4861]: I0219 13:23:58.868182 4861 scope.go:117] "RemoveContainer" containerID="1ec595f29c7a34fdabb406b43eff061d46ae8abcf39543f119704cded5ca9588" Feb 19 13:23:58 crc kubenswrapper[4861]: I0219 13:23:58.897129 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2j29r"] Feb 19 13:23:58 crc kubenswrapper[4861]: I0219 13:23:58.904339 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2j29r"] Feb 19 13:23:58 crc kubenswrapper[4861]: I0219 13:23:58.918593 4861 scope.go:117] "RemoveContainer" containerID="ecd26e72b3cfb1533c1d4673082475e138253300292ae24d772331580ccbf2ad" Feb 19 13:23:58 crc kubenswrapper[4861]: I0219 13:23:58.921954 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rb7gv" Feb 19 13:23:59 crc kubenswrapper[4861]: I0219 13:23:59.009006 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts" Feb 19 13:23:59 crc kubenswrapper[4861]: I0219 13:23:59.302228 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts"] Feb 19 13:23:59 crc kubenswrapper[4861]: I0219 13:23:59.841930 4861 generic.go:334] "Generic (PLEG): container finished" podID="f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4" containerID="c23004ac8aa248def89e84954341cfdfb2aaf7b601ef60a84a4e4354f3f61d16" exitCode=0 Feb 19 13:23:59 crc kubenswrapper[4861]: I0219 13:23:59.842001 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts" event={"ID":"f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4","Type":"ContainerDied","Data":"c23004ac8aa248def89e84954341cfdfb2aaf7b601ef60a84a4e4354f3f61d16"} Feb 19 13:23:59 crc kubenswrapper[4861]: I0219 13:23:59.842070 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts" event={"ID":"f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4","Type":"ContainerStarted","Data":"e8d2f522d3d2167722e401c6a3a58dfe1a058c542666ac6065700d5f766a43b2"} Feb 19 13:23:59 crc kubenswrapper[4861]: I0219 13:23:59.991584 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b991de0d-062c-4d05-9038-7330404ab19d" path="/var/lib/kubelet/pods/b991de0d-062c-4d05-9038-7330404ab19d/volumes" Feb 19 13:24:01 crc kubenswrapper[4861]: I0219 13:24:01.601978 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rb7gv"] Feb 19 13:24:01 crc kubenswrapper[4861]: I0219 13:24:01.602839 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rb7gv" podUID="dafff9fe-7428-4995-a1a2-acebf52562b1" containerName="registry-server" 
containerID="cri-o://a0bc47ad3871dfc8e060c49ebbdea8930bb554c673b0f03de57bb17468ad4749" gracePeriod=2 Feb 19 13:24:01 crc kubenswrapper[4861]: I0219 13:24:01.863380 4861 generic.go:334] "Generic (PLEG): container finished" podID="dafff9fe-7428-4995-a1a2-acebf52562b1" containerID="a0bc47ad3871dfc8e060c49ebbdea8930bb554c673b0f03de57bb17468ad4749" exitCode=0 Feb 19 13:24:01 crc kubenswrapper[4861]: I0219 13:24:01.863487 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rb7gv" event={"ID":"dafff9fe-7428-4995-a1a2-acebf52562b1","Type":"ContainerDied","Data":"a0bc47ad3871dfc8e060c49ebbdea8930bb554c673b0f03de57bb17468ad4749"} Feb 19 13:24:01 crc kubenswrapper[4861]: I0219 13:24:01.867079 4861 generic.go:334] "Generic (PLEG): container finished" podID="f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4" containerID="722c2a3f0ccf6eec8fd1a8ff25098b4a2a536bc12bc1922d303cf49c2199421b" exitCode=0 Feb 19 13:24:01 crc kubenswrapper[4861]: I0219 13:24:01.867123 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts" event={"ID":"f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4","Type":"ContainerDied","Data":"722c2a3f0ccf6eec8fd1a8ff25098b4a2a536bc12bc1922d303cf49c2199421b"} Feb 19 13:24:02 crc kubenswrapper[4861]: I0219 13:24:02.069343 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rb7gv" Feb 19 13:24:02 crc kubenswrapper[4861]: I0219 13:24:02.140042 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dafff9fe-7428-4995-a1a2-acebf52562b1-utilities\") pod \"dafff9fe-7428-4995-a1a2-acebf52562b1\" (UID: \"dafff9fe-7428-4995-a1a2-acebf52562b1\") " Feb 19 13:24:02 crc kubenswrapper[4861]: I0219 13:24:02.140587 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dafff9fe-7428-4995-a1a2-acebf52562b1-catalog-content\") pod \"dafff9fe-7428-4995-a1a2-acebf52562b1\" (UID: \"dafff9fe-7428-4995-a1a2-acebf52562b1\") " Feb 19 13:24:02 crc kubenswrapper[4861]: I0219 13:24:02.140724 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxgw4\" (UniqueName: \"kubernetes.io/projected/dafff9fe-7428-4995-a1a2-acebf52562b1-kube-api-access-fxgw4\") pod \"dafff9fe-7428-4995-a1a2-acebf52562b1\" (UID: \"dafff9fe-7428-4995-a1a2-acebf52562b1\") " Feb 19 13:24:02 crc kubenswrapper[4861]: I0219 13:24:02.140907 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dafff9fe-7428-4995-a1a2-acebf52562b1-utilities" (OuterVolumeSpecName: "utilities") pod "dafff9fe-7428-4995-a1a2-acebf52562b1" (UID: "dafff9fe-7428-4995-a1a2-acebf52562b1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:24:02 crc kubenswrapper[4861]: I0219 13:24:02.141280 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dafff9fe-7428-4995-a1a2-acebf52562b1-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:24:02 crc kubenswrapper[4861]: I0219 13:24:02.150817 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dafff9fe-7428-4995-a1a2-acebf52562b1-kube-api-access-fxgw4" (OuterVolumeSpecName: "kube-api-access-fxgw4") pod "dafff9fe-7428-4995-a1a2-acebf52562b1" (UID: "dafff9fe-7428-4995-a1a2-acebf52562b1"). InnerVolumeSpecName "kube-api-access-fxgw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:24:02 crc kubenswrapper[4861]: I0219 13:24:02.172621 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dafff9fe-7428-4995-a1a2-acebf52562b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dafff9fe-7428-4995-a1a2-acebf52562b1" (UID: "dafff9fe-7428-4995-a1a2-acebf52562b1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:24:02 crc kubenswrapper[4861]: I0219 13:24:02.242767 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxgw4\" (UniqueName: \"kubernetes.io/projected/dafff9fe-7428-4995-a1a2-acebf52562b1-kube-api-access-fxgw4\") on node \"crc\" DevicePath \"\"" Feb 19 13:24:02 crc kubenswrapper[4861]: I0219 13:24:02.242972 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dafff9fe-7428-4995-a1a2-acebf52562b1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:24:02 crc kubenswrapper[4861]: I0219 13:24:02.877415 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rb7gv" event={"ID":"dafff9fe-7428-4995-a1a2-acebf52562b1","Type":"ContainerDied","Data":"3766bb49f6925b977ef876cd1d1fe02ef60acb75d4aa84cafa12d088a5fac1a6"} Feb 19 13:24:02 crc kubenswrapper[4861]: I0219 13:24:02.879281 4861 scope.go:117] "RemoveContainer" containerID="a0bc47ad3871dfc8e060c49ebbdea8930bb554c673b0f03de57bb17468ad4749" Feb 19 13:24:02 crc kubenswrapper[4861]: I0219 13:24:02.879220 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rb7gv" Feb 19 13:24:02 crc kubenswrapper[4861]: I0219 13:24:02.884067 4861 generic.go:334] "Generic (PLEG): container finished" podID="f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4" containerID="08e67f299c3c74c2497bae0bedff7750bc9ff25a358a66e4e183f37cb48cbe8b" exitCode=0 Feb 19 13:24:02 crc kubenswrapper[4861]: I0219 13:24:02.884133 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts" event={"ID":"f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4","Type":"ContainerDied","Data":"08e67f299c3c74c2497bae0bedff7750bc9ff25a358a66e4e183f37cb48cbe8b"} Feb 19 13:24:02 crc kubenswrapper[4861]: I0219 13:24:02.951270 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rb7gv"] Feb 19 13:24:02 crc kubenswrapper[4861]: I0219 13:24:02.954006 4861 scope.go:117] "RemoveContainer" containerID="99a406e3c6d9d030518b4d1533ce5360882c46445753ac23544fafd4119554ac" Feb 19 13:24:02 crc kubenswrapper[4861]: I0219 13:24:02.954633 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rb7gv"] Feb 19 13:24:02 crc kubenswrapper[4861]: I0219 13:24:02.976679 4861 scope.go:117] "RemoveContainer" containerID="fb9c18d777344a68489e242f706bb9481a1e76cba99c41add80cf0050be856d5" Feb 19 13:24:03 crc kubenswrapper[4861]: I0219 13:24:03.834696 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:24:03 crc kubenswrapper[4861]: I0219 13:24:03.834801 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:24:03 crc kubenswrapper[4861]: I0219 13:24:03.834876 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 13:24:03 crc kubenswrapper[4861]: I0219 13:24:03.835834 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"172ce433d46e388504efbd8038cf7a4f97b7e544c89545b0b9a675e189350528"} pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 13:24:03 crc kubenswrapper[4861]: I0219 13:24:03.835944 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" containerID="cri-o://172ce433d46e388504efbd8038cf7a4f97b7e544c89545b0b9a675e189350528" gracePeriod=600 Feb 19 13:24:04 crc kubenswrapper[4861]: I0219 13:24:04.036876 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dafff9fe-7428-4995-a1a2-acebf52562b1" path="/var/lib/kubelet/pods/dafff9fe-7428-4995-a1a2-acebf52562b1/volumes" Feb 19 13:24:04 crc kubenswrapper[4861]: I0219 13:24:04.212712 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts" Feb 19 13:24:04 crc kubenswrapper[4861]: I0219 13:24:04.274683 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4-bundle\") pod \"f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4\" (UID: \"f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4\") " Feb 19 13:24:04 crc kubenswrapper[4861]: I0219 13:24:04.274757 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4-util\") pod \"f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4\" (UID: \"f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4\") " Feb 19 13:24:04 crc kubenswrapper[4861]: I0219 13:24:04.274824 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkd7s\" (UniqueName: \"kubernetes.io/projected/f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4-kube-api-access-qkd7s\") pod \"f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4\" (UID: \"f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4\") " Feb 19 13:24:04 crc kubenswrapper[4861]: I0219 13:24:04.276280 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4-bundle" (OuterVolumeSpecName: "bundle") pod "f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4" (UID: "f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:24:04 crc kubenswrapper[4861]: I0219 13:24:04.284476 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4-kube-api-access-qkd7s" (OuterVolumeSpecName: "kube-api-access-qkd7s") pod "f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4" (UID: "f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4"). InnerVolumeSpecName "kube-api-access-qkd7s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:24:04 crc kubenswrapper[4861]: I0219 13:24:04.308954 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4-util" (OuterVolumeSpecName: "util") pod "f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4" (UID: "f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:24:04 crc kubenswrapper[4861]: I0219 13:24:04.376005 4861 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:24:04 crc kubenswrapper[4861]: I0219 13:24:04.376037 4861 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4-util\") on node \"crc\" DevicePath \"\"" Feb 19 13:24:04 crc kubenswrapper[4861]: I0219 13:24:04.376046 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkd7s\" (UniqueName: \"kubernetes.io/projected/f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4-kube-api-access-qkd7s\") on node \"crc\" DevicePath \"\"" Feb 19 13:24:04 crc kubenswrapper[4861]: I0219 13:24:04.904778 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts" event={"ID":"f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4","Type":"ContainerDied","Data":"e8d2f522d3d2167722e401c6a3a58dfe1a058c542666ac6065700d5f766a43b2"} Feb 19 13:24:04 crc kubenswrapper[4861]: I0219 13:24:04.904827 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts" Feb 19 13:24:04 crc kubenswrapper[4861]: I0219 13:24:04.904843 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8d2f522d3d2167722e401c6a3a58dfe1a058c542666ac6065700d5f766a43b2" Feb 19 13:24:04 crc kubenswrapper[4861]: I0219 13:24:04.908167 4861 generic.go:334] "Generic (PLEG): container finished" podID="478e6971-05ac-43f2-99a2-cd93644c6227" containerID="172ce433d46e388504efbd8038cf7a4f97b7e544c89545b0b9a675e189350528" exitCode=0 Feb 19 13:24:04 crc kubenswrapper[4861]: I0219 13:24:04.908212 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerDied","Data":"172ce433d46e388504efbd8038cf7a4f97b7e544c89545b0b9a675e189350528"} Feb 19 13:24:04 crc kubenswrapper[4861]: I0219 13:24:04.908240 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"b97bdd517e8a4057d6d42657d06891ca0d7f0204df355e8596a23050ecb1ab6b"} Feb 19 13:24:04 crc kubenswrapper[4861]: I0219 13:24:04.908261 4861 scope.go:117] "RemoveContainer" containerID="25ef2498d1603371d170c7ba58d926ede3a215c63cff356671e923ef191e3ea4" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.017979 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7478dc68cb-jcz8t"] Feb 19 13:24:15 crc kubenswrapper[4861]: E0219 13:24:15.018998 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4" containerName="pull" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.019016 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4" 
containerName="pull" Feb 19 13:24:15 crc kubenswrapper[4861]: E0219 13:24:15.019028 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafff9fe-7428-4995-a1a2-acebf52562b1" containerName="registry-server" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.019037 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafff9fe-7428-4995-a1a2-acebf52562b1" containerName="registry-server" Feb 19 13:24:15 crc kubenswrapper[4861]: E0219 13:24:15.019054 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafff9fe-7428-4995-a1a2-acebf52562b1" containerName="extract-content" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.019063 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafff9fe-7428-4995-a1a2-acebf52562b1" containerName="extract-content" Feb 19 13:24:15 crc kubenswrapper[4861]: E0219 13:24:15.019076 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafff9fe-7428-4995-a1a2-acebf52562b1" containerName="extract-utilities" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.019085 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafff9fe-7428-4995-a1a2-acebf52562b1" containerName="extract-utilities" Feb 19 13:24:15 crc kubenswrapper[4861]: E0219 13:24:15.019109 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4" containerName="extract" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.019126 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4" containerName="extract" Feb 19 13:24:15 crc kubenswrapper[4861]: E0219 13:24:15.019153 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4" containerName="util" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.019166 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4" containerName="util" Feb 19 13:24:15 crc 
kubenswrapper[4861]: I0219 13:24:15.019325 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4" containerName="extract" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.019345 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="dafff9fe-7428-4995-a1a2-acebf52562b1" containerName="registry-server" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.019967 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7478dc68cb-jcz8t" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.022349 4861 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.022596 4861 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.022726 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.022763 4861 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-29vqs" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.024621 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.038227 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7478dc68cb-jcz8t"] Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.141581 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d342a353-dfb8-4e53-92a9-025e4bfbe49b-webhook-cert\") pod 
\"metallb-operator-controller-manager-7478dc68cb-jcz8t\" (UID: \"d342a353-dfb8-4e53-92a9-025e4bfbe49b\") " pod="metallb-system/metallb-operator-controller-manager-7478dc68cb-jcz8t" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.141636 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d342a353-dfb8-4e53-92a9-025e4bfbe49b-apiservice-cert\") pod \"metallb-operator-controller-manager-7478dc68cb-jcz8t\" (UID: \"d342a353-dfb8-4e53-92a9-025e4bfbe49b\") " pod="metallb-system/metallb-operator-controller-manager-7478dc68cb-jcz8t" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.141664 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9zkx\" (UniqueName: \"kubernetes.io/projected/d342a353-dfb8-4e53-92a9-025e4bfbe49b-kube-api-access-j9zkx\") pod \"metallb-operator-controller-manager-7478dc68cb-jcz8t\" (UID: \"d342a353-dfb8-4e53-92a9-025e4bfbe49b\") " pod="metallb-system/metallb-operator-controller-manager-7478dc68cb-jcz8t" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.242571 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d342a353-dfb8-4e53-92a9-025e4bfbe49b-webhook-cert\") pod \"metallb-operator-controller-manager-7478dc68cb-jcz8t\" (UID: \"d342a353-dfb8-4e53-92a9-025e4bfbe49b\") " pod="metallb-system/metallb-operator-controller-manager-7478dc68cb-jcz8t" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.242629 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d342a353-dfb8-4e53-92a9-025e4bfbe49b-apiservice-cert\") pod \"metallb-operator-controller-manager-7478dc68cb-jcz8t\" (UID: \"d342a353-dfb8-4e53-92a9-025e4bfbe49b\") " pod="metallb-system/metallb-operator-controller-manager-7478dc68cb-jcz8t" Feb 19 
13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.242658 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9zkx\" (UniqueName: \"kubernetes.io/projected/d342a353-dfb8-4e53-92a9-025e4bfbe49b-kube-api-access-j9zkx\") pod \"metallb-operator-controller-manager-7478dc68cb-jcz8t\" (UID: \"d342a353-dfb8-4e53-92a9-025e4bfbe49b\") " pod="metallb-system/metallb-operator-controller-manager-7478dc68cb-jcz8t" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.254582 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d342a353-dfb8-4e53-92a9-025e4bfbe49b-webhook-cert\") pod \"metallb-operator-controller-manager-7478dc68cb-jcz8t\" (UID: \"d342a353-dfb8-4e53-92a9-025e4bfbe49b\") " pod="metallb-system/metallb-operator-controller-manager-7478dc68cb-jcz8t" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.255060 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d342a353-dfb8-4e53-92a9-025e4bfbe49b-apiservice-cert\") pod \"metallb-operator-controller-manager-7478dc68cb-jcz8t\" (UID: \"d342a353-dfb8-4e53-92a9-025e4bfbe49b\") " pod="metallb-system/metallb-operator-controller-manager-7478dc68cb-jcz8t" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.265579 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9zkx\" (UniqueName: \"kubernetes.io/projected/d342a353-dfb8-4e53-92a9-025e4bfbe49b-kube-api-access-j9zkx\") pod \"metallb-operator-controller-manager-7478dc68cb-jcz8t\" (UID: \"d342a353-dfb8-4e53-92a9-025e4bfbe49b\") " pod="metallb-system/metallb-operator-controller-manager-7478dc68cb-jcz8t" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.336848 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7478dc68cb-jcz8t" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.376707 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-859d6bbc66-87l7v"] Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.377668 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-859d6bbc66-87l7v" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.383999 4861 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-h9wn9" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.384211 4861 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.384354 4861 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.397396 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-859d6bbc66-87l7v"] Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.445637 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmnv2\" (UniqueName: \"kubernetes.io/projected/78108700-377c-4f89-807d-ea987304a48f-kube-api-access-jmnv2\") pod \"metallb-operator-webhook-server-859d6bbc66-87l7v\" (UID: \"78108700-377c-4f89-807d-ea987304a48f\") " pod="metallb-system/metallb-operator-webhook-server-859d6bbc66-87l7v" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.445692 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/78108700-377c-4f89-807d-ea987304a48f-apiservice-cert\") pod 
\"metallb-operator-webhook-server-859d6bbc66-87l7v\" (UID: \"78108700-377c-4f89-807d-ea987304a48f\") " pod="metallb-system/metallb-operator-webhook-server-859d6bbc66-87l7v" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.445760 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/78108700-377c-4f89-807d-ea987304a48f-webhook-cert\") pod \"metallb-operator-webhook-server-859d6bbc66-87l7v\" (UID: \"78108700-377c-4f89-807d-ea987304a48f\") " pod="metallb-system/metallb-operator-webhook-server-859d6bbc66-87l7v" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.547142 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/78108700-377c-4f89-807d-ea987304a48f-webhook-cert\") pod \"metallb-operator-webhook-server-859d6bbc66-87l7v\" (UID: \"78108700-377c-4f89-807d-ea987304a48f\") " pod="metallb-system/metallb-operator-webhook-server-859d6bbc66-87l7v" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.547637 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmnv2\" (UniqueName: \"kubernetes.io/projected/78108700-377c-4f89-807d-ea987304a48f-kube-api-access-jmnv2\") pod \"metallb-operator-webhook-server-859d6bbc66-87l7v\" (UID: \"78108700-377c-4f89-807d-ea987304a48f\") " pod="metallb-system/metallb-operator-webhook-server-859d6bbc66-87l7v" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.547822 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/78108700-377c-4f89-807d-ea987304a48f-apiservice-cert\") pod \"metallb-operator-webhook-server-859d6bbc66-87l7v\" (UID: \"78108700-377c-4f89-807d-ea987304a48f\") " pod="metallb-system/metallb-operator-webhook-server-859d6bbc66-87l7v" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.551570 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/78108700-377c-4f89-807d-ea987304a48f-apiservice-cert\") pod \"metallb-operator-webhook-server-859d6bbc66-87l7v\" (UID: \"78108700-377c-4f89-807d-ea987304a48f\") " pod="metallb-system/metallb-operator-webhook-server-859d6bbc66-87l7v" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.554038 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/78108700-377c-4f89-807d-ea987304a48f-webhook-cert\") pod \"metallb-operator-webhook-server-859d6bbc66-87l7v\" (UID: \"78108700-377c-4f89-807d-ea987304a48f\") " pod="metallb-system/metallb-operator-webhook-server-859d6bbc66-87l7v" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.567580 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmnv2\" (UniqueName: \"kubernetes.io/projected/78108700-377c-4f89-807d-ea987304a48f-kube-api-access-jmnv2\") pod \"metallb-operator-webhook-server-859d6bbc66-87l7v\" (UID: \"78108700-377c-4f89-807d-ea987304a48f\") " pod="metallb-system/metallb-operator-webhook-server-859d6bbc66-87l7v" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.713702 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-859d6bbc66-87l7v" Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.853043 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7478dc68cb-jcz8t"] Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.952114 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-859d6bbc66-87l7v"] Feb 19 13:24:15 crc kubenswrapper[4861]: W0219 13:24:15.959343 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78108700_377c_4f89_807d_ea987304a48f.slice/crio-e74823dbddf1b73d367dd3f4975e1787f10b0f6350a82f502b0b4343a174d8cb WatchSource:0}: Error finding container e74823dbddf1b73d367dd3f4975e1787f10b0f6350a82f502b0b4343a174d8cb: Status 404 returned error can't find the container with id e74823dbddf1b73d367dd3f4975e1787f10b0f6350a82f502b0b4343a174d8cb Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.997594 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-859d6bbc66-87l7v" event={"ID":"78108700-377c-4f89-807d-ea987304a48f","Type":"ContainerStarted","Data":"e74823dbddf1b73d367dd3f4975e1787f10b0f6350a82f502b0b4343a174d8cb"} Feb 19 13:24:15 crc kubenswrapper[4861]: I0219 13:24:15.999445 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7478dc68cb-jcz8t" event={"ID":"d342a353-dfb8-4e53-92a9-025e4bfbe49b","Type":"ContainerStarted","Data":"41706dba6718c8f30144fdb607006857583338eec627aef00d131f6a29919980"} Feb 19 13:24:20 crc kubenswrapper[4861]: I0219 13:24:20.835933 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tgn4g"] Feb 19 13:24:20 crc kubenswrapper[4861]: I0219 13:24:20.837464 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tgn4g" Feb 19 13:24:20 crc kubenswrapper[4861]: I0219 13:24:20.861038 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tgn4g"] Feb 19 13:24:20 crc kubenswrapper[4861]: I0219 13:24:20.936866 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3-catalog-content\") pod \"certified-operators-tgn4g\" (UID: \"d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3\") " pod="openshift-marketplace/certified-operators-tgn4g" Feb 19 13:24:20 crc kubenswrapper[4861]: I0219 13:24:20.937407 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwpzt\" (UniqueName: \"kubernetes.io/projected/d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3-kube-api-access-pwpzt\") pod \"certified-operators-tgn4g\" (UID: \"d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3\") " pod="openshift-marketplace/certified-operators-tgn4g" Feb 19 13:24:20 crc kubenswrapper[4861]: I0219 13:24:20.937470 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3-utilities\") pod \"certified-operators-tgn4g\" (UID: \"d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3\") " pod="openshift-marketplace/certified-operators-tgn4g" Feb 19 13:24:21 crc kubenswrapper[4861]: I0219 13:24:21.033141 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-859d6bbc66-87l7v" event={"ID":"78108700-377c-4f89-807d-ea987304a48f","Type":"ContainerStarted","Data":"3d26f6a5e03272bbb0d994c6a0799ba8d74eec36fa8982bef96420a7c4a940b2"} Feb 19 13:24:21 crc kubenswrapper[4861]: I0219 13:24:21.033284 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-webhook-server-859d6bbc66-87l7v" Feb 19 13:24:21 crc kubenswrapper[4861]: I0219 13:24:21.035213 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7478dc68cb-jcz8t" event={"ID":"d342a353-dfb8-4e53-92a9-025e4bfbe49b","Type":"ContainerStarted","Data":"7e67b0acc5965795cf7ac6a491a40a3facf9d164bffe009b59db943359688fb6"} Feb 19 13:24:21 crc kubenswrapper[4861]: I0219 13:24:21.035399 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7478dc68cb-jcz8t" Feb 19 13:24:21 crc kubenswrapper[4861]: I0219 13:24:21.038975 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3-catalog-content\") pod \"certified-operators-tgn4g\" (UID: \"d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3\") " pod="openshift-marketplace/certified-operators-tgn4g" Feb 19 13:24:21 crc kubenswrapper[4861]: I0219 13:24:21.039646 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwpzt\" (UniqueName: \"kubernetes.io/projected/d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3-kube-api-access-pwpzt\") pod \"certified-operators-tgn4g\" (UID: \"d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3\") " pod="openshift-marketplace/certified-operators-tgn4g" Feb 19 13:24:21 crc kubenswrapper[4861]: I0219 13:24:21.039759 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3-catalog-content\") pod \"certified-operators-tgn4g\" (UID: \"d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3\") " pod="openshift-marketplace/certified-operators-tgn4g" Feb 19 13:24:21 crc kubenswrapper[4861]: I0219 13:24:21.039683 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3-utilities\") pod \"certified-operators-tgn4g\" (UID: \"d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3\") " pod="openshift-marketplace/certified-operators-tgn4g" Feb 19 13:24:21 crc kubenswrapper[4861]: I0219 13:24:21.040056 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3-utilities\") pod \"certified-operators-tgn4g\" (UID: \"d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3\") " pod="openshift-marketplace/certified-operators-tgn4g" Feb 19 13:24:21 crc kubenswrapper[4861]: I0219 13:24:21.055405 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-859d6bbc66-87l7v" podStartSLOduration=1.181757122 podStartE2EDuration="6.055388314s" podCreationTimestamp="2026-02-19 13:24:15 +0000 UTC" firstStartedPulling="2026-02-19 13:24:15.963322437 +0000 UTC m=+870.624425665" lastFinishedPulling="2026-02-19 13:24:20.836953609 +0000 UTC m=+875.498056857" observedRunningTime="2026-02-19 13:24:21.052128695 +0000 UTC m=+875.713231923" watchObservedRunningTime="2026-02-19 13:24:21.055388314 +0000 UTC m=+875.716491542" Feb 19 13:24:21 crc kubenswrapper[4861]: I0219 13:24:21.083950 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwpzt\" (UniqueName: \"kubernetes.io/projected/d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3-kube-api-access-pwpzt\") pod \"certified-operators-tgn4g\" (UID: \"d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3\") " pod="openshift-marketplace/certified-operators-tgn4g" Feb 19 13:24:21 crc kubenswrapper[4861]: I0219 13:24:21.084192 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7478dc68cb-jcz8t" podStartSLOduration=1.171866592 podStartE2EDuration="6.084179632s" podCreationTimestamp="2026-02-19 13:24:15 +0000 UTC" 
firstStartedPulling="2026-02-19 13:24:15.885482418 +0000 UTC m=+870.546585646" lastFinishedPulling="2026-02-19 13:24:20.797795428 +0000 UTC m=+875.458898686" observedRunningTime="2026-02-19 13:24:21.08118325 +0000 UTC m=+875.742286478" watchObservedRunningTime="2026-02-19 13:24:21.084179632 +0000 UTC m=+875.745282860" Feb 19 13:24:21 crc kubenswrapper[4861]: I0219 13:24:21.275828 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tgn4g" Feb 19 13:24:21 crc kubenswrapper[4861]: I0219 13:24:21.543546 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tgn4g"] Feb 19 13:24:22 crc kubenswrapper[4861]: I0219 13:24:22.041850 4861 generic.go:334] "Generic (PLEG): container finished" podID="d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3" containerID="4fc806692ac80325104057f9868f40dfd771f6dbc215f1917bc3b0df676ea190" exitCode=0 Feb 19 13:24:22 crc kubenswrapper[4861]: I0219 13:24:22.041959 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgn4g" event={"ID":"d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3","Type":"ContainerDied","Data":"4fc806692ac80325104057f9868f40dfd771f6dbc215f1917bc3b0df676ea190"} Feb 19 13:24:22 crc kubenswrapper[4861]: I0219 13:24:22.042511 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgn4g" event={"ID":"d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3","Type":"ContainerStarted","Data":"de079796cd73e50540cafa94b123f016fd6ce7fccc593d840f11074c103f9772"} Feb 19 13:24:23 crc kubenswrapper[4861]: I0219 13:24:23.052484 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgn4g" event={"ID":"d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3","Type":"ContainerStarted","Data":"9a6b56ba5ab57a9f6009534ebf638e2ef109ea0b72e837976c5c240cde05078c"} Feb 19 13:24:24 crc kubenswrapper[4861]: I0219 13:24:24.060562 4861 generic.go:334] 
"Generic (PLEG): container finished" podID="d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3" containerID="9a6b56ba5ab57a9f6009534ebf638e2ef109ea0b72e837976c5c240cde05078c" exitCode=0 Feb 19 13:24:24 crc kubenswrapper[4861]: I0219 13:24:24.060631 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgn4g" event={"ID":"d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3","Type":"ContainerDied","Data":"9a6b56ba5ab57a9f6009534ebf638e2ef109ea0b72e837976c5c240cde05078c"} Feb 19 13:24:25 crc kubenswrapper[4861]: I0219 13:24:25.069919 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgn4g" event={"ID":"d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3","Type":"ContainerStarted","Data":"425487f97ef4116a4c5c37ee88a4206926a8b0f132f1a460674267c9c882809e"} Feb 19 13:24:25 crc kubenswrapper[4861]: I0219 13:24:25.114841 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tgn4g" podStartSLOduration=2.716913673 podStartE2EDuration="5.114816395s" podCreationTimestamp="2026-02-19 13:24:20 +0000 UTC" firstStartedPulling="2026-02-19 13:24:22.043685368 +0000 UTC m=+876.704788596" lastFinishedPulling="2026-02-19 13:24:24.44158809 +0000 UTC m=+879.102691318" observedRunningTime="2026-02-19 13:24:25.107178216 +0000 UTC m=+879.768281534" watchObservedRunningTime="2026-02-19 13:24:25.114816395 +0000 UTC m=+879.775919623" Feb 19 13:24:31 crc kubenswrapper[4861]: I0219 13:24:31.276307 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tgn4g" Feb 19 13:24:31 crc kubenswrapper[4861]: I0219 13:24:31.277171 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tgn4g" Feb 19 13:24:31 crc kubenswrapper[4861]: I0219 13:24:31.349251 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-tgn4g" Feb 19 13:24:32 crc kubenswrapper[4861]: I0219 13:24:32.187238 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tgn4g" Feb 19 13:24:33 crc kubenswrapper[4861]: I0219 13:24:33.597934 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tgn4g"] Feb 19 13:24:34 crc kubenswrapper[4861]: I0219 13:24:34.135980 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tgn4g" podUID="d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3" containerName="registry-server" containerID="cri-o://425487f97ef4116a4c5c37ee88a4206926a8b0f132f1a460674267c9c882809e" gracePeriod=2 Feb 19 13:24:35 crc kubenswrapper[4861]: I0219 13:24:35.144194 4861 generic.go:334] "Generic (PLEG): container finished" podID="d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3" containerID="425487f97ef4116a4c5c37ee88a4206926a8b0f132f1a460674267c9c882809e" exitCode=0 Feb 19 13:24:35 crc kubenswrapper[4861]: I0219 13:24:35.144278 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgn4g" event={"ID":"d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3","Type":"ContainerDied","Data":"425487f97ef4116a4c5c37ee88a4206926a8b0f132f1a460674267c9c882809e"} Feb 19 13:24:35 crc kubenswrapper[4861]: I0219 13:24:35.144644 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgn4g" event={"ID":"d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3","Type":"ContainerDied","Data":"de079796cd73e50540cafa94b123f016fd6ce7fccc593d840f11074c103f9772"} Feb 19 13:24:35 crc kubenswrapper[4861]: I0219 13:24:35.144664 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de079796cd73e50540cafa94b123f016fd6ce7fccc593d840f11074c103f9772" Feb 19 13:24:35 crc kubenswrapper[4861]: I0219 13:24:35.161926 4861 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tgn4g" Feb 19 13:24:35 crc kubenswrapper[4861]: I0219 13:24:35.258032 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3-catalog-content\") pod \"d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3\" (UID: \"d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3\") " Feb 19 13:24:35 crc kubenswrapper[4861]: I0219 13:24:35.258171 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3-utilities\") pod \"d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3\" (UID: \"d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3\") " Feb 19 13:24:35 crc kubenswrapper[4861]: I0219 13:24:35.258211 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwpzt\" (UniqueName: \"kubernetes.io/projected/d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3-kube-api-access-pwpzt\") pod \"d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3\" (UID: \"d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3\") " Feb 19 13:24:35 crc kubenswrapper[4861]: I0219 13:24:35.259313 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3-utilities" (OuterVolumeSpecName: "utilities") pod "d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3" (UID: "d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:24:35 crc kubenswrapper[4861]: I0219 13:24:35.264908 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3-kube-api-access-pwpzt" (OuterVolumeSpecName: "kube-api-access-pwpzt") pod "d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3" (UID: "d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3"). 
InnerVolumeSpecName "kube-api-access-pwpzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:24:35 crc kubenswrapper[4861]: I0219 13:24:35.321997 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3" (UID: "d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:24:35 crc kubenswrapper[4861]: I0219 13:24:35.359984 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:24:35 crc kubenswrapper[4861]: I0219 13:24:35.360043 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:24:35 crc kubenswrapper[4861]: I0219 13:24:35.360058 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwpzt\" (UniqueName: \"kubernetes.io/projected/d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3-kube-api-access-pwpzt\") on node \"crc\" DevicePath \"\"" Feb 19 13:24:35 crc kubenswrapper[4861]: I0219 13:24:35.720855 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-859d6bbc66-87l7v" Feb 19 13:24:36 crc kubenswrapper[4861]: I0219 13:24:36.152163 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tgn4g" Feb 19 13:24:36 crc kubenswrapper[4861]: I0219 13:24:36.183509 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tgn4g"] Feb 19 13:24:36 crc kubenswrapper[4861]: I0219 13:24:36.192669 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tgn4g"] Feb 19 13:24:38 crc kubenswrapper[4861]: I0219 13:24:38.024915 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3" path="/var/lib/kubelet/pods/d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3/volumes" Feb 19 13:24:55 crc kubenswrapper[4861]: I0219 13:24:55.341087 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7478dc68cb-jcz8t" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.156970 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-kwb5k"] Feb 19 13:24:56 crc kubenswrapper[4861]: E0219 13:24:56.157786 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3" containerName="registry-server" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.157825 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3" containerName="registry-server" Feb 19 13:24:56 crc kubenswrapper[4861]: E0219 13:24:56.157855 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3" containerName="extract-utilities" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.157868 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3" containerName="extract-utilities" Feb 19 13:24:56 crc kubenswrapper[4861]: E0219 13:24:56.157894 4861 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3" containerName="extract-content" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.157911 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3" containerName="extract-content" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.158099 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="d78b5458-5b3d-4cb0-b5bd-a7b4985aedc3" containerName="registry-server" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.161658 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-kwb5k" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.162524 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-lvz2c"] Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.163371 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-lvz2c" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.165102 4861 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.165403 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.166234 4861 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-nc964" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.166598 4861 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.173630 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-lvz2c"] Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.268249 4861 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["metallb-system/speaker-6dkdq"] Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.273504 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-6dkdq" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.277095 4861 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-95dm4" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.277259 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-ppsgt"] Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.278096 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-ppsgt" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.280966 4861 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.282199 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.285524 4861 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.289129 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6282f27-6c36-4b95-b3d8-32be4da3efec-metrics-certs\") pod \"frr-k8s-kwb5k\" (UID: \"a6282f27-6c36-4b95-b3d8-32be4da3efec\") " pod="metallb-system/frr-k8s-kwb5k" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.289168 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a6282f27-6c36-4b95-b3d8-32be4da3efec-reloader\") pod \"frr-k8s-kwb5k\" (UID: \"a6282f27-6c36-4b95-b3d8-32be4da3efec\") " 
pod="metallb-system/frr-k8s-kwb5k" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.289197 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a6282f27-6c36-4b95-b3d8-32be4da3efec-frr-conf\") pod \"frr-k8s-kwb5k\" (UID: \"a6282f27-6c36-4b95-b3d8-32be4da3efec\") " pod="metallb-system/frr-k8s-kwb5k" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.289230 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l5pn\" (UniqueName: \"kubernetes.io/projected/a6282f27-6c36-4b95-b3d8-32be4da3efec-kube-api-access-7l5pn\") pod \"frr-k8s-kwb5k\" (UID: \"a6282f27-6c36-4b95-b3d8-32be4da3efec\") " pod="metallb-system/frr-k8s-kwb5k" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.289367 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a6282f27-6c36-4b95-b3d8-32be4da3efec-frr-sockets\") pod \"frr-k8s-kwb5k\" (UID: \"a6282f27-6c36-4b95-b3d8-32be4da3efec\") " pod="metallb-system/frr-k8s-kwb5k" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.289409 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a6282f27-6c36-4b95-b3d8-32be4da3efec-metrics\") pod \"frr-k8s-kwb5k\" (UID: \"a6282f27-6c36-4b95-b3d8-32be4da3efec\") " pod="metallb-system/frr-k8s-kwb5k" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.289455 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/346acb4a-b1d2-4ac4-937a-142dc81f5633-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-lvz2c\" (UID: \"346acb4a-b1d2-4ac4-937a-142dc81f5633\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-lvz2c" Feb 19 13:24:56 crc 
kubenswrapper[4861]: I0219 13:24:56.289493 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9sdh\" (UniqueName: \"kubernetes.io/projected/346acb4a-b1d2-4ac4-937a-142dc81f5633-kube-api-access-c9sdh\") pod \"frr-k8s-webhook-server-78b44bf5bb-lvz2c\" (UID: \"346acb4a-b1d2-4ac4-937a-142dc81f5633\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-lvz2c" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.289522 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a6282f27-6c36-4b95-b3d8-32be4da3efec-frr-startup\") pod \"frr-k8s-kwb5k\" (UID: \"a6282f27-6c36-4b95-b3d8-32be4da3efec\") " pod="metallb-system/frr-k8s-kwb5k" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.289890 4861 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.291000 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-ppsgt"] Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.390370 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9sdh\" (UniqueName: \"kubernetes.io/projected/346acb4a-b1d2-4ac4-937a-142dc81f5633-kube-api-access-c9sdh\") pod \"frr-k8s-webhook-server-78b44bf5bb-lvz2c\" (UID: \"346acb4a-b1d2-4ac4-937a-142dc81f5633\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-lvz2c" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.390432 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c7a7e02-cb5b-4ac8-bb6f-cb569822f54a-metrics-certs\") pod \"controller-69bbfbf88f-ppsgt\" (UID: \"3c7a7e02-cb5b-4ac8-bb6f-cb569822f54a\") " pod="metallb-system/controller-69bbfbf88f-ppsgt" Feb 19 13:24:56 crc 
kubenswrapper[4861]: I0219 13:24:56.390453 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a6282f27-6c36-4b95-b3d8-32be4da3efec-frr-startup\") pod \"frr-k8s-kwb5k\" (UID: \"a6282f27-6c36-4b95-b3d8-32be4da3efec\") " pod="metallb-system/frr-k8s-kwb5k" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.390522 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgn5w\" (UniqueName: \"kubernetes.io/projected/3c7a7e02-cb5b-4ac8-bb6f-cb569822f54a-kube-api-access-zgn5w\") pod \"controller-69bbfbf88f-ppsgt\" (UID: \"3c7a7e02-cb5b-4ac8-bb6f-cb569822f54a\") " pod="metallb-system/controller-69bbfbf88f-ppsgt" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.390680 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6282f27-6c36-4b95-b3d8-32be4da3efec-metrics-certs\") pod \"frr-k8s-kwb5k\" (UID: \"a6282f27-6c36-4b95-b3d8-32be4da3efec\") " pod="metallb-system/frr-k8s-kwb5k" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.390717 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a6282f27-6c36-4b95-b3d8-32be4da3efec-reloader\") pod \"frr-k8s-kwb5k\" (UID: \"a6282f27-6c36-4b95-b3d8-32be4da3efec\") " pod="metallb-system/frr-k8s-kwb5k" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.390778 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a6282f27-6c36-4b95-b3d8-32be4da3efec-frr-conf\") pod \"frr-k8s-kwb5k\" (UID: \"a6282f27-6c36-4b95-b3d8-32be4da3efec\") " pod="metallb-system/frr-k8s-kwb5k" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.390825 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/3c7a7e02-cb5b-4ac8-bb6f-cb569822f54a-cert\") pod \"controller-69bbfbf88f-ppsgt\" (UID: \"3c7a7e02-cb5b-4ac8-bb6f-cb569822f54a\") " pod="metallb-system/controller-69bbfbf88f-ppsgt" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.390873 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l5pn\" (UniqueName: \"kubernetes.io/projected/a6282f27-6c36-4b95-b3d8-32be4da3efec-kube-api-access-7l5pn\") pod \"frr-k8s-kwb5k\" (UID: \"a6282f27-6c36-4b95-b3d8-32be4da3efec\") " pod="metallb-system/frr-k8s-kwb5k" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.390959 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0f25c16e-29a3-4f83-82cf-4c7fc841bff2-metallb-excludel2\") pod \"speaker-6dkdq\" (UID: \"0f25c16e-29a3-4f83-82cf-4c7fc841bff2\") " pod="metallb-system/speaker-6dkdq" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.390987 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njtbm\" (UniqueName: \"kubernetes.io/projected/0f25c16e-29a3-4f83-82cf-4c7fc841bff2-kube-api-access-njtbm\") pod \"speaker-6dkdq\" (UID: \"0f25c16e-29a3-4f83-82cf-4c7fc841bff2\") " pod="metallb-system/speaker-6dkdq" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.391040 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0f25c16e-29a3-4f83-82cf-4c7fc841bff2-memberlist\") pod \"speaker-6dkdq\" (UID: \"0f25c16e-29a3-4f83-82cf-4c7fc841bff2\") " pod="metallb-system/speaker-6dkdq" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.391065 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: 
\"kubernetes.io/empty-dir/a6282f27-6c36-4b95-b3d8-32be4da3efec-frr-sockets\") pod \"frr-k8s-kwb5k\" (UID: \"a6282f27-6c36-4b95-b3d8-32be4da3efec\") " pod="metallb-system/frr-k8s-kwb5k" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.391101 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a6282f27-6c36-4b95-b3d8-32be4da3efec-metrics\") pod \"frr-k8s-kwb5k\" (UID: \"a6282f27-6c36-4b95-b3d8-32be4da3efec\") " pod="metallb-system/frr-k8s-kwb5k" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.391126 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/346acb4a-b1d2-4ac4-937a-142dc81f5633-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-lvz2c\" (UID: \"346acb4a-b1d2-4ac4-937a-142dc81f5633\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-lvz2c" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.391154 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0f25c16e-29a3-4f83-82cf-4c7fc841bff2-metrics-certs\") pod \"speaker-6dkdq\" (UID: \"0f25c16e-29a3-4f83-82cf-4c7fc841bff2\") " pod="metallb-system/speaker-6dkdq" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.391207 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a6282f27-6c36-4b95-b3d8-32be4da3efec-reloader\") pod \"frr-k8s-kwb5k\" (UID: \"a6282f27-6c36-4b95-b3d8-32be4da3efec\") " pod="metallb-system/frr-k8s-kwb5k" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.391382 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a6282f27-6c36-4b95-b3d8-32be4da3efec-frr-startup\") pod \"frr-k8s-kwb5k\" (UID: \"a6282f27-6c36-4b95-b3d8-32be4da3efec\") " pod="metallb-system/frr-k8s-kwb5k" 
Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.391643 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a6282f27-6c36-4b95-b3d8-32be4da3efec-frr-sockets\") pod \"frr-k8s-kwb5k\" (UID: \"a6282f27-6c36-4b95-b3d8-32be4da3efec\") " pod="metallb-system/frr-k8s-kwb5k" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.391685 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a6282f27-6c36-4b95-b3d8-32be4da3efec-metrics\") pod \"frr-k8s-kwb5k\" (UID: \"a6282f27-6c36-4b95-b3d8-32be4da3efec\") " pod="metallb-system/frr-k8s-kwb5k" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.391900 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a6282f27-6c36-4b95-b3d8-32be4da3efec-frr-conf\") pod \"frr-k8s-kwb5k\" (UID: \"a6282f27-6c36-4b95-b3d8-32be4da3efec\") " pod="metallb-system/frr-k8s-kwb5k" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.399685 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6282f27-6c36-4b95-b3d8-32be4da3efec-metrics-certs\") pod \"frr-k8s-kwb5k\" (UID: \"a6282f27-6c36-4b95-b3d8-32be4da3efec\") " pod="metallb-system/frr-k8s-kwb5k" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.400077 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/346acb4a-b1d2-4ac4-937a-142dc81f5633-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-lvz2c\" (UID: \"346acb4a-b1d2-4ac4-937a-142dc81f5633\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-lvz2c" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.417484 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9sdh\" (UniqueName: 
\"kubernetes.io/projected/346acb4a-b1d2-4ac4-937a-142dc81f5633-kube-api-access-c9sdh\") pod \"frr-k8s-webhook-server-78b44bf5bb-lvz2c\" (UID: \"346acb4a-b1d2-4ac4-937a-142dc81f5633\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-lvz2c" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.422216 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l5pn\" (UniqueName: \"kubernetes.io/projected/a6282f27-6c36-4b95-b3d8-32be4da3efec-kube-api-access-7l5pn\") pod \"frr-k8s-kwb5k\" (UID: \"a6282f27-6c36-4b95-b3d8-32be4da3efec\") " pod="metallb-system/frr-k8s-kwb5k" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.483502 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-kwb5k" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.494750 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-lvz2c" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.495075 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c7a7e02-cb5b-4ac8-bb6f-cb569822f54a-metrics-certs\") pod \"controller-69bbfbf88f-ppsgt\" (UID: \"3c7a7e02-cb5b-4ac8-bb6f-cb569822f54a\") " pod="metallb-system/controller-69bbfbf88f-ppsgt" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.495144 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgn5w\" (UniqueName: \"kubernetes.io/projected/3c7a7e02-cb5b-4ac8-bb6f-cb569822f54a-kube-api-access-zgn5w\") pod \"controller-69bbfbf88f-ppsgt\" (UID: \"3c7a7e02-cb5b-4ac8-bb6f-cb569822f54a\") " pod="metallb-system/controller-69bbfbf88f-ppsgt" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.495213 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/3c7a7e02-cb5b-4ac8-bb6f-cb569822f54a-cert\") pod \"controller-69bbfbf88f-ppsgt\" (UID: \"3c7a7e02-cb5b-4ac8-bb6f-cb569822f54a\") " pod="metallb-system/controller-69bbfbf88f-ppsgt" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.495265 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0f25c16e-29a3-4f83-82cf-4c7fc841bff2-metallb-excludel2\") pod \"speaker-6dkdq\" (UID: \"0f25c16e-29a3-4f83-82cf-4c7fc841bff2\") " pod="metallb-system/speaker-6dkdq" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.495285 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njtbm\" (UniqueName: \"kubernetes.io/projected/0f25c16e-29a3-4f83-82cf-4c7fc841bff2-kube-api-access-njtbm\") pod \"speaker-6dkdq\" (UID: \"0f25c16e-29a3-4f83-82cf-4c7fc841bff2\") " pod="metallb-system/speaker-6dkdq" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.495306 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0f25c16e-29a3-4f83-82cf-4c7fc841bff2-memberlist\") pod \"speaker-6dkdq\" (UID: \"0f25c16e-29a3-4f83-82cf-4c7fc841bff2\") " pod="metallb-system/speaker-6dkdq" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.495327 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0f25c16e-29a3-4f83-82cf-4c7fc841bff2-metrics-certs\") pod \"speaker-6dkdq\" (UID: \"0f25c16e-29a3-4f83-82cf-4c7fc841bff2\") " pod="metallb-system/speaker-6dkdq" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.496400 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0f25c16e-29a3-4f83-82cf-4c7fc841bff2-metallb-excludel2\") pod \"speaker-6dkdq\" (UID: \"0f25c16e-29a3-4f83-82cf-4c7fc841bff2\") " 
pod="metallb-system/speaker-6dkdq" Feb 19 13:24:56 crc kubenswrapper[4861]: E0219 13:24:56.496506 4861 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 19 13:24:56 crc kubenswrapper[4861]: E0219 13:24:56.496556 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f25c16e-29a3-4f83-82cf-4c7fc841bff2-memberlist podName:0f25c16e-29a3-4f83-82cf-4c7fc841bff2 nodeName:}" failed. No retries permitted until 2026-02-19 13:24:56.996540222 +0000 UTC m=+911.657643450 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/0f25c16e-29a3-4f83-82cf-4c7fc841bff2-memberlist") pod "speaker-6dkdq" (UID: "0f25c16e-29a3-4f83-82cf-4c7fc841bff2") : secret "metallb-memberlist" not found Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.499365 4861 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.499665 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0f25c16e-29a3-4f83-82cf-4c7fc841bff2-metrics-certs\") pod \"speaker-6dkdq\" (UID: \"0f25c16e-29a3-4f83-82cf-4c7fc841bff2\") " pod="metallb-system/speaker-6dkdq" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.500939 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c7a7e02-cb5b-4ac8-bb6f-cb569822f54a-metrics-certs\") pod \"controller-69bbfbf88f-ppsgt\" (UID: \"3c7a7e02-cb5b-4ac8-bb6f-cb569822f54a\") " pod="metallb-system/controller-69bbfbf88f-ppsgt" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.515638 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njtbm\" (UniqueName: \"kubernetes.io/projected/0f25c16e-29a3-4f83-82cf-4c7fc841bff2-kube-api-access-njtbm\") pod 
\"speaker-6dkdq\" (UID: \"0f25c16e-29a3-4f83-82cf-4c7fc841bff2\") " pod="metallb-system/speaker-6dkdq" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.530141 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3c7a7e02-cb5b-4ac8-bb6f-cb569822f54a-cert\") pod \"controller-69bbfbf88f-ppsgt\" (UID: \"3c7a7e02-cb5b-4ac8-bb6f-cb569822f54a\") " pod="metallb-system/controller-69bbfbf88f-ppsgt" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.531713 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgn5w\" (UniqueName: \"kubernetes.io/projected/3c7a7e02-cb5b-4ac8-bb6f-cb569822f54a-kube-api-access-zgn5w\") pod \"controller-69bbfbf88f-ppsgt\" (UID: \"3c7a7e02-cb5b-4ac8-bb6f-cb569822f54a\") " pod="metallb-system/controller-69bbfbf88f-ppsgt" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.596958 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-ppsgt" Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.738565 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-lvz2c"] Feb 19 13:24:56 crc kubenswrapper[4861]: I0219 13:24:56.794694 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-ppsgt"] Feb 19 13:24:56 crc kubenswrapper[4861]: W0219 13:24:56.799964 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c7a7e02_cb5b_4ac8_bb6f_cb569822f54a.slice/crio-10c8a635bfcbb9a9f8c32715f5147361550373b42fd6d6f398980b1ad0019fb3 WatchSource:0}: Error finding container 10c8a635bfcbb9a9f8c32715f5147361550373b42fd6d6f398980b1ad0019fb3: Status 404 returned error can't find the container with id 10c8a635bfcbb9a9f8c32715f5147361550373b42fd6d6f398980b1ad0019fb3 Feb 19 13:24:57 crc kubenswrapper[4861]: I0219 13:24:57.002514 
4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0f25c16e-29a3-4f83-82cf-4c7fc841bff2-memberlist\") pod \"speaker-6dkdq\" (UID: \"0f25c16e-29a3-4f83-82cf-4c7fc841bff2\") " pod="metallb-system/speaker-6dkdq" Feb 19 13:24:57 crc kubenswrapper[4861]: E0219 13:24:57.002900 4861 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 19 13:24:57 crc kubenswrapper[4861]: E0219 13:24:57.002996 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f25c16e-29a3-4f83-82cf-4c7fc841bff2-memberlist podName:0f25c16e-29a3-4f83-82cf-4c7fc841bff2 nodeName:}" failed. No retries permitted until 2026-02-19 13:24:58.002973511 +0000 UTC m=+912.664076759 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/0f25c16e-29a3-4f83-82cf-4c7fc841bff2-memberlist") pod "speaker-6dkdq" (UID: "0f25c16e-29a3-4f83-82cf-4c7fc841bff2") : secret "metallb-memberlist" not found Feb 19 13:24:57 crc kubenswrapper[4861]: I0219 13:24:57.322867 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-ppsgt" event={"ID":"3c7a7e02-cb5b-4ac8-bb6f-cb569822f54a","Type":"ContainerStarted","Data":"9e67be2576586c55bb1d871736b7406cdaa6f0c19a17b780d5ef8d2fd69e61ed"} Feb 19 13:24:57 crc kubenswrapper[4861]: I0219 13:24:57.322937 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-ppsgt" event={"ID":"3c7a7e02-cb5b-4ac8-bb6f-cb569822f54a","Type":"ContainerStarted","Data":"f336073771b700ad1a8da9851d4bc5d8af9d734fac7c808cc95e245cfdcc6fa5"} Feb 19 13:24:57 crc kubenswrapper[4861]: I0219 13:24:57.322973 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-ppsgt" 
event={"ID":"3c7a7e02-cb5b-4ac8-bb6f-cb569822f54a","Type":"ContainerStarted","Data":"10c8a635bfcbb9a9f8c32715f5147361550373b42fd6d6f398980b1ad0019fb3"} Feb 19 13:24:57 crc kubenswrapper[4861]: I0219 13:24:57.323020 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-ppsgt" Feb 19 13:24:57 crc kubenswrapper[4861]: I0219 13:24:57.324369 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-lvz2c" event={"ID":"346acb4a-b1d2-4ac4-937a-142dc81f5633","Type":"ContainerStarted","Data":"d34d4526b4a986fb7affe8ff894fd9021e54689357ec0c7e782a30aec2886b45"} Feb 19 13:24:57 crc kubenswrapper[4861]: I0219 13:24:57.326063 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kwb5k" event={"ID":"a6282f27-6c36-4b95-b3d8-32be4da3efec","Type":"ContainerStarted","Data":"a94e3802048442982d9d86cc83b497638af7dbaca49dd1cc47ade1ed16c300e9"} Feb 19 13:24:57 crc kubenswrapper[4861]: I0219 13:24:57.349397 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-ppsgt" podStartSLOduration=1.349374367 podStartE2EDuration="1.349374367s" podCreationTimestamp="2026-02-19 13:24:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:24:57.344814802 +0000 UTC m=+912.005918040" watchObservedRunningTime="2026-02-19 13:24:57.349374367 +0000 UTC m=+912.010477615" Feb 19 13:24:58 crc kubenswrapper[4861]: I0219 13:24:58.020318 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0f25c16e-29a3-4f83-82cf-4c7fc841bff2-memberlist\") pod \"speaker-6dkdq\" (UID: \"0f25c16e-29a3-4f83-82cf-4c7fc841bff2\") " pod="metallb-system/speaker-6dkdq" Feb 19 13:24:58 crc kubenswrapper[4861]: I0219 13:24:58.026842 4861 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0f25c16e-29a3-4f83-82cf-4c7fc841bff2-memberlist\") pod \"speaker-6dkdq\" (UID: \"0f25c16e-29a3-4f83-82cf-4c7fc841bff2\") " pod="metallb-system/speaker-6dkdq" Feb 19 13:24:58 crc kubenswrapper[4861]: I0219 13:24:58.088736 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-6dkdq" Feb 19 13:24:58 crc kubenswrapper[4861]: I0219 13:24:58.333382 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6dkdq" event={"ID":"0f25c16e-29a3-4f83-82cf-4c7fc841bff2","Type":"ContainerStarted","Data":"4519bcf0885ed9cbe8583dc1769dc7a6f6ff302dd7f03c123adc7105aebd1738"} Feb 19 13:24:59 crc kubenswrapper[4861]: I0219 13:24:59.339556 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6dkdq" event={"ID":"0f25c16e-29a3-4f83-82cf-4c7fc841bff2","Type":"ContainerStarted","Data":"742b3d921b1cbdf6ebd814620883990ecb8f2b388c4ce9e9bdcaa31e3d85820d"} Feb 19 13:24:59 crc kubenswrapper[4861]: I0219 13:24:59.339874 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6dkdq" event={"ID":"0f25c16e-29a3-4f83-82cf-4c7fc841bff2","Type":"ContainerStarted","Data":"37255883dfff81f34d691e5bbed11bce4e29feec6777b9828eaa6e1d0672d255"} Feb 19 13:24:59 crc kubenswrapper[4861]: I0219 13:24:59.340681 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-6dkdq" Feb 19 13:25:04 crc kubenswrapper[4861]: I0219 13:25:04.394711 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-lvz2c" event={"ID":"346acb4a-b1d2-4ac4-937a-142dc81f5633","Type":"ContainerStarted","Data":"60b40207a7cf68abb8ffb71dfffca2ed67d77e5e352d74df80316e5002884f5e"} Feb 19 13:25:04 crc kubenswrapper[4861]: I0219 13:25:04.395762 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-lvz2c" Feb 19 13:25:04 crc kubenswrapper[4861]: I0219 13:25:04.399065 4861 generic.go:334] "Generic (PLEG): container finished" podID="a6282f27-6c36-4b95-b3d8-32be4da3efec" containerID="5c3d20fbf2bbf29d8aab623cf740817a45628563d89d8150ad2191cdd60fd9e7" exitCode=0 Feb 19 13:25:04 crc kubenswrapper[4861]: I0219 13:25:04.399127 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kwb5k" event={"ID":"a6282f27-6c36-4b95-b3d8-32be4da3efec","Type":"ContainerDied","Data":"5c3d20fbf2bbf29d8aab623cf740817a45628563d89d8150ad2191cdd60fd9e7"} Feb 19 13:25:04 crc kubenswrapper[4861]: I0219 13:25:04.430161 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-6dkdq" podStartSLOduration=8.43012452 podStartE2EDuration="8.43012452s" podCreationTimestamp="2026-02-19 13:24:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:24:59.365639596 +0000 UTC m=+914.026742834" watchObservedRunningTime="2026-02-19 13:25:04.43012452 +0000 UTC m=+919.091227748" Feb 19 13:25:04 crc kubenswrapper[4861]: I0219 13:25:04.456112 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-lvz2c" podStartSLOduration=1.823687021 podStartE2EDuration="8.456090267s" podCreationTimestamp="2026-02-19 13:24:56 +0000 UTC" firstStartedPulling="2026-02-19 13:24:56.751642078 +0000 UTC m=+911.412745306" lastFinishedPulling="2026-02-19 13:25:03.384045324 +0000 UTC m=+918.045148552" observedRunningTime="2026-02-19 13:25:04.427031775 +0000 UTC m=+919.088135003" watchObservedRunningTime="2026-02-19 13:25:04.456090267 +0000 UTC m=+919.117193495" Feb 19 13:25:05 crc kubenswrapper[4861]: I0219 13:25:05.410675 4861 generic.go:334] "Generic (PLEG): container finished" podID="a6282f27-6c36-4b95-b3d8-32be4da3efec" 
containerID="0600293baa625f6f2996a623d5d0c6d4f7fa546724e6b9aea9402670a4a719e4" exitCode=0 Feb 19 13:25:05 crc kubenswrapper[4861]: I0219 13:25:05.410758 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kwb5k" event={"ID":"a6282f27-6c36-4b95-b3d8-32be4da3efec","Type":"ContainerDied","Data":"0600293baa625f6f2996a623d5d0c6d4f7fa546724e6b9aea9402670a4a719e4"} Feb 19 13:25:06 crc kubenswrapper[4861]: I0219 13:25:06.423876 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kwb5k" event={"ID":"a6282f27-6c36-4b95-b3d8-32be4da3efec","Type":"ContainerDied","Data":"9f90e562919564c32f1ef3c5570425b83df38734da51140dba59c7e0d3f123d7"} Feb 19 13:25:06 crc kubenswrapper[4861]: I0219 13:25:06.424202 4861 generic.go:334] "Generic (PLEG): container finished" podID="a6282f27-6c36-4b95-b3d8-32be4da3efec" containerID="9f90e562919564c32f1ef3c5570425b83df38734da51140dba59c7e0d3f123d7" exitCode=0 Feb 19 13:25:06 crc kubenswrapper[4861]: I0219 13:25:06.607073 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-ppsgt" Feb 19 13:25:07 crc kubenswrapper[4861]: I0219 13:25:07.437241 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kwb5k" event={"ID":"a6282f27-6c36-4b95-b3d8-32be4da3efec","Type":"ContainerStarted","Data":"10e8b23b7e406f52b34946cc73bcb9d311b6cfcfae13f56566aaaeca5f3018c0"} Feb 19 13:25:07 crc kubenswrapper[4861]: I0219 13:25:07.437289 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kwb5k" event={"ID":"a6282f27-6c36-4b95-b3d8-32be4da3efec","Type":"ContainerStarted","Data":"f61946fb54b2acb5b5494ebfdd8f4cb59a3abc4a941c98a149f6737822bb5aa7"} Feb 19 13:25:07 crc kubenswrapper[4861]: I0219 13:25:07.437304 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kwb5k" 
event={"ID":"a6282f27-6c36-4b95-b3d8-32be4da3efec","Type":"ContainerStarted","Data":"d28a7d3279481346842e9ec51fa0b286d30d7f355268c4e02870b7d59e52f854"} Feb 19 13:25:07 crc kubenswrapper[4861]: I0219 13:25:07.437316 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kwb5k" event={"ID":"a6282f27-6c36-4b95-b3d8-32be4da3efec","Type":"ContainerStarted","Data":"0de3d4e31153f1a2326b44402110641ebfc6e9b359ab66867ee15a2edf8712eb"} Feb 19 13:25:08 crc kubenswrapper[4861]: I0219 13:25:08.093245 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-6dkdq" Feb 19 13:25:08 crc kubenswrapper[4861]: I0219 13:25:08.451009 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kwb5k" event={"ID":"a6282f27-6c36-4b95-b3d8-32be4da3efec","Type":"ContainerStarted","Data":"7acc33486a375de1fa3432107cdc601f029397a09a460da82279f786ec4a7ee8"} Feb 19 13:25:08 crc kubenswrapper[4861]: I0219 13:25:08.451068 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kwb5k" event={"ID":"a6282f27-6c36-4b95-b3d8-32be4da3efec","Type":"ContainerStarted","Data":"b2b52ce5cfa02b67787d75fc2ee5ee9c04cf40a7c775e528c0ab783264a6c1a4"} Feb 19 13:25:08 crc kubenswrapper[4861]: I0219 13:25:08.452144 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-kwb5k" Feb 19 13:25:08 crc kubenswrapper[4861]: I0219 13:25:08.490683 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-kwb5k" podStartSLOduration=5.742880992 podStartE2EDuration="12.49064687s" podCreationTimestamp="2026-02-19 13:24:56 +0000 UTC" firstStartedPulling="2026-02-19 13:24:56.63425685 +0000 UTC m=+911.295360078" lastFinishedPulling="2026-02-19 13:25:03.382022718 +0000 UTC m=+918.043125956" observedRunningTime="2026-02-19 13:25:08.485751775 +0000 UTC m=+923.146855043" watchObservedRunningTime="2026-02-19 13:25:08.49064687 +0000 UTC 
m=+923.151750138" Feb 19 13:25:09 crc kubenswrapper[4861]: I0219 13:25:09.662118 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8"] Feb 19 13:25:09 crc kubenswrapper[4861]: I0219 13:25:09.663982 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8" Feb 19 13:25:09 crc kubenswrapper[4861]: I0219 13:25:09.666101 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 13:25:09 crc kubenswrapper[4861]: I0219 13:25:09.675914 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8"] Feb 19 13:25:09 crc kubenswrapper[4861]: I0219 13:25:09.799465 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68b57f3f-9684-4185-bc93-3f7b59ba1c68-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8\" (UID: \"68b57f3f-9684-4185-bc93-3f7b59ba1c68\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8" Feb 19 13:25:09 crc kubenswrapper[4861]: I0219 13:25:09.799549 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz7sd\" (UniqueName: \"kubernetes.io/projected/68b57f3f-9684-4185-bc93-3f7b59ba1c68-kube-api-access-lz7sd\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8\" (UID: \"68b57f3f-9684-4185-bc93-3f7b59ba1c68\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8" Feb 19 13:25:09 crc kubenswrapper[4861]: I0219 13:25:09.799587 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" 
(UniqueName: \"kubernetes.io/empty-dir/68b57f3f-9684-4185-bc93-3f7b59ba1c68-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8\" (UID: \"68b57f3f-9684-4185-bc93-3f7b59ba1c68\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8" Feb 19 13:25:09 crc kubenswrapper[4861]: I0219 13:25:09.901136 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68b57f3f-9684-4185-bc93-3f7b59ba1c68-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8\" (UID: \"68b57f3f-9684-4185-bc93-3f7b59ba1c68\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8" Feb 19 13:25:09 crc kubenswrapper[4861]: I0219 13:25:09.901204 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz7sd\" (UniqueName: \"kubernetes.io/projected/68b57f3f-9684-4185-bc93-3f7b59ba1c68-kube-api-access-lz7sd\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8\" (UID: \"68b57f3f-9684-4185-bc93-3f7b59ba1c68\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8" Feb 19 13:25:09 crc kubenswrapper[4861]: I0219 13:25:09.901229 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/68b57f3f-9684-4185-bc93-3f7b59ba1c68-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8\" (UID: \"68b57f3f-9684-4185-bc93-3f7b59ba1c68\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8" Feb 19 13:25:09 crc kubenswrapper[4861]: I0219 13:25:09.901827 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/68b57f3f-9684-4185-bc93-3f7b59ba1c68-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8\" (UID: 
\"68b57f3f-9684-4185-bc93-3f7b59ba1c68\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8" Feb 19 13:25:09 crc kubenswrapper[4861]: I0219 13:25:09.902195 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68b57f3f-9684-4185-bc93-3f7b59ba1c68-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8\" (UID: \"68b57f3f-9684-4185-bc93-3f7b59ba1c68\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8" Feb 19 13:25:09 crc kubenswrapper[4861]: I0219 13:25:09.936337 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz7sd\" (UniqueName: \"kubernetes.io/projected/68b57f3f-9684-4185-bc93-3f7b59ba1c68-kube-api-access-lz7sd\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8\" (UID: \"68b57f3f-9684-4185-bc93-3f7b59ba1c68\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8" Feb 19 13:25:09 crc kubenswrapper[4861]: I0219 13:25:09.986313 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8" Feb 19 13:25:10 crc kubenswrapper[4861]: I0219 13:25:10.487480 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8"] Feb 19 13:25:11 crc kubenswrapper[4861]: I0219 13:25:11.476163 4861 generic.go:334] "Generic (PLEG): container finished" podID="68b57f3f-9684-4185-bc93-3f7b59ba1c68" containerID="1e877b6386ec3f68982b54788cf1b3f617f6a63b3c83b996cc826d4989541bca" exitCode=0 Feb 19 13:25:11 crc kubenswrapper[4861]: I0219 13:25:11.476216 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8" event={"ID":"68b57f3f-9684-4185-bc93-3f7b59ba1c68","Type":"ContainerDied","Data":"1e877b6386ec3f68982b54788cf1b3f617f6a63b3c83b996cc826d4989541bca"} Feb 19 13:25:11 crc kubenswrapper[4861]: I0219 13:25:11.476529 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8" event={"ID":"68b57f3f-9684-4185-bc93-3f7b59ba1c68","Type":"ContainerStarted","Data":"ee45bbb391c6187a6212022bf0cd5e762217e1a141cb16700aad4a615420f946"} Feb 19 13:25:11 crc kubenswrapper[4861]: I0219 13:25:11.483898 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-kwb5k" Feb 19 13:25:11 crc kubenswrapper[4861]: I0219 13:25:11.546274 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-kwb5k" Feb 19 13:25:15 crc kubenswrapper[4861]: I0219 13:25:15.523599 4861 generic.go:334] "Generic (PLEG): container finished" podID="68b57f3f-9684-4185-bc93-3f7b59ba1c68" containerID="99457b452875517034f9611cca7d61991f21c476fbff83d889306f83a4e65931" exitCode=0 Feb 19 13:25:15 crc kubenswrapper[4861]: I0219 13:25:15.523792 4861 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8" event={"ID":"68b57f3f-9684-4185-bc93-3f7b59ba1c68","Type":"ContainerDied","Data":"99457b452875517034f9611cca7d61991f21c476fbff83d889306f83a4e65931"} Feb 19 13:25:16 crc kubenswrapper[4861]: I0219 13:25:16.487810 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-kwb5k" Feb 19 13:25:16 crc kubenswrapper[4861]: I0219 13:25:16.500365 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-lvz2c" Feb 19 13:25:16 crc kubenswrapper[4861]: I0219 13:25:16.539394 4861 generic.go:334] "Generic (PLEG): container finished" podID="68b57f3f-9684-4185-bc93-3f7b59ba1c68" containerID="859c8633a0be51d821831aa846f9cd3b77858854df8b534ffab3caca53d4d0d9" exitCode=0 Feb 19 13:25:16 crc kubenswrapper[4861]: I0219 13:25:16.539505 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8" event={"ID":"68b57f3f-9684-4185-bc93-3f7b59ba1c68","Type":"ContainerDied","Data":"859c8633a0be51d821831aa846f9cd3b77858854df8b534ffab3caca53d4d0d9"} Feb 19 13:25:17 crc kubenswrapper[4861]: I0219 13:25:17.861702 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8" Feb 19 13:25:18 crc kubenswrapper[4861]: I0219 13:25:18.030630 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz7sd\" (UniqueName: \"kubernetes.io/projected/68b57f3f-9684-4185-bc93-3f7b59ba1c68-kube-api-access-lz7sd\") pod \"68b57f3f-9684-4185-bc93-3f7b59ba1c68\" (UID: \"68b57f3f-9684-4185-bc93-3f7b59ba1c68\") " Feb 19 13:25:18 crc kubenswrapper[4861]: I0219 13:25:18.030777 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/68b57f3f-9684-4185-bc93-3f7b59ba1c68-util\") pod \"68b57f3f-9684-4185-bc93-3f7b59ba1c68\" (UID: \"68b57f3f-9684-4185-bc93-3f7b59ba1c68\") " Feb 19 13:25:18 crc kubenswrapper[4861]: I0219 13:25:18.030855 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68b57f3f-9684-4185-bc93-3f7b59ba1c68-bundle\") pod \"68b57f3f-9684-4185-bc93-3f7b59ba1c68\" (UID: \"68b57f3f-9684-4185-bc93-3f7b59ba1c68\") " Feb 19 13:25:18 crc kubenswrapper[4861]: I0219 13:25:18.033062 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68b57f3f-9684-4185-bc93-3f7b59ba1c68-bundle" (OuterVolumeSpecName: "bundle") pod "68b57f3f-9684-4185-bc93-3f7b59ba1c68" (UID: "68b57f3f-9684-4185-bc93-3f7b59ba1c68"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:25:18 crc kubenswrapper[4861]: I0219 13:25:18.042327 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68b57f3f-9684-4185-bc93-3f7b59ba1c68-kube-api-access-lz7sd" (OuterVolumeSpecName: "kube-api-access-lz7sd") pod "68b57f3f-9684-4185-bc93-3f7b59ba1c68" (UID: "68b57f3f-9684-4185-bc93-3f7b59ba1c68"). InnerVolumeSpecName "kube-api-access-lz7sd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:25:18 crc kubenswrapper[4861]: I0219 13:25:18.046732 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68b57f3f-9684-4185-bc93-3f7b59ba1c68-util" (OuterVolumeSpecName: "util") pod "68b57f3f-9684-4185-bc93-3f7b59ba1c68" (UID: "68b57f3f-9684-4185-bc93-3f7b59ba1c68"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:25:18 crc kubenswrapper[4861]: I0219 13:25:18.132681 4861 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/68b57f3f-9684-4185-bc93-3f7b59ba1c68-util\") on node \"crc\" DevicePath \"\"" Feb 19 13:25:18 crc kubenswrapper[4861]: I0219 13:25:18.132723 4861 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68b57f3f-9684-4185-bc93-3f7b59ba1c68-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:25:18 crc kubenswrapper[4861]: I0219 13:25:18.132739 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz7sd\" (UniqueName: \"kubernetes.io/projected/68b57f3f-9684-4185-bc93-3f7b59ba1c68-kube-api-access-lz7sd\") on node \"crc\" DevicePath \"\"" Feb 19 13:25:18 crc kubenswrapper[4861]: I0219 13:25:18.556192 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8" event={"ID":"68b57f3f-9684-4185-bc93-3f7b59ba1c68","Type":"ContainerDied","Data":"ee45bbb391c6187a6212022bf0cd5e762217e1a141cb16700aad4a615420f946"} Feb 19 13:25:18 crc kubenswrapper[4861]: I0219 13:25:18.556254 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee45bbb391c6187a6212022bf0cd5e762217e1a141cb16700aad4a615420f946" Feb 19 13:25:18 crc kubenswrapper[4861]: I0219 13:25:18.556271 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8" Feb 19 13:25:22 crc kubenswrapper[4861]: I0219 13:25:22.959291 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-6vtl9"] Feb 19 13:25:22 crc kubenswrapper[4861]: E0219 13:25:22.960158 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68b57f3f-9684-4185-bc93-3f7b59ba1c68" containerName="pull" Feb 19 13:25:22 crc kubenswrapper[4861]: I0219 13:25:22.960176 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b57f3f-9684-4185-bc93-3f7b59ba1c68" containerName="pull" Feb 19 13:25:22 crc kubenswrapper[4861]: E0219 13:25:22.960191 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68b57f3f-9684-4185-bc93-3f7b59ba1c68" containerName="util" Feb 19 13:25:22 crc kubenswrapper[4861]: I0219 13:25:22.960198 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b57f3f-9684-4185-bc93-3f7b59ba1c68" containerName="util" Feb 19 13:25:22 crc kubenswrapper[4861]: E0219 13:25:22.960215 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68b57f3f-9684-4185-bc93-3f7b59ba1c68" containerName="extract" Feb 19 13:25:22 crc kubenswrapper[4861]: I0219 13:25:22.960224 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b57f3f-9684-4185-bc93-3f7b59ba1c68" containerName="extract" Feb 19 13:25:22 crc kubenswrapper[4861]: I0219 13:25:22.960355 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="68b57f3f-9684-4185-bc93-3f7b59ba1c68" containerName="extract" Feb 19 13:25:22 crc kubenswrapper[4861]: I0219 13:25:22.960901 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-6vtl9" Feb 19 13:25:22 crc kubenswrapper[4861]: I0219 13:25:22.966083 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Feb 19 13:25:22 crc kubenswrapper[4861]: I0219 13:25:22.966627 4861 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-cwvtt" Feb 19 13:25:22 crc kubenswrapper[4861]: I0219 13:25:22.966677 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Feb 19 13:25:22 crc kubenswrapper[4861]: I0219 13:25:22.995651 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-6vtl9"] Feb 19 13:25:23 crc kubenswrapper[4861]: I0219 13:25:23.093104 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d77c6620-645b-4be5-8ab9-78244fffe08e-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-6vtl9\" (UID: \"d77c6620-645b-4be5-8ab9-78244fffe08e\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-6vtl9" Feb 19 13:25:23 crc kubenswrapper[4861]: I0219 13:25:23.093154 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frw7z\" (UniqueName: \"kubernetes.io/projected/d77c6620-645b-4be5-8ab9-78244fffe08e-kube-api-access-frw7z\") pod \"cert-manager-operator-controller-manager-66c8bdd694-6vtl9\" (UID: \"d77c6620-645b-4be5-8ab9-78244fffe08e\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-6vtl9" Feb 19 13:25:23 crc kubenswrapper[4861]: I0219 13:25:23.194742 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/d77c6620-645b-4be5-8ab9-78244fffe08e-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-6vtl9\" (UID: \"d77c6620-645b-4be5-8ab9-78244fffe08e\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-6vtl9" Feb 19 13:25:23 crc kubenswrapper[4861]: I0219 13:25:23.194791 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frw7z\" (UniqueName: \"kubernetes.io/projected/d77c6620-645b-4be5-8ab9-78244fffe08e-kube-api-access-frw7z\") pod \"cert-manager-operator-controller-manager-66c8bdd694-6vtl9\" (UID: \"d77c6620-645b-4be5-8ab9-78244fffe08e\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-6vtl9" Feb 19 13:25:23 crc kubenswrapper[4861]: I0219 13:25:23.195302 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d77c6620-645b-4be5-8ab9-78244fffe08e-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-6vtl9\" (UID: \"d77c6620-645b-4be5-8ab9-78244fffe08e\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-6vtl9" Feb 19 13:25:23 crc kubenswrapper[4861]: I0219 13:25:23.227066 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frw7z\" (UniqueName: \"kubernetes.io/projected/d77c6620-645b-4be5-8ab9-78244fffe08e-kube-api-access-frw7z\") pod \"cert-manager-operator-controller-manager-66c8bdd694-6vtl9\" (UID: \"d77c6620-645b-4be5-8ab9-78244fffe08e\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-6vtl9" Feb 19 13:25:23 crc kubenswrapper[4861]: I0219 13:25:23.281702 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-6vtl9" Feb 19 13:25:23 crc kubenswrapper[4861]: I0219 13:25:23.518471 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-6vtl9"] Feb 19 13:25:23 crc kubenswrapper[4861]: I0219 13:25:23.602284 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-6vtl9" event={"ID":"d77c6620-645b-4be5-8ab9-78244fffe08e","Type":"ContainerStarted","Data":"4e50e378788fffb900b12d4fd9a4e76235eea0e82581ad685813cb5eb9de8498"} Feb 19 13:25:27 crc kubenswrapper[4861]: I0219 13:25:27.628145 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-6vtl9" event={"ID":"d77c6620-645b-4be5-8ab9-78244fffe08e","Type":"ContainerStarted","Data":"4ecc7bcfac8ba287a61c66149088f2cdf5a5cc1133eb1fb8b8f9394601ffaf8e"} Feb 19 13:25:27 crc kubenswrapper[4861]: I0219 13:25:27.648894 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-6vtl9" podStartSLOduration=2.058864351 podStartE2EDuration="5.648876751s" podCreationTimestamp="2026-02-19 13:25:22 +0000 UTC" firstStartedPulling="2026-02-19 13:25:23.534676601 +0000 UTC m=+938.195779829" lastFinishedPulling="2026-02-19 13:25:27.124689001 +0000 UTC m=+941.785792229" observedRunningTime="2026-02-19 13:25:27.644627413 +0000 UTC m=+942.305730641" watchObservedRunningTime="2026-02-19 13:25:27.648876751 +0000 UTC m=+942.309979979" Feb 19 13:25:30 crc kubenswrapper[4861]: I0219 13:25:30.417259 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-2djr8"] Feb 19 13:25:30 crc kubenswrapper[4861]: I0219 13:25:30.419201 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-2djr8" Feb 19 13:25:30 crc kubenswrapper[4861]: I0219 13:25:30.421822 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 19 13:25:30 crc kubenswrapper[4861]: I0219 13:25:30.421854 4861 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-cbz2m" Feb 19 13:25:30 crc kubenswrapper[4861]: I0219 13:25:30.422591 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 19 13:25:30 crc kubenswrapper[4861]: I0219 13:25:30.439944 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-2djr8"] Feb 19 13:25:30 crc kubenswrapper[4861]: I0219 13:25:30.505238 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cdce8567-cfd8-4edd-b947-963540559559-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-2djr8\" (UID: \"cdce8567-cfd8-4edd-b947-963540559559\") " pod="cert-manager/cert-manager-webhook-6888856db4-2djr8" Feb 19 13:25:30 crc kubenswrapper[4861]: I0219 13:25:30.505316 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmqnq\" (UniqueName: \"kubernetes.io/projected/cdce8567-cfd8-4edd-b947-963540559559-kube-api-access-nmqnq\") pod \"cert-manager-webhook-6888856db4-2djr8\" (UID: \"cdce8567-cfd8-4edd-b947-963540559559\") " pod="cert-manager/cert-manager-webhook-6888856db4-2djr8" Feb 19 13:25:30 crc kubenswrapper[4861]: I0219 13:25:30.606818 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmqnq\" (UniqueName: \"kubernetes.io/projected/cdce8567-cfd8-4edd-b947-963540559559-kube-api-access-nmqnq\") pod \"cert-manager-webhook-6888856db4-2djr8\" (UID: 
\"cdce8567-cfd8-4edd-b947-963540559559\") " pod="cert-manager/cert-manager-webhook-6888856db4-2djr8" Feb 19 13:25:30 crc kubenswrapper[4861]: I0219 13:25:30.606982 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cdce8567-cfd8-4edd-b947-963540559559-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-2djr8\" (UID: \"cdce8567-cfd8-4edd-b947-963540559559\") " pod="cert-manager/cert-manager-webhook-6888856db4-2djr8" Feb 19 13:25:30 crc kubenswrapper[4861]: I0219 13:25:30.627162 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmqnq\" (UniqueName: \"kubernetes.io/projected/cdce8567-cfd8-4edd-b947-963540559559-kube-api-access-nmqnq\") pod \"cert-manager-webhook-6888856db4-2djr8\" (UID: \"cdce8567-cfd8-4edd-b947-963540559559\") " pod="cert-manager/cert-manager-webhook-6888856db4-2djr8" Feb 19 13:25:30 crc kubenswrapper[4861]: I0219 13:25:30.635458 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cdce8567-cfd8-4edd-b947-963540559559-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-2djr8\" (UID: \"cdce8567-cfd8-4edd-b947-963540559559\") " pod="cert-manager/cert-manager-webhook-6888856db4-2djr8" Feb 19 13:25:30 crc kubenswrapper[4861]: I0219 13:25:30.740829 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-2djr8" Feb 19 13:25:31 crc kubenswrapper[4861]: I0219 13:25:31.196246 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-2djr8"] Feb 19 13:25:31 crc kubenswrapper[4861]: I0219 13:25:31.656285 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-2djr8" event={"ID":"cdce8567-cfd8-4edd-b947-963540559559","Type":"ContainerStarted","Data":"9c5781d61d92add71994de81852664ed298a5aa92148641e0a411227e6a6a39a"} Feb 19 13:25:33 crc kubenswrapper[4861]: I0219 13:25:33.498909 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-rz64v"] Feb 19 13:25:33 crc kubenswrapper[4861]: I0219 13:25:33.500911 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-rz64v" Feb 19 13:25:33 crc kubenswrapper[4861]: I0219 13:25:33.503441 4861 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-bkxbr" Feb 19 13:25:33 crc kubenswrapper[4861]: I0219 13:25:33.506892 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-rz64v"] Feb 19 13:25:33 crc kubenswrapper[4861]: I0219 13:25:33.660256 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbtpt\" (UniqueName: \"kubernetes.io/projected/548bbe15-d06a-4ea3-8d65-e0a726a23b06-kube-api-access-lbtpt\") pod \"cert-manager-cainjector-5545bd876-rz64v\" (UID: \"548bbe15-d06a-4ea3-8d65-e0a726a23b06\") " pod="cert-manager/cert-manager-cainjector-5545bd876-rz64v" Feb 19 13:25:33 crc kubenswrapper[4861]: I0219 13:25:33.660768 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/548bbe15-d06a-4ea3-8d65-e0a726a23b06-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-rz64v\" (UID: \"548bbe15-d06a-4ea3-8d65-e0a726a23b06\") " pod="cert-manager/cert-manager-cainjector-5545bd876-rz64v" Feb 19 13:25:33 crc kubenswrapper[4861]: I0219 13:25:33.767843 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/548bbe15-d06a-4ea3-8d65-e0a726a23b06-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-rz64v\" (UID: \"548bbe15-d06a-4ea3-8d65-e0a726a23b06\") " pod="cert-manager/cert-manager-cainjector-5545bd876-rz64v" Feb 19 13:25:33 crc kubenswrapper[4861]: I0219 13:25:33.767950 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbtpt\" (UniqueName: \"kubernetes.io/projected/548bbe15-d06a-4ea3-8d65-e0a726a23b06-kube-api-access-lbtpt\") pod \"cert-manager-cainjector-5545bd876-rz64v\" (UID: \"548bbe15-d06a-4ea3-8d65-e0a726a23b06\") " pod="cert-manager/cert-manager-cainjector-5545bd876-rz64v" Feb 19 13:25:33 crc kubenswrapper[4861]: I0219 13:25:33.799966 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbtpt\" (UniqueName: \"kubernetes.io/projected/548bbe15-d06a-4ea3-8d65-e0a726a23b06-kube-api-access-lbtpt\") pod \"cert-manager-cainjector-5545bd876-rz64v\" (UID: \"548bbe15-d06a-4ea3-8d65-e0a726a23b06\") " pod="cert-manager/cert-manager-cainjector-5545bd876-rz64v" Feb 19 13:25:33 crc kubenswrapper[4861]: I0219 13:25:33.814070 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/548bbe15-d06a-4ea3-8d65-e0a726a23b06-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-rz64v\" (UID: \"548bbe15-d06a-4ea3-8d65-e0a726a23b06\") " pod="cert-manager/cert-manager-cainjector-5545bd876-rz64v" Feb 19 13:25:33 crc kubenswrapper[4861]: I0219 13:25:33.830650 4861 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-rz64v" Feb 19 13:25:34 crc kubenswrapper[4861]: I0219 13:25:34.276342 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-rz64v"] Feb 19 13:25:34 crc kubenswrapper[4861]: I0219 13:25:34.690125 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-rz64v" event={"ID":"548bbe15-d06a-4ea3-8d65-e0a726a23b06","Type":"ContainerStarted","Data":"aaf883a3b841dbe73f93926893f69a54b1619596e10bc7fdc837543212513e2b"} Feb 19 13:25:36 crc kubenswrapper[4861]: I0219 13:25:36.704705 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-rz64v" event={"ID":"548bbe15-d06a-4ea3-8d65-e0a726a23b06","Type":"ContainerStarted","Data":"88592e53d8b79741af66ac5e9f5ee599964629fda845c0cea54d7a313ec72514"} Feb 19 13:25:36 crc kubenswrapper[4861]: I0219 13:25:36.705984 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-2djr8" event={"ID":"cdce8567-cfd8-4edd-b947-963540559559","Type":"ContainerStarted","Data":"24affcbc4eaa49d8e879a54bdbf20fcd9b7e0f1a1934f8d4310658e8cdd66715"} Feb 19 13:25:36 crc kubenswrapper[4861]: I0219 13:25:36.706041 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-2djr8" Feb 19 13:25:36 crc kubenswrapper[4861]: I0219 13:25:36.724252 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-rz64v" podStartSLOduration=1.744927755 podStartE2EDuration="3.724227235s" podCreationTimestamp="2026-02-19 13:25:33 +0000 UTC" firstStartedPulling="2026-02-19 13:25:34.290218242 +0000 UTC m=+948.951321470" lastFinishedPulling="2026-02-19 13:25:36.269517712 +0000 UTC m=+950.930620950" observedRunningTime="2026-02-19 13:25:36.721228893 +0000 UTC 
m=+951.382332131" watchObservedRunningTime="2026-02-19 13:25:36.724227235 +0000 UTC m=+951.385330473" Feb 19 13:25:36 crc kubenswrapper[4861]: I0219 13:25:36.752877 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-2djr8" podStartSLOduration=1.670998461 podStartE2EDuration="6.752855994s" podCreationTimestamp="2026-02-19 13:25:30 +0000 UTC" firstStartedPulling="2026-02-19 13:25:31.206375435 +0000 UTC m=+945.867478703" lastFinishedPulling="2026-02-19 13:25:36.288232988 +0000 UTC m=+950.949336236" observedRunningTime="2026-02-19 13:25:36.751816256 +0000 UTC m=+951.412919514" watchObservedRunningTime="2026-02-19 13:25:36.752855994 +0000 UTC m=+951.413959232" Feb 19 13:25:45 crc kubenswrapper[4861]: I0219 13:25:45.744864 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-2djr8" Feb 19 13:25:49 crc kubenswrapper[4861]: I0219 13:25:49.394247 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-nrprl"] Feb 19 13:25:49 crc kubenswrapper[4861]: I0219 13:25:49.395499 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-nrprl" Feb 19 13:25:49 crc kubenswrapper[4861]: I0219 13:25:49.397912 4861 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-qxd2z" Feb 19 13:25:49 crc kubenswrapper[4861]: I0219 13:25:49.406030 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-nrprl"] Feb 19 13:25:49 crc kubenswrapper[4861]: I0219 13:25:49.525111 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a39587a0-7ca9-47eb-8006-1c90e880d712-bound-sa-token\") pod \"cert-manager-545d4d4674-nrprl\" (UID: \"a39587a0-7ca9-47eb-8006-1c90e880d712\") " pod="cert-manager/cert-manager-545d4d4674-nrprl" Feb 19 13:25:49 crc kubenswrapper[4861]: I0219 13:25:49.525545 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtnww\" (UniqueName: \"kubernetes.io/projected/a39587a0-7ca9-47eb-8006-1c90e880d712-kube-api-access-wtnww\") pod \"cert-manager-545d4d4674-nrprl\" (UID: \"a39587a0-7ca9-47eb-8006-1c90e880d712\") " pod="cert-manager/cert-manager-545d4d4674-nrprl" Feb 19 13:25:49 crc kubenswrapper[4861]: I0219 13:25:49.627046 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtnww\" (UniqueName: \"kubernetes.io/projected/a39587a0-7ca9-47eb-8006-1c90e880d712-kube-api-access-wtnww\") pod \"cert-manager-545d4d4674-nrprl\" (UID: \"a39587a0-7ca9-47eb-8006-1c90e880d712\") " pod="cert-manager/cert-manager-545d4d4674-nrprl" Feb 19 13:25:49 crc kubenswrapper[4861]: I0219 13:25:49.627286 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a39587a0-7ca9-47eb-8006-1c90e880d712-bound-sa-token\") pod \"cert-manager-545d4d4674-nrprl\" (UID: 
\"a39587a0-7ca9-47eb-8006-1c90e880d712\") " pod="cert-manager/cert-manager-545d4d4674-nrprl" Feb 19 13:25:49 crc kubenswrapper[4861]: I0219 13:25:49.660625 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a39587a0-7ca9-47eb-8006-1c90e880d712-bound-sa-token\") pod \"cert-manager-545d4d4674-nrprl\" (UID: \"a39587a0-7ca9-47eb-8006-1c90e880d712\") " pod="cert-manager/cert-manager-545d4d4674-nrprl" Feb 19 13:25:49 crc kubenswrapper[4861]: I0219 13:25:49.662118 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtnww\" (UniqueName: \"kubernetes.io/projected/a39587a0-7ca9-47eb-8006-1c90e880d712-kube-api-access-wtnww\") pod \"cert-manager-545d4d4674-nrprl\" (UID: \"a39587a0-7ca9-47eb-8006-1c90e880d712\") " pod="cert-manager/cert-manager-545d4d4674-nrprl" Feb 19 13:25:49 crc kubenswrapper[4861]: I0219 13:25:49.711302 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-nrprl" Feb 19 13:25:50 crc kubenswrapper[4861]: I0219 13:25:50.156236 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-nrprl"] Feb 19 13:25:50 crc kubenswrapper[4861]: I0219 13:25:50.839614 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-nrprl" event={"ID":"a39587a0-7ca9-47eb-8006-1c90e880d712","Type":"ContainerStarted","Data":"b012ac889a2fddfd7fd5564d175e4e7ab230f6190ad5dca50ab96b912d4fb23c"} Feb 19 13:25:50 crc kubenswrapper[4861]: I0219 13:25:50.839664 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-nrprl" event={"ID":"a39587a0-7ca9-47eb-8006-1c90e880d712","Type":"ContainerStarted","Data":"6dfcf5933b301429e3986791956edcf5fc4b01982b63055e5219d1ba1c843282"} Feb 19 13:25:50 crc kubenswrapper[4861]: I0219 13:25:50.870475 4861 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="cert-manager/cert-manager-545d4d4674-nrprl" podStartSLOduration=1.87045059 podStartE2EDuration="1.87045059s" podCreationTimestamp="2026-02-19 13:25:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:25:50.864108166 +0000 UTC m=+965.525211434" watchObservedRunningTime="2026-02-19 13:25:50.87045059 +0000 UTC m=+965.531553848" Feb 19 13:25:59 crc kubenswrapper[4861]: I0219 13:25:59.484170 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-rncxj"] Feb 19 13:25:59 crc kubenswrapper[4861]: I0219 13:25:59.486061 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rncxj" Feb 19 13:25:59 crc kubenswrapper[4861]: I0219 13:25:59.492054 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 19 13:25:59 crc kubenswrapper[4861]: I0219 13:25:59.492104 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-8fskt" Feb 19 13:25:59 crc kubenswrapper[4861]: I0219 13:25:59.493804 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 19 13:25:59 crc kubenswrapper[4861]: I0219 13:25:59.508653 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rncxj"] Feb 19 13:25:59 crc kubenswrapper[4861]: I0219 13:25:59.991193 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nczck\" (UniqueName: \"kubernetes.io/projected/6df88a42-e33b-4fa4-90d0-a27f8fd47b7c-kube-api-access-nczck\") pod \"openstack-operator-index-rncxj\" (UID: \"6df88a42-e33b-4fa4-90d0-a27f8fd47b7c\") " pod="openstack-operators/openstack-operator-index-rncxj" Feb 19 13:26:00 crc 
kubenswrapper[4861]: I0219 13:26:00.093921 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nczck\" (UniqueName: \"kubernetes.io/projected/6df88a42-e33b-4fa4-90d0-a27f8fd47b7c-kube-api-access-nczck\") pod \"openstack-operator-index-rncxj\" (UID: \"6df88a42-e33b-4fa4-90d0-a27f8fd47b7c\") " pod="openstack-operators/openstack-operator-index-rncxj" Feb 19 13:26:00 crc kubenswrapper[4861]: I0219 13:26:00.119106 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nczck\" (UniqueName: \"kubernetes.io/projected/6df88a42-e33b-4fa4-90d0-a27f8fd47b7c-kube-api-access-nczck\") pod \"openstack-operator-index-rncxj\" (UID: \"6df88a42-e33b-4fa4-90d0-a27f8fd47b7c\") " pod="openstack-operators/openstack-operator-index-rncxj" Feb 19 13:26:00 crc kubenswrapper[4861]: I0219 13:26:00.409405 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rncxj" Feb 19 13:26:00 crc kubenswrapper[4861]: I0219 13:26:00.668651 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rncxj"] Feb 19 13:26:01 crc kubenswrapper[4861]: I0219 13:26:01.008708 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rncxj" event={"ID":"6df88a42-e33b-4fa4-90d0-a27f8fd47b7c","Type":"ContainerStarted","Data":"89805993f4daffa728cec693c26343d8b1f3b8ae02ebfc5025f3fd03f74095e1"} Feb 19 13:26:02 crc kubenswrapper[4861]: I0219 13:26:02.020131 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rncxj" event={"ID":"6df88a42-e33b-4fa4-90d0-a27f8fd47b7c","Type":"ContainerStarted","Data":"d480fad8768e0c3723c3856764a6a1bafa9eda7ceabc99d434660dccf1d99911"} Feb 19 13:26:02 crc kubenswrapper[4861]: I0219 13:26:02.047565 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/openstack-operator-index-rncxj" podStartSLOduration=2.105386631 podStartE2EDuration="3.047528671s" podCreationTimestamp="2026-02-19 13:25:59 +0000 UTC" firstStartedPulling="2026-02-19 13:26:00.680265344 +0000 UTC m=+975.341368572" lastFinishedPulling="2026-02-19 13:26:01.622407384 +0000 UTC m=+976.283510612" observedRunningTime="2026-02-19 13:26:02.039580731 +0000 UTC m=+976.700683959" watchObservedRunningTime="2026-02-19 13:26:02.047528671 +0000 UTC m=+976.708631969" Feb 19 13:26:02 crc kubenswrapper[4861]: I0219 13:26:02.856331 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-rncxj"] Feb 19 13:26:03 crc kubenswrapper[4861]: I0219 13:26:03.462890 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-tzrm7"] Feb 19 13:26:03 crc kubenswrapper[4861]: I0219 13:26:03.465250 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-tzrm7" Feb 19 13:26:03 crc kubenswrapper[4861]: I0219 13:26:03.473929 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tzrm7"] Feb 19 13:26:03 crc kubenswrapper[4861]: I0219 13:26:03.650406 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r6kq\" (UniqueName: \"kubernetes.io/projected/ccbc418a-bf0d-4803-8555-e9d236c68686-kube-api-access-9r6kq\") pod \"openstack-operator-index-tzrm7\" (UID: \"ccbc418a-bf0d-4803-8555-e9d236c68686\") " pod="openstack-operators/openstack-operator-index-tzrm7" Feb 19 13:26:03 crc kubenswrapper[4861]: I0219 13:26:03.751887 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r6kq\" (UniqueName: \"kubernetes.io/projected/ccbc418a-bf0d-4803-8555-e9d236c68686-kube-api-access-9r6kq\") pod \"openstack-operator-index-tzrm7\" (UID: 
\"ccbc418a-bf0d-4803-8555-e9d236c68686\") " pod="openstack-operators/openstack-operator-index-tzrm7" Feb 19 13:26:03 crc kubenswrapper[4861]: I0219 13:26:03.784922 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r6kq\" (UniqueName: \"kubernetes.io/projected/ccbc418a-bf0d-4803-8555-e9d236c68686-kube-api-access-9r6kq\") pod \"openstack-operator-index-tzrm7\" (UID: \"ccbc418a-bf0d-4803-8555-e9d236c68686\") " pod="openstack-operators/openstack-operator-index-tzrm7" Feb 19 13:26:03 crc kubenswrapper[4861]: I0219 13:26:03.798779 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-tzrm7" Feb 19 13:26:04 crc kubenswrapper[4861]: I0219 13:26:04.036796 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-rncxj" podUID="6df88a42-e33b-4fa4-90d0-a27f8fd47b7c" containerName="registry-server" containerID="cri-o://d480fad8768e0c3723c3856764a6a1bafa9eda7ceabc99d434660dccf1d99911" gracePeriod=2 Feb 19 13:26:04 crc kubenswrapper[4861]: I0219 13:26:04.121369 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tzrm7"] Feb 19 13:26:04 crc kubenswrapper[4861]: I0219 13:26:04.430861 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rncxj" Feb 19 13:26:04 crc kubenswrapper[4861]: I0219 13:26:04.466760 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nczck\" (UniqueName: \"kubernetes.io/projected/6df88a42-e33b-4fa4-90d0-a27f8fd47b7c-kube-api-access-nczck\") pod \"6df88a42-e33b-4fa4-90d0-a27f8fd47b7c\" (UID: \"6df88a42-e33b-4fa4-90d0-a27f8fd47b7c\") " Feb 19 13:26:04 crc kubenswrapper[4861]: I0219 13:26:04.472177 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6df88a42-e33b-4fa4-90d0-a27f8fd47b7c-kube-api-access-nczck" (OuterVolumeSpecName: "kube-api-access-nczck") pod "6df88a42-e33b-4fa4-90d0-a27f8fd47b7c" (UID: "6df88a42-e33b-4fa4-90d0-a27f8fd47b7c"). InnerVolumeSpecName "kube-api-access-nczck". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:26:04 crc kubenswrapper[4861]: I0219 13:26:04.568011 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nczck\" (UniqueName: \"kubernetes.io/projected/6df88a42-e33b-4fa4-90d0-a27f8fd47b7c-kube-api-access-nczck\") on node \"crc\" DevicePath \"\"" Feb 19 13:26:05 crc kubenswrapper[4861]: I0219 13:26:05.047060 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tzrm7" event={"ID":"ccbc418a-bf0d-4803-8555-e9d236c68686","Type":"ContainerStarted","Data":"af0c98ddf4e6a5f6b60401c05bf142f8c67fdf852cd96df90558d25a093ccace"} Feb 19 13:26:05 crc kubenswrapper[4861]: I0219 13:26:05.047477 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tzrm7" event={"ID":"ccbc418a-bf0d-4803-8555-e9d236c68686","Type":"ContainerStarted","Data":"691bcd2d0f951ba85bfba87400906f822b19e83467d2f0036081a6fe7e19d187"} Feb 19 13:26:05 crc kubenswrapper[4861]: I0219 13:26:05.049729 4861 generic.go:334] "Generic (PLEG): container finished" 
podID="6df88a42-e33b-4fa4-90d0-a27f8fd47b7c" containerID="d480fad8768e0c3723c3856764a6a1bafa9eda7ceabc99d434660dccf1d99911" exitCode=0 Feb 19 13:26:05 crc kubenswrapper[4861]: I0219 13:26:05.049791 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rncxj" event={"ID":"6df88a42-e33b-4fa4-90d0-a27f8fd47b7c","Type":"ContainerDied","Data":"d480fad8768e0c3723c3856764a6a1bafa9eda7ceabc99d434660dccf1d99911"} Feb 19 13:26:05 crc kubenswrapper[4861]: I0219 13:26:05.049803 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rncxj" Feb 19 13:26:05 crc kubenswrapper[4861]: I0219 13:26:05.049831 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rncxj" event={"ID":"6df88a42-e33b-4fa4-90d0-a27f8fd47b7c","Type":"ContainerDied","Data":"89805993f4daffa728cec693c26343d8b1f3b8ae02ebfc5025f3fd03f74095e1"} Feb 19 13:26:05 crc kubenswrapper[4861]: I0219 13:26:05.049863 4861 scope.go:117] "RemoveContainer" containerID="d480fad8768e0c3723c3856764a6a1bafa9eda7ceabc99d434660dccf1d99911" Feb 19 13:26:05 crc kubenswrapper[4861]: I0219 13:26:05.077409 4861 scope.go:117] "RemoveContainer" containerID="d480fad8768e0c3723c3856764a6a1bafa9eda7ceabc99d434660dccf1d99911" Feb 19 13:26:05 crc kubenswrapper[4861]: I0219 13:26:05.081551 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-tzrm7" podStartSLOduration=1.5978726509999999 podStartE2EDuration="2.081527904s" podCreationTimestamp="2026-02-19 13:26:03 +0000 UTC" firstStartedPulling="2026-02-19 13:26:04.143811636 +0000 UTC m=+978.804914864" lastFinishedPulling="2026-02-19 13:26:04.627466869 +0000 UTC m=+979.288570117" observedRunningTime="2026-02-19 13:26:05.077971495 +0000 UTC m=+979.739074763" watchObservedRunningTime="2026-02-19 13:26:05.081527904 +0000 UTC m=+979.742631152" Feb 19 13:26:05 
crc kubenswrapper[4861]: E0219 13:26:05.081672 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d480fad8768e0c3723c3856764a6a1bafa9eda7ceabc99d434660dccf1d99911\": container with ID starting with d480fad8768e0c3723c3856764a6a1bafa9eda7ceabc99d434660dccf1d99911 not found: ID does not exist" containerID="d480fad8768e0c3723c3856764a6a1bafa9eda7ceabc99d434660dccf1d99911" Feb 19 13:26:05 crc kubenswrapper[4861]: I0219 13:26:05.081843 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d480fad8768e0c3723c3856764a6a1bafa9eda7ceabc99d434660dccf1d99911"} err="failed to get container status \"d480fad8768e0c3723c3856764a6a1bafa9eda7ceabc99d434660dccf1d99911\": rpc error: code = NotFound desc = could not find container \"d480fad8768e0c3723c3856764a6a1bafa9eda7ceabc99d434660dccf1d99911\": container with ID starting with d480fad8768e0c3723c3856764a6a1bafa9eda7ceabc99d434660dccf1d99911 not found: ID does not exist" Feb 19 13:26:05 crc kubenswrapper[4861]: I0219 13:26:05.124547 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-rncxj"] Feb 19 13:26:05 crc kubenswrapper[4861]: I0219 13:26:05.128718 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-rncxj"] Feb 19 13:26:05 crc kubenswrapper[4861]: I0219 13:26:05.991315 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6df88a42-e33b-4fa4-90d0-a27f8fd47b7c" path="/var/lib/kubelet/pods/6df88a42-e33b-4fa4-90d0-a27f8fd47b7c/volumes" Feb 19 13:26:13 crc kubenswrapper[4861]: I0219 13:26:13.799221 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-tzrm7" Feb 19 13:26:13 crc kubenswrapper[4861]: I0219 13:26:13.800273 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-operator-index-tzrm7" Feb 19 13:26:13 crc kubenswrapper[4861]: I0219 13:26:13.848690 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-tzrm7" Feb 19 13:26:14 crc kubenswrapper[4861]: I0219 13:26:14.168716 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-tzrm7" Feb 19 13:26:16 crc kubenswrapper[4861]: I0219 13:26:16.720393 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7"] Feb 19 13:26:16 crc kubenswrapper[4861]: E0219 13:26:16.721001 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6df88a42-e33b-4fa4-90d0-a27f8fd47b7c" containerName="registry-server" Feb 19 13:26:16 crc kubenswrapper[4861]: I0219 13:26:16.721018 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="6df88a42-e33b-4fa4-90d0-a27f8fd47b7c" containerName="registry-server" Feb 19 13:26:16 crc kubenswrapper[4861]: I0219 13:26:16.721152 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="6df88a42-e33b-4fa4-90d0-a27f8fd47b7c" containerName="registry-server" Feb 19 13:26:16 crc kubenswrapper[4861]: I0219 13:26:16.722260 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7" Feb 19 13:26:16 crc kubenswrapper[4861]: I0219 13:26:16.725376 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-m66jt" Feb 19 13:26:16 crc kubenswrapper[4861]: I0219 13:26:16.733337 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7"] Feb 19 13:26:16 crc kubenswrapper[4861]: I0219 13:26:16.763488 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm8pj\" (UniqueName: \"kubernetes.io/projected/07e95b41-53d8-4df5-9d1c-f12acaeea9ea-kube-api-access-tm8pj\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7\" (UID: \"07e95b41-53d8-4df5-9d1c-f12acaeea9ea\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7" Feb 19 13:26:16 crc kubenswrapper[4861]: I0219 13:26:16.763658 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/07e95b41-53d8-4df5-9d1c-f12acaeea9ea-bundle\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7\" (UID: \"07e95b41-53d8-4df5-9d1c-f12acaeea9ea\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7" Feb 19 13:26:16 crc kubenswrapper[4861]: I0219 13:26:16.763728 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/07e95b41-53d8-4df5-9d1c-f12acaeea9ea-util\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7\" (UID: \"07e95b41-53d8-4df5-9d1c-f12acaeea9ea\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7" Feb 19 13:26:16 crc kubenswrapper[4861]: I0219 
13:26:16.865983 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm8pj\" (UniqueName: \"kubernetes.io/projected/07e95b41-53d8-4df5-9d1c-f12acaeea9ea-kube-api-access-tm8pj\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7\" (UID: \"07e95b41-53d8-4df5-9d1c-f12acaeea9ea\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7" Feb 19 13:26:16 crc kubenswrapper[4861]: I0219 13:26:16.866170 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/07e95b41-53d8-4df5-9d1c-f12acaeea9ea-bundle\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7\" (UID: \"07e95b41-53d8-4df5-9d1c-f12acaeea9ea\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7" Feb 19 13:26:16 crc kubenswrapper[4861]: I0219 13:26:16.866208 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/07e95b41-53d8-4df5-9d1c-f12acaeea9ea-util\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7\" (UID: \"07e95b41-53d8-4df5-9d1c-f12acaeea9ea\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7" Feb 19 13:26:16 crc kubenswrapper[4861]: I0219 13:26:16.866743 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/07e95b41-53d8-4df5-9d1c-f12acaeea9ea-bundle\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7\" (UID: \"07e95b41-53d8-4df5-9d1c-f12acaeea9ea\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7" Feb 19 13:26:16 crc kubenswrapper[4861]: I0219 13:26:16.867106 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/07e95b41-53d8-4df5-9d1c-f12acaeea9ea-util\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7\" (UID: \"07e95b41-53d8-4df5-9d1c-f12acaeea9ea\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7" Feb 19 13:26:16 crc kubenswrapper[4861]: I0219 13:26:16.896812 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm8pj\" (UniqueName: \"kubernetes.io/projected/07e95b41-53d8-4df5-9d1c-f12acaeea9ea-kube-api-access-tm8pj\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7\" (UID: \"07e95b41-53d8-4df5-9d1c-f12acaeea9ea\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7" Feb 19 13:26:17 crc kubenswrapper[4861]: I0219 13:26:17.045597 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7" Feb 19 13:26:17 crc kubenswrapper[4861]: I0219 13:26:17.506224 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7"] Feb 19 13:26:18 crc kubenswrapper[4861]: I0219 13:26:18.157548 4861 generic.go:334] "Generic (PLEG): container finished" podID="07e95b41-53d8-4df5-9d1c-f12acaeea9ea" containerID="1b3eade4b3a1a6134ebb588d6fbbcfd1cb53730e532052976614e9bd8725e1da" exitCode=0 Feb 19 13:26:18 crc kubenswrapper[4861]: I0219 13:26:18.157924 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7" event={"ID":"07e95b41-53d8-4df5-9d1c-f12acaeea9ea","Type":"ContainerDied","Data":"1b3eade4b3a1a6134ebb588d6fbbcfd1cb53730e532052976614e9bd8725e1da"} Feb 19 13:26:18 crc kubenswrapper[4861]: I0219 13:26:18.158009 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7" event={"ID":"07e95b41-53d8-4df5-9d1c-f12acaeea9ea","Type":"ContainerStarted","Data":"46450f6239f90ac489ab04cf54f7fd16a0093b6493cdb4e978b66a6700275153"} Feb 19 13:26:20 crc kubenswrapper[4861]: I0219 13:26:20.187494 4861 generic.go:334] "Generic (PLEG): container finished" podID="07e95b41-53d8-4df5-9d1c-f12acaeea9ea" containerID="b8363615ec947abb15b13716b35e5c9c0ef0df3db78b4ba6bb8da21675d14781" exitCode=0 Feb 19 13:26:20 crc kubenswrapper[4861]: I0219 13:26:20.187638 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7" event={"ID":"07e95b41-53d8-4df5-9d1c-f12acaeea9ea","Type":"ContainerDied","Data":"b8363615ec947abb15b13716b35e5c9c0ef0df3db78b4ba6bb8da21675d14781"} Feb 19 13:26:21 crc kubenswrapper[4861]: I0219 13:26:21.197625 4861 generic.go:334] "Generic (PLEG): container finished" podID="07e95b41-53d8-4df5-9d1c-f12acaeea9ea" containerID="30c1fa59fac50f45fcd18ccebc0a792f33710ca1186145ed8e4d2c99905d0fa2" exitCode=0 Feb 19 13:26:21 crc kubenswrapper[4861]: I0219 13:26:21.197695 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7" event={"ID":"07e95b41-53d8-4df5-9d1c-f12acaeea9ea","Type":"ContainerDied","Data":"30c1fa59fac50f45fcd18ccebc0a792f33710ca1186145ed8e4d2c99905d0fa2"} Feb 19 13:26:22 crc kubenswrapper[4861]: I0219 13:26:22.511110 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7" Feb 19 13:26:22 crc kubenswrapper[4861]: I0219 13:26:22.660997 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/07e95b41-53d8-4df5-9d1c-f12acaeea9ea-bundle\") pod \"07e95b41-53d8-4df5-9d1c-f12acaeea9ea\" (UID: \"07e95b41-53d8-4df5-9d1c-f12acaeea9ea\") " Feb 19 13:26:22 crc kubenswrapper[4861]: I0219 13:26:22.661118 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tm8pj\" (UniqueName: \"kubernetes.io/projected/07e95b41-53d8-4df5-9d1c-f12acaeea9ea-kube-api-access-tm8pj\") pod \"07e95b41-53d8-4df5-9d1c-f12acaeea9ea\" (UID: \"07e95b41-53d8-4df5-9d1c-f12acaeea9ea\") " Feb 19 13:26:22 crc kubenswrapper[4861]: I0219 13:26:22.661157 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/07e95b41-53d8-4df5-9d1c-f12acaeea9ea-util\") pod \"07e95b41-53d8-4df5-9d1c-f12acaeea9ea\" (UID: \"07e95b41-53d8-4df5-9d1c-f12acaeea9ea\") " Feb 19 13:26:22 crc kubenswrapper[4861]: I0219 13:26:22.662566 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07e95b41-53d8-4df5-9d1c-f12acaeea9ea-bundle" (OuterVolumeSpecName: "bundle") pod "07e95b41-53d8-4df5-9d1c-f12acaeea9ea" (UID: "07e95b41-53d8-4df5-9d1c-f12acaeea9ea"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:26:22 crc kubenswrapper[4861]: I0219 13:26:22.668073 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07e95b41-53d8-4df5-9d1c-f12acaeea9ea-kube-api-access-tm8pj" (OuterVolumeSpecName: "kube-api-access-tm8pj") pod "07e95b41-53d8-4df5-9d1c-f12acaeea9ea" (UID: "07e95b41-53d8-4df5-9d1c-f12acaeea9ea"). InnerVolumeSpecName "kube-api-access-tm8pj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:26:22 crc kubenswrapper[4861]: I0219 13:26:22.703947 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07e95b41-53d8-4df5-9d1c-f12acaeea9ea-util" (OuterVolumeSpecName: "util") pod "07e95b41-53d8-4df5-9d1c-f12acaeea9ea" (UID: "07e95b41-53d8-4df5-9d1c-f12acaeea9ea"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:26:22 crc kubenswrapper[4861]: I0219 13:26:22.762629 4861 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/07e95b41-53d8-4df5-9d1c-f12acaeea9ea-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:26:22 crc kubenswrapper[4861]: I0219 13:26:22.762669 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tm8pj\" (UniqueName: \"kubernetes.io/projected/07e95b41-53d8-4df5-9d1c-f12acaeea9ea-kube-api-access-tm8pj\") on node \"crc\" DevicePath \"\"" Feb 19 13:26:22 crc kubenswrapper[4861]: I0219 13:26:22.762680 4861 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/07e95b41-53d8-4df5-9d1c-f12acaeea9ea-util\") on node \"crc\" DevicePath \"\"" Feb 19 13:26:23 crc kubenswrapper[4861]: I0219 13:26:23.216823 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7" event={"ID":"07e95b41-53d8-4df5-9d1c-f12acaeea9ea","Type":"ContainerDied","Data":"46450f6239f90ac489ab04cf54f7fd16a0093b6493cdb4e978b66a6700275153"} Feb 19 13:26:23 crc kubenswrapper[4861]: I0219 13:26:23.216872 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46450f6239f90ac489ab04cf54f7fd16a0093b6493cdb4e978b66a6700275153" Feb 19 13:26:23 crc kubenswrapper[4861]: I0219 13:26:23.216907 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7" Feb 19 13:26:30 crc kubenswrapper[4861]: I0219 13:26:30.186486 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6679bf9b57-tm5fp"] Feb 19 13:26:30 crc kubenswrapper[4861]: E0219 13:26:30.187344 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e95b41-53d8-4df5-9d1c-f12acaeea9ea" containerName="util" Feb 19 13:26:30 crc kubenswrapper[4861]: I0219 13:26:30.187360 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e95b41-53d8-4df5-9d1c-f12acaeea9ea" containerName="util" Feb 19 13:26:30 crc kubenswrapper[4861]: E0219 13:26:30.187383 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e95b41-53d8-4df5-9d1c-f12acaeea9ea" containerName="extract" Feb 19 13:26:30 crc kubenswrapper[4861]: I0219 13:26:30.187391 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e95b41-53d8-4df5-9d1c-f12acaeea9ea" containerName="extract" Feb 19 13:26:30 crc kubenswrapper[4861]: E0219 13:26:30.187413 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e95b41-53d8-4df5-9d1c-f12acaeea9ea" containerName="pull" Feb 19 13:26:30 crc kubenswrapper[4861]: I0219 13:26:30.187494 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e95b41-53d8-4df5-9d1c-f12acaeea9ea" containerName="pull" Feb 19 13:26:30 crc kubenswrapper[4861]: I0219 13:26:30.187722 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="07e95b41-53d8-4df5-9d1c-f12acaeea9ea" containerName="extract" Feb 19 13:26:30 crc kubenswrapper[4861]: I0219 13:26:30.188159 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-tm5fp" Feb 19 13:26:30 crc kubenswrapper[4861]: I0219 13:26:30.190888 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-7w8mc" Feb 19 13:26:30 crc kubenswrapper[4861]: I0219 13:26:30.231859 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6679bf9b57-tm5fp"] Feb 19 13:26:30 crc kubenswrapper[4861]: I0219 13:26:30.287719 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk59m\" (UniqueName: \"kubernetes.io/projected/c131e628-2531-4d18-8793-894b7b384b43-kube-api-access-pk59m\") pod \"openstack-operator-controller-init-6679bf9b57-tm5fp\" (UID: \"c131e628-2531-4d18-8793-894b7b384b43\") " pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-tm5fp" Feb 19 13:26:30 crc kubenswrapper[4861]: I0219 13:26:30.388772 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk59m\" (UniqueName: \"kubernetes.io/projected/c131e628-2531-4d18-8793-894b7b384b43-kube-api-access-pk59m\") pod \"openstack-operator-controller-init-6679bf9b57-tm5fp\" (UID: \"c131e628-2531-4d18-8793-894b7b384b43\") " pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-tm5fp" Feb 19 13:26:30 crc kubenswrapper[4861]: I0219 13:26:30.409402 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk59m\" (UniqueName: \"kubernetes.io/projected/c131e628-2531-4d18-8793-894b7b384b43-kube-api-access-pk59m\") pod \"openstack-operator-controller-init-6679bf9b57-tm5fp\" (UID: \"c131e628-2531-4d18-8793-894b7b384b43\") " pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-tm5fp" Feb 19 13:26:30 crc kubenswrapper[4861]: I0219 13:26:30.512501 4861 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-tm5fp" Feb 19 13:26:30 crc kubenswrapper[4861]: I0219 13:26:30.986738 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6679bf9b57-tm5fp"] Feb 19 13:26:31 crc kubenswrapper[4861]: I0219 13:26:31.267992 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-tm5fp" event={"ID":"c131e628-2531-4d18-8793-894b7b384b43","Type":"ContainerStarted","Data":"59f76dd72759858b6f420fb48d48f41fdad74a1df3401d6c7ab00c365a0e42dd"} Feb 19 13:26:33 crc kubenswrapper[4861]: I0219 13:26:33.834266 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:26:33 crc kubenswrapper[4861]: I0219 13:26:33.834332 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:26:37 crc kubenswrapper[4861]: I0219 13:26:37.325906 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-tm5fp" event={"ID":"c131e628-2531-4d18-8793-894b7b384b43","Type":"ContainerStarted","Data":"072527e21d288569551e6c04faf7f662d2a8b2b62ed03f8af7b5dfb006ac2e18"} Feb 19 13:26:37 crc kubenswrapper[4861]: I0219 13:26:37.327553 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-tm5fp" Feb 19 13:26:37 crc 
kubenswrapper[4861]: I0219 13:26:37.370462 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-tm5fp" podStartSLOduration=1.73126954 podStartE2EDuration="7.370403205s" podCreationTimestamp="2026-02-19 13:26:30 +0000 UTC" firstStartedPulling="2026-02-19 13:26:31.001853709 +0000 UTC m=+1005.662956937" lastFinishedPulling="2026-02-19 13:26:36.640987374 +0000 UTC m=+1011.302090602" observedRunningTime="2026-02-19 13:26:37.367601168 +0000 UTC m=+1012.028704436" watchObservedRunningTime="2026-02-19 13:26:37.370403205 +0000 UTC m=+1012.031506473" Feb 19 13:26:50 crc kubenswrapper[4861]: I0219 13:26:50.518566 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-tm5fp" Feb 19 13:27:03 crc kubenswrapper[4861]: I0219 13:27:03.834156 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:27:03 crc kubenswrapper[4861]: I0219 13:27:03.835105 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.282197 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-tx4wr"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.283996 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-tx4wr" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.289443 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-tx4wr"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.297065 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-jb4dm" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.336805 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-nmcsj"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.337579 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nmcsj" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.339662 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-w8fts" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.356878 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-rgfz5"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.357729 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-rgfz5" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.360705 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-cgf62" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.374881 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-nmcsj"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.399794 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-rgfz5"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.418563 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-hlnqx"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.419619 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hlnqx" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.426814 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-xltgb" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.456207 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmfbh\" (UniqueName: \"kubernetes.io/projected/a9582c92-ce65-4865-bbcf-57b8b3c7002c-kube-api-access-fmfbh\") pod \"cinder-operator-controller-manager-5d946d989d-nmcsj\" (UID: \"a9582c92-ce65-4865-bbcf-57b8b3c7002c\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nmcsj" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.456308 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bklf\" (UniqueName: \"kubernetes.io/projected/2e56e0d7-2b43-4c87-912b-e91661077fcf-kube-api-access-2bklf\") pod \"barbican-operator-controller-manager-868647ff47-tx4wr\" (UID: \"2e56e0d7-2b43-4c87-912b-e91661077fcf\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-tx4wr" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.458486 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-hlnqx"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.464130 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-b24sf"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.464972 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-b24sf" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.469279 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-z6brb" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.487754 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-g942s"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.488476 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-kp2bg"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.488930 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-kp2bg" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.490331 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-g942s" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.493171 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.493321 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-6zfx5" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.493538 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-p56tl" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.496698 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-b24sf"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.519046 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-g942s"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.557057 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmfbh\" (UniqueName: \"kubernetes.io/projected/a9582c92-ce65-4865-bbcf-57b8b3c7002c-kube-api-access-fmfbh\") pod \"cinder-operator-controller-manager-5d946d989d-nmcsj\" (UID: \"a9582c92-ce65-4865-bbcf-57b8b3c7002c\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nmcsj" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.557104 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg5hd\" (UniqueName: \"kubernetes.io/projected/a3459958-b0c6-41f2-afb6-0a9a15ca3837-kube-api-access-pg5hd\") pod \"glance-operator-controller-manager-77987464f4-hlnqx\" (UID: \"a3459958-b0c6-41f2-afb6-0a9a15ca3837\") " 
pod="openstack-operators/glance-operator-controller-manager-77987464f4-hlnqx" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.557148 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skbzp\" (UniqueName: \"kubernetes.io/projected/dacd1beb-af59-4f30-8b76-ef41658bf9f4-kube-api-access-skbzp\") pod \"infra-operator-controller-manager-79d975b745-kp2bg\" (UID: \"dacd1beb-af59-4f30-8b76-ef41658bf9f4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-kp2bg" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.557183 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsszq\" (UniqueName: \"kubernetes.io/projected/8b9ba3dc-beae-4d3b-8d8d-d595eb7c1ed4-kube-api-access-xsszq\") pod \"designate-operator-controller-manager-6d8bf5c495-rgfz5\" (UID: \"8b9ba3dc-beae-4d3b-8d8d-d595eb7c1ed4\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-rgfz5" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.557214 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bklf\" (UniqueName: \"kubernetes.io/projected/2e56e0d7-2b43-4c87-912b-e91661077fcf-kube-api-access-2bklf\") pod \"barbican-operator-controller-manager-868647ff47-tx4wr\" (UID: \"2e56e0d7-2b43-4c87-912b-e91661077fcf\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-tx4wr" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.557232 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dacd1beb-af59-4f30-8b76-ef41658bf9f4-cert\") pod \"infra-operator-controller-manager-79d975b745-kp2bg\" (UID: \"dacd1beb-af59-4f30-8b76-ef41658bf9f4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-kp2bg" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 
13:27:10.568494 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-kp2bg"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.576992 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-7lwn6"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.580251 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-7lwn6" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.583980 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-5vzng" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.584574 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-7lwn6"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.593846 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmfbh\" (UniqueName: \"kubernetes.io/projected/a9582c92-ce65-4865-bbcf-57b8b3c7002c-kube-api-access-fmfbh\") pod \"cinder-operator-controller-manager-5d946d989d-nmcsj\" (UID: \"a9582c92-ce65-4865-bbcf-57b8b3c7002c\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nmcsj" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.605538 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-g76jd"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.606536 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-g76jd" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.606753 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bklf\" (UniqueName: \"kubernetes.io/projected/2e56e0d7-2b43-4c87-912b-e91661077fcf-kube-api-access-2bklf\") pod \"barbican-operator-controller-manager-868647ff47-tx4wr\" (UID: \"2e56e0d7-2b43-4c87-912b-e91661077fcf\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-tx4wr" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.608618 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-6q8nw"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.609429 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-6q8nw" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.610739 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-xg5ql" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.611283 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-5rdj4" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.615619 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-g76jd"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.623498 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-6q8nw"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.632759 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-xtsr5"] Feb 19 13:27:10 crc 
kubenswrapper[4861]: I0219 13:27:10.633595 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-xtsr5" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.638409 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-bl22w" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.639590 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-x8xxz"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.640849 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-x8xxz" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.641968 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-h29hh" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.651132 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-xtsr5"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.662554 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-x8xxz"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.663669 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gj8j\" (UniqueName: \"kubernetes.io/projected/98d71c2d-33db-49a7-bb86-918858a91612-kube-api-access-5gj8j\") pod \"manila-operator-controller-manager-54f6768c69-g76jd\" (UID: \"98d71c2d-33db-49a7-bb86-918858a91612\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-g76jd" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.663704 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5tnb\" (UniqueName: \"kubernetes.io/projected/4c04516b-4856-4f67-abf9-722af4a25ab6-kube-api-access-g5tnb\") pod \"mariadb-operator-controller-manager-6994f66f48-xtsr5\" (UID: \"4c04516b-4856-4f67-abf9-722af4a25ab6\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-xtsr5" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.663744 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg5hd\" (UniqueName: \"kubernetes.io/projected/a3459958-b0c6-41f2-afb6-0a9a15ca3837-kube-api-access-pg5hd\") pod \"glance-operator-controller-manager-77987464f4-hlnqx\" (UID: \"a3459958-b0c6-41f2-afb6-0a9a15ca3837\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-hlnqx" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.663783 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skbzp\" (UniqueName: \"kubernetes.io/projected/dacd1beb-af59-4f30-8b76-ef41658bf9f4-kube-api-access-skbzp\") pod \"infra-operator-controller-manager-79d975b745-kp2bg\" (UID: \"dacd1beb-af59-4f30-8b76-ef41658bf9f4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-kp2bg" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.663803 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5txxb\" (UniqueName: \"kubernetes.io/projected/0bd94e11-4fa6-4d29-89a9-e2a493d94b89-kube-api-access-5txxb\") pod \"keystone-operator-controller-manager-b4d948c87-6q8nw\" (UID: \"0bd94e11-4fa6-4d29-89a9-e2a493d94b89\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-6q8nw" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.663835 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsszq\" (UniqueName: 
\"kubernetes.io/projected/8b9ba3dc-beae-4d3b-8d8d-d595eb7c1ed4-kube-api-access-xsszq\") pod \"designate-operator-controller-manager-6d8bf5c495-rgfz5\" (UID: \"8b9ba3dc-beae-4d3b-8d8d-d595eb7c1ed4\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-rgfz5" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.663857 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9d9x\" (UniqueName: \"kubernetes.io/projected/4401cea1-fce7-4ec1-938b-2519cf2a5521-kube-api-access-d9d9x\") pod \"ironic-operator-controller-manager-554564d7fc-7lwn6\" (UID: \"4401cea1-fce7-4ec1-938b-2519cf2a5521\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-7lwn6" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.663882 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjv6z\" (UniqueName: \"kubernetes.io/projected/127c1b36-40d1-434a-803b-21cc75d9b41a-kube-api-access-kjv6z\") pod \"heat-operator-controller-manager-69f49c598c-b24sf\" (UID: \"127c1b36-40d1-434a-803b-21cc75d9b41a\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-b24sf" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.663901 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wllm7\" (UniqueName: \"kubernetes.io/projected/4e0aba21-f157-4cdc-8b37-b043ed6298c7-kube-api-access-wllm7\") pod \"neutron-operator-controller-manager-64ddbf8bb-x8xxz\" (UID: \"4e0aba21-f157-4cdc-8b37-b043ed6298c7\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-x8xxz" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.663920 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dacd1beb-af59-4f30-8b76-ef41658bf9f4-cert\") pod 
\"infra-operator-controller-manager-79d975b745-kp2bg\" (UID: \"dacd1beb-af59-4f30-8b76-ef41658bf9f4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-kp2bg" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.663949 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcwd6\" (UniqueName: \"kubernetes.io/projected/49dd31ac-b688-453a-9701-001ce3063ea7-kube-api-access-rcwd6\") pod \"horizon-operator-controller-manager-5b9b8895d5-g942s\" (UID: \"49dd31ac-b688-453a-9701-001ce3063ea7\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-g942s" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.663992 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-tx4wr" Feb 19 13:27:10 crc kubenswrapper[4861]: E0219 13:27:10.664549 4861 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 13:27:10 crc kubenswrapper[4861]: E0219 13:27:10.664597 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dacd1beb-af59-4f30-8b76-ef41658bf9f4-cert podName:dacd1beb-af59-4f30-8b76-ef41658bf9f4 nodeName:}" failed. No retries permitted until 2026-02-19 13:27:11.164581527 +0000 UTC m=+1045.825684755 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dacd1beb-af59-4f30-8b76-ef41658bf9f4-cert") pod "infra-operator-controller-manager-79d975b745-kp2bg" (UID: "dacd1beb-af59-4f30-8b76-ef41658bf9f4") : secret "infra-operator-webhook-server-cert" not found Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.671893 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-lgnlp"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.672802 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lgnlp" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.673656 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nmcsj" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.674157 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-j2d9j" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.684982 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-x8bv6"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.686532 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-x8bv6" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.690015 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-mxldm" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.690516 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsszq\" (UniqueName: \"kubernetes.io/projected/8b9ba3dc-beae-4d3b-8d8d-d595eb7c1ed4-kube-api-access-xsszq\") pod \"designate-operator-controller-manager-6d8bf5c495-rgfz5\" (UID: \"8b9ba3dc-beae-4d3b-8d8d-d595eb7c1ed4\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-rgfz5" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.707067 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg5hd\" (UniqueName: \"kubernetes.io/projected/a3459958-b0c6-41f2-afb6-0a9a15ca3837-kube-api-access-pg5hd\") pod \"glance-operator-controller-manager-77987464f4-hlnqx\" (UID: \"a3459958-b0c6-41f2-afb6-0a9a15ca3837\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-hlnqx" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.712620 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skbzp\" (UniqueName: \"kubernetes.io/projected/dacd1beb-af59-4f30-8b76-ef41658bf9f4-kube-api-access-skbzp\") pod \"infra-operator-controller-manager-79d975b745-kp2bg\" (UID: \"dacd1beb-af59-4f30-8b76-ef41658bf9f4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-kp2bg" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.730206 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-x8bv6"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.733074 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-lgnlp"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.741718 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.743746 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.746680 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.746828 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-qs2d6" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.747287 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hlnqx" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.755408 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-mlk9q"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.766850 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.766877 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9d9x\" (UniqueName: \"kubernetes.io/projected/4401cea1-fce7-4ec1-938b-2519cf2a5521-kube-api-access-d9d9x\") pod \"ironic-operator-controller-manager-554564d7fc-7lwn6\" (UID: \"4401cea1-fce7-4ec1-938b-2519cf2a5521\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-7lwn6" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.766914 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjv6z\" (UniqueName: \"kubernetes.io/projected/127c1b36-40d1-434a-803b-21cc75d9b41a-kube-api-access-kjv6z\") pod \"heat-operator-controller-manager-69f49c598c-b24sf\" (UID: \"127c1b36-40d1-434a-803b-21cc75d9b41a\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-b24sf" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.766936 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wllm7\" (UniqueName: \"kubernetes.io/projected/4e0aba21-f157-4cdc-8b37-b043ed6298c7-kube-api-access-wllm7\") pod \"neutron-operator-controller-manager-64ddbf8bb-x8xxz\" (UID: \"4e0aba21-f157-4cdc-8b37-b043ed6298c7\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-x8xxz" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.766942 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mlk9q" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.766977 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcwd6\" (UniqueName: \"kubernetes.io/projected/49dd31ac-b688-453a-9701-001ce3063ea7-kube-api-access-rcwd6\") pod \"horizon-operator-controller-manager-5b9b8895d5-g942s\" (UID: \"49dd31ac-b688-453a-9701-001ce3063ea7\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-g942s" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.767000 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gj8j\" (UniqueName: \"kubernetes.io/projected/98d71c2d-33db-49a7-bb86-918858a91612-kube-api-access-5gj8j\") pod \"manila-operator-controller-manager-54f6768c69-g76jd\" (UID: \"98d71c2d-33db-49a7-bb86-918858a91612\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-g76jd" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.767019 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5tnb\" (UniqueName: \"kubernetes.io/projected/4c04516b-4856-4f67-abf9-722af4a25ab6-kube-api-access-g5tnb\") pod \"mariadb-operator-controller-manager-6994f66f48-xtsr5\" (UID: \"4c04516b-4856-4f67-abf9-722af4a25ab6\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-xtsr5" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.767064 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5txxb\" (UniqueName: \"kubernetes.io/projected/0bd94e11-4fa6-4d29-89a9-e2a493d94b89-kube-api-access-5txxb\") pod \"keystone-operator-controller-manager-b4d948c87-6q8nw\" (UID: \"0bd94e11-4fa6-4d29-89a9-e2a493d94b89\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-6q8nw" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 
13:27:10.771538 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-mlk9q"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.783545 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-qnsc9" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.784313 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-qvc2h"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.799573 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gj8j\" (UniqueName: \"kubernetes.io/projected/98d71c2d-33db-49a7-bb86-918858a91612-kube-api-access-5gj8j\") pod \"manila-operator-controller-manager-54f6768c69-g76jd\" (UID: \"98d71c2d-33db-49a7-bb86-918858a91612\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-g76jd" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.799692 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wllm7\" (UniqueName: \"kubernetes.io/projected/4e0aba21-f157-4cdc-8b37-b043ed6298c7-kube-api-access-wllm7\") pod \"neutron-operator-controller-manager-64ddbf8bb-x8xxz\" (UID: \"4e0aba21-f157-4cdc-8b37-b043ed6298c7\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-x8xxz" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.801298 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjv6z\" (UniqueName: \"kubernetes.io/projected/127c1b36-40d1-434a-803b-21cc75d9b41a-kube-api-access-kjv6z\") pod \"heat-operator-controller-manager-69f49c598c-b24sf\" (UID: \"127c1b36-40d1-434a-803b-21cc75d9b41a\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-b24sf" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.802232 4861 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-d9d9x\" (UniqueName: \"kubernetes.io/projected/4401cea1-fce7-4ec1-938b-2519cf2a5521-kube-api-access-d9d9x\") pod \"ironic-operator-controller-manager-554564d7fc-7lwn6\" (UID: \"4401cea1-fce7-4ec1-938b-2519cf2a5521\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-7lwn6" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.802903 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5tnb\" (UniqueName: \"kubernetes.io/projected/4c04516b-4856-4f67-abf9-722af4a25ab6-kube-api-access-g5tnb\") pod \"mariadb-operator-controller-manager-6994f66f48-xtsr5\" (UID: \"4c04516b-4856-4f67-abf9-722af4a25ab6\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-xtsr5" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.805434 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qvc2h" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.807461 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-2nnv6" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.807710 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5txxb\" (UniqueName: \"kubernetes.io/projected/0bd94e11-4fa6-4d29-89a9-e2a493d94b89-kube-api-access-5txxb\") pod \"keystone-operator-controller-manager-b4d948c87-6q8nw\" (UID: \"0bd94e11-4fa6-4d29-89a9-e2a493d94b89\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-6q8nw" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.808222 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-8nbvg"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.812323 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-8nbvg" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.816951 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcwd6\" (UniqueName: \"kubernetes.io/projected/49dd31ac-b688-453a-9701-001ce3063ea7-kube-api-access-rcwd6\") pod \"horizon-operator-controller-manager-5b9b8895d5-g942s\" (UID: \"49dd31ac-b688-453a-9701-001ce3063ea7\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-g942s" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.818037 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-9hmd4" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.819580 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-qvc2h"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.823481 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-8nbvg"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.847797 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jn5fv"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.848639 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jn5fv" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.850267 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-zjbd5" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.860413 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jn5fv"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.866224 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-g942s" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.869620 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzmfq\" (UniqueName: \"kubernetes.io/projected/52fdb95f-0a68-4e4b-b205-06b492232999-kube-api-access-zzmfq\") pod \"nova-operator-controller-manager-567668f5cf-lgnlp\" (UID: \"52fdb95f-0a68-4e4b-b205-06b492232999\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lgnlp" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.869673 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtnx7\" (UniqueName: \"kubernetes.io/projected/b62cf279-7b44-4aae-9417-4a9230a62e5e-kube-api-access-wtnx7\") pod \"ovn-operator-controller-manager-d44cf6b75-mlk9q\" (UID: \"b62cf279-7b44-4aae-9417-4a9230a62e5e\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mlk9q" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.869704 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9rdd\" (UniqueName: \"kubernetes.io/projected/cc6aead1-61fb-403f-9388-81c8d84a0588-kube-api-access-l9rdd\") pod 
\"placement-operator-controller-manager-8497b45c89-qvc2h\" (UID: \"cc6aead1-61fb-403f-9388-81c8d84a0588\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qvc2h" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.869721 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc8s7\" (UniqueName: \"kubernetes.io/projected/2a1a19ba-9308-4f92-97af-210cfbd20e18-kube-api-access-kc8s7\") pod \"swift-operator-controller-manager-68f46476f-8nbvg\" (UID: \"2a1a19ba-9308-4f92-97af-210cfbd20e18\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-8nbvg" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.869748 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csmsl\" (UniqueName: \"kubernetes.io/projected/9e8dc669-82a2-4d0a-bed3-7cb633ed2692-kube-api-access-csmsl\") pod \"telemetry-operator-controller-manager-7f45b4ff68-jn5fv\" (UID: \"9e8dc669-82a2-4d0a-bed3-7cb633ed2692\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jn5fv" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.869770 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxlf7\" (UniqueName: \"kubernetes.io/projected/40169e6a-2e88-4d48-8ca9-8153ae9a109b-kube-api-access-lxlf7\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g\" (UID: \"40169e6a-2e88-4d48-8ca9-8153ae9a109b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.869825 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dms4s\" (UniqueName: \"kubernetes.io/projected/db3d27d2-0a91-4534-b995-3e42bdf891ab-kube-api-access-dms4s\") pod 
\"octavia-operator-controller-manager-69f8888797-x8bv6\" (UID: \"db3d27d2-0a91-4534-b995-3e42bdf891ab\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-x8bv6" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.869917 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/40169e6a-2e88-4d48-8ca9-8153ae9a109b-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g\" (UID: \"40169e6a-2e88-4d48-8ca9-8153ae9a109b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.899043 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-f457f"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.899846 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-f457f" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.903384 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-f457f"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.905585 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-d6d6c" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.939990 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-7lwn6" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.952315 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-g76jd" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.962700 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-wvd7g"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.963555 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-wvd7g" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.967749 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-rzdwx" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.970287 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-wvd7g"] Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.971320 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dms4s\" (UniqueName: \"kubernetes.io/projected/db3d27d2-0a91-4534-b995-3e42bdf891ab-kube-api-access-dms4s\") pod \"octavia-operator-controller-manager-69f8888797-x8bv6\" (UID: \"db3d27d2-0a91-4534-b995-3e42bdf891ab\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-x8bv6" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.971475 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/40169e6a-2e88-4d48-8ca9-8153ae9a109b-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g\" (UID: \"40169e6a-2e88-4d48-8ca9-8153ae9a109b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.971553 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzmfq\" 
(UniqueName: \"kubernetes.io/projected/52fdb95f-0a68-4e4b-b205-06b492232999-kube-api-access-zzmfq\") pod \"nova-operator-controller-manager-567668f5cf-lgnlp\" (UID: \"52fdb95f-0a68-4e4b-b205-06b492232999\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lgnlp" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.971643 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtnx7\" (UniqueName: \"kubernetes.io/projected/b62cf279-7b44-4aae-9417-4a9230a62e5e-kube-api-access-wtnx7\") pod \"ovn-operator-controller-manager-d44cf6b75-mlk9q\" (UID: \"b62cf279-7b44-4aae-9417-4a9230a62e5e\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mlk9q" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.971707 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9rdd\" (UniqueName: \"kubernetes.io/projected/cc6aead1-61fb-403f-9388-81c8d84a0588-kube-api-access-l9rdd\") pod \"placement-operator-controller-manager-8497b45c89-qvc2h\" (UID: \"cc6aead1-61fb-403f-9388-81c8d84a0588\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qvc2h" Feb 19 13:27:10 crc kubenswrapper[4861]: E0219 13:27:10.971712 4861 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 13:27:10 crc kubenswrapper[4861]: E0219 13:27:10.971760 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40169e6a-2e88-4d48-8ca9-8153ae9a109b-cert podName:40169e6a-2e88-4d48-8ca9-8153ae9a109b nodeName:}" failed. No retries permitted until 2026-02-19 13:27:11.47174582 +0000 UTC m=+1046.132849048 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/40169e6a-2e88-4d48-8ca9-8153ae9a109b-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g" (UID: "40169e6a-2e88-4d48-8ca9-8153ae9a109b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.971756 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc8s7\" (UniqueName: \"kubernetes.io/projected/2a1a19ba-9308-4f92-97af-210cfbd20e18-kube-api-access-kc8s7\") pod \"swift-operator-controller-manager-68f46476f-8nbvg\" (UID: \"2a1a19ba-9308-4f92-97af-210cfbd20e18\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-8nbvg" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.971800 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csmsl\" (UniqueName: \"kubernetes.io/projected/9e8dc669-82a2-4d0a-bed3-7cb633ed2692-kube-api-access-csmsl\") pod \"telemetry-operator-controller-manager-7f45b4ff68-jn5fv\" (UID: \"9e8dc669-82a2-4d0a-bed3-7cb633ed2692\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jn5fv" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.971829 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxlf7\" (UniqueName: \"kubernetes.io/projected/40169e6a-2e88-4d48-8ca9-8153ae9a109b-kube-api-access-lxlf7\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g\" (UID: \"40169e6a-2e88-4d48-8ca9-8153ae9a109b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.988896 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dms4s\" (UniqueName: \"kubernetes.io/projected/db3d27d2-0a91-4534-b995-3e42bdf891ab-kube-api-access-dms4s\") pod 
\"octavia-operator-controller-manager-69f8888797-x8bv6\" (UID: \"db3d27d2-0a91-4534-b995-3e42bdf891ab\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-x8bv6" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.994337 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-6q8nw" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.994935 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-rgfz5" Feb 19 13:27:10 crc kubenswrapper[4861]: I0219 13:27:10.998242 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtnx7\" (UniqueName: \"kubernetes.io/projected/b62cf279-7b44-4aae-9417-4a9230a62e5e-kube-api-access-wtnx7\") pod \"ovn-operator-controller-manager-d44cf6b75-mlk9q\" (UID: \"b62cf279-7b44-4aae-9417-4a9230a62e5e\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mlk9q" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:10.999945 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc8s7\" (UniqueName: \"kubernetes.io/projected/2a1a19ba-9308-4f92-97af-210cfbd20e18-kube-api-access-kc8s7\") pod \"swift-operator-controller-manager-68f46476f-8nbvg\" (UID: \"2a1a19ba-9308-4f92-97af-210cfbd20e18\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-8nbvg" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.000560 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxlf7\" (UniqueName: \"kubernetes.io/projected/40169e6a-2e88-4d48-8ca9-8153ae9a109b-kube-api-access-lxlf7\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g\" (UID: \"40169e6a-2e88-4d48-8ca9-8153ae9a109b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g" Feb 19 
13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.012779 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzmfq\" (UniqueName: \"kubernetes.io/projected/52fdb95f-0a68-4e4b-b205-06b492232999-kube-api-access-zzmfq\") pod \"nova-operator-controller-manager-567668f5cf-lgnlp\" (UID: \"52fdb95f-0a68-4e4b-b205-06b492232999\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lgnlp" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.013128 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9rdd\" (UniqueName: \"kubernetes.io/projected/cc6aead1-61fb-403f-9388-81c8d84a0588-kube-api-access-l9rdd\") pod \"placement-operator-controller-manager-8497b45c89-qvc2h\" (UID: \"cc6aead1-61fb-403f-9388-81c8d84a0588\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qvc2h" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.017188 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csmsl\" (UniqueName: \"kubernetes.io/projected/9e8dc669-82a2-4d0a-bed3-7cb633ed2692-kube-api-access-csmsl\") pod \"telemetry-operator-controller-manager-7f45b4ff68-jn5fv\" (UID: \"9e8dc669-82a2-4d0a-bed3-7cb633ed2692\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jn5fv" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.024963 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69ff7bc449-t5mvf"] Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.025816 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-t5mvf" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.032331 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.033122 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.041631 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-dqr4w" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.042224 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69ff7bc449-t5mvf"] Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.068640 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-xtsr5" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.070933 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-tx4wr"] Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.074916 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6mwb\" (UniqueName: \"kubernetes.io/projected/b6fc0d09-ceeb-4f62-8dcd-277cd8f27371-kube-api-access-p6mwb\") pod \"test-operator-controller-manager-7866795846-f457f\" (UID: \"b6fc0d09-ceeb-4f62-8dcd-277cd8f27371\") " pod="openstack-operators/test-operator-controller-manager-7866795846-f457f" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.075057 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nxzw\" (UniqueName: 
\"kubernetes.io/projected/729ddab6-f042-425d-aa39-2d18efc216d6-kube-api-access-6nxzw\") pod \"watcher-operator-controller-manager-5db88f68c-wvd7g\" (UID: \"729ddab6-f042-425d-aa39-2d18efc216d6\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-wvd7g" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.106053 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-x8xxz" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.106770 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-b24sf" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.127402 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lgnlp" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.167212 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-x8bv6" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.176615 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-t5mvf\" (UID: \"716d511a-dfec-4b60-b963-8cd3f03b6e43\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-t5mvf" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.176679 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjhzx\" (UniqueName: \"kubernetes.io/projected/716d511a-dfec-4b60-b963-8cd3f03b6e43-kube-api-access-gjhzx\") pod \"openstack-operator-controller-manager-69ff7bc449-t5mvf\" (UID: \"716d511a-dfec-4b60-b963-8cd3f03b6e43\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-t5mvf" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.176712 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-t5mvf\" (UID: \"716d511a-dfec-4b60-b963-8cd3f03b6e43\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-t5mvf" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.176778 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dacd1beb-af59-4f30-8b76-ef41658bf9f4-cert\") pod \"infra-operator-controller-manager-79d975b745-kp2bg\" (UID: \"dacd1beb-af59-4f30-8b76-ef41658bf9f4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-kp2bg" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 
13:27:11.176824 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nxzw\" (UniqueName: \"kubernetes.io/projected/729ddab6-f042-425d-aa39-2d18efc216d6-kube-api-access-6nxzw\") pod \"watcher-operator-controller-manager-5db88f68c-wvd7g\" (UID: \"729ddab6-f042-425d-aa39-2d18efc216d6\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-wvd7g" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.176900 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6mwb\" (UniqueName: \"kubernetes.io/projected/b6fc0d09-ceeb-4f62-8dcd-277cd8f27371-kube-api-access-p6mwb\") pod \"test-operator-controller-manager-7866795846-f457f\" (UID: \"b6fc0d09-ceeb-4f62-8dcd-277cd8f27371\") " pod="openstack-operators/test-operator-controller-manager-7866795846-f457f" Feb 19 13:27:11 crc kubenswrapper[4861]: E0219 13:27:11.177450 4861 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 13:27:11 crc kubenswrapper[4861]: E0219 13:27:11.177510 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dacd1beb-af59-4f30-8b76-ef41658bf9f4-cert podName:dacd1beb-af59-4f30-8b76-ef41658bf9f4 nodeName:}" failed. No retries permitted until 2026-02-19 13:27:12.177495375 +0000 UTC m=+1046.838598603 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dacd1beb-af59-4f30-8b76-ef41658bf9f4-cert") pod "infra-operator-controller-manager-79d975b745-kp2bg" (UID: "dacd1beb-af59-4f30-8b76-ef41658bf9f4") : secret "infra-operator-webhook-server-cert" not found Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.192597 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k4k92"] Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.195753 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k4k92" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.196124 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6mwb\" (UniqueName: \"kubernetes.io/projected/b6fc0d09-ceeb-4f62-8dcd-277cd8f27371-kube-api-access-p6mwb\") pod \"test-operator-controller-manager-7866795846-f457f\" (UID: \"b6fc0d09-ceeb-4f62-8dcd-277cd8f27371\") " pod="openstack-operators/test-operator-controller-manager-7866795846-f457f" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.197178 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mlk9q" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.199152 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-q56vk" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.200987 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nxzw\" (UniqueName: \"kubernetes.io/projected/729ddab6-f042-425d-aa39-2d18efc216d6-kube-api-access-6nxzw\") pod \"watcher-operator-controller-manager-5db88f68c-wvd7g\" (UID: \"729ddab6-f042-425d-aa39-2d18efc216d6\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-wvd7g" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.206872 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k4k92"] Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.216332 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qvc2h" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.231563 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-8nbvg" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.258648 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jn5fv" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.271851 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-f457f" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.280497 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-t5mvf\" (UID: \"716d511a-dfec-4b60-b963-8cd3f03b6e43\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-t5mvf" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.280551 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjhzx\" (UniqueName: \"kubernetes.io/projected/716d511a-dfec-4b60-b963-8cd3f03b6e43-kube-api-access-gjhzx\") pod \"openstack-operator-controller-manager-69ff7bc449-t5mvf\" (UID: \"716d511a-dfec-4b60-b963-8cd3f03b6e43\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-t5mvf" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.280574 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-t5mvf\" (UID: \"716d511a-dfec-4b60-b963-8cd3f03b6e43\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-t5mvf" Feb 19 13:27:11 crc kubenswrapper[4861]: E0219 13:27:11.281380 4861 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 13:27:11 crc kubenswrapper[4861]: E0219 13:27:11.281514 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-webhook-certs podName:716d511a-dfec-4b60-b963-8cd3f03b6e43 nodeName:}" failed. 
No retries permitted until 2026-02-19 13:27:11.781414281 +0000 UTC m=+1046.442517509 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-t5mvf" (UID: "716d511a-dfec-4b60-b963-8cd3f03b6e43") : secret "webhook-server-cert" not found Feb 19 13:27:11 crc kubenswrapper[4861]: E0219 13:27:11.281765 4861 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 13:27:11 crc kubenswrapper[4861]: E0219 13:27:11.281789 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-metrics-certs podName:716d511a-dfec-4b60-b963-8cd3f03b6e43 nodeName:}" failed. No retries permitted until 2026-02-19 13:27:11.781782101 +0000 UTC m=+1046.442885329 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-t5mvf" (UID: "716d511a-dfec-4b60-b963-8cd3f03b6e43") : secret "metrics-server-cert" not found Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.281963 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-wvd7g" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.308475 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjhzx\" (UniqueName: \"kubernetes.io/projected/716d511a-dfec-4b60-b963-8cd3f03b6e43-kube-api-access-gjhzx\") pod \"openstack-operator-controller-manager-69ff7bc449-t5mvf\" (UID: \"716d511a-dfec-4b60-b963-8cd3f03b6e43\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-t5mvf" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.339339 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-nmcsj"] Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.371997 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-hlnqx"] Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.383895 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sbhq\" (UniqueName: \"kubernetes.io/projected/430ed7f7-365f-4636-86ce-d257a9203395-kube-api-access-5sbhq\") pod \"rabbitmq-cluster-operator-manager-668c99d594-k4k92\" (UID: \"430ed7f7-365f-4636-86ce-d257a9203395\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k4k92" Feb 19 13:27:11 crc kubenswrapper[4861]: W0219 13:27:11.399447 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9582c92_ce65_4865_bbcf_57b8b3c7002c.slice/crio-32cffdd119c1ac7b09973a3f3ade7549cbfcbc635585cb30d6334c261b451908 WatchSource:0}: Error finding container 32cffdd119c1ac7b09973a3f3ade7549cbfcbc635585cb30d6334c261b451908: Status 404 returned error can't find the container with id 32cffdd119c1ac7b09973a3f3ade7549cbfcbc635585cb30d6334c261b451908 Feb 19 13:27:11 crc 
kubenswrapper[4861]: W0219 13:27:11.416777 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3459958_b0c6_41f2_afb6_0a9a15ca3837.slice/crio-c55be8441079ae52c18d93a85ab927c1736a86dd4f2bc69f93b41342f80c4938 WatchSource:0}: Error finding container c55be8441079ae52c18d93a85ab927c1736a86dd4f2bc69f93b41342f80c4938: Status 404 returned error can't find the container with id c55be8441079ae52c18d93a85ab927c1736a86dd4f2bc69f93b41342f80c4938 Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.486483 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sbhq\" (UniqueName: \"kubernetes.io/projected/430ed7f7-365f-4636-86ce-d257a9203395-kube-api-access-5sbhq\") pod \"rabbitmq-cluster-operator-manager-668c99d594-k4k92\" (UID: \"430ed7f7-365f-4636-86ce-d257a9203395\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k4k92" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.486548 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/40169e6a-2e88-4d48-8ca9-8153ae9a109b-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g\" (UID: \"40169e6a-2e88-4d48-8ca9-8153ae9a109b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g" Feb 19 13:27:11 crc kubenswrapper[4861]: E0219 13:27:11.486765 4861 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 13:27:11 crc kubenswrapper[4861]: E0219 13:27:11.486829 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40169e6a-2e88-4d48-8ca9-8153ae9a109b-cert podName:40169e6a-2e88-4d48-8ca9-8153ae9a109b nodeName:}" failed. 
No retries permitted until 2026-02-19 13:27:12.486810726 +0000 UTC m=+1047.147913954 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/40169e6a-2e88-4d48-8ca9-8153ae9a109b-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g" (UID: "40169e6a-2e88-4d48-8ca9-8153ae9a109b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.509353 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sbhq\" (UniqueName: \"kubernetes.io/projected/430ed7f7-365f-4636-86ce-d257a9203395-kube-api-access-5sbhq\") pod \"rabbitmq-cluster-operator-manager-668c99d594-k4k92\" (UID: \"430ed7f7-365f-4636-86ce-d257a9203395\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k4k92" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.567572 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k4k92" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.651740 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-7lwn6"] Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.660341 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-g942s"] Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.667267 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-g76jd"] Feb 19 13:27:11 crc kubenswrapper[4861]: W0219 13:27:11.701915 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98d71c2d_33db_49a7_bb86_918858a91612.slice/crio-b360ac5a80ea5a62a892c0bae2202285e88e25d9fd3af073fdd2b36814c1da7c 
WatchSource:0}: Error finding container b360ac5a80ea5a62a892c0bae2202285e88e25d9fd3af073fdd2b36814c1da7c: Status 404 returned error can't find the container with id b360ac5a80ea5a62a892c0bae2202285e88e25d9fd3af073fdd2b36814c1da7c Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.706638 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hlnqx" event={"ID":"a3459958-b0c6-41f2-afb6-0a9a15ca3837","Type":"ContainerStarted","Data":"c55be8441079ae52c18d93a85ab927c1736a86dd4f2bc69f93b41342f80c4938"} Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.722058 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-7lwn6" event={"ID":"4401cea1-fce7-4ec1-938b-2519cf2a5521","Type":"ContainerStarted","Data":"07d129aa0f0c570d664ce1a2a712b3b923296f203c659ccb4d12fdc755dcd6cc"} Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.734239 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nmcsj" event={"ID":"a9582c92-ce65-4865-bbcf-57b8b3c7002c","Type":"ContainerStarted","Data":"32cffdd119c1ac7b09973a3f3ade7549cbfcbc635585cb30d6334c261b451908"} Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.739033 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-tx4wr" event={"ID":"2e56e0d7-2b43-4c87-912b-e91661077fcf","Type":"ContainerStarted","Data":"f7c7155f0c87cd6fb63015d5e5e3c534b568f64cd4f039aa42adc656c6d9fde8"} Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.741934 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-g942s" event={"ID":"49dd31ac-b688-453a-9701-001ce3063ea7","Type":"ContainerStarted","Data":"62417e120edd45e00e71ba5d7b9d32125dd231f95704d18f979863d747428622"} Feb 19 13:27:11 crc 
kubenswrapper[4861]: I0219 13:27:11.789976 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-t5mvf\" (UID: \"716d511a-dfec-4b60-b963-8cd3f03b6e43\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-t5mvf" Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.790030 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-t5mvf\" (UID: \"716d511a-dfec-4b60-b963-8cd3f03b6e43\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-t5mvf" Feb 19 13:27:11 crc kubenswrapper[4861]: E0219 13:27:11.790246 4861 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 13:27:11 crc kubenswrapper[4861]: E0219 13:27:11.790300 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-webhook-certs podName:716d511a-dfec-4b60-b963-8cd3f03b6e43 nodeName:}" failed. No retries permitted until 2026-02-19 13:27:12.79028306 +0000 UTC m=+1047.451386288 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-t5mvf" (UID: "716d511a-dfec-4b60-b963-8cd3f03b6e43") : secret "webhook-server-cert" not found Feb 19 13:27:11 crc kubenswrapper[4861]: E0219 13:27:11.790597 4861 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 13:27:11 crc kubenswrapper[4861]: E0219 13:27:11.790686 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-metrics-certs podName:716d511a-dfec-4b60-b963-8cd3f03b6e43 nodeName:}" failed. No retries permitted until 2026-02-19 13:27:12.79066507 +0000 UTC m=+1047.451768298 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-t5mvf" (UID: "716d511a-dfec-4b60-b963-8cd3f03b6e43") : secret "metrics-server-cert" not found Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.820677 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-6q8nw"] Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.829412 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-x8xxz"] Feb 19 13:27:11 crc kubenswrapper[4861]: W0219 13:27:11.838409 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bd94e11_4fa6_4d29_89a9_e2a493d94b89.slice/crio-4c718cb994584e5e4d9af0c54ce0fc8b5de35ee7ff23373c27172ac249100402 WatchSource:0}: Error finding container 4c718cb994584e5e4d9af0c54ce0fc8b5de35ee7ff23373c27172ac249100402: Status 404 returned error can't find the 
container with id 4c718cb994584e5e4d9af0c54ce0fc8b5de35ee7ff23373c27172ac249100402 Feb 19 13:27:11 crc kubenswrapper[4861]: W0219 13:27:11.843651 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e0aba21_f157_4cdc_8b37_b043ed6298c7.slice/crio-ee3d25f831d62c7472554550976d7587af2128de7fd41d7eec4cbe9b2f686081 WatchSource:0}: Error finding container ee3d25f831d62c7472554550976d7587af2128de7fd41d7eec4cbe9b2f686081: Status 404 returned error can't find the container with id ee3d25f831d62c7472554550976d7587af2128de7fd41d7eec4cbe9b2f686081 Feb 19 13:27:11 crc kubenswrapper[4861]: W0219 13:27:11.854179 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b9ba3dc_beae_4d3b_8d8d_d595eb7c1ed4.slice/crio-205ee30e11d1d091bacf4fdb8cabbb9b7cbd5b6a9e3f5911ad5f8a59cb08816f WatchSource:0}: Error finding container 205ee30e11d1d091bacf4fdb8cabbb9b7cbd5b6a9e3f5911ad5f8a59cb08816f: Status 404 returned error can't find the container with id 205ee30e11d1d091bacf4fdb8cabbb9b7cbd5b6a9e3f5911ad5f8a59cb08816f Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.856062 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-rgfz5"] Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.860722 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-xtsr5"] Feb 19 13:27:11 crc kubenswrapper[4861]: I0219 13:27:11.865925 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-b24sf"] Feb 19 13:27:12 crc kubenswrapper[4861]: I0219 13:27:12.020531 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-lgnlp"] Feb 19 13:27:12 crc kubenswrapper[4861]: W0219 
13:27:12.034335 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52fdb95f_0a68_4e4b_b205_06b492232999.slice/crio-28a80dc2896eae4615198300a68b8e7b5c6340ddc5f59b6e2803e08457a1b0cd WatchSource:0}: Error finding container 28a80dc2896eae4615198300a68b8e7b5c6340ddc5f59b6e2803e08457a1b0cd: Status 404 returned error can't find the container with id 28a80dc2896eae4615198300a68b8e7b5c6340ddc5f59b6e2803e08457a1b0cd Feb 19 13:27:12 crc kubenswrapper[4861]: I0219 13:27:12.040902 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-f457f"] Feb 19 13:27:12 crc kubenswrapper[4861]: I0219 13:27:12.083348 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-wvd7g"] Feb 19 13:27:12 crc kubenswrapper[4861]: I0219 13:27:12.089488 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-8nbvg"] Feb 19 13:27:12 crc kubenswrapper[4861]: I0219 13:27:12.094551 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-mlk9q"] Feb 19 13:27:12 crc kubenswrapper[4861]: W0219 13:27:12.100192 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod729ddab6_f042_425d_aa39_2d18efc216d6.slice/crio-f99f4eda9507dfc21b3d07596df884da3d0bd4598085549983f27c3e5ee132a7 WatchSource:0}: Error finding container f99f4eda9507dfc21b3d07596df884da3d0bd4598085549983f27c3e5ee132a7: Status 404 returned error can't find the container with id f99f4eda9507dfc21b3d07596df884da3d0bd4598085549983f27c3e5ee132a7 Feb 19 13:27:12 crc kubenswrapper[4861]: E0219 13:27:12.103113 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6nxzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-wvd7g_openstack-operators(729ddab6-f042-425d-aa39-2d18efc216d6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 13:27:12 crc kubenswrapper[4861]: E0219 13:27:12.104194 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-wvd7g" podUID="729ddab6-f042-425d-aa39-2d18efc216d6" Feb 19 13:27:12 crc kubenswrapper[4861]: W0219 13:27:12.108816 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb62cf279_7b44_4aae_9417_4a9230a62e5e.slice/crio-ed89f7ec5e8ddce503beb0ff2df1f82e0bcb8f02042674f276522b513d814e0c WatchSource:0}: Error finding container ed89f7ec5e8ddce503beb0ff2df1f82e0bcb8f02042674f276522b513d814e0c: Status 404 returned error can't find the container with id ed89f7ec5e8ddce503beb0ff2df1f82e0bcb8f02042674f276522b513d814e0c Feb 19 13:27:12 crc kubenswrapper[4861]: E0219 13:27:12.111322 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wtnx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-mlk9q_openstack-operators(b62cf279-7b44-4aae-9417-4a9230a62e5e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 13:27:12 crc kubenswrapper[4861]: E0219 13:27:12.112691 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mlk9q" podUID="b62cf279-7b44-4aae-9417-4a9230a62e5e" Feb 19 13:27:12 crc kubenswrapper[4861]: I0219 13:27:12.114720 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jn5fv"] Feb 19 13:27:12 crc kubenswrapper[4861]: W0219 13:27:12.126229 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e8dc669_82a2_4d0a_bed3_7cb633ed2692.slice/crio-143f5bc1f7dddfac6c216431d0ee98dff113a9000cded0bdf9b32c760964ad7d WatchSource:0}: Error finding container 143f5bc1f7dddfac6c216431d0ee98dff113a9000cded0bdf9b32c760964ad7d: Status 404 returned error can't find the container with id 
143f5bc1f7dddfac6c216431d0ee98dff113a9000cded0bdf9b32c760964ad7d Feb 19 13:27:12 crc kubenswrapper[4861]: E0219 13:27:12.128313 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-csmsl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7f45b4ff68-jn5fv_openstack-operators(9e8dc669-82a2-4d0a-bed3-7cb633ed2692): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 13:27:12 crc kubenswrapper[4861]: E0219 13:27:12.129757 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jn5fv" podUID="9e8dc669-82a2-4d0a-bed3-7cb633ed2692" Feb 19 13:27:12 crc kubenswrapper[4861]: I0219 13:27:12.179135 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-x8bv6"] Feb 19 13:27:12 crc kubenswrapper[4861]: I0219 13:27:12.189540 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-qvc2h"] Feb 19 13:27:12 crc kubenswrapper[4861]: W0219 13:27:12.192005 4861 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb3d27d2_0a91_4534_b995_3e42bdf891ab.slice/crio-d2f6fc952535505af3ae52a4bff531414f60128889c88330abde97918b0abf38 WatchSource:0}: Error finding container d2f6fc952535505af3ae52a4bff531414f60128889c88330abde97918b0abf38: Status 404 returned error can't find the container with id d2f6fc952535505af3ae52a4bff531414f60128889c88330abde97918b0abf38 Feb 19 13:27:12 crc kubenswrapper[4861]: E0219 13:27:12.194308 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dms4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-x8bv6_openstack-operators(db3d27d2-0a91-4534-b995-3e42bdf891ab): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 13:27:12 crc kubenswrapper[4861]: E0219 13:27:12.195405 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-x8bv6" podUID="db3d27d2-0a91-4534-b995-3e42bdf891ab" Feb 19 13:27:12 crc kubenswrapper[4861]: I0219 13:27:12.195864 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dacd1beb-af59-4f30-8b76-ef41658bf9f4-cert\") pod \"infra-operator-controller-manager-79d975b745-kp2bg\" (UID: \"dacd1beb-af59-4f30-8b76-ef41658bf9f4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-kp2bg" Feb 19 13:27:12 crc kubenswrapper[4861]: E0219 13:27:12.196111 4861 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 13:27:12 crc kubenswrapper[4861]: E0219 
13:27:12.196172 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dacd1beb-af59-4f30-8b76-ef41658bf9f4-cert podName:dacd1beb-af59-4f30-8b76-ef41658bf9f4 nodeName:}" failed. No retries permitted until 2026-02-19 13:27:14.196155959 +0000 UTC m=+1048.857259187 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dacd1beb-af59-4f30-8b76-ef41658bf9f4-cert") pod "infra-operator-controller-manager-79d975b745-kp2bg" (UID: "dacd1beb-af59-4f30-8b76-ef41658bf9f4") : secret "infra-operator-webhook-server-cert" not found Feb 19 13:27:12 crc kubenswrapper[4861]: W0219 13:27:12.196687 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc6aead1_61fb_403f_9388_81c8d84a0588.slice/crio-533bd1b83d8c3a946e8bf0b78d48d32cc2c87dae22ade714f6d60c2cf0608f6c WatchSource:0}: Error finding container 533bd1b83d8c3a946e8bf0b78d48d32cc2c87dae22ade714f6d60c2cf0608f6c: Status 404 returned error can't find the container with id 533bd1b83d8c3a946e8bf0b78d48d32cc2c87dae22ade714f6d60c2cf0608f6c Feb 19 13:27:12 crc kubenswrapper[4861]: E0219 13:27:12.199667 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l9rdd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-qvc2h_openstack-operators(cc6aead1-61fb-403f-9388-81c8d84a0588): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 13:27:12 crc kubenswrapper[4861]: E0219 13:27:12.200740 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" 
pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qvc2h" podUID="cc6aead1-61fb-403f-9388-81c8d84a0588" Feb 19 13:27:12 crc kubenswrapper[4861]: I0219 13:27:12.248704 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k4k92"] Feb 19 13:27:12 crc kubenswrapper[4861]: W0219 13:27:12.262750 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod430ed7f7_365f_4636_86ce_d257a9203395.slice/crio-9a0164f59b6c2ac7803b5cb66412317e845821e20f4b64aca8e0d85c49ef8b60 WatchSource:0}: Error finding container 9a0164f59b6c2ac7803b5cb66412317e845821e20f4b64aca8e0d85c49ef8b60: Status 404 returned error can't find the container with id 9a0164f59b6c2ac7803b5cb66412317e845821e20f4b64aca8e0d85c49ef8b60 Feb 19 13:27:12 crc kubenswrapper[4861]: I0219 13:27:12.512892 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/40169e6a-2e88-4d48-8ca9-8153ae9a109b-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g\" (UID: \"40169e6a-2e88-4d48-8ca9-8153ae9a109b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g" Feb 19 13:27:12 crc kubenswrapper[4861]: E0219 13:27:12.513059 4861 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 13:27:12 crc kubenswrapper[4861]: E0219 13:27:12.513290 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40169e6a-2e88-4d48-8ca9-8153ae9a109b-cert podName:40169e6a-2e88-4d48-8ca9-8153ae9a109b nodeName:}" failed. No retries permitted until 2026-02-19 13:27:14.51326428 +0000 UTC m=+1049.174367508 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/40169e6a-2e88-4d48-8ca9-8153ae9a109b-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g" (UID: "40169e6a-2e88-4d48-8ca9-8153ae9a109b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 13:27:12 crc kubenswrapper[4861]: I0219 13:27:12.753928 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mlk9q" event={"ID":"b62cf279-7b44-4aae-9417-4a9230a62e5e","Type":"ContainerStarted","Data":"ed89f7ec5e8ddce503beb0ff2df1f82e0bcb8f02042674f276522b513d814e0c"} Feb 19 13:27:12 crc kubenswrapper[4861]: E0219 13:27:12.755310 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mlk9q" podUID="b62cf279-7b44-4aae-9417-4a9230a62e5e" Feb 19 13:27:12 crc kubenswrapper[4861]: I0219 13:27:12.756589 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-g76jd" event={"ID":"98d71c2d-33db-49a7-bb86-918858a91612","Type":"ContainerStarted","Data":"b360ac5a80ea5a62a892c0bae2202285e88e25d9fd3af073fdd2b36814c1da7c"} Feb 19 13:27:12 crc kubenswrapper[4861]: I0219 13:27:12.765159 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-8nbvg" event={"ID":"2a1a19ba-9308-4f92-97af-210cfbd20e18","Type":"ContainerStarted","Data":"fdec42d0d2192e7522a9c57acac5ee826280d62c4e35aae828172ceebcfc0817"} Feb 19 13:27:12 crc kubenswrapper[4861]: I0219 13:27:12.785374 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/test-operator-controller-manager-7866795846-f457f" event={"ID":"b6fc0d09-ceeb-4f62-8dcd-277cd8f27371","Type":"ContainerStarted","Data":"d6ac26091cfef1c86f8c62046c55757005620b2ab036c64b1efe02f923dbb50d"} Feb 19 13:27:12 crc kubenswrapper[4861]: I0219 13:27:12.802625 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-x8xxz" event={"ID":"4e0aba21-f157-4cdc-8b37-b043ed6298c7","Type":"ContainerStarted","Data":"ee3d25f831d62c7472554550976d7587af2128de7fd41d7eec4cbe9b2f686081"} Feb 19 13:27:12 crc kubenswrapper[4861]: I0219 13:27:12.804404 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-b24sf" event={"ID":"127c1b36-40d1-434a-803b-21cc75d9b41a","Type":"ContainerStarted","Data":"f6e46e2a5b358527efacd210f5317bf14a94faa9b0705503ee118e2794789422"} Feb 19 13:27:12 crc kubenswrapper[4861]: I0219 13:27:12.805714 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jn5fv" event={"ID":"9e8dc669-82a2-4d0a-bed3-7cb633ed2692","Type":"ContainerStarted","Data":"143f5bc1f7dddfac6c216431d0ee98dff113a9000cded0bdf9b32c760964ad7d"} Feb 19 13:27:12 crc kubenswrapper[4861]: E0219 13:27:12.806806 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jn5fv" podUID="9e8dc669-82a2-4d0a-bed3-7cb633ed2692" Feb 19 13:27:12 crc kubenswrapper[4861]: I0219 13:27:12.810854 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-6q8nw" 
event={"ID":"0bd94e11-4fa6-4d29-89a9-e2a493d94b89","Type":"ContainerStarted","Data":"4c718cb994584e5e4d9af0c54ce0fc8b5de35ee7ff23373c27172ac249100402"} Feb 19 13:27:12 crc kubenswrapper[4861]: I0219 13:27:12.812037 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-x8bv6" event={"ID":"db3d27d2-0a91-4534-b995-3e42bdf891ab","Type":"ContainerStarted","Data":"d2f6fc952535505af3ae52a4bff531414f60128889c88330abde97918b0abf38"} Feb 19 13:27:12 crc kubenswrapper[4861]: E0219 13:27:12.813494 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-x8bv6" podUID="db3d27d2-0a91-4534-b995-3e42bdf891ab" Feb 19 13:27:12 crc kubenswrapper[4861]: I0219 13:27:12.813740 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lgnlp" event={"ID":"52fdb95f-0a68-4e4b-b205-06b492232999","Type":"ContainerStarted","Data":"28a80dc2896eae4615198300a68b8e7b5c6340ddc5f59b6e2803e08457a1b0cd"} Feb 19 13:27:12 crc kubenswrapper[4861]: I0219 13:27:12.815761 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qvc2h" event={"ID":"cc6aead1-61fb-403f-9388-81c8d84a0588","Type":"ContainerStarted","Data":"533bd1b83d8c3a946e8bf0b78d48d32cc2c87dae22ade714f6d60c2cf0608f6c"} Feb 19 13:27:12 crc kubenswrapper[4861]: E0219 13:27:12.817208 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qvc2h" podUID="cc6aead1-61fb-403f-9388-81c8d84a0588" Feb 19 13:27:12 crc kubenswrapper[4861]: I0219 13:27:12.817698 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-xtsr5" event={"ID":"4c04516b-4856-4f67-abf9-722af4a25ab6","Type":"ContainerStarted","Data":"bdc1843eca71f3a635c5e8c3eb1cc3856fe9f2a40eb581d1ce1fa31e016c59da"} Feb 19 13:27:12 crc kubenswrapper[4861]: I0219 13:27:12.820301 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-t5mvf\" (UID: \"716d511a-dfec-4b60-b963-8cd3f03b6e43\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-t5mvf" Feb 19 13:27:12 crc kubenswrapper[4861]: I0219 13:27:12.820407 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-t5mvf\" (UID: \"716d511a-dfec-4b60-b963-8cd3f03b6e43\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-t5mvf" Feb 19 13:27:12 crc kubenswrapper[4861]: E0219 13:27:12.820474 4861 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 13:27:12 crc kubenswrapper[4861]: E0219 13:27:12.820521 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-metrics-certs podName:716d511a-dfec-4b60-b963-8cd3f03b6e43 nodeName:}" failed. 
No retries permitted until 2026-02-19 13:27:14.820506136 +0000 UTC m=+1049.481609364 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-t5mvf" (UID: "716d511a-dfec-4b60-b963-8cd3f03b6e43") : secret "metrics-server-cert" not found Feb 19 13:27:12 crc kubenswrapper[4861]: E0219 13:27:12.820634 4861 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 13:27:12 crc kubenswrapper[4861]: E0219 13:27:12.820684 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-webhook-certs podName:716d511a-dfec-4b60-b963-8cd3f03b6e43 nodeName:}" failed. No retries permitted until 2026-02-19 13:27:14.82067026 +0000 UTC m=+1049.481773488 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-t5mvf" (UID: "716d511a-dfec-4b60-b963-8cd3f03b6e43") : secret "webhook-server-cert" not found Feb 19 13:27:12 crc kubenswrapper[4861]: I0219 13:27:12.822262 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-rgfz5" event={"ID":"8b9ba3dc-beae-4d3b-8d8d-d595eb7c1ed4","Type":"ContainerStarted","Data":"205ee30e11d1d091bacf4fdb8cabbb9b7cbd5b6a9e3f5911ad5f8a59cb08816f"} Feb 19 13:27:12 crc kubenswrapper[4861]: I0219 13:27:12.824369 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k4k92" event={"ID":"430ed7f7-365f-4636-86ce-d257a9203395","Type":"ContainerStarted","Data":"9a0164f59b6c2ac7803b5cb66412317e845821e20f4b64aca8e0d85c49ef8b60"} Feb 19 13:27:12 crc kubenswrapper[4861]: 
I0219 13:27:12.827093 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-wvd7g" event={"ID":"729ddab6-f042-425d-aa39-2d18efc216d6","Type":"ContainerStarted","Data":"f99f4eda9507dfc21b3d07596df884da3d0bd4598085549983f27c3e5ee132a7"} Feb 19 13:27:12 crc kubenswrapper[4861]: E0219 13:27:12.829163 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-wvd7g" podUID="729ddab6-f042-425d-aa39-2d18efc216d6" Feb 19 13:27:13 crc kubenswrapper[4861]: E0219 13:27:13.878856 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jn5fv" podUID="9e8dc669-82a2-4d0a-bed3-7cb633ed2692" Feb 19 13:27:13 crc kubenswrapper[4861]: E0219 13:27:13.878975 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mlk9q" podUID="b62cf279-7b44-4aae-9417-4a9230a62e5e" Feb 19 13:27:13 crc kubenswrapper[4861]: E0219 13:27:13.879052 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-x8bv6" podUID="db3d27d2-0a91-4534-b995-3e42bdf891ab" Feb 19 13:27:13 crc kubenswrapper[4861]: E0219 13:27:13.879161 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-wvd7g" podUID="729ddab6-f042-425d-aa39-2d18efc216d6" Feb 19 13:27:13 crc kubenswrapper[4861]: E0219 13:27:13.882099 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qvc2h" podUID="cc6aead1-61fb-403f-9388-81c8d84a0588" Feb 19 13:27:14 crc kubenswrapper[4861]: I0219 13:27:14.242181 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dacd1beb-af59-4f30-8b76-ef41658bf9f4-cert\") pod \"infra-operator-controller-manager-79d975b745-kp2bg\" (UID: \"dacd1beb-af59-4f30-8b76-ef41658bf9f4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-kp2bg" Feb 19 13:27:14 crc kubenswrapper[4861]: E0219 13:27:14.242378 4861 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 13:27:14 crc kubenswrapper[4861]: E0219 13:27:14.242503 4861 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/dacd1beb-af59-4f30-8b76-ef41658bf9f4-cert podName:dacd1beb-af59-4f30-8b76-ef41658bf9f4 nodeName:}" failed. No retries permitted until 2026-02-19 13:27:18.242476598 +0000 UTC m=+1052.903579916 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dacd1beb-af59-4f30-8b76-ef41658bf9f4-cert") pod "infra-operator-controller-manager-79d975b745-kp2bg" (UID: "dacd1beb-af59-4f30-8b76-ef41658bf9f4") : secret "infra-operator-webhook-server-cert" not found Feb 19 13:27:14 crc kubenswrapper[4861]: I0219 13:27:14.547178 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/40169e6a-2e88-4d48-8ca9-8153ae9a109b-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g\" (UID: \"40169e6a-2e88-4d48-8ca9-8153ae9a109b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g" Feb 19 13:27:14 crc kubenswrapper[4861]: E0219 13:27:14.547446 4861 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 13:27:14 crc kubenswrapper[4861]: E0219 13:27:14.547523 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40169e6a-2e88-4d48-8ca9-8153ae9a109b-cert podName:40169e6a-2e88-4d48-8ca9-8153ae9a109b nodeName:}" failed. No retries permitted until 2026-02-19 13:27:18.547504744 +0000 UTC m=+1053.208607972 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/40169e6a-2e88-4d48-8ca9-8153ae9a109b-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g" (UID: "40169e6a-2e88-4d48-8ca9-8153ae9a109b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 13:27:14 crc kubenswrapper[4861]: I0219 13:27:14.851778 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-t5mvf\" (UID: \"716d511a-dfec-4b60-b963-8cd3f03b6e43\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-t5mvf" Feb 19 13:27:14 crc kubenswrapper[4861]: E0219 13:27:14.852015 4861 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 13:27:14 crc kubenswrapper[4861]: E0219 13:27:14.852118 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-webhook-certs podName:716d511a-dfec-4b60-b963-8cd3f03b6e43 nodeName:}" failed. No retries permitted until 2026-02-19 13:27:18.852091628 +0000 UTC m=+1053.513194896 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-t5mvf" (UID: "716d511a-dfec-4b60-b963-8cd3f03b6e43") : secret "webhook-server-cert" not found Feb 19 13:27:14 crc kubenswrapper[4861]: E0219 13:27:14.852175 4861 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 13:27:14 crc kubenswrapper[4861]: I0219 13:27:14.852034 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-t5mvf\" (UID: \"716d511a-dfec-4b60-b963-8cd3f03b6e43\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-t5mvf" Feb 19 13:27:14 crc kubenswrapper[4861]: E0219 13:27:14.852248 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-metrics-certs podName:716d511a-dfec-4b60-b963-8cd3f03b6e43 nodeName:}" failed. No retries permitted until 2026-02-19 13:27:18.852225622 +0000 UTC m=+1053.513328890 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-t5mvf" (UID: "716d511a-dfec-4b60-b963-8cd3f03b6e43") : secret "metrics-server-cert" not found Feb 19 13:27:18 crc kubenswrapper[4861]: I0219 13:27:18.313468 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dacd1beb-af59-4f30-8b76-ef41658bf9f4-cert\") pod \"infra-operator-controller-manager-79d975b745-kp2bg\" (UID: \"dacd1beb-af59-4f30-8b76-ef41658bf9f4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-kp2bg" Feb 19 13:27:18 crc kubenswrapper[4861]: E0219 13:27:18.314038 4861 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 13:27:18 crc kubenswrapper[4861]: E0219 13:27:18.314087 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dacd1beb-af59-4f30-8b76-ef41658bf9f4-cert podName:dacd1beb-af59-4f30-8b76-ef41658bf9f4 nodeName:}" failed. No retries permitted until 2026-02-19 13:27:26.314073491 +0000 UTC m=+1060.975176719 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dacd1beb-af59-4f30-8b76-ef41658bf9f4-cert") pod "infra-operator-controller-manager-79d975b745-kp2bg" (UID: "dacd1beb-af59-4f30-8b76-ef41658bf9f4") : secret "infra-operator-webhook-server-cert" not found Feb 19 13:27:18 crc kubenswrapper[4861]: I0219 13:27:18.618757 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/40169e6a-2e88-4d48-8ca9-8153ae9a109b-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g\" (UID: \"40169e6a-2e88-4d48-8ca9-8153ae9a109b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g" Feb 19 13:27:18 crc kubenswrapper[4861]: E0219 13:27:18.618959 4861 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 13:27:18 crc kubenswrapper[4861]: E0219 13:27:18.619050 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40169e6a-2e88-4d48-8ca9-8153ae9a109b-cert podName:40169e6a-2e88-4d48-8ca9-8153ae9a109b nodeName:}" failed. No retries permitted until 2026-02-19 13:27:26.619026735 +0000 UTC m=+1061.280129963 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/40169e6a-2e88-4d48-8ca9-8153ae9a109b-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g" (UID: "40169e6a-2e88-4d48-8ca9-8153ae9a109b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 13:27:18 crc kubenswrapper[4861]: I0219 13:27:18.922631 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-t5mvf\" (UID: \"716d511a-dfec-4b60-b963-8cd3f03b6e43\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-t5mvf" Feb 19 13:27:18 crc kubenswrapper[4861]: I0219 13:27:18.922690 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-t5mvf\" (UID: \"716d511a-dfec-4b60-b963-8cd3f03b6e43\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-t5mvf" Feb 19 13:27:18 crc kubenswrapper[4861]: E0219 13:27:18.922832 4861 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 13:27:18 crc kubenswrapper[4861]: E0219 13:27:18.922890 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-webhook-certs podName:716d511a-dfec-4b60-b963-8cd3f03b6e43 nodeName:}" failed. No retries permitted until 2026-02-19 13:27:26.922873119 +0000 UTC m=+1061.583976357 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-t5mvf" (UID: "716d511a-dfec-4b60-b963-8cd3f03b6e43") : secret "webhook-server-cert" not found Feb 19 13:27:18 crc kubenswrapper[4861]: E0219 13:27:18.923097 4861 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 13:27:18 crc kubenswrapper[4861]: E0219 13:27:18.923158 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-metrics-certs podName:716d511a-dfec-4b60-b963-8cd3f03b6e43 nodeName:}" failed. No retries permitted until 2026-02-19 13:27:26.923142086 +0000 UTC m=+1061.584245314 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-t5mvf" (UID: "716d511a-dfec-4b60-b963-8cd3f03b6e43") : secret "metrics-server-cert" not found Feb 19 13:27:24 crc kubenswrapper[4861]: E0219 13:27:24.584838 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" Feb 19 13:27:24 crc kubenswrapper[4861]: E0219 13:27:24.585727 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p6mwb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-f457f_openstack-operators(b6fc0d09-ceeb-4f62-8dcd-277cd8f27371): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 13:27:24 crc kubenswrapper[4861]: E0219 13:27:24.587057 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-7866795846-f457f" podUID="b6fc0d09-ceeb-4f62-8dcd-277cd8f27371" Feb 19 13:27:24 crc kubenswrapper[4861]: E0219 13:27:24.960860 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-f457f" podUID="b6fc0d09-ceeb-4f62-8dcd-277cd8f27371" Feb 19 13:27:26 crc kubenswrapper[4861]: E0219 13:27:26.093810 4861 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 19 13:27:26 crc kubenswrapper[4861]: E0219 13:27:26.094059 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zzmfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-lgnlp_openstack-operators(52fdb95f-0a68-4e4b-b205-06b492232999): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 13:27:26 crc kubenswrapper[4861]: E0219 13:27:26.095669 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lgnlp" podUID="52fdb95f-0a68-4e4b-b205-06b492232999" Feb 19 13:27:26 crc kubenswrapper[4861]: I0219 13:27:26.352315 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dacd1beb-af59-4f30-8b76-ef41658bf9f4-cert\") pod \"infra-operator-controller-manager-79d975b745-kp2bg\" (UID: \"dacd1beb-af59-4f30-8b76-ef41658bf9f4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-kp2bg" Feb 19 13:27:26 crc kubenswrapper[4861]: E0219 13:27:26.352713 4861 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret 
"infra-operator-webhook-server-cert" not found Feb 19 13:27:26 crc kubenswrapper[4861]: E0219 13:27:26.352803 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dacd1beb-af59-4f30-8b76-ef41658bf9f4-cert podName:dacd1beb-af59-4f30-8b76-ef41658bf9f4 nodeName:}" failed. No retries permitted until 2026-02-19 13:27:42.352780394 +0000 UTC m=+1077.013883662 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dacd1beb-af59-4f30-8b76-ef41658bf9f4-cert") pod "infra-operator-controller-manager-79d975b745-kp2bg" (UID: "dacd1beb-af59-4f30-8b76-ef41658bf9f4") : secret "infra-operator-webhook-server-cert" not found Feb 19 13:27:26 crc kubenswrapper[4861]: I0219 13:27:26.656689 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/40169e6a-2e88-4d48-8ca9-8153ae9a109b-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g\" (UID: \"40169e6a-2e88-4d48-8ca9-8153ae9a109b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g" Feb 19 13:27:26 crc kubenswrapper[4861]: E0219 13:27:26.656911 4861 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 13:27:26 crc kubenswrapper[4861]: E0219 13:27:26.657222 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40169e6a-2e88-4d48-8ca9-8153ae9a109b-cert podName:40169e6a-2e88-4d48-8ca9-8153ae9a109b nodeName:}" failed. No retries permitted until 2026-02-19 13:27:42.657201553 +0000 UTC m=+1077.318304781 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/40169e6a-2e88-4d48-8ca9-8153ae9a109b-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g" (UID: "40169e6a-2e88-4d48-8ca9-8153ae9a109b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 13:27:26 crc kubenswrapper[4861]: E0219 13:27:26.691393 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 19 13:27:26 crc kubenswrapper[4861]: E0219 13:27:26.691576 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5txxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-6q8nw_openstack-operators(0bd94e11-4fa6-4d29-89a9-e2a493d94b89): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 13:27:26 crc kubenswrapper[4861]: E0219 13:27:26.692943 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-6q8nw" podUID="0bd94e11-4fa6-4d29-89a9-e2a493d94b89" Feb 19 13:27:26 crc kubenswrapper[4861]: I0219 13:27:26.960762 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-metrics-certs\") pod 
\"openstack-operator-controller-manager-69ff7bc449-t5mvf\" (UID: \"716d511a-dfec-4b60-b963-8cd3f03b6e43\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-t5mvf" Feb 19 13:27:26 crc kubenswrapper[4861]: I0219 13:27:26.960807 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-t5mvf\" (UID: \"716d511a-dfec-4b60-b963-8cd3f03b6e43\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-t5mvf" Feb 19 13:27:26 crc kubenswrapper[4861]: E0219 13:27:26.960940 4861 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 13:27:26 crc kubenswrapper[4861]: E0219 13:27:26.960998 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-webhook-certs podName:716d511a-dfec-4b60-b963-8cd3f03b6e43 nodeName:}" failed. No retries permitted until 2026-02-19 13:27:42.960981335 +0000 UTC m=+1077.622084563 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-t5mvf" (UID: "716d511a-dfec-4b60-b963-8cd3f03b6e43") : secret "webhook-server-cert" not found Feb 19 13:27:26 crc kubenswrapper[4861]: E0219 13:27:26.960998 4861 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 13:27:26 crc kubenswrapper[4861]: E0219 13:27:26.961064 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-metrics-certs podName:716d511a-dfec-4b60-b963-8cd3f03b6e43 nodeName:}" failed. 
No retries permitted until 2026-02-19 13:27:42.961043337 +0000 UTC m=+1077.622146605 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-t5mvf" (UID: "716d511a-dfec-4b60-b963-8cd3f03b6e43") : secret "metrics-server-cert" not found Feb 19 13:27:26 crc kubenswrapper[4861]: E0219 13:27:26.979941 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-6q8nw" podUID="0bd94e11-4fa6-4d29-89a9-e2a493d94b89" Feb 19 13:27:26 crc kubenswrapper[4861]: E0219 13:27:26.980064 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lgnlp" podUID="52fdb95f-0a68-4e4b-b205-06b492232999" Feb 19 13:27:27 crc kubenswrapper[4861]: E0219 13:27:27.155470 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 19 13:27:27 crc kubenswrapper[4861]: E0219 13:27:27.156036 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5sbhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-k4k92_openstack-operators(430ed7f7-365f-4636-86ce-d257a9203395): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 13:27:27 crc kubenswrapper[4861]: E0219 13:27:27.157590 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k4k92" podUID="430ed7f7-365f-4636-86ce-d257a9203395" Feb 19 13:27:27 crc kubenswrapper[4861]: I0219 13:27:27.992622 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-7lwn6" event={"ID":"4401cea1-fce7-4ec1-938b-2519cf2a5521","Type":"ContainerStarted","Data":"eff8d4f05064d20b8f460d9486ca16ebd09a5fd835c8f0a6f67183573052b578"} Feb 19 13:27:27 crc kubenswrapper[4861]: I0219 13:27:27.993910 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-7lwn6" Feb 19 13:27:28 crc kubenswrapper[4861]: I0219 13:27:28.002877 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nmcsj" event={"ID":"a9582c92-ce65-4865-bbcf-57b8b3c7002c","Type":"ContainerStarted","Data":"a753fcdbe7fdf54127e962f79b2efe6528f0bffd5a4cf1c78d52e23984a98ef4"} Feb 19 13:27:28 crc kubenswrapper[4861]: I0219 13:27:28.002913 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nmcsj" Feb 19 13:27:28 crc kubenswrapper[4861]: I0219 13:27:28.005502 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-8nbvg" event={"ID":"2a1a19ba-9308-4f92-97af-210cfbd20e18","Type":"ContainerStarted","Data":"712c5addc09fd2bbc78306336a64f2449a67ce4015a310ca18d087f6c561d6ff"} Feb 19 13:27:28 crc kubenswrapper[4861]: I0219 
13:27:28.005828 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-8nbvg" Feb 19 13:27:28 crc kubenswrapper[4861]: I0219 13:27:28.016012 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-x8xxz" event={"ID":"4e0aba21-f157-4cdc-8b37-b043ed6298c7","Type":"ContainerStarted","Data":"046f4b3a56a1a986a8a9eff3733a07e8e5af4071ab044de1bcba2f011eb932eb"} Feb 19 13:27:28 crc kubenswrapper[4861]: I0219 13:27:28.016684 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-x8xxz" Feb 19 13:27:28 crc kubenswrapper[4861]: I0219 13:27:28.028724 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-g942s" event={"ID":"49dd31ac-b688-453a-9701-001ce3063ea7","Type":"ContainerStarted","Data":"5d61c6f018a9f1ef1da65681de8157ef8f594c71922caedd6c601c7c1b7fe9a1"} Feb 19 13:27:28 crc kubenswrapper[4861]: I0219 13:27:28.029357 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-g942s" Feb 19 13:27:28 crc kubenswrapper[4861]: I0219 13:27:28.047270 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-rgfz5" event={"ID":"8b9ba3dc-beae-4d3b-8d8d-d595eb7c1ed4","Type":"ContainerStarted","Data":"e8140e6d4547b057b67adeb09e8852532513f1e29c3f78a5571e50ad60e74d32"} Feb 19 13:27:28 crc kubenswrapper[4861]: I0219 13:27:28.047890 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-rgfz5" Feb 19 13:27:28 crc kubenswrapper[4861]: I0219 13:27:28.066395 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/barbican-operator-controller-manager-868647ff47-tx4wr" event={"ID":"2e56e0d7-2b43-4c87-912b-e91661077fcf","Type":"ContainerStarted","Data":"c14f592559fa791b1f5c5ad2f3d12613bf1690ddbed1be0b384bf8d730193620"} Feb 19 13:27:28 crc kubenswrapper[4861]: I0219 13:27:28.067010 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-tx4wr" Feb 19 13:27:28 crc kubenswrapper[4861]: I0219 13:27:28.077077 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-7lwn6" podStartSLOduration=3.097907566 podStartE2EDuration="18.077062459s" podCreationTimestamp="2026-02-19 13:27:10 +0000 UTC" firstStartedPulling="2026-02-19 13:27:11.691690039 +0000 UTC m=+1046.352793267" lastFinishedPulling="2026-02-19 13:27:26.670844932 +0000 UTC m=+1061.331948160" observedRunningTime="2026-02-19 13:27:28.03933195 +0000 UTC m=+1062.700435178" watchObservedRunningTime="2026-02-19 13:27:28.077062459 +0000 UTC m=+1062.738165687" Feb 19 13:27:28 crc kubenswrapper[4861]: I0219 13:27:28.077403 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-g942s" podStartSLOduration=2.64880081 podStartE2EDuration="18.077398559s" podCreationTimestamp="2026-02-19 13:27:10 +0000 UTC" firstStartedPulling="2026-02-19 13:27:11.70251945 +0000 UTC m=+1046.363622678" lastFinishedPulling="2026-02-19 13:27:27.131117169 +0000 UTC m=+1061.792220427" observedRunningTime="2026-02-19 13:27:28.071370565 +0000 UTC m=+1062.732473793" watchObservedRunningTime="2026-02-19 13:27:28.077398559 +0000 UTC m=+1062.738501787" Feb 19 13:27:28 crc kubenswrapper[4861]: I0219 13:27:28.089868 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-g76jd" 
event={"ID":"98d71c2d-33db-49a7-bb86-918858a91612","Type":"ContainerStarted","Data":"ca68e80dd7fa2f9ae052906953b4e4627c4f3cd16d202adffdaa8a6d856b215e"} Feb 19 13:27:28 crc kubenswrapper[4861]: I0219 13:27:28.090149 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-g76jd" Feb 19 13:27:28 crc kubenswrapper[4861]: I0219 13:27:28.107722 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hlnqx" event={"ID":"a3459958-b0c6-41f2-afb6-0a9a15ca3837","Type":"ContainerStarted","Data":"c89eb76381a7b72339b9261e6ed79c561dc78139266ad7bcdffff13e41cf728c"} Feb 19 13:27:28 crc kubenswrapper[4861]: I0219 13:27:28.108345 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hlnqx" Feb 19 13:27:28 crc kubenswrapper[4861]: I0219 13:27:28.112797 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-x8xxz" podStartSLOduration=2.833462167 podStartE2EDuration="18.112782534s" podCreationTimestamp="2026-02-19 13:27:10 +0000 UTC" firstStartedPulling="2026-02-19 13:27:11.848322028 +0000 UTC m=+1046.509425256" lastFinishedPulling="2026-02-19 13:27:27.127642395 +0000 UTC m=+1061.788745623" observedRunningTime="2026-02-19 13:27:28.111132849 +0000 UTC m=+1062.772236087" watchObservedRunningTime="2026-02-19 13:27:28.112782534 +0000 UTC m=+1062.773885752" Feb 19 13:27:28 crc kubenswrapper[4861]: I0219 13:27:28.127156 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-b24sf" event={"ID":"127c1b36-40d1-434a-803b-21cc75d9b41a","Type":"ContainerStarted","Data":"8364a4879788d4ca563ee78837e8fd32d1de3c3f316f89b75397b4ce85c9718d"} Feb 19 13:27:28 crc kubenswrapper[4861]: I0219 13:27:28.127815 4861 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-b24sf" Feb 19 13:27:28 crc kubenswrapper[4861]: I0219 13:27:28.130126 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-xtsr5" event={"ID":"4c04516b-4856-4f67-abf9-722af4a25ab6","Type":"ContainerStarted","Data":"8c30b1d64911580a0777ca494c516b59ee0a6637d54dbafc19269f6368e9a02c"} Feb 19 13:27:28 crc kubenswrapper[4861]: I0219 13:27:28.130152 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-xtsr5" Feb 19 13:27:28 crc kubenswrapper[4861]: E0219 13:27:28.132644 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k4k92" podUID="430ed7f7-365f-4636-86ce-d257a9203395" Feb 19 13:27:28 crc kubenswrapper[4861]: I0219 13:27:28.175992 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nmcsj" podStartSLOduration=2.914542664 podStartE2EDuration="18.1759777s" podCreationTimestamp="2026-02-19 13:27:10 +0000 UTC" firstStartedPulling="2026-02-19 13:27:11.409412556 +0000 UTC m=+1046.070515784" lastFinishedPulling="2026-02-19 13:27:26.670847572 +0000 UTC m=+1061.331950820" observedRunningTime="2026-02-19 13:27:28.172896087 +0000 UTC m=+1062.833999315" watchObservedRunningTime="2026-02-19 13:27:28.1759777 +0000 UTC m=+1062.837080918" Feb 19 13:27:28 crc kubenswrapper[4861]: I0219 13:27:28.200538 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/swift-operator-controller-manager-68f46476f-8nbvg" podStartSLOduration=3.174404951 podStartE2EDuration="18.200515952s" podCreationTimestamp="2026-02-19 13:27:10 +0000 UTC" firstStartedPulling="2026-02-19 13:27:12.10210089 +0000 UTC m=+1046.763204128" lastFinishedPulling="2026-02-19 13:27:27.128211901 +0000 UTC m=+1061.789315129" observedRunningTime="2026-02-19 13:27:28.156044062 +0000 UTC m=+1062.817147280" watchObservedRunningTime="2026-02-19 13:27:28.200515952 +0000 UTC m=+1062.861619170" Feb 19 13:27:28 crc kubenswrapper[4861]: I0219 13:27:28.206612 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-b24sf" podStartSLOduration=2.963306722 podStartE2EDuration="18.206593737s" podCreationTimestamp="2026-02-19 13:27:10 +0000 UTC" firstStartedPulling="2026-02-19 13:27:11.882025928 +0000 UTC m=+1046.543129156" lastFinishedPulling="2026-02-19 13:27:27.125312943 +0000 UTC m=+1061.786416171" observedRunningTime="2026-02-19 13:27:28.198904929 +0000 UTC m=+1062.860008157" watchObservedRunningTime="2026-02-19 13:27:28.206593737 +0000 UTC m=+1062.867696965" Feb 19 13:27:28 crc kubenswrapper[4861]: I0219 13:27:28.262409 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-g76jd" podStartSLOduration=2.865559904 podStartE2EDuration="18.262389833s" podCreationTimestamp="2026-02-19 13:27:10 +0000 UTC" firstStartedPulling="2026-02-19 13:27:11.729694155 +0000 UTC m=+1046.390797383" lastFinishedPulling="2026-02-19 13:27:27.126524084 +0000 UTC m=+1061.787627312" observedRunningTime="2026-02-19 13:27:28.240894622 +0000 UTC m=+1062.901997850" watchObservedRunningTime="2026-02-19 13:27:28.262389833 +0000 UTC m=+1062.923493051" Feb 19 13:27:28 crc kubenswrapper[4861]: I0219 13:27:28.280610 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/glance-operator-controller-manager-77987464f4-hlnqx" podStartSLOduration=2.584638358 podStartE2EDuration="18.280589895s" podCreationTimestamp="2026-02-19 13:27:10 +0000 UTC" firstStartedPulling="2026-02-19 13:27:11.430620919 +0000 UTC m=+1046.091724147" lastFinishedPulling="2026-02-19 13:27:27.126572456 +0000 UTC m=+1061.787675684" observedRunningTime="2026-02-19 13:27:28.276616687 +0000 UTC m=+1062.937719925" watchObservedRunningTime="2026-02-19 13:27:28.280589895 +0000 UTC m=+1062.941693123" Feb 19 13:27:28 crc kubenswrapper[4861]: I0219 13:27:28.296881 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-rgfz5" podStartSLOduration=3.039605842 podStartE2EDuration="18.296864824s" podCreationTimestamp="2026-02-19 13:27:10 +0000 UTC" firstStartedPulling="2026-02-19 13:27:11.873836837 +0000 UTC m=+1046.534940065" lastFinishedPulling="2026-02-19 13:27:27.131095819 +0000 UTC m=+1061.792199047" observedRunningTime="2026-02-19 13:27:28.29228491 +0000 UTC m=+1062.953388138" watchObservedRunningTime="2026-02-19 13:27:28.296864824 +0000 UTC m=+1062.957968042" Feb 19 13:27:28 crc kubenswrapper[4861]: I0219 13:27:28.321860 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-xtsr5" podStartSLOduration=3.073326531 podStartE2EDuration="18.321845158s" podCreationTimestamp="2026-02-19 13:27:10 +0000 UTC" firstStartedPulling="2026-02-19 13:27:11.887449133 +0000 UTC m=+1046.548552351" lastFinishedPulling="2026-02-19 13:27:27.13596775 +0000 UTC m=+1061.797070978" observedRunningTime="2026-02-19 13:27:28.31856349 +0000 UTC m=+1062.979666718" watchObservedRunningTime="2026-02-19 13:27:28.321845158 +0000 UTC m=+1062.982948386" Feb 19 13:27:28 crc kubenswrapper[4861]: I0219 13:27:28.363970 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/barbican-operator-controller-manager-868647ff47-tx4wr" podStartSLOduration=2.460505536 podStartE2EDuration="18.363937795s" podCreationTimestamp="2026-02-19 13:27:10 +0000 UTC" firstStartedPulling="2026-02-19 13:27:11.224127814 +0000 UTC m=+1045.885231042" lastFinishedPulling="2026-02-19 13:27:27.127560043 +0000 UTC m=+1061.788663301" observedRunningTime="2026-02-19 13:27:28.349486755 +0000 UTC m=+1063.010589983" watchObservedRunningTime="2026-02-19 13:27:28.363937795 +0000 UTC m=+1063.025041023" Feb 19 13:27:33 crc kubenswrapper[4861]: I0219 13:27:33.834501 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:27:33 crc kubenswrapper[4861]: I0219 13:27:33.835196 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:27:33 crc kubenswrapper[4861]: I0219 13:27:33.835262 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 13:27:33 crc kubenswrapper[4861]: I0219 13:27:33.836163 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b97bdd517e8a4057d6d42657d06891ca0d7f0204df355e8596a23050ecb1ab6b"} pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 13:27:33 crc kubenswrapper[4861]: I0219 13:27:33.836261 4861 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" containerID="cri-o://b97bdd517e8a4057d6d42657d06891ca0d7f0204df355e8596a23050ecb1ab6b" gracePeriod=600 Feb 19 13:27:37 crc kubenswrapper[4861]: I0219 13:27:37.546677 4861 generic.go:334] "Generic (PLEG): container finished" podID="478e6971-05ac-43f2-99a2-cd93644c6227" containerID="b97bdd517e8a4057d6d42657d06891ca0d7f0204df355e8596a23050ecb1ab6b" exitCode=0 Feb 19 13:27:37 crc kubenswrapper[4861]: I0219 13:27:37.546802 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerDied","Data":"b97bdd517e8a4057d6d42657d06891ca0d7f0204df355e8596a23050ecb1ab6b"} Feb 19 13:27:37 crc kubenswrapper[4861]: I0219 13:27:37.547088 4861 scope.go:117] "RemoveContainer" containerID="172ce433d46e388504efbd8038cf7a4f97b7e544c89545b0b9a675e189350528" Feb 19 13:27:38 crc kubenswrapper[4861]: E0219 13:27:38.535695 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" Feb 19 13:27:38 crc kubenswrapper[4861]: E0219 13:27:38.535960 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wtnx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-mlk9q_openstack-operators(b62cf279-7b44-4aae-9417-4a9230a62e5e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 13:27:38 crc kubenswrapper[4861]: E0219 13:27:38.538017 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mlk9q" podUID="b62cf279-7b44-4aae-9417-4a9230a62e5e" Feb 19 13:27:40 crc kubenswrapper[4861]: I0219 13:27:40.568808 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-f457f" event={"ID":"b6fc0d09-ceeb-4f62-8dcd-277cd8f27371","Type":"ContainerStarted","Data":"afcec8848215d2b56152dda6886958ebf4cf0e47d4c3d7ce02c36f025fb48d93"} Feb 19 13:27:40 crc kubenswrapper[4861]: I0219 13:27:40.569233 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-f457f" Feb 19 13:27:40 crc kubenswrapper[4861]: 
I0219 13:27:40.570726 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-x8bv6" event={"ID":"db3d27d2-0a91-4534-b995-3e42bdf891ab","Type":"ContainerStarted","Data":"ab19da4354e8a2f3947092491d880b369843d8330e61bcc04222e85a072e8ac7"} Feb 19 13:27:40 crc kubenswrapper[4861]: I0219 13:27:40.570865 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-x8bv6" Feb 19 13:27:40 crc kubenswrapper[4861]: I0219 13:27:40.572624 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qvc2h" event={"ID":"cc6aead1-61fb-403f-9388-81c8d84a0588","Type":"ContainerStarted","Data":"047433f9d164c083d11863ea8b74d09ddb193aa629544c236f191e90b8acbf30"} Feb 19 13:27:40 crc kubenswrapper[4861]: I0219 13:27:40.572795 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qvc2h" Feb 19 13:27:40 crc kubenswrapper[4861]: I0219 13:27:40.574616 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jn5fv" event={"ID":"9e8dc669-82a2-4d0a-bed3-7cb633ed2692","Type":"ContainerStarted","Data":"b845c9e0187ac3f83eb0f7b6a3b03848e0a6b43414abcd02d6c32707455d33e3"} Feb 19 13:27:40 crc kubenswrapper[4861]: I0219 13:27:40.574805 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jn5fv" Feb 19 13:27:40 crc kubenswrapper[4861]: I0219 13:27:40.576442 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-wvd7g" event={"ID":"729ddab6-f042-425d-aa39-2d18efc216d6","Type":"ContainerStarted","Data":"cf3783c3beb72e3a8e524fa31d3983d13e4609dd3be56cef7fa53de3a27b1ecb"} Feb 
19 13:27:40 crc kubenswrapper[4861]: I0219 13:27:40.576650 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-wvd7g" Feb 19 13:27:40 crc kubenswrapper[4861]: I0219 13:27:40.580321 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"a8231b7d6cc8b5ea6124bfdb8ee2cfd7fd221648893a967e4427c88c18dc3ef9"} Feb 19 13:27:40 crc kubenswrapper[4861]: I0219 13:27:40.594851 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-f457f" podStartSLOduration=2.524714571 podStartE2EDuration="30.594834737s" podCreationTimestamp="2026-02-19 13:27:10 +0000 UTC" firstStartedPulling="2026-02-19 13:27:12.045603284 +0000 UTC m=+1046.706706512" lastFinishedPulling="2026-02-19 13:27:40.11572342 +0000 UTC m=+1074.776826678" observedRunningTime="2026-02-19 13:27:40.589697648 +0000 UTC m=+1075.250800886" watchObservedRunningTime="2026-02-19 13:27:40.594834737 +0000 UTC m=+1075.255937965" Feb 19 13:27:40 crc kubenswrapper[4861]: I0219 13:27:40.623217 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qvc2h" podStartSLOduration=2.7070519219999998 podStartE2EDuration="30.623198252s" podCreationTimestamp="2026-02-19 13:27:10 +0000 UTC" firstStartedPulling="2026-02-19 13:27:12.199598071 +0000 UTC m=+1046.860701299" lastFinishedPulling="2026-02-19 13:27:40.115744401 +0000 UTC m=+1074.776847629" observedRunningTime="2026-02-19 13:27:40.621867496 +0000 UTC m=+1075.282970724" watchObservedRunningTime="2026-02-19 13:27:40.623198252 +0000 UTC m=+1075.284301480" Feb 19 13:27:40 crc kubenswrapper[4861]: I0219 13:27:40.643736 4861 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-wvd7g" podStartSLOduration=2.631220656 podStartE2EDuration="30.643713857s" podCreationTimestamp="2026-02-19 13:27:10 +0000 UTC" firstStartedPulling="2026-02-19 13:27:12.102960572 +0000 UTC m=+1046.764063800" lastFinishedPulling="2026-02-19 13:27:40.115453783 +0000 UTC m=+1074.776557001" observedRunningTime="2026-02-19 13:27:40.640916261 +0000 UTC m=+1075.302019489" watchObservedRunningTime="2026-02-19 13:27:40.643713857 +0000 UTC m=+1075.304817085" Feb 19 13:27:40 crc kubenswrapper[4861]: I0219 13:27:40.667282 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-x8bv6" podStartSLOduration=2.7455711430000003 podStartE2EDuration="30.667260102s" podCreationTimestamp="2026-02-19 13:27:10 +0000 UTC" firstStartedPulling="2026-02-19 13:27:12.194212747 +0000 UTC m=+1046.855315975" lastFinishedPulling="2026-02-19 13:27:40.115901706 +0000 UTC m=+1074.777004934" observedRunningTime="2026-02-19 13:27:40.65978516 +0000 UTC m=+1075.320888418" watchObservedRunningTime="2026-02-19 13:27:40.667260102 +0000 UTC m=+1075.328363330" Feb 19 13:27:40 crc kubenswrapper[4861]: I0219 13:27:40.668940 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-tx4wr" Feb 19 13:27:40 crc kubenswrapper[4861]: I0219 13:27:40.679593 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nmcsj" Feb 19 13:27:40 crc kubenswrapper[4861]: I0219 13:27:40.683239 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jn5fv" podStartSLOduration=2.695688546 podStartE2EDuration="30.683224533s" podCreationTimestamp="2026-02-19 13:27:10 +0000 UTC" 
firstStartedPulling="2026-02-19 13:27:12.128204174 +0000 UTC m=+1046.789307402" lastFinishedPulling="2026-02-19 13:27:40.115740141 +0000 UTC m=+1074.776843389" observedRunningTime="2026-02-19 13:27:40.678136495 +0000 UTC m=+1075.339239723" watchObservedRunningTime="2026-02-19 13:27:40.683224533 +0000 UTC m=+1075.344327761" Feb 19 13:27:40 crc kubenswrapper[4861]: I0219 13:27:40.753724 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hlnqx" Feb 19 13:27:40 crc kubenswrapper[4861]: I0219 13:27:40.869826 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-g942s" Feb 19 13:27:40 crc kubenswrapper[4861]: I0219 13:27:40.947640 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-7lwn6" Feb 19 13:27:40 crc kubenswrapper[4861]: I0219 13:27:40.957590 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-g76jd" Feb 19 13:27:41 crc kubenswrapper[4861]: I0219 13:27:41.012035 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-rgfz5" Feb 19 13:27:41 crc kubenswrapper[4861]: I0219 13:27:41.071307 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-xtsr5" Feb 19 13:27:41 crc kubenswrapper[4861]: I0219 13:27:41.110907 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-x8xxz" Feb 19 13:27:41 crc kubenswrapper[4861]: I0219 13:27:41.111384 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/heat-operator-controller-manager-69f49c598c-b24sf" Feb 19 13:27:41 crc kubenswrapper[4861]: I0219 13:27:41.234406 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-8nbvg" Feb 19 13:27:41 crc kubenswrapper[4861]: I0219 13:27:41.588302 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lgnlp" event={"ID":"52fdb95f-0a68-4e4b-b205-06b492232999","Type":"ContainerStarted","Data":"d98dac0415123fba064128beb6f116ff7a07482a97e7d7cb7cf85f857b25ae23"} Feb 19 13:27:41 crc kubenswrapper[4861]: I0219 13:27:41.590782 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lgnlp" Feb 19 13:27:41 crc kubenswrapper[4861]: I0219 13:27:41.617406 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lgnlp" podStartSLOduration=2.217351451 podStartE2EDuration="31.617387945s" podCreationTimestamp="2026-02-19 13:27:10 +0000 UTC" firstStartedPulling="2026-02-19 13:27:12.036516519 +0000 UTC m=+1046.697619747" lastFinishedPulling="2026-02-19 13:27:41.436553013 +0000 UTC m=+1076.097656241" observedRunningTime="2026-02-19 13:27:41.615032031 +0000 UTC m=+1076.276135289" watchObservedRunningTime="2026-02-19 13:27:41.617387945 +0000 UTC m=+1076.278491183" Feb 19 13:27:41 crc kubenswrapper[4861]: I0219 13:27:41.980208 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 13:27:42 crc kubenswrapper[4861]: I0219 13:27:42.449534 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dacd1beb-af59-4f30-8b76-ef41658bf9f4-cert\") pod \"infra-operator-controller-manager-79d975b745-kp2bg\" (UID: 
\"dacd1beb-af59-4f30-8b76-ef41658bf9f4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-kp2bg" Feb 19 13:27:42 crc kubenswrapper[4861]: I0219 13:27:42.456623 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dacd1beb-af59-4f30-8b76-ef41658bf9f4-cert\") pod \"infra-operator-controller-manager-79d975b745-kp2bg\" (UID: \"dacd1beb-af59-4f30-8b76-ef41658bf9f4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-kp2bg" Feb 19 13:27:42 crc kubenswrapper[4861]: I0219 13:27:42.652748 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-kp2bg" Feb 19 13:27:42 crc kubenswrapper[4861]: I0219 13:27:42.755264 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/40169e6a-2e88-4d48-8ca9-8153ae9a109b-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g\" (UID: \"40169e6a-2e88-4d48-8ca9-8153ae9a109b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g" Feb 19 13:27:42 crc kubenswrapper[4861]: I0219 13:27:42.760120 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/40169e6a-2e88-4d48-8ca9-8153ae9a109b-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g\" (UID: \"40169e6a-2e88-4d48-8ca9-8153ae9a109b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g" Feb 19 13:27:42 crc kubenswrapper[4861]: I0219 13:27:42.987491 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g" Feb 19 13:27:43 crc kubenswrapper[4861]: I0219 13:27:43.059296 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-t5mvf\" (UID: \"716d511a-dfec-4b60-b963-8cd3f03b6e43\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-t5mvf" Feb 19 13:27:43 crc kubenswrapper[4861]: I0219 13:27:43.059371 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-t5mvf\" (UID: \"716d511a-dfec-4b60-b963-8cd3f03b6e43\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-t5mvf" Feb 19 13:27:43 crc kubenswrapper[4861]: I0219 13:27:43.063662 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-t5mvf\" (UID: \"716d511a-dfec-4b60-b963-8cd3f03b6e43\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-t5mvf" Feb 19 13:27:43 crc kubenswrapper[4861]: I0219 13:27:43.063834 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/716d511a-dfec-4b60-b963-8cd3f03b6e43-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-t5mvf\" (UID: \"716d511a-dfec-4b60-b963-8cd3f03b6e43\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-t5mvf" Feb 19 13:27:43 crc kubenswrapper[4861]: I0219 13:27:43.160240 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/infra-operator-controller-manager-79d975b745-kp2bg"] Feb 19 13:27:43 crc kubenswrapper[4861]: I0219 13:27:43.160818 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-t5mvf" Feb 19 13:27:43 crc kubenswrapper[4861]: I0219 13:27:43.263940 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g"] Feb 19 13:27:43 crc kubenswrapper[4861]: W0219 13:27:43.280648 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40169e6a_2e88_4d48_8ca9_8153ae9a109b.slice/crio-5f1ba0fe7c30287490c818aa4b3fe1c67131a2ccb981b57090d062fddbfbf81f WatchSource:0}: Error finding container 5f1ba0fe7c30287490c818aa4b3fe1c67131a2ccb981b57090d062fddbfbf81f: Status 404 returned error can't find the container with id 5f1ba0fe7c30287490c818aa4b3fe1c67131a2ccb981b57090d062fddbfbf81f Feb 19 13:27:43 crc kubenswrapper[4861]: I0219 13:27:43.603126 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69ff7bc449-t5mvf"] Feb 19 13:27:43 crc kubenswrapper[4861]: I0219 13:27:43.611240 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-6q8nw" event={"ID":"0bd94e11-4fa6-4d29-89a9-e2a493d94b89","Type":"ContainerStarted","Data":"86b4a21e2767ce8bf1ccd4a49be9263963b7bdf6b139004e9c29d04f4a07365a"} Feb 19 13:27:43 crc kubenswrapper[4861]: I0219 13:27:43.611467 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-6q8nw" Feb 19 13:27:43 crc kubenswrapper[4861]: W0219 13:27:43.612257 4861 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod716d511a_dfec_4b60_b963_8cd3f03b6e43.slice/crio-8f0f4b41d663728b0553047b3748e3ba4ef5f7bb89148861b9ecf395e5965248 WatchSource:0}: Error finding container 8f0f4b41d663728b0553047b3748e3ba4ef5f7bb89148861b9ecf395e5965248: Status 404 returned error can't find the container with id 8f0f4b41d663728b0553047b3748e3ba4ef5f7bb89148861b9ecf395e5965248 Feb 19 13:27:43 crc kubenswrapper[4861]: I0219 13:27:43.614160 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k4k92" event={"ID":"430ed7f7-365f-4636-86ce-d257a9203395","Type":"ContainerStarted","Data":"3f3e0babd9437140bbafc2d4224914c5917bd7ccceb1b8b1a2227f36fffc9d9f"} Feb 19 13:27:43 crc kubenswrapper[4861]: I0219 13:27:43.616494 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g" event={"ID":"40169e6a-2e88-4d48-8ca9-8153ae9a109b","Type":"ContainerStarted","Data":"5f1ba0fe7c30287490c818aa4b3fe1c67131a2ccb981b57090d062fddbfbf81f"} Feb 19 13:27:43 crc kubenswrapper[4861]: I0219 13:27:43.618015 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-kp2bg" event={"ID":"dacd1beb-af59-4f30-8b76-ef41658bf9f4","Type":"ContainerStarted","Data":"92fd623cf4503ae7a897585c46e03704c11c9df81ffe75646a8c5b2f8caea184"} Feb 19 13:27:43 crc kubenswrapper[4861]: I0219 13:27:43.668867 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-6q8nw" podStartSLOduration=3.098121281 podStartE2EDuration="33.668836444s" podCreationTimestamp="2026-02-19 13:27:10 +0000 UTC" firstStartedPulling="2026-02-19 13:27:11.843167718 +0000 UTC m=+1046.504270946" lastFinishedPulling="2026-02-19 13:27:42.413882851 +0000 UTC m=+1077.074986109" observedRunningTime="2026-02-19 
13:27:43.638982698 +0000 UTC m=+1078.300085946" watchObservedRunningTime="2026-02-19 13:27:43.668836444 +0000 UTC m=+1078.329939712" Feb 19 13:27:43 crc kubenswrapper[4861]: I0219 13:27:43.676163 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k4k92" podStartSLOduration=2.529328353 podStartE2EDuration="32.676147561s" podCreationTimestamp="2026-02-19 13:27:11 +0000 UTC" firstStartedPulling="2026-02-19 13:27:12.265312326 +0000 UTC m=+1046.926415554" lastFinishedPulling="2026-02-19 13:27:42.412131524 +0000 UTC m=+1077.073234762" observedRunningTime="2026-02-19 13:27:43.673046417 +0000 UTC m=+1078.334149655" watchObservedRunningTime="2026-02-19 13:27:43.676147561 +0000 UTC m=+1078.337250789" Feb 19 13:27:44 crc kubenswrapper[4861]: I0219 13:27:44.626406 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-t5mvf" event={"ID":"716d511a-dfec-4b60-b963-8cd3f03b6e43","Type":"ContainerStarted","Data":"5d4a2f239d581be81c5c4cc3c5fc502ac693133cdfa7bb0ef5d953faa067ad6c"} Feb 19 13:27:44 crc kubenswrapper[4861]: I0219 13:27:44.626474 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-t5mvf" event={"ID":"716d511a-dfec-4b60-b963-8cd3f03b6e43","Type":"ContainerStarted","Data":"8f0f4b41d663728b0553047b3748e3ba4ef5f7bb89148861b9ecf395e5965248"} Feb 19 13:27:44 crc kubenswrapper[4861]: I0219 13:27:44.663087 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-t5mvf" podStartSLOduration=34.663065617 podStartE2EDuration="34.663065617s" podCreationTimestamp="2026-02-19 13:27:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:27:44.656998444 +0000 UTC 
m=+1079.318101672" watchObservedRunningTime="2026-02-19 13:27:44.663065617 +0000 UTC m=+1079.324168845" Feb 19 13:27:45 crc kubenswrapper[4861]: I0219 13:27:45.637380 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-t5mvf" Feb 19 13:27:46 crc kubenswrapper[4861]: I0219 13:27:46.649003 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g" event={"ID":"40169e6a-2e88-4d48-8ca9-8153ae9a109b","Type":"ContainerStarted","Data":"a3b776ce698322b5f37c9f3e697b01ac97936a862cc37e8a3b5546c32228f633"} Feb 19 13:27:46 crc kubenswrapper[4861]: I0219 13:27:46.649297 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g" Feb 19 13:27:46 crc kubenswrapper[4861]: I0219 13:27:46.653487 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-kp2bg" event={"ID":"dacd1beb-af59-4f30-8b76-ef41658bf9f4","Type":"ContainerStarted","Data":"07939142acc7c8426a0a96ddc41c89e106eabf575a9996c55ce325ec5dd597a6"} Feb 19 13:27:46 crc kubenswrapper[4861]: I0219 13:27:46.703775 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-kp2bg" podStartSLOduration=34.405277957 podStartE2EDuration="36.703745886s" podCreationTimestamp="2026-02-19 13:27:10 +0000 UTC" firstStartedPulling="2026-02-19 13:27:43.184408904 +0000 UTC m=+1077.845512132" lastFinishedPulling="2026-02-19 13:27:45.482876833 +0000 UTC m=+1080.143980061" observedRunningTime="2026-02-19 13:27:46.70353526 +0000 UTC m=+1081.364638528" watchObservedRunningTime="2026-02-19 13:27:46.703745886 +0000 UTC m=+1081.364849154" Feb 19 13:27:46 crc kubenswrapper[4861]: I0219 13:27:46.711890 4861 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g" podStartSLOduration=34.492930593 podStartE2EDuration="36.711865314s" podCreationTimestamp="2026-02-19 13:27:10 +0000 UTC" firstStartedPulling="2026-02-19 13:27:43.282868372 +0000 UTC m=+1077.943971600" lastFinishedPulling="2026-02-19 13:27:45.501803083 +0000 UTC m=+1080.162906321" observedRunningTime="2026-02-19 13:27:46.683387375 +0000 UTC m=+1081.344490673" watchObservedRunningTime="2026-02-19 13:27:46.711865314 +0000 UTC m=+1081.372968582" Feb 19 13:27:47 crc kubenswrapper[4861]: I0219 13:27:47.663912 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-kp2bg" Feb 19 13:27:50 crc kubenswrapper[4861]: I0219 13:27:50.998936 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-6q8nw" Feb 19 13:27:51 crc kubenswrapper[4861]: I0219 13:27:51.136309 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lgnlp" Feb 19 13:27:51 crc kubenswrapper[4861]: I0219 13:27:51.177062 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-x8bv6" Feb 19 13:27:51 crc kubenswrapper[4861]: I0219 13:27:51.220326 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qvc2h" Feb 19 13:27:51 crc kubenswrapper[4861]: I0219 13:27:51.268097 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jn5fv" Feb 19 13:27:51 crc kubenswrapper[4861]: I0219 13:27:51.284282 4861 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-f457f" Feb 19 13:27:51 crc kubenswrapper[4861]: I0219 13:27:51.284709 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-wvd7g" Feb 19 13:27:52 crc kubenswrapper[4861]: I0219 13:27:52.661956 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-kp2bg" Feb 19 13:27:52 crc kubenswrapper[4861]: E0219 13:27:52.985493 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mlk9q" podUID="b62cf279-7b44-4aae-9417-4a9230a62e5e" Feb 19 13:27:52 crc kubenswrapper[4861]: I0219 13:27:52.996506 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g" Feb 19 13:27:53 crc kubenswrapper[4861]: I0219 13:27:53.170307 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-t5mvf" Feb 19 13:28:05 crc kubenswrapper[4861]: I0219 13:28:05.235841 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mlk9q" event={"ID":"b62cf279-7b44-4aae-9417-4a9230a62e5e","Type":"ContainerStarted","Data":"cf0e940d89e4e9abe6214c493adefe52daa8b0aaba39ef4ce16ba4a5fb35c46f"} Feb 19 13:28:05 crc kubenswrapper[4861]: I0219 13:28:05.236746 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mlk9q" 
Feb 19 13:28:05 crc kubenswrapper[4861]: I0219 13:28:05.267017 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mlk9q" podStartSLOduration=2.902258413 podStartE2EDuration="55.266993585s" podCreationTimestamp="2026-02-19 13:27:10 +0000 UTC" firstStartedPulling="2026-02-19 13:27:12.111184315 +0000 UTC m=+1046.772287543" lastFinishedPulling="2026-02-19 13:28:04.475919447 +0000 UTC m=+1099.137022715" observedRunningTime="2026-02-19 13:28:05.261558579 +0000 UTC m=+1099.922661837" watchObservedRunningTime="2026-02-19 13:28:05.266993585 +0000 UTC m=+1099.928096853" Feb 19 13:28:11 crc kubenswrapper[4861]: I0219 13:28:11.205739 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mlk9q" Feb 19 13:28:26 crc kubenswrapper[4861]: I0219 13:28:26.729203 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-855cbc58c5-2xjnq"] Feb 19 13:28:26 crc kubenswrapper[4861]: I0219 13:28:26.731605 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-855cbc58c5-2xjnq" Feb 19 13:28:26 crc kubenswrapper[4861]: I0219 13:28:26.735208 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 19 13:28:26 crc kubenswrapper[4861]: I0219 13:28:26.735220 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 19 13:28:26 crc kubenswrapper[4861]: I0219 13:28:26.735345 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 19 13:28:26 crc kubenswrapper[4861]: I0219 13:28:26.736898 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-264sd" Feb 19 13:28:26 crc kubenswrapper[4861]: I0219 13:28:26.751347 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-855cbc58c5-2xjnq"] Feb 19 13:28:26 crc kubenswrapper[4861]: I0219 13:28:26.810701 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6fcf94d689-qvbz9"] Feb 19 13:28:26 crc kubenswrapper[4861]: I0219 13:28:26.811795 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fcf94d689-qvbz9" Feb 19 13:28:26 crc kubenswrapper[4861]: I0219 13:28:26.816327 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 19 13:28:26 crc kubenswrapper[4861]: I0219 13:28:26.821193 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fcf94d689-qvbz9"] Feb 19 13:28:26 crc kubenswrapper[4861]: I0219 13:28:26.876476 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlbjs\" (UniqueName: \"kubernetes.io/projected/8f05a7b3-b044-4971-9805-0fc1860f53de-kube-api-access-tlbjs\") pod \"dnsmasq-dns-855cbc58c5-2xjnq\" (UID: \"8f05a7b3-b044-4971-9805-0fc1860f53de\") " pod="openstack/dnsmasq-dns-855cbc58c5-2xjnq" Feb 19 13:28:26 crc kubenswrapper[4861]: I0219 13:28:26.876559 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f05a7b3-b044-4971-9805-0fc1860f53de-config\") pod \"dnsmasq-dns-855cbc58c5-2xjnq\" (UID: \"8f05a7b3-b044-4971-9805-0fc1860f53de\") " pod="openstack/dnsmasq-dns-855cbc58c5-2xjnq" Feb 19 13:28:26 crc kubenswrapper[4861]: I0219 13:28:26.977492 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4865151-bee3-4d02-bb44-0ba62bdbb741-dns-svc\") pod \"dnsmasq-dns-6fcf94d689-qvbz9\" (UID: \"e4865151-bee3-4d02-bb44-0ba62bdbb741\") " pod="openstack/dnsmasq-dns-6fcf94d689-qvbz9" Feb 19 13:28:26 crc kubenswrapper[4861]: I0219 13:28:26.977871 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlbjs\" (UniqueName: \"kubernetes.io/projected/8f05a7b3-b044-4971-9805-0fc1860f53de-kube-api-access-tlbjs\") pod \"dnsmasq-dns-855cbc58c5-2xjnq\" (UID: \"8f05a7b3-b044-4971-9805-0fc1860f53de\") " pod="openstack/dnsmasq-dns-855cbc58c5-2xjnq" 
Feb 19 13:28:26 crc kubenswrapper[4861]: I0219 13:28:26.977988 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f05a7b3-b044-4971-9805-0fc1860f53de-config\") pod \"dnsmasq-dns-855cbc58c5-2xjnq\" (UID: \"8f05a7b3-b044-4971-9805-0fc1860f53de\") " pod="openstack/dnsmasq-dns-855cbc58c5-2xjnq" Feb 19 13:28:26 crc kubenswrapper[4861]: I0219 13:28:26.978028 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4865151-bee3-4d02-bb44-0ba62bdbb741-config\") pod \"dnsmasq-dns-6fcf94d689-qvbz9\" (UID: \"e4865151-bee3-4d02-bb44-0ba62bdbb741\") " pod="openstack/dnsmasq-dns-6fcf94d689-qvbz9" Feb 19 13:28:26 crc kubenswrapper[4861]: I0219 13:28:26.978080 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj2z6\" (UniqueName: \"kubernetes.io/projected/e4865151-bee3-4d02-bb44-0ba62bdbb741-kube-api-access-lj2z6\") pod \"dnsmasq-dns-6fcf94d689-qvbz9\" (UID: \"e4865151-bee3-4d02-bb44-0ba62bdbb741\") " pod="openstack/dnsmasq-dns-6fcf94d689-qvbz9" Feb 19 13:28:26 crc kubenswrapper[4861]: I0219 13:28:26.979082 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f05a7b3-b044-4971-9805-0fc1860f53de-config\") pod \"dnsmasq-dns-855cbc58c5-2xjnq\" (UID: \"8f05a7b3-b044-4971-9805-0fc1860f53de\") " pod="openstack/dnsmasq-dns-855cbc58c5-2xjnq" Feb 19 13:28:27 crc kubenswrapper[4861]: I0219 13:28:27.016004 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlbjs\" (UniqueName: \"kubernetes.io/projected/8f05a7b3-b044-4971-9805-0fc1860f53de-kube-api-access-tlbjs\") pod \"dnsmasq-dns-855cbc58c5-2xjnq\" (UID: \"8f05a7b3-b044-4971-9805-0fc1860f53de\") " pod="openstack/dnsmasq-dns-855cbc58c5-2xjnq" Feb 19 13:28:27 crc kubenswrapper[4861]: I0219 
13:28:27.054036 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-855cbc58c5-2xjnq" Feb 19 13:28:27 crc kubenswrapper[4861]: I0219 13:28:27.080067 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4865151-bee3-4d02-bb44-0ba62bdbb741-config\") pod \"dnsmasq-dns-6fcf94d689-qvbz9\" (UID: \"e4865151-bee3-4d02-bb44-0ba62bdbb741\") " pod="openstack/dnsmasq-dns-6fcf94d689-qvbz9" Feb 19 13:28:27 crc kubenswrapper[4861]: I0219 13:28:27.080184 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj2z6\" (UniqueName: \"kubernetes.io/projected/e4865151-bee3-4d02-bb44-0ba62bdbb741-kube-api-access-lj2z6\") pod \"dnsmasq-dns-6fcf94d689-qvbz9\" (UID: \"e4865151-bee3-4d02-bb44-0ba62bdbb741\") " pod="openstack/dnsmasq-dns-6fcf94d689-qvbz9" Feb 19 13:28:27 crc kubenswrapper[4861]: I0219 13:28:27.080274 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4865151-bee3-4d02-bb44-0ba62bdbb741-dns-svc\") pod \"dnsmasq-dns-6fcf94d689-qvbz9\" (UID: \"e4865151-bee3-4d02-bb44-0ba62bdbb741\") " pod="openstack/dnsmasq-dns-6fcf94d689-qvbz9" Feb 19 13:28:27 crc kubenswrapper[4861]: I0219 13:28:27.081295 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4865151-bee3-4d02-bb44-0ba62bdbb741-config\") pod \"dnsmasq-dns-6fcf94d689-qvbz9\" (UID: \"e4865151-bee3-4d02-bb44-0ba62bdbb741\") " pod="openstack/dnsmasq-dns-6fcf94d689-qvbz9" Feb 19 13:28:27 crc kubenswrapper[4861]: I0219 13:28:27.082154 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4865151-bee3-4d02-bb44-0ba62bdbb741-dns-svc\") pod \"dnsmasq-dns-6fcf94d689-qvbz9\" (UID: \"e4865151-bee3-4d02-bb44-0ba62bdbb741\") " 
pod="openstack/dnsmasq-dns-6fcf94d689-qvbz9" Feb 19 13:28:27 crc kubenswrapper[4861]: I0219 13:28:27.114105 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj2z6\" (UniqueName: \"kubernetes.io/projected/e4865151-bee3-4d02-bb44-0ba62bdbb741-kube-api-access-lj2z6\") pod \"dnsmasq-dns-6fcf94d689-qvbz9\" (UID: \"e4865151-bee3-4d02-bb44-0ba62bdbb741\") " pod="openstack/dnsmasq-dns-6fcf94d689-qvbz9" Feb 19 13:28:27 crc kubenswrapper[4861]: I0219 13:28:27.132891 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fcf94d689-qvbz9" Feb 19 13:28:27 crc kubenswrapper[4861]: I0219 13:28:27.420197 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fcf94d689-qvbz9"] Feb 19 13:28:27 crc kubenswrapper[4861]: I0219 13:28:27.458324 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fcf94d689-qvbz9" event={"ID":"e4865151-bee3-4d02-bb44-0ba62bdbb741","Type":"ContainerStarted","Data":"5757e1a7ffa3a4df5db8f241438d3ff432c6d49a385db2db5f24b2809af6dd44"} Feb 19 13:28:27 crc kubenswrapper[4861]: I0219 13:28:27.559678 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-855cbc58c5-2xjnq"] Feb 19 13:28:27 crc kubenswrapper[4861]: W0219 13:28:27.564909 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f05a7b3_b044_4971_9805_0fc1860f53de.slice/crio-4d15e824f4badd0eca48fee43b892f691a12fff7c7c9f90f29aefc2127ece97a WatchSource:0}: Error finding container 4d15e824f4badd0eca48fee43b892f691a12fff7c7c9f90f29aefc2127ece97a: Status 404 returned error can't find the container with id 4d15e824f4badd0eca48fee43b892f691a12fff7c7c9f90f29aefc2127ece97a Feb 19 13:28:28 crc kubenswrapper[4861]: I0219 13:28:28.468520 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-855cbc58c5-2xjnq" 
event={"ID":"8f05a7b3-b044-4971-9805-0fc1860f53de","Type":"ContainerStarted","Data":"4d15e824f4badd0eca48fee43b892f691a12fff7c7c9f90f29aefc2127ece97a"} Feb 19 13:28:29 crc kubenswrapper[4861]: I0219 13:28:29.716841 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fcf94d689-qvbz9"] Feb 19 13:28:29 crc kubenswrapper[4861]: I0219 13:28:29.746685 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f54874ffc-g5rq8"] Feb 19 13:28:29 crc kubenswrapper[4861]: I0219 13:28:29.748130 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f54874ffc-g5rq8" Feb 19 13:28:29 crc kubenswrapper[4861]: I0219 13:28:29.765326 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f54874ffc-g5rq8"] Feb 19 13:28:29 crc kubenswrapper[4861]: I0219 13:28:29.950114 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b058250-c5d7-4028-a1d0-79071989d140-config\") pod \"dnsmasq-dns-f54874ffc-g5rq8\" (UID: \"8b058250-c5d7-4028-a1d0-79071989d140\") " pod="openstack/dnsmasq-dns-f54874ffc-g5rq8" Feb 19 13:28:29 crc kubenswrapper[4861]: I0219 13:28:29.950269 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x24kf\" (UniqueName: \"kubernetes.io/projected/8b058250-c5d7-4028-a1d0-79071989d140-kube-api-access-x24kf\") pod \"dnsmasq-dns-f54874ffc-g5rq8\" (UID: \"8b058250-c5d7-4028-a1d0-79071989d140\") " pod="openstack/dnsmasq-dns-f54874ffc-g5rq8" Feb 19 13:28:29 crc kubenswrapper[4861]: I0219 13:28:29.950302 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b058250-c5d7-4028-a1d0-79071989d140-dns-svc\") pod \"dnsmasq-dns-f54874ffc-g5rq8\" (UID: \"8b058250-c5d7-4028-a1d0-79071989d140\") " 
pod="openstack/dnsmasq-dns-f54874ffc-g5rq8" Feb 19 13:28:30 crc kubenswrapper[4861]: I0219 13:28:30.051332 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x24kf\" (UniqueName: \"kubernetes.io/projected/8b058250-c5d7-4028-a1d0-79071989d140-kube-api-access-x24kf\") pod \"dnsmasq-dns-f54874ffc-g5rq8\" (UID: \"8b058250-c5d7-4028-a1d0-79071989d140\") " pod="openstack/dnsmasq-dns-f54874ffc-g5rq8" Feb 19 13:28:30 crc kubenswrapper[4861]: I0219 13:28:30.051381 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b058250-c5d7-4028-a1d0-79071989d140-dns-svc\") pod \"dnsmasq-dns-f54874ffc-g5rq8\" (UID: \"8b058250-c5d7-4028-a1d0-79071989d140\") " pod="openstack/dnsmasq-dns-f54874ffc-g5rq8" Feb 19 13:28:30 crc kubenswrapper[4861]: I0219 13:28:30.051424 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b058250-c5d7-4028-a1d0-79071989d140-config\") pod \"dnsmasq-dns-f54874ffc-g5rq8\" (UID: \"8b058250-c5d7-4028-a1d0-79071989d140\") " pod="openstack/dnsmasq-dns-f54874ffc-g5rq8" Feb 19 13:28:30 crc kubenswrapper[4861]: I0219 13:28:30.052342 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b058250-c5d7-4028-a1d0-79071989d140-config\") pod \"dnsmasq-dns-f54874ffc-g5rq8\" (UID: \"8b058250-c5d7-4028-a1d0-79071989d140\") " pod="openstack/dnsmasq-dns-f54874ffc-g5rq8" Feb 19 13:28:30 crc kubenswrapper[4861]: I0219 13:28:30.052971 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b058250-c5d7-4028-a1d0-79071989d140-dns-svc\") pod \"dnsmasq-dns-f54874ffc-g5rq8\" (UID: \"8b058250-c5d7-4028-a1d0-79071989d140\") " pod="openstack/dnsmasq-dns-f54874ffc-g5rq8" Feb 19 13:28:30 crc kubenswrapper[4861]: I0219 13:28:30.086653 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x24kf\" (UniqueName: \"kubernetes.io/projected/8b058250-c5d7-4028-a1d0-79071989d140-kube-api-access-x24kf\") pod \"dnsmasq-dns-f54874ffc-g5rq8\" (UID: \"8b058250-c5d7-4028-a1d0-79071989d140\") " pod="openstack/dnsmasq-dns-f54874ffc-g5rq8" Feb 19 13:28:30 crc kubenswrapper[4861]: I0219 13:28:30.098721 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-855cbc58c5-2xjnq"] Feb 19 13:28:30 crc kubenswrapper[4861]: I0219 13:28:30.165397 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67ff45466c-2k898"] Feb 19 13:28:30 crc kubenswrapper[4861]: I0219 13:28:30.166690 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67ff45466c-2k898" Feb 19 13:28:30 crc kubenswrapper[4861]: I0219 13:28:30.171662 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67ff45466c-2k898"] Feb 19 13:28:30 crc kubenswrapper[4861]: I0219 13:28:30.259848 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b47mw\" (UniqueName: \"kubernetes.io/projected/7fe31df8-282b-447b-956e-7e6ae9b9d52b-kube-api-access-b47mw\") pod \"dnsmasq-dns-67ff45466c-2k898\" (UID: \"7fe31df8-282b-447b-956e-7e6ae9b9d52b\") " pod="openstack/dnsmasq-dns-67ff45466c-2k898" Feb 19 13:28:30 crc kubenswrapper[4861]: I0219 13:28:30.259937 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fe31df8-282b-447b-956e-7e6ae9b9d52b-dns-svc\") pod \"dnsmasq-dns-67ff45466c-2k898\" (UID: \"7fe31df8-282b-447b-956e-7e6ae9b9d52b\") " pod="openstack/dnsmasq-dns-67ff45466c-2k898" Feb 19 13:28:30 crc kubenswrapper[4861]: I0219 13:28:30.259998 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/7fe31df8-282b-447b-956e-7e6ae9b9d52b-config\") pod \"dnsmasq-dns-67ff45466c-2k898\" (UID: \"7fe31df8-282b-447b-956e-7e6ae9b9d52b\") " pod="openstack/dnsmasq-dns-67ff45466c-2k898" Feb 19 13:28:30 crc kubenswrapper[4861]: I0219 13:28:30.361397 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b47mw\" (UniqueName: \"kubernetes.io/projected/7fe31df8-282b-447b-956e-7e6ae9b9d52b-kube-api-access-b47mw\") pod \"dnsmasq-dns-67ff45466c-2k898\" (UID: \"7fe31df8-282b-447b-956e-7e6ae9b9d52b\") " pod="openstack/dnsmasq-dns-67ff45466c-2k898" Feb 19 13:28:30 crc kubenswrapper[4861]: I0219 13:28:30.361713 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fe31df8-282b-447b-956e-7e6ae9b9d52b-dns-svc\") pod \"dnsmasq-dns-67ff45466c-2k898\" (UID: \"7fe31df8-282b-447b-956e-7e6ae9b9d52b\") " pod="openstack/dnsmasq-dns-67ff45466c-2k898" Feb 19 13:28:30 crc kubenswrapper[4861]: I0219 13:28:30.361856 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fe31df8-282b-447b-956e-7e6ae9b9d52b-config\") pod \"dnsmasq-dns-67ff45466c-2k898\" (UID: \"7fe31df8-282b-447b-956e-7e6ae9b9d52b\") " pod="openstack/dnsmasq-dns-67ff45466c-2k898" Feb 19 13:28:30 crc kubenswrapper[4861]: I0219 13:28:30.362663 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fe31df8-282b-447b-956e-7e6ae9b9d52b-dns-svc\") pod \"dnsmasq-dns-67ff45466c-2k898\" (UID: \"7fe31df8-282b-447b-956e-7e6ae9b9d52b\") " pod="openstack/dnsmasq-dns-67ff45466c-2k898" Feb 19 13:28:30 crc kubenswrapper[4861]: I0219 13:28:30.363044 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fe31df8-282b-447b-956e-7e6ae9b9d52b-config\") pod 
\"dnsmasq-dns-67ff45466c-2k898\" (UID: \"7fe31df8-282b-447b-956e-7e6ae9b9d52b\") " pod="openstack/dnsmasq-dns-67ff45466c-2k898" Feb 19 13:28:30 crc kubenswrapper[4861]: I0219 13:28:30.367671 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f54874ffc-g5rq8" Feb 19 13:28:30 crc kubenswrapper[4861]: I0219 13:28:30.392348 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b47mw\" (UniqueName: \"kubernetes.io/projected/7fe31df8-282b-447b-956e-7e6ae9b9d52b-kube-api-access-b47mw\") pod \"dnsmasq-dns-67ff45466c-2k898\" (UID: \"7fe31df8-282b-447b-956e-7e6ae9b9d52b\") " pod="openstack/dnsmasq-dns-67ff45466c-2k898" Feb 19 13:28:30 crc kubenswrapper[4861]: I0219 13:28:30.495697 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67ff45466c-2k898" Feb 19 13:28:30 crc kubenswrapper[4861]: I0219 13:28:30.883533 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f54874ffc-g5rq8"] Feb 19 13:28:30 crc kubenswrapper[4861]: I0219 13:28:30.967293 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 13:28:30 crc kubenswrapper[4861]: I0219 13:28:30.968558 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 13:28:30 crc kubenswrapper[4861]: I0219 13:28:30.970962 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 19 13:28:30 crc kubenswrapper[4861]: I0219 13:28:30.971471 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-bphs4" Feb 19 13:28:30 crc kubenswrapper[4861]: I0219 13:28:30.971823 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 19 13:28:30 crc kubenswrapper[4861]: I0219 13:28:30.972318 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 19 13:28:30 crc kubenswrapper[4861]: I0219 13:28:30.972497 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 19 13:28:30 crc kubenswrapper[4861]: I0219 13:28:30.972780 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 19 13:28:30 crc kubenswrapper[4861]: I0219 13:28:30.980272 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 19 13:28:30 crc kubenswrapper[4861]: I0219 13:28:30.982464 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.020079 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67ff45466c-2k898"] Feb 19 13:28:31 crc kubenswrapper[4861]: W0219 13:28:31.034798 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fe31df8_282b_447b_956e_7e6ae9b9d52b.slice/crio-7dbd7b61948cd2cf38f2c040d998e60ff1c7c2013ddb6c2c12b0b01b8166cc32 WatchSource:0}: Error finding container 7dbd7b61948cd2cf38f2c040d998e60ff1c7c2013ddb6c2c12b0b01b8166cc32: Status 404 returned error 
can't find the container with id 7dbd7b61948cd2cf38f2c040d998e60ff1c7c2013ddb6c2c12b0b01b8166cc32 Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.074215 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b117524a-eaad-4666-9e0e-bda909b2ad30-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.074266 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b117524a-eaad-4666-9e0e-bda909b2ad30-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.074300 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b117524a-eaad-4666-9e0e-bda909b2ad30-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.074351 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b117524a-eaad-4666-9e0e-bda909b2ad30-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.074385 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b117524a-eaad-4666-9e0e-bda909b2ad30-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " 
pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.074731 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b117524a-eaad-4666-9e0e-bda909b2ad30-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.075524 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkcj6\" (UniqueName: \"kubernetes.io/projected/b117524a-eaad-4666-9e0e-bda909b2ad30-kube-api-access-wkcj6\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.075573 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b117524a-eaad-4666-9e0e-bda909b2ad30-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.075629 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.075747 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b117524a-eaad-4666-9e0e-bda909b2ad30-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: 
I0219 13:28:31.075774 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b117524a-eaad-4666-9e0e-bda909b2ad30-config-data\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.177592 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b117524a-eaad-4666-9e0e-bda909b2ad30-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.177677 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b117524a-eaad-4666-9e0e-bda909b2ad30-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.177726 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b117524a-eaad-4666-9e0e-bda909b2ad30-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.177779 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b117524a-eaad-4666-9e0e-bda909b2ad30-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.177798 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/b117524a-eaad-4666-9e0e-bda909b2ad30-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.177823 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b117524a-eaad-4666-9e0e-bda909b2ad30-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.177843 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkcj6\" (UniqueName: \"kubernetes.io/projected/b117524a-eaad-4666-9e0e-bda909b2ad30-kube-api-access-wkcj6\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.177873 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b117524a-eaad-4666-9e0e-bda909b2ad30-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.178018 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.178046 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b117524a-eaad-4666-9e0e-bda909b2ad30-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " 
pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.178072 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b117524a-eaad-4666-9e0e-bda909b2ad30-config-data\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.179705 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b117524a-eaad-4666-9e0e-bda909b2ad30-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.180612 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.185309 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b117524a-eaad-4666-9e0e-bda909b2ad30-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.185722 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b117524a-eaad-4666-9e0e-bda909b2ad30-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.187141 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b117524a-eaad-4666-9e0e-bda909b2ad30-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.187785 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b117524a-eaad-4666-9e0e-bda909b2ad30-config-data\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.213785 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkcj6\" (UniqueName: \"kubernetes.io/projected/b117524a-eaad-4666-9e0e-bda909b2ad30-kube-api-access-wkcj6\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.230117 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b117524a-eaad-4666-9e0e-bda909b2ad30-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.248479 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b117524a-eaad-4666-9e0e-bda909b2ad30-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.271283 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b117524a-eaad-4666-9e0e-bda909b2ad30-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " 
pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.277287 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b117524a-eaad-4666-9e0e-bda909b2ad30-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.288065 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.293968 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.298154 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.298329 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.298446 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.298491 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-s4xvs" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.298624 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.298715 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.298820 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 
13:28:31.311626 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.313307 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.380978 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fe64a04b-1266-4b02-88e5-191f4a974422-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.381029 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fe64a04b-1266-4b02-88e5-191f4a974422-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.381062 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fe64a04b-1266-4b02-88e5-191f4a974422-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.381078 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fe64a04b-1266-4b02-88e5-191f4a974422-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.381136 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fe64a04b-1266-4b02-88e5-191f4a974422-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.381241 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.381283 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fe64a04b-1266-4b02-88e5-191f4a974422-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.381298 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7vkt\" (UniqueName: \"kubernetes.io/projected/fe64a04b-1266-4b02-88e5-191f4a974422-kube-api-access-z7vkt\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.381316 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fe64a04b-1266-4b02-88e5-191f4a974422-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.381366 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fe64a04b-1266-4b02-88e5-191f4a974422-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.381387 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fe64a04b-1266-4b02-88e5-191f4a974422-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.483030 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fe64a04b-1266-4b02-88e5-191f4a974422-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.483090 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fe64a04b-1266-4b02-88e5-191f4a974422-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.483120 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fe64a04b-1266-4b02-88e5-191f4a974422-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 
crc kubenswrapper[4861]: I0219 13:28:31.483153 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fe64a04b-1266-4b02-88e5-191f4a974422-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.483198 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.483241 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fe64a04b-1266-4b02-88e5-191f4a974422-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.483256 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7vkt\" (UniqueName: \"kubernetes.io/projected/fe64a04b-1266-4b02-88e5-191f4a974422-kube-api-access-z7vkt\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.483272 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fe64a04b-1266-4b02-88e5-191f4a974422-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.483300 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fe64a04b-1266-4b02-88e5-191f4a974422-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.483323 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fe64a04b-1266-4b02-88e5-191f4a974422-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.483374 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fe64a04b-1266-4b02-88e5-191f4a974422-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.484207 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fe64a04b-1266-4b02-88e5-191f4a974422-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.484614 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.484745 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/fe64a04b-1266-4b02-88e5-191f4a974422-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.484758 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fe64a04b-1266-4b02-88e5-191f4a974422-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.484889 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fe64a04b-1266-4b02-88e5-191f4a974422-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.484999 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fe64a04b-1266-4b02-88e5-191f4a974422-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.489975 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fe64a04b-1266-4b02-88e5-191f4a974422-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.490059 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fe64a04b-1266-4b02-88e5-191f4a974422-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"fe64a04b-1266-4b02-88e5-191f4a974422\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.490316 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fe64a04b-1266-4b02-88e5-191f4a974422-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.491062 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fe64a04b-1266-4b02-88e5-191f4a974422-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.502217 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7vkt\" (UniqueName: \"kubernetes.io/projected/fe64a04b-1266-4b02-88e5-191f4a974422-kube-api-access-z7vkt\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.510946 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.516255 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f54874ffc-g5rq8" event={"ID":"8b058250-c5d7-4028-a1d0-79071989d140","Type":"ContainerStarted","Data":"55a19fe1a69d04f98b3636ec5a31c6a9d523276ab7c8abcea941d23b74c7c0a6"} Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.524693 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-67ff45466c-2k898" event={"ID":"7fe31df8-282b-447b-956e-7e6ae9b9d52b","Type":"ContainerStarted","Data":"7dbd7b61948cd2cf38f2c040d998e60ff1c7c2013ddb6c2c12b0b01b8166cc32"} Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.605417 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 13:28:31 crc kubenswrapper[4861]: I0219 13:28:31.640849 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.208747 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.283079 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 13:28:32 crc kubenswrapper[4861]: W0219 13:28:32.309628 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb117524a_eaad_4666_9e0e_bda909b2ad30.slice/crio-4c331a487f3a55d9c2ebb39a7cc7eb25a9fa83b8847228fefe48276beb268f35 WatchSource:0}: Error finding container 4c331a487f3a55d9c2ebb39a7cc7eb25a9fa83b8847228fefe48276beb268f35: Status 404 returned error can't find the container with id 4c331a487f3a55d9c2ebb39a7cc7eb25a9fa83b8847228fefe48276beb268f35 Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.342282 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.345156 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.350499 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.350526 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.350635 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-tkx7k" Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.351015 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.354541 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.355035 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.508021 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-config-data-default\") pod \"openstack-galera-0\" (UID: \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") " pod="openstack/openstack-galera-0" Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.508094 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwk77\" (UniqueName: \"kubernetes.io/projected/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-kube-api-access-mwk77\") pod \"openstack-galera-0\" (UID: \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") " pod="openstack/openstack-galera-0" Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.508236 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") " pod="openstack/openstack-galera-0" Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.508280 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") " pod="openstack/openstack-galera-0" Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.508324 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") " pod="openstack/openstack-galera-0" Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.508361 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") " pod="openstack/openstack-galera-0" Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.508394 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") " pod="openstack/openstack-galera-0" Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.508447 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-kolla-config\") pod \"openstack-galera-0\" (UID: \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") " pod="openstack/openstack-galera-0" Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.542328 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fe64a04b-1266-4b02-88e5-191f4a974422","Type":"ContainerStarted","Data":"ed72ed1d3af93b4785b2d55b81c21bbe190549115677449dba63b55ba7a965f1"} Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.543833 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b117524a-eaad-4666-9e0e-bda909b2ad30","Type":"ContainerStarted","Data":"4c331a487f3a55d9c2ebb39a7cc7eb25a9fa83b8847228fefe48276beb268f35"} Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.610471 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") " pod="openstack/openstack-galera-0" Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.610520 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") " pod="openstack/openstack-galera-0" Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.610550 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") " pod="openstack/openstack-galera-0" Feb 19 13:28:32 crc 
kubenswrapper[4861]: I0219 13:28:32.610583 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") " pod="openstack/openstack-galera-0" Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.610607 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") " pod="openstack/openstack-galera-0" Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.610623 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-kolla-config\") pod \"openstack-galera-0\" (UID: \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") " pod="openstack/openstack-galera-0" Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.610658 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-config-data-default\") pod \"openstack-galera-0\" (UID: \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") " pod="openstack/openstack-galera-0" Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.610680 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwk77\" (UniqueName: \"kubernetes.io/projected/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-kube-api-access-mwk77\") pod \"openstack-galera-0\" (UID: \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") " pod="openstack/openstack-galera-0" Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.613782 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/openstack-galera-0" Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.615548 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") " pod="openstack/openstack-galera-0" Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.621668 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") " pod="openstack/openstack-galera-0" Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.622401 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-kolla-config\") pod \"openstack-galera-0\" (UID: \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") " pod="openstack/openstack-galera-0" Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.623499 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-config-data-default\") pod \"openstack-galera-0\" (UID: \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") " pod="openstack/openstack-galera-0" Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.630219 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") " pod="openstack/openstack-galera-0" Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.633148 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") " pod="openstack/openstack-galera-0" Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.670598 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") " pod="openstack/openstack-galera-0" Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.681619 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwk77\" (UniqueName: \"kubernetes.io/projected/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-kube-api-access-mwk77\") pod \"openstack-galera-0\" (UID: \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") " pod="openstack/openstack-galera-0" Feb 19 13:28:32 crc kubenswrapper[4861]: I0219 13:28:32.964045 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 19 13:28:33 crc kubenswrapper[4861]: I0219 13:28:33.720927 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 13:28:33 crc kubenswrapper[4861]: I0219 13:28:33.725991 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 13:28:33 crc kubenswrapper[4861]: I0219 13:28:33.730243 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 19 13:28:33 crc kubenswrapper[4861]: I0219 13:28:33.730576 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 19 13:28:33 crc kubenswrapper[4861]: I0219 13:28:33.730643 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 13:28:33 crc kubenswrapper[4861]: I0219 13:28:33.731230 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 19 13:28:33 crc kubenswrapper[4861]: I0219 13:28:33.731982 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-kz52f" Feb 19 13:28:33 crc kubenswrapper[4861]: I0219 13:28:33.927993 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/77e9ae58-534e-4312-8b56-9ec6708995ac-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:28:33 crc kubenswrapper[4861]: I0219 13:28:33.928049 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfhpm\" (UniqueName: \"kubernetes.io/projected/77e9ae58-534e-4312-8b56-9ec6708995ac-kube-api-access-wfhpm\") pod \"openstack-cell1-galera-0\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:28:33 crc kubenswrapper[4861]: I0219 13:28:33.928204 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/77e9ae58-534e-4312-8b56-9ec6708995ac-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:28:33 crc kubenswrapper[4861]: I0219 13:28:33.928329 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/77e9ae58-534e-4312-8b56-9ec6708995ac-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:28:33 crc kubenswrapper[4861]: I0219 13:28:33.928451 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77e9ae58-534e-4312-8b56-9ec6708995ac-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:28:33 crc kubenswrapper[4861]: I0219 13:28:33.928536 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/77e9ae58-534e-4312-8b56-9ec6708995ac-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:28:33 crc kubenswrapper[4861]: I0219 13:28:33.928815 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:28:33 crc kubenswrapper[4861]: I0219 13:28:33.928913 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/77e9ae58-534e-4312-8b56-9ec6708995ac-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.020922 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.022351 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.024727 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.024918 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.025045 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-ljpb5" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.030179 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77e9ae58-534e-4312-8b56-9ec6708995ac-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.030242 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/77e9ae58-534e-4312-8b56-9ec6708995ac-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.030263 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfhpm\" (UniqueName: 
\"kubernetes.io/projected/77e9ae58-534e-4312-8b56-9ec6708995ac-kube-api-access-wfhpm\") pod \"openstack-cell1-galera-0\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.030322 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/77e9ae58-534e-4312-8b56-9ec6708995ac-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.030354 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/77e9ae58-534e-4312-8b56-9ec6708995ac-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.030382 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77e9ae58-534e-4312-8b56-9ec6708995ac-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.030402 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/77e9ae58-534e-4312-8b56-9ec6708995ac-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.030437 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod 
\"openstack-cell1-galera-0\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.030761 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.031717 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/77e9ae58-534e-4312-8b56-9ec6708995ac-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.032440 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/77e9ae58-534e-4312-8b56-9ec6708995ac-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.033332 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/77e9ae58-534e-4312-8b56-9ec6708995ac-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.033358 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77e9ae58-534e-4312-8b56-9ec6708995ac-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") " 
pod="openstack/openstack-cell1-galera-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.036945 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77e9ae58-534e-4312-8b56-9ec6708995ac-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.045945 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/77e9ae58-534e-4312-8b56-9ec6708995ac-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.056693 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.059015 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfhpm\" (UniqueName: \"kubernetes.io/projected/77e9ae58-534e-4312-8b56-9ec6708995ac-kube-api-access-wfhpm\") pod \"openstack-cell1-galera-0\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.078094 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.131582 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f079df7d-6aa6-4eab-8a9a-3b4bc329f139-memcached-tls-certs\") pod \"memcached-0\" (UID: 
\"f079df7d-6aa6-4eab-8a9a-3b4bc329f139\") " pod="openstack/memcached-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.131630 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f079df7d-6aa6-4eab-8a9a-3b4bc329f139-kolla-config\") pod \"memcached-0\" (UID: \"f079df7d-6aa6-4eab-8a9a-3b4bc329f139\") " pod="openstack/memcached-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.131764 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f079df7d-6aa6-4eab-8a9a-3b4bc329f139-config-data\") pod \"memcached-0\" (UID: \"f079df7d-6aa6-4eab-8a9a-3b4bc329f139\") " pod="openstack/memcached-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.131821 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f079df7d-6aa6-4eab-8a9a-3b4bc329f139-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f079df7d-6aa6-4eab-8a9a-3b4bc329f139\") " pod="openstack/memcached-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.131923 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mdlw\" (UniqueName: \"kubernetes.io/projected/f079df7d-6aa6-4eab-8a9a-3b4bc329f139-kube-api-access-9mdlw\") pod \"memcached-0\" (UID: \"f079df7d-6aa6-4eab-8a9a-3b4bc329f139\") " pod="openstack/memcached-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.235682 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f079df7d-6aa6-4eab-8a9a-3b4bc329f139-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f079df7d-6aa6-4eab-8a9a-3b4bc329f139\") " pod="openstack/memcached-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 
13:28:34.235822 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f079df7d-6aa6-4eab-8a9a-3b4bc329f139-kolla-config\") pod \"memcached-0\" (UID: \"f079df7d-6aa6-4eab-8a9a-3b4bc329f139\") " pod="openstack/memcached-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.236085 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f079df7d-6aa6-4eab-8a9a-3b4bc329f139-config-data\") pod \"memcached-0\" (UID: \"f079df7d-6aa6-4eab-8a9a-3b4bc329f139\") " pod="openstack/memcached-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.236166 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f079df7d-6aa6-4eab-8a9a-3b4bc329f139-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f079df7d-6aa6-4eab-8a9a-3b4bc329f139\") " pod="openstack/memcached-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.236449 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mdlw\" (UniqueName: \"kubernetes.io/projected/f079df7d-6aa6-4eab-8a9a-3b4bc329f139-kube-api-access-9mdlw\") pod \"memcached-0\" (UID: \"f079df7d-6aa6-4eab-8a9a-3b4bc329f139\") " pod="openstack/memcached-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.237366 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f079df7d-6aa6-4eab-8a9a-3b4bc329f139-config-data\") pod \"memcached-0\" (UID: \"f079df7d-6aa6-4eab-8a9a-3b4bc329f139\") " pod="openstack/memcached-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.237464 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f079df7d-6aa6-4eab-8a9a-3b4bc329f139-kolla-config\") pod \"memcached-0\" (UID: 
\"f079df7d-6aa6-4eab-8a9a-3b4bc329f139\") " pod="openstack/memcached-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.239807 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f079df7d-6aa6-4eab-8a9a-3b4bc329f139-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f079df7d-6aa6-4eab-8a9a-3b4bc329f139\") " pod="openstack/memcached-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.240026 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f079df7d-6aa6-4eab-8a9a-3b4bc329f139-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f079df7d-6aa6-4eab-8a9a-3b4bc329f139\") " pod="openstack/memcached-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.265004 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mdlw\" (UniqueName: \"kubernetes.io/projected/f079df7d-6aa6-4eab-8a9a-3b4bc329f139-kube-api-access-9mdlw\") pod \"memcached-0\" (UID: \"f079df7d-6aa6-4eab-8a9a-3b4bc329f139\") " pod="openstack/memcached-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.349601 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 13:28:34 crc kubenswrapper[4861]: I0219 13:28:34.430050 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 19 13:28:36 crc kubenswrapper[4861]: I0219 13:28:36.444948 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 13:28:36 crc kubenswrapper[4861]: I0219 13:28:36.446067 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 13:28:36 crc kubenswrapper[4861]: I0219 13:28:36.456007 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-zqtrp" Feb 19 13:28:36 crc kubenswrapper[4861]: I0219 13:28:36.460340 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 13:28:36 crc kubenswrapper[4861]: I0219 13:28:36.474988 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kjfl\" (UniqueName: \"kubernetes.io/projected/32264ced-78d0-432c-8dba-6b312fc09f77-kube-api-access-6kjfl\") pod \"kube-state-metrics-0\" (UID: \"32264ced-78d0-432c-8dba-6b312fc09f77\") " pod="openstack/kube-state-metrics-0" Feb 19 13:28:36 crc kubenswrapper[4861]: I0219 13:28:36.575866 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kjfl\" (UniqueName: \"kubernetes.io/projected/32264ced-78d0-432c-8dba-6b312fc09f77-kube-api-access-6kjfl\") pod \"kube-state-metrics-0\" (UID: \"32264ced-78d0-432c-8dba-6b312fc09f77\") " pod="openstack/kube-state-metrics-0" Feb 19 13:28:36 crc kubenswrapper[4861]: I0219 13:28:36.613551 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kjfl\" (UniqueName: \"kubernetes.io/projected/32264ced-78d0-432c-8dba-6b312fc09f77-kube-api-access-6kjfl\") pod \"kube-state-metrics-0\" (UID: \"32264ced-78d0-432c-8dba-6b312fc09f77\") " pod="openstack/kube-state-metrics-0" Feb 19 13:28:36 crc kubenswrapper[4861]: I0219 13:28:36.761915 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.069329 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-d4skq"] Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.071185 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-d4skq" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.073146 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-ghbch" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.075255 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.079651 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-q996h"] Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.080728 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-q996h" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.083024 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.092605 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q996h"] Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.107632 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-d4skq"] Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.120215 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97eefa3e-8d45-46c5-bfa6-150d0255a15b-scripts\") pod \"ovn-controller-q996h\" (UID: \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\") " pod="openstack/ovn-controller-q996h" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.120295 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c45bf5fa-a71c-4221-89a9-9c4965821c63-var-log\") pod \"ovn-controller-ovs-d4skq\" (UID: \"c45bf5fa-a71c-4221-89a9-9c4965821c63\") " pod="openstack/ovn-controller-ovs-d4skq" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.120317 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c45bf5fa-a71c-4221-89a9-9c4965821c63-scripts\") pod \"ovn-controller-ovs-d4skq\" (UID: \"c45bf5fa-a71c-4221-89a9-9c4965821c63\") " pod="openstack/ovn-controller-ovs-d4skq" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.120337 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97eefa3e-8d45-46c5-bfa6-150d0255a15b-combined-ca-bundle\") pod 
\"ovn-controller-q996h\" (UID: \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\") " pod="openstack/ovn-controller-q996h" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.120357 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/97eefa3e-8d45-46c5-bfa6-150d0255a15b-var-log-ovn\") pod \"ovn-controller-q996h\" (UID: \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\") " pod="openstack/ovn-controller-q996h" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.120374 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c45bf5fa-a71c-4221-89a9-9c4965821c63-etc-ovs\") pod \"ovn-controller-ovs-d4skq\" (UID: \"c45bf5fa-a71c-4221-89a9-9c4965821c63\") " pod="openstack/ovn-controller-ovs-d4skq" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.120441 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/97eefa3e-8d45-46c5-bfa6-150d0255a15b-var-run-ovn\") pod \"ovn-controller-q996h\" (UID: \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\") " pod="openstack/ovn-controller-q996h" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.120463 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g5mw\" (UniqueName: \"kubernetes.io/projected/97eefa3e-8d45-46c5-bfa6-150d0255a15b-kube-api-access-8g5mw\") pod \"ovn-controller-q996h\" (UID: \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\") " pod="openstack/ovn-controller-q996h" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.120485 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c45bf5fa-a71c-4221-89a9-9c4965821c63-var-run\") pod \"ovn-controller-ovs-d4skq\" (UID: 
\"c45bf5fa-a71c-4221-89a9-9c4965821c63\") " pod="openstack/ovn-controller-ovs-d4skq" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.120526 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flxpf\" (UniqueName: \"kubernetes.io/projected/c45bf5fa-a71c-4221-89a9-9c4965821c63-kube-api-access-flxpf\") pod \"ovn-controller-ovs-d4skq\" (UID: \"c45bf5fa-a71c-4221-89a9-9c4965821c63\") " pod="openstack/ovn-controller-ovs-d4skq" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.120545 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/97eefa3e-8d45-46c5-bfa6-150d0255a15b-var-run\") pod \"ovn-controller-q996h\" (UID: \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\") " pod="openstack/ovn-controller-q996h" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.120565 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/97eefa3e-8d45-46c5-bfa6-150d0255a15b-ovn-controller-tls-certs\") pod \"ovn-controller-q996h\" (UID: \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\") " pod="openstack/ovn-controller-q996h" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.120585 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c45bf5fa-a71c-4221-89a9-9c4965821c63-var-lib\") pod \"ovn-controller-ovs-d4skq\" (UID: \"c45bf5fa-a71c-4221-89a9-9c4965821c63\") " pod="openstack/ovn-controller-ovs-d4skq" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.221877 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/97eefa3e-8d45-46c5-bfa6-150d0255a15b-var-run\") pod \"ovn-controller-q996h\" (UID: \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\") " 
pod="openstack/ovn-controller-q996h" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.222203 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flxpf\" (UniqueName: \"kubernetes.io/projected/c45bf5fa-a71c-4221-89a9-9c4965821c63-kube-api-access-flxpf\") pod \"ovn-controller-ovs-d4skq\" (UID: \"c45bf5fa-a71c-4221-89a9-9c4965821c63\") " pod="openstack/ovn-controller-ovs-d4skq" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.222361 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/97eefa3e-8d45-46c5-bfa6-150d0255a15b-ovn-controller-tls-certs\") pod \"ovn-controller-q996h\" (UID: \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\") " pod="openstack/ovn-controller-q996h" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.223306 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c45bf5fa-a71c-4221-89a9-9c4965821c63-var-lib\") pod \"ovn-controller-ovs-d4skq\" (UID: \"c45bf5fa-a71c-4221-89a9-9c4965821c63\") " pod="openstack/ovn-controller-ovs-d4skq" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.223492 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97eefa3e-8d45-46c5-bfa6-150d0255a15b-scripts\") pod \"ovn-controller-q996h\" (UID: \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\") " pod="openstack/ovn-controller-q996h" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.223629 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c45bf5fa-a71c-4221-89a9-9c4965821c63-var-log\") pod \"ovn-controller-ovs-d4skq\" (UID: \"c45bf5fa-a71c-4221-89a9-9c4965821c63\") " pod="openstack/ovn-controller-ovs-d4skq" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.223737 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c45bf5fa-a71c-4221-89a9-9c4965821c63-scripts\") pod \"ovn-controller-ovs-d4skq\" (UID: \"c45bf5fa-a71c-4221-89a9-9c4965821c63\") " pod="openstack/ovn-controller-ovs-d4skq" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.223840 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97eefa3e-8d45-46c5-bfa6-150d0255a15b-combined-ca-bundle\") pod \"ovn-controller-q996h\" (UID: \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\") " pod="openstack/ovn-controller-q996h" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.223984 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/97eefa3e-8d45-46c5-bfa6-150d0255a15b-var-log-ovn\") pod \"ovn-controller-q996h\" (UID: \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\") " pod="openstack/ovn-controller-q996h" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.224092 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c45bf5fa-a71c-4221-89a9-9c4965821c63-etc-ovs\") pod \"ovn-controller-ovs-d4skq\" (UID: \"c45bf5fa-a71c-4221-89a9-9c4965821c63\") " pod="openstack/ovn-controller-ovs-d4skq" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.224183 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/97eefa3e-8d45-46c5-bfa6-150d0255a15b-var-run-ovn\") pod \"ovn-controller-q996h\" (UID: \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\") " pod="openstack/ovn-controller-q996h" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.224284 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g5mw\" (UniqueName: 
\"kubernetes.io/projected/97eefa3e-8d45-46c5-bfa6-150d0255a15b-kube-api-access-8g5mw\") pod \"ovn-controller-q996h\" (UID: \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\") " pod="openstack/ovn-controller-q996h" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.224397 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c45bf5fa-a71c-4221-89a9-9c4965821c63-var-run\") pod \"ovn-controller-ovs-d4skq\" (UID: \"c45bf5fa-a71c-4221-89a9-9c4965821c63\") " pod="openstack/ovn-controller-ovs-d4skq" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.224624 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c45bf5fa-a71c-4221-89a9-9c4965821c63-var-run\") pod \"ovn-controller-ovs-d4skq\" (UID: \"c45bf5fa-a71c-4221-89a9-9c4965821c63\") " pod="openstack/ovn-controller-ovs-d4skq" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.223519 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c45bf5fa-a71c-4221-89a9-9c4965821c63-var-lib\") pod \"ovn-controller-ovs-d4skq\" (UID: \"c45bf5fa-a71c-4221-89a9-9c4965821c63\") " pod="openstack/ovn-controller-ovs-d4skq" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.225002 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c45bf5fa-a71c-4221-89a9-9c4965821c63-var-log\") pod \"ovn-controller-ovs-d4skq\" (UID: \"c45bf5fa-a71c-4221-89a9-9c4965821c63\") " pod="openstack/ovn-controller-ovs-d4skq" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.222715 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/97eefa3e-8d45-46c5-bfa6-150d0255a15b-var-run\") pod \"ovn-controller-q996h\" (UID: \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\") " pod="openstack/ovn-controller-q996h" 
Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.226689 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c45bf5fa-a71c-4221-89a9-9c4965821c63-etc-ovs\") pod \"ovn-controller-ovs-d4skq\" (UID: \"c45bf5fa-a71c-4221-89a9-9c4965821c63\") " pod="openstack/ovn-controller-ovs-d4skq" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.227168 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/97eefa3e-8d45-46c5-bfa6-150d0255a15b-var-run-ovn\") pod \"ovn-controller-q996h\" (UID: \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\") " pod="openstack/ovn-controller-q996h" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.227223 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/97eefa3e-8d45-46c5-bfa6-150d0255a15b-var-log-ovn\") pod \"ovn-controller-q996h\" (UID: \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\") " pod="openstack/ovn-controller-q996h" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.231468 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97eefa3e-8d45-46c5-bfa6-150d0255a15b-combined-ca-bundle\") pod \"ovn-controller-q996h\" (UID: \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\") " pod="openstack/ovn-controller-q996h" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.234052 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97eefa3e-8d45-46c5-bfa6-150d0255a15b-scripts\") pod \"ovn-controller-q996h\" (UID: \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\") " pod="openstack/ovn-controller-q996h" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.241119 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/97eefa3e-8d45-46c5-bfa6-150d0255a15b-ovn-controller-tls-certs\") pod \"ovn-controller-q996h\" (UID: \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\") " pod="openstack/ovn-controller-q996h" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.243713 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g5mw\" (UniqueName: \"kubernetes.io/projected/97eefa3e-8d45-46c5-bfa6-150d0255a15b-kube-api-access-8g5mw\") pod \"ovn-controller-q996h\" (UID: \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\") " pod="openstack/ovn-controller-q996h" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.244721 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c45bf5fa-a71c-4221-89a9-9c4965821c63-scripts\") pod \"ovn-controller-ovs-d4skq\" (UID: \"c45bf5fa-a71c-4221-89a9-9c4965821c63\") " pod="openstack/ovn-controller-ovs-d4skq" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.248503 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flxpf\" (UniqueName: \"kubernetes.io/projected/c45bf5fa-a71c-4221-89a9-9c4965821c63-kube-api-access-flxpf\") pod \"ovn-controller-ovs-d4skq\" (UID: \"c45bf5fa-a71c-4221-89a9-9c4965821c63\") " pod="openstack/ovn-controller-ovs-d4skq" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.392477 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-d4skq" Feb 19 13:28:39 crc kubenswrapper[4861]: I0219 13:28:39.401438 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q996h" Feb 19 13:28:40 crc kubenswrapper[4861]: I0219 13:28:40.961411 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 13:28:40 crc kubenswrapper[4861]: I0219 13:28:40.962652 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 13:28:40 crc kubenswrapper[4861]: I0219 13:28:40.965248 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 19 13:28:40 crc kubenswrapper[4861]: I0219 13:28:40.965585 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 19 13:28:40 crc kubenswrapper[4861]: I0219 13:28:40.965752 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 19 13:28:40 crc kubenswrapper[4861]: I0219 13:28:40.967917 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 19 13:28:40 crc kubenswrapper[4861]: I0219 13:28:40.971455 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-9rddx" Feb 19 13:28:40 crc kubenswrapper[4861]: I0219 13:28:40.988605 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 13:28:41 crc kubenswrapper[4861]: I0219 13:28:41.154803 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:28:41 crc kubenswrapper[4861]: I0219 13:28:41.154859 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c645ced-1599-4f62-ab9b-0e109a7e02c3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:28:41 crc kubenswrapper[4861]: I0219 13:28:41.154928 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/2c645ced-1599-4f62-ab9b-0e109a7e02c3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:28:41 crc kubenswrapper[4861]: I0219 13:28:41.154966 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c645ced-1599-4f62-ab9b-0e109a7e02c3-config\") pod \"ovsdbserver-nb-0\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:28:41 crc kubenswrapper[4861]: I0219 13:28:41.154982 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2c645ced-1599-4f62-ab9b-0e109a7e02c3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:28:41 crc kubenswrapper[4861]: I0219 13:28:41.155003 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c645ced-1599-4f62-ab9b-0e109a7e02c3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:28:41 crc kubenswrapper[4861]: I0219 13:28:41.155087 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c645ced-1599-4f62-ab9b-0e109a7e02c3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:28:41 crc kubenswrapper[4861]: I0219 13:28:41.155107 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvdpg\" (UniqueName: 
\"kubernetes.io/projected/2c645ced-1599-4f62-ab9b-0e109a7e02c3-kube-api-access-fvdpg\") pod \"ovsdbserver-nb-0\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:28:41 crc kubenswrapper[4861]: I0219 13:28:41.257197 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c645ced-1599-4f62-ab9b-0e109a7e02c3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:28:41 crc kubenswrapper[4861]: I0219 13:28:41.257256 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvdpg\" (UniqueName: \"kubernetes.io/projected/2c645ced-1599-4f62-ab9b-0e109a7e02c3-kube-api-access-fvdpg\") pod \"ovsdbserver-nb-0\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:28:41 crc kubenswrapper[4861]: I0219 13:28:41.257286 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:28:41 crc kubenswrapper[4861]: I0219 13:28:41.257307 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c645ced-1599-4f62-ab9b-0e109a7e02c3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:28:41 crc kubenswrapper[4861]: I0219 13:28:41.257379 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c645ced-1599-4f62-ab9b-0e109a7e02c3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") " 
pod="openstack/ovsdbserver-nb-0" Feb 19 13:28:41 crc kubenswrapper[4861]: I0219 13:28:41.257411 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c645ced-1599-4f62-ab9b-0e109a7e02c3-config\") pod \"ovsdbserver-nb-0\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:28:41 crc kubenswrapper[4861]: I0219 13:28:41.257459 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2c645ced-1599-4f62-ab9b-0e109a7e02c3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:28:41 crc kubenswrapper[4861]: I0219 13:28:41.257574 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c645ced-1599-4f62-ab9b-0e109a7e02c3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:28:41 crc kubenswrapper[4861]: I0219 13:28:41.257594 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-nb-0" Feb 19 13:28:41 crc kubenswrapper[4861]: I0219 13:28:41.258152 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2c645ced-1599-4f62-ab9b-0e109a7e02c3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:28:41 crc kubenswrapper[4861]: I0219 13:28:41.258890 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/2c645ced-1599-4f62-ab9b-0e109a7e02c3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:28:41 crc kubenswrapper[4861]: I0219 13:28:41.258997 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c645ced-1599-4f62-ab9b-0e109a7e02c3-config\") pod \"ovsdbserver-nb-0\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:28:41 crc kubenswrapper[4861]: I0219 13:28:41.264167 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c645ced-1599-4f62-ab9b-0e109a7e02c3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:28:41 crc kubenswrapper[4861]: I0219 13:28:41.274966 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvdpg\" (UniqueName: \"kubernetes.io/projected/2c645ced-1599-4f62-ab9b-0e109a7e02c3-kube-api-access-fvdpg\") pod \"ovsdbserver-nb-0\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:28:41 crc kubenswrapper[4861]: I0219 13:28:41.275378 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c645ced-1599-4f62-ab9b-0e109a7e02c3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:28:41 crc kubenswrapper[4861]: I0219 13:28:41.278551 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c645ced-1599-4f62-ab9b-0e109a7e02c3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") " 
pod="openstack/ovsdbserver-nb-0" Feb 19 13:28:41 crc kubenswrapper[4861]: I0219 13:28:41.288594 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:28:41 crc kubenswrapper[4861]: I0219 13:28:41.588185 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.340745 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.347740 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.348819 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.351345 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.351359 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-q4cpc" Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.351478 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.358786 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.508458 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a4affa6-9b49-416a-9887-fdffab32916c-combined-ca-bundle\") pod 
\"ovsdbserver-sb-0\" (UID: \"8a4affa6-9b49-416a-9887-fdffab32916c\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.508541 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a4affa6-9b49-416a-9887-fdffab32916c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"8a4affa6-9b49-416a-9887-fdffab32916c\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.508610 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb64k\" (UniqueName: \"kubernetes.io/projected/8a4affa6-9b49-416a-9887-fdffab32916c-kube-api-access-rb64k\") pod \"ovsdbserver-sb-0\" (UID: \"8a4affa6-9b49-416a-9887-fdffab32916c\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.508694 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a4affa6-9b49-416a-9887-fdffab32916c-config\") pod \"ovsdbserver-sb-0\" (UID: \"8a4affa6-9b49-416a-9887-fdffab32916c\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.508730 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4affa6-9b49-416a-9887-fdffab32916c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8a4affa6-9b49-416a-9887-fdffab32916c\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.508784 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"8a4affa6-9b49-416a-9887-fdffab32916c\") " pod="openstack/ovsdbserver-sb-0" 
Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.508829 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4affa6-9b49-416a-9887-fdffab32916c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8a4affa6-9b49-416a-9887-fdffab32916c\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.508927 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8a4affa6-9b49-416a-9887-fdffab32916c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"8a4affa6-9b49-416a-9887-fdffab32916c\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.610326 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb64k\" (UniqueName: \"kubernetes.io/projected/8a4affa6-9b49-416a-9887-fdffab32916c-kube-api-access-rb64k\") pod \"ovsdbserver-sb-0\" (UID: \"8a4affa6-9b49-416a-9887-fdffab32916c\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.610452 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a4affa6-9b49-416a-9887-fdffab32916c-config\") pod \"ovsdbserver-sb-0\" (UID: \"8a4affa6-9b49-416a-9887-fdffab32916c\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.610484 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4affa6-9b49-416a-9887-fdffab32916c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8a4affa6-9b49-416a-9887-fdffab32916c\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.610524 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"8a4affa6-9b49-416a-9887-fdffab32916c\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.610558 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4affa6-9b49-416a-9887-fdffab32916c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8a4affa6-9b49-416a-9887-fdffab32916c\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.610624 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8a4affa6-9b49-416a-9887-fdffab32916c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"8a4affa6-9b49-416a-9887-fdffab32916c\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.610657 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a4affa6-9b49-416a-9887-fdffab32916c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"8a4affa6-9b49-416a-9887-fdffab32916c\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.610683 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a4affa6-9b49-416a-9887-fdffab32916c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"8a4affa6-9b49-416a-9887-fdffab32916c\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.611788 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8a4affa6-9b49-416a-9887-fdffab32916c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: 
\"8a4affa6-9b49-416a-9887-fdffab32916c\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.611941 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a4affa6-9b49-416a-9887-fdffab32916c-config\") pod \"ovsdbserver-sb-0\" (UID: \"8a4affa6-9b49-416a-9887-fdffab32916c\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.612467 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a4affa6-9b49-416a-9887-fdffab32916c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"8a4affa6-9b49-416a-9887-fdffab32916c\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.613278 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"8a4affa6-9b49-416a-9887-fdffab32916c\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-sb-0" Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.617527 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4affa6-9b49-416a-9887-fdffab32916c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8a4affa6-9b49-416a-9887-fdffab32916c\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.620152 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4affa6-9b49-416a-9887-fdffab32916c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8a4affa6-9b49-416a-9887-fdffab32916c\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.635483 4861 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a4affa6-9b49-416a-9887-fdffab32916c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"8a4affa6-9b49-416a-9887-fdffab32916c\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.646048 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb64k\" (UniqueName: \"kubernetes.io/projected/8a4affa6-9b49-416a-9887-fdffab32916c-kube-api-access-rb64k\") pod \"ovsdbserver-sb-0\" (UID: \"8a4affa6-9b49-416a-9887-fdffab32916c\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.649891 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"8a4affa6-9b49-416a-9887-fdffab32916c\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:28:43 crc kubenswrapper[4861]: I0219 13:28:43.672734 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 13:28:53 crc kubenswrapper[4861]: E0219 13:28:53.734541 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" Feb 19 13:28:53 crc kubenswrapper[4861]: E0219 13:28:53.735530 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b47mw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProb
e:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-67ff45466c-2k898_openstack(7fe31df8-282b-447b-956e-7e6ae9b9d52b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 13:28:53 crc kubenswrapper[4861]: E0219 13:28:53.736904 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-67ff45466c-2k898" podUID="7fe31df8-282b-447b-956e-7e6ae9b9d52b" Feb 19 13:28:53 crc kubenswrapper[4861]: E0219 13:28:53.802718 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" Feb 19 13:28:53 crc kubenswrapper[4861]: E0219 13:28:53.802984 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces 
--listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lj2z6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6fcf94d689-qvbz9_openstack(e4865151-bee3-4d02-bb44-0ba62bdbb741): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Feb 19 13:28:53 crc kubenswrapper[4861]: E0219 13:28:53.805087 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6fcf94d689-qvbz9" podUID="e4865151-bee3-4d02-bb44-0ba62bdbb741" Feb 19 13:28:53 crc kubenswrapper[4861]: E0219 13:28:53.830616 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" Feb 19 13:28:53 crc kubenswrapper[4861]: E0219 13:28:53.831135 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tlbjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-855cbc58c5-2xjnq_openstack(8f05a7b3-b044-4971-9805-0fc1860f53de): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 13:28:53 crc kubenswrapper[4861]: E0219 13:28:53.832606 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" Feb 19 13:28:53 crc kubenswrapper[4861]: E0219 13:28:53.832653 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-855cbc58c5-2xjnq" podUID="8f05a7b3-b044-4971-9805-0fc1860f53de" Feb 19 13:28:53 crc kubenswrapper[4861]: E0219 13:28:53.832721 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x24kf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-f54874ffc-g5rq8_openstack(8b058250-c5d7-4028-a1d0-79071989d140): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 13:28:53 crc kubenswrapper[4861]: E0219 13:28:53.833856 4861 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-f54874ffc-g5rq8" podUID="8b058250-c5d7-4028-a1d0-79071989d140" Feb 19 13:28:54 crc kubenswrapper[4861]: I0219 13:28:54.337457 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 19 13:28:54 crc kubenswrapper[4861]: W0219 13:28:54.346996 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf079df7d_6aa6_4eab_8a9a_3b4bc329f139.slice/crio-6bcb8cd776167ad0a33fa07775206ef7def4af61dd4bd9d345374e0532f7f2d3 WatchSource:0}: Error finding container 6bcb8cd776167ad0a33fa07775206ef7def4af61dd4bd9d345374e0532f7f2d3: Status 404 returned error can't find the container with id 6bcb8cd776167ad0a33fa07775206ef7def4af61dd4bd9d345374e0532f7f2d3 Feb 19 13:28:54 crc kubenswrapper[4861]: I0219 13:28:54.458764 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 13:28:54 crc kubenswrapper[4861]: W0219 13:28:54.459907 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a4affa6_9b49_416a_9887_fdffab32916c.slice/crio-e039c4b138a6eaa12993206a39ae44ea78fb1fb90f122fca0898f1c84cfbbc18 WatchSource:0}: Error finding container e039c4b138a6eaa12993206a39ae44ea78fb1fb90f122fca0898f1c84cfbbc18: Status 404 returned error can't find the container with id e039c4b138a6eaa12993206a39ae44ea78fb1fb90f122fca0898f1c84cfbbc18 Feb 19 13:28:54 crc kubenswrapper[4861]: W0219 13:28:54.531944 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32264ced_78d0_432c_8dba_6b312fc09f77.slice/crio-57d910cb094ac9896f74ae53dabe0533e9460862653d2ae68745d105c8885e01 WatchSource:0}: Error finding container 
57d910cb094ac9896f74ae53dabe0533e9460862653d2ae68745d105c8885e01: Status 404 returned error can't find the container with id 57d910cb094ac9896f74ae53dabe0533e9460862653d2ae68745d105c8885e01 Feb 19 13:28:54 crc kubenswrapper[4861]: I0219 13:28:54.532336 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 13:28:54 crc kubenswrapper[4861]: W0219 13:28:54.533613 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77e9ae58_534e_4312_8b56_9ec6708995ac.slice/crio-584a68e70a2756c2c1678bfc7cb30681d2302a97248f3904198716c0a85800fb WatchSource:0}: Error finding container 584a68e70a2756c2c1678bfc7cb30681d2302a97248f3904198716c0a85800fb: Status 404 returned error can't find the container with id 584a68e70a2756c2c1678bfc7cb30681d2302a97248f3904198716c0a85800fb Feb 19 13:28:54 crc kubenswrapper[4861]: W0219 13:28:54.543307 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97eefa3e_8d45_46c5_bfa6_150d0255a15b.slice/crio-bfeedc322a9020738eee629141a95b7b04da979a31848d302d2fe0527c8906f5 WatchSource:0}: Error finding container bfeedc322a9020738eee629141a95b7b04da979a31848d302d2fe0527c8906f5: Status 404 returned error can't find the container with id bfeedc322a9020738eee629141a95b7b04da979a31848d302d2fe0527c8906f5 Feb 19 13:28:54 crc kubenswrapper[4861]: I0219 13:28:54.544790 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 13:28:54 crc kubenswrapper[4861]: W0219 13:28:54.549531 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc881f3a1_3450_4ca9_8e8a_1c3d67e46770.slice/crio-14feff8e8b18cb35665e312e89f35a358158af874f735695bbf9133b4dc5ef5f WatchSource:0}: Error finding container 
14feff8e8b18cb35665e312e89f35a358158af874f735695bbf9133b4dc5ef5f: Status 404 returned error can't find the container with id 14feff8e8b18cb35665e312e89f35a358158af874f735695bbf9133b4dc5ef5f Feb 19 13:28:54 crc kubenswrapper[4861]: I0219 13:28:54.558694 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q996h"] Feb 19 13:28:54 crc kubenswrapper[4861]: I0219 13:28:54.571114 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 13:28:54 crc kubenswrapper[4861]: I0219 13:28:54.629374 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-d4skq"] Feb 19 13:28:54 crc kubenswrapper[4861]: I0219 13:28:54.715873 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q996h" event={"ID":"97eefa3e-8d45-46c5-bfa6-150d0255a15b","Type":"ContainerStarted","Data":"bfeedc322a9020738eee629141a95b7b04da979a31848d302d2fe0527c8906f5"} Feb 19 13:28:54 crc kubenswrapper[4861]: I0219 13:28:54.717213 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"8a4affa6-9b49-416a-9887-fdffab32916c","Type":"ContainerStarted","Data":"e039c4b138a6eaa12993206a39ae44ea78fb1fb90f122fca0898f1c84cfbbc18"} Feb 19 13:28:54 crc kubenswrapper[4861]: I0219 13:28:54.718556 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"32264ced-78d0-432c-8dba-6b312fc09f77","Type":"ContainerStarted","Data":"57d910cb094ac9896f74ae53dabe0533e9460862653d2ae68745d105c8885e01"} Feb 19 13:28:54 crc kubenswrapper[4861]: I0219 13:28:54.719394 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"77e9ae58-534e-4312-8b56-9ec6708995ac","Type":"ContainerStarted","Data":"584a68e70a2756c2c1678bfc7cb30681d2302a97248f3904198716c0a85800fb"} Feb 19 13:28:54 crc kubenswrapper[4861]: I0219 13:28:54.720564 4861 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/memcached-0" event={"ID":"f079df7d-6aa6-4eab-8a9a-3b4bc329f139","Type":"ContainerStarted","Data":"6bcb8cd776167ad0a33fa07775206ef7def4af61dd4bd9d345374e0532f7f2d3"} Feb 19 13:28:54 crc kubenswrapper[4861]: I0219 13:28:54.722674 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c881f3a1-3450-4ca9-8e8a-1c3d67e46770","Type":"ContainerStarted","Data":"14feff8e8b18cb35665e312e89f35a358158af874f735695bbf9133b4dc5ef5f"} Feb 19 13:28:54 crc kubenswrapper[4861]: E0219 13:28:54.724966 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2\\\"\"" pod="openstack/dnsmasq-dns-f54874ffc-g5rq8" podUID="8b058250-c5d7-4028-a1d0-79071989d140" Feb 19 13:28:54 crc kubenswrapper[4861]: E0219 13:28:54.725279 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2\\\"\"" pod="openstack/dnsmasq-dns-67ff45466c-2k898" podUID="7fe31df8-282b-447b-956e-7e6ae9b9d52b" Feb 19 13:28:54 crc kubenswrapper[4861]: I0219 13:28:54.776837 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 13:28:55 crc kubenswrapper[4861]: I0219 13:28:55.118411 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-855cbc58c5-2xjnq" Feb 19 13:28:55 crc kubenswrapper[4861]: I0219 13:28:55.123371 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fcf94d689-qvbz9" Feb 19 13:28:55 crc kubenswrapper[4861]: I0219 13:28:55.217573 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f05a7b3-b044-4971-9805-0fc1860f53de-config\") pod \"8f05a7b3-b044-4971-9805-0fc1860f53de\" (UID: \"8f05a7b3-b044-4971-9805-0fc1860f53de\") " Feb 19 13:28:55 crc kubenswrapper[4861]: I0219 13:28:55.217618 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj2z6\" (UniqueName: \"kubernetes.io/projected/e4865151-bee3-4d02-bb44-0ba62bdbb741-kube-api-access-lj2z6\") pod \"e4865151-bee3-4d02-bb44-0ba62bdbb741\" (UID: \"e4865151-bee3-4d02-bb44-0ba62bdbb741\") " Feb 19 13:28:55 crc kubenswrapper[4861]: I0219 13:28:55.217827 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4865151-bee3-4d02-bb44-0ba62bdbb741-config\") pod \"e4865151-bee3-4d02-bb44-0ba62bdbb741\" (UID: \"e4865151-bee3-4d02-bb44-0ba62bdbb741\") " Feb 19 13:28:55 crc kubenswrapper[4861]: I0219 13:28:55.217899 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4865151-bee3-4d02-bb44-0ba62bdbb741-dns-svc\") pod \"e4865151-bee3-4d02-bb44-0ba62bdbb741\" (UID: \"e4865151-bee3-4d02-bb44-0ba62bdbb741\") " Feb 19 13:28:55 crc kubenswrapper[4861]: I0219 13:28:55.217924 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlbjs\" (UniqueName: \"kubernetes.io/projected/8f05a7b3-b044-4971-9805-0fc1860f53de-kube-api-access-tlbjs\") pod \"8f05a7b3-b044-4971-9805-0fc1860f53de\" (UID: \"8f05a7b3-b044-4971-9805-0fc1860f53de\") " Feb 19 13:28:55 crc kubenswrapper[4861]: I0219 13:28:55.218043 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/8f05a7b3-b044-4971-9805-0fc1860f53de-config" (OuterVolumeSpecName: "config") pod "8f05a7b3-b044-4971-9805-0fc1860f53de" (UID: "8f05a7b3-b044-4971-9805-0fc1860f53de"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:28:55 crc kubenswrapper[4861]: I0219 13:28:55.218266 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f05a7b3-b044-4971-9805-0fc1860f53de-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:28:55 crc kubenswrapper[4861]: I0219 13:28:55.218347 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4865151-bee3-4d02-bb44-0ba62bdbb741-config" (OuterVolumeSpecName: "config") pod "e4865151-bee3-4d02-bb44-0ba62bdbb741" (UID: "e4865151-bee3-4d02-bb44-0ba62bdbb741"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:28:55 crc kubenswrapper[4861]: I0219 13:28:55.219402 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4865151-bee3-4d02-bb44-0ba62bdbb741-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e4865151-bee3-4d02-bb44-0ba62bdbb741" (UID: "e4865151-bee3-4d02-bb44-0ba62bdbb741"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:28:55 crc kubenswrapper[4861]: I0219 13:28:55.223483 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4865151-bee3-4d02-bb44-0ba62bdbb741-kube-api-access-lj2z6" (OuterVolumeSpecName: "kube-api-access-lj2z6") pod "e4865151-bee3-4d02-bb44-0ba62bdbb741" (UID: "e4865151-bee3-4d02-bb44-0ba62bdbb741"). InnerVolumeSpecName "kube-api-access-lj2z6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:28:55 crc kubenswrapper[4861]: I0219 13:28:55.223683 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f05a7b3-b044-4971-9805-0fc1860f53de-kube-api-access-tlbjs" (OuterVolumeSpecName: "kube-api-access-tlbjs") pod "8f05a7b3-b044-4971-9805-0fc1860f53de" (UID: "8f05a7b3-b044-4971-9805-0fc1860f53de"). InnerVolumeSpecName "kube-api-access-tlbjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:28:55 crc kubenswrapper[4861]: I0219 13:28:55.319723 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj2z6\" (UniqueName: \"kubernetes.io/projected/e4865151-bee3-4d02-bb44-0ba62bdbb741-kube-api-access-lj2z6\") on node \"crc\" DevicePath \"\"" Feb 19 13:28:55 crc kubenswrapper[4861]: I0219 13:28:55.319754 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4865151-bee3-4d02-bb44-0ba62bdbb741-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:28:55 crc kubenswrapper[4861]: I0219 13:28:55.319767 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4865151-bee3-4d02-bb44-0ba62bdbb741-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 13:28:55 crc kubenswrapper[4861]: I0219 13:28:55.319777 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlbjs\" (UniqueName: \"kubernetes.io/projected/8f05a7b3-b044-4971-9805-0fc1860f53de-kube-api-access-tlbjs\") on node \"crc\" DevicePath \"\"" Feb 19 13:28:55 crc kubenswrapper[4861]: I0219 13:28:55.735712 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2c645ced-1599-4f62-ab9b-0e109a7e02c3","Type":"ContainerStarted","Data":"fb8b9a12794a41cc360324059eea31d8acab394a4b5ab59bcaf457ba030ee34c"} Feb 19 13:28:55 crc kubenswrapper[4861]: I0219 13:28:55.737566 4861 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-6fcf94d689-qvbz9" event={"ID":"e4865151-bee3-4d02-bb44-0ba62bdbb741","Type":"ContainerDied","Data":"5757e1a7ffa3a4df5db8f241438d3ff432c6d49a385db2db5f24b2809af6dd44"} Feb 19 13:28:55 crc kubenswrapper[4861]: I0219 13:28:55.737708 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fcf94d689-qvbz9" Feb 19 13:28:55 crc kubenswrapper[4861]: I0219 13:28:55.741243 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-855cbc58c5-2xjnq" Feb 19 13:28:55 crc kubenswrapper[4861]: I0219 13:28:55.741258 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-855cbc58c5-2xjnq" event={"ID":"8f05a7b3-b044-4971-9805-0fc1860f53de","Type":"ContainerDied","Data":"4d15e824f4badd0eca48fee43b892f691a12fff7c7c9f90f29aefc2127ece97a"} Feb 19 13:28:55 crc kubenswrapper[4861]: I0219 13:28:55.743383 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b117524a-eaad-4666-9e0e-bda909b2ad30","Type":"ContainerStarted","Data":"f11100b3d10e0ed10dbc1ccc95f8c840822253bd67ae1be0a2829b7c7e5404fc"} Feb 19 13:28:55 crc kubenswrapper[4861]: I0219 13:28:55.748179 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-d4skq" event={"ID":"c45bf5fa-a71c-4221-89a9-9c4965821c63","Type":"ContainerStarted","Data":"9c13e4e820ac547e4e25564c211260e299ba982a1a0f89ed70226daf4a58ecdc"} Feb 19 13:28:55 crc kubenswrapper[4861]: I0219 13:28:55.749445 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fe64a04b-1266-4b02-88e5-191f4a974422","Type":"ContainerStarted","Data":"3cbfb3eab539b4a4fa2f026f3ff4040b3cbd06fffc551908b986a67d49a2acba"} Feb 19 13:28:55 crc kubenswrapper[4861]: I0219 13:28:55.826352 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-855cbc58c5-2xjnq"] Feb 19 
13:28:55 crc kubenswrapper[4861]: I0219 13:28:55.837356 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-855cbc58c5-2xjnq"] Feb 19 13:28:55 crc kubenswrapper[4861]: I0219 13:28:55.851828 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fcf94d689-qvbz9"] Feb 19 13:28:55 crc kubenswrapper[4861]: I0219 13:28:55.857486 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6fcf94d689-qvbz9"] Feb 19 13:28:55 crc kubenswrapper[4861]: I0219 13:28:55.987616 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f05a7b3-b044-4971-9805-0fc1860f53de" path="/var/lib/kubelet/pods/8f05a7b3-b044-4971-9805-0fc1860f53de/volumes" Feb 19 13:28:55 crc kubenswrapper[4861]: I0219 13:28:55.988023 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4865151-bee3-4d02-bb44-0ba62bdbb741" path="/var/lib/kubelet/pods/e4865151-bee3-4d02-bb44-0ba62bdbb741/volumes" Feb 19 13:29:01 crc kubenswrapper[4861]: I0219 13:29:01.797038 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"77e9ae58-534e-4312-8b56-9ec6708995ac","Type":"ContainerStarted","Data":"c8f977cef3638a4d742a835ffdc3bb1d6e0f1071f24a76888be477940c78db5d"} Feb 19 13:29:01 crc kubenswrapper[4861]: I0219 13:29:01.806859 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f079df7d-6aa6-4eab-8a9a-3b4bc329f139","Type":"ContainerStarted","Data":"ad8511fcc645d0be3764a7766fb81bdc9a4303a984191f45091f54cbd03e02b4"} Feb 19 13:29:01 crc kubenswrapper[4861]: I0219 13:29:01.807029 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 19 13:29:01 crc kubenswrapper[4861]: I0219 13:29:01.846293 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=21.319531581 podStartE2EDuration="27.846269064s" 
podCreationTimestamp="2026-02-19 13:28:34 +0000 UTC" firstStartedPulling="2026-02-19 13:28:54.351041395 +0000 UTC m=+1149.012144623" lastFinishedPulling="2026-02-19 13:29:00.877778868 +0000 UTC m=+1155.538882106" observedRunningTime="2026-02-19 13:29:01.844333982 +0000 UTC m=+1156.505437210" watchObservedRunningTime="2026-02-19 13:29:01.846269064 +0000 UTC m=+1156.507372292" Feb 19 13:29:02 crc kubenswrapper[4861]: I0219 13:29:02.817060 4861 generic.go:334] "Generic (PLEG): container finished" podID="c45bf5fa-a71c-4221-89a9-9c4965821c63" containerID="0eefac32fae3e0435064673967ea6026f7e8fe88c872f6e36851a8fcecdf988a" exitCode=0 Feb 19 13:29:02 crc kubenswrapper[4861]: I0219 13:29:02.817180 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-d4skq" event={"ID":"c45bf5fa-a71c-4221-89a9-9c4965821c63","Type":"ContainerDied","Data":"0eefac32fae3e0435064673967ea6026f7e8fe88c872f6e36851a8fcecdf988a"} Feb 19 13:29:02 crc kubenswrapper[4861]: I0219 13:29:02.819879 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"32264ced-78d0-432c-8dba-6b312fc09f77","Type":"ContainerStarted","Data":"62f0a4a189e08ccc12765f4ced72626a238814e917ebc946e838985fe50407f9"} Feb 19 13:29:02 crc kubenswrapper[4861]: I0219 13:29:02.820003 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 19 13:29:02 crc kubenswrapper[4861]: I0219 13:29:02.823787 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2c645ced-1599-4f62-ab9b-0e109a7e02c3","Type":"ContainerStarted","Data":"295a17d0f366d47e60a15aeae0d2bc62d87b6fb0521772cb5ddf7947d3727a19"} Feb 19 13:29:02 crc kubenswrapper[4861]: I0219 13:29:02.826558 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"c881f3a1-3450-4ca9-8e8a-1c3d67e46770","Type":"ContainerStarted","Data":"ccf27b55dfd42fdbc55ee84ec26baa141b951ad0cc757ea09a8845bbb6d89325"} Feb 19 13:29:02 crc kubenswrapper[4861]: I0219 13:29:02.828412 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q996h" event={"ID":"97eefa3e-8d45-46c5-bfa6-150d0255a15b","Type":"ContainerStarted","Data":"1fe2bd69f8790f32fe3ed2c80f24fe603fd6477d505ed84850d75392aec160f8"} Feb 19 13:29:02 crc kubenswrapper[4861]: I0219 13:29:02.828639 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-q996h" Feb 19 13:29:02 crc kubenswrapper[4861]: I0219 13:29:02.830552 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"8a4affa6-9b49-416a-9887-fdffab32916c","Type":"ContainerStarted","Data":"5c28b6840c393e68a345c69df7ef556c1f82a9902c2ad4d548998a391a2216a8"} Feb 19 13:29:02 crc kubenswrapper[4861]: I0219 13:29:02.862956 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-q996h" podStartSLOduration=17.443139747 podStartE2EDuration="23.861675898s" podCreationTimestamp="2026-02-19 13:28:39 +0000 UTC" firstStartedPulling="2026-02-19 13:28:54.550981383 +0000 UTC m=+1149.212084601" lastFinishedPulling="2026-02-19 13:29:00.969517514 +0000 UTC m=+1155.630620752" observedRunningTime="2026-02-19 13:29:02.861488622 +0000 UTC m=+1157.522591870" watchObservedRunningTime="2026-02-19 13:29:02.861675898 +0000 UTC m=+1157.522779126" Feb 19 13:29:02 crc kubenswrapper[4861]: I0219 13:29:02.916884 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=19.767194526 podStartE2EDuration="26.916838137s" podCreationTimestamp="2026-02-19 13:28:36 +0000 UTC" firstStartedPulling="2026-02-19 13:28:54.53460401 +0000 UTC m=+1149.195707238" lastFinishedPulling="2026-02-19 13:29:01.684247621 +0000 UTC 
m=+1156.345350849" observedRunningTime="2026-02-19 13:29:02.910153256 +0000 UTC m=+1157.571256484" watchObservedRunningTime="2026-02-19 13:29:02.916838137 +0000 UTC m=+1157.577941365" Feb 19 13:29:03 crc kubenswrapper[4861]: I0219 13:29:03.844030 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"8a4affa6-9b49-416a-9887-fdffab32916c","Type":"ContainerStarted","Data":"4ee4dd2ad3f07f52f88fc12ff7db447e44a9a47a1087d34bbe4cd277664e32c4"} Feb 19 13:29:03 crc kubenswrapper[4861]: I0219 13:29:03.854710 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-d4skq" event={"ID":"c45bf5fa-a71c-4221-89a9-9c4965821c63","Type":"ContainerStarted","Data":"a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358"} Feb 19 13:29:03 crc kubenswrapper[4861]: I0219 13:29:03.857352 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2c645ced-1599-4f62-ab9b-0e109a7e02c3","Type":"ContainerStarted","Data":"0285cd049de2db17e1e6886951ab9077d608a79a2d4f313cbf4d2ecae7759cb4"} Feb 19 13:29:03 crc kubenswrapper[4861]: I0219 13:29:03.875880 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=12.891546417 podStartE2EDuration="21.875863418s" podCreationTimestamp="2026-02-19 13:28:42 +0000 UTC" firstStartedPulling="2026-02-19 13:28:54.463188422 +0000 UTC m=+1149.124291650" lastFinishedPulling="2026-02-19 13:29:03.447505423 +0000 UTC m=+1158.108608651" observedRunningTime="2026-02-19 13:29:03.871704275 +0000 UTC m=+1158.532807533" watchObservedRunningTime="2026-02-19 13:29:03.875863418 +0000 UTC m=+1158.536966646" Feb 19 13:29:03 crc kubenswrapper[4861]: I0219 13:29:03.904007 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=16.275220696 podStartE2EDuration="24.903986527s" podCreationTimestamp="2026-02-19 13:28:39 
+0000 UTC" firstStartedPulling="2026-02-19 13:28:54.822079602 +0000 UTC m=+1149.483182840" lastFinishedPulling="2026-02-19 13:29:03.450845443 +0000 UTC m=+1158.111948671" observedRunningTime="2026-02-19 13:29:03.894220903 +0000 UTC m=+1158.555324131" watchObservedRunningTime="2026-02-19 13:29:03.903986527 +0000 UTC m=+1158.565089765" Feb 19 13:29:04 crc kubenswrapper[4861]: I0219 13:29:04.674096 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 19 13:29:04 crc kubenswrapper[4861]: I0219 13:29:04.737236 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 19 13:29:04 crc kubenswrapper[4861]: I0219 13:29:04.875755 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-d4skq" event={"ID":"c45bf5fa-a71c-4221-89a9-9c4965821c63","Type":"ContainerStarted","Data":"2e0a3d8a266b39b1b104e8f5f77eb2f154403b01173a2de09f73d89c1d7c3a1a"} Feb 19 13:29:04 crc kubenswrapper[4861]: I0219 13:29:04.876644 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 19 13:29:05 crc kubenswrapper[4861]: I0219 13:29:05.588381 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 19 13:29:05 crc kubenswrapper[4861]: I0219 13:29:05.639968 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 19 13:29:05 crc kubenswrapper[4861]: I0219 13:29:05.684381 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-d4skq" podStartSLOduration=20.417544148 podStartE2EDuration="26.684343243s" podCreationTimestamp="2026-02-19 13:28:39 +0000 UTC" firstStartedPulling="2026-02-19 13:28:54.805862644 +0000 UTC m=+1149.466965872" lastFinishedPulling="2026-02-19 13:29:01.072661729 +0000 UTC m=+1155.733764967" observedRunningTime="2026-02-19 
13:29:04.909752869 +0000 UTC m=+1159.570856177" watchObservedRunningTime="2026-02-19 13:29:05.684343243 +0000 UTC m=+1160.345446501" Feb 19 13:29:05 crc kubenswrapper[4861]: I0219 13:29:05.892547 4861 generic.go:334] "Generic (PLEG): container finished" podID="c881f3a1-3450-4ca9-8e8a-1c3d67e46770" containerID="ccf27b55dfd42fdbc55ee84ec26baa141b951ad0cc757ea09a8845bbb6d89325" exitCode=0 Feb 19 13:29:05 crc kubenswrapper[4861]: I0219 13:29:05.892620 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c881f3a1-3450-4ca9-8e8a-1c3d67e46770","Type":"ContainerDied","Data":"ccf27b55dfd42fdbc55ee84ec26baa141b951ad0cc757ea09a8845bbb6d89325"} Feb 19 13:29:05 crc kubenswrapper[4861]: I0219 13:29:05.904229 4861 generic.go:334] "Generic (PLEG): container finished" podID="77e9ae58-534e-4312-8b56-9ec6708995ac" containerID="c8f977cef3638a4d742a835ffdc3bb1d6e0f1071f24a76888be477940c78db5d" exitCode=0 Feb 19 13:29:05 crc kubenswrapper[4861]: I0219 13:29:05.904966 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"77e9ae58-534e-4312-8b56-9ec6708995ac","Type":"ContainerDied","Data":"c8f977cef3638a4d742a835ffdc3bb1d6e0f1071f24a76888be477940c78db5d"} Feb 19 13:29:05 crc kubenswrapper[4861]: I0219 13:29:05.905058 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-d4skq" Feb 19 13:29:05 crc kubenswrapper[4861]: I0219 13:29:05.905147 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-d4skq" Feb 19 13:29:05 crc kubenswrapper[4861]: I0219 13:29:05.905597 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 19 13:29:06 crc kubenswrapper[4861]: I0219 13:29:06.675092 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 19 13:29:06 crc kubenswrapper[4861]: I0219 13:29:06.770898 
4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 19 13:29:06 crc kubenswrapper[4861]: I0219 13:29:06.915335 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"77e9ae58-534e-4312-8b56-9ec6708995ac","Type":"ContainerStarted","Data":"362173b173d50713e668b40267cc089e3476fe407aa787eab8629d823b7bab2c"} Feb 19 13:29:06 crc kubenswrapper[4861]: I0219 13:29:06.918002 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c881f3a1-3450-4ca9-8e8a-1c3d67e46770","Type":"ContainerStarted","Data":"98364a3619cc3e7bfa6596a941af0e9dc1f03e998f4534eef9b37681f5bf9324"} Feb 19 13:29:06 crc kubenswrapper[4861]: I0219 13:29:06.945615 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f54874ffc-g5rq8"] Feb 19 13:29:06 crc kubenswrapper[4861]: I0219 13:29:06.963405 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=28.355224092 podStartE2EDuration="34.963387422s" podCreationTimestamp="2026-02-19 13:28:32 +0000 UTC" firstStartedPulling="2026-02-19 13:28:54.536596565 +0000 UTC m=+1149.197699793" lastFinishedPulling="2026-02-19 13:29:01.144759895 +0000 UTC m=+1155.805863123" observedRunningTime="2026-02-19 13:29:06.950507645 +0000 UTC m=+1161.611610883" watchObservedRunningTime="2026-02-19 13:29:06.963387422 +0000 UTC m=+1161.624490650" Feb 19 13:29:06 crc kubenswrapper[4861]: I0219 13:29:06.992674 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=29.47148168 podStartE2EDuration="35.992656513s" podCreationTimestamp="2026-02-19 13:28:31 +0000 UTC" firstStartedPulling="2026-02-19 13:28:54.551563188 +0000 UTC m=+1149.212666416" lastFinishedPulling="2026-02-19 13:29:01.072737981 +0000 UTC m=+1155.733841249" observedRunningTime="2026-02-19 
13:29:06.981882041 +0000 UTC m=+1161.642985269" watchObservedRunningTime="2026-02-19 13:29:06.992656513 +0000 UTC m=+1161.653759741" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.022494 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57bdd75c-bx56r"] Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.023761 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57bdd75c-bx56r" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.027346 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.047612 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-8bt78"] Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.050178 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-8bt78" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.052352 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.068368 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57bdd75c-bx56r"] Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.088469 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-8bt78"] Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.248608 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b32760c-3d0c-4a06-9df4-63503fae0955-ovsdbserver-nb\") pod \"dnsmasq-dns-57bdd75c-bx56r\" (UID: \"4b32760c-3d0c-4a06-9df4-63503fae0955\") " pod="openstack/dnsmasq-dns-57bdd75c-bx56r" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.249057 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-ovn-rundir\") pod \"ovn-controller-metrics-8bt78\" (UID: \"99cc42c8-6836-4d0e-8d13-6b0a44b2583f\") " pod="openstack/ovn-controller-metrics-8bt78" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.249145 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-combined-ca-bundle\") pod \"ovn-controller-metrics-8bt78\" (UID: \"99cc42c8-6836-4d0e-8d13-6b0a44b2583f\") " pod="openstack/ovn-controller-metrics-8bt78" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.249196 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-8bt78\" (UID: \"99cc42c8-6836-4d0e-8d13-6b0a44b2583f\") " pod="openstack/ovn-controller-metrics-8bt78" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.249225 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b32760c-3d0c-4a06-9df4-63503fae0955-dns-svc\") pod \"dnsmasq-dns-57bdd75c-bx56r\" (UID: \"4b32760c-3d0c-4a06-9df4-63503fae0955\") " pod="openstack/dnsmasq-dns-57bdd75c-bx56r" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.249257 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gps6p\" (UniqueName: \"kubernetes.io/projected/4b32760c-3d0c-4a06-9df4-63503fae0955-kube-api-access-gps6p\") pod \"dnsmasq-dns-57bdd75c-bx56r\" (UID: \"4b32760c-3d0c-4a06-9df4-63503fae0955\") " pod="openstack/dnsmasq-dns-57bdd75c-bx56r" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 
13:29:07.249398 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-ovs-rundir\") pod \"ovn-controller-metrics-8bt78\" (UID: \"99cc42c8-6836-4d0e-8d13-6b0a44b2583f\") " pod="openstack/ovn-controller-metrics-8bt78" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.249487 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjj9s\" (UniqueName: \"kubernetes.io/projected/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-kube-api-access-cjj9s\") pod \"ovn-controller-metrics-8bt78\" (UID: \"99cc42c8-6836-4d0e-8d13-6b0a44b2583f\") " pod="openstack/ovn-controller-metrics-8bt78" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.249628 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b32760c-3d0c-4a06-9df4-63503fae0955-config\") pod \"dnsmasq-dns-57bdd75c-bx56r\" (UID: \"4b32760c-3d0c-4a06-9df4-63503fae0955\") " pod="openstack/dnsmasq-dns-57bdd75c-bx56r" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.249681 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-config\") pod \"ovn-controller-metrics-8bt78\" (UID: \"99cc42c8-6836-4d0e-8d13-6b0a44b2583f\") " pod="openstack/ovn-controller-metrics-8bt78" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.318262 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67ff45466c-2k898"] Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.351584 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b32760c-3d0c-4a06-9df4-63503fae0955-config\") pod \"dnsmasq-dns-57bdd75c-bx56r\" 
(UID: \"4b32760c-3d0c-4a06-9df4-63503fae0955\") " pod="openstack/dnsmasq-dns-57bdd75c-bx56r" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.351643 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-config\") pod \"ovn-controller-metrics-8bt78\" (UID: \"99cc42c8-6836-4d0e-8d13-6b0a44b2583f\") " pod="openstack/ovn-controller-metrics-8bt78" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.351727 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b32760c-3d0c-4a06-9df4-63503fae0955-ovsdbserver-nb\") pod \"dnsmasq-dns-57bdd75c-bx56r\" (UID: \"4b32760c-3d0c-4a06-9df4-63503fae0955\") " pod="openstack/dnsmasq-dns-57bdd75c-bx56r" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.351750 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-ovn-rundir\") pod \"ovn-controller-metrics-8bt78\" (UID: \"99cc42c8-6836-4d0e-8d13-6b0a44b2583f\") " pod="openstack/ovn-controller-metrics-8bt78" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.351781 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-combined-ca-bundle\") pod \"ovn-controller-metrics-8bt78\" (UID: \"99cc42c8-6836-4d0e-8d13-6b0a44b2583f\") " pod="openstack/ovn-controller-metrics-8bt78" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.351810 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-8bt78\" (UID: \"99cc42c8-6836-4d0e-8d13-6b0a44b2583f\") " 
pod="openstack/ovn-controller-metrics-8bt78" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.351830 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b32760c-3d0c-4a06-9df4-63503fae0955-dns-svc\") pod \"dnsmasq-dns-57bdd75c-bx56r\" (UID: \"4b32760c-3d0c-4a06-9df4-63503fae0955\") " pod="openstack/dnsmasq-dns-57bdd75c-bx56r" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.351855 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gps6p\" (UniqueName: \"kubernetes.io/projected/4b32760c-3d0c-4a06-9df4-63503fae0955-kube-api-access-gps6p\") pod \"dnsmasq-dns-57bdd75c-bx56r\" (UID: \"4b32760c-3d0c-4a06-9df4-63503fae0955\") " pod="openstack/dnsmasq-dns-57bdd75c-bx56r" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.352108 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-ovn-rundir\") pod \"ovn-controller-metrics-8bt78\" (UID: \"99cc42c8-6836-4d0e-8d13-6b0a44b2583f\") " pod="openstack/ovn-controller-metrics-8bt78" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.353061 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b32760c-3d0c-4a06-9df4-63503fae0955-dns-svc\") pod \"dnsmasq-dns-57bdd75c-bx56r\" (UID: \"4b32760c-3d0c-4a06-9df4-63503fae0955\") " pod="openstack/dnsmasq-dns-57bdd75c-bx56r" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.353052 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b32760c-3d0c-4a06-9df4-63503fae0955-config\") pod \"dnsmasq-dns-57bdd75c-bx56r\" (UID: \"4b32760c-3d0c-4a06-9df4-63503fae0955\") " pod="openstack/dnsmasq-dns-57bdd75c-bx56r" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.353231 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b32760c-3d0c-4a06-9df4-63503fae0955-ovsdbserver-nb\") pod \"dnsmasq-dns-57bdd75c-bx56r\" (UID: \"4b32760c-3d0c-4a06-9df4-63503fae0955\") " pod="openstack/dnsmasq-dns-57bdd75c-bx56r" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.353666 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-config\") pod \"ovn-controller-metrics-8bt78\" (UID: \"99cc42c8-6836-4d0e-8d13-6b0a44b2583f\") " pod="openstack/ovn-controller-metrics-8bt78" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.353711 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-ovs-rundir\") pod \"ovn-controller-metrics-8bt78\" (UID: \"99cc42c8-6836-4d0e-8d13-6b0a44b2583f\") " pod="openstack/ovn-controller-metrics-8bt78" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.353816 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjj9s\" (UniqueName: \"kubernetes.io/projected/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-kube-api-access-cjj9s\") pod \"ovn-controller-metrics-8bt78\" (UID: \"99cc42c8-6836-4d0e-8d13-6b0a44b2583f\") " pod="openstack/ovn-controller-metrics-8bt78" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.354453 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-ovs-rundir\") pod \"ovn-controller-metrics-8bt78\" (UID: \"99cc42c8-6836-4d0e-8d13-6b0a44b2583f\") " pod="openstack/ovn-controller-metrics-8bt78" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.354487 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75b7bcc64f-rwzx6"] Feb 19 
13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.362280 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-combined-ca-bundle\") pod \"ovn-controller-metrics-8bt78\" (UID: \"99cc42c8-6836-4d0e-8d13-6b0a44b2583f\") " pod="openstack/ovn-controller-metrics-8bt78" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.364969 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-8bt78\" (UID: \"99cc42c8-6836-4d0e-8d13-6b0a44b2583f\") " pod="openstack/ovn-controller-metrics-8bt78" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.383158 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75b7bcc64f-rwzx6" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.383340 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjj9s\" (UniqueName: \"kubernetes.io/projected/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-kube-api-access-cjj9s\") pod \"ovn-controller-metrics-8bt78\" (UID: \"99cc42c8-6836-4d0e-8d13-6b0a44b2583f\") " pod="openstack/ovn-controller-metrics-8bt78" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.383558 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gps6p\" (UniqueName: \"kubernetes.io/projected/4b32760c-3d0c-4a06-9df4-63503fae0955-kube-api-access-gps6p\") pod \"dnsmasq-dns-57bdd75c-bx56r\" (UID: \"4b32760c-3d0c-4a06-9df4-63503fae0955\") " pod="openstack/dnsmasq-dns-57bdd75c-bx56r" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.385646 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.390954 4861 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-8bt78" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.391351 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75b7bcc64f-rwzx6"] Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.445710 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f54874ffc-g5rq8" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.456013 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b3203a6-ea06-471a-92b8-93634f2099c0-ovsdbserver-sb\") pod \"dnsmasq-dns-75b7bcc64f-rwzx6\" (UID: \"7b3203a6-ea06-471a-92b8-93634f2099c0\") " pod="openstack/dnsmasq-dns-75b7bcc64f-rwzx6" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.456060 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnv59\" (UniqueName: \"kubernetes.io/projected/7b3203a6-ea06-471a-92b8-93634f2099c0-kube-api-access-pnv59\") pod \"dnsmasq-dns-75b7bcc64f-rwzx6\" (UID: \"7b3203a6-ea06-471a-92b8-93634f2099c0\") " pod="openstack/dnsmasq-dns-75b7bcc64f-rwzx6" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.456094 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b3203a6-ea06-471a-92b8-93634f2099c0-config\") pod \"dnsmasq-dns-75b7bcc64f-rwzx6\" (UID: \"7b3203a6-ea06-471a-92b8-93634f2099c0\") " pod="openstack/dnsmasq-dns-75b7bcc64f-rwzx6" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.456120 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b3203a6-ea06-471a-92b8-93634f2099c0-ovsdbserver-nb\") pod \"dnsmasq-dns-75b7bcc64f-rwzx6\" (UID: 
\"7b3203a6-ea06-471a-92b8-93634f2099c0\") " pod="openstack/dnsmasq-dns-75b7bcc64f-rwzx6" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.456154 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b3203a6-ea06-471a-92b8-93634f2099c0-dns-svc\") pod \"dnsmasq-dns-75b7bcc64f-rwzx6\" (UID: \"7b3203a6-ea06-471a-92b8-93634f2099c0\") " pod="openstack/dnsmasq-dns-75b7bcc64f-rwzx6" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.558026 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b058250-c5d7-4028-a1d0-79071989d140-config\") pod \"8b058250-c5d7-4028-a1d0-79071989d140\" (UID: \"8b058250-c5d7-4028-a1d0-79071989d140\") " Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.558238 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x24kf\" (UniqueName: \"kubernetes.io/projected/8b058250-c5d7-4028-a1d0-79071989d140-kube-api-access-x24kf\") pod \"8b058250-c5d7-4028-a1d0-79071989d140\" (UID: \"8b058250-c5d7-4028-a1d0-79071989d140\") " Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.558312 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b058250-c5d7-4028-a1d0-79071989d140-dns-svc\") pod \"8b058250-c5d7-4028-a1d0-79071989d140\" (UID: \"8b058250-c5d7-4028-a1d0-79071989d140\") " Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.558716 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b3203a6-ea06-471a-92b8-93634f2099c0-ovsdbserver-sb\") pod \"dnsmasq-dns-75b7bcc64f-rwzx6\" (UID: \"7b3203a6-ea06-471a-92b8-93634f2099c0\") " pod="openstack/dnsmasq-dns-75b7bcc64f-rwzx6" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.558749 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnv59\" (UniqueName: \"kubernetes.io/projected/7b3203a6-ea06-471a-92b8-93634f2099c0-kube-api-access-pnv59\") pod \"dnsmasq-dns-75b7bcc64f-rwzx6\" (UID: \"7b3203a6-ea06-471a-92b8-93634f2099c0\") " pod="openstack/dnsmasq-dns-75b7bcc64f-rwzx6" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.558778 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b3203a6-ea06-471a-92b8-93634f2099c0-config\") pod \"dnsmasq-dns-75b7bcc64f-rwzx6\" (UID: \"7b3203a6-ea06-471a-92b8-93634f2099c0\") " pod="openstack/dnsmasq-dns-75b7bcc64f-rwzx6" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.558803 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b3203a6-ea06-471a-92b8-93634f2099c0-ovsdbserver-nb\") pod \"dnsmasq-dns-75b7bcc64f-rwzx6\" (UID: \"7b3203a6-ea06-471a-92b8-93634f2099c0\") " pod="openstack/dnsmasq-dns-75b7bcc64f-rwzx6" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.558838 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b3203a6-ea06-471a-92b8-93634f2099c0-dns-svc\") pod \"dnsmasq-dns-75b7bcc64f-rwzx6\" (UID: \"7b3203a6-ea06-471a-92b8-93634f2099c0\") " pod="openstack/dnsmasq-dns-75b7bcc64f-rwzx6" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.559031 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b058250-c5d7-4028-a1d0-79071989d140-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8b058250-c5d7-4028-a1d0-79071989d140" (UID: "8b058250-c5d7-4028-a1d0-79071989d140"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.559023 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b058250-c5d7-4028-a1d0-79071989d140-config" (OuterVolumeSpecName: "config") pod "8b058250-c5d7-4028-a1d0-79071989d140" (UID: "8b058250-c5d7-4028-a1d0-79071989d140"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.559874 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b3203a6-ea06-471a-92b8-93634f2099c0-dns-svc\") pod \"dnsmasq-dns-75b7bcc64f-rwzx6\" (UID: \"7b3203a6-ea06-471a-92b8-93634f2099c0\") " pod="openstack/dnsmasq-dns-75b7bcc64f-rwzx6" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.560182 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b3203a6-ea06-471a-92b8-93634f2099c0-ovsdbserver-sb\") pod \"dnsmasq-dns-75b7bcc64f-rwzx6\" (UID: \"7b3203a6-ea06-471a-92b8-93634f2099c0\") " pod="openstack/dnsmasq-dns-75b7bcc64f-rwzx6" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.561353 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b3203a6-ea06-471a-92b8-93634f2099c0-ovsdbserver-nb\") pod \"dnsmasq-dns-75b7bcc64f-rwzx6\" (UID: \"7b3203a6-ea06-471a-92b8-93634f2099c0\") " pod="openstack/dnsmasq-dns-75b7bcc64f-rwzx6" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.561753 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b3203a6-ea06-471a-92b8-93634f2099c0-config\") pod \"dnsmasq-dns-75b7bcc64f-rwzx6\" (UID: \"7b3203a6-ea06-471a-92b8-93634f2099c0\") " pod="openstack/dnsmasq-dns-75b7bcc64f-rwzx6" Feb 19 13:29:07 crc 
kubenswrapper[4861]: I0219 13:29:07.564535 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b058250-c5d7-4028-a1d0-79071989d140-kube-api-access-x24kf" (OuterVolumeSpecName: "kube-api-access-x24kf") pod "8b058250-c5d7-4028-a1d0-79071989d140" (UID: "8b058250-c5d7-4028-a1d0-79071989d140"). InnerVolumeSpecName "kube-api-access-x24kf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.579462 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnv59\" (UniqueName: \"kubernetes.io/projected/7b3203a6-ea06-471a-92b8-93634f2099c0-kube-api-access-pnv59\") pod \"dnsmasq-dns-75b7bcc64f-rwzx6\" (UID: \"7b3203a6-ea06-471a-92b8-93634f2099c0\") " pod="openstack/dnsmasq-dns-75b7bcc64f-rwzx6" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.664241 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x24kf\" (UniqueName: \"kubernetes.io/projected/8b058250-c5d7-4028-a1d0-79071989d140-kube-api-access-x24kf\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.664284 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b058250-c5d7-4028-a1d0-79071989d140-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.664296 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b058250-c5d7-4028-a1d0-79071989d140-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.680359 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57bdd75c-bx56r" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.758292 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75b7bcc64f-rwzx6" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.880321 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-8bt78"] Feb 19 13:29:07 crc kubenswrapper[4861]: W0219 13:29:07.888875 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99cc42c8_6836_4d0e_8d13_6b0a44b2583f.slice/crio-b141b0c9e6aa947ddd6e7a375e650503f18da8815fc88e2a8857a8ce200f2e1b WatchSource:0}: Error finding container b141b0c9e6aa947ddd6e7a375e650503f18da8815fc88e2a8857a8ce200f2e1b: Status 404 returned error can't find the container with id b141b0c9e6aa947ddd6e7a375e650503f18da8815fc88e2a8857a8ce200f2e1b Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.948198 4861 generic.go:334] "Generic (PLEG): container finished" podID="7fe31df8-282b-447b-956e-7e6ae9b9d52b" containerID="7b323c85ca183f1b3c06293d7d9752850390fc73353b4f0c10961b57c79cb251" exitCode=0 Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.948331 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67ff45466c-2k898" event={"ID":"7fe31df8-282b-447b-956e-7e6ae9b9d52b","Type":"ContainerDied","Data":"7b323c85ca183f1b3c06293d7d9752850390fc73353b4f0c10961b57c79cb251"} Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.955399 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-8bt78" event={"ID":"99cc42c8-6836-4d0e-8d13-6b0a44b2583f","Type":"ContainerStarted","Data":"b141b0c9e6aa947ddd6e7a375e650503f18da8815fc88e2a8857a8ce200f2e1b"} Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.959262 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f54874ffc-g5rq8" Feb 19 13:29:07 crc kubenswrapper[4861]: I0219 13:29:07.964202 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f54874ffc-g5rq8" event={"ID":"8b058250-c5d7-4028-a1d0-79071989d140","Type":"ContainerDied","Data":"55a19fe1a69d04f98b3636ec5a31c6a9d523276ab7c8abcea941d23b74c7c0a6"} Feb 19 13:29:08 crc kubenswrapper[4861]: I0219 13:29:08.051293 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f54874ffc-g5rq8"] Feb 19 13:29:08 crc kubenswrapper[4861]: I0219 13:29:08.066357 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f54874ffc-g5rq8"] Feb 19 13:29:08 crc kubenswrapper[4861]: I0219 13:29:08.125972 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57bdd75c-bx56r"] Feb 19 13:29:08 crc kubenswrapper[4861]: W0219 13:29:08.130039 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b32760c_3d0c_4a06_9df4_63503fae0955.slice/crio-db86f5d7c03ce7b67b230e38bc127d5d525c255d0f4f1d8a02bcbd85cc0b6f98 WatchSource:0}: Error finding container db86f5d7c03ce7b67b230e38bc127d5d525c255d0f4f1d8a02bcbd85cc0b6f98: Status 404 returned error can't find the container with id db86f5d7c03ce7b67b230e38bc127d5d525c255d0f4f1d8a02bcbd85cc0b6f98 Feb 19 13:29:08 crc kubenswrapper[4861]: I0219 13:29:08.237988 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75b7bcc64f-rwzx6"] Feb 19 13:29:08 crc kubenswrapper[4861]: W0219 13:29:08.253764 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b3203a6_ea06_471a_92b8_93634f2099c0.slice/crio-74fc72627de99ca68f6fea6ae80239987af542480b2d9a27dfca45de48356814 WatchSource:0}: Error finding container 74fc72627de99ca68f6fea6ae80239987af542480b2d9a27dfca45de48356814: Status 
404 returned error can't find the container with id 74fc72627de99ca68f6fea6ae80239987af542480b2d9a27dfca45de48356814 Feb 19 13:29:08 crc kubenswrapper[4861]: I0219 13:29:08.293066 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67ff45466c-2k898" Feb 19 13:29:08 crc kubenswrapper[4861]: I0219 13:29:08.399029 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b47mw\" (UniqueName: \"kubernetes.io/projected/7fe31df8-282b-447b-956e-7e6ae9b9d52b-kube-api-access-b47mw\") pod \"7fe31df8-282b-447b-956e-7e6ae9b9d52b\" (UID: \"7fe31df8-282b-447b-956e-7e6ae9b9d52b\") " Feb 19 13:29:08 crc kubenswrapper[4861]: I0219 13:29:08.399597 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fe31df8-282b-447b-956e-7e6ae9b9d52b-config\") pod \"7fe31df8-282b-447b-956e-7e6ae9b9d52b\" (UID: \"7fe31df8-282b-447b-956e-7e6ae9b9d52b\") " Feb 19 13:29:08 crc kubenswrapper[4861]: I0219 13:29:08.399659 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fe31df8-282b-447b-956e-7e6ae9b9d52b-dns-svc\") pod \"7fe31df8-282b-447b-956e-7e6ae9b9d52b\" (UID: \"7fe31df8-282b-447b-956e-7e6ae9b9d52b\") " Feb 19 13:29:08 crc kubenswrapper[4861]: I0219 13:29:08.405737 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fe31df8-282b-447b-956e-7e6ae9b9d52b-kube-api-access-b47mw" (OuterVolumeSpecName: "kube-api-access-b47mw") pod "7fe31df8-282b-447b-956e-7e6ae9b9d52b" (UID: "7fe31df8-282b-447b-956e-7e6ae9b9d52b"). InnerVolumeSpecName "kube-api-access-b47mw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:29:08 crc kubenswrapper[4861]: I0219 13:29:08.423591 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fe31df8-282b-447b-956e-7e6ae9b9d52b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7fe31df8-282b-447b-956e-7e6ae9b9d52b" (UID: "7fe31df8-282b-447b-956e-7e6ae9b9d52b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:08 crc kubenswrapper[4861]: I0219 13:29:08.425767 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fe31df8-282b-447b-956e-7e6ae9b9d52b-config" (OuterVolumeSpecName: "config") pod "7fe31df8-282b-447b-956e-7e6ae9b9d52b" (UID: "7fe31df8-282b-447b-956e-7e6ae9b9d52b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:08 crc kubenswrapper[4861]: I0219 13:29:08.502456 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fe31df8-282b-447b-956e-7e6ae9b9d52b-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:08 crc kubenswrapper[4861]: I0219 13:29:08.502511 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fe31df8-282b-447b-956e-7e6ae9b9d52b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:08 crc kubenswrapper[4861]: I0219 13:29:08.502531 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b47mw\" (UniqueName: \"kubernetes.io/projected/7fe31df8-282b-447b-956e-7e6ae9b9d52b-kube-api-access-b47mw\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:08 crc kubenswrapper[4861]: I0219 13:29:08.720287 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 19 13:29:08 crc kubenswrapper[4861]: I0219 13:29:08.950312 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] 
Feb 19 13:29:08 crc kubenswrapper[4861]: E0219 13:29:08.950726 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fe31df8-282b-447b-956e-7e6ae9b9d52b" containerName="init" Feb 19 13:29:08 crc kubenswrapper[4861]: I0219 13:29:08.950741 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fe31df8-282b-447b-956e-7e6ae9b9d52b" containerName="init" Feb 19 13:29:08 crc kubenswrapper[4861]: I0219 13:29:08.950951 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fe31df8-282b-447b-956e-7e6ae9b9d52b" containerName="init" Feb 19 13:29:08 crc kubenswrapper[4861]: I0219 13:29:08.951989 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 13:29:08 crc kubenswrapper[4861]: I0219 13:29:08.970250 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 19 13:29:08 crc kubenswrapper[4861]: I0219 13:29:08.970715 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 19 13:29:08 crc kubenswrapper[4861]: I0219 13:29:08.970998 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 19 13:29:08 crc kubenswrapper[4861]: I0219 13:29:08.971134 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-xbdt7" Feb 19 13:29:08 crc kubenswrapper[4861]: I0219 13:29:08.989354 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.020845 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-8bt78" event={"ID":"99cc42c8-6836-4d0e-8d13-6b0a44b2583f","Type":"ContainerStarted","Data":"9ecb2a5a0a2e89a60f5cb3038c9b421573c3c77bb1ddc2f92c091a4a5d3709ac"} Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.024918 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-67ff45466c-2k898" event={"ID":"7fe31df8-282b-447b-956e-7e6ae9b9d52b","Type":"ContainerDied","Data":"7dbd7b61948cd2cf38f2c040d998e60ff1c7c2013ddb6c2c12b0b01b8166cc32"} Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.024997 4861 scope.go:117] "RemoveContainer" containerID="7b323c85ca183f1b3c06293d7d9752850390fc73353b4f0c10961b57c79cb251" Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.025256 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67ff45466c-2k898" Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.034501 4861 generic.go:334] "Generic (PLEG): container finished" podID="7b3203a6-ea06-471a-92b8-93634f2099c0" containerID="e433117d2b8b229b3bef18df970a907fecce077c086abda19efa4d6e416772ab" exitCode=0 Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.035106 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b7bcc64f-rwzx6" event={"ID":"7b3203a6-ea06-471a-92b8-93634f2099c0","Type":"ContainerDied","Data":"e433117d2b8b229b3bef18df970a907fecce077c086abda19efa4d6e416772ab"} Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.035146 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b7bcc64f-rwzx6" event={"ID":"7b3203a6-ea06-471a-92b8-93634f2099c0","Type":"ContainerStarted","Data":"74fc72627de99ca68f6fea6ae80239987af542480b2d9a27dfca45de48356814"} Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.079663 4861 generic.go:334] "Generic (PLEG): container finished" podID="4b32760c-3d0c-4a06-9df4-63503fae0955" containerID="7074a5f3b3e156e1b3cde5a1b61c4381d4c27e82fa77395e6f6940c7bfb58607" exitCode=0 Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.079739 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bdd75c-bx56r" 
event={"ID":"4b32760c-3d0c-4a06-9df4-63503fae0955","Type":"ContainerDied","Data":"7074a5f3b3e156e1b3cde5a1b61c4381d4c27e82fa77395e6f6940c7bfb58607"} Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.079774 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bdd75c-bx56r" event={"ID":"4b32760c-3d0c-4a06-9df4-63503fae0955","Type":"ContainerStarted","Data":"db86f5d7c03ce7b67b230e38bc127d5d525c255d0f4f1d8a02bcbd85cc0b6f98"} Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.158081 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92ee6ab7-feb7-4dbd-881a-b8250652aef9-config\") pod \"ovn-northd-0\" (UID: \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\") " pod="openstack/ovn-northd-0" Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.158121 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92ee6ab7-feb7-4dbd-881a-b8250652aef9-scripts\") pod \"ovn-northd-0\" (UID: \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\") " pod="openstack/ovn-northd-0" Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.158165 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/92ee6ab7-feb7-4dbd-881a-b8250652aef9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\") " pod="openstack/ovn-northd-0" Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.158201 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/92ee6ab7-feb7-4dbd-881a-b8250652aef9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\") " pod="openstack/ovn-northd-0" Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 
13:29:09.158224 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/92ee6ab7-feb7-4dbd-881a-b8250652aef9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\") " pod="openstack/ovn-northd-0" Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.158249 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92ee6ab7-feb7-4dbd-881a-b8250652aef9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\") " pod="openstack/ovn-northd-0" Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.158269 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwnxk\" (UniqueName: \"kubernetes.io/projected/92ee6ab7-feb7-4dbd-881a-b8250652aef9-kube-api-access-qwnxk\") pod \"ovn-northd-0\" (UID: \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\") " pod="openstack/ovn-northd-0" Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.198820 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-8bt78" podStartSLOduration=3.198803472 podStartE2EDuration="3.198803472s" podCreationTimestamp="2026-02-19 13:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:29:09.103576681 +0000 UTC m=+1163.764679899" watchObservedRunningTime="2026-02-19 13:29:09.198803472 +0000 UTC m=+1163.859906700" Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.259354 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92ee6ab7-feb7-4dbd-881a-b8250652aef9-config\") pod \"ovn-northd-0\" (UID: \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\") " 
pod="openstack/ovn-northd-0" Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.259394 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92ee6ab7-feb7-4dbd-881a-b8250652aef9-scripts\") pod \"ovn-northd-0\" (UID: \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\") " pod="openstack/ovn-northd-0" Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.259449 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/92ee6ab7-feb7-4dbd-881a-b8250652aef9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\") " pod="openstack/ovn-northd-0" Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.259493 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/92ee6ab7-feb7-4dbd-881a-b8250652aef9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\") " pod="openstack/ovn-northd-0" Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.259515 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/92ee6ab7-feb7-4dbd-881a-b8250652aef9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\") " pod="openstack/ovn-northd-0" Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.259537 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92ee6ab7-feb7-4dbd-881a-b8250652aef9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\") " pod="openstack/ovn-northd-0" Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.259558 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwnxk\" 
(UniqueName: \"kubernetes.io/projected/92ee6ab7-feb7-4dbd-881a-b8250652aef9-kube-api-access-qwnxk\") pod \"ovn-northd-0\" (UID: \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\") " pod="openstack/ovn-northd-0" Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.262028 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/92ee6ab7-feb7-4dbd-881a-b8250652aef9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\") " pod="openstack/ovn-northd-0" Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.262310 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92ee6ab7-feb7-4dbd-881a-b8250652aef9-scripts\") pod \"ovn-northd-0\" (UID: \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\") " pod="openstack/ovn-northd-0" Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.262325 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92ee6ab7-feb7-4dbd-881a-b8250652aef9-config\") pod \"ovn-northd-0\" (UID: \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\") " pod="openstack/ovn-northd-0" Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.273303 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/92ee6ab7-feb7-4dbd-881a-b8250652aef9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\") " pod="openstack/ovn-northd-0" Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.273901 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/92ee6ab7-feb7-4dbd-881a-b8250652aef9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\") " pod="openstack/ovn-northd-0" Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.275331 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92ee6ab7-feb7-4dbd-881a-b8250652aef9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\") " pod="openstack/ovn-northd-0" Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.283549 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwnxk\" (UniqueName: \"kubernetes.io/projected/92ee6ab7-feb7-4dbd-881a-b8250652aef9-kube-api-access-qwnxk\") pod \"ovn-northd-0\" (UID: \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\") " pod="openstack/ovn-northd-0" Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.287494 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67ff45466c-2k898"] Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.290578 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.296895 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67ff45466c-2k898"] Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.432717 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.830258 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 13:29:09 crc kubenswrapper[4861]: W0219 13:29:09.832383 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92ee6ab7_feb7_4dbd_881a_b8250652aef9.slice/crio-64069a6ca39e06e805ac89d3b0154e0cdab7ee86b9afa4ae9dc1d783fb0ff1bb WatchSource:0}: Error finding container 64069a6ca39e06e805ac89d3b0154e0cdab7ee86b9afa4ae9dc1d783fb0ff1bb: Status 404 returned error can't find the container with id 64069a6ca39e06e805ac89d3b0154e0cdab7ee86b9afa4ae9dc1d783fb0ff1bb 
Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.994132 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fe31df8-282b-447b-956e-7e6ae9b9d52b" path="/var/lib/kubelet/pods/7fe31df8-282b-447b-956e-7e6ae9b9d52b/volumes" Feb 19 13:29:09 crc kubenswrapper[4861]: I0219 13:29:09.994860 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b058250-c5d7-4028-a1d0-79071989d140" path="/var/lib/kubelet/pods/8b058250-c5d7-4028-a1d0-79071989d140/volumes" Feb 19 13:29:10 crc kubenswrapper[4861]: I0219 13:29:10.093038 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b7bcc64f-rwzx6" event={"ID":"7b3203a6-ea06-471a-92b8-93634f2099c0","Type":"ContainerStarted","Data":"1ee4981da9f89cd6bb8babcda7e9e5ae7a8c0047931fcc84922e05a0980e4a6e"} Feb 19 13:29:10 crc kubenswrapper[4861]: I0219 13:29:10.093560 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75b7bcc64f-rwzx6" Feb 19 13:29:10 crc kubenswrapper[4861]: I0219 13:29:10.094108 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"92ee6ab7-feb7-4dbd-881a-b8250652aef9","Type":"ContainerStarted","Data":"64069a6ca39e06e805ac89d3b0154e0cdab7ee86b9afa4ae9dc1d783fb0ff1bb"} Feb 19 13:29:10 crc kubenswrapper[4861]: I0219 13:29:10.096956 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bdd75c-bx56r" event={"ID":"4b32760c-3d0c-4a06-9df4-63503fae0955","Type":"ContainerStarted","Data":"49bb7f38083fac91edbd3b699ab43a27e5f9edb7f15b840c8e290849bb63772c"} Feb 19 13:29:10 crc kubenswrapper[4861]: I0219 13:29:10.118053 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75b7bcc64f-rwzx6" podStartSLOduration=3.118032208 podStartE2EDuration="3.118032208s" podCreationTimestamp="2026-02-19 13:29:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-19 13:29:10.117243297 +0000 UTC m=+1164.778346535" watchObservedRunningTime="2026-02-19 13:29:10.118032208 +0000 UTC m=+1164.779135436" Feb 19 13:29:10 crc kubenswrapper[4861]: I0219 13:29:10.140411 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57bdd75c-bx56r" podStartSLOduration=4.140392502 podStartE2EDuration="4.140392502s" podCreationTimestamp="2026-02-19 13:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:29:10.136384504 +0000 UTC m=+1164.797487742" watchObservedRunningTime="2026-02-19 13:29:10.140392502 +0000 UTC m=+1164.801495720" Feb 19 13:29:11 crc kubenswrapper[4861]: I0219 13:29:11.108103 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57bdd75c-bx56r" Feb 19 13:29:12 crc kubenswrapper[4861]: I0219 13:29:12.964321 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 19 13:29:12 crc kubenswrapper[4861]: I0219 13:29:12.964862 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 19 13:29:14 crc kubenswrapper[4861]: I0219 13:29:14.351280 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 19 13:29:14 crc kubenswrapper[4861]: I0219 13:29:14.352069 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 19 13:29:15 crc kubenswrapper[4861]: I0219 13:29:15.536166 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 19 13:29:15 crc kubenswrapper[4861]: I0219 13:29:15.681021 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 19 13:29:16 crc kubenswrapper[4861]: 
I0219 13:29:16.174253 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"92ee6ab7-feb7-4dbd-881a-b8250652aef9","Type":"ContainerStarted","Data":"37b5efa091b40af32f79760c01361587fb0151c53e6b051790c4ce833de471f6"} Feb 19 13:29:16 crc kubenswrapper[4861]: I0219 13:29:16.888221 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57bdd75c-bx56r"] Feb 19 13:29:16 crc kubenswrapper[4861]: I0219 13:29:16.888891 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57bdd75c-bx56r" podUID="4b32760c-3d0c-4a06-9df4-63503fae0955" containerName="dnsmasq-dns" containerID="cri-o://49bb7f38083fac91edbd3b699ab43a27e5f9edb7f15b840c8e290849bb63772c" gracePeriod=10 Feb 19 13:29:16 crc kubenswrapper[4861]: I0219 13:29:16.897989 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57bdd75c-bx56r" Feb 19 13:29:16 crc kubenswrapper[4861]: I0219 13:29:16.923529 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-689df5d84f-bjwqj"] Feb 19 13:29:16 crc kubenswrapper[4861]: I0219 13:29:16.924783 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-689df5d84f-bjwqj" Feb 19 13:29:16 crc kubenswrapper[4861]: I0219 13:29:16.934264 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-689df5d84f-bjwqj"] Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.023404 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/638f49e9-45bb-4106-a3bc-a53a23fbc313-ovsdbserver-nb\") pod \"dnsmasq-dns-689df5d84f-bjwqj\" (UID: \"638f49e9-45bb-4106-a3bc-a53a23fbc313\") " pod="openstack/dnsmasq-dns-689df5d84f-bjwqj" Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.023882 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/638f49e9-45bb-4106-a3bc-a53a23fbc313-ovsdbserver-sb\") pod \"dnsmasq-dns-689df5d84f-bjwqj\" (UID: \"638f49e9-45bb-4106-a3bc-a53a23fbc313\") " pod="openstack/dnsmasq-dns-689df5d84f-bjwqj" Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.023903 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9gxk\" (UniqueName: \"kubernetes.io/projected/638f49e9-45bb-4106-a3bc-a53a23fbc313-kube-api-access-f9gxk\") pod \"dnsmasq-dns-689df5d84f-bjwqj\" (UID: \"638f49e9-45bb-4106-a3bc-a53a23fbc313\") " pod="openstack/dnsmasq-dns-689df5d84f-bjwqj" Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.023931 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/638f49e9-45bb-4106-a3bc-a53a23fbc313-dns-svc\") pod \"dnsmasq-dns-689df5d84f-bjwqj\" (UID: \"638f49e9-45bb-4106-a3bc-a53a23fbc313\") " pod="openstack/dnsmasq-dns-689df5d84f-bjwqj" Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.024015 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/638f49e9-45bb-4106-a3bc-a53a23fbc313-config\") pod \"dnsmasq-dns-689df5d84f-bjwqj\" (UID: \"638f49e9-45bb-4106-a3bc-a53a23fbc313\") " pod="openstack/dnsmasq-dns-689df5d84f-bjwqj" Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.125263 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/638f49e9-45bb-4106-a3bc-a53a23fbc313-config\") pod \"dnsmasq-dns-689df5d84f-bjwqj\" (UID: \"638f49e9-45bb-4106-a3bc-a53a23fbc313\") " pod="openstack/dnsmasq-dns-689df5d84f-bjwqj" Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.125320 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/638f49e9-45bb-4106-a3bc-a53a23fbc313-ovsdbserver-nb\") pod \"dnsmasq-dns-689df5d84f-bjwqj\" (UID: \"638f49e9-45bb-4106-a3bc-a53a23fbc313\") " pod="openstack/dnsmasq-dns-689df5d84f-bjwqj" Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.125356 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/638f49e9-45bb-4106-a3bc-a53a23fbc313-ovsdbserver-sb\") pod \"dnsmasq-dns-689df5d84f-bjwqj\" (UID: \"638f49e9-45bb-4106-a3bc-a53a23fbc313\") " pod="openstack/dnsmasq-dns-689df5d84f-bjwqj" Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.125374 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9gxk\" (UniqueName: \"kubernetes.io/projected/638f49e9-45bb-4106-a3bc-a53a23fbc313-kube-api-access-f9gxk\") pod \"dnsmasq-dns-689df5d84f-bjwqj\" (UID: \"638f49e9-45bb-4106-a3bc-a53a23fbc313\") " pod="openstack/dnsmasq-dns-689df5d84f-bjwqj" Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.125398 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/638f49e9-45bb-4106-a3bc-a53a23fbc313-dns-svc\") pod \"dnsmasq-dns-689df5d84f-bjwqj\" (UID: \"638f49e9-45bb-4106-a3bc-a53a23fbc313\") " pod="openstack/dnsmasq-dns-689df5d84f-bjwqj" Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.126245 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/638f49e9-45bb-4106-a3bc-a53a23fbc313-dns-svc\") pod \"dnsmasq-dns-689df5d84f-bjwqj\" (UID: \"638f49e9-45bb-4106-a3bc-a53a23fbc313\") " pod="openstack/dnsmasq-dns-689df5d84f-bjwqj" Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.126257 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/638f49e9-45bb-4106-a3bc-a53a23fbc313-config\") pod \"dnsmasq-dns-689df5d84f-bjwqj\" (UID: \"638f49e9-45bb-4106-a3bc-a53a23fbc313\") " pod="openstack/dnsmasq-dns-689df5d84f-bjwqj" Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.126351 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/638f49e9-45bb-4106-a3bc-a53a23fbc313-ovsdbserver-nb\") pod \"dnsmasq-dns-689df5d84f-bjwqj\" (UID: \"638f49e9-45bb-4106-a3bc-a53a23fbc313\") " pod="openstack/dnsmasq-dns-689df5d84f-bjwqj" Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.126555 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/638f49e9-45bb-4106-a3bc-a53a23fbc313-ovsdbserver-sb\") pod \"dnsmasq-dns-689df5d84f-bjwqj\" (UID: \"638f49e9-45bb-4106-a3bc-a53a23fbc313\") " pod="openstack/dnsmasq-dns-689df5d84f-bjwqj" Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.146649 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9gxk\" (UniqueName: \"kubernetes.io/projected/638f49e9-45bb-4106-a3bc-a53a23fbc313-kube-api-access-f9gxk\") pod 
\"dnsmasq-dns-689df5d84f-bjwqj\" (UID: \"638f49e9-45bb-4106-a3bc-a53a23fbc313\") " pod="openstack/dnsmasq-dns-689df5d84f-bjwqj" Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.186268 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"92ee6ab7-feb7-4dbd-881a-b8250652aef9","Type":"ContainerStarted","Data":"66481bc6acbb5d53d8e31bc7da07ae265932f327cf54cd5cb7411c629205684f"} Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.186411 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.191585 4861 generic.go:334] "Generic (PLEG): container finished" podID="4b32760c-3d0c-4a06-9df4-63503fae0955" containerID="49bb7f38083fac91edbd3b699ab43a27e5f9edb7f15b840c8e290849bb63772c" exitCode=0 Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.191639 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bdd75c-bx56r" event={"ID":"4b32760c-3d0c-4a06-9df4-63503fae0955","Type":"ContainerDied","Data":"49bb7f38083fac91edbd3b699ab43a27e5f9edb7f15b840c8e290849bb63772c"} Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.208275 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.189321699 podStartE2EDuration="9.208254722s" podCreationTimestamp="2026-02-19 13:29:08 +0000 UTC" firstStartedPulling="2026-02-19 13:29:09.834988387 +0000 UTC m=+1164.496091605" lastFinishedPulling="2026-02-19 13:29:15.85392139 +0000 UTC m=+1170.515024628" observedRunningTime="2026-02-19 13:29:17.203399551 +0000 UTC m=+1171.864502809" watchObservedRunningTime="2026-02-19 13:29:17.208254722 +0000 UTC m=+1171.869357970" Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.367065 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-689df5d84f-bjwqj" Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.404054 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57bdd75c-bx56r" Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.535551 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gps6p\" (UniqueName: \"kubernetes.io/projected/4b32760c-3d0c-4a06-9df4-63503fae0955-kube-api-access-gps6p\") pod \"4b32760c-3d0c-4a06-9df4-63503fae0955\" (UID: \"4b32760c-3d0c-4a06-9df4-63503fae0955\") " Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.535947 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b32760c-3d0c-4a06-9df4-63503fae0955-config\") pod \"4b32760c-3d0c-4a06-9df4-63503fae0955\" (UID: \"4b32760c-3d0c-4a06-9df4-63503fae0955\") " Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.535980 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b32760c-3d0c-4a06-9df4-63503fae0955-dns-svc\") pod \"4b32760c-3d0c-4a06-9df4-63503fae0955\" (UID: \"4b32760c-3d0c-4a06-9df4-63503fae0955\") " Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.536104 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b32760c-3d0c-4a06-9df4-63503fae0955-ovsdbserver-nb\") pod \"4b32760c-3d0c-4a06-9df4-63503fae0955\" (UID: \"4b32760c-3d0c-4a06-9df4-63503fae0955\") " Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.544693 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b32760c-3d0c-4a06-9df4-63503fae0955-kube-api-access-gps6p" (OuterVolumeSpecName: "kube-api-access-gps6p") pod "4b32760c-3d0c-4a06-9df4-63503fae0955" (UID: 
"4b32760c-3d0c-4a06-9df4-63503fae0955"). InnerVolumeSpecName "kube-api-access-gps6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.585169 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b32760c-3d0c-4a06-9df4-63503fae0955-config" (OuterVolumeSpecName: "config") pod "4b32760c-3d0c-4a06-9df4-63503fae0955" (UID: "4b32760c-3d0c-4a06-9df4-63503fae0955"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.587004 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b32760c-3d0c-4a06-9df4-63503fae0955-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4b32760c-3d0c-4a06-9df4-63503fae0955" (UID: "4b32760c-3d0c-4a06-9df4-63503fae0955"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.589266 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b32760c-3d0c-4a06-9df4-63503fae0955-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4b32760c-3d0c-4a06-9df4-63503fae0955" (UID: "4b32760c-3d0c-4a06-9df4-63503fae0955"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.639219 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gps6p\" (UniqueName: \"kubernetes.io/projected/4b32760c-3d0c-4a06-9df4-63503fae0955-kube-api-access-gps6p\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.639278 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b32760c-3d0c-4a06-9df4-63503fae0955-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.639296 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b32760c-3d0c-4a06-9df4-63503fae0955-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.639310 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b32760c-3d0c-4a06-9df4-63503fae0955-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.759674 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75b7bcc64f-rwzx6" Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.852250 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-689df5d84f-bjwqj"] Feb 19 13:29:17 crc kubenswrapper[4861]: I0219 13:29:17.970717 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.024464 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 19 13:29:18 crc kubenswrapper[4861]: E0219 13:29:18.024843 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b32760c-3d0c-4a06-9df4-63503fae0955" containerName="init" Feb 19 13:29:18 crc 
kubenswrapper[4861]: I0219 13:29:18.024860 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b32760c-3d0c-4a06-9df4-63503fae0955" containerName="init" Feb 19 13:29:18 crc kubenswrapper[4861]: E0219 13:29:18.024878 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b32760c-3d0c-4a06-9df4-63503fae0955" containerName="dnsmasq-dns" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.024885 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b32760c-3d0c-4a06-9df4-63503fae0955" containerName="dnsmasq-dns" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.025044 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b32760c-3d0c-4a06-9df4-63503fae0955" containerName="dnsmasq-dns" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.029610 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.032442 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.032639 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.036010 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-mzlmq" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.046559 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.060995 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.096234 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.150294 4861 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\") " pod="openstack/swift-storage-0" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.150343 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-etc-swift\") pod \"swift-storage-0\" (UID: \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\") " pod="openstack/swift-storage-0" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.150486 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\") " pod="openstack/swift-storage-0" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.150749 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-lock\") pod \"swift-storage-0\" (UID: \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\") " pod="openstack/swift-storage-0" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.150778 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2lnf\" (UniqueName: \"kubernetes.io/projected/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-kube-api-access-f2lnf\") pod \"swift-storage-0\" (UID: \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\") " pod="openstack/swift-storage-0" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.150848 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-cache\") pod \"swift-storage-0\" (UID: \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\") " pod="openstack/swift-storage-0" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.234198 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bdd75c-bx56r" event={"ID":"4b32760c-3d0c-4a06-9df4-63503fae0955","Type":"ContainerDied","Data":"db86f5d7c03ce7b67b230e38bc127d5d525c255d0f4f1d8a02bcbd85cc0b6f98"} Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.234253 4861 scope.go:117] "RemoveContainer" containerID="49bb7f38083fac91edbd3b699ab43a27e5f9edb7f15b840c8e290849bb63772c" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.234371 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57bdd75c-bx56r" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.251913 4861 generic.go:334] "Generic (PLEG): container finished" podID="638f49e9-45bb-4106-a3bc-a53a23fbc313" containerID="214860af9139ec2929a743eb13e1ab9faea93679a7e4f0231949c3c1db191f22" exitCode=0 Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.252477 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689df5d84f-bjwqj" event={"ID":"638f49e9-45bb-4106-a3bc-a53a23fbc313","Type":"ContainerDied","Data":"214860af9139ec2929a743eb13e1ab9faea93679a7e4f0231949c3c1db191f22"} Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.252527 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689df5d84f-bjwqj" event={"ID":"638f49e9-45bb-4106-a3bc-a53a23fbc313","Type":"ContainerStarted","Data":"d46d274b3dbdb10e32d9cf12e7ec50461a77d2332244892a0643fd6089bbeedd"} Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.254112 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2lnf\" (UniqueName: 
\"kubernetes.io/projected/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-kube-api-access-f2lnf\") pod \"swift-storage-0\" (UID: \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\") " pod="openstack/swift-storage-0" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.254665 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-cache\") pod \"swift-storage-0\" (UID: \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\") " pod="openstack/swift-storage-0" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.254753 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\") " pod="openstack/swift-storage-0" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.254793 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-etc-swift\") pod \"swift-storage-0\" (UID: \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\") " pod="openstack/swift-storage-0" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.254864 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\") " pod="openstack/swift-storage-0" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.254923 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-lock\") pod \"swift-storage-0\" (UID: \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\") " pod="openstack/swift-storage-0" Feb 19 13:29:18 crc kubenswrapper[4861]: E0219 
13:29:18.256788 4861 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 13:29:18 crc kubenswrapper[4861]: E0219 13:29:18.256841 4861 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 13:29:18 crc kubenswrapper[4861]: E0219 13:29:18.256902 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-etc-swift podName:f7c9197d-43d5-4c72-a7c3-c2e435368dd2 nodeName:}" failed. No retries permitted until 2026-02-19 13:29:18.756880591 +0000 UTC m=+1173.417983819 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-etc-swift") pod "swift-storage-0" (UID: "f7c9197d-43d5-4c72-a7c3-c2e435368dd2") : configmap "swift-ring-files" not found Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.257120 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/swift-storage-0" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.257941 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-lock\") pod \"swift-storage-0\" (UID: \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\") " pod="openstack/swift-storage-0" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.264180 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-cache\") pod \"swift-storage-0\" (UID: \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\") " 
pod="openstack/swift-storage-0" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.306476 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\") " pod="openstack/swift-storage-0" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.306797 4861 scope.go:117] "RemoveContainer" containerID="7074a5f3b3e156e1b3cde5a1b61c4381d4c27e82fa77395e6f6940c7bfb58607" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.319657 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57bdd75c-bx56r"] Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.334830 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57bdd75c-bx56r"] Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.382569 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2lnf\" (UniqueName: \"kubernetes.io/projected/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-kube-api-access-f2lnf\") pod \"swift-storage-0\" (UID: \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\") " pod="openstack/swift-storage-0" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.434461 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\") " pod="openstack/swift-storage-0" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.542881 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-gsbzx"] Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.544264 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gsbzx" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.548187 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.548739 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.551467 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.581605 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-gsbzx"] Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.606630 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-gsbzx"] Feb 19 13:29:18 crc kubenswrapper[4861]: E0219 13:29:18.608004 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-rpthz ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-rpthz ring-data-devices scripts swiftconf]: context canceled" pod="openstack/swift-ring-rebalance-gsbzx" podUID="446c4cee-8ea4-4043-ba11-1ae0d08b8073" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.612768 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-8c6n5"] Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.615486 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-8c6n5" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.638223 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-8c6n5"] Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.666503 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/446c4cee-8ea4-4043-ba11-1ae0d08b8073-etc-swift\") pod \"swift-ring-rebalance-gsbzx\" (UID: \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\") " pod="openstack/swift-ring-rebalance-gsbzx" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.666586 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpthz\" (UniqueName: \"kubernetes.io/projected/446c4cee-8ea4-4043-ba11-1ae0d08b8073-kube-api-access-rpthz\") pod \"swift-ring-rebalance-gsbzx\" (UID: \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\") " pod="openstack/swift-ring-rebalance-gsbzx" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.666619 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/446c4cee-8ea4-4043-ba11-1ae0d08b8073-dispersionconf\") pod \"swift-ring-rebalance-gsbzx\" (UID: \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\") " pod="openstack/swift-ring-rebalance-gsbzx" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.666656 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/446c4cee-8ea4-4043-ba11-1ae0d08b8073-swiftconf\") pod \"swift-ring-rebalance-gsbzx\" (UID: \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\") " pod="openstack/swift-ring-rebalance-gsbzx" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.666770 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/446c4cee-8ea4-4043-ba11-1ae0d08b8073-combined-ca-bundle\") pod \"swift-ring-rebalance-gsbzx\" (UID: \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\") " pod="openstack/swift-ring-rebalance-gsbzx" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.666985 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/446c4cee-8ea4-4043-ba11-1ae0d08b8073-scripts\") pod \"swift-ring-rebalance-gsbzx\" (UID: \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\") " pod="openstack/swift-ring-rebalance-gsbzx" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.667022 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c707a7f2-3143-4979-96e4-23177b810c9e-ring-data-devices\") pod \"swift-ring-rebalance-8c6n5\" (UID: \"c707a7f2-3143-4979-96e4-23177b810c9e\") " pod="openstack/swift-ring-rebalance-8c6n5" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.667044 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c707a7f2-3143-4979-96e4-23177b810c9e-combined-ca-bundle\") pod \"swift-ring-rebalance-8c6n5\" (UID: \"c707a7f2-3143-4979-96e4-23177b810c9e\") " pod="openstack/swift-ring-rebalance-8c6n5" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.667154 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c707a7f2-3143-4979-96e4-23177b810c9e-swiftconf\") pod \"swift-ring-rebalance-8c6n5\" (UID: \"c707a7f2-3143-4979-96e4-23177b810c9e\") " pod="openstack/swift-ring-rebalance-8c6n5" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.667183 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c707a7f2-3143-4979-96e4-23177b810c9e-etc-swift\") pod \"swift-ring-rebalance-8c6n5\" (UID: \"c707a7f2-3143-4979-96e4-23177b810c9e\") " pod="openstack/swift-ring-rebalance-8c6n5" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.667269 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/446c4cee-8ea4-4043-ba11-1ae0d08b8073-ring-data-devices\") pod \"swift-ring-rebalance-gsbzx\" (UID: \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\") " pod="openstack/swift-ring-rebalance-gsbzx" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.667365 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht2jx\" (UniqueName: \"kubernetes.io/projected/c707a7f2-3143-4979-96e4-23177b810c9e-kube-api-access-ht2jx\") pod \"swift-ring-rebalance-8c6n5\" (UID: \"c707a7f2-3143-4979-96e4-23177b810c9e\") " pod="openstack/swift-ring-rebalance-8c6n5" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.667390 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c707a7f2-3143-4979-96e4-23177b810c9e-scripts\") pod \"swift-ring-rebalance-8c6n5\" (UID: \"c707a7f2-3143-4979-96e4-23177b810c9e\") " pod="openstack/swift-ring-rebalance-8c6n5" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.667448 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c707a7f2-3143-4979-96e4-23177b810c9e-dispersionconf\") pod \"swift-ring-rebalance-8c6n5\" (UID: \"c707a7f2-3143-4979-96e4-23177b810c9e\") " pod="openstack/swift-ring-rebalance-8c6n5" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.769771 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/446c4cee-8ea4-4043-ba11-1ae0d08b8073-ring-data-devices\") pod \"swift-ring-rebalance-gsbzx\" (UID: \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\") " pod="openstack/swift-ring-rebalance-gsbzx" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.769853 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht2jx\" (UniqueName: \"kubernetes.io/projected/c707a7f2-3143-4979-96e4-23177b810c9e-kube-api-access-ht2jx\") pod \"swift-ring-rebalance-8c6n5\" (UID: \"c707a7f2-3143-4979-96e4-23177b810c9e\") " pod="openstack/swift-ring-rebalance-8c6n5" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.769884 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c707a7f2-3143-4979-96e4-23177b810c9e-scripts\") pod \"swift-ring-rebalance-8c6n5\" (UID: \"c707a7f2-3143-4979-96e4-23177b810c9e\") " pod="openstack/swift-ring-rebalance-8c6n5" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.769922 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c707a7f2-3143-4979-96e4-23177b810c9e-dispersionconf\") pod \"swift-ring-rebalance-8c6n5\" (UID: \"c707a7f2-3143-4979-96e4-23177b810c9e\") " pod="openstack/swift-ring-rebalance-8c6n5" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.769957 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/446c4cee-8ea4-4043-ba11-1ae0d08b8073-etc-swift\") pod \"swift-ring-rebalance-gsbzx\" (UID: \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\") " pod="openstack/swift-ring-rebalance-gsbzx" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.769984 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpthz\" (UniqueName: 
\"kubernetes.io/projected/446c4cee-8ea4-4043-ba11-1ae0d08b8073-kube-api-access-rpthz\") pod \"swift-ring-rebalance-gsbzx\" (UID: \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\") " pod="openstack/swift-ring-rebalance-gsbzx" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.770011 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/446c4cee-8ea4-4043-ba11-1ae0d08b8073-dispersionconf\") pod \"swift-ring-rebalance-gsbzx\" (UID: \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\") " pod="openstack/swift-ring-rebalance-gsbzx" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.770038 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/446c4cee-8ea4-4043-ba11-1ae0d08b8073-swiftconf\") pod \"swift-ring-rebalance-gsbzx\" (UID: \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\") " pod="openstack/swift-ring-rebalance-gsbzx" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.770058 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/446c4cee-8ea4-4043-ba11-1ae0d08b8073-combined-ca-bundle\") pod \"swift-ring-rebalance-gsbzx\" (UID: \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\") " pod="openstack/swift-ring-rebalance-gsbzx" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.770084 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-etc-swift\") pod \"swift-storage-0\" (UID: \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\") " pod="openstack/swift-storage-0" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.770124 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/446c4cee-8ea4-4043-ba11-1ae0d08b8073-scripts\") pod \"swift-ring-rebalance-gsbzx\" (UID: 
\"446c4cee-8ea4-4043-ba11-1ae0d08b8073\") " pod="openstack/swift-ring-rebalance-gsbzx" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.770145 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c707a7f2-3143-4979-96e4-23177b810c9e-ring-data-devices\") pod \"swift-ring-rebalance-8c6n5\" (UID: \"c707a7f2-3143-4979-96e4-23177b810c9e\") " pod="openstack/swift-ring-rebalance-8c6n5" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.770163 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c707a7f2-3143-4979-96e4-23177b810c9e-combined-ca-bundle\") pod \"swift-ring-rebalance-8c6n5\" (UID: \"c707a7f2-3143-4979-96e4-23177b810c9e\") " pod="openstack/swift-ring-rebalance-8c6n5" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.770219 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c707a7f2-3143-4979-96e4-23177b810c9e-swiftconf\") pod \"swift-ring-rebalance-8c6n5\" (UID: \"c707a7f2-3143-4979-96e4-23177b810c9e\") " pod="openstack/swift-ring-rebalance-8c6n5" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.770241 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c707a7f2-3143-4979-96e4-23177b810c9e-etc-swift\") pod \"swift-ring-rebalance-8c6n5\" (UID: \"c707a7f2-3143-4979-96e4-23177b810c9e\") " pod="openstack/swift-ring-rebalance-8c6n5" Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.771021 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c707a7f2-3143-4979-96e4-23177b810c9e-etc-swift\") pod \"swift-ring-rebalance-8c6n5\" (UID: \"c707a7f2-3143-4979-96e4-23177b810c9e\") " pod="openstack/swift-ring-rebalance-8c6n5" Feb 19 13:29:18 
crc kubenswrapper[4861]: I0219 13:29:18.771088 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/446c4cee-8ea4-4043-ba11-1ae0d08b8073-ring-data-devices\") pod \"swift-ring-rebalance-gsbzx\" (UID: \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\") " pod="openstack/swift-ring-rebalance-gsbzx"
Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.772109 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c707a7f2-3143-4979-96e4-23177b810c9e-ring-data-devices\") pod \"swift-ring-rebalance-8c6n5\" (UID: \"c707a7f2-3143-4979-96e4-23177b810c9e\") " pod="openstack/swift-ring-rebalance-8c6n5"
Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.772216 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/446c4cee-8ea4-4043-ba11-1ae0d08b8073-scripts\") pod \"swift-ring-rebalance-gsbzx\" (UID: \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\") " pod="openstack/swift-ring-rebalance-gsbzx"
Feb 19 13:29:18 crc kubenswrapper[4861]: E0219 13:29:18.772385 4861 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 19 13:29:18 crc kubenswrapper[4861]: E0219 13:29:18.772431 4861 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 19 13:29:18 crc kubenswrapper[4861]: E0219 13:29:18.772491 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-etc-swift podName:f7c9197d-43d5-4c72-a7c3-c2e435368dd2 nodeName:}" failed. No retries permitted until 2026-02-19 13:29:19.772470286 +0000 UTC m=+1174.433573714 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-etc-swift") pod "swift-storage-0" (UID: "f7c9197d-43d5-4c72-a7c3-c2e435368dd2") : configmap "swift-ring-files" not found
Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.772853 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c707a7f2-3143-4979-96e4-23177b810c9e-scripts\") pod \"swift-ring-rebalance-8c6n5\" (UID: \"c707a7f2-3143-4979-96e4-23177b810c9e\") " pod="openstack/swift-ring-rebalance-8c6n5"
Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.773496 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/446c4cee-8ea4-4043-ba11-1ae0d08b8073-etc-swift\") pod \"swift-ring-rebalance-gsbzx\" (UID: \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\") " pod="openstack/swift-ring-rebalance-gsbzx"
Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.776938 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c707a7f2-3143-4979-96e4-23177b810c9e-swiftconf\") pod \"swift-ring-rebalance-8c6n5\" (UID: \"c707a7f2-3143-4979-96e4-23177b810c9e\") " pod="openstack/swift-ring-rebalance-8c6n5"
Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.777274 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/446c4cee-8ea4-4043-ba11-1ae0d08b8073-swiftconf\") pod \"swift-ring-rebalance-gsbzx\" (UID: \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\") " pod="openstack/swift-ring-rebalance-gsbzx"
Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.777723 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c707a7f2-3143-4979-96e4-23177b810c9e-dispersionconf\") pod \"swift-ring-rebalance-8c6n5\" (UID: \"c707a7f2-3143-4979-96e4-23177b810c9e\") " pod="openstack/swift-ring-rebalance-8c6n5"
Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.779780 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/446c4cee-8ea4-4043-ba11-1ae0d08b8073-combined-ca-bundle\") pod \"swift-ring-rebalance-gsbzx\" (UID: \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\") " pod="openstack/swift-ring-rebalance-gsbzx"
Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.782894 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/446c4cee-8ea4-4043-ba11-1ae0d08b8073-dispersionconf\") pod \"swift-ring-rebalance-gsbzx\" (UID: \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\") " pod="openstack/swift-ring-rebalance-gsbzx"
Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.787226 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c707a7f2-3143-4979-96e4-23177b810c9e-combined-ca-bundle\") pod \"swift-ring-rebalance-8c6n5\" (UID: \"c707a7f2-3143-4979-96e4-23177b810c9e\") " pod="openstack/swift-ring-rebalance-8c6n5"
Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.796766 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht2jx\" (UniqueName: \"kubernetes.io/projected/c707a7f2-3143-4979-96e4-23177b810c9e-kube-api-access-ht2jx\") pod \"swift-ring-rebalance-8c6n5\" (UID: \"c707a7f2-3143-4979-96e4-23177b810c9e\") " pod="openstack/swift-ring-rebalance-8c6n5"
Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.801227 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpthz\" (UniqueName: \"kubernetes.io/projected/446c4cee-8ea4-4043-ba11-1ae0d08b8073-kube-api-access-rpthz\") pod \"swift-ring-rebalance-gsbzx\" (UID: \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\") " pod="openstack/swift-ring-rebalance-gsbzx"
Feb 19 13:29:18 crc kubenswrapper[4861]: I0219 13:29:18.936020 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-8c6n5"
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.267492 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689df5d84f-bjwqj" event={"ID":"638f49e9-45bb-4106-a3bc-a53a23fbc313","Type":"ContainerStarted","Data":"5cd57e65f88464752a92643ee3e7cc60742238c5855798538a438e99542da273"}
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.268020 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-689df5d84f-bjwqj"
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.272523 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-gsbzx"
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.291052 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-gsbzx"
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.293005 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-689df5d84f-bjwqj" podStartSLOduration=3.292909982 podStartE2EDuration="3.292909982s" podCreationTimestamp="2026-02-19 13:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:29:19.289954872 +0000 UTC m=+1173.951058100" watchObservedRunningTime="2026-02-19 13:29:19.292909982 +0000 UTC m=+1173.954013210"
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.384089 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/446c4cee-8ea4-4043-ba11-1ae0d08b8073-combined-ca-bundle\") pod \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\" (UID: \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\") "
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.384741 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/446c4cee-8ea4-4043-ba11-1ae0d08b8073-etc-swift\") pod \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\" (UID: \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\") "
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.384787 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/446c4cee-8ea4-4043-ba11-1ae0d08b8073-ring-data-devices\") pod \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\" (UID: \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\") "
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.384830 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/446c4cee-8ea4-4043-ba11-1ae0d08b8073-dispersionconf\") pod \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\" (UID: \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\") "
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.384988 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/446c4cee-8ea4-4043-ba11-1ae0d08b8073-scripts\") pod \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\" (UID: \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\") "
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.385116 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/446c4cee-8ea4-4043-ba11-1ae0d08b8073-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "446c4cee-8ea4-4043-ba11-1ae0d08b8073" (UID: "446c4cee-8ea4-4043-ba11-1ae0d08b8073"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.385265 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/446c4cee-8ea4-4043-ba11-1ae0d08b8073-swiftconf\") pod \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\" (UID: \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\") "
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.385350 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpthz\" (UniqueName: \"kubernetes.io/projected/446c4cee-8ea4-4043-ba11-1ae0d08b8073-kube-api-access-rpthz\") pod \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\" (UID: \"446c4cee-8ea4-4043-ba11-1ae0d08b8073\") "
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.385523 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/446c4cee-8ea4-4043-ba11-1ae0d08b8073-scripts" (OuterVolumeSpecName: "scripts") pod "446c4cee-8ea4-4043-ba11-1ae0d08b8073" (UID: "446c4cee-8ea4-4043-ba11-1ae0d08b8073"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.385792 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/446c4cee-8ea4-4043-ba11-1ae0d08b8073-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "446c4cee-8ea4-4043-ba11-1ae0d08b8073" (UID: "446c4cee-8ea4-4043-ba11-1ae0d08b8073"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.386933 4861 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/446c4cee-8ea4-4043-ba11-1ae0d08b8073-etc-swift\") on node \"crc\" DevicePath \"\""
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.386962 4861 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/446c4cee-8ea4-4043-ba11-1ae0d08b8073-ring-data-devices\") on node \"crc\" DevicePath \"\""
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.386979 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/446c4cee-8ea4-4043-ba11-1ae0d08b8073-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.391476 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/446c4cee-8ea4-4043-ba11-1ae0d08b8073-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "446c4cee-8ea4-4043-ba11-1ae0d08b8073" (UID: "446c4cee-8ea4-4043-ba11-1ae0d08b8073"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.391985 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/446c4cee-8ea4-4043-ba11-1ae0d08b8073-kube-api-access-rpthz" (OuterVolumeSpecName: "kube-api-access-rpthz") pod "446c4cee-8ea4-4043-ba11-1ae0d08b8073" (UID: "446c4cee-8ea4-4043-ba11-1ae0d08b8073"). InnerVolumeSpecName "kube-api-access-rpthz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.395682 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/446c4cee-8ea4-4043-ba11-1ae0d08b8073-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "446c4cee-8ea4-4043-ba11-1ae0d08b8073" (UID: "446c4cee-8ea4-4043-ba11-1ae0d08b8073"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.403208 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/446c4cee-8ea4-4043-ba11-1ae0d08b8073-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "446c4cee-8ea4-4043-ba11-1ae0d08b8073" (UID: "446c4cee-8ea4-4043-ba11-1ae0d08b8073"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.461623 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-8c6n5"]
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.488519 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpthz\" (UniqueName: \"kubernetes.io/projected/446c4cee-8ea4-4043-ba11-1ae0d08b8073-kube-api-access-rpthz\") on node \"crc\" DevicePath \"\""
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.488560 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/446c4cee-8ea4-4043-ba11-1ae0d08b8073-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.488573 4861 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/446c4cee-8ea4-4043-ba11-1ae0d08b8073-dispersionconf\") on node \"crc\" DevicePath \"\""
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.488584 4861 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/446c4cee-8ea4-4043-ba11-1ae0d08b8073-swiftconf\") on node \"crc\" DevicePath \"\""
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.719295 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-tl2p4"]
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.721676 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tl2p4"
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.734711 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-tl2p4"]
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.757362 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b129-account-create-update-4fk4r"]
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.763526 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b129-account-create-update-4fk4r"
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.765905 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.784825 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b129-account-create-update-4fk4r"]
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.794237 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a49adc4e-23bf-4d7c-9c65-6ad99ee77551-operator-scripts\") pod \"glance-db-create-tl2p4\" (UID: \"a49adc4e-23bf-4d7c-9c65-6ad99ee77551\") " pod="openstack/glance-db-create-tl2p4"
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.794309 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tv8m\" (UniqueName: \"kubernetes.io/projected/a49adc4e-23bf-4d7c-9c65-6ad99ee77551-kube-api-access-6tv8m\") pod \"glance-db-create-tl2p4\" (UID: \"a49adc4e-23bf-4d7c-9c65-6ad99ee77551\") " pod="openstack/glance-db-create-tl2p4"
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.794371 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-etc-swift\") pod \"swift-storage-0\" (UID: \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\") " pod="openstack/swift-storage-0"
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.794410 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjv7t\" (UniqueName: \"kubernetes.io/projected/c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb-kube-api-access-pjv7t\") pod \"glance-b129-account-create-update-4fk4r\" (UID: \"c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb\") " pod="openstack/glance-b129-account-create-update-4fk4r"
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.794485 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb-operator-scripts\") pod \"glance-b129-account-create-update-4fk4r\" (UID: \"c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb\") " pod="openstack/glance-b129-account-create-update-4fk4r"
Feb 19 13:29:19 crc kubenswrapper[4861]: E0219 13:29:19.794713 4861 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 19 13:29:19 crc kubenswrapper[4861]: E0219 13:29:19.794740 4861 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 19 13:29:19 crc kubenswrapper[4861]: E0219 13:29:19.794799 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-etc-swift podName:f7c9197d-43d5-4c72-a7c3-c2e435368dd2 nodeName:}" failed. No retries permitted until 2026-02-19 13:29:21.794776296 +0000 UTC m=+1176.455879524 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-etc-swift") pod "swift-storage-0" (UID: "f7c9197d-43d5-4c72-a7c3-c2e435368dd2") : configmap "swift-ring-files" not found
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.896370 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb-operator-scripts\") pod \"glance-b129-account-create-update-4fk4r\" (UID: \"c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb\") " pod="openstack/glance-b129-account-create-update-4fk4r"
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.896536 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a49adc4e-23bf-4d7c-9c65-6ad99ee77551-operator-scripts\") pod \"glance-db-create-tl2p4\" (UID: \"a49adc4e-23bf-4d7c-9c65-6ad99ee77551\") " pod="openstack/glance-db-create-tl2p4"
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.896580 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tv8m\" (UniqueName: \"kubernetes.io/projected/a49adc4e-23bf-4d7c-9c65-6ad99ee77551-kube-api-access-6tv8m\") pod \"glance-db-create-tl2p4\" (UID: \"a49adc4e-23bf-4d7c-9c65-6ad99ee77551\") " pod="openstack/glance-db-create-tl2p4"
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.896648 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjv7t\" (UniqueName: \"kubernetes.io/projected/c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb-kube-api-access-pjv7t\") pod \"glance-b129-account-create-update-4fk4r\" (UID: \"c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb\") " pod="openstack/glance-b129-account-create-update-4fk4r"
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.897168 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb-operator-scripts\") pod \"glance-b129-account-create-update-4fk4r\" (UID: \"c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb\") " pod="openstack/glance-b129-account-create-update-4fk4r"
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.898031 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a49adc4e-23bf-4d7c-9c65-6ad99ee77551-operator-scripts\") pod \"glance-db-create-tl2p4\" (UID: \"a49adc4e-23bf-4d7c-9c65-6ad99ee77551\") " pod="openstack/glance-db-create-tl2p4"
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.915525 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjv7t\" (UniqueName: \"kubernetes.io/projected/c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb-kube-api-access-pjv7t\") pod \"glance-b129-account-create-update-4fk4r\" (UID: \"c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb\") " pod="openstack/glance-b129-account-create-update-4fk4r"
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.915773 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tv8m\" (UniqueName: \"kubernetes.io/projected/a49adc4e-23bf-4d7c-9c65-6ad99ee77551-kube-api-access-6tv8m\") pod \"glance-db-create-tl2p4\" (UID: \"a49adc4e-23bf-4d7c-9c65-6ad99ee77551\") " pod="openstack/glance-db-create-tl2p4"
Feb 19 13:29:19 crc kubenswrapper[4861]: I0219 13:29:19.988560 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b32760c-3d0c-4a06-9df4-63503fae0955" path="/var/lib/kubelet/pods/4b32760c-3d0c-4a06-9df4-63503fae0955/volumes"
Feb 19 13:29:20 crc kubenswrapper[4861]: I0219 13:29:20.055945 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tl2p4"
Feb 19 13:29:20 crc kubenswrapper[4861]: I0219 13:29:20.094972 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b129-account-create-update-4fk4r"
Feb 19 13:29:20 crc kubenswrapper[4861]: I0219 13:29:20.309790 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-gsbzx"
Feb 19 13:29:20 crc kubenswrapper[4861]: I0219 13:29:20.310401 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8c6n5" event={"ID":"c707a7f2-3143-4979-96e4-23177b810c9e","Type":"ContainerStarted","Data":"fee9e6aa6eecc92d2c371c9644cd5ef5b7081d75d925fb26f19e5f51dee22c09"}
Feb 19 13:29:20 crc kubenswrapper[4861]: I0219 13:29:20.372513 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-gsbzx"]
Feb 19 13:29:20 crc kubenswrapper[4861]: I0219 13:29:20.385670 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-gsbzx"]
Feb 19 13:29:20 crc kubenswrapper[4861]: I0219 13:29:20.589844 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-tl2p4"]
Feb 19 13:29:20 crc kubenswrapper[4861]: W0219 13:29:20.592679 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda49adc4e_23bf_4d7c_9c65_6ad99ee77551.slice/crio-18ca569fc5c18dc122cd72f780ee2e573d1c3a1675c80a7f317f359c1474d370 WatchSource:0}: Error finding container 18ca569fc5c18dc122cd72f780ee2e573d1c3a1675c80a7f317f359c1474d370: Status 404 returned error can't find the container with id 18ca569fc5c18dc122cd72f780ee2e573d1c3a1675c80a7f317f359c1474d370
Feb 19 13:29:20 crc kubenswrapper[4861]: I0219 13:29:20.653170 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b129-account-create-update-4fk4r"]
Feb 19 13:29:20 crc kubenswrapper[4861]: W0219 13:29:20.676195 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5bc6a04_8a3f_47e4_b3d6_faa4d60914fb.slice/crio-9bb3fe420b2acabee332fa71441fafd1a3d5ac8fe4cb793acd69b1a79faa3551 WatchSource:0}: Error finding container 9bb3fe420b2acabee332fa71441fafd1a3d5ac8fe4cb793acd69b1a79faa3551: Status 404 returned error can't find the container with id 9bb3fe420b2acabee332fa71441fafd1a3d5ac8fe4cb793acd69b1a79faa3551
Feb 19 13:29:21 crc kubenswrapper[4861]: I0219 13:29:21.326595 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-8xwhv"]
Feb 19 13:29:21 crc kubenswrapper[4861]: I0219 13:29:21.328711 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8xwhv"
Feb 19 13:29:21 crc kubenswrapper[4861]: I0219 13:29:21.332070 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Feb 19 13:29:21 crc kubenswrapper[4861]: I0219 13:29:21.333157 4861 generic.go:334] "Generic (PLEG): container finished" podID="c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb" containerID="d9af03464962f01cce9fb93fd5fe4eb9ac5de0912a4678ebcade5e9fee209e26" exitCode=0
Feb 19 13:29:21 crc kubenswrapper[4861]: I0219 13:29:21.333237 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b129-account-create-update-4fk4r" event={"ID":"c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb","Type":"ContainerDied","Data":"d9af03464962f01cce9fb93fd5fe4eb9ac5de0912a4678ebcade5e9fee209e26"}
Feb 19 13:29:21 crc kubenswrapper[4861]: I0219 13:29:21.333268 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b129-account-create-update-4fk4r" event={"ID":"c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb","Type":"ContainerStarted","Data":"9bb3fe420b2acabee332fa71441fafd1a3d5ac8fe4cb793acd69b1a79faa3551"}
Feb 19 13:29:21 crc kubenswrapper[4861]: I0219 13:29:21.335054 4861 generic.go:334] "Generic (PLEG): container finished" podID="a49adc4e-23bf-4d7c-9c65-6ad99ee77551" containerID="23cda0cad8af5ef4ce132ba46d2f07a162307eafe5c9cf3205b337ec990907ac" exitCode=0
Feb 19 13:29:21 crc kubenswrapper[4861]: I0219 13:29:21.335089 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tl2p4" event={"ID":"a49adc4e-23bf-4d7c-9c65-6ad99ee77551","Type":"ContainerDied","Data":"23cda0cad8af5ef4ce132ba46d2f07a162307eafe5c9cf3205b337ec990907ac"}
Feb 19 13:29:21 crc kubenswrapper[4861]: I0219 13:29:21.335109 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tl2p4" event={"ID":"a49adc4e-23bf-4d7c-9c65-6ad99ee77551","Type":"ContainerStarted","Data":"18ca569fc5c18dc122cd72f780ee2e573d1c3a1675c80a7f317f359c1474d370"}
Feb 19 13:29:21 crc kubenswrapper[4861]: I0219 13:29:21.348394 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8xwhv"]
Feb 19 13:29:21 crc kubenswrapper[4861]: I0219 13:29:21.432267 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa683ceb-d0ee-49ae-a26a-293a24caa4a7-operator-scripts\") pod \"root-account-create-update-8xwhv\" (UID: \"aa683ceb-d0ee-49ae-a26a-293a24caa4a7\") " pod="openstack/root-account-create-update-8xwhv"
Feb 19 13:29:21 crc kubenswrapper[4861]: I0219 13:29:21.432376 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdjf4\" (UniqueName: \"kubernetes.io/projected/aa683ceb-d0ee-49ae-a26a-293a24caa4a7-kube-api-access-fdjf4\") pod \"root-account-create-update-8xwhv\" (UID: \"aa683ceb-d0ee-49ae-a26a-293a24caa4a7\") " pod="openstack/root-account-create-update-8xwhv"
Feb 19 13:29:21 crc kubenswrapper[4861]: I0219 13:29:21.534322 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa683ceb-d0ee-49ae-a26a-293a24caa4a7-operator-scripts\") pod \"root-account-create-update-8xwhv\" (UID: \"aa683ceb-d0ee-49ae-a26a-293a24caa4a7\") " pod="openstack/root-account-create-update-8xwhv"
Feb 19 13:29:21 crc kubenswrapper[4861]: I0219 13:29:21.534414 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdjf4\" (UniqueName: \"kubernetes.io/projected/aa683ceb-d0ee-49ae-a26a-293a24caa4a7-kube-api-access-fdjf4\") pod \"root-account-create-update-8xwhv\" (UID: \"aa683ceb-d0ee-49ae-a26a-293a24caa4a7\") " pod="openstack/root-account-create-update-8xwhv"
Feb 19 13:29:21 crc kubenswrapper[4861]: I0219 13:29:21.535534 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa683ceb-d0ee-49ae-a26a-293a24caa4a7-operator-scripts\") pod \"root-account-create-update-8xwhv\" (UID: \"aa683ceb-d0ee-49ae-a26a-293a24caa4a7\") " pod="openstack/root-account-create-update-8xwhv"
Feb 19 13:29:21 crc kubenswrapper[4861]: I0219 13:29:21.565546 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdjf4\" (UniqueName: \"kubernetes.io/projected/aa683ceb-d0ee-49ae-a26a-293a24caa4a7-kube-api-access-fdjf4\") pod \"root-account-create-update-8xwhv\" (UID: \"aa683ceb-d0ee-49ae-a26a-293a24caa4a7\") " pod="openstack/root-account-create-update-8xwhv"
Feb 19 13:29:21 crc kubenswrapper[4861]: I0219 13:29:21.649553 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8xwhv"
Feb 19 13:29:21 crc kubenswrapper[4861]: I0219 13:29:21.839810 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-etc-swift\") pod \"swift-storage-0\" (UID: \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\") " pod="openstack/swift-storage-0"
Feb 19 13:29:21 crc kubenswrapper[4861]: E0219 13:29:21.840435 4861 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 19 13:29:21 crc kubenswrapper[4861]: E0219 13:29:21.840458 4861 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 19 13:29:21 crc kubenswrapper[4861]: E0219 13:29:21.840502 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-etc-swift podName:f7c9197d-43d5-4c72-a7c3-c2e435368dd2 nodeName:}" failed. No retries permitted until 2026-02-19 13:29:25.840487138 +0000 UTC m=+1180.501590366 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-etc-swift") pod "swift-storage-0" (UID: "f7c9197d-43d5-4c72-a7c3-c2e435368dd2") : configmap "swift-ring-files" not found
Feb 19 13:29:21 crc kubenswrapper[4861]: I0219 13:29:21.992555 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="446c4cee-8ea4-4043-ba11-1ae0d08b8073" path="/var/lib/kubelet/pods/446c4cee-8ea4-4043-ba11-1ae0d08b8073/volumes"
Feb 19 13:29:23 crc kubenswrapper[4861]: I0219 13:29:23.229989 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b129-account-create-update-4fk4r"
Feb 19 13:29:23 crc kubenswrapper[4861]: I0219 13:29:23.238397 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tl2p4"
Feb 19 13:29:23 crc kubenswrapper[4861]: I0219 13:29:23.266407 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjv7t\" (UniqueName: \"kubernetes.io/projected/c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb-kube-api-access-pjv7t\") pod \"c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb\" (UID: \"c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb\") "
Feb 19 13:29:23 crc kubenswrapper[4861]: I0219 13:29:23.266906 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb-operator-scripts\") pod \"c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb\" (UID: \"c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb\") "
Feb 19 13:29:23 crc kubenswrapper[4861]: I0219 13:29:23.272005 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb" (UID: "c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 13:29:23 crc kubenswrapper[4861]: I0219 13:29:23.279687 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb-kube-api-access-pjv7t" (OuterVolumeSpecName: "kube-api-access-pjv7t") pod "c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb" (UID: "c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb"). InnerVolumeSpecName "kube-api-access-pjv7t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 13:29:23 crc kubenswrapper[4861]: I0219 13:29:23.354141 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b129-account-create-update-4fk4r" event={"ID":"c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb","Type":"ContainerDied","Data":"9bb3fe420b2acabee332fa71441fafd1a3d5ac8fe4cb793acd69b1a79faa3551"}
Feb 19 13:29:23 crc kubenswrapper[4861]: I0219 13:29:23.354201 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bb3fe420b2acabee332fa71441fafd1a3d5ac8fe4cb793acd69b1a79faa3551"
Feb 19 13:29:23 crc kubenswrapper[4861]: I0219 13:29:23.354205 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b129-account-create-update-4fk4r"
Feb 19 13:29:23 crc kubenswrapper[4861]: I0219 13:29:23.356163 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8c6n5" event={"ID":"c707a7f2-3143-4979-96e4-23177b810c9e","Type":"ContainerStarted","Data":"5f4ea76f6f5a358df34a0fc0c42414fb4768cb3d35df370d01f4bb513bb89a5d"}
Feb 19 13:29:23 crc kubenswrapper[4861]: I0219 13:29:23.358641 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tl2p4" event={"ID":"a49adc4e-23bf-4d7c-9c65-6ad99ee77551","Type":"ContainerDied","Data":"18ca569fc5c18dc122cd72f780ee2e573d1c3a1675c80a7f317f359c1474d370"}
Feb 19 13:29:23 crc kubenswrapper[4861]: I0219 13:29:23.358804 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18ca569fc5c18dc122cd72f780ee2e573d1c3a1675c80a7f317f359c1474d370"
Feb 19 13:29:23 crc kubenswrapper[4861]: I0219 13:29:23.358973 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tl2p4"
Feb 19 13:29:23 crc kubenswrapper[4861]: I0219 13:29:23.372984 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a49adc4e-23bf-4d7c-9c65-6ad99ee77551-operator-scripts\") pod \"a49adc4e-23bf-4d7c-9c65-6ad99ee77551\" (UID: \"a49adc4e-23bf-4d7c-9c65-6ad99ee77551\") "
Feb 19 13:29:23 crc kubenswrapper[4861]: I0219 13:29:23.373140 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tv8m\" (UniqueName: \"kubernetes.io/projected/a49adc4e-23bf-4d7c-9c65-6ad99ee77551-kube-api-access-6tv8m\") pod \"a49adc4e-23bf-4d7c-9c65-6ad99ee77551\" (UID: \"a49adc4e-23bf-4d7c-9c65-6ad99ee77551\") "
Feb 19 13:29:23 crc kubenswrapper[4861]: I0219 13:29:23.373967 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjv7t\" (UniqueName: \"kubernetes.io/projected/c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb-kube-api-access-pjv7t\") on node \"crc\" DevicePath \"\""
Feb 19 13:29:23 crc kubenswrapper[4861]: I0219 13:29:23.373988 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 13:29:23 crc kubenswrapper[4861]: I0219 13:29:23.376654 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a49adc4e-23bf-4d7c-9c65-6ad99ee77551-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a49adc4e-23bf-4d7c-9c65-6ad99ee77551" (UID: "a49adc4e-23bf-4d7c-9c65-6ad99ee77551"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 13:29:23 crc kubenswrapper[4861]: I0219 13:29:23.379868 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a49adc4e-23bf-4d7c-9c65-6ad99ee77551-kube-api-access-6tv8m" (OuterVolumeSpecName: "kube-api-access-6tv8m") pod "a49adc4e-23bf-4d7c-9c65-6ad99ee77551" (UID: "a49adc4e-23bf-4d7c-9c65-6ad99ee77551"). InnerVolumeSpecName "kube-api-access-6tv8m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 13:29:23 crc kubenswrapper[4861]: I0219 13:29:23.380058 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-8c6n5" podStartSLOduration=1.770673027 podStartE2EDuration="5.380026518s" podCreationTimestamp="2026-02-19 13:29:18 +0000 UTC" firstStartedPulling="2026-02-19 13:29:19.478285943 +0000 UTC m=+1174.139389171" lastFinishedPulling="2026-02-19 13:29:23.087639434 +0000 UTC m=+1177.748742662" observedRunningTime="2026-02-19 13:29:23.378973989 +0000 UTC m=+1178.040077227" watchObservedRunningTime="2026-02-19 13:29:23.380026518 +0000 UTC m=+1178.041129746"
Feb 19 13:29:23 crc kubenswrapper[4861]: I0219 13:29:23.476107 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a49adc4e-23bf-4d7c-9c65-6ad99ee77551-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 13:29:23 crc kubenswrapper[4861]: I0219 13:29:23.476132 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tv8m\" (UniqueName: \"kubernetes.io/projected/a49adc4e-23bf-4d7c-9c65-6ad99ee77551-kube-api-access-6tv8m\") on node \"crc\" DevicePath \"\""
Feb 19 13:29:23 crc kubenswrapper[4861]: I0219 13:29:23.568658 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8xwhv"]
Feb 19 13:29:23 crc kubenswrapper[4861]: W0219 13:29:23.573497 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa683ceb_d0ee_49ae_a26a_293a24caa4a7.slice/crio-3c9d0996436b075e42f316198c67e1d2d0b824e8627ab0ed2318516657b50557 WatchSource:0}: Error finding container 3c9d0996436b075e42f316198c67e1d2d0b824e8627ab0ed2318516657b50557: Status 404 returned error can't find the container with id 3c9d0996436b075e42f316198c67e1d2d0b824e8627ab0ed2318516657b50557
Feb 19 13:29:24 crc kubenswrapper[4861]: I0219 13:29:24.380034 4861 generic.go:334] "Generic (PLEG): container finished" podID="aa683ceb-d0ee-49ae-a26a-293a24caa4a7" containerID="88d7111fe21c3687b4fb01aa926a610afbd0c54ea7680dec3a92d01c614abed4" exitCode=0
Feb 19 13:29:24 crc kubenswrapper[4861]: I0219 13:29:24.380162 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8xwhv" event={"ID":"aa683ceb-d0ee-49ae-a26a-293a24caa4a7","Type":"ContainerDied","Data":"88d7111fe21c3687b4fb01aa926a610afbd0c54ea7680dec3a92d01c614abed4"}
Feb 19 13:29:24 crc kubenswrapper[4861]: I0219 13:29:24.380699 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8xwhv" event={"ID":"aa683ceb-d0ee-49ae-a26a-293a24caa4a7","Type":"ContainerStarted","Data":"3c9d0996436b075e42f316198c67e1d2d0b824e8627ab0ed2318516657b50557"}
Feb 19 13:29:24 crc kubenswrapper[4861]: I0219 13:29:24.914366 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-8q5tv"]
Feb 19 13:29:24 crc kubenswrapper[4861]: E0219 13:29:24.915225 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb" containerName="mariadb-account-create-update"
Feb 19 13:29:24 crc kubenswrapper[4861]: I0219 13:29:24.915258 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb" containerName="mariadb-account-create-update"
Feb 19 13:29:24 crc kubenswrapper[4861]: E0219 13:29:24.915284 4861 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="a49adc4e-23bf-4d7c-9c65-6ad99ee77551" containerName="mariadb-database-create" Feb 19 13:29:24 crc kubenswrapper[4861]: I0219 13:29:24.915294 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a49adc4e-23bf-4d7c-9c65-6ad99ee77551" containerName="mariadb-database-create" Feb 19 13:29:24 crc kubenswrapper[4861]: I0219 13:29:24.915584 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb" containerName="mariadb-account-create-update" Feb 19 13:29:24 crc kubenswrapper[4861]: I0219 13:29:24.915619 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a49adc4e-23bf-4d7c-9c65-6ad99ee77551" containerName="mariadb-database-create" Feb 19 13:29:24 crc kubenswrapper[4861]: I0219 13:29:24.916549 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-8q5tv" Feb 19 13:29:24 crc kubenswrapper[4861]: I0219 13:29:24.919499 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 19 13:29:24 crc kubenswrapper[4861]: I0219 13:29:24.921616 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fss9s" Feb 19 13:29:24 crc kubenswrapper[4861]: I0219 13:29:24.932226 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-8q5tv"] Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.009943 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpkbv\" (UniqueName: \"kubernetes.io/projected/fdd07d83-8801-49de-a338-879cea293629-kube-api-access-jpkbv\") pod \"glance-db-sync-8q5tv\" (UID: \"fdd07d83-8801-49de-a338-879cea293629\") " pod="openstack/glance-db-sync-8q5tv" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.010073 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fdd07d83-8801-49de-a338-879cea293629-config-data\") pod \"glance-db-sync-8q5tv\" (UID: \"fdd07d83-8801-49de-a338-879cea293629\") " pod="openstack/glance-db-sync-8q5tv" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.010288 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fdd07d83-8801-49de-a338-879cea293629-db-sync-config-data\") pod \"glance-db-sync-8q5tv\" (UID: \"fdd07d83-8801-49de-a338-879cea293629\") " pod="openstack/glance-db-sync-8q5tv" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.010614 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdd07d83-8801-49de-a338-879cea293629-combined-ca-bundle\") pod \"glance-db-sync-8q5tv\" (UID: \"fdd07d83-8801-49de-a338-879cea293629\") " pod="openstack/glance-db-sync-8q5tv" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.112307 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpkbv\" (UniqueName: \"kubernetes.io/projected/fdd07d83-8801-49de-a338-879cea293629-kube-api-access-jpkbv\") pod \"glance-db-sync-8q5tv\" (UID: \"fdd07d83-8801-49de-a338-879cea293629\") " pod="openstack/glance-db-sync-8q5tv" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.112382 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdd07d83-8801-49de-a338-879cea293629-config-data\") pod \"glance-db-sync-8q5tv\" (UID: \"fdd07d83-8801-49de-a338-879cea293629\") " pod="openstack/glance-db-sync-8q5tv" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.112505 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/fdd07d83-8801-49de-a338-879cea293629-db-sync-config-data\") pod \"glance-db-sync-8q5tv\" (UID: \"fdd07d83-8801-49de-a338-879cea293629\") " pod="openstack/glance-db-sync-8q5tv" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.112558 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdd07d83-8801-49de-a338-879cea293629-combined-ca-bundle\") pod \"glance-db-sync-8q5tv\" (UID: \"fdd07d83-8801-49de-a338-879cea293629\") " pod="openstack/glance-db-sync-8q5tv" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.121997 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdd07d83-8801-49de-a338-879cea293629-config-data\") pod \"glance-db-sync-8q5tv\" (UID: \"fdd07d83-8801-49de-a338-879cea293629\") " pod="openstack/glance-db-sync-8q5tv" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.122778 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdd07d83-8801-49de-a338-879cea293629-combined-ca-bundle\") pod \"glance-db-sync-8q5tv\" (UID: \"fdd07d83-8801-49de-a338-879cea293629\") " pod="openstack/glance-db-sync-8q5tv" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.132753 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fdd07d83-8801-49de-a338-879cea293629-db-sync-config-data\") pod \"glance-db-sync-8q5tv\" (UID: \"fdd07d83-8801-49de-a338-879cea293629\") " pod="openstack/glance-db-sync-8q5tv" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.139661 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpkbv\" (UniqueName: \"kubernetes.io/projected/fdd07d83-8801-49de-a338-879cea293629-kube-api-access-jpkbv\") pod \"glance-db-sync-8q5tv\" (UID: 
\"fdd07d83-8801-49de-a338-879cea293629\") " pod="openstack/glance-db-sync-8q5tv" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.235002 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-8q5tv" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.426403 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-zhbfq"] Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.473092 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-zhbfq"] Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.473244 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zhbfq" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.521365 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57ba0af6-56e4-4dbd-850d-d019908adf08-operator-scripts\") pod \"keystone-db-create-zhbfq\" (UID: \"57ba0af6-56e4-4dbd-850d-d019908adf08\") " pod="openstack/keystone-db-create-zhbfq" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.521539 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddnnc\" (UniqueName: \"kubernetes.io/projected/57ba0af6-56e4-4dbd-850d-d019908adf08-kube-api-access-ddnnc\") pod \"keystone-db-create-zhbfq\" (UID: \"57ba0af6-56e4-4dbd-850d-d019908adf08\") " pod="openstack/keystone-db-create-zhbfq" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.537909 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-39dc-account-create-update-vhqdr"] Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.539588 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-39dc-account-create-update-vhqdr" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.543944 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.557774 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-39dc-account-create-update-vhqdr"] Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.623031 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57ba0af6-56e4-4dbd-850d-d019908adf08-operator-scripts\") pod \"keystone-db-create-zhbfq\" (UID: \"57ba0af6-56e4-4dbd-850d-d019908adf08\") " pod="openstack/keystone-db-create-zhbfq" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.623321 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5cf0370-783d-41f1-9b31-259d7725b892-operator-scripts\") pod \"keystone-39dc-account-create-update-vhqdr\" (UID: \"f5cf0370-783d-41f1-9b31-259d7725b892\") " pod="openstack/keystone-39dc-account-create-update-vhqdr" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.623488 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddnnc\" (UniqueName: \"kubernetes.io/projected/57ba0af6-56e4-4dbd-850d-d019908adf08-kube-api-access-ddnnc\") pod \"keystone-db-create-zhbfq\" (UID: \"57ba0af6-56e4-4dbd-850d-d019908adf08\") " pod="openstack/keystone-db-create-zhbfq" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.623911 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57ba0af6-56e4-4dbd-850d-d019908adf08-operator-scripts\") pod \"keystone-db-create-zhbfq\" (UID: \"57ba0af6-56e4-4dbd-850d-d019908adf08\") " 
pod="openstack/keystone-db-create-zhbfq" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.623928 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh25j\" (UniqueName: \"kubernetes.io/projected/f5cf0370-783d-41f1-9b31-259d7725b892-kube-api-access-hh25j\") pod \"keystone-39dc-account-create-update-vhqdr\" (UID: \"f5cf0370-783d-41f1-9b31-259d7725b892\") " pod="openstack/keystone-39dc-account-create-update-vhqdr" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.649661 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddnnc\" (UniqueName: \"kubernetes.io/projected/57ba0af6-56e4-4dbd-850d-d019908adf08-kube-api-access-ddnnc\") pod \"keystone-db-create-zhbfq\" (UID: \"57ba0af6-56e4-4dbd-850d-d019908adf08\") " pod="openstack/keystone-db-create-zhbfq" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.726231 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh25j\" (UniqueName: \"kubernetes.io/projected/f5cf0370-783d-41f1-9b31-259d7725b892-kube-api-access-hh25j\") pod \"keystone-39dc-account-create-update-vhqdr\" (UID: \"f5cf0370-783d-41f1-9b31-259d7725b892\") " pod="openstack/keystone-39dc-account-create-update-vhqdr" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.726927 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5cf0370-783d-41f1-9b31-259d7725b892-operator-scripts\") pod \"keystone-39dc-account-create-update-vhqdr\" (UID: \"f5cf0370-783d-41f1-9b31-259d7725b892\") " pod="openstack/keystone-39dc-account-create-update-vhqdr" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.729153 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5cf0370-783d-41f1-9b31-259d7725b892-operator-scripts\") pod 
\"keystone-39dc-account-create-update-vhqdr\" (UID: \"f5cf0370-783d-41f1-9b31-259d7725b892\") " pod="openstack/keystone-39dc-account-create-update-vhqdr" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.744883 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-k5lwj"] Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.746029 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-k5lwj" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.759770 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh25j\" (UniqueName: \"kubernetes.io/projected/f5cf0370-783d-41f1-9b31-259d7725b892-kube-api-access-hh25j\") pod \"keystone-39dc-account-create-update-vhqdr\" (UID: \"f5cf0370-783d-41f1-9b31-259d7725b892\") " pod="openstack/keystone-39dc-account-create-update-vhqdr" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.768756 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-k5lwj"] Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.824195 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8xwhv" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.829058 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-zhbfq" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.833450 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdjf4\" (UniqueName: \"kubernetes.io/projected/aa683ceb-d0ee-49ae-a26a-293a24caa4a7-kube-api-access-fdjf4\") pod \"aa683ceb-d0ee-49ae-a26a-293a24caa4a7\" (UID: \"aa683ceb-d0ee-49ae-a26a-293a24caa4a7\") " Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.833806 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa683ceb-d0ee-49ae-a26a-293a24caa4a7-operator-scripts\") pod \"aa683ceb-d0ee-49ae-a26a-293a24caa4a7\" (UID: \"aa683ceb-d0ee-49ae-a26a-293a24caa4a7\") " Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.834204 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aee09866-fa7a-4558-af5c-992b2ae7268c-operator-scripts\") pod \"placement-db-create-k5lwj\" (UID: \"aee09866-fa7a-4558-af5c-992b2ae7268c\") " pod="openstack/placement-db-create-k5lwj" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.834284 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzwx8\" (UniqueName: \"kubernetes.io/projected/aee09866-fa7a-4558-af5c-992b2ae7268c-kube-api-access-xzwx8\") pod \"placement-db-create-k5lwj\" (UID: \"aee09866-fa7a-4558-af5c-992b2ae7268c\") " pod="openstack/placement-db-create-k5lwj" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.836189 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa683ceb-d0ee-49ae-a26a-293a24caa4a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aa683ceb-d0ee-49ae-a26a-293a24caa4a7" (UID: "aa683ceb-d0ee-49ae-a26a-293a24caa4a7"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.842705 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa683ceb-d0ee-49ae-a26a-293a24caa4a7-kube-api-access-fdjf4" (OuterVolumeSpecName: "kube-api-access-fdjf4") pod "aa683ceb-d0ee-49ae-a26a-293a24caa4a7" (UID: "aa683ceb-d0ee-49ae-a26a-293a24caa4a7"). InnerVolumeSpecName "kube-api-access-fdjf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.862045 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-39dc-account-create-update-vhqdr" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.863819 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-b86e-account-create-update-24h55"] Feb 19 13:29:25 crc kubenswrapper[4861]: E0219 13:29:25.864305 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa683ceb-d0ee-49ae-a26a-293a24caa4a7" containerName="mariadb-account-create-update" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.864324 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa683ceb-d0ee-49ae-a26a-293a24caa4a7" containerName="mariadb-account-create-update" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.864560 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa683ceb-d0ee-49ae-a26a-293a24caa4a7" containerName="mariadb-account-create-update" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.865180 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-b86e-account-create-update-24h55" Feb 19 13:29:25 crc kubenswrapper[4861]: W0219 13:29:25.867079 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdd07d83_8801_49de_a338_879cea293629.slice/crio-868dd7a6f7a4fec95d3d771d7155bfa9d7cd19cff859e80489fc1fc65a5c7e6b WatchSource:0}: Error finding container 868dd7a6f7a4fec95d3d771d7155bfa9d7cd19cff859e80489fc1fc65a5c7e6b: Status 404 returned error can't find the container with id 868dd7a6f7a4fec95d3d771d7155bfa9d7cd19cff859e80489fc1fc65a5c7e6b Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.870716 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.881806 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-8q5tv"] Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.896180 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b86e-account-create-update-24h55"] Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.938569 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d9124e9-fceb-43bc-8199-73428ac7e733-operator-scripts\") pod \"placement-b86e-account-create-update-24h55\" (UID: \"7d9124e9-fceb-43bc-8199-73428ac7e733\") " pod="openstack/placement-b86e-account-create-update-24h55" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.938642 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-etc-swift\") pod \"swift-storage-0\" (UID: \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\") " pod="openstack/swift-storage-0" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.938664 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aee09866-fa7a-4558-af5c-992b2ae7268c-operator-scripts\") pod \"placement-db-create-k5lwj\" (UID: \"aee09866-fa7a-4558-af5c-992b2ae7268c\") " pod="openstack/placement-db-create-k5lwj" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.938706 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzwx8\" (UniqueName: \"kubernetes.io/projected/aee09866-fa7a-4558-af5c-992b2ae7268c-kube-api-access-xzwx8\") pod \"placement-db-create-k5lwj\" (UID: \"aee09866-fa7a-4558-af5c-992b2ae7268c\") " pod="openstack/placement-db-create-k5lwj" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.938746 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfk2q\" (UniqueName: \"kubernetes.io/projected/7d9124e9-fceb-43bc-8199-73428ac7e733-kube-api-access-sfk2q\") pod \"placement-b86e-account-create-update-24h55\" (UID: \"7d9124e9-fceb-43bc-8199-73428ac7e733\") " pod="openstack/placement-b86e-account-create-update-24h55" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.938824 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdjf4\" (UniqueName: \"kubernetes.io/projected/aa683ceb-d0ee-49ae-a26a-293a24caa4a7-kube-api-access-fdjf4\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.938835 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa683ceb-d0ee-49ae-a26a-293a24caa4a7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:25 crc kubenswrapper[4861]: E0219 13:29:25.938946 4861 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 13:29:25 crc kubenswrapper[4861]: E0219 13:29:25.938957 4861 projected.go:194] Error preparing 
data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 13:29:25 crc kubenswrapper[4861]: E0219 13:29:25.939002 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-etc-swift podName:f7c9197d-43d5-4c72-a7c3-c2e435368dd2 nodeName:}" failed. No retries permitted until 2026-02-19 13:29:33.938979821 +0000 UTC m=+1188.600083049 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-etc-swift") pod "swift-storage-0" (UID: "f7c9197d-43d5-4c72-a7c3-c2e435368dd2") : configmap "swift-ring-files" not found Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.940292 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aee09866-fa7a-4558-af5c-992b2ae7268c-operator-scripts\") pod \"placement-db-create-k5lwj\" (UID: \"aee09866-fa7a-4558-af5c-992b2ae7268c\") " pod="openstack/placement-db-create-k5lwj" Feb 19 13:29:25 crc kubenswrapper[4861]: I0219 13:29:25.967084 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzwx8\" (UniqueName: \"kubernetes.io/projected/aee09866-fa7a-4558-af5c-992b2ae7268c-kube-api-access-xzwx8\") pod \"placement-db-create-k5lwj\" (UID: \"aee09866-fa7a-4558-af5c-992b2ae7268c\") " pod="openstack/placement-db-create-k5lwj" Feb 19 13:29:26 crc kubenswrapper[4861]: I0219 13:29:26.040043 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d9124e9-fceb-43bc-8199-73428ac7e733-operator-scripts\") pod \"placement-b86e-account-create-update-24h55\" (UID: \"7d9124e9-fceb-43bc-8199-73428ac7e733\") " pod="openstack/placement-b86e-account-create-update-24h55" Feb 19 13:29:26 crc kubenswrapper[4861]: I0219 13:29:26.040138 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfk2q\" (UniqueName: \"kubernetes.io/projected/7d9124e9-fceb-43bc-8199-73428ac7e733-kube-api-access-sfk2q\") pod \"placement-b86e-account-create-update-24h55\" (UID: \"7d9124e9-fceb-43bc-8199-73428ac7e733\") " pod="openstack/placement-b86e-account-create-update-24h55" Feb 19 13:29:26 crc kubenswrapper[4861]: I0219 13:29:26.041262 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d9124e9-fceb-43bc-8199-73428ac7e733-operator-scripts\") pod \"placement-b86e-account-create-update-24h55\" (UID: \"7d9124e9-fceb-43bc-8199-73428ac7e733\") " pod="openstack/placement-b86e-account-create-update-24h55" Feb 19 13:29:26 crc kubenswrapper[4861]: I0219 13:29:26.061285 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfk2q\" (UniqueName: \"kubernetes.io/projected/7d9124e9-fceb-43bc-8199-73428ac7e733-kube-api-access-sfk2q\") pod \"placement-b86e-account-create-update-24h55\" (UID: \"7d9124e9-fceb-43bc-8199-73428ac7e733\") " pod="openstack/placement-b86e-account-create-update-24h55" Feb 19 13:29:26 crc kubenswrapper[4861]: I0219 13:29:26.073032 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-k5lwj" Feb 19 13:29:26 crc kubenswrapper[4861]: I0219 13:29:26.196093 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-b86e-account-create-update-24h55" Feb 19 13:29:26 crc kubenswrapper[4861]: I0219 13:29:26.348842 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-zhbfq"] Feb 19 13:29:26 crc kubenswrapper[4861]: W0219 13:29:26.365070 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57ba0af6_56e4_4dbd_850d_d019908adf08.slice/crio-c16f776f609bf143ec83a7b108b2e7c080511f6ddccbbaaa708c37b3ddd7166c WatchSource:0}: Error finding container c16f776f609bf143ec83a7b108b2e7c080511f6ddccbbaaa708c37b3ddd7166c: Status 404 returned error can't find the container with id c16f776f609bf143ec83a7b108b2e7c080511f6ddccbbaaa708c37b3ddd7166c Feb 19 13:29:26 crc kubenswrapper[4861]: I0219 13:29:26.401411 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8xwhv" event={"ID":"aa683ceb-d0ee-49ae-a26a-293a24caa4a7","Type":"ContainerDied","Data":"3c9d0996436b075e42f316198c67e1d2d0b824e8627ab0ed2318516657b50557"} Feb 19 13:29:26 crc kubenswrapper[4861]: I0219 13:29:26.401522 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c9d0996436b075e42f316198c67e1d2d0b824e8627ab0ed2318516657b50557" Feb 19 13:29:26 crc kubenswrapper[4861]: I0219 13:29:26.401689 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-8xwhv" Feb 19 13:29:26 crc kubenswrapper[4861]: I0219 13:29:26.404304 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8q5tv" event={"ID":"fdd07d83-8801-49de-a338-879cea293629","Type":"ContainerStarted","Data":"868dd7a6f7a4fec95d3d771d7155bfa9d7cd19cff859e80489fc1fc65a5c7e6b"} Feb 19 13:29:26 crc kubenswrapper[4861]: I0219 13:29:26.405670 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zhbfq" event={"ID":"57ba0af6-56e4-4dbd-850d-d019908adf08","Type":"ContainerStarted","Data":"c16f776f609bf143ec83a7b108b2e7c080511f6ddccbbaaa708c37b3ddd7166c"} Feb 19 13:29:26 crc kubenswrapper[4861]: I0219 13:29:26.461691 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-39dc-account-create-update-vhqdr"] Feb 19 13:29:26 crc kubenswrapper[4861]: W0219 13:29:26.464775 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5cf0370_783d_41f1_9b31_259d7725b892.slice/crio-298512fa8be29ce28eb20d9e87df9a6032f1ab968fbdbfa331c27387e3afd026 WatchSource:0}: Error finding container 298512fa8be29ce28eb20d9e87df9a6032f1ab968fbdbfa331c27387e3afd026: Status 404 returned error can't find the container with id 298512fa8be29ce28eb20d9e87df9a6032f1ab968fbdbfa331c27387e3afd026 Feb 19 13:29:26 crc kubenswrapper[4861]: I0219 13:29:26.648178 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-k5lwj"] Feb 19 13:29:26 crc kubenswrapper[4861]: I0219 13:29:26.792321 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b86e-account-create-update-24h55"] Feb 19 13:29:27 crc kubenswrapper[4861]: I0219 13:29:27.369601 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-689df5d84f-bjwqj" Feb 19 13:29:27 crc kubenswrapper[4861]: I0219 13:29:27.417267 4861 
generic.go:334] "Generic (PLEG): container finished" podID="aee09866-fa7a-4558-af5c-992b2ae7268c" containerID="15062df0f332fc34ec11ca8c581d2e2d781dd074f3e93bfb91759c5f2c17cbb8" exitCode=0 Feb 19 13:29:27 crc kubenswrapper[4861]: I0219 13:29:27.417352 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-k5lwj" event={"ID":"aee09866-fa7a-4558-af5c-992b2ae7268c","Type":"ContainerDied","Data":"15062df0f332fc34ec11ca8c581d2e2d781dd074f3e93bfb91759c5f2c17cbb8"} Feb 19 13:29:27 crc kubenswrapper[4861]: I0219 13:29:27.417397 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-k5lwj" event={"ID":"aee09866-fa7a-4558-af5c-992b2ae7268c","Type":"ContainerStarted","Data":"a1d1afce703fa68bbec7fe8ba25cfa28abf591f7375b375c0d37e35aab24bba4"} Feb 19 13:29:27 crc kubenswrapper[4861]: I0219 13:29:27.419081 4861 generic.go:334] "Generic (PLEG): container finished" podID="fe64a04b-1266-4b02-88e5-191f4a974422" containerID="3cbfb3eab539b4a4fa2f026f3ff4040b3cbd06fffc551908b986a67d49a2acba" exitCode=0 Feb 19 13:29:27 crc kubenswrapper[4861]: I0219 13:29:27.419134 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fe64a04b-1266-4b02-88e5-191f4a974422","Type":"ContainerDied","Data":"3cbfb3eab539b4a4fa2f026f3ff4040b3cbd06fffc551908b986a67d49a2acba"} Feb 19 13:29:27 crc kubenswrapper[4861]: I0219 13:29:27.426221 4861 generic.go:334] "Generic (PLEG): container finished" podID="57ba0af6-56e4-4dbd-850d-d019908adf08" containerID="24ee240c37ffbbe11eb45f46ef225bda8ab9c0ff100538b0496518fae46d4be9" exitCode=0 Feb 19 13:29:27 crc kubenswrapper[4861]: I0219 13:29:27.426311 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zhbfq" event={"ID":"57ba0af6-56e4-4dbd-850d-d019908adf08","Type":"ContainerDied","Data":"24ee240c37ffbbe11eb45f46ef225bda8ab9c0ff100538b0496518fae46d4be9"} Feb 19 13:29:27 crc kubenswrapper[4861]: I0219 13:29:27.442062 
4861 generic.go:334] "Generic (PLEG): container finished" podID="7d9124e9-fceb-43bc-8199-73428ac7e733" containerID="fe0ee9914a42fe0554e85923d4a0281bffd6e0ce646068958a11a4c9324dbe38" exitCode=0 Feb 19 13:29:27 crc kubenswrapper[4861]: I0219 13:29:27.442146 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b86e-account-create-update-24h55" event={"ID":"7d9124e9-fceb-43bc-8199-73428ac7e733","Type":"ContainerDied","Data":"fe0ee9914a42fe0554e85923d4a0281bffd6e0ce646068958a11a4c9324dbe38"} Feb 19 13:29:27 crc kubenswrapper[4861]: I0219 13:29:27.442222 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b86e-account-create-update-24h55" event={"ID":"7d9124e9-fceb-43bc-8199-73428ac7e733","Type":"ContainerStarted","Data":"2da7f780a1cfc99cb1b77abe44fc1213cbe56ccb756dae9007ee2721f0aa3442"} Feb 19 13:29:27 crc kubenswrapper[4861]: I0219 13:29:27.447391 4861 generic.go:334] "Generic (PLEG): container finished" podID="f5cf0370-783d-41f1-9b31-259d7725b892" containerID="e16fc260e093b17e520243b93e5fe14feac73536fef57e9588a174ac261332d7" exitCode=0 Feb 19 13:29:27 crc kubenswrapper[4861]: I0219 13:29:27.447481 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-39dc-account-create-update-vhqdr" event={"ID":"f5cf0370-783d-41f1-9b31-259d7725b892","Type":"ContainerDied","Data":"e16fc260e093b17e520243b93e5fe14feac73536fef57e9588a174ac261332d7"} Feb 19 13:29:27 crc kubenswrapper[4861]: I0219 13:29:27.447510 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-39dc-account-create-update-vhqdr" event={"ID":"f5cf0370-783d-41f1-9b31-259d7725b892","Type":"ContainerStarted","Data":"298512fa8be29ce28eb20d9e87df9a6032f1ab968fbdbfa331c27387e3afd026"} Feb 19 13:29:27 crc kubenswrapper[4861]: I0219 13:29:27.449775 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75b7bcc64f-rwzx6"] Feb 19 13:29:27 crc kubenswrapper[4861]: I0219 13:29:27.450244 4861 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75b7bcc64f-rwzx6" podUID="7b3203a6-ea06-471a-92b8-93634f2099c0" containerName="dnsmasq-dns" containerID="cri-o://1ee4981da9f89cd6bb8babcda7e9e5ae7a8c0047931fcc84922e05a0980e4a6e" gracePeriod=10 Feb 19 13:29:27 crc kubenswrapper[4861]: I0219 13:29:27.451909 4861 generic.go:334] "Generic (PLEG): container finished" podID="b117524a-eaad-4666-9e0e-bda909b2ad30" containerID="f11100b3d10e0ed10dbc1ccc95f8c840822253bd67ae1be0a2829b7c7e5404fc" exitCode=0 Feb 19 13:29:27 crc kubenswrapper[4861]: I0219 13:29:27.451941 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b117524a-eaad-4666-9e0e-bda909b2ad30","Type":"ContainerDied","Data":"f11100b3d10e0ed10dbc1ccc95f8c840822253bd67ae1be0a2829b7c7e5404fc"} Feb 19 13:29:27 crc kubenswrapper[4861]: I0219 13:29:27.697213 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-8xwhv"] Feb 19 13:29:27 crc kubenswrapper[4861]: I0219 13:29:27.704042 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-8xwhv"] Feb 19 13:29:27 crc kubenswrapper[4861]: I0219 13:29:27.762326 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75b7bcc64f-rwzx6" podUID="7b3203a6-ea06-471a-92b8-93634f2099c0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused" Feb 19 13:29:27 crc kubenswrapper[4861]: I0219 13:29:27.963683 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75b7bcc64f-rwzx6" Feb 19 13:29:27 crc kubenswrapper[4861]: I0219 13:29:27.988883 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa683ceb-d0ee-49ae-a26a-293a24caa4a7" path="/var/lib/kubelet/pods/aa683ceb-d0ee-49ae-a26a-293a24caa4a7/volumes" Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.099700 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b3203a6-ea06-471a-92b8-93634f2099c0-ovsdbserver-nb\") pod \"7b3203a6-ea06-471a-92b8-93634f2099c0\" (UID: \"7b3203a6-ea06-471a-92b8-93634f2099c0\") " Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.099784 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnv59\" (UniqueName: \"kubernetes.io/projected/7b3203a6-ea06-471a-92b8-93634f2099c0-kube-api-access-pnv59\") pod \"7b3203a6-ea06-471a-92b8-93634f2099c0\" (UID: \"7b3203a6-ea06-471a-92b8-93634f2099c0\") " Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.100071 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b3203a6-ea06-471a-92b8-93634f2099c0-ovsdbserver-sb\") pod \"7b3203a6-ea06-471a-92b8-93634f2099c0\" (UID: \"7b3203a6-ea06-471a-92b8-93634f2099c0\") " Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.100107 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b3203a6-ea06-471a-92b8-93634f2099c0-dns-svc\") pod \"7b3203a6-ea06-471a-92b8-93634f2099c0\" (UID: \"7b3203a6-ea06-471a-92b8-93634f2099c0\") " Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.100140 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b3203a6-ea06-471a-92b8-93634f2099c0-config\") pod 
\"7b3203a6-ea06-471a-92b8-93634f2099c0\" (UID: \"7b3203a6-ea06-471a-92b8-93634f2099c0\") " Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.107375 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b3203a6-ea06-471a-92b8-93634f2099c0-kube-api-access-pnv59" (OuterVolumeSpecName: "kube-api-access-pnv59") pod "7b3203a6-ea06-471a-92b8-93634f2099c0" (UID: "7b3203a6-ea06-471a-92b8-93634f2099c0"). InnerVolumeSpecName "kube-api-access-pnv59". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.151221 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b3203a6-ea06-471a-92b8-93634f2099c0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7b3203a6-ea06-471a-92b8-93634f2099c0" (UID: "7b3203a6-ea06-471a-92b8-93634f2099c0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.151555 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b3203a6-ea06-471a-92b8-93634f2099c0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7b3203a6-ea06-471a-92b8-93634f2099c0" (UID: "7b3203a6-ea06-471a-92b8-93634f2099c0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.163248 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b3203a6-ea06-471a-92b8-93634f2099c0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7b3203a6-ea06-471a-92b8-93634f2099c0" (UID: "7b3203a6-ea06-471a-92b8-93634f2099c0"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.185205 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b3203a6-ea06-471a-92b8-93634f2099c0-config" (OuterVolumeSpecName: "config") pod "7b3203a6-ea06-471a-92b8-93634f2099c0" (UID: "7b3203a6-ea06-471a-92b8-93634f2099c0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.205028 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b3203a6-ea06-471a-92b8-93634f2099c0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.205061 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b3203a6-ea06-471a-92b8-93634f2099c0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.205070 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b3203a6-ea06-471a-92b8-93634f2099c0-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.205095 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b3203a6-ea06-471a-92b8-93634f2099c0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.205109 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnv59\" (UniqueName: \"kubernetes.io/projected/7b3203a6-ea06-471a-92b8-93634f2099c0-kube-api-access-pnv59\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.466531 4861 generic.go:334] "Generic (PLEG): container finished" podID="7b3203a6-ea06-471a-92b8-93634f2099c0" 
containerID="1ee4981da9f89cd6bb8babcda7e9e5ae7a8c0047931fcc84922e05a0980e4a6e" exitCode=0 Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.466618 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75b7bcc64f-rwzx6" Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.466628 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b7bcc64f-rwzx6" event={"ID":"7b3203a6-ea06-471a-92b8-93634f2099c0","Type":"ContainerDied","Data":"1ee4981da9f89cd6bb8babcda7e9e5ae7a8c0047931fcc84922e05a0980e4a6e"} Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.467085 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b7bcc64f-rwzx6" event={"ID":"7b3203a6-ea06-471a-92b8-93634f2099c0","Type":"ContainerDied","Data":"74fc72627de99ca68f6fea6ae80239987af542480b2d9a27dfca45de48356814"} Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.467114 4861 scope.go:117] "RemoveContainer" containerID="1ee4981da9f89cd6bb8babcda7e9e5ae7a8c0047931fcc84922e05a0980e4a6e" Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.469479 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b117524a-eaad-4666-9e0e-bda909b2ad30","Type":"ContainerStarted","Data":"a34aea6a9dce7447619085b8bdcc194d614605d384336f31f474bb36345d67a2"} Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.470537 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.475617 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fe64a04b-1266-4b02-88e5-191f4a974422","Type":"ContainerStarted","Data":"d3405400dc8ba912020a46ce0cfbf537ec4969bbed50585c5f4aa5006d304d81"} Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.475988 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.490407 4861 scope.go:117] "RemoveContainer" containerID="e433117d2b8b229b3bef18df970a907fecce077c086abda19efa4d6e416772ab" Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.510901 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.996863776 podStartE2EDuration="59.510868092s" podCreationTimestamp="2026-02-19 13:28:29 +0000 UTC" firstStartedPulling="2026-02-19 13:28:32.31660526 +0000 UTC m=+1126.977708488" lastFinishedPulling="2026-02-19 13:28:53.830609566 +0000 UTC m=+1148.491712804" observedRunningTime="2026-02-19 13:29:28.504194972 +0000 UTC m=+1183.165298200" watchObservedRunningTime="2026-02-19 13:29:28.510868092 +0000 UTC m=+1183.171971320" Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.534563 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75b7bcc64f-rwzx6"] Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.542165 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75b7bcc64f-rwzx6"] Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.543888 4861 scope.go:117] "RemoveContainer" containerID="1ee4981da9f89cd6bb8babcda7e9e5ae7a8c0047931fcc84922e05a0980e4a6e" Feb 19 13:29:28 crc kubenswrapper[4861]: E0219 13:29:28.545813 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ee4981da9f89cd6bb8babcda7e9e5ae7a8c0047931fcc84922e05a0980e4a6e\": container with ID starting with 1ee4981da9f89cd6bb8babcda7e9e5ae7a8c0047931fcc84922e05a0980e4a6e not found: ID does not exist" containerID="1ee4981da9f89cd6bb8babcda7e9e5ae7a8c0047931fcc84922e05a0980e4a6e" Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.545989 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1ee4981da9f89cd6bb8babcda7e9e5ae7a8c0047931fcc84922e05a0980e4a6e"} err="failed to get container status \"1ee4981da9f89cd6bb8babcda7e9e5ae7a8c0047931fcc84922e05a0980e4a6e\": rpc error: code = NotFound desc = could not find container \"1ee4981da9f89cd6bb8babcda7e9e5ae7a8c0047931fcc84922e05a0980e4a6e\": container with ID starting with 1ee4981da9f89cd6bb8babcda7e9e5ae7a8c0047931fcc84922e05a0980e4a6e not found: ID does not exist" Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.546103 4861 scope.go:117] "RemoveContainer" containerID="e433117d2b8b229b3bef18df970a907fecce077c086abda19efa4d6e416772ab" Feb 19 13:29:28 crc kubenswrapper[4861]: E0219 13:29:28.546579 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e433117d2b8b229b3bef18df970a907fecce077c086abda19efa4d6e416772ab\": container with ID starting with e433117d2b8b229b3bef18df970a907fecce077c086abda19efa4d6e416772ab not found: ID does not exist" containerID="e433117d2b8b229b3bef18df970a907fecce077c086abda19efa4d6e416772ab" Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.546680 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e433117d2b8b229b3bef18df970a907fecce077c086abda19efa4d6e416772ab"} err="failed to get container status \"e433117d2b8b229b3bef18df970a907fecce077c086abda19efa4d6e416772ab\": rpc error: code = NotFound desc = could not find container \"e433117d2b8b229b3bef18df970a907fecce077c086abda19efa4d6e416772ab\": container with ID starting with e433117d2b8b229b3bef18df970a907fecce077c086abda19efa4d6e416772ab not found: ID does not exist" Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.570541 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.964133289 podStartE2EDuration="58.570517538s" podCreationTimestamp="2026-02-19 13:28:30 +0000 UTC" 
firstStartedPulling="2026-02-19 13:28:32.245899291 +0000 UTC m=+1126.907002519" lastFinishedPulling="2026-02-19 13:28:53.85228352 +0000 UTC m=+1148.513386768" observedRunningTime="2026-02-19 13:29:28.558800673 +0000 UTC m=+1183.219903901" watchObservedRunningTime="2026-02-19 13:29:28.570517538 +0000 UTC m=+1183.231620766" Feb 19 13:29:28 crc kubenswrapper[4861]: I0219 13:29:28.920407 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b86e-account-create-update-24h55" Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.045189 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfk2q\" (UniqueName: \"kubernetes.io/projected/7d9124e9-fceb-43bc-8199-73428ac7e733-kube-api-access-sfk2q\") pod \"7d9124e9-fceb-43bc-8199-73428ac7e733\" (UID: \"7d9124e9-fceb-43bc-8199-73428ac7e733\") " Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.045301 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d9124e9-fceb-43bc-8199-73428ac7e733-operator-scripts\") pod \"7d9124e9-fceb-43bc-8199-73428ac7e733\" (UID: \"7d9124e9-fceb-43bc-8199-73428ac7e733\") " Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.046517 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d9124e9-fceb-43bc-8199-73428ac7e733-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7d9124e9-fceb-43bc-8199-73428ac7e733" (UID: "7d9124e9-fceb-43bc-8199-73428ac7e733"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.052861 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d9124e9-fceb-43bc-8199-73428ac7e733-kube-api-access-sfk2q" (OuterVolumeSpecName: "kube-api-access-sfk2q") pod "7d9124e9-fceb-43bc-8199-73428ac7e733" (UID: "7d9124e9-fceb-43bc-8199-73428ac7e733"). InnerVolumeSpecName "kube-api-access-sfk2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.147619 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfk2q\" (UniqueName: \"kubernetes.io/projected/7d9124e9-fceb-43bc-8199-73428ac7e733-kube-api-access-sfk2q\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.147683 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d9124e9-fceb-43bc-8199-73428ac7e733-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.286810 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zhbfq" Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.288681 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-k5lwj" Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.381020 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.427063 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-39dc-account-create-update-vhqdr" Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.453837 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57ba0af6-56e4-4dbd-850d-d019908adf08-operator-scripts\") pod \"57ba0af6-56e4-4dbd-850d-d019908adf08\" (UID: \"57ba0af6-56e4-4dbd-850d-d019908adf08\") " Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.453971 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aee09866-fa7a-4558-af5c-992b2ae7268c-operator-scripts\") pod \"aee09866-fa7a-4558-af5c-992b2ae7268c\" (UID: \"aee09866-fa7a-4558-af5c-992b2ae7268c\") " Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.454179 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddnnc\" (UniqueName: \"kubernetes.io/projected/57ba0af6-56e4-4dbd-850d-d019908adf08-kube-api-access-ddnnc\") pod \"57ba0af6-56e4-4dbd-850d-d019908adf08\" (UID: \"57ba0af6-56e4-4dbd-850d-d019908adf08\") " Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.454229 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzwx8\" (UniqueName: \"kubernetes.io/projected/aee09866-fa7a-4558-af5c-992b2ae7268c-kube-api-access-xzwx8\") pod \"aee09866-fa7a-4558-af5c-992b2ae7268c\" (UID: \"aee09866-fa7a-4558-af5c-992b2ae7268c\") " Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.454475 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57ba0af6-56e4-4dbd-850d-d019908adf08-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "57ba0af6-56e4-4dbd-850d-d019908adf08" (UID: "57ba0af6-56e4-4dbd-850d-d019908adf08"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.454941 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57ba0af6-56e4-4dbd-850d-d019908adf08-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.454964 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aee09866-fa7a-4558-af5c-992b2ae7268c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aee09866-fa7a-4558-af5c-992b2ae7268c" (UID: "aee09866-fa7a-4558-af5c-992b2ae7268c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.458213 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aee09866-fa7a-4558-af5c-992b2ae7268c-kube-api-access-xzwx8" (OuterVolumeSpecName: "kube-api-access-xzwx8") pod "aee09866-fa7a-4558-af5c-992b2ae7268c" (UID: "aee09866-fa7a-4558-af5c-992b2ae7268c"). InnerVolumeSpecName "kube-api-access-xzwx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.459604 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57ba0af6-56e4-4dbd-850d-d019908adf08-kube-api-access-ddnnc" (OuterVolumeSpecName: "kube-api-access-ddnnc") pod "57ba0af6-56e4-4dbd-850d-d019908adf08" (UID: "57ba0af6-56e4-4dbd-850d-d019908adf08"). InnerVolumeSpecName "kube-api-access-ddnnc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.491868 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-k5lwj" event={"ID":"aee09866-fa7a-4558-af5c-992b2ae7268c","Type":"ContainerDied","Data":"a1d1afce703fa68bbec7fe8ba25cfa28abf591f7375b375c0d37e35aab24bba4"} Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.491917 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-k5lwj" Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.491934 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1d1afce703fa68bbec7fe8ba25cfa28abf591f7375b375c0d37e35aab24bba4" Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.498317 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zhbfq" Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.500499 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zhbfq" event={"ID":"57ba0af6-56e4-4dbd-850d-d019908adf08","Type":"ContainerDied","Data":"c16f776f609bf143ec83a7b108b2e7c080511f6ddccbbaaa708c37b3ddd7166c"} Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.500538 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c16f776f609bf143ec83a7b108b2e7c080511f6ddccbbaaa708c37b3ddd7166c" Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.509837 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b86e-account-create-update-24h55" event={"ID":"7d9124e9-fceb-43bc-8199-73428ac7e733","Type":"ContainerDied","Data":"2da7f780a1cfc99cb1b77abe44fc1213cbe56ccb756dae9007ee2721f0aa3442"} Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.510316 4861 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="2da7f780a1cfc99cb1b77abe44fc1213cbe56ccb756dae9007ee2721f0aa3442" Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.510265 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b86e-account-create-update-24h55" Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.512827 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-39dc-account-create-update-vhqdr" event={"ID":"f5cf0370-783d-41f1-9b31-259d7725b892","Type":"ContainerDied","Data":"298512fa8be29ce28eb20d9e87df9a6032f1ab968fbdbfa331c27387e3afd026"} Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.512865 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="298512fa8be29ce28eb20d9e87df9a6032f1ab968fbdbfa331c27387e3afd026" Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.512882 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-39dc-account-create-update-vhqdr" Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.556911 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hh25j\" (UniqueName: \"kubernetes.io/projected/f5cf0370-783d-41f1-9b31-259d7725b892-kube-api-access-hh25j\") pod \"f5cf0370-783d-41f1-9b31-259d7725b892\" (UID: \"f5cf0370-783d-41f1-9b31-259d7725b892\") " Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.557163 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5cf0370-783d-41f1-9b31-259d7725b892-operator-scripts\") pod \"f5cf0370-783d-41f1-9b31-259d7725b892\" (UID: \"f5cf0370-783d-41f1-9b31-259d7725b892\") " Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.557825 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aee09866-fa7a-4558-af5c-992b2ae7268c-operator-scripts\") on 
node \"crc\" DevicePath \"\"" Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.557858 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddnnc\" (UniqueName: \"kubernetes.io/projected/57ba0af6-56e4-4dbd-850d-d019908adf08-kube-api-access-ddnnc\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.557876 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzwx8\" (UniqueName: \"kubernetes.io/projected/aee09866-fa7a-4558-af5c-992b2ae7268c-kube-api-access-xzwx8\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.558048 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5cf0370-783d-41f1-9b31-259d7725b892-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f5cf0370-783d-41f1-9b31-259d7725b892" (UID: "f5cf0370-783d-41f1-9b31-259d7725b892"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.566274 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5cf0370-783d-41f1-9b31-259d7725b892-kube-api-access-hh25j" (OuterVolumeSpecName: "kube-api-access-hh25j") pod "f5cf0370-783d-41f1-9b31-259d7725b892" (UID: "f5cf0370-783d-41f1-9b31-259d7725b892"). InnerVolumeSpecName "kube-api-access-hh25j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.660600 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hh25j\" (UniqueName: \"kubernetes.io/projected/f5cf0370-783d-41f1-9b31-259d7725b892-kube-api-access-hh25j\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.660640 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5cf0370-783d-41f1-9b31-259d7725b892-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:29 crc kubenswrapper[4861]: I0219 13:29:29.991136 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b3203a6-ea06-471a-92b8-93634f2099c0" path="/var/lib/kubelet/pods/7b3203a6-ea06-471a-92b8-93634f2099c0/volumes" Feb 19 13:29:31 crc kubenswrapper[4861]: I0219 13:29:31.335158 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-mfh4r"] Feb 19 13:29:31 crc kubenswrapper[4861]: E0219 13:29:31.336029 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b3203a6-ea06-471a-92b8-93634f2099c0" containerName="dnsmasq-dns" Feb 19 13:29:31 crc kubenswrapper[4861]: I0219 13:29:31.336050 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b3203a6-ea06-471a-92b8-93634f2099c0" containerName="dnsmasq-dns" Feb 19 13:29:31 crc kubenswrapper[4861]: E0219 13:29:31.336073 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b3203a6-ea06-471a-92b8-93634f2099c0" containerName="init" Feb 19 13:29:31 crc kubenswrapper[4861]: I0219 13:29:31.336080 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b3203a6-ea06-471a-92b8-93634f2099c0" containerName="init" Feb 19 13:29:31 crc kubenswrapper[4861]: E0219 13:29:31.336092 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57ba0af6-56e4-4dbd-850d-d019908adf08" 
containerName="mariadb-database-create" Feb 19 13:29:31 crc kubenswrapper[4861]: I0219 13:29:31.336099 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="57ba0af6-56e4-4dbd-850d-d019908adf08" containerName="mariadb-database-create" Feb 19 13:29:31 crc kubenswrapper[4861]: E0219 13:29:31.336110 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5cf0370-783d-41f1-9b31-259d7725b892" containerName="mariadb-account-create-update" Feb 19 13:29:31 crc kubenswrapper[4861]: I0219 13:29:31.336116 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5cf0370-783d-41f1-9b31-259d7725b892" containerName="mariadb-account-create-update" Feb 19 13:29:31 crc kubenswrapper[4861]: E0219 13:29:31.336132 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aee09866-fa7a-4558-af5c-992b2ae7268c" containerName="mariadb-database-create" Feb 19 13:29:31 crc kubenswrapper[4861]: I0219 13:29:31.336138 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="aee09866-fa7a-4558-af5c-992b2ae7268c" containerName="mariadb-database-create" Feb 19 13:29:31 crc kubenswrapper[4861]: E0219 13:29:31.336151 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d9124e9-fceb-43bc-8199-73428ac7e733" containerName="mariadb-account-create-update" Feb 19 13:29:31 crc kubenswrapper[4861]: I0219 13:29:31.336157 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d9124e9-fceb-43bc-8199-73428ac7e733" containerName="mariadb-account-create-update" Feb 19 13:29:31 crc kubenswrapper[4861]: I0219 13:29:31.336291 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5cf0370-783d-41f1-9b31-259d7725b892" containerName="mariadb-account-create-update" Feb 19 13:29:31 crc kubenswrapper[4861]: I0219 13:29:31.336303 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d9124e9-fceb-43bc-8199-73428ac7e733" containerName="mariadb-account-create-update" Feb 19 13:29:31 crc kubenswrapper[4861]: I0219 13:29:31.336313 
4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="57ba0af6-56e4-4dbd-850d-d019908adf08" containerName="mariadb-database-create" Feb 19 13:29:31 crc kubenswrapper[4861]: I0219 13:29:31.336326 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="aee09866-fa7a-4558-af5c-992b2ae7268c" containerName="mariadb-database-create" Feb 19 13:29:31 crc kubenswrapper[4861]: I0219 13:29:31.336338 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b3203a6-ea06-471a-92b8-93634f2099c0" containerName="dnsmasq-dns" Feb 19 13:29:31 crc kubenswrapper[4861]: I0219 13:29:31.336857 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-mfh4r" Feb 19 13:29:31 crc kubenswrapper[4861]: I0219 13:29:31.339515 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 19 13:29:31 crc kubenswrapper[4861]: I0219 13:29:31.343997 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mfh4r"] Feb 19 13:29:31 crc kubenswrapper[4861]: I0219 13:29:31.392223 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7ghv\" (UniqueName: \"kubernetes.io/projected/d09c18e7-59e1-4960-96d9-edfe82f826b3-kube-api-access-z7ghv\") pod \"root-account-create-update-mfh4r\" (UID: \"d09c18e7-59e1-4960-96d9-edfe82f826b3\") " pod="openstack/root-account-create-update-mfh4r" Feb 19 13:29:31 crc kubenswrapper[4861]: I0219 13:29:31.392315 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d09c18e7-59e1-4960-96d9-edfe82f826b3-operator-scripts\") pod \"root-account-create-update-mfh4r\" (UID: \"d09c18e7-59e1-4960-96d9-edfe82f826b3\") " pod="openstack/root-account-create-update-mfh4r" Feb 19 13:29:31 crc kubenswrapper[4861]: I0219 
13:29:31.494360 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d09c18e7-59e1-4960-96d9-edfe82f826b3-operator-scripts\") pod \"root-account-create-update-mfh4r\" (UID: \"d09c18e7-59e1-4960-96d9-edfe82f826b3\") " pod="openstack/root-account-create-update-mfh4r" Feb 19 13:29:31 crc kubenswrapper[4861]: I0219 13:29:31.494548 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7ghv\" (UniqueName: \"kubernetes.io/projected/d09c18e7-59e1-4960-96d9-edfe82f826b3-kube-api-access-z7ghv\") pod \"root-account-create-update-mfh4r\" (UID: \"d09c18e7-59e1-4960-96d9-edfe82f826b3\") " pod="openstack/root-account-create-update-mfh4r" Feb 19 13:29:31 crc kubenswrapper[4861]: I0219 13:29:31.495345 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d09c18e7-59e1-4960-96d9-edfe82f826b3-operator-scripts\") pod \"root-account-create-update-mfh4r\" (UID: \"d09c18e7-59e1-4960-96d9-edfe82f826b3\") " pod="openstack/root-account-create-update-mfh4r" Feb 19 13:29:31 crc kubenswrapper[4861]: I0219 13:29:31.551195 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7ghv\" (UniqueName: \"kubernetes.io/projected/d09c18e7-59e1-4960-96d9-edfe82f826b3-kube-api-access-z7ghv\") pod \"root-account-create-update-mfh4r\" (UID: \"d09c18e7-59e1-4960-96d9-edfe82f826b3\") " pod="openstack/root-account-create-update-mfh4r" Feb 19 13:29:31 crc kubenswrapper[4861]: I0219 13:29:31.570923 4861 generic.go:334] "Generic (PLEG): container finished" podID="c707a7f2-3143-4979-96e4-23177b810c9e" containerID="5f4ea76f6f5a358df34a0fc0c42414fb4768cb3d35df370d01f4bb513bb89a5d" exitCode=0 Feb 19 13:29:31 crc kubenswrapper[4861]: I0219 13:29:31.571010 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8c6n5" 
event={"ID":"c707a7f2-3143-4979-96e4-23177b810c9e","Type":"ContainerDied","Data":"5f4ea76f6f5a358df34a0fc0c42414fb4768cb3d35df370d01f4bb513bb89a5d"} Feb 19 13:29:31 crc kubenswrapper[4861]: I0219 13:29:31.655227 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-mfh4r" Feb 19 13:29:32 crc kubenswrapper[4861]: I0219 13:29:32.169994 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mfh4r"] Feb 19 13:29:32 crc kubenswrapper[4861]: I0219 13:29:32.582673 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mfh4r" event={"ID":"d09c18e7-59e1-4960-96d9-edfe82f826b3","Type":"ContainerStarted","Data":"4ec70b45048d50715b61c4fb177ea92e9d73fbffe8837d09a60dcadb648edd85"} Feb 19 13:29:32 crc kubenswrapper[4861]: I0219 13:29:32.583064 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mfh4r" event={"ID":"d09c18e7-59e1-4960-96d9-edfe82f826b3","Type":"ContainerStarted","Data":"9fa508c33e86e35c9272f1c877c4bd35cda0ab3866975a1706c84da82553fc8c"} Feb 19 13:29:32 crc kubenswrapper[4861]: I0219 13:29:32.600895 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-mfh4r" podStartSLOduration=1.600873906 podStartE2EDuration="1.600873906s" podCreationTimestamp="2026-02-19 13:29:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:29:32.600809815 +0000 UTC m=+1187.261913033" watchObservedRunningTime="2026-02-19 13:29:32.600873906 +0000 UTC m=+1187.261977134" Feb 19 13:29:33 crc kubenswrapper[4861]: I0219 13:29:33.599080 4861 generic.go:334] "Generic (PLEG): container finished" podID="d09c18e7-59e1-4960-96d9-edfe82f826b3" containerID="4ec70b45048d50715b61c4fb177ea92e9d73fbffe8837d09a60dcadb648edd85" exitCode=0 Feb 19 13:29:33 crc 
kubenswrapper[4861]: I0219 13:29:33.599590 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mfh4r" event={"ID":"d09c18e7-59e1-4960-96d9-edfe82f826b3","Type":"ContainerDied","Data":"4ec70b45048d50715b61c4fb177ea92e9d73fbffe8837d09a60dcadb648edd85"} Feb 19 13:29:33 crc kubenswrapper[4861]: I0219 13:29:33.942072 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-etc-swift\") pod \"swift-storage-0\" (UID: \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\") " pod="openstack/swift-storage-0" Feb 19 13:29:33 crc kubenswrapper[4861]: I0219 13:29:33.963534 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-etc-swift\") pod \"swift-storage-0\" (UID: \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\") " pod="openstack/swift-storage-0" Feb 19 13:29:34 crc kubenswrapper[4861]: I0219 13:29:34.260617 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 19 13:29:34 crc kubenswrapper[4861]: I0219 13:29:34.439473 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-q996h" podUID="97eefa3e-8d45-46c5-bfa6-150d0255a15b" containerName="ovn-controller" probeResult="failure" output=< Feb 19 13:29:34 crc kubenswrapper[4861]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 19 13:29:34 crc kubenswrapper[4861]: > Feb 19 13:29:34 crc kubenswrapper[4861]: I0219 13:29:34.440283 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-d4skq" Feb 19 13:29:34 crc kubenswrapper[4861]: I0219 13:29:34.496181 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-d4skq" Feb 19 13:29:34 crc kubenswrapper[4861]: I0219 13:29:34.715541 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-q996h-config-5qscf"] Feb 19 13:29:34 crc kubenswrapper[4861]: I0219 13:29:34.716762 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-q996h-config-5qscf" Feb 19 13:29:34 crc kubenswrapper[4861]: I0219 13:29:34.719730 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 19 13:29:34 crc kubenswrapper[4861]: I0219 13:29:34.749790 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q996h-config-5qscf"] Feb 19 13:29:34 crc kubenswrapper[4861]: I0219 13:29:34.761674 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/74150e86-16c7-41a4-819a-21dac1e87ef2-var-log-ovn\") pod \"ovn-controller-q996h-config-5qscf\" (UID: \"74150e86-16c7-41a4-819a-21dac1e87ef2\") " pod="openstack/ovn-controller-q996h-config-5qscf" Feb 19 13:29:34 crc kubenswrapper[4861]: I0219 13:29:34.761795 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/74150e86-16c7-41a4-819a-21dac1e87ef2-var-run\") pod \"ovn-controller-q996h-config-5qscf\" (UID: \"74150e86-16c7-41a4-819a-21dac1e87ef2\") " pod="openstack/ovn-controller-q996h-config-5qscf" Feb 19 13:29:34 crc kubenswrapper[4861]: I0219 13:29:34.761910 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdtrr\" (UniqueName: \"kubernetes.io/projected/74150e86-16c7-41a4-819a-21dac1e87ef2-kube-api-access-qdtrr\") pod \"ovn-controller-q996h-config-5qscf\" (UID: \"74150e86-16c7-41a4-819a-21dac1e87ef2\") " pod="openstack/ovn-controller-q996h-config-5qscf" Feb 19 13:29:34 crc kubenswrapper[4861]: I0219 13:29:34.761956 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/74150e86-16c7-41a4-819a-21dac1e87ef2-additional-scripts\") pod \"ovn-controller-q996h-config-5qscf\" (UID: 
\"74150e86-16c7-41a4-819a-21dac1e87ef2\") " pod="openstack/ovn-controller-q996h-config-5qscf" Feb 19 13:29:34 crc kubenswrapper[4861]: I0219 13:29:34.762000 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74150e86-16c7-41a4-819a-21dac1e87ef2-scripts\") pod \"ovn-controller-q996h-config-5qscf\" (UID: \"74150e86-16c7-41a4-819a-21dac1e87ef2\") " pod="openstack/ovn-controller-q996h-config-5qscf" Feb 19 13:29:34 crc kubenswrapper[4861]: I0219 13:29:34.762047 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/74150e86-16c7-41a4-819a-21dac1e87ef2-var-run-ovn\") pod \"ovn-controller-q996h-config-5qscf\" (UID: \"74150e86-16c7-41a4-819a-21dac1e87ef2\") " pod="openstack/ovn-controller-q996h-config-5qscf" Feb 19 13:29:34 crc kubenswrapper[4861]: I0219 13:29:34.863688 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/74150e86-16c7-41a4-819a-21dac1e87ef2-additional-scripts\") pod \"ovn-controller-q996h-config-5qscf\" (UID: \"74150e86-16c7-41a4-819a-21dac1e87ef2\") " pod="openstack/ovn-controller-q996h-config-5qscf" Feb 19 13:29:34 crc kubenswrapper[4861]: I0219 13:29:34.863770 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74150e86-16c7-41a4-819a-21dac1e87ef2-scripts\") pod \"ovn-controller-q996h-config-5qscf\" (UID: \"74150e86-16c7-41a4-819a-21dac1e87ef2\") " pod="openstack/ovn-controller-q996h-config-5qscf" Feb 19 13:29:34 crc kubenswrapper[4861]: I0219 13:29:34.863821 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/74150e86-16c7-41a4-819a-21dac1e87ef2-var-run-ovn\") pod \"ovn-controller-q996h-config-5qscf\" 
(UID: \"74150e86-16c7-41a4-819a-21dac1e87ef2\") " pod="openstack/ovn-controller-q996h-config-5qscf" Feb 19 13:29:34 crc kubenswrapper[4861]: I0219 13:29:34.863862 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/74150e86-16c7-41a4-819a-21dac1e87ef2-var-log-ovn\") pod \"ovn-controller-q996h-config-5qscf\" (UID: \"74150e86-16c7-41a4-819a-21dac1e87ef2\") " pod="openstack/ovn-controller-q996h-config-5qscf" Feb 19 13:29:34 crc kubenswrapper[4861]: I0219 13:29:34.863898 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/74150e86-16c7-41a4-819a-21dac1e87ef2-var-run\") pod \"ovn-controller-q996h-config-5qscf\" (UID: \"74150e86-16c7-41a4-819a-21dac1e87ef2\") " pod="openstack/ovn-controller-q996h-config-5qscf" Feb 19 13:29:34 crc kubenswrapper[4861]: I0219 13:29:34.863992 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdtrr\" (UniqueName: \"kubernetes.io/projected/74150e86-16c7-41a4-819a-21dac1e87ef2-kube-api-access-qdtrr\") pod \"ovn-controller-q996h-config-5qscf\" (UID: \"74150e86-16c7-41a4-819a-21dac1e87ef2\") " pod="openstack/ovn-controller-q996h-config-5qscf" Feb 19 13:29:34 crc kubenswrapper[4861]: I0219 13:29:34.865199 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/74150e86-16c7-41a4-819a-21dac1e87ef2-additional-scripts\") pod \"ovn-controller-q996h-config-5qscf\" (UID: \"74150e86-16c7-41a4-819a-21dac1e87ef2\") " pod="openstack/ovn-controller-q996h-config-5qscf" Feb 19 13:29:34 crc kubenswrapper[4861]: I0219 13:29:34.866687 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/74150e86-16c7-41a4-819a-21dac1e87ef2-var-run\") pod \"ovn-controller-q996h-config-5qscf\" (UID: 
\"74150e86-16c7-41a4-819a-21dac1e87ef2\") " pod="openstack/ovn-controller-q996h-config-5qscf" Feb 19 13:29:34 crc kubenswrapper[4861]: I0219 13:29:34.866747 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/74150e86-16c7-41a4-819a-21dac1e87ef2-var-log-ovn\") pod \"ovn-controller-q996h-config-5qscf\" (UID: \"74150e86-16c7-41a4-819a-21dac1e87ef2\") " pod="openstack/ovn-controller-q996h-config-5qscf" Feb 19 13:29:34 crc kubenswrapper[4861]: I0219 13:29:34.866774 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/74150e86-16c7-41a4-819a-21dac1e87ef2-var-run-ovn\") pod \"ovn-controller-q996h-config-5qscf\" (UID: \"74150e86-16c7-41a4-819a-21dac1e87ef2\") " pod="openstack/ovn-controller-q996h-config-5qscf" Feb 19 13:29:34 crc kubenswrapper[4861]: I0219 13:29:34.868331 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74150e86-16c7-41a4-819a-21dac1e87ef2-scripts\") pod \"ovn-controller-q996h-config-5qscf\" (UID: \"74150e86-16c7-41a4-819a-21dac1e87ef2\") " pod="openstack/ovn-controller-q996h-config-5qscf" Feb 19 13:29:34 crc kubenswrapper[4861]: I0219 13:29:34.887199 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdtrr\" (UniqueName: \"kubernetes.io/projected/74150e86-16c7-41a4-819a-21dac1e87ef2-kube-api-access-qdtrr\") pod \"ovn-controller-q996h-config-5qscf\" (UID: \"74150e86-16c7-41a4-819a-21dac1e87ef2\") " pod="openstack/ovn-controller-q996h-config-5qscf" Feb 19 13:29:35 crc kubenswrapper[4861]: I0219 13:29:35.044557 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q996h-config-5qscf" Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.501204 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-8c6n5" Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.512065 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-mfh4r" Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.539125 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-q996h" podUID="97eefa3e-8d45-46c5-bfa6-150d0255a15b" containerName="ovn-controller" probeResult="failure" output=< Feb 19 13:29:39 crc kubenswrapper[4861]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 19 13:29:39 crc kubenswrapper[4861]: > Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.654536 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c707a7f2-3143-4979-96e4-23177b810c9e-ring-data-devices\") pod \"c707a7f2-3143-4979-96e4-23177b810c9e\" (UID: \"c707a7f2-3143-4979-96e4-23177b810c9e\") " Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.654612 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c707a7f2-3143-4979-96e4-23177b810c9e-scripts\") pod \"c707a7f2-3143-4979-96e4-23177b810c9e\" (UID: \"c707a7f2-3143-4979-96e4-23177b810c9e\") " Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.654636 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7ghv\" (UniqueName: \"kubernetes.io/projected/d09c18e7-59e1-4960-96d9-edfe82f826b3-kube-api-access-z7ghv\") pod \"d09c18e7-59e1-4960-96d9-edfe82f826b3\" (UID: \"d09c18e7-59e1-4960-96d9-edfe82f826b3\") " Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.654668 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c707a7f2-3143-4979-96e4-23177b810c9e-combined-ca-bundle\") pod \"c707a7f2-3143-4979-96e4-23177b810c9e\" (UID: \"c707a7f2-3143-4979-96e4-23177b810c9e\") " Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.655375 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d09c18e7-59e1-4960-96d9-edfe82f826b3-operator-scripts\") pod \"d09c18e7-59e1-4960-96d9-edfe82f826b3\" (UID: \"d09c18e7-59e1-4960-96d9-edfe82f826b3\") " Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.655456 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c707a7f2-3143-4979-96e4-23177b810c9e-dispersionconf\") pod \"c707a7f2-3143-4979-96e4-23177b810c9e\" (UID: \"c707a7f2-3143-4979-96e4-23177b810c9e\") " Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.655552 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c707a7f2-3143-4979-96e4-23177b810c9e-etc-swift\") pod \"c707a7f2-3143-4979-96e4-23177b810c9e\" (UID: \"c707a7f2-3143-4979-96e4-23177b810c9e\") " Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.655581 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c707a7f2-3143-4979-96e4-23177b810c9e-swiftconf\") pod \"c707a7f2-3143-4979-96e4-23177b810c9e\" (UID: \"c707a7f2-3143-4979-96e4-23177b810c9e\") " Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.655696 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ht2jx\" (UniqueName: \"kubernetes.io/projected/c707a7f2-3143-4979-96e4-23177b810c9e-kube-api-access-ht2jx\") pod \"c707a7f2-3143-4979-96e4-23177b810c9e\" (UID: \"c707a7f2-3143-4979-96e4-23177b810c9e\") " Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 
13:29:39.655962 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c707a7f2-3143-4979-96e4-23177b810c9e-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "c707a7f2-3143-4979-96e4-23177b810c9e" (UID: "c707a7f2-3143-4979-96e4-23177b810c9e"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.656108 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d09c18e7-59e1-4960-96d9-edfe82f826b3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d09c18e7-59e1-4960-96d9-edfe82f826b3" (UID: "d09c18e7-59e1-4960-96d9-edfe82f826b3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.656397 4861 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c707a7f2-3143-4979-96e4-23177b810c9e-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.656417 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d09c18e7-59e1-4960-96d9-edfe82f826b3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.656493 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c707a7f2-3143-4979-96e4-23177b810c9e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c707a7f2-3143-4979-96e4-23177b810c9e" (UID: "c707a7f2-3143-4979-96e4-23177b810c9e"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.659412 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d09c18e7-59e1-4960-96d9-edfe82f826b3-kube-api-access-z7ghv" (OuterVolumeSpecName: "kube-api-access-z7ghv") pod "d09c18e7-59e1-4960-96d9-edfe82f826b3" (UID: "d09c18e7-59e1-4960-96d9-edfe82f826b3"). InnerVolumeSpecName "kube-api-access-z7ghv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.660304 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c707a7f2-3143-4979-96e4-23177b810c9e-kube-api-access-ht2jx" (OuterVolumeSpecName: "kube-api-access-ht2jx") pod "c707a7f2-3143-4979-96e4-23177b810c9e" (UID: "c707a7f2-3143-4979-96e4-23177b810c9e"). InnerVolumeSpecName "kube-api-access-ht2jx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.665802 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c707a7f2-3143-4979-96e4-23177b810c9e-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "c707a7f2-3143-4979-96e4-23177b810c9e" (UID: "c707a7f2-3143-4979-96e4-23177b810c9e"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.669129 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8c6n5" event={"ID":"c707a7f2-3143-4979-96e4-23177b810c9e","Type":"ContainerDied","Data":"fee9e6aa6eecc92d2c371c9644cd5ef5b7081d75d925fb26f19e5f51dee22c09"} Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.669151 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-8c6n5" Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.669174 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fee9e6aa6eecc92d2c371c9644cd5ef5b7081d75d925fb26f19e5f51dee22c09" Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.670738 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mfh4r" event={"ID":"d09c18e7-59e1-4960-96d9-edfe82f826b3","Type":"ContainerDied","Data":"9fa508c33e86e35c9272f1c877c4bd35cda0ab3866975a1706c84da82553fc8c"} Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.670774 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fa508c33e86e35c9272f1c877c4bd35cda0ab3866975a1706c84da82553fc8c" Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.670825 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-mfh4r" Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.679512 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c707a7f2-3143-4979-96e4-23177b810c9e-scripts" (OuterVolumeSpecName: "scripts") pod "c707a7f2-3143-4979-96e4-23177b810c9e" (UID: "c707a7f2-3143-4979-96e4-23177b810c9e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.682604 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c707a7f2-3143-4979-96e4-23177b810c9e-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "c707a7f2-3143-4979-96e4-23177b810c9e" (UID: "c707a7f2-3143-4979-96e4-23177b810c9e"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.687952 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c707a7f2-3143-4979-96e4-23177b810c9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c707a7f2-3143-4979-96e4-23177b810c9e" (UID: "c707a7f2-3143-4979-96e4-23177b810c9e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.757710 4861 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c707a7f2-3143-4979-96e4-23177b810c9e-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.757747 4861 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c707a7f2-3143-4979-96e4-23177b810c9e-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.757757 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ht2jx\" (UniqueName: \"kubernetes.io/projected/c707a7f2-3143-4979-96e4-23177b810c9e-kube-api-access-ht2jx\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.757810 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c707a7f2-3143-4979-96e4-23177b810c9e-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.757819 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7ghv\" (UniqueName: \"kubernetes.io/projected/d09c18e7-59e1-4960-96d9-edfe82f826b3-kube-api-access-z7ghv\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.757827 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c707a7f2-3143-4979-96e4-23177b810c9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.757835 4861 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c707a7f2-3143-4979-96e4-23177b810c9e-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:39 crc kubenswrapper[4861]: I0219 13:29:39.854585 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q996h-config-5qscf"] Feb 19 13:29:39 crc kubenswrapper[4861]: W0219 13:29:39.858762 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74150e86_16c7_41a4_819a_21dac1e87ef2.slice/crio-4c5b85c85dfda9c8751b6000be27362c3463c2925e33f695c9c6a4ebbc701fc0 WatchSource:0}: Error finding container 4c5b85c85dfda9c8751b6000be27362c3463c2925e33f695c9c6a4ebbc701fc0: Status 404 returned error can't find the container with id 4c5b85c85dfda9c8751b6000be27362c3463c2925e33f695c9c6a4ebbc701fc0 Feb 19 13:29:40 crc kubenswrapper[4861]: I0219 13:29:40.044237 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 19 13:29:40 crc kubenswrapper[4861]: I0219 13:29:40.679875 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8q5tv" event={"ID":"fdd07d83-8801-49de-a338-879cea293629","Type":"ContainerStarted","Data":"03c7958adbcf935e7fed370774cad0fe23d4461b437a22f750d78b8a60d51ddc"} Feb 19 13:29:40 crc kubenswrapper[4861]: I0219 13:29:40.682578 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f7c9197d-43d5-4c72-a7c3-c2e435368dd2","Type":"ContainerStarted","Data":"7c52d99b6dd419fa019b2df2b1af0034144f70d00885bb24457b4eb0e9a8ca04"} Feb 19 13:29:40 crc kubenswrapper[4861]: I0219 13:29:40.688061 4861 generic.go:334] "Generic (PLEG): container finished" 
podID="74150e86-16c7-41a4-819a-21dac1e87ef2" containerID="cb68315335170486bebf59381c3d2b0420807df6d883092496cf318b7f08bed0" exitCode=0 Feb 19 13:29:40 crc kubenswrapper[4861]: I0219 13:29:40.688102 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q996h-config-5qscf" event={"ID":"74150e86-16c7-41a4-819a-21dac1e87ef2","Type":"ContainerDied","Data":"cb68315335170486bebf59381c3d2b0420807df6d883092496cf318b7f08bed0"} Feb 19 13:29:40 crc kubenswrapper[4861]: I0219 13:29:40.688124 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q996h-config-5qscf" event={"ID":"74150e86-16c7-41a4-819a-21dac1e87ef2","Type":"ContainerStarted","Data":"4c5b85c85dfda9c8751b6000be27362c3463c2925e33f695c9c6a4ebbc701fc0"} Feb 19 13:29:40 crc kubenswrapper[4861]: I0219 13:29:40.705884 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-8q5tv" podStartSLOduration=3.171538954 podStartE2EDuration="16.705866484s" podCreationTimestamp="2026-02-19 13:29:24 +0000 UTC" firstStartedPulling="2026-02-19 13:29:25.881326769 +0000 UTC m=+1180.542429997" lastFinishedPulling="2026-02-19 13:29:39.415654299 +0000 UTC m=+1194.076757527" observedRunningTime="2026-02-19 13:29:40.7001466 +0000 UTC m=+1195.361249828" watchObservedRunningTime="2026-02-19 13:29:40.705866484 +0000 UTC m=+1195.366969712" Feb 19 13:29:41 crc kubenswrapper[4861]: I0219 13:29:41.610816 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 19 13:29:41 crc kubenswrapper[4861]: I0219 13:29:41.651000 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:29:41 crc kubenswrapper[4861]: I0219 13:29:41.741142 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"f7c9197d-43d5-4c72-a7c3-c2e435368dd2","Type":"ContainerStarted","Data":"39d17545ee3cebeec079609c07c099110ffc65d4ddb3e462d92e98a8b967e616"} Feb 19 13:29:41 crc kubenswrapper[4861]: I0219 13:29:41.741958 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f7c9197d-43d5-4c72-a7c3-c2e435368dd2","Type":"ContainerStarted","Data":"74434874608f684b61cedbd97fbdb2a90894f4bed2db7e66ba332e8b322c05b2"} Feb 19 13:29:41 crc kubenswrapper[4861]: I0219 13:29:41.741973 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f7c9197d-43d5-4c72-a7c3-c2e435368dd2","Type":"ContainerStarted","Data":"2923944d19b38687e829c0ad91d45d3fa58c574c1be4562c0a63aa47c65877b8"} Feb 19 13:29:41 crc kubenswrapper[4861]: I0219 13:29:41.948498 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-4qhvt"] Feb 19 13:29:41 crc kubenswrapper[4861]: E0219 13:29:41.948842 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c707a7f2-3143-4979-96e4-23177b810c9e" containerName="swift-ring-rebalance" Feb 19 13:29:41 crc kubenswrapper[4861]: I0219 13:29:41.948858 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c707a7f2-3143-4979-96e4-23177b810c9e" containerName="swift-ring-rebalance" Feb 19 13:29:41 crc kubenswrapper[4861]: E0219 13:29:41.948874 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d09c18e7-59e1-4960-96d9-edfe82f826b3" containerName="mariadb-account-create-update" Feb 19 13:29:41 crc kubenswrapper[4861]: I0219 13:29:41.948880 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d09c18e7-59e1-4960-96d9-edfe82f826b3" containerName="mariadb-account-create-update" Feb 19 13:29:41 crc kubenswrapper[4861]: I0219 13:29:41.949019 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="d09c18e7-59e1-4960-96d9-edfe82f826b3" containerName="mariadb-account-create-update" Feb 19 13:29:41 crc 
kubenswrapper[4861]: I0219 13:29:41.949060 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="c707a7f2-3143-4979-96e4-23177b810c9e" containerName="swift-ring-rebalance" Feb 19 13:29:41 crc kubenswrapper[4861]: I0219 13:29:41.951497 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4qhvt" Feb 19 13:29:41 crc kubenswrapper[4861]: I0219 13:29:41.994560 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cmbn\" (UniqueName: \"kubernetes.io/projected/9a0a1119-88df-47db-9b04-87d908da605d-kube-api-access-5cmbn\") pod \"cinder-db-create-4qhvt\" (UID: \"9a0a1119-88df-47db-9b04-87d908da605d\") " pod="openstack/cinder-db-create-4qhvt" Feb 19 13:29:41 crc kubenswrapper[4861]: I0219 13:29:41.994682 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a0a1119-88df-47db-9b04-87d908da605d-operator-scripts\") pod \"cinder-db-create-4qhvt\" (UID: \"9a0a1119-88df-47db-9b04-87d908da605d\") " pod="openstack/cinder-db-create-4qhvt" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.023389 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4qhvt"] Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.097230 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cmbn\" (UniqueName: \"kubernetes.io/projected/9a0a1119-88df-47db-9b04-87d908da605d-kube-api-access-5cmbn\") pod \"cinder-db-create-4qhvt\" (UID: \"9a0a1119-88df-47db-9b04-87d908da605d\") " pod="openstack/cinder-db-create-4qhvt" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.098256 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a0a1119-88df-47db-9b04-87d908da605d-operator-scripts\") pod 
\"cinder-db-create-4qhvt\" (UID: \"9a0a1119-88df-47db-9b04-87d908da605d\") " pod="openstack/cinder-db-create-4qhvt" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.102885 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a0a1119-88df-47db-9b04-87d908da605d-operator-scripts\") pod \"cinder-db-create-4qhvt\" (UID: \"9a0a1119-88df-47db-9b04-87d908da605d\") " pod="openstack/cinder-db-create-4qhvt" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.120847 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cmbn\" (UniqueName: \"kubernetes.io/projected/9a0a1119-88df-47db-9b04-87d908da605d-kube-api-access-5cmbn\") pod \"cinder-db-create-4qhvt\" (UID: \"9a0a1119-88df-47db-9b04-87d908da605d\") " pod="openstack/cinder-db-create-4qhvt" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.178997 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-5b8e-account-create-update-2jw4d"] Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.180558 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5b8e-account-create-update-2jw4d" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.183112 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.187549 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5b8e-account-create-update-2jw4d"] Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.239148 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-q996h-config-5qscf" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.263238 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-mphtc"] Feb 19 13:29:42 crc kubenswrapper[4861]: E0219 13:29:42.263618 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74150e86-16c7-41a4-819a-21dac1e87ef2" containerName="ovn-config" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.263634 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="74150e86-16c7-41a4-819a-21dac1e87ef2" containerName="ovn-config" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.263790 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="74150e86-16c7-41a4-819a-21dac1e87ef2" containerName="ovn-config" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.264264 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mphtc" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.270766 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-mphtc"] Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.275381 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-4qhvt" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.302545 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/74150e86-16c7-41a4-819a-21dac1e87ef2-var-run-ovn\") pod \"74150e86-16c7-41a4-819a-21dac1e87ef2\" (UID: \"74150e86-16c7-41a4-819a-21dac1e87ef2\") " Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.302614 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74150e86-16c7-41a4-819a-21dac1e87ef2-scripts\") pod \"74150e86-16c7-41a4-819a-21dac1e87ef2\" (UID: \"74150e86-16c7-41a4-819a-21dac1e87ef2\") " Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.302682 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/74150e86-16c7-41a4-819a-21dac1e87ef2-var-run\") pod \"74150e86-16c7-41a4-819a-21dac1e87ef2\" (UID: \"74150e86-16c7-41a4-819a-21dac1e87ef2\") " Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.302769 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/74150e86-16c7-41a4-819a-21dac1e87ef2-var-log-ovn\") pod \"74150e86-16c7-41a4-819a-21dac1e87ef2\" (UID: \"74150e86-16c7-41a4-819a-21dac1e87ef2\") " Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.302791 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdtrr\" (UniqueName: \"kubernetes.io/projected/74150e86-16c7-41a4-819a-21dac1e87ef2-kube-api-access-qdtrr\") pod \"74150e86-16c7-41a4-819a-21dac1e87ef2\" (UID: \"74150e86-16c7-41a4-819a-21dac1e87ef2\") " Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.302815 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/74150e86-16c7-41a4-819a-21dac1e87ef2-additional-scripts\") pod \"74150e86-16c7-41a4-819a-21dac1e87ef2\" (UID: \"74150e86-16c7-41a4-819a-21dac1e87ef2\") " Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.303031 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4d43591-2a09-426b-b8e3-9f21dd334f70-operator-scripts\") pod \"barbican-db-create-mphtc\" (UID: \"a4d43591-2a09-426b-b8e3-9f21dd334f70\") " pod="openstack/barbican-db-create-mphtc" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.303153 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9f98e55-e80f-4615-b98a-fffbfc9d19f1-operator-scripts\") pod \"cinder-5b8e-account-create-update-2jw4d\" (UID: \"b9f98e55-e80f-4615-b98a-fffbfc9d19f1\") " pod="openstack/cinder-5b8e-account-create-update-2jw4d" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.303179 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkqjq\" (UniqueName: \"kubernetes.io/projected/a4d43591-2a09-426b-b8e3-9f21dd334f70-kube-api-access-hkqjq\") pod \"barbican-db-create-mphtc\" (UID: \"a4d43591-2a09-426b-b8e3-9f21dd334f70\") " pod="openstack/barbican-db-create-mphtc" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.303212 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5sx8\" (UniqueName: \"kubernetes.io/projected/b9f98e55-e80f-4615-b98a-fffbfc9d19f1-kube-api-access-c5sx8\") pod \"cinder-5b8e-account-create-update-2jw4d\" (UID: \"b9f98e55-e80f-4615-b98a-fffbfc9d19f1\") " pod="openstack/cinder-5b8e-account-create-update-2jw4d" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.302670 4861 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/74150e86-16c7-41a4-819a-21dac1e87ef2-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "74150e86-16c7-41a4-819a-21dac1e87ef2" (UID: "74150e86-16c7-41a4-819a-21dac1e87ef2"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.303330 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74150e86-16c7-41a4-819a-21dac1e87ef2-var-run" (OuterVolumeSpecName: "var-run") pod "74150e86-16c7-41a4-819a-21dac1e87ef2" (UID: "74150e86-16c7-41a4-819a-21dac1e87ef2"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.303380 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74150e86-16c7-41a4-819a-21dac1e87ef2-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "74150e86-16c7-41a4-819a-21dac1e87ef2" (UID: "74150e86-16c7-41a4-819a-21dac1e87ef2"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.303715 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74150e86-16c7-41a4-819a-21dac1e87ef2-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "74150e86-16c7-41a4-819a-21dac1e87ef2" (UID: "74150e86-16c7-41a4-819a-21dac1e87ef2"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.303971 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74150e86-16c7-41a4-819a-21dac1e87ef2-scripts" (OuterVolumeSpecName: "scripts") pod "74150e86-16c7-41a4-819a-21dac1e87ef2" (UID: "74150e86-16c7-41a4-819a-21dac1e87ef2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.311558 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74150e86-16c7-41a4-819a-21dac1e87ef2-kube-api-access-qdtrr" (OuterVolumeSpecName: "kube-api-access-qdtrr") pod "74150e86-16c7-41a4-819a-21dac1e87ef2" (UID: "74150e86-16c7-41a4-819a-21dac1e87ef2"). InnerVolumeSpecName "kube-api-access-qdtrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.367389 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-xw5x9"] Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.368289 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-xw5x9" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.377511 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-81eb-account-create-update-7g4tb"] Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.382931 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-81eb-account-create-update-7g4tb" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.385286 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.404871 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4d43591-2a09-426b-b8e3-9f21dd334f70-operator-scripts\") pod \"barbican-db-create-mphtc\" (UID: \"a4d43591-2a09-426b-b8e3-9f21dd334f70\") " pod="openstack/barbican-db-create-mphtc" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.404943 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdpr9\" (UniqueName: \"kubernetes.io/projected/df6f3661-24bc-49dd-88fe-3bcf93ea1039-kube-api-access-pdpr9\") pod \"neutron-db-create-xw5x9\" (UID: \"df6f3661-24bc-49dd-88fe-3bcf93ea1039\") " pod="openstack/neutron-db-create-xw5x9" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.405066 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df6f3661-24bc-49dd-88fe-3bcf93ea1039-operator-scripts\") pod \"neutron-db-create-xw5x9\" (UID: \"df6f3661-24bc-49dd-88fe-3bcf93ea1039\") " pod="openstack/neutron-db-create-xw5x9" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.405128 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9f98e55-e80f-4615-b98a-fffbfc9d19f1-operator-scripts\") pod \"cinder-5b8e-account-create-update-2jw4d\" (UID: \"b9f98e55-e80f-4615-b98a-fffbfc9d19f1\") " pod="openstack/cinder-5b8e-account-create-update-2jw4d" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.405153 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-hkqjq\" (UniqueName: \"kubernetes.io/projected/a4d43591-2a09-426b-b8e3-9f21dd334f70-kube-api-access-hkqjq\") pod \"barbican-db-create-mphtc\" (UID: \"a4d43591-2a09-426b-b8e3-9f21dd334f70\") " pod="openstack/barbican-db-create-mphtc" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.405192 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5sx8\" (UniqueName: \"kubernetes.io/projected/b9f98e55-e80f-4615-b98a-fffbfc9d19f1-kube-api-access-c5sx8\") pod \"cinder-5b8e-account-create-update-2jw4d\" (UID: \"b9f98e55-e80f-4615-b98a-fffbfc9d19f1\") " pod="openstack/cinder-5b8e-account-create-update-2jw4d" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.405256 4861 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/74150e86-16c7-41a4-819a-21dac1e87ef2-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.405275 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdtrr\" (UniqueName: \"kubernetes.io/projected/74150e86-16c7-41a4-819a-21dac1e87ef2-kube-api-access-qdtrr\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.405615 4861 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/74150e86-16c7-41a4-819a-21dac1e87ef2-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.405627 4861 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/74150e86-16c7-41a4-819a-21dac1e87ef2-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.405636 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74150e86-16c7-41a4-819a-21dac1e87ef2-scripts\") on node 
\"crc\" DevicePath \"\"" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.405652 4861 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/74150e86-16c7-41a4-819a-21dac1e87ef2-var-run\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.407040 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4d43591-2a09-426b-b8e3-9f21dd334f70-operator-scripts\") pod \"barbican-db-create-mphtc\" (UID: \"a4d43591-2a09-426b-b8e3-9f21dd334f70\") " pod="openstack/barbican-db-create-mphtc" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.410609 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9f98e55-e80f-4615-b98a-fffbfc9d19f1-operator-scripts\") pod \"cinder-5b8e-account-create-update-2jw4d\" (UID: \"b9f98e55-e80f-4615-b98a-fffbfc9d19f1\") " pod="openstack/cinder-5b8e-account-create-update-2jw4d" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.438550 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-xw5x9"] Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.449342 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkqjq\" (UniqueName: \"kubernetes.io/projected/a4d43591-2a09-426b-b8e3-9f21dd334f70-kube-api-access-hkqjq\") pod \"barbican-db-create-mphtc\" (UID: \"a4d43591-2a09-426b-b8e3-9f21dd334f70\") " pod="openstack/barbican-db-create-mphtc" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.473856 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5sx8\" (UniqueName: \"kubernetes.io/projected/b9f98e55-e80f-4615-b98a-fffbfc9d19f1-kube-api-access-c5sx8\") pod \"cinder-5b8e-account-create-update-2jw4d\" (UID: \"b9f98e55-e80f-4615-b98a-fffbfc9d19f1\") " 
pod="openstack/cinder-5b8e-account-create-update-2jw4d" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.490801 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-xs25z"] Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.501670 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-xs25z" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.509621 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df6f3661-24bc-49dd-88fe-3bcf93ea1039-operator-scripts\") pod \"neutron-db-create-xw5x9\" (UID: \"df6f3661-24bc-49dd-88fe-3bcf93ea1039\") " pod="openstack/neutron-db-create-xw5x9" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.509757 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.509938 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bsjbt" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.510152 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.510173 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrl8x\" (UniqueName: \"kubernetes.io/projected/085bf4b3-5af6-47a0-93b3-0d604f524213-kube-api-access-hrl8x\") pod \"barbican-81eb-account-create-update-7g4tb\" (UID: \"085bf4b3-5af6-47a0-93b3-0d604f524213\") " pod="openstack/barbican-81eb-account-create-update-7g4tb" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.510260 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.510403 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/085bf4b3-5af6-47a0-93b3-0d604f524213-operator-scripts\") pod \"barbican-81eb-account-create-update-7g4tb\" (UID: \"085bf4b3-5af6-47a0-93b3-0d604f524213\") " pod="openstack/barbican-81eb-account-create-update-7g4tb" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.510495 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdpr9\" (UniqueName: \"kubernetes.io/projected/df6f3661-24bc-49dd-88fe-3bcf93ea1039-kube-api-access-pdpr9\") pod \"neutron-db-create-xw5x9\" (UID: \"df6f3661-24bc-49dd-88fe-3bcf93ea1039\") " pod="openstack/neutron-db-create-xw5x9" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.510434 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5b8e-account-create-update-2jw4d" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.514639 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-xs25z"] Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.525359 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-81eb-account-create-update-7g4tb"] Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.525874 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df6f3661-24bc-49dd-88fe-3bcf93ea1039-operator-scripts\") pod \"neutron-db-create-xw5x9\" (UID: \"df6f3661-24bc-49dd-88fe-3bcf93ea1039\") " pod="openstack/neutron-db-create-xw5x9" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.543818 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdpr9\" (UniqueName: \"kubernetes.io/projected/df6f3661-24bc-49dd-88fe-3bcf93ea1039-kube-api-access-pdpr9\") pod \"neutron-db-create-xw5x9\" (UID: \"df6f3661-24bc-49dd-88fe-3bcf93ea1039\") " 
pod="openstack/neutron-db-create-xw5x9" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.544851 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-cd89-account-create-update-n89tm"] Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.545941 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cd89-account-create-update-n89tm" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.550014 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.553973 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cd89-account-create-update-n89tm"] Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.611831 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98fb4\" (UniqueName: \"kubernetes.io/projected/5d97e359-59fa-474e-9ee0-7306cf96cb15-kube-api-access-98fb4\") pod \"neutron-cd89-account-create-update-n89tm\" (UID: \"5d97e359-59fa-474e-9ee0-7306cf96cb15\") " pod="openstack/neutron-cd89-account-create-update-n89tm" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.612213 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d97e359-59fa-474e-9ee0-7306cf96cb15-operator-scripts\") pod \"neutron-cd89-account-create-update-n89tm\" (UID: \"5d97e359-59fa-474e-9ee0-7306cf96cb15\") " pod="openstack/neutron-cd89-account-create-update-n89tm" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.612282 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29995010-b4b2-4d35-95ed-8a7205e9228b-combined-ca-bundle\") pod \"keystone-db-sync-xs25z\" (UID: \"29995010-b4b2-4d35-95ed-8a7205e9228b\") " 
pod="openstack/keystone-db-sync-xs25z" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.612313 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/085bf4b3-5af6-47a0-93b3-0d604f524213-operator-scripts\") pod \"barbican-81eb-account-create-update-7g4tb\" (UID: \"085bf4b3-5af6-47a0-93b3-0d604f524213\") " pod="openstack/barbican-81eb-account-create-update-7g4tb" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.612588 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29995010-b4b2-4d35-95ed-8a7205e9228b-config-data\") pod \"keystone-db-sync-xs25z\" (UID: \"29995010-b4b2-4d35-95ed-8a7205e9228b\") " pod="openstack/keystone-db-sync-xs25z" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.613026 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/085bf4b3-5af6-47a0-93b3-0d604f524213-operator-scripts\") pod \"barbican-81eb-account-create-update-7g4tb\" (UID: \"085bf4b3-5af6-47a0-93b3-0d604f524213\") " pod="openstack/barbican-81eb-account-create-update-7g4tb" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.613065 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4rxb\" (UniqueName: \"kubernetes.io/projected/29995010-b4b2-4d35-95ed-8a7205e9228b-kube-api-access-j4rxb\") pod \"keystone-db-sync-xs25z\" (UID: \"29995010-b4b2-4d35-95ed-8a7205e9228b\") " pod="openstack/keystone-db-sync-xs25z" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.613278 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrl8x\" (UniqueName: \"kubernetes.io/projected/085bf4b3-5af6-47a0-93b3-0d604f524213-kube-api-access-hrl8x\") pod \"barbican-81eb-account-create-update-7g4tb\" (UID: 
\"085bf4b3-5af6-47a0-93b3-0d604f524213\") " pod="openstack/barbican-81eb-account-create-update-7g4tb" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.633194 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrl8x\" (UniqueName: \"kubernetes.io/projected/085bf4b3-5af6-47a0-93b3-0d604f524213-kube-api-access-hrl8x\") pod \"barbican-81eb-account-create-update-7g4tb\" (UID: \"085bf4b3-5af6-47a0-93b3-0d604f524213\") " pod="openstack/barbican-81eb-account-create-update-7g4tb" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.678873 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mphtc" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.696008 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-xw5x9" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.715154 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-81eb-account-create-update-7g4tb" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.716635 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4rxb\" (UniqueName: \"kubernetes.io/projected/29995010-b4b2-4d35-95ed-8a7205e9228b-kube-api-access-j4rxb\") pod \"keystone-db-sync-xs25z\" (UID: \"29995010-b4b2-4d35-95ed-8a7205e9228b\") " pod="openstack/keystone-db-sync-xs25z" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.716795 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98fb4\" (UniqueName: \"kubernetes.io/projected/5d97e359-59fa-474e-9ee0-7306cf96cb15-kube-api-access-98fb4\") pod \"neutron-cd89-account-create-update-n89tm\" (UID: \"5d97e359-59fa-474e-9ee0-7306cf96cb15\") " pod="openstack/neutron-cd89-account-create-update-n89tm" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.716834 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d97e359-59fa-474e-9ee0-7306cf96cb15-operator-scripts\") pod \"neutron-cd89-account-create-update-n89tm\" (UID: \"5d97e359-59fa-474e-9ee0-7306cf96cb15\") " pod="openstack/neutron-cd89-account-create-update-n89tm" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.716864 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29995010-b4b2-4d35-95ed-8a7205e9228b-combined-ca-bundle\") pod \"keystone-db-sync-xs25z\" (UID: \"29995010-b4b2-4d35-95ed-8a7205e9228b\") " pod="openstack/keystone-db-sync-xs25z" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.716903 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29995010-b4b2-4d35-95ed-8a7205e9228b-config-data\") pod \"keystone-db-sync-xs25z\" (UID: \"29995010-b4b2-4d35-95ed-8a7205e9228b\") " pod="openstack/keystone-db-sync-xs25z" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.717817 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d97e359-59fa-474e-9ee0-7306cf96cb15-operator-scripts\") pod \"neutron-cd89-account-create-update-n89tm\" (UID: \"5d97e359-59fa-474e-9ee0-7306cf96cb15\") " pod="openstack/neutron-cd89-account-create-update-n89tm" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.723152 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29995010-b4b2-4d35-95ed-8a7205e9228b-combined-ca-bundle\") pod \"keystone-db-sync-xs25z\" (UID: \"29995010-b4b2-4d35-95ed-8a7205e9228b\") " pod="openstack/keystone-db-sync-xs25z" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.723291 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/29995010-b4b2-4d35-95ed-8a7205e9228b-config-data\") pod \"keystone-db-sync-xs25z\" (UID: \"29995010-b4b2-4d35-95ed-8a7205e9228b\") " pod="openstack/keystone-db-sync-xs25z" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.738936 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98fb4\" (UniqueName: \"kubernetes.io/projected/5d97e359-59fa-474e-9ee0-7306cf96cb15-kube-api-access-98fb4\") pod \"neutron-cd89-account-create-update-n89tm\" (UID: \"5d97e359-59fa-474e-9ee0-7306cf96cb15\") " pod="openstack/neutron-cd89-account-create-update-n89tm" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.740529 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4rxb\" (UniqueName: \"kubernetes.io/projected/29995010-b4b2-4d35-95ed-8a7205e9228b-kube-api-access-j4rxb\") pod \"keystone-db-sync-xs25z\" (UID: \"29995010-b4b2-4d35-95ed-8a7205e9228b\") " pod="openstack/keystone-db-sync-xs25z" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.760304 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q996h-config-5qscf" event={"ID":"74150e86-16c7-41a4-819a-21dac1e87ef2","Type":"ContainerDied","Data":"4c5b85c85dfda9c8751b6000be27362c3463c2925e33f695c9c6a4ebbc701fc0"} Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.760347 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c5b85c85dfda9c8751b6000be27362c3463c2925e33f695c9c6a4ebbc701fc0" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.760412 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-q996h-config-5qscf" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.763601 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-mfh4r"] Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.776693 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f7c9197d-43d5-4c72-a7c3-c2e435368dd2","Type":"ContainerStarted","Data":"70048cfeceaa1bd3b11260d15e755776cfbd5fd6d7ef0d9d90e3d8c6f4261932"} Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.776746 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-mfh4r"] Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.864796 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-xs25z" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.880184 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cd89-account-create-update-n89tm" Feb 19 13:29:42 crc kubenswrapper[4861]: I0219 13:29:42.898759 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4qhvt"] Feb 19 13:29:43 crc kubenswrapper[4861]: I0219 13:29:43.077173 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5b8e-account-create-update-2jw4d"] Feb 19 13:29:43 crc kubenswrapper[4861]: I0219 13:29:43.147764 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-mphtc"] Feb 19 13:29:43 crc kubenswrapper[4861]: I0219 13:29:43.247033 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-xw5x9"] Feb 19 13:29:43 crc kubenswrapper[4861]: I0219 13:29:43.361951 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-81eb-account-create-update-7g4tb"] Feb 19 13:29:43 crc kubenswrapper[4861]: I0219 13:29:43.372685 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-q996h-config-5qscf"] Feb 19 13:29:43 crc kubenswrapper[4861]: I0219 13:29:43.378945 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-q996h-config-5qscf"] Feb 19 13:29:43 crc kubenswrapper[4861]: I0219 13:29:43.485145 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-xs25z"] Feb 19 13:29:43 crc kubenswrapper[4861]: I0219 13:29:43.591908 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cd89-account-create-update-n89tm"] Feb 19 13:29:43 crc kubenswrapper[4861]: I0219 13:29:43.796190 4861 generic.go:334] "Generic (PLEG): container finished" podID="9a0a1119-88df-47db-9b04-87d908da605d" containerID="93d6530faa073831764d3b6a6e2012de0e7a58e9406a03d16933dc2ce80273b1" exitCode=0 Feb 19 13:29:43 crc kubenswrapper[4861]: I0219 13:29:43.796668 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4qhvt" 
event={"ID":"9a0a1119-88df-47db-9b04-87d908da605d","Type":"ContainerDied","Data":"93d6530faa073831764d3b6a6e2012de0e7a58e9406a03d16933dc2ce80273b1"} Feb 19 13:29:43 crc kubenswrapper[4861]: I0219 13:29:43.796694 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4qhvt" event={"ID":"9a0a1119-88df-47db-9b04-87d908da605d","Type":"ContainerStarted","Data":"485026e08eeeb301f62464f438aa58015ea9a9a9d636fb697b5239872d00606d"} Feb 19 13:29:43 crc kubenswrapper[4861]: I0219 13:29:43.804879 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xs25z" event={"ID":"29995010-b4b2-4d35-95ed-8a7205e9228b","Type":"ContainerStarted","Data":"b406df54b8854671fbd5959954ecb1097c831c1c7d43c5ab774437833cf1be5e"} Feb 19 13:29:43 crc kubenswrapper[4861]: I0219 13:29:43.812245 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-81eb-account-create-update-7g4tb" event={"ID":"085bf4b3-5af6-47a0-93b3-0d604f524213","Type":"ContainerStarted","Data":"a79d8d5990c909b4d1ebda07a40d4624a84c7083f08f08a3ff52b26c7e1f62c9"} Feb 19 13:29:43 crc kubenswrapper[4861]: I0219 13:29:43.814029 4861 generic.go:334] "Generic (PLEG): container finished" podID="b9f98e55-e80f-4615-b98a-fffbfc9d19f1" containerID="83eeb62461cea6c158459185de1625c29a6f4905f906e0bfe7e66334571391f7" exitCode=0 Feb 19 13:29:43 crc kubenswrapper[4861]: I0219 13:29:43.814081 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5b8e-account-create-update-2jw4d" event={"ID":"b9f98e55-e80f-4615-b98a-fffbfc9d19f1","Type":"ContainerDied","Data":"83eeb62461cea6c158459185de1625c29a6f4905f906e0bfe7e66334571391f7"} Feb 19 13:29:43 crc kubenswrapper[4861]: I0219 13:29:43.814097 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5b8e-account-create-update-2jw4d" event={"ID":"b9f98e55-e80f-4615-b98a-fffbfc9d19f1","Type":"ContainerStarted","Data":"ad367ef0e0f45665e00c31100515a3c8e87d80ec01a8c0e788e48ed7c65d4661"} 
Feb 19 13:29:43 crc kubenswrapper[4861]: I0219 13:29:43.817161 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xw5x9" event={"ID":"df6f3661-24bc-49dd-88fe-3bcf93ea1039","Type":"ContainerStarted","Data":"259b1c95cde1092715a495eb5ad69de70e98e50f63e7ee31669daddc616bac93"} Feb 19 13:29:43 crc kubenswrapper[4861]: I0219 13:29:43.820648 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cd89-account-create-update-n89tm" event={"ID":"5d97e359-59fa-474e-9ee0-7306cf96cb15","Type":"ContainerStarted","Data":"fc47256420a0fbcb9f2cdf78112f6bdce73d9d5b742aa32679eafd2f7f1c0b07"} Feb 19 13:29:43 crc kubenswrapper[4861]: I0219 13:29:43.834182 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mphtc" event={"ID":"a4d43591-2a09-426b-b8e3-9f21dd334f70","Type":"ContainerStarted","Data":"dcda4a7464c9747fcc8ecaf607fb01b87db1ea934e10d8d0b6eca96353e2a089"} Feb 19 13:29:43 crc kubenswrapper[4861]: I0219 13:29:43.988530 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74150e86-16c7-41a4-819a-21dac1e87ef2" path="/var/lib/kubelet/pods/74150e86-16c7-41a4-819a-21dac1e87ef2/volumes" Feb 19 13:29:43 crc kubenswrapper[4861]: I0219 13:29:43.989407 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d09c18e7-59e1-4960-96d9-edfe82f826b3" path="/var/lib/kubelet/pods/d09c18e7-59e1-4960-96d9-edfe82f826b3/volumes" Feb 19 13:29:44 crc kubenswrapper[4861]: I0219 13:29:44.452262 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-q996h" Feb 19 13:29:44 crc kubenswrapper[4861]: I0219 13:29:44.854532 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f7c9197d-43d5-4c72-a7c3-c2e435368dd2","Type":"ContainerStarted","Data":"471372c27c39aaaf08c9ce1b7cd61b51c8ece5bab05fb8d039a85d6af20abc96"} Feb 19 13:29:44 crc kubenswrapper[4861]: I0219 13:29:44.854893 4861 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f7c9197d-43d5-4c72-a7c3-c2e435368dd2","Type":"ContainerStarted","Data":"b01b12c10d17a6ede4967ef04b12864fea0898dee251a363634450764aacdd72"} Feb 19 13:29:44 crc kubenswrapper[4861]: I0219 13:29:44.854907 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f7c9197d-43d5-4c72-a7c3-c2e435368dd2","Type":"ContainerStarted","Data":"f9f19bdef3fa838ce4b4f8189100aaf397a995f17cf865eefb4002eeec03180e"} Feb 19 13:29:44 crc kubenswrapper[4861]: I0219 13:29:44.854917 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f7c9197d-43d5-4c72-a7c3-c2e435368dd2","Type":"ContainerStarted","Data":"65e1cc2bfc85c23034b910f4d14189e38743412528a40dc9a26a2ca6c1041afb"} Feb 19 13:29:44 crc kubenswrapper[4861]: I0219 13:29:44.869926 4861 generic.go:334] "Generic (PLEG): container finished" podID="085bf4b3-5af6-47a0-93b3-0d604f524213" containerID="a4018bf193243f2b017e4cbd10a8187b15394d7ed7645d11eae387d69ed13523" exitCode=0 Feb 19 13:29:44 crc kubenswrapper[4861]: I0219 13:29:44.870051 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-81eb-account-create-update-7g4tb" event={"ID":"085bf4b3-5af6-47a0-93b3-0d604f524213","Type":"ContainerDied","Data":"a4018bf193243f2b017e4cbd10a8187b15394d7ed7645d11eae387d69ed13523"} Feb 19 13:29:44 crc kubenswrapper[4861]: I0219 13:29:44.889788 4861 generic.go:334] "Generic (PLEG): container finished" podID="5d97e359-59fa-474e-9ee0-7306cf96cb15" containerID="9b2efdec34339881eb22ca3a3a2436a224dbef6c449beeb02a6654e5cd184326" exitCode=0 Feb 19 13:29:44 crc kubenswrapper[4861]: I0219 13:29:44.889990 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cd89-account-create-update-n89tm" event={"ID":"5d97e359-59fa-474e-9ee0-7306cf96cb15","Type":"ContainerDied","Data":"9b2efdec34339881eb22ca3a3a2436a224dbef6c449beeb02a6654e5cd184326"} 
Feb 19 13:29:44 crc kubenswrapper[4861]: I0219 13:29:44.904238 4861 generic.go:334] "Generic (PLEG): container finished" podID="df6f3661-24bc-49dd-88fe-3bcf93ea1039" containerID="790dd63b441340ac26f9e12250b75100a8f825b05c2e22a06e124c3660ffb802" exitCode=0 Feb 19 13:29:44 crc kubenswrapper[4861]: I0219 13:29:44.904322 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xw5x9" event={"ID":"df6f3661-24bc-49dd-88fe-3bcf93ea1039","Type":"ContainerDied","Data":"790dd63b441340ac26f9e12250b75100a8f825b05c2e22a06e124c3660ffb802"} Feb 19 13:29:44 crc kubenswrapper[4861]: I0219 13:29:44.923690 4861 generic.go:334] "Generic (PLEG): container finished" podID="a4d43591-2a09-426b-b8e3-9f21dd334f70" containerID="3235b8e7854789d5e2993065dba25d19975febab696929922f11635bc158c664" exitCode=0 Feb 19 13:29:44 crc kubenswrapper[4861]: I0219 13:29:44.923934 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mphtc" event={"ID":"a4d43591-2a09-426b-b8e3-9f21dd334f70","Type":"ContainerDied","Data":"3235b8e7854789d5e2993065dba25d19975febab696929922f11635bc158c664"} Feb 19 13:29:45 crc kubenswrapper[4861]: I0219 13:29:45.545292 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5b8e-account-create-update-2jw4d" Feb 19 13:29:45 crc kubenswrapper[4861]: I0219 13:29:45.553112 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-4qhvt" Feb 19 13:29:45 crc kubenswrapper[4861]: I0219 13:29:45.573673 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5sx8\" (UniqueName: \"kubernetes.io/projected/b9f98e55-e80f-4615-b98a-fffbfc9d19f1-kube-api-access-c5sx8\") pod \"b9f98e55-e80f-4615-b98a-fffbfc9d19f1\" (UID: \"b9f98e55-e80f-4615-b98a-fffbfc9d19f1\") " Feb 19 13:29:45 crc kubenswrapper[4861]: I0219 13:29:45.573718 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cmbn\" (UniqueName: \"kubernetes.io/projected/9a0a1119-88df-47db-9b04-87d908da605d-kube-api-access-5cmbn\") pod \"9a0a1119-88df-47db-9b04-87d908da605d\" (UID: \"9a0a1119-88df-47db-9b04-87d908da605d\") " Feb 19 13:29:45 crc kubenswrapper[4861]: I0219 13:29:45.573740 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9f98e55-e80f-4615-b98a-fffbfc9d19f1-operator-scripts\") pod \"b9f98e55-e80f-4615-b98a-fffbfc9d19f1\" (UID: \"b9f98e55-e80f-4615-b98a-fffbfc9d19f1\") " Feb 19 13:29:45 crc kubenswrapper[4861]: I0219 13:29:45.573800 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a0a1119-88df-47db-9b04-87d908da605d-operator-scripts\") pod \"9a0a1119-88df-47db-9b04-87d908da605d\" (UID: \"9a0a1119-88df-47db-9b04-87d908da605d\") " Feb 19 13:29:45 crc kubenswrapper[4861]: I0219 13:29:45.576531 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9f98e55-e80f-4615-b98a-fffbfc9d19f1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b9f98e55-e80f-4615-b98a-fffbfc9d19f1" (UID: "b9f98e55-e80f-4615-b98a-fffbfc9d19f1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:45 crc kubenswrapper[4861]: I0219 13:29:45.574772 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a0a1119-88df-47db-9b04-87d908da605d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9a0a1119-88df-47db-9b04-87d908da605d" (UID: "9a0a1119-88df-47db-9b04-87d908da605d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:45 crc kubenswrapper[4861]: I0219 13:29:45.583780 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9f98e55-e80f-4615-b98a-fffbfc9d19f1-kube-api-access-c5sx8" (OuterVolumeSpecName: "kube-api-access-c5sx8") pod "b9f98e55-e80f-4615-b98a-fffbfc9d19f1" (UID: "b9f98e55-e80f-4615-b98a-fffbfc9d19f1"). InnerVolumeSpecName "kube-api-access-c5sx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:29:45 crc kubenswrapper[4861]: I0219 13:29:45.599562 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a0a1119-88df-47db-9b04-87d908da605d-kube-api-access-5cmbn" (OuterVolumeSpecName: "kube-api-access-5cmbn") pod "9a0a1119-88df-47db-9b04-87d908da605d" (UID: "9a0a1119-88df-47db-9b04-87d908da605d"). InnerVolumeSpecName "kube-api-access-5cmbn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:29:45 crc kubenswrapper[4861]: I0219 13:29:45.676855 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a0a1119-88df-47db-9b04-87d908da605d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:45 crc kubenswrapper[4861]: I0219 13:29:45.676891 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5sx8\" (UniqueName: \"kubernetes.io/projected/b9f98e55-e80f-4615-b98a-fffbfc9d19f1-kube-api-access-c5sx8\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:45 crc kubenswrapper[4861]: I0219 13:29:45.676902 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cmbn\" (UniqueName: \"kubernetes.io/projected/9a0a1119-88df-47db-9b04-87d908da605d-kube-api-access-5cmbn\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:45 crc kubenswrapper[4861]: I0219 13:29:45.676910 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9f98e55-e80f-4615-b98a-fffbfc9d19f1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:45 crc kubenswrapper[4861]: I0219 13:29:45.939009 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4qhvt" event={"ID":"9a0a1119-88df-47db-9b04-87d908da605d","Type":"ContainerDied","Data":"485026e08eeeb301f62464f438aa58015ea9a9a9d636fb697b5239872d00606d"} Feb 19 13:29:45 crc kubenswrapper[4861]: I0219 13:29:45.939060 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="485026e08eeeb301f62464f438aa58015ea9a9a9d636fb697b5239872d00606d" Feb 19 13:29:45 crc kubenswrapper[4861]: I0219 13:29:45.939128 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4qhvt" Feb 19 13:29:45 crc kubenswrapper[4861]: I0219 13:29:45.946108 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-5b8e-account-create-update-2jw4d" Feb 19 13:29:45 crc kubenswrapper[4861]: I0219 13:29:45.946536 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5b8e-account-create-update-2jw4d" event={"ID":"b9f98e55-e80f-4615-b98a-fffbfc9d19f1","Type":"ContainerDied","Data":"ad367ef0e0f45665e00c31100515a3c8e87d80ec01a8c0e788e48ed7c65d4661"} Feb 19 13:29:45 crc kubenswrapper[4861]: I0219 13:29:45.946574 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad367ef0e0f45665e00c31100515a3c8e87d80ec01a8c0e788e48ed7c65d4661" Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.596261 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-81eb-account-create-update-7g4tb" Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.605858 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-xw5x9" Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.615756 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mphtc" Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.662169 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cd89-account-create-update-n89tm" Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.706008 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4d43591-2a09-426b-b8e3-9f21dd334f70-operator-scripts\") pod \"a4d43591-2a09-426b-b8e3-9f21dd334f70\" (UID: \"a4d43591-2a09-426b-b8e3-9f21dd334f70\") " Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.706072 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkqjq\" (UniqueName: \"kubernetes.io/projected/a4d43591-2a09-426b-b8e3-9f21dd334f70-kube-api-access-hkqjq\") pod \"a4d43591-2a09-426b-b8e3-9f21dd334f70\" (UID: \"a4d43591-2a09-426b-b8e3-9f21dd334f70\") " Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.706120 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrl8x\" (UniqueName: \"kubernetes.io/projected/085bf4b3-5af6-47a0-93b3-0d604f524213-kube-api-access-hrl8x\") pod \"085bf4b3-5af6-47a0-93b3-0d604f524213\" (UID: \"085bf4b3-5af6-47a0-93b3-0d604f524213\") " Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.706804 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d97e359-59fa-474e-9ee0-7306cf96cb15-operator-scripts\") pod \"5d97e359-59fa-474e-9ee0-7306cf96cb15\" (UID: \"5d97e359-59fa-474e-9ee0-7306cf96cb15\") " Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.706938 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df6f3661-24bc-49dd-88fe-3bcf93ea1039-operator-scripts\") pod \"df6f3661-24bc-49dd-88fe-3bcf93ea1039\" (UID: \"df6f3661-24bc-49dd-88fe-3bcf93ea1039\") " Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.706982 4861 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4d43591-2a09-426b-b8e3-9f21dd334f70-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a4d43591-2a09-426b-b8e3-9f21dd334f70" (UID: "a4d43591-2a09-426b-b8e3-9f21dd334f70"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.706994 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdpr9\" (UniqueName: \"kubernetes.io/projected/df6f3661-24bc-49dd-88fe-3bcf93ea1039-kube-api-access-pdpr9\") pod \"df6f3661-24bc-49dd-88fe-3bcf93ea1039\" (UID: \"df6f3661-24bc-49dd-88fe-3bcf93ea1039\") " Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.707038 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98fb4\" (UniqueName: \"kubernetes.io/projected/5d97e359-59fa-474e-9ee0-7306cf96cb15-kube-api-access-98fb4\") pod \"5d97e359-59fa-474e-9ee0-7306cf96cb15\" (UID: \"5d97e359-59fa-474e-9ee0-7306cf96cb15\") " Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.707065 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/085bf4b3-5af6-47a0-93b3-0d604f524213-operator-scripts\") pod \"085bf4b3-5af6-47a0-93b3-0d604f524213\" (UID: \"085bf4b3-5af6-47a0-93b3-0d604f524213\") " Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.707535 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4d43591-2a09-426b-b8e3-9f21dd334f70-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.707663 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d97e359-59fa-474e-9ee0-7306cf96cb15-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"5d97e359-59fa-474e-9ee0-7306cf96cb15" (UID: "5d97e359-59fa-474e-9ee0-7306cf96cb15"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.708549 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df6f3661-24bc-49dd-88fe-3bcf93ea1039-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "df6f3661-24bc-49dd-88fe-3bcf93ea1039" (UID: "df6f3661-24bc-49dd-88fe-3bcf93ea1039"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.711053 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/085bf4b3-5af6-47a0-93b3-0d604f524213-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "085bf4b3-5af6-47a0-93b3-0d604f524213" (UID: "085bf4b3-5af6-47a0-93b3-0d604f524213"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.712375 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4d43591-2a09-426b-b8e3-9f21dd334f70-kube-api-access-hkqjq" (OuterVolumeSpecName: "kube-api-access-hkqjq") pod "a4d43591-2a09-426b-b8e3-9f21dd334f70" (UID: "a4d43591-2a09-426b-b8e3-9f21dd334f70"). InnerVolumeSpecName "kube-api-access-hkqjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.712453 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d97e359-59fa-474e-9ee0-7306cf96cb15-kube-api-access-98fb4" (OuterVolumeSpecName: "kube-api-access-98fb4") pod "5d97e359-59fa-474e-9ee0-7306cf96cb15" (UID: "5d97e359-59fa-474e-9ee0-7306cf96cb15"). InnerVolumeSpecName "kube-api-access-98fb4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.714122 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/085bf4b3-5af6-47a0-93b3-0d604f524213-kube-api-access-hrl8x" (OuterVolumeSpecName: "kube-api-access-hrl8x") pod "085bf4b3-5af6-47a0-93b3-0d604f524213" (UID: "085bf4b3-5af6-47a0-93b3-0d604f524213"). InnerVolumeSpecName "kube-api-access-hrl8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.731251 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df6f3661-24bc-49dd-88fe-3bcf93ea1039-kube-api-access-pdpr9" (OuterVolumeSpecName: "kube-api-access-pdpr9") pod "df6f3661-24bc-49dd-88fe-3bcf93ea1039" (UID: "df6f3661-24bc-49dd-88fe-3bcf93ea1039"). InnerVolumeSpecName "kube-api-access-pdpr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.808971 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df6f3661-24bc-49dd-88fe-3bcf93ea1039-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.809215 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdpr9\" (UniqueName: \"kubernetes.io/projected/df6f3661-24bc-49dd-88fe-3bcf93ea1039-kube-api-access-pdpr9\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.809228 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98fb4\" (UniqueName: \"kubernetes.io/projected/5d97e359-59fa-474e-9ee0-7306cf96cb15-kube-api-access-98fb4\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.809236 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/085bf4b3-5af6-47a0-93b3-0d604f524213-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.809245 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkqjq\" (UniqueName: \"kubernetes.io/projected/a4d43591-2a09-426b-b8e3-9f21dd334f70-kube-api-access-hkqjq\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.809253 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrl8x\" (UniqueName: \"kubernetes.io/projected/085bf4b3-5af6-47a0-93b3-0d604f524213-kube-api-access-hrl8x\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.809261 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d97e359-59fa-474e-9ee0-7306cf96cb15-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.954001 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cd89-account-create-update-n89tm" event={"ID":"5d97e359-59fa-474e-9ee0-7306cf96cb15","Type":"ContainerDied","Data":"fc47256420a0fbcb9f2cdf78112f6bdce73d9d5b742aa32679eafd2f7f1c0b07"} Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.954018 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cd89-account-create-update-n89tm" Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.954038 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc47256420a0fbcb9f2cdf78112f6bdce73d9d5b742aa32679eafd2f7f1c0b07" Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.955247 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xw5x9" event={"ID":"df6f3661-24bc-49dd-88fe-3bcf93ea1039","Type":"ContainerDied","Data":"259b1c95cde1092715a495eb5ad69de70e98e50f63e7ee31669daddc616bac93"} Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.955273 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="259b1c95cde1092715a495eb5ad69de70e98e50f63e7ee31669daddc616bac93" Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.955305 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-xw5x9" Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.957693 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mphtc" event={"ID":"a4d43591-2a09-426b-b8e3-9f21dd334f70","Type":"ContainerDied","Data":"dcda4a7464c9747fcc8ecaf607fb01b87db1ea934e10d8d0b6eca96353e2a089"} Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.957713 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcda4a7464c9747fcc8ecaf607fb01b87db1ea934e10d8d0b6eca96353e2a089" Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.957809 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-mphtc" Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.966340 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f7c9197d-43d5-4c72-a7c3-c2e435368dd2","Type":"ContainerStarted","Data":"5695ae030dccbdeb118599a52129cc9b7894cfbff564817156a7fdbf305aa0f0"} Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.967697 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-81eb-account-create-update-7g4tb" event={"ID":"085bf4b3-5af6-47a0-93b3-0d604f524213","Type":"ContainerDied","Data":"a79d8d5990c909b4d1ebda07a40d4624a84c7083f08f08a3ff52b26c7e1f62c9"} Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.967725 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-81eb-account-create-update-7g4tb" Feb 19 13:29:46 crc kubenswrapper[4861]: I0219 13:29:46.967725 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a79d8d5990c909b4d1ebda07a40d4624a84c7083f08f08a3ff52b26c7e1f62c9" Feb 19 13:29:47 crc kubenswrapper[4861]: I0219 13:29:47.767934 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-m8zt6"] Feb 19 13:29:47 crc kubenswrapper[4861]: E0219 13:29:47.768285 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9f98e55-e80f-4615-b98a-fffbfc9d19f1" containerName="mariadb-account-create-update" Feb 19 13:29:47 crc kubenswrapper[4861]: I0219 13:29:47.768308 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9f98e55-e80f-4615-b98a-fffbfc9d19f1" containerName="mariadb-account-create-update" Feb 19 13:29:47 crc kubenswrapper[4861]: E0219 13:29:47.768320 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d97e359-59fa-474e-9ee0-7306cf96cb15" containerName="mariadb-account-create-update" Feb 19 13:29:47 crc kubenswrapper[4861]: I0219 13:29:47.768328 4861 
state_mem.go:107] "Deleted CPUSet assignment" podUID="5d97e359-59fa-474e-9ee0-7306cf96cb15" containerName="mariadb-account-create-update" Feb 19 13:29:47 crc kubenswrapper[4861]: E0219 13:29:47.768359 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df6f3661-24bc-49dd-88fe-3bcf93ea1039" containerName="mariadb-database-create" Feb 19 13:29:47 crc kubenswrapper[4861]: I0219 13:29:47.768369 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="df6f3661-24bc-49dd-88fe-3bcf93ea1039" containerName="mariadb-database-create" Feb 19 13:29:47 crc kubenswrapper[4861]: E0219 13:29:47.768386 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="085bf4b3-5af6-47a0-93b3-0d604f524213" containerName="mariadb-account-create-update" Feb 19 13:29:47 crc kubenswrapper[4861]: I0219 13:29:47.768394 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="085bf4b3-5af6-47a0-93b3-0d604f524213" containerName="mariadb-account-create-update" Feb 19 13:29:47 crc kubenswrapper[4861]: E0219 13:29:47.768407 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4d43591-2a09-426b-b8e3-9f21dd334f70" containerName="mariadb-database-create" Feb 19 13:29:47 crc kubenswrapper[4861]: I0219 13:29:47.768416 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4d43591-2a09-426b-b8e3-9f21dd334f70" containerName="mariadb-database-create" Feb 19 13:29:47 crc kubenswrapper[4861]: E0219 13:29:47.768510 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a0a1119-88df-47db-9b04-87d908da605d" containerName="mariadb-database-create" Feb 19 13:29:47 crc kubenswrapper[4861]: I0219 13:29:47.768518 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a0a1119-88df-47db-9b04-87d908da605d" containerName="mariadb-database-create" Feb 19 13:29:47 crc kubenswrapper[4861]: I0219 13:29:47.768731 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4d43591-2a09-426b-b8e3-9f21dd334f70" 
containerName="mariadb-database-create" Feb 19 13:29:47 crc kubenswrapper[4861]: I0219 13:29:47.768746 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a0a1119-88df-47db-9b04-87d908da605d" containerName="mariadb-database-create" Feb 19 13:29:47 crc kubenswrapper[4861]: I0219 13:29:47.768759 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d97e359-59fa-474e-9ee0-7306cf96cb15" containerName="mariadb-account-create-update" Feb 19 13:29:47 crc kubenswrapper[4861]: I0219 13:29:47.768769 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="df6f3661-24bc-49dd-88fe-3bcf93ea1039" containerName="mariadb-database-create" Feb 19 13:29:47 crc kubenswrapper[4861]: I0219 13:29:47.768778 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="085bf4b3-5af6-47a0-93b3-0d604f524213" containerName="mariadb-account-create-update" Feb 19 13:29:47 crc kubenswrapper[4861]: I0219 13:29:47.768792 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9f98e55-e80f-4615-b98a-fffbfc9d19f1" containerName="mariadb-account-create-update" Feb 19 13:29:47 crc kubenswrapper[4861]: I0219 13:29:47.769415 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-m8zt6" Feb 19 13:29:47 crc kubenswrapper[4861]: I0219 13:29:47.772576 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 19 13:29:47 crc kubenswrapper[4861]: I0219 13:29:47.795267 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-m8zt6"] Feb 19 13:29:47 crc kubenswrapper[4861]: I0219 13:29:47.825981 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f31587d6-13d9-4e37-9273-423ee0fa9684-operator-scripts\") pod \"root-account-create-update-m8zt6\" (UID: \"f31587d6-13d9-4e37-9273-423ee0fa9684\") " pod="openstack/root-account-create-update-m8zt6" Feb 19 13:29:47 crc kubenswrapper[4861]: I0219 13:29:47.826036 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clnht\" (UniqueName: \"kubernetes.io/projected/f31587d6-13d9-4e37-9273-423ee0fa9684-kube-api-access-clnht\") pod \"root-account-create-update-m8zt6\" (UID: \"f31587d6-13d9-4e37-9273-423ee0fa9684\") " pod="openstack/root-account-create-update-m8zt6" Feb 19 13:29:47 crc kubenswrapper[4861]: I0219 13:29:47.928031 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f31587d6-13d9-4e37-9273-423ee0fa9684-operator-scripts\") pod \"root-account-create-update-m8zt6\" (UID: \"f31587d6-13d9-4e37-9273-423ee0fa9684\") " pod="openstack/root-account-create-update-m8zt6" Feb 19 13:29:47 crc kubenswrapper[4861]: I0219 13:29:47.928088 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clnht\" (UniqueName: \"kubernetes.io/projected/f31587d6-13d9-4e37-9273-423ee0fa9684-kube-api-access-clnht\") pod \"root-account-create-update-m8zt6\" (UID: 
\"f31587d6-13d9-4e37-9273-423ee0fa9684\") " pod="openstack/root-account-create-update-m8zt6" Feb 19 13:29:47 crc kubenswrapper[4861]: I0219 13:29:47.928944 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f31587d6-13d9-4e37-9273-423ee0fa9684-operator-scripts\") pod \"root-account-create-update-m8zt6\" (UID: \"f31587d6-13d9-4e37-9273-423ee0fa9684\") " pod="openstack/root-account-create-update-m8zt6" Feb 19 13:29:47 crc kubenswrapper[4861]: I0219 13:29:47.947232 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clnht\" (UniqueName: \"kubernetes.io/projected/f31587d6-13d9-4e37-9273-423ee0fa9684-kube-api-access-clnht\") pod \"root-account-create-update-m8zt6\" (UID: \"f31587d6-13d9-4e37-9273-423ee0fa9684\") " pod="openstack/root-account-create-update-m8zt6" Feb 19 13:29:48 crc kubenswrapper[4861]: I0219 13:29:48.092781 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-m8zt6" Feb 19 13:29:51 crc kubenswrapper[4861]: I0219 13:29:51.581692 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-m8zt6"] Feb 19 13:29:51 crc kubenswrapper[4861]: W0219 13:29:51.586923 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf31587d6_13d9_4e37_9273_423ee0fa9684.slice/crio-54adbb743db834ae09772595cca4ae49689af434e4c04e32d1d8a4d77890422c WatchSource:0}: Error finding container 54adbb743db834ae09772595cca4ae49689af434e4c04e32d1d8a4d77890422c: Status 404 returned error can't find the container with id 54adbb743db834ae09772595cca4ae49689af434e4c04e32d1d8a4d77890422c Feb 19 13:29:52 crc kubenswrapper[4861]: I0219 13:29:52.039845 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-m8zt6" event={"ID":"f31587d6-13d9-4e37-9273-423ee0fa9684","Type":"ContainerStarted","Data":"d5cb0990a30fcd764ec9c5d555309b9cd846903ae77c9a686d62371226c13062"} Feb 19 13:29:52 crc kubenswrapper[4861]: I0219 13:29:52.040247 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-m8zt6" event={"ID":"f31587d6-13d9-4e37-9273-423ee0fa9684","Type":"ContainerStarted","Data":"54adbb743db834ae09772595cca4ae49689af434e4c04e32d1d8a4d77890422c"} Feb 19 13:29:52 crc kubenswrapper[4861]: I0219 13:29:52.042052 4861 generic.go:334] "Generic (PLEG): container finished" podID="fdd07d83-8801-49de-a338-879cea293629" containerID="03c7958adbcf935e7fed370774cad0fe23d4461b437a22f750d78b8a60d51ddc" exitCode=0 Feb 19 13:29:52 crc kubenswrapper[4861]: I0219 13:29:52.042107 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8q5tv" event={"ID":"fdd07d83-8801-49de-a338-879cea293629","Type":"ContainerDied","Data":"03c7958adbcf935e7fed370774cad0fe23d4461b437a22f750d78b8a60d51ddc"} Feb 19 13:29:52 
crc kubenswrapper[4861]: I0219 13:29:52.054043 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f7c9197d-43d5-4c72-a7c3-c2e435368dd2","Type":"ContainerStarted","Data":"ef03fa4021e6c6150fbd214140c01f05e06bcc41b0e5602e90af2b70524e58cb"} Feb 19 13:29:52 crc kubenswrapper[4861]: I0219 13:29:52.054091 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f7c9197d-43d5-4c72-a7c3-c2e435368dd2","Type":"ContainerStarted","Data":"6c5b796933349019a3e6caaca60e24876d6caee6a8db308216a144cc2b4550b5"} Feb 19 13:29:52 crc kubenswrapper[4861]: I0219 13:29:52.054102 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f7c9197d-43d5-4c72-a7c3-c2e435368dd2","Type":"ContainerStarted","Data":"da5f6285aca4973ac5f8147c034649ed6d304113cb58cb64d2cca749a0aa466b"} Feb 19 13:29:52 crc kubenswrapper[4861]: I0219 13:29:52.054111 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f7c9197d-43d5-4c72-a7c3-c2e435368dd2","Type":"ContainerStarted","Data":"bb2e6ac221defcb0c3b930773236fdbf8bc57fd77635bc49c98f98a881dddf14"} Feb 19 13:29:52 crc kubenswrapper[4861]: I0219 13:29:52.060352 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xs25z" event={"ID":"29995010-b4b2-4d35-95ed-8a7205e9228b","Type":"ContainerStarted","Data":"830e4d061e872db1e926fe1d01b80647ede540da98625dd3926876c3c4352b59"} Feb 19 13:29:52 crc kubenswrapper[4861]: I0219 13:29:52.063039 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-m8zt6" podStartSLOduration=5.063023034 podStartE2EDuration="5.063023034s" podCreationTimestamp="2026-02-19 13:29:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:29:52.060022823 +0000 UTC m=+1206.721126051" 
watchObservedRunningTime="2026-02-19 13:29:52.063023034 +0000 UTC m=+1206.724126262" Feb 19 13:29:52 crc kubenswrapper[4861]: I0219 13:29:52.095997 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-xs25z" podStartSLOduration=2.573961262 podStartE2EDuration="10.095981751s" podCreationTimestamp="2026-02-19 13:29:42 +0000 UTC" firstStartedPulling="2026-02-19 13:29:43.719007568 +0000 UTC m=+1198.380110796" lastFinishedPulling="2026-02-19 13:29:51.241028027 +0000 UTC m=+1205.902131285" observedRunningTime="2026-02-19 13:29:52.092042085 +0000 UTC m=+1206.753145313" watchObservedRunningTime="2026-02-19 13:29:52.095981751 +0000 UTC m=+1206.757084979" Feb 19 13:29:53 crc kubenswrapper[4861]: I0219 13:29:53.077038 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f7c9197d-43d5-4c72-a7c3-c2e435368dd2","Type":"ContainerStarted","Data":"0ccff9712c241358663a5e8a3f82a05a5ae9907961c0054adefa2af45c1b18a1"} Feb 19 13:29:53 crc kubenswrapper[4861]: I0219 13:29:53.077284 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f7c9197d-43d5-4c72-a7c3-c2e435368dd2","Type":"ContainerStarted","Data":"e8e2594239d50333d43b08ef764dd15a54e631448517f1f5b7fde345bc50b2f2"} Feb 19 13:29:53 crc kubenswrapper[4861]: I0219 13:29:53.078344 4861 generic.go:334] "Generic (PLEG): container finished" podID="f31587d6-13d9-4e37-9273-423ee0fa9684" containerID="d5cb0990a30fcd764ec9c5d555309b9cd846903ae77c9a686d62371226c13062" exitCode=0 Feb 19 13:29:53 crc kubenswrapper[4861]: I0219 13:29:53.078564 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-m8zt6" event={"ID":"f31587d6-13d9-4e37-9273-423ee0fa9684","Type":"ContainerDied","Data":"d5cb0990a30fcd764ec9c5d555309b9cd846903ae77c9a686d62371226c13062"} Feb 19 13:29:53 crc kubenswrapper[4861]: I0219 13:29:53.123251 4861 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/swift-storage-0" podStartSLOduration=30.654974455 podStartE2EDuration="37.123233195s" podCreationTimestamp="2026-02-19 13:29:16 +0000 UTC" firstStartedPulling="2026-02-19 13:29:40.053707722 +0000 UTC m=+1194.714810950" lastFinishedPulling="2026-02-19 13:29:46.521966472 +0000 UTC m=+1201.183069690" observedRunningTime="2026-02-19 13:29:53.11823223 +0000 UTC m=+1207.779335458" watchObservedRunningTime="2026-02-19 13:29:53.123233195 +0000 UTC m=+1207.784336443" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.491451 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84f584987c-vqrl6"] Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.493177 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84f584987c-vqrl6" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.496099 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.508943 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84f584987c-vqrl6"] Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.546763 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-8q5tv" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.577814 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5mqh\" (UniqueName: \"kubernetes.io/projected/18292d4a-bf83-444c-8474-d8d7dd0216f2-kube-api-access-d5mqh\") pod \"dnsmasq-dns-84f584987c-vqrl6\" (UID: \"18292d4a-bf83-444c-8474-d8d7dd0216f2\") " pod="openstack/dnsmasq-dns-84f584987c-vqrl6" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.577968 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18292d4a-bf83-444c-8474-d8d7dd0216f2-config\") pod \"dnsmasq-dns-84f584987c-vqrl6\" (UID: \"18292d4a-bf83-444c-8474-d8d7dd0216f2\") " pod="openstack/dnsmasq-dns-84f584987c-vqrl6" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.578012 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18292d4a-bf83-444c-8474-d8d7dd0216f2-ovsdbserver-sb\") pod \"dnsmasq-dns-84f584987c-vqrl6\" (UID: \"18292d4a-bf83-444c-8474-d8d7dd0216f2\") " pod="openstack/dnsmasq-dns-84f584987c-vqrl6" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.578227 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18292d4a-bf83-444c-8474-d8d7dd0216f2-ovsdbserver-nb\") pod \"dnsmasq-dns-84f584987c-vqrl6\" (UID: \"18292d4a-bf83-444c-8474-d8d7dd0216f2\") " pod="openstack/dnsmasq-dns-84f584987c-vqrl6" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.578326 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18292d4a-bf83-444c-8474-d8d7dd0216f2-dns-svc\") pod \"dnsmasq-dns-84f584987c-vqrl6\" (UID: 
\"18292d4a-bf83-444c-8474-d8d7dd0216f2\") " pod="openstack/dnsmasq-dns-84f584987c-vqrl6" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.578404 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/18292d4a-bf83-444c-8474-d8d7dd0216f2-dns-swift-storage-0\") pod \"dnsmasq-dns-84f584987c-vqrl6\" (UID: \"18292d4a-bf83-444c-8474-d8d7dd0216f2\") " pod="openstack/dnsmasq-dns-84f584987c-vqrl6" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.697091 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdd07d83-8801-49de-a338-879cea293629-combined-ca-bundle\") pod \"fdd07d83-8801-49de-a338-879cea293629\" (UID: \"fdd07d83-8801-49de-a338-879cea293629\") " Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.697468 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fdd07d83-8801-49de-a338-879cea293629-db-sync-config-data\") pod \"fdd07d83-8801-49de-a338-879cea293629\" (UID: \"fdd07d83-8801-49de-a338-879cea293629\") " Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.697603 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpkbv\" (UniqueName: \"kubernetes.io/projected/fdd07d83-8801-49de-a338-879cea293629-kube-api-access-jpkbv\") pod \"fdd07d83-8801-49de-a338-879cea293629\" (UID: \"fdd07d83-8801-49de-a338-879cea293629\") " Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.697678 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdd07d83-8801-49de-a338-879cea293629-config-data\") pod \"fdd07d83-8801-49de-a338-879cea293629\" (UID: \"fdd07d83-8801-49de-a338-879cea293629\") " Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 
13:29:53.697979 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5mqh\" (UniqueName: \"kubernetes.io/projected/18292d4a-bf83-444c-8474-d8d7dd0216f2-kube-api-access-d5mqh\") pod \"dnsmasq-dns-84f584987c-vqrl6\" (UID: \"18292d4a-bf83-444c-8474-d8d7dd0216f2\") " pod="openstack/dnsmasq-dns-84f584987c-vqrl6" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.698035 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18292d4a-bf83-444c-8474-d8d7dd0216f2-config\") pod \"dnsmasq-dns-84f584987c-vqrl6\" (UID: \"18292d4a-bf83-444c-8474-d8d7dd0216f2\") " pod="openstack/dnsmasq-dns-84f584987c-vqrl6" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.698056 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18292d4a-bf83-444c-8474-d8d7dd0216f2-ovsdbserver-sb\") pod \"dnsmasq-dns-84f584987c-vqrl6\" (UID: \"18292d4a-bf83-444c-8474-d8d7dd0216f2\") " pod="openstack/dnsmasq-dns-84f584987c-vqrl6" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.698123 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18292d4a-bf83-444c-8474-d8d7dd0216f2-ovsdbserver-nb\") pod \"dnsmasq-dns-84f584987c-vqrl6\" (UID: \"18292d4a-bf83-444c-8474-d8d7dd0216f2\") " pod="openstack/dnsmasq-dns-84f584987c-vqrl6" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.698162 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18292d4a-bf83-444c-8474-d8d7dd0216f2-dns-svc\") pod \"dnsmasq-dns-84f584987c-vqrl6\" (UID: \"18292d4a-bf83-444c-8474-d8d7dd0216f2\") " pod="openstack/dnsmasq-dns-84f584987c-vqrl6" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.698196 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/18292d4a-bf83-444c-8474-d8d7dd0216f2-dns-swift-storage-0\") pod \"dnsmasq-dns-84f584987c-vqrl6\" (UID: \"18292d4a-bf83-444c-8474-d8d7dd0216f2\") " pod="openstack/dnsmasq-dns-84f584987c-vqrl6" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.699105 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18292d4a-bf83-444c-8474-d8d7dd0216f2-ovsdbserver-sb\") pod \"dnsmasq-dns-84f584987c-vqrl6\" (UID: \"18292d4a-bf83-444c-8474-d8d7dd0216f2\") " pod="openstack/dnsmasq-dns-84f584987c-vqrl6" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.699365 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/18292d4a-bf83-444c-8474-d8d7dd0216f2-dns-swift-storage-0\") pod \"dnsmasq-dns-84f584987c-vqrl6\" (UID: \"18292d4a-bf83-444c-8474-d8d7dd0216f2\") " pod="openstack/dnsmasq-dns-84f584987c-vqrl6" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.699468 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18292d4a-bf83-444c-8474-d8d7dd0216f2-config\") pod \"dnsmasq-dns-84f584987c-vqrl6\" (UID: \"18292d4a-bf83-444c-8474-d8d7dd0216f2\") " pod="openstack/dnsmasq-dns-84f584987c-vqrl6" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.699606 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18292d4a-bf83-444c-8474-d8d7dd0216f2-dns-svc\") pod \"dnsmasq-dns-84f584987c-vqrl6\" (UID: \"18292d4a-bf83-444c-8474-d8d7dd0216f2\") " pod="openstack/dnsmasq-dns-84f584987c-vqrl6" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.700789 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/18292d4a-bf83-444c-8474-d8d7dd0216f2-ovsdbserver-nb\") pod \"dnsmasq-dns-84f584987c-vqrl6\" (UID: \"18292d4a-bf83-444c-8474-d8d7dd0216f2\") " pod="openstack/dnsmasq-dns-84f584987c-vqrl6" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.705214 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdd07d83-8801-49de-a338-879cea293629-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "fdd07d83-8801-49de-a338-879cea293629" (UID: "fdd07d83-8801-49de-a338-879cea293629"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.717199 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdd07d83-8801-49de-a338-879cea293629-kube-api-access-jpkbv" (OuterVolumeSpecName: "kube-api-access-jpkbv") pod "fdd07d83-8801-49de-a338-879cea293629" (UID: "fdd07d83-8801-49de-a338-879cea293629"). InnerVolumeSpecName "kube-api-access-jpkbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.721402 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5mqh\" (UniqueName: \"kubernetes.io/projected/18292d4a-bf83-444c-8474-d8d7dd0216f2-kube-api-access-d5mqh\") pod \"dnsmasq-dns-84f584987c-vqrl6\" (UID: \"18292d4a-bf83-444c-8474-d8d7dd0216f2\") " pod="openstack/dnsmasq-dns-84f584987c-vqrl6" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.734589 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdd07d83-8801-49de-a338-879cea293629-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdd07d83-8801-49de-a338-879cea293629" (UID: "fdd07d83-8801-49de-a338-879cea293629"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.756551 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdd07d83-8801-49de-a338-879cea293629-config-data" (OuterVolumeSpecName: "config-data") pod "fdd07d83-8801-49de-a338-879cea293629" (UID: "fdd07d83-8801-49de-a338-879cea293629"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.801496 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpkbv\" (UniqueName: \"kubernetes.io/projected/fdd07d83-8801-49de-a338-879cea293629-kube-api-access-jpkbv\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.801553 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdd07d83-8801-49de-a338-879cea293629-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.801567 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdd07d83-8801-49de-a338-879cea293629-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.801579 4861 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fdd07d83-8801-49de-a338-879cea293629-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:53.856122 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84f584987c-vqrl6" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.099719 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-8q5tv" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.100376 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8q5tv" event={"ID":"fdd07d83-8801-49de-a338-879cea293629","Type":"ContainerDied","Data":"868dd7a6f7a4fec95d3d771d7155bfa9d7cd19cff859e80489fc1fc65a5c7e6b"} Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.100393 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="868dd7a6f7a4fec95d3d771d7155bfa9d7cd19cff859e80489fc1fc65a5c7e6b" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.455430 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84f584987c-vqrl6"] Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.480936 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84f584987c-vqrl6"] Feb 19 13:29:54 crc kubenswrapper[4861]: W0219 13:29:54.500108 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18292d4a_bf83_444c_8474_d8d7dd0216f2.slice/crio-fef6bb72978469a612d3617bb55cc639d2a4788a6f505d18bce9785b04e0956e WatchSource:0}: Error finding container fef6bb72978469a612d3617bb55cc639d2a4788a6f505d18bce9785b04e0956e: Status 404 returned error can't find the container with id fef6bb72978469a612d3617bb55cc639d2a4788a6f505d18bce9785b04e0956e Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.509682 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69577ff67f-dh6nd"] Feb 19 13:29:54 crc kubenswrapper[4861]: E0219 13:29:54.510047 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd07d83-8801-49de-a338-879cea293629" containerName="glance-db-sync" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.510064 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd07d83-8801-49de-a338-879cea293629" 
containerName="glance-db-sync" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.510294 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdd07d83-8801-49de-a338-879cea293629" containerName="glance-db-sync" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.511262 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69577ff67f-dh6nd" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.540940 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69577ff67f-dh6nd"] Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.627263 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eeb8834e-338d-48b9-bc07-de1b41567e79-config\") pod \"dnsmasq-dns-69577ff67f-dh6nd\" (UID: \"eeb8834e-338d-48b9-bc07-de1b41567e79\") " pod="openstack/dnsmasq-dns-69577ff67f-dh6nd" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.627328 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eeb8834e-338d-48b9-bc07-de1b41567e79-dns-svc\") pod \"dnsmasq-dns-69577ff67f-dh6nd\" (UID: \"eeb8834e-338d-48b9-bc07-de1b41567e79\") " pod="openstack/dnsmasq-dns-69577ff67f-dh6nd" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.627362 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eeb8834e-338d-48b9-bc07-de1b41567e79-ovsdbserver-nb\") pod \"dnsmasq-dns-69577ff67f-dh6nd\" (UID: \"eeb8834e-338d-48b9-bc07-de1b41567e79\") " pod="openstack/dnsmasq-dns-69577ff67f-dh6nd" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.627410 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2tsn\" (UniqueName: 
\"kubernetes.io/projected/eeb8834e-338d-48b9-bc07-de1b41567e79-kube-api-access-h2tsn\") pod \"dnsmasq-dns-69577ff67f-dh6nd\" (UID: \"eeb8834e-338d-48b9-bc07-de1b41567e79\") " pod="openstack/dnsmasq-dns-69577ff67f-dh6nd" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.627443 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eeb8834e-338d-48b9-bc07-de1b41567e79-dns-swift-storage-0\") pod \"dnsmasq-dns-69577ff67f-dh6nd\" (UID: \"eeb8834e-338d-48b9-bc07-de1b41567e79\") " pod="openstack/dnsmasq-dns-69577ff67f-dh6nd" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.627467 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eeb8834e-338d-48b9-bc07-de1b41567e79-ovsdbserver-sb\") pod \"dnsmasq-dns-69577ff67f-dh6nd\" (UID: \"eeb8834e-338d-48b9-bc07-de1b41567e79\") " pod="openstack/dnsmasq-dns-69577ff67f-dh6nd" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.635363 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-m8zt6" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.730922 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f31587d6-13d9-4e37-9273-423ee0fa9684-operator-scripts\") pod \"f31587d6-13d9-4e37-9273-423ee0fa9684\" (UID: \"f31587d6-13d9-4e37-9273-423ee0fa9684\") " Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.731091 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clnht\" (UniqueName: \"kubernetes.io/projected/f31587d6-13d9-4e37-9273-423ee0fa9684-kube-api-access-clnht\") pod \"f31587d6-13d9-4e37-9273-423ee0fa9684\" (UID: \"f31587d6-13d9-4e37-9273-423ee0fa9684\") " Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.731365 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eeb8834e-338d-48b9-bc07-de1b41567e79-config\") pod \"dnsmasq-dns-69577ff67f-dh6nd\" (UID: \"eeb8834e-338d-48b9-bc07-de1b41567e79\") " pod="openstack/dnsmasq-dns-69577ff67f-dh6nd" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.731412 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eeb8834e-338d-48b9-bc07-de1b41567e79-dns-svc\") pod \"dnsmasq-dns-69577ff67f-dh6nd\" (UID: \"eeb8834e-338d-48b9-bc07-de1b41567e79\") " pod="openstack/dnsmasq-dns-69577ff67f-dh6nd" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.732307 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eeb8834e-338d-48b9-bc07-de1b41567e79-dns-svc\") pod \"dnsmasq-dns-69577ff67f-dh6nd\" (UID: \"eeb8834e-338d-48b9-bc07-de1b41567e79\") " pod="openstack/dnsmasq-dns-69577ff67f-dh6nd" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.732369 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eeb8834e-338d-48b9-bc07-de1b41567e79-ovsdbserver-nb\") pod \"dnsmasq-dns-69577ff67f-dh6nd\" (UID: \"eeb8834e-338d-48b9-bc07-de1b41567e79\") " pod="openstack/dnsmasq-dns-69577ff67f-dh6nd" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.732448 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2tsn\" (UniqueName: \"kubernetes.io/projected/eeb8834e-338d-48b9-bc07-de1b41567e79-kube-api-access-h2tsn\") pod \"dnsmasq-dns-69577ff67f-dh6nd\" (UID: \"eeb8834e-338d-48b9-bc07-de1b41567e79\") " pod="openstack/dnsmasq-dns-69577ff67f-dh6nd" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.732472 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eeb8834e-338d-48b9-bc07-de1b41567e79-dns-swift-storage-0\") pod \"dnsmasq-dns-69577ff67f-dh6nd\" (UID: \"eeb8834e-338d-48b9-bc07-de1b41567e79\") " pod="openstack/dnsmasq-dns-69577ff67f-dh6nd" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.732501 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eeb8834e-338d-48b9-bc07-de1b41567e79-ovsdbserver-sb\") pod \"dnsmasq-dns-69577ff67f-dh6nd\" (UID: \"eeb8834e-338d-48b9-bc07-de1b41567e79\") " pod="openstack/dnsmasq-dns-69577ff67f-dh6nd" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.733156 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eeb8834e-338d-48b9-bc07-de1b41567e79-ovsdbserver-sb\") pod \"dnsmasq-dns-69577ff67f-dh6nd\" (UID: \"eeb8834e-338d-48b9-bc07-de1b41567e79\") " pod="openstack/dnsmasq-dns-69577ff67f-dh6nd" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.733714 4861 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eeb8834e-338d-48b9-bc07-de1b41567e79-ovsdbserver-nb\") pod \"dnsmasq-dns-69577ff67f-dh6nd\" (UID: \"eeb8834e-338d-48b9-bc07-de1b41567e79\") " pod="openstack/dnsmasq-dns-69577ff67f-dh6nd" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.734475 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eeb8834e-338d-48b9-bc07-de1b41567e79-dns-swift-storage-0\") pod \"dnsmasq-dns-69577ff67f-dh6nd\" (UID: \"eeb8834e-338d-48b9-bc07-de1b41567e79\") " pod="openstack/dnsmasq-dns-69577ff67f-dh6nd" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.738226 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eeb8834e-338d-48b9-bc07-de1b41567e79-config\") pod \"dnsmasq-dns-69577ff67f-dh6nd\" (UID: \"eeb8834e-338d-48b9-bc07-de1b41567e79\") " pod="openstack/dnsmasq-dns-69577ff67f-dh6nd" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.738732 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f31587d6-13d9-4e37-9273-423ee0fa9684-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f31587d6-13d9-4e37-9273-423ee0fa9684" (UID: "f31587d6-13d9-4e37-9273-423ee0fa9684"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.789586 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f31587d6-13d9-4e37-9273-423ee0fa9684-kube-api-access-clnht" (OuterVolumeSpecName: "kube-api-access-clnht") pod "f31587d6-13d9-4e37-9273-423ee0fa9684" (UID: "f31587d6-13d9-4e37-9273-423ee0fa9684"). InnerVolumeSpecName "kube-api-access-clnht". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.790453 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2tsn\" (UniqueName: \"kubernetes.io/projected/eeb8834e-338d-48b9-bc07-de1b41567e79-kube-api-access-h2tsn\") pod \"dnsmasq-dns-69577ff67f-dh6nd\" (UID: \"eeb8834e-338d-48b9-bc07-de1b41567e79\") " pod="openstack/dnsmasq-dns-69577ff67f-dh6nd" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.800480 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69577ff67f-dh6nd" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.834234 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f31587d6-13d9-4e37-9273-423ee0fa9684-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:54 crc kubenswrapper[4861]: I0219 13:29:54.834269 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clnht\" (UniqueName: \"kubernetes.io/projected/f31587d6-13d9-4e37-9273-423ee0fa9684-kube-api-access-clnht\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:55 crc kubenswrapper[4861]: I0219 13:29:55.108237 4861 generic.go:334] "Generic (PLEG): container finished" podID="18292d4a-bf83-444c-8474-d8d7dd0216f2" containerID="0e843e77b87bebc6b3abfc806c8dd351d49dbe6080624eeddfab6982f60adbd8" exitCode=0 Feb 19 13:29:55 crc kubenswrapper[4861]: I0219 13:29:55.108344 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84f584987c-vqrl6" event={"ID":"18292d4a-bf83-444c-8474-d8d7dd0216f2","Type":"ContainerDied","Data":"0e843e77b87bebc6b3abfc806c8dd351d49dbe6080624eeddfab6982f60adbd8"} Feb 19 13:29:55 crc kubenswrapper[4861]: I0219 13:29:55.108636 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84f584987c-vqrl6" 
event={"ID":"18292d4a-bf83-444c-8474-d8d7dd0216f2","Type":"ContainerStarted","Data":"fef6bb72978469a612d3617bb55cc639d2a4788a6f505d18bce9785b04e0956e"} Feb 19 13:29:55 crc kubenswrapper[4861]: I0219 13:29:55.110146 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-m8zt6" event={"ID":"f31587d6-13d9-4e37-9273-423ee0fa9684","Type":"ContainerDied","Data":"54adbb743db834ae09772595cca4ae49689af434e4c04e32d1d8a4d77890422c"} Feb 19 13:29:55 crc kubenswrapper[4861]: I0219 13:29:55.110166 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54adbb743db834ae09772595cca4ae49689af434e4c04e32d1d8a4d77890422c" Feb 19 13:29:55 crc kubenswrapper[4861]: I0219 13:29:55.110204 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-m8zt6" Feb 19 13:29:55 crc kubenswrapper[4861]: W0219 13:29:55.267791 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeeb8834e_338d_48b9_bc07_de1b41567e79.slice/crio-ffafa34521de3b5a14da709293d948e7b5468321866dbc1d430c7fa05e55b57c WatchSource:0}: Error finding container ffafa34521de3b5a14da709293d948e7b5468321866dbc1d430c7fa05e55b57c: Status 404 returned error can't find the container with id ffafa34521de3b5a14da709293d948e7b5468321866dbc1d430c7fa05e55b57c Feb 19 13:29:55 crc kubenswrapper[4861]: I0219 13:29:55.268751 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69577ff67f-dh6nd"] Feb 19 13:29:55 crc kubenswrapper[4861]: I0219 13:29:55.398823 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84f584987c-vqrl6" Feb 19 13:29:55 crc kubenswrapper[4861]: I0219 13:29:55.548253 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/18292d4a-bf83-444c-8474-d8d7dd0216f2-dns-swift-storage-0\") pod \"18292d4a-bf83-444c-8474-d8d7dd0216f2\" (UID: \"18292d4a-bf83-444c-8474-d8d7dd0216f2\") " Feb 19 13:29:55 crc kubenswrapper[4861]: I0219 13:29:55.548315 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5mqh\" (UniqueName: \"kubernetes.io/projected/18292d4a-bf83-444c-8474-d8d7dd0216f2-kube-api-access-d5mqh\") pod \"18292d4a-bf83-444c-8474-d8d7dd0216f2\" (UID: \"18292d4a-bf83-444c-8474-d8d7dd0216f2\") " Feb 19 13:29:55 crc kubenswrapper[4861]: I0219 13:29:55.548345 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18292d4a-bf83-444c-8474-d8d7dd0216f2-ovsdbserver-nb\") pod \"18292d4a-bf83-444c-8474-d8d7dd0216f2\" (UID: \"18292d4a-bf83-444c-8474-d8d7dd0216f2\") " Feb 19 13:29:55 crc kubenswrapper[4861]: I0219 13:29:55.548395 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18292d4a-bf83-444c-8474-d8d7dd0216f2-config\") pod \"18292d4a-bf83-444c-8474-d8d7dd0216f2\" (UID: \"18292d4a-bf83-444c-8474-d8d7dd0216f2\") " Feb 19 13:29:55 crc kubenswrapper[4861]: I0219 13:29:55.548465 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18292d4a-bf83-444c-8474-d8d7dd0216f2-ovsdbserver-sb\") pod \"18292d4a-bf83-444c-8474-d8d7dd0216f2\" (UID: \"18292d4a-bf83-444c-8474-d8d7dd0216f2\") " Feb 19 13:29:55 crc kubenswrapper[4861]: I0219 13:29:55.548511 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18292d4a-bf83-444c-8474-d8d7dd0216f2-dns-svc\") pod \"18292d4a-bf83-444c-8474-d8d7dd0216f2\" (UID: \"18292d4a-bf83-444c-8474-d8d7dd0216f2\") " Feb 19 13:29:55 crc kubenswrapper[4861]: I0219 13:29:55.553131 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18292d4a-bf83-444c-8474-d8d7dd0216f2-kube-api-access-d5mqh" (OuterVolumeSpecName: "kube-api-access-d5mqh") pod "18292d4a-bf83-444c-8474-d8d7dd0216f2" (UID: "18292d4a-bf83-444c-8474-d8d7dd0216f2"). InnerVolumeSpecName "kube-api-access-d5mqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:29:55 crc kubenswrapper[4861]: I0219 13:29:55.570778 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18292d4a-bf83-444c-8474-d8d7dd0216f2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "18292d4a-bf83-444c-8474-d8d7dd0216f2" (UID: "18292d4a-bf83-444c-8474-d8d7dd0216f2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:55 crc kubenswrapper[4861]: I0219 13:29:55.571121 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18292d4a-bf83-444c-8474-d8d7dd0216f2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "18292d4a-bf83-444c-8474-d8d7dd0216f2" (UID: "18292d4a-bf83-444c-8474-d8d7dd0216f2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:55 crc kubenswrapper[4861]: I0219 13:29:55.572646 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18292d4a-bf83-444c-8474-d8d7dd0216f2-config" (OuterVolumeSpecName: "config") pod "18292d4a-bf83-444c-8474-d8d7dd0216f2" (UID: "18292d4a-bf83-444c-8474-d8d7dd0216f2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:55 crc kubenswrapper[4861]: I0219 13:29:55.572825 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18292d4a-bf83-444c-8474-d8d7dd0216f2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "18292d4a-bf83-444c-8474-d8d7dd0216f2" (UID: "18292d4a-bf83-444c-8474-d8d7dd0216f2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:55 crc kubenswrapper[4861]: I0219 13:29:55.573784 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18292d4a-bf83-444c-8474-d8d7dd0216f2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "18292d4a-bf83-444c-8474-d8d7dd0216f2" (UID: "18292d4a-bf83-444c-8474-d8d7dd0216f2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:29:55 crc kubenswrapper[4861]: I0219 13:29:55.650826 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18292d4a-bf83-444c-8474-d8d7dd0216f2-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:55 crc kubenswrapper[4861]: I0219 13:29:55.650866 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18292d4a-bf83-444c-8474-d8d7dd0216f2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:55 crc kubenswrapper[4861]: I0219 13:29:55.650879 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18292d4a-bf83-444c-8474-d8d7dd0216f2-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:55 crc kubenswrapper[4861]: I0219 13:29:55.650891 4861 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/18292d4a-bf83-444c-8474-d8d7dd0216f2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 
13:29:55 crc kubenswrapper[4861]: I0219 13:29:55.650903 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5mqh\" (UniqueName: \"kubernetes.io/projected/18292d4a-bf83-444c-8474-d8d7dd0216f2-kube-api-access-d5mqh\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:55 crc kubenswrapper[4861]: I0219 13:29:55.650913 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18292d4a-bf83-444c-8474-d8d7dd0216f2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:56 crc kubenswrapper[4861]: I0219 13:29:56.122379 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84f584987c-vqrl6" event={"ID":"18292d4a-bf83-444c-8474-d8d7dd0216f2","Type":"ContainerDied","Data":"fef6bb72978469a612d3617bb55cc639d2a4788a6f505d18bce9785b04e0956e"} Feb 19 13:29:56 crc kubenswrapper[4861]: I0219 13:29:56.122779 4861 scope.go:117] "RemoveContainer" containerID="0e843e77b87bebc6b3abfc806c8dd351d49dbe6080624eeddfab6982f60adbd8" Feb 19 13:29:56 crc kubenswrapper[4861]: I0219 13:29:56.122447 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84f584987c-vqrl6" Feb 19 13:29:56 crc kubenswrapper[4861]: I0219 13:29:56.129970 4861 generic.go:334] "Generic (PLEG): container finished" podID="eeb8834e-338d-48b9-bc07-de1b41567e79" containerID="e2dd2e88f970044b85b2bcf8f8181a0268a3e52b689ba34a9b7d569fe6c60875" exitCode=0 Feb 19 13:29:56 crc kubenswrapper[4861]: I0219 13:29:56.130071 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69577ff67f-dh6nd" event={"ID":"eeb8834e-338d-48b9-bc07-de1b41567e79","Type":"ContainerDied","Data":"e2dd2e88f970044b85b2bcf8f8181a0268a3e52b689ba34a9b7d569fe6c60875"} Feb 19 13:29:56 crc kubenswrapper[4861]: I0219 13:29:56.130177 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69577ff67f-dh6nd" event={"ID":"eeb8834e-338d-48b9-bc07-de1b41567e79","Type":"ContainerStarted","Data":"ffafa34521de3b5a14da709293d948e7b5468321866dbc1d430c7fa05e55b57c"} Feb 19 13:29:56 crc kubenswrapper[4861]: I0219 13:29:56.146362 4861 generic.go:334] "Generic (PLEG): container finished" podID="29995010-b4b2-4d35-95ed-8a7205e9228b" containerID="830e4d061e872db1e926fe1d01b80647ede540da98625dd3926876c3c4352b59" exitCode=0 Feb 19 13:29:56 crc kubenswrapper[4861]: I0219 13:29:56.146415 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xs25z" event={"ID":"29995010-b4b2-4d35-95ed-8a7205e9228b","Type":"ContainerDied","Data":"830e4d061e872db1e926fe1d01b80647ede540da98625dd3926876c3c4352b59"} Feb 19 13:29:56 crc kubenswrapper[4861]: I0219 13:29:56.206356 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84f584987c-vqrl6"] Feb 19 13:29:56 crc kubenswrapper[4861]: I0219 13:29:56.218974 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84f584987c-vqrl6"] Feb 19 13:29:57 crc kubenswrapper[4861]: I0219 13:29:57.156367 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-69577ff67f-dh6nd" event={"ID":"eeb8834e-338d-48b9-bc07-de1b41567e79","Type":"ContainerStarted","Data":"7620ae7a1fd88d913a93303b9c5c3f82693181fb2df441e26a7790b3f11fabf9"} Feb 19 13:29:57 crc kubenswrapper[4861]: I0219 13:29:57.156740 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69577ff67f-dh6nd" Feb 19 13:29:57 crc kubenswrapper[4861]: I0219 13:29:57.195221 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69577ff67f-dh6nd" podStartSLOduration=3.195184623 podStartE2EDuration="3.195184623s" podCreationTimestamp="2026-02-19 13:29:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:29:57.185989376 +0000 UTC m=+1211.847092614" watchObservedRunningTime="2026-02-19 13:29:57.195184623 +0000 UTC m=+1211.856287901" Feb 19 13:29:57 crc kubenswrapper[4861]: I0219 13:29:57.604256 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-xs25z" Feb 19 13:29:57 crc kubenswrapper[4861]: I0219 13:29:57.781528 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4rxb\" (UniqueName: \"kubernetes.io/projected/29995010-b4b2-4d35-95ed-8a7205e9228b-kube-api-access-j4rxb\") pod \"29995010-b4b2-4d35-95ed-8a7205e9228b\" (UID: \"29995010-b4b2-4d35-95ed-8a7205e9228b\") " Feb 19 13:29:57 crc kubenswrapper[4861]: I0219 13:29:57.781665 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29995010-b4b2-4d35-95ed-8a7205e9228b-combined-ca-bundle\") pod \"29995010-b4b2-4d35-95ed-8a7205e9228b\" (UID: \"29995010-b4b2-4d35-95ed-8a7205e9228b\") " Feb 19 13:29:57 crc kubenswrapper[4861]: I0219 13:29:57.781804 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29995010-b4b2-4d35-95ed-8a7205e9228b-config-data\") pod \"29995010-b4b2-4d35-95ed-8a7205e9228b\" (UID: \"29995010-b4b2-4d35-95ed-8a7205e9228b\") " Feb 19 13:29:57 crc kubenswrapper[4861]: I0219 13:29:57.789328 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29995010-b4b2-4d35-95ed-8a7205e9228b-kube-api-access-j4rxb" (OuterVolumeSpecName: "kube-api-access-j4rxb") pod "29995010-b4b2-4d35-95ed-8a7205e9228b" (UID: "29995010-b4b2-4d35-95ed-8a7205e9228b"). InnerVolumeSpecName "kube-api-access-j4rxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:29:57 crc kubenswrapper[4861]: I0219 13:29:57.817390 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29995010-b4b2-4d35-95ed-8a7205e9228b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29995010-b4b2-4d35-95ed-8a7205e9228b" (UID: "29995010-b4b2-4d35-95ed-8a7205e9228b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:29:57 crc kubenswrapper[4861]: I0219 13:29:57.851526 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29995010-b4b2-4d35-95ed-8a7205e9228b-config-data" (OuterVolumeSpecName: "config-data") pod "29995010-b4b2-4d35-95ed-8a7205e9228b" (UID: "29995010-b4b2-4d35-95ed-8a7205e9228b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:29:57 crc kubenswrapper[4861]: I0219 13:29:57.883743 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4rxb\" (UniqueName: \"kubernetes.io/projected/29995010-b4b2-4d35-95ed-8a7205e9228b-kube-api-access-j4rxb\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:57 crc kubenswrapper[4861]: I0219 13:29:57.883782 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29995010-b4b2-4d35-95ed-8a7205e9228b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:57 crc kubenswrapper[4861]: I0219 13:29:57.883791 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29995010-b4b2-4d35-95ed-8a7205e9228b-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:57 crc kubenswrapper[4861]: I0219 13:29:57.995099 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18292d4a-bf83-444c-8474-d8d7dd0216f2" path="/var/lib/kubelet/pods/18292d4a-bf83-444c-8474-d8d7dd0216f2/volumes" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.167380 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xs25z" event={"ID":"29995010-b4b2-4d35-95ed-8a7205e9228b","Type":"ContainerDied","Data":"b406df54b8854671fbd5959954ecb1097c831c1c7d43c5ab774437833cf1be5e"} Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.167471 4861 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="b406df54b8854671fbd5959954ecb1097c831c1c7d43c5ab774437833cf1be5e" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.167513 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-xs25z" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.436680 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-mp8nl"] Feb 19 13:29:58 crc kubenswrapper[4861]: E0219 13:29:58.436998 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f31587d6-13d9-4e37-9273-423ee0fa9684" containerName="mariadb-account-create-update" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.437015 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f31587d6-13d9-4e37-9273-423ee0fa9684" containerName="mariadb-account-create-update" Feb 19 13:29:58 crc kubenswrapper[4861]: E0219 13:29:58.437027 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29995010-b4b2-4d35-95ed-8a7205e9228b" containerName="keystone-db-sync" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.437034 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="29995010-b4b2-4d35-95ed-8a7205e9228b" containerName="keystone-db-sync" Feb 19 13:29:58 crc kubenswrapper[4861]: E0219 13:29:58.437047 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18292d4a-bf83-444c-8474-d8d7dd0216f2" containerName="init" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.437054 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="18292d4a-bf83-444c-8474-d8d7dd0216f2" containerName="init" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.437212 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f31587d6-13d9-4e37-9273-423ee0fa9684" containerName="mariadb-account-create-update" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.437236 4861 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="29995010-b4b2-4d35-95ed-8a7205e9228b" containerName="keystone-db-sync" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.437249 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="18292d4a-bf83-444c-8474-d8d7dd0216f2" containerName="init" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.437979 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mp8nl" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.441225 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.493028 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.493177 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.493312 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bsjbt" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.493432 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.499176 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mp8nl"] Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.516261 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69577ff67f-dh6nd"] Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.544122 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84f6cc7f47-zm6tg"] Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.545308 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84f6cc7f47-zm6tg" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.596996 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf9e804e-7f05-4152-b4ef-856337ebb9a7-combined-ca-bundle\") pod \"keystone-bootstrap-mp8nl\" (UID: \"bf9e804e-7f05-4152-b4ef-856337ebb9a7\") " pod="openstack/keystone-bootstrap-mp8nl" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.597080 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c962\" (UniqueName: \"kubernetes.io/projected/bf9e804e-7f05-4152-b4ef-856337ebb9a7-kube-api-access-5c962\") pod \"keystone-bootstrap-mp8nl\" (UID: \"bf9e804e-7f05-4152-b4ef-856337ebb9a7\") " pod="openstack/keystone-bootstrap-mp8nl" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.597122 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bf9e804e-7f05-4152-b4ef-856337ebb9a7-fernet-keys\") pod \"keystone-bootstrap-mp8nl\" (UID: \"bf9e804e-7f05-4152-b4ef-856337ebb9a7\") " pod="openstack/keystone-bootstrap-mp8nl" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.597166 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf9e804e-7f05-4152-b4ef-856337ebb9a7-config-data\") pod \"keystone-bootstrap-mp8nl\" (UID: \"bf9e804e-7f05-4152-b4ef-856337ebb9a7\") " pod="openstack/keystone-bootstrap-mp8nl" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.597187 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf9e804e-7f05-4152-b4ef-856337ebb9a7-scripts\") pod \"keystone-bootstrap-mp8nl\" (UID: 
\"bf9e804e-7f05-4152-b4ef-856337ebb9a7\") " pod="openstack/keystone-bootstrap-mp8nl" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.597209 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bf9e804e-7f05-4152-b4ef-856337ebb9a7-credential-keys\") pod \"keystone-bootstrap-mp8nl\" (UID: \"bf9e804e-7f05-4152-b4ef-856337ebb9a7\") " pod="openstack/keystone-bootstrap-mp8nl" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.644135 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-kwkgq"] Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.645104 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kwkgq" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.652312 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4km8j" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.652325 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.652358 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.665507 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-kwkgq"] Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.672000 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-zc7qs"] Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.674180 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-zc7qs" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.678269 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.678591 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-npmhq" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.678721 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.698331 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf9e804e-7f05-4152-b4ef-856337ebb9a7-config-data\") pod \"keystone-bootstrap-mp8nl\" (UID: \"bf9e804e-7f05-4152-b4ef-856337ebb9a7\") " pod="openstack/keystone-bootstrap-mp8nl" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.698366 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf9e804e-7f05-4152-b4ef-856337ebb9a7-scripts\") pod \"keystone-bootstrap-mp8nl\" (UID: \"bf9e804e-7f05-4152-b4ef-856337ebb9a7\") " pod="openstack/keystone-bootstrap-mp8nl" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.698389 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bf9e804e-7f05-4152-b4ef-856337ebb9a7-credential-keys\") pod \"keystone-bootstrap-mp8nl\" (UID: \"bf9e804e-7f05-4152-b4ef-856337ebb9a7\") " pod="openstack/keystone-bootstrap-mp8nl" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.698412 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18a121cb-1f5d-4335-84c1-783b4ef39908-config\") pod \"dnsmasq-dns-84f6cc7f47-zm6tg\" (UID: 
\"18a121cb-1f5d-4335-84c1-783b4ef39908\") " pod="openstack/dnsmasq-dns-84f6cc7f47-zm6tg" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.698473 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/18a121cb-1f5d-4335-84c1-783b4ef39908-dns-swift-storage-0\") pod \"dnsmasq-dns-84f6cc7f47-zm6tg\" (UID: \"18a121cb-1f5d-4335-84c1-783b4ef39908\") " pod="openstack/dnsmasq-dns-84f6cc7f47-zm6tg" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.698494 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf9e804e-7f05-4152-b4ef-856337ebb9a7-combined-ca-bundle\") pod \"keystone-bootstrap-mp8nl\" (UID: \"bf9e804e-7f05-4152-b4ef-856337ebb9a7\") " pod="openstack/keystone-bootstrap-mp8nl" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.698520 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18a121cb-1f5d-4335-84c1-783b4ef39908-ovsdbserver-sb\") pod \"dnsmasq-dns-84f6cc7f47-zm6tg\" (UID: \"18a121cb-1f5d-4335-84c1-783b4ef39908\") " pod="openstack/dnsmasq-dns-84f6cc7f47-zm6tg" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.698606 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18a121cb-1f5d-4335-84c1-783b4ef39908-ovsdbserver-nb\") pod \"dnsmasq-dns-84f6cc7f47-zm6tg\" (UID: \"18a121cb-1f5d-4335-84c1-783b4ef39908\") " pod="openstack/dnsmasq-dns-84f6cc7f47-zm6tg" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.698626 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c962\" (UniqueName: \"kubernetes.io/projected/bf9e804e-7f05-4152-b4ef-856337ebb9a7-kube-api-access-5c962\") pod 
\"keystone-bootstrap-mp8nl\" (UID: \"bf9e804e-7f05-4152-b4ef-856337ebb9a7\") " pod="openstack/keystone-bootstrap-mp8nl" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.698652 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmlk4\" (UniqueName: \"kubernetes.io/projected/18a121cb-1f5d-4335-84c1-783b4ef39908-kube-api-access-bmlk4\") pod \"dnsmasq-dns-84f6cc7f47-zm6tg\" (UID: \"18a121cb-1f5d-4335-84c1-783b4ef39908\") " pod="openstack/dnsmasq-dns-84f6cc7f47-zm6tg" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.698676 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bf9e804e-7f05-4152-b4ef-856337ebb9a7-fernet-keys\") pod \"keystone-bootstrap-mp8nl\" (UID: \"bf9e804e-7f05-4152-b4ef-856337ebb9a7\") " pod="openstack/keystone-bootstrap-mp8nl" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.698692 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18a121cb-1f5d-4335-84c1-783b4ef39908-dns-svc\") pod \"dnsmasq-dns-84f6cc7f47-zm6tg\" (UID: \"18a121cb-1f5d-4335-84c1-783b4ef39908\") " pod="openstack/dnsmasq-dns-84f6cc7f47-zm6tg" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.706577 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-zc7qs"] Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.716248 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bf9e804e-7f05-4152-b4ef-856337ebb9a7-fernet-keys\") pod \"keystone-bootstrap-mp8nl\" (UID: \"bf9e804e-7f05-4152-b4ef-856337ebb9a7\") " pod="openstack/keystone-bootstrap-mp8nl" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.717764 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/bf9e804e-7f05-4152-b4ef-856337ebb9a7-credential-keys\") pod \"keystone-bootstrap-mp8nl\" (UID: \"bf9e804e-7f05-4152-b4ef-856337ebb9a7\") " pod="openstack/keystone-bootstrap-mp8nl" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.718118 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf9e804e-7f05-4152-b4ef-856337ebb9a7-scripts\") pod \"keystone-bootstrap-mp8nl\" (UID: \"bf9e804e-7f05-4152-b4ef-856337ebb9a7\") " pod="openstack/keystone-bootstrap-mp8nl" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.725131 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c962\" (UniqueName: \"kubernetes.io/projected/bf9e804e-7f05-4152-b4ef-856337ebb9a7-kube-api-access-5c962\") pod \"keystone-bootstrap-mp8nl\" (UID: \"bf9e804e-7f05-4152-b4ef-856337ebb9a7\") " pod="openstack/keystone-bootstrap-mp8nl" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.728103 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf9e804e-7f05-4152-b4ef-856337ebb9a7-config-data\") pod \"keystone-bootstrap-mp8nl\" (UID: \"bf9e804e-7f05-4152-b4ef-856337ebb9a7\") " pod="openstack/keystone-bootstrap-mp8nl" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.729352 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf9e804e-7f05-4152-b4ef-856337ebb9a7-combined-ca-bundle\") pod \"keystone-bootstrap-mp8nl\" (UID: \"bf9e804e-7f05-4152-b4ef-856337ebb9a7\") " pod="openstack/keystone-bootstrap-mp8nl" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.748514 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84f6cc7f47-zm6tg"] Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.800144 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48707538-eeb6-42d9-918f-6b22a07cae71-combined-ca-bundle\") pod \"neutron-db-sync-kwkgq\" (UID: \"48707538-eeb6-42d9-918f-6b22a07cae71\") " pod="openstack/neutron-db-sync-kwkgq" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.800489 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2l28\" (UniqueName: \"kubernetes.io/projected/48707538-eeb6-42d9-918f-6b22a07cae71-kube-api-access-v2l28\") pod \"neutron-db-sync-kwkgq\" (UID: \"48707538-eeb6-42d9-918f-6b22a07cae71\") " pod="openstack/neutron-db-sync-kwkgq" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.800531 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18a121cb-1f5d-4335-84c1-783b4ef39908-config\") pod \"dnsmasq-dns-84f6cc7f47-zm6tg\" (UID: \"18a121cb-1f5d-4335-84c1-783b4ef39908\") " pod="openstack/dnsmasq-dns-84f6cc7f47-zm6tg" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.800555 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/48707538-eeb6-42d9-918f-6b22a07cae71-config\") pod \"neutron-db-sync-kwkgq\" (UID: \"48707538-eeb6-42d9-918f-6b22a07cae71\") " pod="openstack/neutron-db-sync-kwkgq" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.800591 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qphzw\" (UniqueName: \"kubernetes.io/projected/cfab12c1-cdb5-415f-8290-4d057a940b1a-kube-api-access-qphzw\") pod \"cinder-db-sync-zc7qs\" (UID: \"cfab12c1-cdb5-415f-8290-4d057a940b1a\") " pod="openstack/cinder-db-sync-zc7qs" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.800617 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/18a121cb-1f5d-4335-84c1-783b4ef39908-dns-swift-storage-0\") pod \"dnsmasq-dns-84f6cc7f47-zm6tg\" (UID: \"18a121cb-1f5d-4335-84c1-783b4ef39908\") " pod="openstack/dnsmasq-dns-84f6cc7f47-zm6tg" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.800648 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18a121cb-1f5d-4335-84c1-783b4ef39908-ovsdbserver-sb\") pod \"dnsmasq-dns-84f6cc7f47-zm6tg\" (UID: \"18a121cb-1f5d-4335-84c1-783b4ef39908\") " pod="openstack/dnsmasq-dns-84f6cc7f47-zm6tg" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.800666 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfab12c1-cdb5-415f-8290-4d057a940b1a-config-data\") pod \"cinder-db-sync-zc7qs\" (UID: \"cfab12c1-cdb5-415f-8290-4d057a940b1a\") " pod="openstack/cinder-db-sync-zc7qs" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.800685 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfab12c1-cdb5-415f-8290-4d057a940b1a-scripts\") pod \"cinder-db-sync-zc7qs\" (UID: \"cfab12c1-cdb5-415f-8290-4d057a940b1a\") " pod="openstack/cinder-db-sync-zc7qs" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.800720 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18a121cb-1f5d-4335-84c1-783b4ef39908-ovsdbserver-nb\") pod \"dnsmasq-dns-84f6cc7f47-zm6tg\" (UID: \"18a121cb-1f5d-4335-84c1-783b4ef39908\") " pod="openstack/dnsmasq-dns-84f6cc7f47-zm6tg" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.800743 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/cfab12c1-cdb5-415f-8290-4d057a940b1a-etc-machine-id\") pod \"cinder-db-sync-zc7qs\" (UID: \"cfab12c1-cdb5-415f-8290-4d057a940b1a\") " pod="openstack/cinder-db-sync-zc7qs" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.800760 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cfab12c1-cdb5-415f-8290-4d057a940b1a-db-sync-config-data\") pod \"cinder-db-sync-zc7qs\" (UID: \"cfab12c1-cdb5-415f-8290-4d057a940b1a\") " pod="openstack/cinder-db-sync-zc7qs" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.800779 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfab12c1-cdb5-415f-8290-4d057a940b1a-combined-ca-bundle\") pod \"cinder-db-sync-zc7qs\" (UID: \"cfab12c1-cdb5-415f-8290-4d057a940b1a\") " pod="openstack/cinder-db-sync-zc7qs" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.800810 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmlk4\" (UniqueName: \"kubernetes.io/projected/18a121cb-1f5d-4335-84c1-783b4ef39908-kube-api-access-bmlk4\") pod \"dnsmasq-dns-84f6cc7f47-zm6tg\" (UID: \"18a121cb-1f5d-4335-84c1-783b4ef39908\") " pod="openstack/dnsmasq-dns-84f6cc7f47-zm6tg" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.800846 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18a121cb-1f5d-4335-84c1-783b4ef39908-dns-svc\") pod \"dnsmasq-dns-84f6cc7f47-zm6tg\" (UID: \"18a121cb-1f5d-4335-84c1-783b4ef39908\") " pod="openstack/dnsmasq-dns-84f6cc7f47-zm6tg" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.801842 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/18a121cb-1f5d-4335-84c1-783b4ef39908-dns-svc\") pod \"dnsmasq-dns-84f6cc7f47-zm6tg\" (UID: \"18a121cb-1f5d-4335-84c1-783b4ef39908\") " pod="openstack/dnsmasq-dns-84f6cc7f47-zm6tg" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.802376 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18a121cb-1f5d-4335-84c1-783b4ef39908-config\") pod \"dnsmasq-dns-84f6cc7f47-zm6tg\" (UID: \"18a121cb-1f5d-4335-84c1-783b4ef39908\") " pod="openstack/dnsmasq-dns-84f6cc7f47-zm6tg" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.803064 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/18a121cb-1f5d-4335-84c1-783b4ef39908-dns-swift-storage-0\") pod \"dnsmasq-dns-84f6cc7f47-zm6tg\" (UID: \"18a121cb-1f5d-4335-84c1-783b4ef39908\") " pod="openstack/dnsmasq-dns-84f6cc7f47-zm6tg" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.803614 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18a121cb-1f5d-4335-84c1-783b4ef39908-ovsdbserver-sb\") pod \"dnsmasq-dns-84f6cc7f47-zm6tg\" (UID: \"18a121cb-1f5d-4335-84c1-783b4ef39908\") " pod="openstack/dnsmasq-dns-84f6cc7f47-zm6tg" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.804190 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18a121cb-1f5d-4335-84c1-783b4ef39908-ovsdbserver-nb\") pod \"dnsmasq-dns-84f6cc7f47-zm6tg\" (UID: \"18a121cb-1f5d-4335-84c1-783b4ef39908\") " pod="openstack/dnsmasq-dns-84f6cc7f47-zm6tg" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.804722 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-hblgk"] Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.804789 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mp8nl" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.810494 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-hblgk" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.814795 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-jfbdv" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.815125 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.824441 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-hblgk"] Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.874256 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmlk4\" (UniqueName: \"kubernetes.io/projected/18a121cb-1f5d-4335-84c1-783b4ef39908-kube-api-access-bmlk4\") pod \"dnsmasq-dns-84f6cc7f47-zm6tg\" (UID: \"18a121cb-1f5d-4335-84c1-783b4ef39908\") " pod="openstack/dnsmasq-dns-84f6cc7f47-zm6tg" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.884671 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84f6cc7f47-zm6tg"] Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.885561 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84f6cc7f47-zm6tg" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.907254 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cfab12c1-cdb5-415f-8290-4d057a940b1a-etc-machine-id\") pod \"cinder-db-sync-zc7qs\" (UID: \"cfab12c1-cdb5-415f-8290-4d057a940b1a\") " pod="openstack/cinder-db-sync-zc7qs" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.907301 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cfab12c1-cdb5-415f-8290-4d057a940b1a-db-sync-config-data\") pod \"cinder-db-sync-zc7qs\" (UID: \"cfab12c1-cdb5-415f-8290-4d057a940b1a\") " pod="openstack/cinder-db-sync-zc7qs" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.907323 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfab12c1-cdb5-415f-8290-4d057a940b1a-combined-ca-bundle\") pod \"cinder-db-sync-zc7qs\" (UID: \"cfab12c1-cdb5-415f-8290-4d057a940b1a\") " pod="openstack/cinder-db-sync-zc7qs" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.907349 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c15a88f-af04-496c-bc54-f001ba15580a-combined-ca-bundle\") pod \"barbican-db-sync-hblgk\" (UID: \"0c15a88f-af04-496c-bc54-f001ba15580a\") " pod="openstack/barbican-db-sync-hblgk" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.907385 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48707538-eeb6-42d9-918f-6b22a07cae71-combined-ca-bundle\") pod \"neutron-db-sync-kwkgq\" (UID: \"48707538-eeb6-42d9-918f-6b22a07cae71\") " pod="openstack/neutron-db-sync-kwkgq" Feb 19 13:29:58 
crc kubenswrapper[4861]: I0219 13:29:58.907439 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2l28\" (UniqueName: \"kubernetes.io/projected/48707538-eeb6-42d9-918f-6b22a07cae71-kube-api-access-v2l28\") pod \"neutron-db-sync-kwkgq\" (UID: \"48707538-eeb6-42d9-918f-6b22a07cae71\") " pod="openstack/neutron-db-sync-kwkgq" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.907469 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/48707538-eeb6-42d9-918f-6b22a07cae71-config\") pod \"neutron-db-sync-kwkgq\" (UID: \"48707538-eeb6-42d9-918f-6b22a07cae71\") " pod="openstack/neutron-db-sync-kwkgq" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.907485 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0c15a88f-af04-496c-bc54-f001ba15580a-db-sync-config-data\") pod \"barbican-db-sync-hblgk\" (UID: \"0c15a88f-af04-496c-bc54-f001ba15580a\") " pod="openstack/barbican-db-sync-hblgk" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.907501 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz52d\" (UniqueName: \"kubernetes.io/projected/0c15a88f-af04-496c-bc54-f001ba15580a-kube-api-access-kz52d\") pod \"barbican-db-sync-hblgk\" (UID: \"0c15a88f-af04-496c-bc54-f001ba15580a\") " pod="openstack/barbican-db-sync-hblgk" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.907525 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qphzw\" (UniqueName: \"kubernetes.io/projected/cfab12c1-cdb5-415f-8290-4d057a940b1a-kube-api-access-qphzw\") pod \"cinder-db-sync-zc7qs\" (UID: \"cfab12c1-cdb5-415f-8290-4d057a940b1a\") " pod="openstack/cinder-db-sync-zc7qs" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.907561 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfab12c1-cdb5-415f-8290-4d057a940b1a-config-data\") pod \"cinder-db-sync-zc7qs\" (UID: \"cfab12c1-cdb5-415f-8290-4d057a940b1a\") " pod="openstack/cinder-db-sync-zc7qs" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.907577 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfab12c1-cdb5-415f-8290-4d057a940b1a-scripts\") pod \"cinder-db-sync-zc7qs\" (UID: \"cfab12c1-cdb5-415f-8290-4d057a940b1a\") " pod="openstack/cinder-db-sync-zc7qs" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.910882 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cfab12c1-cdb5-415f-8290-4d057a940b1a-etc-machine-id\") pod \"cinder-db-sync-zc7qs\" (UID: \"cfab12c1-cdb5-415f-8290-4d057a940b1a\") " pod="openstack/cinder-db-sync-zc7qs" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.911454 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.913559 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.919737 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.919927 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.943975 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48707538-eeb6-42d9-918f-6b22a07cae71-combined-ca-bundle\") pod \"neutron-db-sync-kwkgq\" (UID: \"48707538-eeb6-42d9-918f-6b22a07cae71\") " pod="openstack/neutron-db-sync-kwkgq" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.944062 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.961887 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qphzw\" (UniqueName: \"kubernetes.io/projected/cfab12c1-cdb5-415f-8290-4d057a940b1a-kube-api-access-qphzw\") pod \"cinder-db-sync-zc7qs\" (UID: \"cfab12c1-cdb5-415f-8290-4d057a940b1a\") " pod="openstack/cinder-db-sync-zc7qs" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.965840 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-f9zrp"] Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.966901 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-f9zrp" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.970525 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.970718 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.970814 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-td4kr" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.990308 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2l28\" (UniqueName: \"kubernetes.io/projected/48707538-eeb6-42d9-918f-6b22a07cae71-kube-api-access-v2l28\") pod \"neutron-db-sync-kwkgq\" (UID: \"48707538-eeb6-42d9-918f-6b22a07cae71\") " pod="openstack/neutron-db-sync-kwkgq" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.991091 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-f9zrp"] Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.993858 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cfab12c1-cdb5-415f-8290-4d057a940b1a-db-sync-config-data\") pod \"cinder-db-sync-zc7qs\" (UID: \"cfab12c1-cdb5-415f-8290-4d057a940b1a\") " pod="openstack/cinder-db-sync-zc7qs" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.994443 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfab12c1-cdb5-415f-8290-4d057a940b1a-combined-ca-bundle\") pod \"cinder-db-sync-zc7qs\" (UID: \"cfab12c1-cdb5-415f-8290-4d057a940b1a\") " pod="openstack/cinder-db-sync-zc7qs" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.994550 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/secret/48707538-eeb6-42d9-918f-6b22a07cae71-config\") pod \"neutron-db-sync-kwkgq\" (UID: \"48707538-eeb6-42d9-918f-6b22a07cae71\") " pod="openstack/neutron-db-sync-kwkgq" Feb 19 13:29:58 crc kubenswrapper[4861]: I0219 13:29:58.995716 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfab12c1-cdb5-415f-8290-4d057a940b1a-scripts\") pod \"cinder-db-sync-zc7qs\" (UID: \"cfab12c1-cdb5-415f-8290-4d057a940b1a\") " pod="openstack/cinder-db-sync-zc7qs" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.000225 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfab12c1-cdb5-415f-8290-4d057a940b1a-config-data\") pod \"cinder-db-sync-zc7qs\" (UID: \"cfab12c1-cdb5-415f-8290-4d057a940b1a\") " pod="openstack/cinder-db-sync-zc7qs" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.006571 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68bc8f6695-sx7qd"] Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.007949 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68bc8f6695-sx7qd" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.008823 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afbe9acc-8cc5-48b6-9515-61da01b73fcd-scripts\") pod \"ceilometer-0\" (UID: \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\") " pod="openstack/ceilometer-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.008883 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0c15a88f-af04-496c-bc54-f001ba15580a-db-sync-config-data\") pod \"barbican-db-sync-hblgk\" (UID: \"0c15a88f-af04-496c-bc54-f001ba15580a\") " pod="openstack/barbican-db-sync-hblgk" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.008903 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz52d\" (UniqueName: \"kubernetes.io/projected/0c15a88f-af04-496c-bc54-f001ba15580a-kube-api-access-kz52d\") pod \"barbican-db-sync-hblgk\" (UID: \"0c15a88f-af04-496c-bc54-f001ba15580a\") " pod="openstack/barbican-db-sync-hblgk" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.009237 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-zc7qs" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.009833 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afbe9acc-8cc5-48b6-9515-61da01b73fcd-run-httpd\") pod \"ceilometer-0\" (UID: \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\") " pod="openstack/ceilometer-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.009910 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c15a88f-af04-496c-bc54-f001ba15580a-combined-ca-bundle\") pod \"barbican-db-sync-hblgk\" (UID: \"0c15a88f-af04-496c-bc54-f001ba15580a\") " pod="openstack/barbican-db-sync-hblgk" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.009942 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/afbe9acc-8cc5-48b6-9515-61da01b73fcd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\") " pod="openstack/ceilometer-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.009961 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afbe9acc-8cc5-48b6-9515-61da01b73fcd-log-httpd\") pod \"ceilometer-0\" (UID: \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\") " pod="openstack/ceilometer-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.009978 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afbe9acc-8cc5-48b6-9515-61da01b73fcd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\") " pod="openstack/ceilometer-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.009998 
4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvssf\" (UniqueName: \"kubernetes.io/projected/afbe9acc-8cc5-48b6-9515-61da01b73fcd-kube-api-access-pvssf\") pod \"ceilometer-0\" (UID: \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\") " pod="openstack/ceilometer-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.010022 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afbe9acc-8cc5-48b6-9515-61da01b73fcd-config-data\") pod \"ceilometer-0\" (UID: \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\") " pod="openstack/ceilometer-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.014960 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0c15a88f-af04-496c-bc54-f001ba15580a-db-sync-config-data\") pod \"barbican-db-sync-hblgk\" (UID: \"0c15a88f-af04-496c-bc54-f001ba15580a\") " pod="openstack/barbican-db-sync-hblgk" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.017059 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c15a88f-af04-496c-bc54-f001ba15580a-combined-ca-bundle\") pod \"barbican-db-sync-hblgk\" (UID: \"0c15a88f-af04-496c-bc54-f001ba15580a\") " pod="openstack/barbican-db-sync-hblgk" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.033133 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68bc8f6695-sx7qd"] Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.041365 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz52d\" (UniqueName: \"kubernetes.io/projected/0c15a88f-af04-496c-bc54-f001ba15580a-kube-api-access-kz52d\") pod \"barbican-db-sync-hblgk\" (UID: \"0c15a88f-af04-496c-bc54-f001ba15580a\") " pod="openstack/barbican-db-sync-hblgk" Feb 19 
13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.113538 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afbe9acc-8cc5-48b6-9515-61da01b73fcd-config-data\") pod \"ceilometer-0\" (UID: \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\") " pod="openstack/ceilometer-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.113803 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a1a6b8e-dff3-4107-8742-33a404b1b737-dns-svc\") pod \"dnsmasq-dns-68bc8f6695-sx7qd\" (UID: \"9a1a6b8e-dff3-4107-8742-33a404b1b737\") " pod="openstack/dnsmasq-dns-68bc8f6695-sx7qd" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.113935 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbc6223b-76c7-40be-8245-81263bc7c6c6-scripts\") pod \"placement-db-sync-f9zrp\" (UID: \"cbc6223b-76c7-40be-8245-81263bc7c6c6\") " pod="openstack/placement-db-sync-f9zrp" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.113979 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afbe9acc-8cc5-48b6-9515-61da01b73fcd-scripts\") pod \"ceilometer-0\" (UID: \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\") " pod="openstack/ceilometer-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.114028 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a1a6b8e-dff3-4107-8742-33a404b1b737-ovsdbserver-nb\") pod \"dnsmasq-dns-68bc8f6695-sx7qd\" (UID: \"9a1a6b8e-dff3-4107-8742-33a404b1b737\") " pod="openstack/dnsmasq-dns-68bc8f6695-sx7qd" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.114045 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a1a6b8e-dff3-4107-8742-33a404b1b737-ovsdbserver-sb\") pod \"dnsmasq-dns-68bc8f6695-sx7qd\" (UID: \"9a1a6b8e-dff3-4107-8742-33a404b1b737\") " pod="openstack/dnsmasq-dns-68bc8f6695-sx7qd" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.114088 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9a1a6b8e-dff3-4107-8742-33a404b1b737-dns-swift-storage-0\") pod \"dnsmasq-dns-68bc8f6695-sx7qd\" (UID: \"9a1a6b8e-dff3-4107-8742-33a404b1b737\") " pod="openstack/dnsmasq-dns-68bc8f6695-sx7qd" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.114109 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbc6223b-76c7-40be-8245-81263bc7c6c6-config-data\") pod \"placement-db-sync-f9zrp\" (UID: \"cbc6223b-76c7-40be-8245-81263bc7c6c6\") " pod="openstack/placement-db-sync-f9zrp" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.114173 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afbe9acc-8cc5-48b6-9515-61da01b73fcd-run-httpd\") pod \"ceilometer-0\" (UID: \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\") " pod="openstack/ceilometer-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.114201 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vffx\" (UniqueName: \"kubernetes.io/projected/cbc6223b-76c7-40be-8245-81263bc7c6c6-kube-api-access-9vffx\") pod \"placement-db-sync-f9zrp\" (UID: \"cbc6223b-76c7-40be-8245-81263bc7c6c6\") " pod="openstack/placement-db-sync-f9zrp" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.114263 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qv7k\" (UniqueName: \"kubernetes.io/projected/9a1a6b8e-dff3-4107-8742-33a404b1b737-kube-api-access-8qv7k\") pod \"dnsmasq-dns-68bc8f6695-sx7qd\" (UID: \"9a1a6b8e-dff3-4107-8742-33a404b1b737\") " pod="openstack/dnsmasq-dns-68bc8f6695-sx7qd" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.114283 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbc6223b-76c7-40be-8245-81263bc7c6c6-logs\") pod \"placement-db-sync-f9zrp\" (UID: \"cbc6223b-76c7-40be-8245-81263bc7c6c6\") " pod="openstack/placement-db-sync-f9zrp" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.114382 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a1a6b8e-dff3-4107-8742-33a404b1b737-config\") pod \"dnsmasq-dns-68bc8f6695-sx7qd\" (UID: \"9a1a6b8e-dff3-4107-8742-33a404b1b737\") " pod="openstack/dnsmasq-dns-68bc8f6695-sx7qd" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.114438 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/afbe9acc-8cc5-48b6-9515-61da01b73fcd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\") " pod="openstack/ceilometer-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.114500 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afbe9acc-8cc5-48b6-9515-61da01b73fcd-log-httpd\") pod \"ceilometer-0\" (UID: \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\") " pod="openstack/ceilometer-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.114531 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/cbc6223b-76c7-40be-8245-81263bc7c6c6-combined-ca-bundle\") pod \"placement-db-sync-f9zrp\" (UID: \"cbc6223b-76c7-40be-8245-81263bc7c6c6\") " pod="openstack/placement-db-sync-f9zrp" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.114548 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afbe9acc-8cc5-48b6-9515-61da01b73fcd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\") " pod="openstack/ceilometer-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.114595 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvssf\" (UniqueName: \"kubernetes.io/projected/afbe9acc-8cc5-48b6-9515-61da01b73fcd-kube-api-access-pvssf\") pod \"ceilometer-0\" (UID: \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\") " pod="openstack/ceilometer-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.115801 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afbe9acc-8cc5-48b6-9515-61da01b73fcd-run-httpd\") pod \"ceilometer-0\" (UID: \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\") " pod="openstack/ceilometer-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.119468 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afbe9acc-8cc5-48b6-9515-61da01b73fcd-config-data\") pod \"ceilometer-0\" (UID: \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\") " pod="openstack/ceilometer-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.119685 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afbe9acc-8cc5-48b6-9515-61da01b73fcd-log-httpd\") pod \"ceilometer-0\" (UID: \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\") " pod="openstack/ceilometer-0" Feb 19 13:29:59 crc 
kubenswrapper[4861]: I0219 13:29:59.122765 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/afbe9acc-8cc5-48b6-9515-61da01b73fcd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\") " pod="openstack/ceilometer-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.126736 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afbe9acc-8cc5-48b6-9515-61da01b73fcd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\") " pod="openstack/ceilometer-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.132198 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afbe9acc-8cc5-48b6-9515-61da01b73fcd-scripts\") pod \"ceilometer-0\" (UID: \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\") " pod="openstack/ceilometer-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.145254 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvssf\" (UniqueName: \"kubernetes.io/projected/afbe9acc-8cc5-48b6-9515-61da01b73fcd-kube-api-access-pvssf\") pod \"ceilometer-0\" (UID: \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\") " pod="openstack/ceilometer-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.183616 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-69577ff67f-dh6nd" podUID="eeb8834e-338d-48b9-bc07-de1b41567e79" containerName="dnsmasq-dns" containerID="cri-o://7620ae7a1fd88d913a93303b9c5c3f82693181fb2df441e26a7790b3f11fabf9" gracePeriod=10 Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.220794 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vffx\" (UniqueName: 
\"kubernetes.io/projected/cbc6223b-76c7-40be-8245-81263bc7c6c6-kube-api-access-9vffx\") pod \"placement-db-sync-f9zrp\" (UID: \"cbc6223b-76c7-40be-8245-81263bc7c6c6\") " pod="openstack/placement-db-sync-f9zrp" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.223256 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qv7k\" (UniqueName: \"kubernetes.io/projected/9a1a6b8e-dff3-4107-8742-33a404b1b737-kube-api-access-8qv7k\") pod \"dnsmasq-dns-68bc8f6695-sx7qd\" (UID: \"9a1a6b8e-dff3-4107-8742-33a404b1b737\") " pod="openstack/dnsmasq-dns-68bc8f6695-sx7qd" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.223323 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbc6223b-76c7-40be-8245-81263bc7c6c6-logs\") pod \"placement-db-sync-f9zrp\" (UID: \"cbc6223b-76c7-40be-8245-81263bc7c6c6\") " pod="openstack/placement-db-sync-f9zrp" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.223455 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a1a6b8e-dff3-4107-8742-33a404b1b737-config\") pod \"dnsmasq-dns-68bc8f6695-sx7qd\" (UID: \"9a1a6b8e-dff3-4107-8742-33a404b1b737\") " pod="openstack/dnsmasq-dns-68bc8f6695-sx7qd" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.223511 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbc6223b-76c7-40be-8245-81263bc7c6c6-combined-ca-bundle\") pod \"placement-db-sync-f9zrp\" (UID: \"cbc6223b-76c7-40be-8245-81263bc7c6c6\") " pod="openstack/placement-db-sync-f9zrp" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.223745 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a1a6b8e-dff3-4107-8742-33a404b1b737-dns-svc\") pod 
\"dnsmasq-dns-68bc8f6695-sx7qd\" (UID: \"9a1a6b8e-dff3-4107-8742-33a404b1b737\") " pod="openstack/dnsmasq-dns-68bc8f6695-sx7qd" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.234660 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbc6223b-76c7-40be-8245-81263bc7c6c6-scripts\") pod \"placement-db-sync-f9zrp\" (UID: \"cbc6223b-76c7-40be-8245-81263bc7c6c6\") " pod="openstack/placement-db-sync-f9zrp" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.234776 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a1a6b8e-dff3-4107-8742-33a404b1b737-ovsdbserver-nb\") pod \"dnsmasq-dns-68bc8f6695-sx7qd\" (UID: \"9a1a6b8e-dff3-4107-8742-33a404b1b737\") " pod="openstack/dnsmasq-dns-68bc8f6695-sx7qd" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.234809 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a1a6b8e-dff3-4107-8742-33a404b1b737-ovsdbserver-sb\") pod \"dnsmasq-dns-68bc8f6695-sx7qd\" (UID: \"9a1a6b8e-dff3-4107-8742-33a404b1b737\") " pod="openstack/dnsmasq-dns-68bc8f6695-sx7qd" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.234870 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9a1a6b8e-dff3-4107-8742-33a404b1b737-dns-swift-storage-0\") pod \"dnsmasq-dns-68bc8f6695-sx7qd\" (UID: \"9a1a6b8e-dff3-4107-8742-33a404b1b737\") " pod="openstack/dnsmasq-dns-68bc8f6695-sx7qd" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.234916 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbc6223b-76c7-40be-8245-81263bc7c6c6-config-data\") pod \"placement-db-sync-f9zrp\" (UID: \"cbc6223b-76c7-40be-8245-81263bc7c6c6\") " 
pod="openstack/placement-db-sync-f9zrp" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.231127 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a1a6b8e-dff3-4107-8742-33a404b1b737-dns-svc\") pod \"dnsmasq-dns-68bc8f6695-sx7qd\" (UID: \"9a1a6b8e-dff3-4107-8742-33a404b1b737\") " pod="openstack/dnsmasq-dns-68bc8f6695-sx7qd" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.230906 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbc6223b-76c7-40be-8245-81263bc7c6c6-combined-ca-bundle\") pod \"placement-db-sync-f9zrp\" (UID: \"cbc6223b-76c7-40be-8245-81263bc7c6c6\") " pod="openstack/placement-db-sync-f9zrp" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.225177 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbc6223b-76c7-40be-8245-81263bc7c6c6-logs\") pod \"placement-db-sync-f9zrp\" (UID: \"cbc6223b-76c7-40be-8245-81263bc7c6c6\") " pod="openstack/placement-db-sync-f9zrp" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.226075 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a1a6b8e-dff3-4107-8742-33a404b1b737-config\") pod \"dnsmasq-dns-68bc8f6695-sx7qd\" (UID: \"9a1a6b8e-dff3-4107-8742-33a404b1b737\") " pod="openstack/dnsmasq-dns-68bc8f6695-sx7qd" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.236083 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a1a6b8e-dff3-4107-8742-33a404b1b737-ovsdbserver-nb\") pod \"dnsmasq-dns-68bc8f6695-sx7qd\" (UID: \"9a1a6b8e-dff3-4107-8742-33a404b1b737\") " pod="openstack/dnsmasq-dns-68bc8f6695-sx7qd" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.236361 4861 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a1a6b8e-dff3-4107-8742-33a404b1b737-ovsdbserver-sb\") pod \"dnsmasq-dns-68bc8f6695-sx7qd\" (UID: \"9a1a6b8e-dff3-4107-8742-33a404b1b737\") " pod="openstack/dnsmasq-dns-68bc8f6695-sx7qd" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.239296 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbc6223b-76c7-40be-8245-81263bc7c6c6-config-data\") pod \"placement-db-sync-f9zrp\" (UID: \"cbc6223b-76c7-40be-8245-81263bc7c6c6\") " pod="openstack/placement-db-sync-f9zrp" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.239588 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9a1a6b8e-dff3-4107-8742-33a404b1b737-dns-swift-storage-0\") pod \"dnsmasq-dns-68bc8f6695-sx7qd\" (UID: \"9a1a6b8e-dff3-4107-8742-33a404b1b737\") " pod="openstack/dnsmasq-dns-68bc8f6695-sx7qd" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.241484 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbc6223b-76c7-40be-8245-81263bc7c6c6-scripts\") pod \"placement-db-sync-f9zrp\" (UID: \"cbc6223b-76c7-40be-8245-81263bc7c6c6\") " pod="openstack/placement-db-sync-f9zrp" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.242904 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vffx\" (UniqueName: \"kubernetes.io/projected/cbc6223b-76c7-40be-8245-81263bc7c6c6-kube-api-access-9vffx\") pod \"placement-db-sync-f9zrp\" (UID: \"cbc6223b-76c7-40be-8245-81263bc7c6c6\") " pod="openstack/placement-db-sync-f9zrp" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.247571 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qv7k\" (UniqueName: 
\"kubernetes.io/projected/9a1a6b8e-dff3-4107-8742-33a404b1b737-kube-api-access-8qv7k\") pod \"dnsmasq-dns-68bc8f6695-sx7qd\" (UID: \"9a1a6b8e-dff3-4107-8742-33a404b1b737\") " pod="openstack/dnsmasq-dns-68bc8f6695-sx7qd" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.263969 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kwkgq" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.268898 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-hblgk" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.293258 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-f9zrp" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.298373 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.345798 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68bc8f6695-sx7qd" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.425255 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mp8nl"] Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.527062 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84f6cc7f47-zm6tg"] Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.625114 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.626638 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.630472 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.631703 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fss9s" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.631883 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.632029 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.635987 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.655117 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-zc7qs"] Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.685113 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.687596 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.690067 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.690997 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.692092 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.753232 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2947348b-58cb-41d1-847a-04f09a875aed-logs\") pod \"glance-default-external-api-0\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") " pod="openstack/glance-default-external-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.753280 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2947348b-58cb-41d1-847a-04f09a875aed-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") " pod="openstack/glance-default-external-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.753303 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2947348b-58cb-41d1-847a-04f09a875aed-scripts\") pod \"glance-default-external-api-0\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") " pod="openstack/glance-default-external-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.753331 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2947348b-58cb-41d1-847a-04f09a875aed-config-data\") pod \"glance-default-external-api-0\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") " pod="openstack/glance-default-external-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.753365 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2947348b-58cb-41d1-847a-04f09a875aed-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") " pod="openstack/glance-default-external-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.753393 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2947348b-58cb-41d1-847a-04f09a875aed-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") " pod="openstack/glance-default-external-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.753591 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hq47\" (UniqueName: \"kubernetes.io/projected/2947348b-58cb-41d1-847a-04f09a875aed-kube-api-access-4hq47\") pod \"glance-default-external-api-0\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") " pod="openstack/glance-default-external-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.753619 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") " pod="openstack/glance-default-external-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.835734 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69577ff67f-dh6nd" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.854994 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0929a5a3-8305-49cf-8987-0ae424a47c50-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.855044 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2947348b-58cb-41d1-847a-04f09a875aed-logs\") pod \"glance-default-external-api-0\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") " pod="openstack/glance-default-external-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.855066 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2947348b-58cb-41d1-847a-04f09a875aed-scripts\") pod \"glance-default-external-api-0\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") " pod="openstack/glance-default-external-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.855081 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2947348b-58cb-41d1-847a-04f09a875aed-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") " pod="openstack/glance-default-external-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.855103 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2947348b-58cb-41d1-847a-04f09a875aed-config-data\") pod \"glance-default-external-api-0\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") " pod="openstack/glance-default-external-api-0" Feb 
19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.855124 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0929a5a3-8305-49cf-8987-0ae424a47c50-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.855146 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2947348b-58cb-41d1-847a-04f09a875aed-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") " pod="openstack/glance-default-external-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.855168 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2947348b-58cb-41d1-847a-04f09a875aed-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") " pod="openstack/glance-default-external-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.855193 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0929a5a3-8305-49cf-8987-0ae424a47c50-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.855221 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0929a5a3-8305-49cf-8987-0ae424a47c50-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:29:59 crc 
kubenswrapper[4861]: I0219 13:29:59.855241 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.855259 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0929a5a3-8305-49cf-8987-0ae424a47c50-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.855282 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7nmh\" (UniqueName: \"kubernetes.io/projected/0929a5a3-8305-49cf-8987-0ae424a47c50-kube-api-access-r7nmh\") pod \"glance-default-internal-api-0\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.855326 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0929a5a3-8305-49cf-8987-0ae424a47c50-logs\") pod \"glance-default-internal-api-0\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.855350 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hq47\" (UniqueName: \"kubernetes.io/projected/2947348b-58cb-41d1-847a-04f09a875aed-kube-api-access-4hq47\") pod \"glance-default-external-api-0\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") " pod="openstack/glance-default-external-api-0" 
Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.855369 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") " pod="openstack/glance-default-external-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.855667 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.857292 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2947348b-58cb-41d1-847a-04f09a875aed-logs\") pod \"glance-default-external-api-0\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") " pod="openstack/glance-default-external-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.860829 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2947348b-58cb-41d1-847a-04f09a875aed-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") " pod="openstack/glance-default-external-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.871186 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2947348b-58cb-41d1-847a-04f09a875aed-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") " pod="openstack/glance-default-external-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.873300 4861 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2947348b-58cb-41d1-847a-04f09a875aed-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") " pod="openstack/glance-default-external-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.878759 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2947348b-58cb-41d1-847a-04f09a875aed-config-data\") pod \"glance-default-external-api-0\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") " pod="openstack/glance-default-external-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.889321 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hq47\" (UniqueName: \"kubernetes.io/projected/2947348b-58cb-41d1-847a-04f09a875aed-kube-api-access-4hq47\") pod \"glance-default-external-api-0\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") " pod="openstack/glance-default-external-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.889886 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2947348b-58cb-41d1-847a-04f09a875aed-scripts\") pod \"glance-default-external-api-0\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") " pod="openstack/glance-default-external-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.902231 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") " pod="openstack/glance-default-external-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.946256 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-kwkgq"] Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 
13:29:59.955966 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2tsn\" (UniqueName: \"kubernetes.io/projected/eeb8834e-338d-48b9-bc07-de1b41567e79-kube-api-access-h2tsn\") pod \"eeb8834e-338d-48b9-bc07-de1b41567e79\" (UID: \"eeb8834e-338d-48b9-bc07-de1b41567e79\") " Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.956155 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eeb8834e-338d-48b9-bc07-de1b41567e79-dns-swift-storage-0\") pod \"eeb8834e-338d-48b9-bc07-de1b41567e79\" (UID: \"eeb8834e-338d-48b9-bc07-de1b41567e79\") " Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.956706 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eeb8834e-338d-48b9-bc07-de1b41567e79-ovsdbserver-sb\") pod \"eeb8834e-338d-48b9-bc07-de1b41567e79\" (UID: \"eeb8834e-338d-48b9-bc07-de1b41567e79\") " Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.956867 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eeb8834e-338d-48b9-bc07-de1b41567e79-ovsdbserver-nb\") pod \"eeb8834e-338d-48b9-bc07-de1b41567e79\" (UID: \"eeb8834e-338d-48b9-bc07-de1b41567e79\") " Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.956956 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eeb8834e-338d-48b9-bc07-de1b41567e79-config\") pod \"eeb8834e-338d-48b9-bc07-de1b41567e79\" (UID: \"eeb8834e-338d-48b9-bc07-de1b41567e79\") " Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.957022 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eeb8834e-338d-48b9-bc07-de1b41567e79-dns-svc\") pod 
\"eeb8834e-338d-48b9-bc07-de1b41567e79\" (UID: \"eeb8834e-338d-48b9-bc07-de1b41567e79\") " Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.957291 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0929a5a3-8305-49cf-8987-0ae424a47c50-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.957933 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0929a5a3-8305-49cf-8987-0ae424a47c50-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.958046 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0929a5a3-8305-49cf-8987-0ae424a47c50-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.958129 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0929a5a3-8305-49cf-8987-0ae424a47c50-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.958195 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") " pod="openstack/glance-default-internal-api-0" Feb 19 
13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.958257 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0929a5a3-8305-49cf-8987-0ae424a47c50-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.958343 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7nmh\" (UniqueName: \"kubernetes.io/projected/0929a5a3-8305-49cf-8987-0ae424a47c50-kube-api-access-r7nmh\") pod \"glance-default-internal-api-0\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.959730 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0929a5a3-8305-49cf-8987-0ae424a47c50-logs\") pod \"glance-default-internal-api-0\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.962752 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0929a5a3-8305-49cf-8987-0ae424a47c50-logs\") pod \"glance-default-internal-api-0\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.958856 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0929a5a3-8305-49cf-8987-0ae424a47c50-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.962046 4861 operation_generator.go:580] 
"MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0"
Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.965929 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0929a5a3-8305-49cf-8987-0ae424a47c50-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") " pod="openstack/glance-default-internal-api-0"
Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.968557 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0929a5a3-8305-49cf-8987-0ae424a47c50-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") " pod="openstack/glance-default-internal-api-0"
Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.971945 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0929a5a3-8305-49cf-8987-0ae424a47c50-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") " pod="openstack/glance-default-internal-api-0"
Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.977921 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.983118 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeb8834e-338d-48b9-bc07-de1b41567e79-kube-api-access-h2tsn" (OuterVolumeSpecName: "kube-api-access-h2tsn") pod "eeb8834e-338d-48b9-bc07-de1b41567e79" (UID: "eeb8834e-338d-48b9-bc07-de1b41567e79"). InnerVolumeSpecName "kube-api-access-h2tsn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 13:29:59 crc kubenswrapper[4861]: I0219 13:29:59.991055 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7nmh\" (UniqueName: \"kubernetes.io/projected/0929a5a3-8305-49cf-8987-0ae424a47c50-kube-api-access-r7nmh\") pod \"glance-default-internal-api-0\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") " pod="openstack/glance-default-internal-api-0"
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.007997 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0929a5a3-8305-49cf-8987-0ae424a47c50-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") " pod="openstack/glance-default-internal-api-0"
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.031321 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-hblgk"]
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.034114 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") " pod="openstack/glance-default-internal-api-0"
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.041671 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeb8834e-338d-48b9-bc07-de1b41567e79-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eeb8834e-338d-48b9-bc07-de1b41567e79" (UID: "eeb8834e-338d-48b9-bc07-de1b41567e79"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.055953 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68bc8f6695-sx7qd"]
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.064480 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eeb8834e-338d-48b9-bc07-de1b41567e79-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.064504 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2tsn\" (UniqueName: \"kubernetes.io/projected/eeb8834e-338d-48b9-bc07-de1b41567e79-kube-api-access-h2tsn\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.064753 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeb8834e-338d-48b9-bc07-de1b41567e79-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "eeb8834e-338d-48b9-bc07-de1b41567e79" (UID: "eeb8834e-338d-48b9-bc07-de1b41567e79"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.064990 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-f9zrp"]
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.095178 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeb8834e-338d-48b9-bc07-de1b41567e79-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eeb8834e-338d-48b9-bc07-de1b41567e79" (UID: "eeb8834e-338d-48b9-bc07-de1b41567e79"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.101404 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeb8834e-338d-48b9-bc07-de1b41567e79-config" (OuterVolumeSpecName: "config") pod "eeb8834e-338d-48b9-bc07-de1b41567e79" (UID: "eeb8834e-338d-48b9-bc07-de1b41567e79"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.114516 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeb8834e-338d-48b9-bc07-de1b41567e79-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eeb8834e-338d-48b9-bc07-de1b41567e79" (UID: "eeb8834e-338d-48b9-bc07-de1b41567e79"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.135653 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525130-2k7g2"]
Feb 19 13:30:00 crc kubenswrapper[4861]: E0219 13:30:00.136194 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeb8834e-338d-48b9-bc07-de1b41567e79" containerName="dnsmasq-dns"
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.136205 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb8834e-338d-48b9-bc07-de1b41567e79" containerName="dnsmasq-dns"
Feb 19 13:30:00 crc kubenswrapper[4861]: E0219 13:30:00.136232 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeb8834e-338d-48b9-bc07-de1b41567e79" containerName="init"
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.136238 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb8834e-338d-48b9-bc07-de1b41567e79" containerName="init"
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.136393 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeb8834e-338d-48b9-bc07-de1b41567e79" containerName="dnsmasq-dns"
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.136890 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525130-2k7g2"
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.141643 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.142281 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.166649 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eeb8834e-338d-48b9-bc07-de1b41567e79-config\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.166686 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eeb8834e-338d-48b9-bc07-de1b41567e79-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.166702 4861 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eeb8834e-338d-48b9-bc07-de1b41567e79-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.166718 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eeb8834e-338d-48b9-bc07-de1b41567e79-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.182677 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525130-2k7g2"]
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.208742 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.221946 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68bc8f6695-sx7qd" event={"ID":"9a1a6b8e-dff3-4107-8742-33a404b1b737","Type":"ContainerStarted","Data":"cebadd93439d33aa97930921e35382236b63cd6cd6c05a21fef5f9718b2f9131"}
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.223381 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84f6cc7f47-zm6tg" event={"ID":"18a121cb-1f5d-4335-84c1-783b4ef39908","Type":"ContainerStarted","Data":"ed253047c419e179d27a6bf2bfd9120f84734f3ae1cf0adc75eed905bda76580"}
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.223451 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84f6cc7f47-zm6tg" event={"ID":"18a121cb-1f5d-4335-84c1-783b4ef39908","Type":"ContainerStarted","Data":"0e779c3513cec7f61077a17a4c4ce1d1c6cce004473bfcc587abd9ec56b35ccb"}
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.224289 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-f9zrp" event={"ID":"cbc6223b-76c7-40be-8245-81263bc7c6c6","Type":"ContainerStarted","Data":"850be74f0634cd5da1d0c8b94a74fdffe31b487ad90ab1bfb18692c8afa801ce"}
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.226409 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mp8nl" event={"ID":"bf9e804e-7f05-4152-b4ef-856337ebb9a7","Type":"ContainerStarted","Data":"ab5edbd988a9350d5da5aa253135c1b177e87f0dfa94d8d0fc58f14a00fba7e0"}
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.226452 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mp8nl" event={"ID":"bf9e804e-7f05-4152-b4ef-856337ebb9a7","Type":"ContainerStarted","Data":"cbcfaf5ba1b0567a52014f7d3acdbd54c325964a79f624b30a98bb800a734b66"}
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.228098 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hblgk" event={"ID":"0c15a88f-af04-496c-bc54-f001ba15580a","Type":"ContainerStarted","Data":"2fb81f110a8a54bc87860721621fe821a159ddf2ed03a16a9ed4db11e038e339"}
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.230370 4861 generic.go:334] "Generic (PLEG): container finished" podID="eeb8834e-338d-48b9-bc07-de1b41567e79" containerID="7620ae7a1fd88d913a93303b9c5c3f82693181fb2df441e26a7790b3f11fabf9" exitCode=0
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.230522 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69577ff67f-dh6nd" event={"ID":"eeb8834e-338d-48b9-bc07-de1b41567e79","Type":"ContainerDied","Data":"7620ae7a1fd88d913a93303b9c5c3f82693181fb2df441e26a7790b3f11fabf9"}
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.230613 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69577ff67f-dh6nd" event={"ID":"eeb8834e-338d-48b9-bc07-de1b41567e79","Type":"ContainerDied","Data":"ffafa34521de3b5a14da709293d948e7b5468321866dbc1d430c7fa05e55b57c"}
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.230688 4861 scope.go:117] "RemoveContainer" containerID="7620ae7a1fd88d913a93303b9c5c3f82693181fb2df441e26a7790b3f11fabf9"
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.230575 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69577ff67f-dh6nd"
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.232295 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kwkgq" event={"ID":"48707538-eeb6-42d9-918f-6b22a07cae71","Type":"ContainerStarted","Data":"9cbc88edcdf56e4495d7e3c93417166fc35b0627e43510b726cb33e1bf7a017c"}
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.237945 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-zc7qs" event={"ID":"cfab12c1-cdb5-415f-8290-4d057a940b1a","Type":"ContainerStarted","Data":"35fe176f399f7d66e30e462ef4e30ddebba4b1298804b0134aa5d6226f0408e7"}
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.262034 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-mp8nl" podStartSLOduration=2.262014703 podStartE2EDuration="2.262014703s" podCreationTimestamp="2026-02-19 13:29:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:30:00.244150512 +0000 UTC m=+1214.905253740" watchObservedRunningTime="2026-02-19 13:30:00.262014703 +0000 UTC m=+1214.923117931"
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.299940 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69577ff67f-dh6nd"]
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.303156 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27a66507-2e1b-457e-bdd2-64fa5283fea9-secret-volume\") pod \"collect-profiles-29525130-2k7g2\" (UID: \"27a66507-2e1b-457e-bdd2-64fa5283fea9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525130-2k7g2"
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.303223 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g7mw\" (UniqueName: \"kubernetes.io/projected/27a66507-2e1b-457e-bdd2-64fa5283fea9-kube-api-access-4g7mw\") pod \"collect-profiles-29525130-2k7g2\" (UID: \"27a66507-2e1b-457e-bdd2-64fa5283fea9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525130-2k7g2"
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.303393 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27a66507-2e1b-457e-bdd2-64fa5283fea9-config-volume\") pod \"collect-profiles-29525130-2k7g2\" (UID: \"27a66507-2e1b-457e-bdd2-64fa5283fea9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525130-2k7g2"
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.304490 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.305211 4861 scope.go:117] "RemoveContainer" containerID="e2dd2e88f970044b85b2bcf8f8181a0268a3e52b689ba34a9b7d569fe6c60875"
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.315559 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-69577ff67f-dh6nd"]
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.370533 4861 scope.go:117] "RemoveContainer" containerID="7620ae7a1fd88d913a93303b9c5c3f82693181fb2df441e26a7790b3f11fabf9"
Feb 19 13:30:00 crc kubenswrapper[4861]: E0219 13:30:00.371464 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7620ae7a1fd88d913a93303b9c5c3f82693181fb2df441e26a7790b3f11fabf9\": container with ID starting with 7620ae7a1fd88d913a93303b9c5c3f82693181fb2df441e26a7790b3f11fabf9 not found: ID does not exist" containerID="7620ae7a1fd88d913a93303b9c5c3f82693181fb2df441e26a7790b3f11fabf9"
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.371512 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7620ae7a1fd88d913a93303b9c5c3f82693181fb2df441e26a7790b3f11fabf9"} err="failed to get container status \"7620ae7a1fd88d913a93303b9c5c3f82693181fb2df441e26a7790b3f11fabf9\": rpc error: code = NotFound desc = could not find container \"7620ae7a1fd88d913a93303b9c5c3f82693181fb2df441e26a7790b3f11fabf9\": container with ID starting with 7620ae7a1fd88d913a93303b9c5c3f82693181fb2df441e26a7790b3f11fabf9 not found: ID does not exist"
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.371540 4861 scope.go:117] "RemoveContainer" containerID="e2dd2e88f970044b85b2bcf8f8181a0268a3e52b689ba34a9b7d569fe6c60875"
Feb 19 13:30:00 crc kubenswrapper[4861]: E0219 13:30:00.371815 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2dd2e88f970044b85b2bcf8f8181a0268a3e52b689ba34a9b7d569fe6c60875\": container with ID starting with e2dd2e88f970044b85b2bcf8f8181a0268a3e52b689ba34a9b7d569fe6c60875 not found: ID does not exist" containerID="e2dd2e88f970044b85b2bcf8f8181a0268a3e52b689ba34a9b7d569fe6c60875"
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.371833 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2dd2e88f970044b85b2bcf8f8181a0268a3e52b689ba34a9b7d569fe6c60875"} err="failed to get container status \"e2dd2e88f970044b85b2bcf8f8181a0268a3e52b689ba34a9b7d569fe6c60875\": rpc error: code = NotFound desc = could not find container \"e2dd2e88f970044b85b2bcf8f8181a0268a3e52b689ba34a9b7d569fe6c60875\": container with ID starting with e2dd2e88f970044b85b2bcf8f8181a0268a3e52b689ba34a9b7d569fe6c60875 not found: ID does not exist"
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.405115 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27a66507-2e1b-457e-bdd2-64fa5283fea9-config-volume\") pod \"collect-profiles-29525130-2k7g2\" (UID: \"27a66507-2e1b-457e-bdd2-64fa5283fea9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525130-2k7g2"
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.405382 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27a66507-2e1b-457e-bdd2-64fa5283fea9-secret-volume\") pod \"collect-profiles-29525130-2k7g2\" (UID: \"27a66507-2e1b-457e-bdd2-64fa5283fea9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525130-2k7g2"
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.405412 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g7mw\" (UniqueName: \"kubernetes.io/projected/27a66507-2e1b-457e-bdd2-64fa5283fea9-kube-api-access-4g7mw\") pod \"collect-profiles-29525130-2k7g2\" (UID: \"27a66507-2e1b-457e-bdd2-64fa5283fea9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525130-2k7g2"
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.405981 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27a66507-2e1b-457e-bdd2-64fa5283fea9-config-volume\") pod \"collect-profiles-29525130-2k7g2\" (UID: \"27a66507-2e1b-457e-bdd2-64fa5283fea9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525130-2k7g2"
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.411355 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27a66507-2e1b-457e-bdd2-64fa5283fea9-secret-volume\") pod \"collect-profiles-29525130-2k7g2\" (UID: \"27a66507-2e1b-457e-bdd2-64fa5283fea9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525130-2k7g2"
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.429906 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g7mw\" (UniqueName: \"kubernetes.io/projected/27a66507-2e1b-457e-bdd2-64fa5283fea9-kube-api-access-4g7mw\") pod \"collect-profiles-29525130-2k7g2\" (UID: \"27a66507-2e1b-457e-bdd2-64fa5283fea9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525130-2k7g2"
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.459997 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525130-2k7g2"
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.536239 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 19 13:30:00 crc kubenswrapper[4861]: I0219 13:30:00.995863 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 13:30:01 crc kubenswrapper[4861]: I0219 13:30:01.023228 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525130-2k7g2"]
Feb 19 13:30:01 crc kubenswrapper[4861]: W0219 13:30:01.108839 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27a66507_2e1b_457e_bdd2_64fa5283fea9.slice/crio-bb76d2e33fef929f0fbc180a821b8197d96c2a9035940996d4a4463007a448a8 WatchSource:0}: Error finding container bb76d2e33fef929f0fbc180a821b8197d96c2a9035940996d4a4463007a448a8: Status 404 returned error can't find the container with id bb76d2e33fef929f0fbc180a821b8197d96c2a9035940996d4a4463007a448a8
Feb 19 13:30:01 crc kubenswrapper[4861]: I0219 13:30:01.278060 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kwkgq" event={"ID":"48707538-eeb6-42d9-918f-6b22a07cae71","Type":"ContainerStarted","Data":"8b91b15fff839ba5df5868715813fffaafecb640a519d3a5cb4a17ec2708a678"}
Feb 19 13:30:01 crc kubenswrapper[4861]: I0219 13:30:01.280736 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2947348b-58cb-41d1-847a-04f09a875aed","Type":"ContainerStarted","Data":"72d81ebc0956c519c717b9e6a7edb4f1dac46ba5946ce5ed89ec01418ed98c2c"}
Feb 19 13:30:01 crc kubenswrapper[4861]: I0219 13:30:01.284186 4861 generic.go:334] "Generic (PLEG): container finished" podID="18a121cb-1f5d-4335-84c1-783b4ef39908" containerID="ed253047c419e179d27a6bf2bfd9120f84734f3ae1cf0adc75eed905bda76580" exitCode=0
Feb 19 13:30:01 crc kubenswrapper[4861]: I0219 13:30:01.284353 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84f6cc7f47-zm6tg" event={"ID":"18a121cb-1f5d-4335-84c1-783b4ef39908","Type":"ContainerDied","Data":"ed253047c419e179d27a6bf2bfd9120f84734f3ae1cf0adc75eed905bda76580"}
Feb 19 13:30:01 crc kubenswrapper[4861]: I0219 13:30:01.296493 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afbe9acc-8cc5-48b6-9515-61da01b73fcd","Type":"ContainerStarted","Data":"e87bd16e408a91639af6f8a779565eea69e2e46eb7855b4596226435cad594cc"}
Feb 19 13:30:01 crc kubenswrapper[4861]: I0219 13:30:01.297695 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0929a5a3-8305-49cf-8987-0ae424a47c50","Type":"ContainerStarted","Data":"5add6c6747ef2adf1b0e1c63f9efbb8033929ceb41858e7ecb07b32df5f8b893"}
Feb 19 13:30:01 crc kubenswrapper[4861]: I0219 13:30:01.299065 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525130-2k7g2" event={"ID":"27a66507-2e1b-457e-bdd2-64fa5283fea9","Type":"ContainerStarted","Data":"bb76d2e33fef929f0fbc180a821b8197d96c2a9035940996d4a4463007a448a8"}
Feb 19 13:30:01 crc kubenswrapper[4861]: I0219 13:30:01.304267 4861 generic.go:334] "Generic (PLEG): container finished" podID="9a1a6b8e-dff3-4107-8742-33a404b1b737" containerID="45531f344cc2dfeacacd52d4bea2758eac9c1bdd2dd3bef282ac64096bfe5f59" exitCode=0
Feb 19 13:30:01 crc kubenswrapper[4861]: I0219 13:30:01.304384 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-kwkgq" podStartSLOduration=3.304373434 podStartE2EDuration="3.304373434s" podCreationTimestamp="2026-02-19 13:29:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:30:01.303759177 +0000 UTC m=+1215.964862425" watchObservedRunningTime="2026-02-19 13:30:01.304373434 +0000 UTC m=+1215.965476662"
Feb 19 13:30:01 crc kubenswrapper[4861]: I0219 13:30:01.305594 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68bc8f6695-sx7qd" event={"ID":"9a1a6b8e-dff3-4107-8742-33a404b1b737","Type":"ContainerDied","Data":"45531f344cc2dfeacacd52d4bea2758eac9c1bdd2dd3bef282ac64096bfe5f59"}
Feb 19 13:30:01 crc kubenswrapper[4861]: I0219 13:30:01.542252 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 19 13:30:01 crc kubenswrapper[4861]: I0219 13:30:01.649500 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 13:30:01 crc kubenswrapper[4861]: I0219 13:30:01.681860 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 13:30:01 crc kubenswrapper[4861]: I0219 13:30:01.938858 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84f6cc7f47-zm6tg"
Feb 19 13:30:01 crc kubenswrapper[4861]: I0219 13:30:01.957539 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmlk4\" (UniqueName: \"kubernetes.io/projected/18a121cb-1f5d-4335-84c1-783b4ef39908-kube-api-access-bmlk4\") pod \"18a121cb-1f5d-4335-84c1-783b4ef39908\" (UID: \"18a121cb-1f5d-4335-84c1-783b4ef39908\") "
Feb 19 13:30:01 crc kubenswrapper[4861]: I0219 13:30:01.957692 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18a121cb-1f5d-4335-84c1-783b4ef39908-ovsdbserver-sb\") pod \"18a121cb-1f5d-4335-84c1-783b4ef39908\" (UID: \"18a121cb-1f5d-4335-84c1-783b4ef39908\") "
Feb 19 13:30:01 crc kubenswrapper[4861]: I0219 13:30:01.957727 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18a121cb-1f5d-4335-84c1-783b4ef39908-ovsdbserver-nb\") pod \"18a121cb-1f5d-4335-84c1-783b4ef39908\" (UID: \"18a121cb-1f5d-4335-84c1-783b4ef39908\") "
Feb 19 13:30:01 crc kubenswrapper[4861]: I0219 13:30:01.957756 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18a121cb-1f5d-4335-84c1-783b4ef39908-dns-svc\") pod \"18a121cb-1f5d-4335-84c1-783b4ef39908\" (UID: \"18a121cb-1f5d-4335-84c1-783b4ef39908\") "
Feb 19 13:30:01 crc kubenswrapper[4861]: I0219 13:30:01.957819 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18a121cb-1f5d-4335-84c1-783b4ef39908-config\") pod \"18a121cb-1f5d-4335-84c1-783b4ef39908\" (UID: \"18a121cb-1f5d-4335-84c1-783b4ef39908\") "
Feb 19 13:30:01 crc kubenswrapper[4861]: I0219 13:30:01.957840 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/18a121cb-1f5d-4335-84c1-783b4ef39908-dns-swift-storage-0\") pod \"18a121cb-1f5d-4335-84c1-783b4ef39908\" (UID: \"18a121cb-1f5d-4335-84c1-783b4ef39908\") "
Feb 19 13:30:01 crc kubenswrapper[4861]: I0219 13:30:01.990503 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18a121cb-1f5d-4335-84c1-783b4ef39908-kube-api-access-bmlk4" (OuterVolumeSpecName: "kube-api-access-bmlk4") pod "18a121cb-1f5d-4335-84c1-783b4ef39908" (UID: "18a121cb-1f5d-4335-84c1-783b4ef39908"). InnerVolumeSpecName "kube-api-access-bmlk4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 13:30:02 crc kubenswrapper[4861]: I0219 13:30:02.000396 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeb8834e-338d-48b9-bc07-de1b41567e79" path="/var/lib/kubelet/pods/eeb8834e-338d-48b9-bc07-de1b41567e79/volumes"
Feb 19 13:30:02 crc kubenswrapper[4861]: I0219 13:30:02.027678 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18a121cb-1f5d-4335-84c1-783b4ef39908-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "18a121cb-1f5d-4335-84c1-783b4ef39908" (UID: "18a121cb-1f5d-4335-84c1-783b4ef39908"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 13:30:02 crc kubenswrapper[4861]: I0219 13:30:02.037133 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18a121cb-1f5d-4335-84c1-783b4ef39908-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "18a121cb-1f5d-4335-84c1-783b4ef39908" (UID: "18a121cb-1f5d-4335-84c1-783b4ef39908"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 13:30:02 crc kubenswrapper[4861]: I0219 13:30:02.044925 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18a121cb-1f5d-4335-84c1-783b4ef39908-config" (OuterVolumeSpecName: "config") pod "18a121cb-1f5d-4335-84c1-783b4ef39908" (UID: "18a121cb-1f5d-4335-84c1-783b4ef39908"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 13:30:02 crc kubenswrapper[4861]: I0219 13:30:02.052620 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18a121cb-1f5d-4335-84c1-783b4ef39908-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "18a121cb-1f5d-4335-84c1-783b4ef39908" (UID: "18a121cb-1f5d-4335-84c1-783b4ef39908"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 13:30:02 crc kubenswrapper[4861]: I0219 13:30:02.055902 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18a121cb-1f5d-4335-84c1-783b4ef39908-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "18a121cb-1f5d-4335-84c1-783b4ef39908" (UID: "18a121cb-1f5d-4335-84c1-783b4ef39908"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 13:30:02 crc kubenswrapper[4861]: I0219 13:30:02.059967 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18a121cb-1f5d-4335-84c1-783b4ef39908-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:02 crc kubenswrapper[4861]: I0219 13:30:02.059989 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18a121cb-1f5d-4335-84c1-783b4ef39908-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:02 crc kubenswrapper[4861]: I0219 13:30:02.059999 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18a121cb-1f5d-4335-84c1-783b4ef39908-config\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:02 crc kubenswrapper[4861]: I0219 13:30:02.060011 4861 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/18a121cb-1f5d-4335-84c1-783b4ef39908-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:02 crc kubenswrapper[4861]: I0219 13:30:02.060021 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmlk4\" (UniqueName: \"kubernetes.io/projected/18a121cb-1f5d-4335-84c1-783b4ef39908-kube-api-access-bmlk4\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:02 crc kubenswrapper[4861]: I0219 13:30:02.060029 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18a121cb-1f5d-4335-84c1-783b4ef39908-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:02 crc kubenswrapper[4861]: I0219 13:30:02.326378 4861 generic.go:334] "Generic (PLEG): container finished" podID="27a66507-2e1b-457e-bdd2-64fa5283fea9" containerID="e97b0209ee18d6842151927b843f53ff2ef0e5d973fad92a6dee7be3cc1d3157" exitCode=0
Feb 19 13:30:02 crc kubenswrapper[4861]: I0219 13:30:02.326496 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525130-2k7g2" event={"ID":"27a66507-2e1b-457e-bdd2-64fa5283fea9","Type":"ContainerDied","Data":"e97b0209ee18d6842151927b843f53ff2ef0e5d973fad92a6dee7be3cc1d3157"}
Feb 19 13:30:02 crc kubenswrapper[4861]: I0219 13:30:02.345922 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68bc8f6695-sx7qd" event={"ID":"9a1a6b8e-dff3-4107-8742-33a404b1b737","Type":"ContainerStarted","Data":"1568dacd1144b4f34ab30e12d3fb890b1f9d1c56ab77ff2a6316b084d0bb7730"}
Feb 19 13:30:02 crc kubenswrapper[4861]: I0219 13:30:02.346308 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68bc8f6695-sx7qd"
Feb 19 13:30:02 crc kubenswrapper[4861]: I0219 13:30:02.348735 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2947348b-58cb-41d1-847a-04f09a875aed","Type":"ContainerStarted","Data":"9d8c0c8c033da35a3ec5c32b901fdd2c3352e056bc7f45dfe21bdd1628d6de3e"}
Feb 19 13:30:02 crc kubenswrapper[4861]: I0219 13:30:02.352284 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84f6cc7f47-zm6tg" event={"ID":"18a121cb-1f5d-4335-84c1-783b4ef39908","Type":"ContainerDied","Data":"0e779c3513cec7f61077a17a4c4ce1d1c6cce004473bfcc587abd9ec56b35ccb"}
Feb 19 13:30:02 crc kubenswrapper[4861]: I0219 13:30:02.352327 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84f6cc7f47-zm6tg"
Feb 19 13:30:02 crc kubenswrapper[4861]: I0219 13:30:02.352334 4861 scope.go:117] "RemoveContainer" containerID="ed253047c419e179d27a6bf2bfd9120f84734f3ae1cf0adc75eed905bda76580"
Feb 19 13:30:02 crc kubenswrapper[4861]: I0219 13:30:02.377990 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68bc8f6695-sx7qd" podStartSLOduration=4.377957045 podStartE2EDuration="4.377957045s" podCreationTimestamp="2026-02-19 13:29:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:30:02.369511588 +0000 UTC m=+1217.030614826" watchObservedRunningTime="2026-02-19 13:30:02.377957045 +0000 UTC m=+1217.039060273"
Feb 19 13:30:02 crc kubenswrapper[4861]: I0219 13:30:02.437120 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84f6cc7f47-zm6tg"]
Feb 19 13:30:02 crc kubenswrapper[4861]: I0219 13:30:02.446844 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84f6cc7f47-zm6tg"]
Feb 19 13:30:03 crc kubenswrapper[4861]: I0219 13:30:03.362995 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2947348b-58cb-41d1-847a-04f09a875aed","Type":"ContainerStarted","Data":"18c64e3568d9413fa58baa0a3b38b4802149203fd15671884b955ce7502af542"}
Feb 19 13:30:03 crc kubenswrapper[4861]: I0219 13:30:03.363815 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2947348b-58cb-41d1-847a-04f09a875aed" containerName="glance-log" containerID="cri-o://9d8c0c8c033da35a3ec5c32b901fdd2c3352e056bc7f45dfe21bdd1628d6de3e" gracePeriod=30
Feb 19 13:30:03 crc kubenswrapper[4861]: I0219 13:30:03.364185 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2947348b-58cb-41d1-847a-04f09a875aed" containerName="glance-httpd" containerID="cri-o://18c64e3568d9413fa58baa0a3b38b4802149203fd15671884b955ce7502af542" gracePeriod=30
Feb 19 13:30:03 crc kubenswrapper[4861]: I0219 13:30:03.371787 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0929a5a3-8305-49cf-8987-0ae424a47c50" containerName="glance-log" containerID="cri-o://4a26d47a629b1ad34a091456cfa81e91753b98fe3db2b6482b15762b37b94603" gracePeriod=30
Feb 19 13:30:03 crc kubenswrapper[4861]: I0219 13:30:03.371887 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0929a5a3-8305-49cf-8987-0ae424a47c50","Type":"ContainerStarted","Data":"4a26d47a629b1ad34a091456cfa81e91753b98fe3db2b6482b15762b37b94603"}
Feb 19 13:30:03 crc kubenswrapper[4861]: I0219 13:30:03.371908 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0929a5a3-8305-49cf-8987-0ae424a47c50","Type":"ContainerStarted","Data":"4daade5a46bfba83210b60e57eff2d880de97fdd1de80d46282f969fdff86d5d"}
Feb 19 13:30:03 crc kubenswrapper[4861]: I0219 13:30:03.372796 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0929a5a3-8305-49cf-8987-0ae424a47c50" containerName="glance-httpd" containerID="cri-o://4daade5a46bfba83210b60e57eff2d880de97fdd1de80d46282f969fdff86d5d" gracePeriod=30
Feb 19 13:30:03 crc kubenswrapper[4861]: I0219 13:30:03.390278 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.390258927 podStartE2EDuration="5.390258927s" podCreationTimestamp="2026-02-19 13:29:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC"
observedRunningTime="2026-02-19 13:30:03.384024388 +0000 UTC m=+1218.045127616" watchObservedRunningTime="2026-02-19 13:30:03.390258927 +0000 UTC m=+1218.051362155" Feb 19 13:30:03 crc kubenswrapper[4861]: I0219 13:30:03.417141 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.41711582 podStartE2EDuration="5.41711582s" podCreationTimestamp="2026-02-19 13:29:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:30:03.410806041 +0000 UTC m=+1218.071909289" watchObservedRunningTime="2026-02-19 13:30:03.41711582 +0000 UTC m=+1218.078219048" Feb 19 13:30:03 crc kubenswrapper[4861]: I0219 13:30:03.833904 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:30:03 crc kubenswrapper[4861]: I0219 13:30:03.834219 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:30:03 crc kubenswrapper[4861]: I0219 13:30:03.853111 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525130-2k7g2" Feb 19 13:30:03 crc kubenswrapper[4861]: I0219 13:30:03.914116 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27a66507-2e1b-457e-bdd2-64fa5283fea9-secret-volume\") pod \"27a66507-2e1b-457e-bdd2-64fa5283fea9\" (UID: \"27a66507-2e1b-457e-bdd2-64fa5283fea9\") " Feb 19 13:30:03 crc kubenswrapper[4861]: I0219 13:30:03.914247 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27a66507-2e1b-457e-bdd2-64fa5283fea9-config-volume\") pod \"27a66507-2e1b-457e-bdd2-64fa5283fea9\" (UID: \"27a66507-2e1b-457e-bdd2-64fa5283fea9\") " Feb 19 13:30:03 crc kubenswrapper[4861]: I0219 13:30:03.914335 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g7mw\" (UniqueName: \"kubernetes.io/projected/27a66507-2e1b-457e-bdd2-64fa5283fea9-kube-api-access-4g7mw\") pod \"27a66507-2e1b-457e-bdd2-64fa5283fea9\" (UID: \"27a66507-2e1b-457e-bdd2-64fa5283fea9\") " Feb 19 13:30:03 crc kubenswrapper[4861]: I0219 13:30:03.915094 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27a66507-2e1b-457e-bdd2-64fa5283fea9-config-volume" (OuterVolumeSpecName: "config-volume") pod "27a66507-2e1b-457e-bdd2-64fa5283fea9" (UID: "27a66507-2e1b-457e-bdd2-64fa5283fea9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:30:03 crc kubenswrapper[4861]: I0219 13:30:03.923583 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a66507-2e1b-457e-bdd2-64fa5283fea9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "27a66507-2e1b-457e-bdd2-64fa5283fea9" (UID: "27a66507-2e1b-457e-bdd2-64fa5283fea9"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:03 crc kubenswrapper[4861]: I0219 13:30:03.933692 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27a66507-2e1b-457e-bdd2-64fa5283fea9-kube-api-access-4g7mw" (OuterVolumeSpecName: "kube-api-access-4g7mw") pod "27a66507-2e1b-457e-bdd2-64fa5283fea9" (UID: "27a66507-2e1b-457e-bdd2-64fa5283fea9"). InnerVolumeSpecName "kube-api-access-4g7mw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:30:03 crc kubenswrapper[4861]: I0219 13:30:03.990812 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18a121cb-1f5d-4335-84c1-783b4ef39908" path="/var/lib/kubelet/pods/18a121cb-1f5d-4335-84c1-783b4ef39908/volumes" Feb 19 13:30:04 crc kubenswrapper[4861]: I0219 13:30:04.016723 4861 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27a66507-2e1b-457e-bdd2-64fa5283fea9-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:04 crc kubenswrapper[4861]: I0219 13:30:04.016758 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g7mw\" (UniqueName: \"kubernetes.io/projected/27a66507-2e1b-457e-bdd2-64fa5283fea9-kube-api-access-4g7mw\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:04 crc kubenswrapper[4861]: I0219 13:30:04.016770 4861 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27a66507-2e1b-457e-bdd2-64fa5283fea9-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:04 crc kubenswrapper[4861]: I0219 13:30:04.384971 4861 generic.go:334] "Generic (PLEG): container finished" podID="2947348b-58cb-41d1-847a-04f09a875aed" containerID="18c64e3568d9413fa58baa0a3b38b4802149203fd15671884b955ce7502af542" exitCode=0 Feb 19 13:30:04 crc kubenswrapper[4861]: I0219 13:30:04.385204 4861 generic.go:334] "Generic (PLEG): container finished" podID="2947348b-58cb-41d1-847a-04f09a875aed" 
containerID="9d8c0c8c033da35a3ec5c32b901fdd2c3352e056bc7f45dfe21bdd1628d6de3e" exitCode=143 Feb 19 13:30:04 crc kubenswrapper[4861]: I0219 13:30:04.385161 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2947348b-58cb-41d1-847a-04f09a875aed","Type":"ContainerDied","Data":"18c64e3568d9413fa58baa0a3b38b4802149203fd15671884b955ce7502af542"} Feb 19 13:30:04 crc kubenswrapper[4861]: I0219 13:30:04.385275 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2947348b-58cb-41d1-847a-04f09a875aed","Type":"ContainerDied","Data":"9d8c0c8c033da35a3ec5c32b901fdd2c3352e056bc7f45dfe21bdd1628d6de3e"} Feb 19 13:30:04 crc kubenswrapper[4861]: I0219 13:30:04.390008 4861 generic.go:334] "Generic (PLEG): container finished" podID="0929a5a3-8305-49cf-8987-0ae424a47c50" containerID="4daade5a46bfba83210b60e57eff2d880de97fdd1de80d46282f969fdff86d5d" exitCode=143 Feb 19 13:30:04 crc kubenswrapper[4861]: I0219 13:30:04.390056 4861 generic.go:334] "Generic (PLEG): container finished" podID="0929a5a3-8305-49cf-8987-0ae424a47c50" containerID="4a26d47a629b1ad34a091456cfa81e91753b98fe3db2b6482b15762b37b94603" exitCode=143 Feb 19 13:30:04 crc kubenswrapper[4861]: I0219 13:30:04.390074 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0929a5a3-8305-49cf-8987-0ae424a47c50","Type":"ContainerDied","Data":"4daade5a46bfba83210b60e57eff2d880de97fdd1de80d46282f969fdff86d5d"} Feb 19 13:30:04 crc kubenswrapper[4861]: I0219 13:30:04.390110 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0929a5a3-8305-49cf-8987-0ae424a47c50","Type":"ContainerDied","Data":"4a26d47a629b1ad34a091456cfa81e91753b98fe3db2b6482b15762b37b94603"} Feb 19 13:30:04 crc kubenswrapper[4861]: I0219 13:30:04.391827 4861 generic.go:334] "Generic (PLEG): container finished" 
podID="bf9e804e-7f05-4152-b4ef-856337ebb9a7" containerID="ab5edbd988a9350d5da5aa253135c1b177e87f0dfa94d8d0fc58f14a00fba7e0" exitCode=0 Feb 19 13:30:04 crc kubenswrapper[4861]: I0219 13:30:04.391877 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mp8nl" event={"ID":"bf9e804e-7f05-4152-b4ef-856337ebb9a7","Type":"ContainerDied","Data":"ab5edbd988a9350d5da5aa253135c1b177e87f0dfa94d8d0fc58f14a00fba7e0"} Feb 19 13:30:04 crc kubenswrapper[4861]: I0219 13:30:04.394435 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525130-2k7g2" event={"ID":"27a66507-2e1b-457e-bdd2-64fa5283fea9","Type":"ContainerDied","Data":"bb76d2e33fef929f0fbc180a821b8197d96c2a9035940996d4a4463007a448a8"} Feb 19 13:30:04 crc kubenswrapper[4861]: I0219 13:30:04.394456 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb76d2e33fef929f0fbc180a821b8197d96c2a9035940996d4a4463007a448a8" Feb 19 13:30:04 crc kubenswrapper[4861]: I0219 13:30:04.394497 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525130-2k7g2" Feb 19 13:30:07 crc kubenswrapper[4861]: I0219 13:30:07.964860 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.004164 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7nmh\" (UniqueName: \"kubernetes.io/projected/0929a5a3-8305-49cf-8987-0ae424a47c50-kube-api-access-r7nmh\") pod \"0929a5a3-8305-49cf-8987-0ae424a47c50\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") " Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.004513 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0929a5a3-8305-49cf-8987-0ae424a47c50-scripts\") pod \"0929a5a3-8305-49cf-8987-0ae424a47c50\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") " Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.004686 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0929a5a3-8305-49cf-8987-0ae424a47c50-httpd-run\") pod \"0929a5a3-8305-49cf-8987-0ae424a47c50\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") " Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.004813 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0929a5a3-8305-49cf-8987-0ae424a47c50-combined-ca-bundle\") pod \"0929a5a3-8305-49cf-8987-0ae424a47c50\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") " Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.004943 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"0929a5a3-8305-49cf-8987-0ae424a47c50\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") " Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.005030 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/0929a5a3-8305-49cf-8987-0ae424a47c50-logs\") pod \"0929a5a3-8305-49cf-8987-0ae424a47c50\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") " Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.005127 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0929a5a3-8305-49cf-8987-0ae424a47c50-config-data\") pod \"0929a5a3-8305-49cf-8987-0ae424a47c50\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") " Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.005295 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0929a5a3-8305-49cf-8987-0ae424a47c50-internal-tls-certs\") pod \"0929a5a3-8305-49cf-8987-0ae424a47c50\" (UID: \"0929a5a3-8305-49cf-8987-0ae424a47c50\") " Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.007349 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0929a5a3-8305-49cf-8987-0ae424a47c50-logs" (OuterVolumeSpecName: "logs") pod "0929a5a3-8305-49cf-8987-0ae424a47c50" (UID: "0929a5a3-8305-49cf-8987-0ae424a47c50"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.010582 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0929a5a3-8305-49cf-8987-0ae424a47c50-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0929a5a3-8305-49cf-8987-0ae424a47c50" (UID: "0929a5a3-8305-49cf-8987-0ae424a47c50"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.013315 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0929a5a3-8305-49cf-8987-0ae424a47c50-kube-api-access-r7nmh" (OuterVolumeSpecName: "kube-api-access-r7nmh") pod "0929a5a3-8305-49cf-8987-0ae424a47c50" (UID: "0929a5a3-8305-49cf-8987-0ae424a47c50"). InnerVolumeSpecName "kube-api-access-r7nmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.016370 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0929a5a3-8305-49cf-8987-0ae424a47c50-scripts" (OuterVolumeSpecName: "scripts") pod "0929a5a3-8305-49cf-8987-0ae424a47c50" (UID: "0929a5a3-8305-49cf-8987-0ae424a47c50"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.016800 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "0929a5a3-8305-49cf-8987-0ae424a47c50" (UID: "0929a5a3-8305-49cf-8987-0ae424a47c50"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.042677 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0929a5a3-8305-49cf-8987-0ae424a47c50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0929a5a3-8305-49cf-8987-0ae424a47c50" (UID: "0929a5a3-8305-49cf-8987-0ae424a47c50"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.065404 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0929a5a3-8305-49cf-8987-0ae424a47c50-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0929a5a3-8305-49cf-8987-0ae424a47c50" (UID: "0929a5a3-8305-49cf-8987-0ae424a47c50"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.077664 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0929a5a3-8305-49cf-8987-0ae424a47c50-config-data" (OuterVolumeSpecName: "config-data") pod "0929a5a3-8305-49cf-8987-0ae424a47c50" (UID: "0929a5a3-8305-49cf-8987-0ae424a47c50"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.110486 4861 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.110527 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0929a5a3-8305-49cf-8987-0ae424a47c50-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.110536 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0929a5a3-8305-49cf-8987-0ae424a47c50-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.110544 4861 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0929a5a3-8305-49cf-8987-0ae424a47c50-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:08 crc 
kubenswrapper[4861]: I0219 13:30:08.110557 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7nmh\" (UniqueName: \"kubernetes.io/projected/0929a5a3-8305-49cf-8987-0ae424a47c50-kube-api-access-r7nmh\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.110564 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0929a5a3-8305-49cf-8987-0ae424a47c50-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.110572 4861 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0929a5a3-8305-49cf-8987-0ae424a47c50-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.110580 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0929a5a3-8305-49cf-8987-0ae424a47c50-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.134274 4861 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.218264 4861 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.449048 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0929a5a3-8305-49cf-8987-0ae424a47c50","Type":"ContainerDied","Data":"5add6c6747ef2adf1b0e1c63f9efbb8033929ceb41858e7ecb07b32df5f8b893"} Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.449125 4861 scope.go:117] "RemoveContainer" 
containerID="4daade5a46bfba83210b60e57eff2d880de97fdd1de80d46282f969fdff86d5d" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.449138 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.525597 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.570374 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.570472 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 13:30:08 crc kubenswrapper[4861]: E0219 13:30:08.570902 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a66507-2e1b-457e-bdd2-64fa5283fea9" containerName="collect-profiles" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.570925 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a66507-2e1b-457e-bdd2-64fa5283fea9" containerName="collect-profiles" Feb 19 13:30:08 crc kubenswrapper[4861]: E0219 13:30:08.570940 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a121cb-1f5d-4335-84c1-783b4ef39908" containerName="init" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.570948 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a121cb-1f5d-4335-84c1-783b4ef39908" containerName="init" Feb 19 13:30:08 crc kubenswrapper[4861]: E0219 13:30:08.570967 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0929a5a3-8305-49cf-8987-0ae424a47c50" containerName="glance-log" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.570975 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="0929a5a3-8305-49cf-8987-0ae424a47c50" containerName="glance-log" Feb 19 13:30:08 crc kubenswrapper[4861]: E0219 13:30:08.570991 4861 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0929a5a3-8305-49cf-8987-0ae424a47c50" containerName="glance-httpd" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.570998 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="0929a5a3-8305-49cf-8987-0ae424a47c50" containerName="glance-httpd" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.571215 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a121cb-1f5d-4335-84c1-783b4ef39908" containerName="init" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.571232 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="27a66507-2e1b-457e-bdd2-64fa5283fea9" containerName="collect-profiles" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.571248 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="0929a5a3-8305-49cf-8987-0ae424a47c50" containerName="glance-httpd" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.571268 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="0929a5a3-8305-49cf-8987-0ae424a47c50" containerName="glance-log" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.587579 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.587785 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.589765 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.591697 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.630961 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e69db72c-2c51-47ba-b2a1-037d6b259176-logs\") pod \"glance-default-internal-api-0\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.631035 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e69db72c-2c51-47ba-b2a1-037d6b259176-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.631115 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e69db72c-2c51-47ba-b2a1-037d6b259176-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.631172 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e69db72c-2c51-47ba-b2a1-037d6b259176-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") 
" pod="openstack/glance-default-internal-api-0" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.631203 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ggb7\" (UniqueName: \"kubernetes.io/projected/e69db72c-2c51-47ba-b2a1-037d6b259176-kube-api-access-7ggb7\") pod \"glance-default-internal-api-0\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.631254 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e69db72c-2c51-47ba-b2a1-037d6b259176-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.631279 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.631321 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e69db72c-2c51-47ba-b2a1-037d6b259176-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.732531 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e69db72c-2c51-47ba-b2a1-037d6b259176-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") " 
pod="openstack/glance-default-internal-api-0" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.732606 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggb7\" (UniqueName: \"kubernetes.io/projected/e69db72c-2c51-47ba-b2a1-037d6b259176-kube-api-access-7ggb7\") pod \"glance-default-internal-api-0\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.732666 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e69db72c-2c51-47ba-b2a1-037d6b259176-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.732694 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.732738 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e69db72c-2c51-47ba-b2a1-037d6b259176-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.732773 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e69db72c-2c51-47ba-b2a1-037d6b259176-logs\") pod \"glance-default-internal-api-0\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 
13:30:08.732819 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e69db72c-2c51-47ba-b2a1-037d6b259176-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.732861 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e69db72c-2c51-47ba-b2a1-037d6b259176-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.733215 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.734089 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e69db72c-2c51-47ba-b2a1-037d6b259176-logs\") pod \"glance-default-internal-api-0\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.734165 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e69db72c-2c51-47ba-b2a1-037d6b259176-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.742302 4861 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e69db72c-2c51-47ba-b2a1-037d6b259176-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.742388 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e69db72c-2c51-47ba-b2a1-037d6b259176-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.742806 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e69db72c-2c51-47ba-b2a1-037d6b259176-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.746057 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e69db72c-2c51-47ba-b2a1-037d6b259176-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.751642 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ggb7\" (UniqueName: \"kubernetes.io/projected/e69db72c-2c51-47ba-b2a1-037d6b259176-kube-api-access-7ggb7\") pod \"glance-default-internal-api-0\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.764518 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:30:08 crc kubenswrapper[4861]: I0219 13:30:08.917166 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 13:30:09 crc kubenswrapper[4861]: I0219 13:30:09.347822 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68bc8f6695-sx7qd" Feb 19 13:30:09 crc kubenswrapper[4861]: I0219 13:30:09.426958 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-689df5d84f-bjwqj"] Feb 19 13:30:09 crc kubenswrapper[4861]: I0219 13:30:09.429239 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-689df5d84f-bjwqj" podUID="638f49e9-45bb-4106-a3bc-a53a23fbc313" containerName="dnsmasq-dns" containerID="cri-o://5cd57e65f88464752a92643ee3e7cc60742238c5855798538a438e99542da273" gracePeriod=10 Feb 19 13:30:09 crc kubenswrapper[4861]: I0219 13:30:09.991969 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0929a5a3-8305-49cf-8987-0ae424a47c50" path="/var/lib/kubelet/pods/0929a5a3-8305-49cf-8987-0ae424a47c50/volumes" Feb 19 13:30:10 crc kubenswrapper[4861]: I0219 13:30:10.480635 4861 generic.go:334] "Generic (PLEG): container finished" podID="638f49e9-45bb-4106-a3bc-a53a23fbc313" containerID="5cd57e65f88464752a92643ee3e7cc60742238c5855798538a438e99542da273" exitCode=0 Feb 19 13:30:10 crc kubenswrapper[4861]: I0219 13:30:10.480697 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689df5d84f-bjwqj" event={"ID":"638f49e9-45bb-4106-a3bc-a53a23fbc313","Type":"ContainerDied","Data":"5cd57e65f88464752a92643ee3e7cc60742238c5855798538a438e99542da273"} Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.358930 4861 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mp8nl" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.373718 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.382573 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bf9e804e-7f05-4152-b4ef-856337ebb9a7-fernet-keys\") pod \"bf9e804e-7f05-4152-b4ef-856337ebb9a7\" (UID: \"bf9e804e-7f05-4152-b4ef-856337ebb9a7\") " Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.382648 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf9e804e-7f05-4152-b4ef-856337ebb9a7-scripts\") pod \"bf9e804e-7f05-4152-b4ef-856337ebb9a7\" (UID: \"bf9e804e-7f05-4152-b4ef-856337ebb9a7\") " Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.382735 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf9e804e-7f05-4152-b4ef-856337ebb9a7-config-data\") pod \"bf9e804e-7f05-4152-b4ef-856337ebb9a7\" (UID: \"bf9e804e-7f05-4152-b4ef-856337ebb9a7\") " Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.382816 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bf9e804e-7f05-4152-b4ef-856337ebb9a7-credential-keys\") pod \"bf9e804e-7f05-4152-b4ef-856337ebb9a7\" (UID: \"bf9e804e-7f05-4152-b4ef-856337ebb9a7\") " Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.382856 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf9e804e-7f05-4152-b4ef-856337ebb9a7-combined-ca-bundle\") pod \"bf9e804e-7f05-4152-b4ef-856337ebb9a7\" (UID: 
\"bf9e804e-7f05-4152-b4ef-856337ebb9a7\") " Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.382916 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c962\" (UniqueName: \"kubernetes.io/projected/bf9e804e-7f05-4152-b4ef-856337ebb9a7-kube-api-access-5c962\") pod \"bf9e804e-7f05-4152-b4ef-856337ebb9a7\" (UID: \"bf9e804e-7f05-4152-b4ef-856337ebb9a7\") " Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.392302 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf9e804e-7f05-4152-b4ef-856337ebb9a7-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "bf9e804e-7f05-4152-b4ef-856337ebb9a7" (UID: "bf9e804e-7f05-4152-b4ef-856337ebb9a7"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.392347 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf9e804e-7f05-4152-b4ef-856337ebb9a7-scripts" (OuterVolumeSpecName: "scripts") pod "bf9e804e-7f05-4152-b4ef-856337ebb9a7" (UID: "bf9e804e-7f05-4152-b4ef-856337ebb9a7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.395536 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf9e804e-7f05-4152-b4ef-856337ebb9a7-kube-api-access-5c962" (OuterVolumeSpecName: "kube-api-access-5c962") pod "bf9e804e-7f05-4152-b4ef-856337ebb9a7" (UID: "bf9e804e-7f05-4152-b4ef-856337ebb9a7"). InnerVolumeSpecName "kube-api-access-5c962". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.398574 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf9e804e-7f05-4152-b4ef-856337ebb9a7-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "bf9e804e-7f05-4152-b4ef-856337ebb9a7" (UID: "bf9e804e-7f05-4152-b4ef-856337ebb9a7"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.435041 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf9e804e-7f05-4152-b4ef-856337ebb9a7-config-data" (OuterVolumeSpecName: "config-data") pod "bf9e804e-7f05-4152-b4ef-856337ebb9a7" (UID: "bf9e804e-7f05-4152-b4ef-856337ebb9a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.438734 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf9e804e-7f05-4152-b4ef-856337ebb9a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf9e804e-7f05-4152-b4ef-856337ebb9a7" (UID: "bf9e804e-7f05-4152-b4ef-856337ebb9a7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.485049 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2947348b-58cb-41d1-847a-04f09a875aed-config-data\") pod \"2947348b-58cb-41d1-847a-04f09a875aed\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") " Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.485130 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2947348b-58cb-41d1-847a-04f09a875aed-public-tls-certs\") pod \"2947348b-58cb-41d1-847a-04f09a875aed\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") " Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.485171 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"2947348b-58cb-41d1-847a-04f09a875aed\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") " Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.485214 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hq47\" (UniqueName: \"kubernetes.io/projected/2947348b-58cb-41d1-847a-04f09a875aed-kube-api-access-4hq47\") pod \"2947348b-58cb-41d1-847a-04f09a875aed\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") " Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.485253 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2947348b-58cb-41d1-847a-04f09a875aed-scripts\") pod \"2947348b-58cb-41d1-847a-04f09a875aed\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") " Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.485288 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2947348b-58cb-41d1-847a-04f09a875aed-combined-ca-bundle\") pod \"2947348b-58cb-41d1-847a-04f09a875aed\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") " Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.485354 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2947348b-58cb-41d1-847a-04f09a875aed-httpd-run\") pod \"2947348b-58cb-41d1-847a-04f09a875aed\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") " Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.485374 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2947348b-58cb-41d1-847a-04f09a875aed-logs\") pod \"2947348b-58cb-41d1-847a-04f09a875aed\" (UID: \"2947348b-58cb-41d1-847a-04f09a875aed\") " Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.485866 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2947348b-58cb-41d1-847a-04f09a875aed-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2947348b-58cb-41d1-847a-04f09a875aed" (UID: "2947348b-58cb-41d1-847a-04f09a875aed"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.485939 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2947348b-58cb-41d1-847a-04f09a875aed-logs" (OuterVolumeSpecName: "logs") pod "2947348b-58cb-41d1-847a-04f09a875aed" (UID: "2947348b-58cb-41d1-847a-04f09a875aed"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.486483 4861 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bf9e804e-7f05-4152-b4ef-856337ebb9a7-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.486512 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf9e804e-7f05-4152-b4ef-856337ebb9a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.486525 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5c962\" (UniqueName: \"kubernetes.io/projected/bf9e804e-7f05-4152-b4ef-856337ebb9a7-kube-api-access-5c962\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.486538 4861 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2947348b-58cb-41d1-847a-04f09a875aed-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.486550 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2947348b-58cb-41d1-847a-04f09a875aed-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.486559 4861 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bf9e804e-7f05-4152-b4ef-856337ebb9a7-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.486570 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf9e804e-7f05-4152-b4ef-856337ebb9a7-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.486579 4861 reconciler_common.go:293] "Volume 
detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf9e804e-7f05-4152-b4ef-856337ebb9a7-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.489965 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2947348b-58cb-41d1-847a-04f09a875aed-kube-api-access-4hq47" (OuterVolumeSpecName: "kube-api-access-4hq47") pod "2947348b-58cb-41d1-847a-04f09a875aed" (UID: "2947348b-58cb-41d1-847a-04f09a875aed"). InnerVolumeSpecName "kube-api-access-4hq47". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.490937 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "2947348b-58cb-41d1-847a-04f09a875aed" (UID: "2947348b-58cb-41d1-847a-04f09a875aed"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.494577 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mp8nl" event={"ID":"bf9e804e-7f05-4152-b4ef-856337ebb9a7","Type":"ContainerDied","Data":"cbcfaf5ba1b0567a52014f7d3acdbd54c325964a79f624b30a98bb800a734b66"} Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.494616 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbcfaf5ba1b0567a52014f7d3acdbd54c325964a79f624b30a98bb800a734b66" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.494773 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mp8nl" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.498688 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2947348b-58cb-41d1-847a-04f09a875aed","Type":"ContainerDied","Data":"72d81ebc0956c519c717b9e6a7edb4f1dac46ba5946ce5ed89ec01418ed98c2c"} Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.498757 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.502095 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2947348b-58cb-41d1-847a-04f09a875aed-scripts" (OuterVolumeSpecName: "scripts") pod "2947348b-58cb-41d1-847a-04f09a875aed" (UID: "2947348b-58cb-41d1-847a-04f09a875aed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.517301 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2947348b-58cb-41d1-847a-04f09a875aed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2947348b-58cb-41d1-847a-04f09a875aed" (UID: "2947348b-58cb-41d1-847a-04f09a875aed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.550467 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2947348b-58cb-41d1-847a-04f09a875aed-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2947348b-58cb-41d1-847a-04f09a875aed" (UID: "2947348b-58cb-41d1-847a-04f09a875aed"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.560537 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2947348b-58cb-41d1-847a-04f09a875aed-config-data" (OuterVolumeSpecName: "config-data") pod "2947348b-58cb-41d1-847a-04f09a875aed" (UID: "2947348b-58cb-41d1-847a-04f09a875aed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.588126 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hq47\" (UniqueName: \"kubernetes.io/projected/2947348b-58cb-41d1-847a-04f09a875aed-kube-api-access-4hq47\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.588161 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2947348b-58cb-41d1-847a-04f09a875aed-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.588174 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2947348b-58cb-41d1-847a-04f09a875aed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.588183 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2947348b-58cb-41d1-847a-04f09a875aed-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.588195 4861 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2947348b-58cb-41d1-847a-04f09a875aed-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.588232 4861 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.630147 4861 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.690273 4861 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.848701 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.869380 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.879744 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 13:30:11 crc kubenswrapper[4861]: E0219 13:30:11.880494 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2947348b-58cb-41d1-847a-04f09a875aed" containerName="glance-log" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.880528 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2947348b-58cb-41d1-847a-04f09a875aed" containerName="glance-log" Feb 19 13:30:11 crc kubenswrapper[4861]: E0219 13:30:11.880558 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf9e804e-7f05-4152-b4ef-856337ebb9a7" containerName="keystone-bootstrap" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.880571 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf9e804e-7f05-4152-b4ef-856337ebb9a7" containerName="keystone-bootstrap" Feb 19 13:30:11 crc kubenswrapper[4861]: E0219 13:30:11.880623 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2947348b-58cb-41d1-847a-04f09a875aed" 
containerName="glance-httpd" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.880637 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2947348b-58cb-41d1-847a-04f09a875aed" containerName="glance-httpd" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.880950 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2947348b-58cb-41d1-847a-04f09a875aed" containerName="glance-log" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.880994 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf9e804e-7f05-4152-b4ef-856337ebb9a7" containerName="keystone-bootstrap" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.881023 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2947348b-58cb-41d1-847a-04f09a875aed" containerName="glance-httpd" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.883141 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.885968 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.886239 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.889293 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.994788 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2947348b-58cb-41d1-847a-04f09a875aed" path="/var/lib/kubelet/pods/2947348b-58cb-41d1-847a-04f09a875aed/volumes" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.999826 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/3bebc25b-fd66-4fca-9a39-d54671b7492d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") " pod="openstack/glance-default-external-api-0" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.999920 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bebc25b-fd66-4fca-9a39-d54671b7492d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") " pod="openstack/glance-default-external-api-0" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.999960 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bebc25b-fd66-4fca-9a39-d54671b7492d-logs\") pod \"glance-default-external-api-0\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") " pod="openstack/glance-default-external-api-0" Feb 19 13:30:11 crc kubenswrapper[4861]: I0219 13:30:11.999987 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") " pod="openstack/glance-default-external-api-0" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.000065 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bebc25b-fd66-4fca-9a39-d54671b7492d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") " pod="openstack/glance-default-external-api-0" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.000099 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-b7jx4\" (UniqueName: \"kubernetes.io/projected/3bebc25b-fd66-4fca-9a39-d54671b7492d-kube-api-access-b7jx4\") pod \"glance-default-external-api-0\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") " pod="openstack/glance-default-external-api-0" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.000127 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bebc25b-fd66-4fca-9a39-d54671b7492d-scripts\") pod \"glance-default-external-api-0\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") " pod="openstack/glance-default-external-api-0" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.000154 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bebc25b-fd66-4fca-9a39-d54671b7492d-config-data\") pod \"glance-default-external-api-0\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") " pod="openstack/glance-default-external-api-0" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.100951 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bebc25b-fd66-4fca-9a39-d54671b7492d-logs\") pod \"glance-default-external-api-0\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") " pod="openstack/glance-default-external-api-0" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.100997 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") " pod="openstack/glance-default-external-api-0" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.101045 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3bebc25b-fd66-4fca-9a39-d54671b7492d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") " pod="openstack/glance-default-external-api-0" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.101074 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7jx4\" (UniqueName: \"kubernetes.io/projected/3bebc25b-fd66-4fca-9a39-d54671b7492d-kube-api-access-b7jx4\") pod \"glance-default-external-api-0\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") " pod="openstack/glance-default-external-api-0" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.101093 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bebc25b-fd66-4fca-9a39-d54671b7492d-scripts\") pod \"glance-default-external-api-0\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") " pod="openstack/glance-default-external-api-0" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.101113 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bebc25b-fd66-4fca-9a39-d54671b7492d-config-data\") pod \"glance-default-external-api-0\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") " pod="openstack/glance-default-external-api-0" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.101179 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3bebc25b-fd66-4fca-9a39-d54671b7492d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") " pod="openstack/glance-default-external-api-0" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.101209 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3bebc25b-fd66-4fca-9a39-d54671b7492d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") " pod="openstack/glance-default-external-api-0" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.101918 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bebc25b-fd66-4fca-9a39-d54671b7492d-logs\") pod \"glance-default-external-api-0\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") " pod="openstack/glance-default-external-api-0" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.101989 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3bebc25b-fd66-4fca-9a39-d54671b7492d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") " pod="openstack/glance-default-external-api-0" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.103086 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.108380 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bebc25b-fd66-4fca-9a39-d54671b7492d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") " pod="openstack/glance-default-external-api-0" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.108598 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bebc25b-fd66-4fca-9a39-d54671b7492d-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") " pod="openstack/glance-default-external-api-0" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.110844 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bebc25b-fd66-4fca-9a39-d54671b7492d-config-data\") pod \"glance-default-external-api-0\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") " pod="openstack/glance-default-external-api-0" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.119257 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bebc25b-fd66-4fca-9a39-d54671b7492d-scripts\") pod \"glance-default-external-api-0\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") " pod="openstack/glance-default-external-api-0" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.122847 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7jx4\" (UniqueName: \"kubernetes.io/projected/3bebc25b-fd66-4fca-9a39-d54671b7492d-kube-api-access-b7jx4\") pod \"glance-default-external-api-0\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") " pod="openstack/glance-default-external-api-0" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.149491 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") " pod="openstack/glance-default-external-api-0" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.216224 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.456376 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-mp8nl"] Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.465414 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-mp8nl"] Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.563029 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-wlwdw"] Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.564906 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wlwdw" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.567673 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.568164 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bsjbt" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.568336 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.569862 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.573833 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.576921 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wlwdw"] Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.615736 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nmvp\" (UniqueName: \"kubernetes.io/projected/f07235b0-c2aa-4225-93e7-a408f7317082-kube-api-access-2nmvp\") pod 
\"keystone-bootstrap-wlwdw\" (UID: \"f07235b0-c2aa-4225-93e7-a408f7317082\") " pod="openstack/keystone-bootstrap-wlwdw" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.615950 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f07235b0-c2aa-4225-93e7-a408f7317082-scripts\") pod \"keystone-bootstrap-wlwdw\" (UID: \"f07235b0-c2aa-4225-93e7-a408f7317082\") " pod="openstack/keystone-bootstrap-wlwdw" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.616120 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f07235b0-c2aa-4225-93e7-a408f7317082-config-data\") pod \"keystone-bootstrap-wlwdw\" (UID: \"f07235b0-c2aa-4225-93e7-a408f7317082\") " pod="openstack/keystone-bootstrap-wlwdw" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.616305 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f07235b0-c2aa-4225-93e7-a408f7317082-combined-ca-bundle\") pod \"keystone-bootstrap-wlwdw\" (UID: \"f07235b0-c2aa-4225-93e7-a408f7317082\") " pod="openstack/keystone-bootstrap-wlwdw" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.616432 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f07235b0-c2aa-4225-93e7-a408f7317082-credential-keys\") pod \"keystone-bootstrap-wlwdw\" (UID: \"f07235b0-c2aa-4225-93e7-a408f7317082\") " pod="openstack/keystone-bootstrap-wlwdw" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.616621 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f07235b0-c2aa-4225-93e7-a408f7317082-fernet-keys\") pod \"keystone-bootstrap-wlwdw\" 
(UID: \"f07235b0-c2aa-4225-93e7-a408f7317082\") " pod="openstack/keystone-bootstrap-wlwdw" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.719535 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f07235b0-c2aa-4225-93e7-a408f7317082-config-data\") pod \"keystone-bootstrap-wlwdw\" (UID: \"f07235b0-c2aa-4225-93e7-a408f7317082\") " pod="openstack/keystone-bootstrap-wlwdw" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.719675 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f07235b0-c2aa-4225-93e7-a408f7317082-combined-ca-bundle\") pod \"keystone-bootstrap-wlwdw\" (UID: \"f07235b0-c2aa-4225-93e7-a408f7317082\") " pod="openstack/keystone-bootstrap-wlwdw" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.719750 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f07235b0-c2aa-4225-93e7-a408f7317082-credential-keys\") pod \"keystone-bootstrap-wlwdw\" (UID: \"f07235b0-c2aa-4225-93e7-a408f7317082\") " pod="openstack/keystone-bootstrap-wlwdw" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.719791 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f07235b0-c2aa-4225-93e7-a408f7317082-fernet-keys\") pod \"keystone-bootstrap-wlwdw\" (UID: \"f07235b0-c2aa-4225-93e7-a408f7317082\") " pod="openstack/keystone-bootstrap-wlwdw" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.719847 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nmvp\" (UniqueName: \"kubernetes.io/projected/f07235b0-c2aa-4225-93e7-a408f7317082-kube-api-access-2nmvp\") pod \"keystone-bootstrap-wlwdw\" (UID: \"f07235b0-c2aa-4225-93e7-a408f7317082\") " pod="openstack/keystone-bootstrap-wlwdw" Feb 
19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.719875 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f07235b0-c2aa-4225-93e7-a408f7317082-scripts\") pod \"keystone-bootstrap-wlwdw\" (UID: \"f07235b0-c2aa-4225-93e7-a408f7317082\") " pod="openstack/keystone-bootstrap-wlwdw" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.724853 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f07235b0-c2aa-4225-93e7-a408f7317082-combined-ca-bundle\") pod \"keystone-bootstrap-wlwdw\" (UID: \"f07235b0-c2aa-4225-93e7-a408f7317082\") " pod="openstack/keystone-bootstrap-wlwdw" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.725219 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f07235b0-c2aa-4225-93e7-a408f7317082-scripts\") pod \"keystone-bootstrap-wlwdw\" (UID: \"f07235b0-c2aa-4225-93e7-a408f7317082\") " pod="openstack/keystone-bootstrap-wlwdw" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.728888 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f07235b0-c2aa-4225-93e7-a408f7317082-config-data\") pod \"keystone-bootstrap-wlwdw\" (UID: \"f07235b0-c2aa-4225-93e7-a408f7317082\") " pod="openstack/keystone-bootstrap-wlwdw" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.732207 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f07235b0-c2aa-4225-93e7-a408f7317082-fernet-keys\") pod \"keystone-bootstrap-wlwdw\" (UID: \"f07235b0-c2aa-4225-93e7-a408f7317082\") " pod="openstack/keystone-bootstrap-wlwdw" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.740017 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/f07235b0-c2aa-4225-93e7-a408f7317082-credential-keys\") pod \"keystone-bootstrap-wlwdw\" (UID: \"f07235b0-c2aa-4225-93e7-a408f7317082\") " pod="openstack/keystone-bootstrap-wlwdw" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.742405 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nmvp\" (UniqueName: \"kubernetes.io/projected/f07235b0-c2aa-4225-93e7-a408f7317082-kube-api-access-2nmvp\") pod \"keystone-bootstrap-wlwdw\" (UID: \"f07235b0-c2aa-4225-93e7-a408f7317082\") " pod="openstack/keystone-bootstrap-wlwdw" Feb 19 13:30:12 crc kubenswrapper[4861]: I0219 13:30:12.890486 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wlwdw" Feb 19 13:30:13 crc kubenswrapper[4861]: I0219 13:30:13.993528 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf9e804e-7f05-4152-b4ef-856337ebb9a7" path="/var/lib/kubelet/pods/bf9e804e-7f05-4152-b4ef-856337ebb9a7/volumes" Feb 19 13:30:17 crc kubenswrapper[4861]: I0219 13:30:17.368945 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-689df5d84f-bjwqj" podUID="638f49e9-45bb-4106-a3bc-a53a23fbc313" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: i/o timeout" Feb 19 13:30:19 crc kubenswrapper[4861]: I0219 13:30:19.307048 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-689df5d84f-bjwqj" Feb 19 13:30:19 crc kubenswrapper[4861]: I0219 13:30:19.356848 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/638f49e9-45bb-4106-a3bc-a53a23fbc313-config\") pod \"638f49e9-45bb-4106-a3bc-a53a23fbc313\" (UID: \"638f49e9-45bb-4106-a3bc-a53a23fbc313\") " Feb 19 13:30:19 crc kubenswrapper[4861]: I0219 13:30:19.357115 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9gxk\" (UniqueName: \"kubernetes.io/projected/638f49e9-45bb-4106-a3bc-a53a23fbc313-kube-api-access-f9gxk\") pod \"638f49e9-45bb-4106-a3bc-a53a23fbc313\" (UID: \"638f49e9-45bb-4106-a3bc-a53a23fbc313\") " Feb 19 13:30:19 crc kubenswrapper[4861]: I0219 13:30:19.357241 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/638f49e9-45bb-4106-a3bc-a53a23fbc313-dns-svc\") pod \"638f49e9-45bb-4106-a3bc-a53a23fbc313\" (UID: \"638f49e9-45bb-4106-a3bc-a53a23fbc313\") " Feb 19 13:30:19 crc kubenswrapper[4861]: I0219 13:30:19.357323 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/638f49e9-45bb-4106-a3bc-a53a23fbc313-ovsdbserver-nb\") pod \"638f49e9-45bb-4106-a3bc-a53a23fbc313\" (UID: \"638f49e9-45bb-4106-a3bc-a53a23fbc313\") " Feb 19 13:30:19 crc kubenswrapper[4861]: I0219 13:30:19.357409 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/638f49e9-45bb-4106-a3bc-a53a23fbc313-ovsdbserver-sb\") pod \"638f49e9-45bb-4106-a3bc-a53a23fbc313\" (UID: \"638f49e9-45bb-4106-a3bc-a53a23fbc313\") " Feb 19 13:30:19 crc kubenswrapper[4861]: I0219 13:30:19.372536 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/638f49e9-45bb-4106-a3bc-a53a23fbc313-kube-api-access-f9gxk" (OuterVolumeSpecName: "kube-api-access-f9gxk") pod "638f49e9-45bb-4106-a3bc-a53a23fbc313" (UID: "638f49e9-45bb-4106-a3bc-a53a23fbc313"). InnerVolumeSpecName "kube-api-access-f9gxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:30:19 crc kubenswrapper[4861]: I0219 13:30:19.429514 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/638f49e9-45bb-4106-a3bc-a53a23fbc313-config" (OuterVolumeSpecName: "config") pod "638f49e9-45bb-4106-a3bc-a53a23fbc313" (UID: "638f49e9-45bb-4106-a3bc-a53a23fbc313"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:30:19 crc kubenswrapper[4861]: I0219 13:30:19.430544 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/638f49e9-45bb-4106-a3bc-a53a23fbc313-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "638f49e9-45bb-4106-a3bc-a53a23fbc313" (UID: "638f49e9-45bb-4106-a3bc-a53a23fbc313"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:30:19 crc kubenswrapper[4861]: I0219 13:30:19.431995 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/638f49e9-45bb-4106-a3bc-a53a23fbc313-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "638f49e9-45bb-4106-a3bc-a53a23fbc313" (UID: "638f49e9-45bb-4106-a3bc-a53a23fbc313"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:30:19 crc kubenswrapper[4861]: I0219 13:30:19.442448 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/638f49e9-45bb-4106-a3bc-a53a23fbc313-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "638f49e9-45bb-4106-a3bc-a53a23fbc313" (UID: "638f49e9-45bb-4106-a3bc-a53a23fbc313"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:30:19 crc kubenswrapper[4861]: I0219 13:30:19.459481 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9gxk\" (UniqueName: \"kubernetes.io/projected/638f49e9-45bb-4106-a3bc-a53a23fbc313-kube-api-access-f9gxk\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:19 crc kubenswrapper[4861]: I0219 13:30:19.459524 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/638f49e9-45bb-4106-a3bc-a53a23fbc313-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:19 crc kubenswrapper[4861]: I0219 13:30:19.459538 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/638f49e9-45bb-4106-a3bc-a53a23fbc313-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:19 crc kubenswrapper[4861]: I0219 13:30:19.459550 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/638f49e9-45bb-4106-a3bc-a53a23fbc313-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:19 crc kubenswrapper[4861]: I0219 13:30:19.459565 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/638f49e9-45bb-4106-a3bc-a53a23fbc313-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:19 crc kubenswrapper[4861]: I0219 13:30:19.577213 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689df5d84f-bjwqj" event={"ID":"638f49e9-45bb-4106-a3bc-a53a23fbc313","Type":"ContainerDied","Data":"d46d274b3dbdb10e32d9cf12e7ec50461a77d2332244892a0643fd6089bbeedd"} Feb 19 13:30:19 crc kubenswrapper[4861]: I0219 13:30:19.577291 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-689df5d84f-bjwqj" Feb 19 13:30:19 crc kubenswrapper[4861]: I0219 13:30:19.627186 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-689df5d84f-bjwqj"] Feb 19 13:30:19 crc kubenswrapper[4861]: I0219 13:30:19.637750 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-689df5d84f-bjwqj"] Feb 19 13:30:19 crc kubenswrapper[4861]: I0219 13:30:19.995694 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="638f49e9-45bb-4106-a3bc-a53a23fbc313" path="/var/lib/kubelet/pods/638f49e9-45bb-4106-a3bc-a53a23fbc313/volumes" Feb 19 13:30:20 crc kubenswrapper[4861]: I0219 13:30:20.340599 4861 scope.go:117] "RemoveContainer" containerID="4a26d47a629b1ad34a091456cfa81e91753b98fe3db2b6482b15762b37b94603" Feb 19 13:30:20 crc kubenswrapper[4861]: E0219 13:30:20.369827 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b" Feb 19 13:30:20 crc kubenswrapper[4861]: E0219 13:30:20.370135 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qphzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-zc7qs_openstack(cfab12c1-cdb5-415f-8290-4d057a940b1a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 13:30:20 crc kubenswrapper[4861]: E0219 13:30:20.371244 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-zc7qs" podUID="cfab12c1-cdb5-415f-8290-4d057a940b1a" Feb 19 13:30:20 crc kubenswrapper[4861]: I0219 13:30:20.607521 4861 scope.go:117] "RemoveContainer" containerID="18c64e3568d9413fa58baa0a3b38b4802149203fd15671884b955ce7502af542" Feb 19 13:30:20 crc kubenswrapper[4861]: E0219 13:30:20.618870 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b\\\"\"" pod="openstack/cinder-db-sync-zc7qs" podUID="cfab12c1-cdb5-415f-8290-4d057a940b1a" Feb 19 13:30:20 crc kubenswrapper[4861]: I0219 13:30:20.643668 4861 scope.go:117] "RemoveContainer" containerID="9d8c0c8c033da35a3ec5c32b901fdd2c3352e056bc7f45dfe21bdd1628d6de3e" Feb 19 13:30:20 crc kubenswrapper[4861]: I0219 13:30:20.674522 4861 scope.go:117] "RemoveContainer" containerID="5cd57e65f88464752a92643ee3e7cc60742238c5855798538a438e99542da273" Feb 19 13:30:20 crc kubenswrapper[4861]: I0219 13:30:20.700917 4861 scope.go:117] "RemoveContainer" containerID="214860af9139ec2929a743eb13e1ab9faea93679a7e4f0231949c3c1db191f22" Feb 19 13:30:20 crc kubenswrapper[4861]: W0219 13:30:20.867554 4861 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf07235b0_c2aa_4225_93e7_a408f7317082.slice/crio-c8d996ed27c23d66f062300c17a08ded5f81eee762a05dadadfb653917f30c58 WatchSource:0}: Error finding container c8d996ed27c23d66f062300c17a08ded5f81eee762a05dadadfb653917f30c58: Status 404 returned error can't find the container with id c8d996ed27c23d66f062300c17a08ded5f81eee762a05dadadfb653917f30c58 Feb 19 13:30:20 crc kubenswrapper[4861]: I0219 13:30:20.870281 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wlwdw"] Feb 19 13:30:20 crc kubenswrapper[4861]: I0219 13:30:20.947683 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 13:30:20 crc kubenswrapper[4861]: W0219 13:30:20.950288 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bebc25b_fd66_4fca_9a39_d54671b7492d.slice/crio-40bfd4c50d461cd0f765a1acb134bfc938422607370f99e1214bf615f877c4c4 WatchSource:0}: Error finding container 40bfd4c50d461cd0f765a1acb134bfc938422607370f99e1214bf615f877c4c4: Status 404 returned error can't find the container with id 40bfd4c50d461cd0f765a1acb134bfc938422607370f99e1214bf615f877c4c4 Feb 19 13:30:21 crc kubenswrapper[4861]: I0219 13:30:21.077051 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 13:30:21 crc kubenswrapper[4861]: W0219 13:30:21.082838 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode69db72c_2c51_47ba_b2a1_037d6b259176.slice/crio-04b78d7c58cef08bd7aa4566c7122087bddee8a5883fbdf28664670b483dec7b WatchSource:0}: Error finding container 04b78d7c58cef08bd7aa4566c7122087bddee8a5883fbdf28664670b483dec7b: Status 404 returned error can't find the container with id 
04b78d7c58cef08bd7aa4566c7122087bddee8a5883fbdf28664670b483dec7b Feb 19 13:30:21 crc kubenswrapper[4861]: I0219 13:30:21.628078 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wlwdw" event={"ID":"f07235b0-c2aa-4225-93e7-a408f7317082","Type":"ContainerStarted","Data":"70f31a21639362c5c9feca912543de51968f17736468113ba6cce7a5e966e4ae"} Feb 19 13:30:21 crc kubenswrapper[4861]: I0219 13:30:21.628124 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wlwdw" event={"ID":"f07235b0-c2aa-4225-93e7-a408f7317082","Type":"ContainerStarted","Data":"c8d996ed27c23d66f062300c17a08ded5f81eee762a05dadadfb653917f30c58"} Feb 19 13:30:21 crc kubenswrapper[4861]: I0219 13:30:21.634357 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-f9zrp" event={"ID":"cbc6223b-76c7-40be-8245-81263bc7c6c6","Type":"ContainerStarted","Data":"544ce3ea2021ed17c32df42f201533600ccba00f725b626c9de59972338ccb90"} Feb 19 13:30:21 crc kubenswrapper[4861]: I0219 13:30:21.640869 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hblgk" event={"ID":"0c15a88f-af04-496c-bc54-f001ba15580a","Type":"ContainerStarted","Data":"46e6d5cc6944a8d2d07aeb6ab0574a804c9017f54cd371274876c7a783498beb"} Feb 19 13:30:21 crc kubenswrapper[4861]: I0219 13:30:21.643568 4861 generic.go:334] "Generic (PLEG): container finished" podID="48707538-eeb6-42d9-918f-6b22a07cae71" containerID="8b91b15fff839ba5df5868715813fffaafecb640a519d3a5cb4a17ec2708a678" exitCode=0 Feb 19 13:30:21 crc kubenswrapper[4861]: I0219 13:30:21.643621 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kwkgq" event={"ID":"48707538-eeb6-42d9-918f-6b22a07cae71","Type":"ContainerDied","Data":"8b91b15fff839ba5df5868715813fffaafecb640a519d3a5cb4a17ec2708a678"} Feb 19 13:30:21 crc kubenswrapper[4861]: I0219 13:30:21.668345 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"e69db72c-2c51-47ba-b2a1-037d6b259176","Type":"ContainerStarted","Data":"04b78d7c58cef08bd7aa4566c7122087bddee8a5883fbdf28664670b483dec7b"} Feb 19 13:30:21 crc kubenswrapper[4861]: I0219 13:30:21.679475 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-wlwdw" podStartSLOduration=9.679451586 podStartE2EDuration="9.679451586s" podCreationTimestamp="2026-02-19 13:30:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:30:21.669369154 +0000 UTC m=+1236.330472382" watchObservedRunningTime="2026-02-19 13:30:21.679451586 +0000 UTC m=+1236.340554814" Feb 19 13:30:21 crc kubenswrapper[4861]: I0219 13:30:21.685815 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afbe9acc-8cc5-48b6-9515-61da01b73fcd","Type":"ContainerStarted","Data":"7e849cd5e5366538157296d1d75c1308f3ed372130d98238cf173c54acc5b242"} Feb 19 13:30:21 crc kubenswrapper[4861]: I0219 13:30:21.698622 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3bebc25b-fd66-4fca-9a39-d54671b7492d","Type":"ContainerStarted","Data":"5b5531cdf673855e434a4cf5ab1491222c90ed35fa38ff9e1bf2292d59c66214"} Feb 19 13:30:21 crc kubenswrapper[4861]: I0219 13:30:21.698680 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3bebc25b-fd66-4fca-9a39-d54671b7492d","Type":"ContainerStarted","Data":"40bfd4c50d461cd0f765a1acb134bfc938422607370f99e1214bf615f877c4c4"} Feb 19 13:30:21 crc kubenswrapper[4861]: I0219 13:30:21.714685 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-f9zrp" podStartSLOduration=3.486377934 podStartE2EDuration="23.714662474s" podCreationTimestamp="2026-02-19 13:29:58 +0000 UTC" 
firstStartedPulling="2026-02-19 13:30:00.092318142 +0000 UTC m=+1214.753421360" lastFinishedPulling="2026-02-19 13:30:20.320602662 +0000 UTC m=+1234.981705900" observedRunningTime="2026-02-19 13:30:21.708296123 +0000 UTC m=+1236.369399381" watchObservedRunningTime="2026-02-19 13:30:21.714662474 +0000 UTC m=+1236.375765702" Feb 19 13:30:21 crc kubenswrapper[4861]: I0219 13:30:21.736633 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-hblgk" podStartSLOduration=3.326945981 podStartE2EDuration="23.736611235s" podCreationTimestamp="2026-02-19 13:29:58 +0000 UTC" firstStartedPulling="2026-02-19 13:29:59.975954809 +0000 UTC m=+1214.637058037" lastFinishedPulling="2026-02-19 13:30:20.385620063 +0000 UTC m=+1235.046723291" observedRunningTime="2026-02-19 13:30:21.73083469 +0000 UTC m=+1236.391937938" watchObservedRunningTime="2026-02-19 13:30:21.736611235 +0000 UTC m=+1236.397714463" Feb 19 13:30:22 crc kubenswrapper[4861]: I0219 13:30:22.370510 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-689df5d84f-bjwqj" podUID="638f49e9-45bb-4106-a3bc-a53a23fbc313" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: i/o timeout" Feb 19 13:30:22 crc kubenswrapper[4861]: I0219 13:30:22.725094 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afbe9acc-8cc5-48b6-9515-61da01b73fcd","Type":"ContainerStarted","Data":"563e6034bf68d39843ed6d4c780be5dadc7fac915cb11be14284cd0c542973b1"} Feb 19 13:30:22 crc kubenswrapper[4861]: I0219 13:30:22.728723 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3bebc25b-fd66-4fca-9a39-d54671b7492d","Type":"ContainerStarted","Data":"5269f73f32021b13592b2d4e295c1866edb2dd35493225237d3e3049293a6e74"} Feb 19 13:30:22 crc kubenswrapper[4861]: I0219 13:30:22.733361 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"e69db72c-2c51-47ba-b2a1-037d6b259176","Type":"ContainerStarted","Data":"6a550e43b1af7f7d21f03635868d9dd6ef349c47147f1011f4a120cce15d8b69"} Feb 19 13:30:22 crc kubenswrapper[4861]: I0219 13:30:22.758919 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=11.758897745 podStartE2EDuration="11.758897745s" podCreationTimestamp="2026-02-19 13:30:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:30:22.755506414 +0000 UTC m=+1237.416609662" watchObservedRunningTime="2026-02-19 13:30:22.758897745 +0000 UTC m=+1237.420000973" Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.163461 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kwkgq" Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.229058 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2l28\" (UniqueName: \"kubernetes.io/projected/48707538-eeb6-42d9-918f-6b22a07cae71-kube-api-access-v2l28\") pod \"48707538-eeb6-42d9-918f-6b22a07cae71\" (UID: \"48707538-eeb6-42d9-918f-6b22a07cae71\") " Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.229193 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48707538-eeb6-42d9-918f-6b22a07cae71-combined-ca-bundle\") pod \"48707538-eeb6-42d9-918f-6b22a07cae71\" (UID: \"48707538-eeb6-42d9-918f-6b22a07cae71\") " Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.229299 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/48707538-eeb6-42d9-918f-6b22a07cae71-config\") pod \"48707538-eeb6-42d9-918f-6b22a07cae71\" (UID: 
\"48707538-eeb6-42d9-918f-6b22a07cae71\") " Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.237054 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48707538-eeb6-42d9-918f-6b22a07cae71-kube-api-access-v2l28" (OuterVolumeSpecName: "kube-api-access-v2l28") pod "48707538-eeb6-42d9-918f-6b22a07cae71" (UID: "48707538-eeb6-42d9-918f-6b22a07cae71"). InnerVolumeSpecName "kube-api-access-v2l28". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.263866 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48707538-eeb6-42d9-918f-6b22a07cae71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48707538-eeb6-42d9-918f-6b22a07cae71" (UID: "48707538-eeb6-42d9-918f-6b22a07cae71"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.267515 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48707538-eeb6-42d9-918f-6b22a07cae71-config" (OuterVolumeSpecName: "config") pod "48707538-eeb6-42d9-918f-6b22a07cae71" (UID: "48707538-eeb6-42d9-918f-6b22a07cae71"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.331505 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2l28\" (UniqueName: \"kubernetes.io/projected/48707538-eeb6-42d9-918f-6b22a07cae71-kube-api-access-v2l28\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.331547 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48707538-eeb6-42d9-918f-6b22a07cae71-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.331559 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/48707538-eeb6-42d9-918f-6b22a07cae71-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.747136 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e69db72c-2c51-47ba-b2a1-037d6b259176","Type":"ContainerStarted","Data":"c862500249e4abdc0b9a4e4969de8d52ba9eeb7e3d69d071ec6f4c12254fee12"} Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.764228 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-kwkgq" Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.764386 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kwkgq" event={"ID":"48707538-eeb6-42d9-918f-6b22a07cae71","Type":"ContainerDied","Data":"9cbc88edcdf56e4495d7e3c93417166fc35b0627e43510b726cb33e1bf7a017c"} Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.764431 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cbc88edcdf56e4495d7e3c93417166fc35b0627e43510b726cb33e1bf7a017c" Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.790449 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=15.790430315 podStartE2EDuration="15.790430315s" podCreationTimestamp="2026-02-19 13:30:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:30:23.785690298 +0000 UTC m=+1238.446793526" watchObservedRunningTime="2026-02-19 13:30:23.790430315 +0000 UTC m=+1238.451533543" Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.867635 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77d55b9c69-4bvnr"] Feb 19 13:30:23 crc kubenswrapper[4861]: E0219 13:30:23.867984 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="638f49e9-45bb-4106-a3bc-a53a23fbc313" containerName="dnsmasq-dns" Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.867996 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="638f49e9-45bb-4106-a3bc-a53a23fbc313" containerName="dnsmasq-dns" Feb 19 13:30:23 crc kubenswrapper[4861]: E0219 13:30:23.868010 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="638f49e9-45bb-4106-a3bc-a53a23fbc313" containerName="init" Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.868016 4861 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="638f49e9-45bb-4106-a3bc-a53a23fbc313" containerName="init" Feb 19 13:30:23 crc kubenswrapper[4861]: E0219 13:30:23.868025 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48707538-eeb6-42d9-918f-6b22a07cae71" containerName="neutron-db-sync" Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.868033 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="48707538-eeb6-42d9-918f-6b22a07cae71" containerName="neutron-db-sync" Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.868185 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="638f49e9-45bb-4106-a3bc-a53a23fbc313" containerName="dnsmasq-dns" Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.868202 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="48707538-eeb6-42d9-918f-6b22a07cae71" containerName="neutron-db-sync" Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.870353 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77d55b9c69-4bvnr" Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.896790 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77d55b9c69-4bvnr"] Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.942493 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e268516-d31c-4e80-884b-3fc40f8ab3d8-dns-svc\") pod \"dnsmasq-dns-77d55b9c69-4bvnr\" (UID: \"1e268516-d31c-4e80-884b-3fc40f8ab3d8\") " pod="openstack/dnsmasq-dns-77d55b9c69-4bvnr" Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.942552 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e268516-d31c-4e80-884b-3fc40f8ab3d8-ovsdbserver-nb\") pod \"dnsmasq-dns-77d55b9c69-4bvnr\" (UID: \"1e268516-d31c-4e80-884b-3fc40f8ab3d8\") " 
pod="openstack/dnsmasq-dns-77d55b9c69-4bvnr" Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.942578 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpk2k\" (UniqueName: \"kubernetes.io/projected/1e268516-d31c-4e80-884b-3fc40f8ab3d8-kube-api-access-mpk2k\") pod \"dnsmasq-dns-77d55b9c69-4bvnr\" (UID: \"1e268516-d31c-4e80-884b-3fc40f8ab3d8\") " pod="openstack/dnsmasq-dns-77d55b9c69-4bvnr" Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.942623 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e268516-d31c-4e80-884b-3fc40f8ab3d8-dns-swift-storage-0\") pod \"dnsmasq-dns-77d55b9c69-4bvnr\" (UID: \"1e268516-d31c-4e80-884b-3fc40f8ab3d8\") " pod="openstack/dnsmasq-dns-77d55b9c69-4bvnr" Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.942666 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e268516-d31c-4e80-884b-3fc40f8ab3d8-config\") pod \"dnsmasq-dns-77d55b9c69-4bvnr\" (UID: \"1e268516-d31c-4e80-884b-3fc40f8ab3d8\") " pod="openstack/dnsmasq-dns-77d55b9c69-4bvnr" Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.942726 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e268516-d31c-4e80-884b-3fc40f8ab3d8-ovsdbserver-sb\") pod \"dnsmasq-dns-77d55b9c69-4bvnr\" (UID: \"1e268516-d31c-4e80-884b-3fc40f8ab3d8\") " pod="openstack/dnsmasq-dns-77d55b9c69-4bvnr" Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.949516 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5f6c7cb7cb-j5rts"] Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.951467 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5f6c7cb7cb-j5rts" Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.955597 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.955916 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.956063 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4km8j" Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.957758 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 19 13:30:23 crc kubenswrapper[4861]: I0219 13:30:23.964785 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f6c7cb7cb-j5rts"] Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.047061 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e268516-d31c-4e80-884b-3fc40f8ab3d8-config\") pod \"dnsmasq-dns-77d55b9c69-4bvnr\" (UID: \"1e268516-d31c-4e80-884b-3fc40f8ab3d8\") " pod="openstack/dnsmasq-dns-77d55b9c69-4bvnr" Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.047654 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/274eb243-db3d-4ad8-b2cd-2ff23017ac82-httpd-config\") pod \"neutron-5f6c7cb7cb-j5rts\" (UID: \"274eb243-db3d-4ad8-b2cd-2ff23017ac82\") " pod="openstack/neutron-5f6c7cb7cb-j5rts" Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.047694 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/274eb243-db3d-4ad8-b2cd-2ff23017ac82-ovndb-tls-certs\") pod \"neutron-5f6c7cb7cb-j5rts\" (UID: 
\"274eb243-db3d-4ad8-b2cd-2ff23017ac82\") " pod="openstack/neutron-5f6c7cb7cb-j5rts" Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.047750 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/274eb243-db3d-4ad8-b2cd-2ff23017ac82-config\") pod \"neutron-5f6c7cb7cb-j5rts\" (UID: \"274eb243-db3d-4ad8-b2cd-2ff23017ac82\") " pod="openstack/neutron-5f6c7cb7cb-j5rts" Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.047767 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtt45\" (UniqueName: \"kubernetes.io/projected/274eb243-db3d-4ad8-b2cd-2ff23017ac82-kube-api-access-xtt45\") pod \"neutron-5f6c7cb7cb-j5rts\" (UID: \"274eb243-db3d-4ad8-b2cd-2ff23017ac82\") " pod="openstack/neutron-5f6c7cb7cb-j5rts" Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.047801 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e268516-d31c-4e80-884b-3fc40f8ab3d8-ovsdbserver-sb\") pod \"dnsmasq-dns-77d55b9c69-4bvnr\" (UID: \"1e268516-d31c-4e80-884b-3fc40f8ab3d8\") " pod="openstack/dnsmasq-dns-77d55b9c69-4bvnr" Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.047853 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/274eb243-db3d-4ad8-b2cd-2ff23017ac82-combined-ca-bundle\") pod \"neutron-5f6c7cb7cb-j5rts\" (UID: \"274eb243-db3d-4ad8-b2cd-2ff23017ac82\") " pod="openstack/neutron-5f6c7cb7cb-j5rts" Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.047891 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e268516-d31c-4e80-884b-3fc40f8ab3d8-dns-svc\") pod \"dnsmasq-dns-77d55b9c69-4bvnr\" (UID: 
\"1e268516-d31c-4e80-884b-3fc40f8ab3d8\") " pod="openstack/dnsmasq-dns-77d55b9c69-4bvnr" Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.047923 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e268516-d31c-4e80-884b-3fc40f8ab3d8-ovsdbserver-nb\") pod \"dnsmasq-dns-77d55b9c69-4bvnr\" (UID: \"1e268516-d31c-4e80-884b-3fc40f8ab3d8\") " pod="openstack/dnsmasq-dns-77d55b9c69-4bvnr" Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.047951 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpk2k\" (UniqueName: \"kubernetes.io/projected/1e268516-d31c-4e80-884b-3fc40f8ab3d8-kube-api-access-mpk2k\") pod \"dnsmasq-dns-77d55b9c69-4bvnr\" (UID: \"1e268516-d31c-4e80-884b-3fc40f8ab3d8\") " pod="openstack/dnsmasq-dns-77d55b9c69-4bvnr" Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.047995 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e268516-d31c-4e80-884b-3fc40f8ab3d8-dns-swift-storage-0\") pod \"dnsmasq-dns-77d55b9c69-4bvnr\" (UID: \"1e268516-d31c-4e80-884b-3fc40f8ab3d8\") " pod="openstack/dnsmasq-dns-77d55b9c69-4bvnr" Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.048353 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e268516-d31c-4e80-884b-3fc40f8ab3d8-config\") pod \"dnsmasq-dns-77d55b9c69-4bvnr\" (UID: \"1e268516-d31c-4e80-884b-3fc40f8ab3d8\") " pod="openstack/dnsmasq-dns-77d55b9c69-4bvnr" Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.049035 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e268516-d31c-4e80-884b-3fc40f8ab3d8-ovsdbserver-sb\") pod \"dnsmasq-dns-77d55b9c69-4bvnr\" (UID: \"1e268516-d31c-4e80-884b-3fc40f8ab3d8\") " 
pod="openstack/dnsmasq-dns-77d55b9c69-4bvnr" Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.050112 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e268516-d31c-4e80-884b-3fc40f8ab3d8-ovsdbserver-nb\") pod \"dnsmasq-dns-77d55b9c69-4bvnr\" (UID: \"1e268516-d31c-4e80-884b-3fc40f8ab3d8\") " pod="openstack/dnsmasq-dns-77d55b9c69-4bvnr" Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.050244 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e268516-d31c-4e80-884b-3fc40f8ab3d8-dns-swift-storage-0\") pod \"dnsmasq-dns-77d55b9c69-4bvnr\" (UID: \"1e268516-d31c-4e80-884b-3fc40f8ab3d8\") " pod="openstack/dnsmasq-dns-77d55b9c69-4bvnr" Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.053459 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e268516-d31c-4e80-884b-3fc40f8ab3d8-dns-svc\") pod \"dnsmasq-dns-77d55b9c69-4bvnr\" (UID: \"1e268516-d31c-4e80-884b-3fc40f8ab3d8\") " pod="openstack/dnsmasq-dns-77d55b9c69-4bvnr" Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.072932 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpk2k\" (UniqueName: \"kubernetes.io/projected/1e268516-d31c-4e80-884b-3fc40f8ab3d8-kube-api-access-mpk2k\") pod \"dnsmasq-dns-77d55b9c69-4bvnr\" (UID: \"1e268516-d31c-4e80-884b-3fc40f8ab3d8\") " pod="openstack/dnsmasq-dns-77d55b9c69-4bvnr" Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.149511 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/274eb243-db3d-4ad8-b2cd-2ff23017ac82-httpd-config\") pod \"neutron-5f6c7cb7cb-j5rts\" (UID: \"274eb243-db3d-4ad8-b2cd-2ff23017ac82\") " pod="openstack/neutron-5f6c7cb7cb-j5rts" Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 
13:30:24.149584 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/274eb243-db3d-4ad8-b2cd-2ff23017ac82-ovndb-tls-certs\") pod \"neutron-5f6c7cb7cb-j5rts\" (UID: \"274eb243-db3d-4ad8-b2cd-2ff23017ac82\") " pod="openstack/neutron-5f6c7cb7cb-j5rts" Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.149629 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/274eb243-db3d-4ad8-b2cd-2ff23017ac82-config\") pod \"neutron-5f6c7cb7cb-j5rts\" (UID: \"274eb243-db3d-4ad8-b2cd-2ff23017ac82\") " pod="openstack/neutron-5f6c7cb7cb-j5rts" Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.149651 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtt45\" (UniqueName: \"kubernetes.io/projected/274eb243-db3d-4ad8-b2cd-2ff23017ac82-kube-api-access-xtt45\") pod \"neutron-5f6c7cb7cb-j5rts\" (UID: \"274eb243-db3d-4ad8-b2cd-2ff23017ac82\") " pod="openstack/neutron-5f6c7cb7cb-j5rts" Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.149698 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/274eb243-db3d-4ad8-b2cd-2ff23017ac82-combined-ca-bundle\") pod \"neutron-5f6c7cb7cb-j5rts\" (UID: \"274eb243-db3d-4ad8-b2cd-2ff23017ac82\") " pod="openstack/neutron-5f6c7cb7cb-j5rts" Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.154994 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/274eb243-db3d-4ad8-b2cd-2ff23017ac82-httpd-config\") pod \"neutron-5f6c7cb7cb-j5rts\" (UID: \"274eb243-db3d-4ad8-b2cd-2ff23017ac82\") " pod="openstack/neutron-5f6c7cb7cb-j5rts" Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.155286 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/274eb243-db3d-4ad8-b2cd-2ff23017ac82-config\") pod \"neutron-5f6c7cb7cb-j5rts\" (UID: \"274eb243-db3d-4ad8-b2cd-2ff23017ac82\") " pod="openstack/neutron-5f6c7cb7cb-j5rts" Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.155989 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/274eb243-db3d-4ad8-b2cd-2ff23017ac82-ovndb-tls-certs\") pod \"neutron-5f6c7cb7cb-j5rts\" (UID: \"274eb243-db3d-4ad8-b2cd-2ff23017ac82\") " pod="openstack/neutron-5f6c7cb7cb-j5rts" Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.159730 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/274eb243-db3d-4ad8-b2cd-2ff23017ac82-combined-ca-bundle\") pod \"neutron-5f6c7cb7cb-j5rts\" (UID: \"274eb243-db3d-4ad8-b2cd-2ff23017ac82\") " pod="openstack/neutron-5f6c7cb7cb-j5rts" Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.166139 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtt45\" (UniqueName: \"kubernetes.io/projected/274eb243-db3d-4ad8-b2cd-2ff23017ac82-kube-api-access-xtt45\") pod \"neutron-5f6c7cb7cb-j5rts\" (UID: \"274eb243-db3d-4ad8-b2cd-2ff23017ac82\") " pod="openstack/neutron-5f6c7cb7cb-j5rts" Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.206553 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77d55b9c69-4bvnr" Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.277655 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5f6c7cb7cb-j5rts" Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.722368 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77d55b9c69-4bvnr"] Feb 19 13:30:24 crc kubenswrapper[4861]: W0219 13:30:24.729379 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e268516_d31c_4e80_884b_3fc40f8ab3d8.slice/crio-223e2347315cce19d0efbc01d17249c3f0c092636c2285f4ed65652275c5fbb2 WatchSource:0}: Error finding container 223e2347315cce19d0efbc01d17249c3f0c092636c2285f4ed65652275c5fbb2: Status 404 returned error can't find the container with id 223e2347315cce19d0efbc01d17249c3f0c092636c2285f4ed65652275c5fbb2 Feb 19 13:30:24 crc kubenswrapper[4861]: E0219 13:30:24.744781 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf07235b0_c2aa_4225_93e7_a408f7317082.slice/crio-conmon-70f31a21639362c5c9feca912543de51968f17736468113ba6cce7a5e966e4ae.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf07235b0_c2aa_4225_93e7_a408f7317082.slice/crio-70f31a21639362c5c9feca912543de51968f17736468113ba6cce7a5e966e4ae.scope\": RecentStats: unable to find data in memory cache]" Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.776047 4861 generic.go:334] "Generic (PLEG): container finished" podID="0c15a88f-af04-496c-bc54-f001ba15580a" containerID="46e6d5cc6944a8d2d07aeb6ab0574a804c9017f54cd371274876c7a783498beb" exitCode=0 Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.776551 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hblgk" event={"ID":"0c15a88f-af04-496c-bc54-f001ba15580a","Type":"ContainerDied","Data":"46e6d5cc6944a8d2d07aeb6ab0574a804c9017f54cd371274876c7a783498beb"} Feb 
19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.778576 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77d55b9c69-4bvnr" event={"ID":"1e268516-d31c-4e80-884b-3fc40f8ab3d8","Type":"ContainerStarted","Data":"223e2347315cce19d0efbc01d17249c3f0c092636c2285f4ed65652275c5fbb2"} Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.779956 4861 generic.go:334] "Generic (PLEG): container finished" podID="f07235b0-c2aa-4225-93e7-a408f7317082" containerID="70f31a21639362c5c9feca912543de51968f17736468113ba6cce7a5e966e4ae" exitCode=0 Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.779987 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wlwdw" event={"ID":"f07235b0-c2aa-4225-93e7-a408f7317082","Type":"ContainerDied","Data":"70f31a21639362c5c9feca912543de51968f17736468113ba6cce7a5e966e4ae"} Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.781224 4861 generic.go:334] "Generic (PLEG): container finished" podID="cbc6223b-76c7-40be-8245-81263bc7c6c6" containerID="544ce3ea2021ed17c32df42f201533600ccba00f725b626c9de59972338ccb90" exitCode=0 Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.782257 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-f9zrp" event={"ID":"cbc6223b-76c7-40be-8245-81263bc7c6c6","Type":"ContainerDied","Data":"544ce3ea2021ed17c32df42f201533600ccba00f725b626c9de59972338ccb90"} Feb 19 13:30:24 crc kubenswrapper[4861]: I0219 13:30:24.917785 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f6c7cb7cb-j5rts"] Feb 19 13:30:24 crc kubenswrapper[4861]: W0219 13:30:24.920627 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod274eb243_db3d_4ad8_b2cd_2ff23017ac82.slice/crio-14f7e3c2a46854cc6166efc5500aa811c8356a8b962e981c0dbe82271f0ce5aa WatchSource:0}: Error finding container 
14f7e3c2a46854cc6166efc5500aa811c8356a8b962e981c0dbe82271f0ce5aa: Status 404 returned error can't find the container with id 14f7e3c2a46854cc6166efc5500aa811c8356a8b962e981c0dbe82271f0ce5aa Feb 19 13:30:25 crc kubenswrapper[4861]: I0219 13:30:25.790668 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f6c7cb7cb-j5rts" event={"ID":"274eb243-db3d-4ad8-b2cd-2ff23017ac82","Type":"ContainerStarted","Data":"9145adafce0a5981761b601ce041e4352346c5b58735195c99676fafdb13bd2d"} Feb 19 13:30:25 crc kubenswrapper[4861]: I0219 13:30:25.791306 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f6c7cb7cb-j5rts" event={"ID":"274eb243-db3d-4ad8-b2cd-2ff23017ac82","Type":"ContainerStarted","Data":"f0e6f82bcb7ad21d253aa3c7c3261f5590962a4dc4158811c6572695c774d0dc"} Feb 19 13:30:25 crc kubenswrapper[4861]: I0219 13:30:25.791327 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5f6c7cb7cb-j5rts" Feb 19 13:30:25 crc kubenswrapper[4861]: I0219 13:30:25.791353 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f6c7cb7cb-j5rts" event={"ID":"274eb243-db3d-4ad8-b2cd-2ff23017ac82","Type":"ContainerStarted","Data":"14f7e3c2a46854cc6166efc5500aa811c8356a8b962e981c0dbe82271f0ce5aa"} Feb 19 13:30:25 crc kubenswrapper[4861]: I0219 13:30:25.792009 4861 generic.go:334] "Generic (PLEG): container finished" podID="1e268516-d31c-4e80-884b-3fc40f8ab3d8" containerID="fd2abce8445249833d6f60bf2d66bec27c14c5f1ecc666e03fa81d07416ed894" exitCode=0 Feb 19 13:30:25 crc kubenswrapper[4861]: I0219 13:30:25.792061 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77d55b9c69-4bvnr" event={"ID":"1e268516-d31c-4e80-884b-3fc40f8ab3d8","Type":"ContainerDied","Data":"fd2abce8445249833d6f60bf2d66bec27c14c5f1ecc666e03fa81d07416ed894"} Feb 19 13:30:25 crc kubenswrapper[4861]: I0219 13:30:25.833036 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/neutron-5f6c7cb7cb-j5rts" podStartSLOduration=2.8330003809999997 podStartE2EDuration="2.833000381s" podCreationTimestamp="2026-02-19 13:30:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:30:25.806378485 +0000 UTC m=+1240.467481753" watchObservedRunningTime="2026-02-19 13:30:25.833000381 +0000 UTC m=+1240.494103619" Feb 19 13:30:26 crc kubenswrapper[4861]: I0219 13:30:26.550951 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-59d9f88bd7-zq9nt"] Feb 19 13:30:26 crc kubenswrapper[4861]: I0219 13:30:26.554133 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59d9f88bd7-zq9nt" Feb 19 13:30:26 crc kubenswrapper[4861]: I0219 13:30:26.557724 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 19 13:30:26 crc kubenswrapper[4861]: I0219 13:30:26.558661 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 19 13:30:26 crc kubenswrapper[4861]: I0219 13:30:26.585257 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59d9f88bd7-zq9nt"] Feb 19 13:30:26 crc kubenswrapper[4861]: I0219 13:30:26.631252 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-config\") pod \"neutron-59d9f88bd7-zq9nt\" (UID: \"8bbff609-754e-4955-8495-e3e1de7c0e05\") " pod="openstack/neutron-59d9f88bd7-zq9nt" Feb 19 13:30:26 crc kubenswrapper[4861]: I0219 13:30:26.632215 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-combined-ca-bundle\") pod \"neutron-59d9f88bd7-zq9nt\" (UID: 
\"8bbff609-754e-4955-8495-e3e1de7c0e05\") " pod="openstack/neutron-59d9f88bd7-zq9nt" Feb 19 13:30:26 crc kubenswrapper[4861]: I0219 13:30:26.632293 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-ovndb-tls-certs\") pod \"neutron-59d9f88bd7-zq9nt\" (UID: \"8bbff609-754e-4955-8495-e3e1de7c0e05\") " pod="openstack/neutron-59d9f88bd7-zq9nt" Feb 19 13:30:26 crc kubenswrapper[4861]: I0219 13:30:26.632332 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-internal-tls-certs\") pod \"neutron-59d9f88bd7-zq9nt\" (UID: \"8bbff609-754e-4955-8495-e3e1de7c0e05\") " pod="openstack/neutron-59d9f88bd7-zq9nt" Feb 19 13:30:26 crc kubenswrapper[4861]: I0219 13:30:26.632360 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-httpd-config\") pod \"neutron-59d9f88bd7-zq9nt\" (UID: \"8bbff609-754e-4955-8495-e3e1de7c0e05\") " pod="openstack/neutron-59d9f88bd7-zq9nt" Feb 19 13:30:26 crc kubenswrapper[4861]: I0219 13:30:26.632484 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5td4l\" (UniqueName: \"kubernetes.io/projected/8bbff609-754e-4955-8495-e3e1de7c0e05-kube-api-access-5td4l\") pod \"neutron-59d9f88bd7-zq9nt\" (UID: \"8bbff609-754e-4955-8495-e3e1de7c0e05\") " pod="openstack/neutron-59d9f88bd7-zq9nt" Feb 19 13:30:26 crc kubenswrapper[4861]: I0219 13:30:26.632517 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-public-tls-certs\") pod 
\"neutron-59d9f88bd7-zq9nt\" (UID: \"8bbff609-754e-4955-8495-e3e1de7c0e05\") " pod="openstack/neutron-59d9f88bd7-zq9nt" Feb 19 13:30:26 crc kubenswrapper[4861]: I0219 13:30:26.734256 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-config\") pod \"neutron-59d9f88bd7-zq9nt\" (UID: \"8bbff609-754e-4955-8495-e3e1de7c0e05\") " pod="openstack/neutron-59d9f88bd7-zq9nt" Feb 19 13:30:26 crc kubenswrapper[4861]: I0219 13:30:26.734344 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-combined-ca-bundle\") pod \"neutron-59d9f88bd7-zq9nt\" (UID: \"8bbff609-754e-4955-8495-e3e1de7c0e05\") " pod="openstack/neutron-59d9f88bd7-zq9nt" Feb 19 13:30:26 crc kubenswrapper[4861]: I0219 13:30:26.734388 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-ovndb-tls-certs\") pod \"neutron-59d9f88bd7-zq9nt\" (UID: \"8bbff609-754e-4955-8495-e3e1de7c0e05\") " pod="openstack/neutron-59d9f88bd7-zq9nt" Feb 19 13:30:26 crc kubenswrapper[4861]: I0219 13:30:26.734447 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-internal-tls-certs\") pod \"neutron-59d9f88bd7-zq9nt\" (UID: \"8bbff609-754e-4955-8495-e3e1de7c0e05\") " pod="openstack/neutron-59d9f88bd7-zq9nt" Feb 19 13:30:26 crc kubenswrapper[4861]: I0219 13:30:26.734502 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-httpd-config\") pod \"neutron-59d9f88bd7-zq9nt\" (UID: \"8bbff609-754e-4955-8495-e3e1de7c0e05\") " 
pod="openstack/neutron-59d9f88bd7-zq9nt" Feb 19 13:30:26 crc kubenswrapper[4861]: I0219 13:30:26.734636 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5td4l\" (UniqueName: \"kubernetes.io/projected/8bbff609-754e-4955-8495-e3e1de7c0e05-kube-api-access-5td4l\") pod \"neutron-59d9f88bd7-zq9nt\" (UID: \"8bbff609-754e-4955-8495-e3e1de7c0e05\") " pod="openstack/neutron-59d9f88bd7-zq9nt" Feb 19 13:30:26 crc kubenswrapper[4861]: I0219 13:30:26.734660 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-public-tls-certs\") pod \"neutron-59d9f88bd7-zq9nt\" (UID: \"8bbff609-754e-4955-8495-e3e1de7c0e05\") " pod="openstack/neutron-59d9f88bd7-zq9nt" Feb 19 13:30:26 crc kubenswrapper[4861]: I0219 13:30:26.740315 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-ovndb-tls-certs\") pod \"neutron-59d9f88bd7-zq9nt\" (UID: \"8bbff609-754e-4955-8495-e3e1de7c0e05\") " pod="openstack/neutron-59d9f88bd7-zq9nt" Feb 19 13:30:26 crc kubenswrapper[4861]: I0219 13:30:26.740320 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-combined-ca-bundle\") pod \"neutron-59d9f88bd7-zq9nt\" (UID: \"8bbff609-754e-4955-8495-e3e1de7c0e05\") " pod="openstack/neutron-59d9f88bd7-zq9nt" Feb 19 13:30:26 crc kubenswrapper[4861]: I0219 13:30:26.740497 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-public-tls-certs\") pod \"neutron-59d9f88bd7-zq9nt\" (UID: \"8bbff609-754e-4955-8495-e3e1de7c0e05\") " pod="openstack/neutron-59d9f88bd7-zq9nt" Feb 19 13:30:26 crc kubenswrapper[4861]: I0219 
13:30:26.740582 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-internal-tls-certs\") pod \"neutron-59d9f88bd7-zq9nt\" (UID: \"8bbff609-754e-4955-8495-e3e1de7c0e05\") " pod="openstack/neutron-59d9f88bd7-zq9nt" Feb 19 13:30:26 crc kubenswrapper[4861]: I0219 13:30:26.741853 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-httpd-config\") pod \"neutron-59d9f88bd7-zq9nt\" (UID: \"8bbff609-754e-4955-8495-e3e1de7c0e05\") " pod="openstack/neutron-59d9f88bd7-zq9nt" Feb 19 13:30:26 crc kubenswrapper[4861]: I0219 13:30:26.752743 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-config\") pod \"neutron-59d9f88bd7-zq9nt\" (UID: \"8bbff609-754e-4955-8495-e3e1de7c0e05\") " pod="openstack/neutron-59d9f88bd7-zq9nt" Feb 19 13:30:26 crc kubenswrapper[4861]: I0219 13:30:26.754679 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5td4l\" (UniqueName: \"kubernetes.io/projected/8bbff609-754e-4955-8495-e3e1de7c0e05-kube-api-access-5td4l\") pod \"neutron-59d9f88bd7-zq9nt\" (UID: \"8bbff609-754e-4955-8495-e3e1de7c0e05\") " pod="openstack/neutron-59d9f88bd7-zq9nt" Feb 19 13:30:26 crc kubenswrapper[4861]: I0219 13:30:26.889348 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59d9f88bd7-zq9nt" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.157006 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wlwdw" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.181662 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-f9zrp" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.203372 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-hblgk" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.269638 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f07235b0-c2aa-4225-93e7-a408f7317082-config-data\") pod \"f07235b0-c2aa-4225-93e7-a408f7317082\" (UID: \"f07235b0-c2aa-4225-93e7-a408f7317082\") " Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.269696 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz52d\" (UniqueName: \"kubernetes.io/projected/0c15a88f-af04-496c-bc54-f001ba15580a-kube-api-access-kz52d\") pod \"0c15a88f-af04-496c-bc54-f001ba15580a\" (UID: \"0c15a88f-af04-496c-bc54-f001ba15580a\") " Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.269712 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f07235b0-c2aa-4225-93e7-a408f7317082-scripts\") pod \"f07235b0-c2aa-4225-93e7-a408f7317082\" (UID: \"f07235b0-c2aa-4225-93e7-a408f7317082\") " Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.269758 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbc6223b-76c7-40be-8245-81263bc7c6c6-logs\") pod \"cbc6223b-76c7-40be-8245-81263bc7c6c6\" (UID: \"cbc6223b-76c7-40be-8245-81263bc7c6c6\") " Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.269788 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nmvp\" (UniqueName: \"kubernetes.io/projected/f07235b0-c2aa-4225-93e7-a408f7317082-kube-api-access-2nmvp\") pod \"f07235b0-c2aa-4225-93e7-a408f7317082\" (UID: \"f07235b0-c2aa-4225-93e7-a408f7317082\") 
" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.269836 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f07235b0-c2aa-4225-93e7-a408f7317082-fernet-keys\") pod \"f07235b0-c2aa-4225-93e7-a408f7317082\" (UID: \"f07235b0-c2aa-4225-93e7-a408f7317082\") " Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.269924 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vffx\" (UniqueName: \"kubernetes.io/projected/cbc6223b-76c7-40be-8245-81263bc7c6c6-kube-api-access-9vffx\") pod \"cbc6223b-76c7-40be-8245-81263bc7c6c6\" (UID: \"cbc6223b-76c7-40be-8245-81263bc7c6c6\") " Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.269944 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f07235b0-c2aa-4225-93e7-a408f7317082-credential-keys\") pod \"f07235b0-c2aa-4225-93e7-a408f7317082\" (UID: \"f07235b0-c2aa-4225-93e7-a408f7317082\") " Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.269966 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbc6223b-76c7-40be-8245-81263bc7c6c6-scripts\") pod \"cbc6223b-76c7-40be-8245-81263bc7c6c6\" (UID: \"cbc6223b-76c7-40be-8245-81263bc7c6c6\") " Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.270022 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c15a88f-af04-496c-bc54-f001ba15580a-combined-ca-bundle\") pod \"0c15a88f-af04-496c-bc54-f001ba15580a\" (UID: \"0c15a88f-af04-496c-bc54-f001ba15580a\") " Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.270073 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/0c15a88f-af04-496c-bc54-f001ba15580a-db-sync-config-data\") pod \"0c15a88f-af04-496c-bc54-f001ba15580a\" (UID: \"0c15a88f-af04-496c-bc54-f001ba15580a\") " Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.270117 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbc6223b-76c7-40be-8245-81263bc7c6c6-config-data\") pod \"cbc6223b-76c7-40be-8245-81263bc7c6c6\" (UID: \"cbc6223b-76c7-40be-8245-81263bc7c6c6\") " Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.270228 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f07235b0-c2aa-4225-93e7-a408f7317082-combined-ca-bundle\") pod \"f07235b0-c2aa-4225-93e7-a408f7317082\" (UID: \"f07235b0-c2aa-4225-93e7-a408f7317082\") " Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.270249 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbc6223b-76c7-40be-8245-81263bc7c6c6-combined-ca-bundle\") pod \"cbc6223b-76c7-40be-8245-81263bc7c6c6\" (UID: \"cbc6223b-76c7-40be-8245-81263bc7c6c6\") " Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.289600 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f07235b0-c2aa-4225-93e7-a408f7317082-scripts" (OuterVolumeSpecName: "scripts") pod "f07235b0-c2aa-4225-93e7-a408f7317082" (UID: "f07235b0-c2aa-4225-93e7-a408f7317082"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.289802 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c15a88f-af04-496c-bc54-f001ba15580a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "0c15a88f-af04-496c-bc54-f001ba15580a" (UID: "0c15a88f-af04-496c-bc54-f001ba15580a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.291790 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f07235b0-c2aa-4225-93e7-a408f7317082-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f07235b0-c2aa-4225-93e7-a408f7317082" (UID: "f07235b0-c2aa-4225-93e7-a408f7317082"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.292633 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbc6223b-76c7-40be-8245-81263bc7c6c6-kube-api-access-9vffx" (OuterVolumeSpecName: "kube-api-access-9vffx") pod "cbc6223b-76c7-40be-8245-81263bc7c6c6" (UID: "cbc6223b-76c7-40be-8245-81263bc7c6c6"). InnerVolumeSpecName "kube-api-access-9vffx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.293874 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbc6223b-76c7-40be-8245-81263bc7c6c6-logs" (OuterVolumeSpecName: "logs") pod "cbc6223b-76c7-40be-8245-81263bc7c6c6" (UID: "cbc6223b-76c7-40be-8245-81263bc7c6c6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.300467 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f07235b0-c2aa-4225-93e7-a408f7317082-kube-api-access-2nmvp" (OuterVolumeSpecName: "kube-api-access-2nmvp") pod "f07235b0-c2aa-4225-93e7-a408f7317082" (UID: "f07235b0-c2aa-4225-93e7-a408f7317082"). InnerVolumeSpecName "kube-api-access-2nmvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.300581 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c15a88f-af04-496c-bc54-f001ba15580a-kube-api-access-kz52d" (OuterVolumeSpecName: "kube-api-access-kz52d") pod "0c15a88f-af04-496c-bc54-f001ba15580a" (UID: "0c15a88f-af04-496c-bc54-f001ba15580a"). InnerVolumeSpecName "kube-api-access-kz52d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.300965 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbc6223b-76c7-40be-8245-81263bc7c6c6-scripts" (OuterVolumeSpecName: "scripts") pod "cbc6223b-76c7-40be-8245-81263bc7c6c6" (UID: "cbc6223b-76c7-40be-8245-81263bc7c6c6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.318909 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f07235b0-c2aa-4225-93e7-a408f7317082-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f07235b0-c2aa-4225-93e7-a408f7317082" (UID: "f07235b0-c2aa-4225-93e7-a408f7317082"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.320892 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f07235b0-c2aa-4225-93e7-a408f7317082-config-data" (OuterVolumeSpecName: "config-data") pod "f07235b0-c2aa-4225-93e7-a408f7317082" (UID: "f07235b0-c2aa-4225-93e7-a408f7317082"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.322906 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f07235b0-c2aa-4225-93e7-a408f7317082-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f07235b0-c2aa-4225-93e7-a408f7317082" (UID: "f07235b0-c2aa-4225-93e7-a408f7317082"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.332031 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbc6223b-76c7-40be-8245-81263bc7c6c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbc6223b-76c7-40be-8245-81263bc7c6c6" (UID: "cbc6223b-76c7-40be-8245-81263bc7c6c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.335956 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbc6223b-76c7-40be-8245-81263bc7c6c6-config-data" (OuterVolumeSpecName: "config-data") pod "cbc6223b-76c7-40be-8245-81263bc7c6c6" (UID: "cbc6223b-76c7-40be-8245-81263bc7c6c6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.336008 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c15a88f-af04-496c-bc54-f001ba15580a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c15a88f-af04-496c-bc54-f001ba15580a" (UID: "0c15a88f-af04-496c-bc54-f001ba15580a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.371947 4861 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f07235b0-c2aa-4225-93e7-a408f7317082-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.371994 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vffx\" (UniqueName: \"kubernetes.io/projected/cbc6223b-76c7-40be-8245-81263bc7c6c6-kube-api-access-9vffx\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.372005 4861 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f07235b0-c2aa-4225-93e7-a408f7317082-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.372017 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbc6223b-76c7-40be-8245-81263bc7c6c6-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.372026 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c15a88f-af04-496c-bc54-f001ba15580a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.372058 4861 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/0c15a88f-af04-496c-bc54-f001ba15580a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.372068 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbc6223b-76c7-40be-8245-81263bc7c6c6-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.372076 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f07235b0-c2aa-4225-93e7-a408f7317082-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.372084 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbc6223b-76c7-40be-8245-81263bc7c6c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.372092 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f07235b0-c2aa-4225-93e7-a408f7317082-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.372101 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz52d\" (UniqueName: \"kubernetes.io/projected/0c15a88f-af04-496c-bc54-f001ba15580a-kube-api-access-kz52d\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.372109 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f07235b0-c2aa-4225-93e7-a408f7317082-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.372118 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbc6223b-76c7-40be-8245-81263bc7c6c6-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:28 crc 
kubenswrapper[4861]: I0219 13:30:28.372125 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nmvp\" (UniqueName: \"kubernetes.io/projected/f07235b0-c2aa-4225-93e7-a408f7317082-kube-api-access-2nmvp\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.538209 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59d9f88bd7-zq9nt"] Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.824154 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-f9zrp" event={"ID":"cbc6223b-76c7-40be-8245-81263bc7c6c6","Type":"ContainerDied","Data":"850be74f0634cd5da1d0c8b94a74fdffe31b487ad90ab1bfb18692c8afa801ce"} Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.824198 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="850be74f0634cd5da1d0c8b94a74fdffe31b487ad90ab1bfb18692c8afa801ce" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.824212 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-f9zrp" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.826915 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59d9f88bd7-zq9nt" event={"ID":"8bbff609-754e-4955-8495-e3e1de7c0e05","Type":"ContainerStarted","Data":"97fea45c3ade3e0a17678507ba1b8509b11c1b8398e3c984f00dc9c647fdeb6f"} Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.826986 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59d9f88bd7-zq9nt" event={"ID":"8bbff609-754e-4955-8495-e3e1de7c0e05","Type":"ContainerStarted","Data":"1cf7aadd098f564ffe1df71b7df0d97d11d71753ecebb0cdd461445946c47612"} Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.832413 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afbe9acc-8cc5-48b6-9515-61da01b73fcd","Type":"ContainerStarted","Data":"4b66180c1790c564b6ebbc0a6a6ada0b48520072f168d8e7342eb13c56b83f78"} Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.834560 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hblgk" event={"ID":"0c15a88f-af04-496c-bc54-f001ba15580a","Type":"ContainerDied","Data":"2fb81f110a8a54bc87860721621fe821a159ddf2ed03a16a9ed4db11e038e339"} Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.834586 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fb81f110a8a54bc87860721621fe821a159ddf2ed03a16a9ed4db11e038e339" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.834643 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-hblgk" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.840322 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77d55b9c69-4bvnr" event={"ID":"1e268516-d31c-4e80-884b-3fc40f8ab3d8","Type":"ContainerStarted","Data":"f73e324d181ee60400307f66ffe663bbf7497ebdf4a39d8b07581334324a9122"} Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.841393 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77d55b9c69-4bvnr" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.849907 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wlwdw" event={"ID":"f07235b0-c2aa-4225-93e7-a408f7317082","Type":"ContainerDied","Data":"c8d996ed27c23d66f062300c17a08ded5f81eee762a05dadadfb653917f30c58"} Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.849936 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8d996ed27c23d66f062300c17a08ded5f81eee762a05dadadfb653917f30c58" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.850003 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wlwdw" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.873233 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77d55b9c69-4bvnr" podStartSLOduration=5.873216034 podStartE2EDuration="5.873216034s" podCreationTimestamp="2026-02-19 13:30:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:30:28.864550081 +0000 UTC m=+1243.525653309" watchObservedRunningTime="2026-02-19 13:30:28.873216034 +0000 UTC m=+1243.534319262" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.918269 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.921008 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.968297 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 13:30:28 crc kubenswrapper[4861]: I0219 13:30:28.978182 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.331307 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6c9494f487-fzm28"] Feb 19 13:30:29 crc kubenswrapper[4861]: E0219 13:30:29.331738 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbc6223b-76c7-40be-8245-81263bc7c6c6" containerName="placement-db-sync" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.331751 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbc6223b-76c7-40be-8245-81263bc7c6c6" containerName="placement-db-sync" Feb 19 13:30:29 crc kubenswrapper[4861]: E0219 13:30:29.331761 4861 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c15a88f-af04-496c-bc54-f001ba15580a" containerName="barbican-db-sync" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.331768 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c15a88f-af04-496c-bc54-f001ba15580a" containerName="barbican-db-sync" Feb 19 13:30:29 crc kubenswrapper[4861]: E0219 13:30:29.331782 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f07235b0-c2aa-4225-93e7-a408f7317082" containerName="keystone-bootstrap" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.331788 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f07235b0-c2aa-4225-93e7-a408f7317082" containerName="keystone-bootstrap" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.331956 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c15a88f-af04-496c-bc54-f001ba15580a" containerName="barbican-db-sync" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.331994 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbc6223b-76c7-40be-8245-81263bc7c6c6" containerName="placement-db-sync" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.332008 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f07235b0-c2aa-4225-93e7-a408f7317082" containerName="keystone-bootstrap" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.332571 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6c9494f487-fzm28" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.337690 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6c9494f487-fzm28"] Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.338147 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.355799 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.362358 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.362395 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.362632 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.362768 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bsjbt" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.375598 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7549c5f8db-8jjpm"] Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.377204 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7549c5f8db-8jjpm" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.382050 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.386084 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.386536 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-td4kr" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.391764 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-internal-tls-certs\") pod \"keystone-6c9494f487-fzm28\" (UID: \"382166c8-355e-407b-9721-3eee34966095\") " pod="openstack/keystone-6c9494f487-fzm28" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.391984 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-scripts\") pod \"keystone-6c9494f487-fzm28\" (UID: \"382166c8-355e-407b-9721-3eee34966095\") " pod="openstack/keystone-6c9494f487-fzm28" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.391989 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.392035 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.392307 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-config-data\") pod 
\"keystone-6c9494f487-fzm28\" (UID: \"382166c8-355e-407b-9721-3eee34966095\") " pod="openstack/keystone-6c9494f487-fzm28" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.392454 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-public-tls-certs\") pod \"keystone-6c9494f487-fzm28\" (UID: \"382166c8-355e-407b-9721-3eee34966095\") " pod="openstack/keystone-6c9494f487-fzm28" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.392562 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-fernet-keys\") pod \"keystone-6c9494f487-fzm28\" (UID: \"382166c8-355e-407b-9721-3eee34966095\") " pod="openstack/keystone-6c9494f487-fzm28" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.392672 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-credential-keys\") pod \"keystone-6c9494f487-fzm28\" (UID: \"382166c8-355e-407b-9721-3eee34966095\") " pod="openstack/keystone-6c9494f487-fzm28" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.392784 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khktc\" (UniqueName: \"kubernetes.io/projected/382166c8-355e-407b-9721-3eee34966095-kube-api-access-khktc\") pod \"keystone-6c9494f487-fzm28\" (UID: \"382166c8-355e-407b-9721-3eee34966095\") " pod="openstack/keystone-6c9494f487-fzm28" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.392931 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-combined-ca-bundle\") pod \"keystone-6c9494f487-fzm28\" (UID: \"382166c8-355e-407b-9721-3eee34966095\") " pod="openstack/keystone-6c9494f487-fzm28" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.403035 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7549c5f8db-8jjpm"] Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.495471 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-internal-tls-certs\") pod \"placement-7549c5f8db-8jjpm\" (UID: \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\") " pod="openstack/placement-7549c5f8db-8jjpm" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.495520 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-combined-ca-bundle\") pod \"placement-7549c5f8db-8jjpm\" (UID: \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\") " pod="openstack/placement-7549c5f8db-8jjpm" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.495542 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-logs\") pod \"placement-7549c5f8db-8jjpm\" (UID: \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\") " pod="openstack/placement-7549c5f8db-8jjpm" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.495568 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-fernet-keys\") pod \"keystone-6c9494f487-fzm28\" (UID: \"382166c8-355e-407b-9721-3eee34966095\") " pod="openstack/keystone-6c9494f487-fzm28" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 
13:30:29.495593 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-credential-keys\") pod \"keystone-6c9494f487-fzm28\" (UID: \"382166c8-355e-407b-9721-3eee34966095\") " pod="openstack/keystone-6c9494f487-fzm28" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.495621 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khktc\" (UniqueName: \"kubernetes.io/projected/382166c8-355e-407b-9721-3eee34966095-kube-api-access-khktc\") pod \"keystone-6c9494f487-fzm28\" (UID: \"382166c8-355e-407b-9721-3eee34966095\") " pod="openstack/keystone-6c9494f487-fzm28" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.495647 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-combined-ca-bundle\") pod \"keystone-6c9494f487-fzm28\" (UID: \"382166c8-355e-407b-9721-3eee34966095\") " pod="openstack/keystone-6c9494f487-fzm28" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.495675 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-scripts\") pod \"placement-7549c5f8db-8jjpm\" (UID: \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\") " pod="openstack/placement-7549c5f8db-8jjpm" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.495706 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-internal-tls-certs\") pod \"keystone-6c9494f487-fzm28\" (UID: \"382166c8-355e-407b-9721-3eee34966095\") " pod="openstack/keystone-6c9494f487-fzm28" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.495731 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-scripts\") pod \"keystone-6c9494f487-fzm28\" (UID: \"382166c8-355e-407b-9721-3eee34966095\") " pod="openstack/keystone-6c9494f487-fzm28" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.495752 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-config-data\") pod \"keystone-6c9494f487-fzm28\" (UID: \"382166c8-355e-407b-9721-3eee34966095\") " pod="openstack/keystone-6c9494f487-fzm28" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.495776 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-config-data\") pod \"placement-7549c5f8db-8jjpm\" (UID: \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\") " pod="openstack/placement-7549c5f8db-8jjpm" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.495797 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-public-tls-certs\") pod \"placement-7549c5f8db-8jjpm\" (UID: \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\") " pod="openstack/placement-7549c5f8db-8jjpm" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.495835 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drn56\" (UniqueName: \"kubernetes.io/projected/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-kube-api-access-drn56\") pod \"placement-7549c5f8db-8jjpm\" (UID: \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\") " pod="openstack/placement-7549c5f8db-8jjpm" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.495858 4861 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-public-tls-certs\") pod \"keystone-6c9494f487-fzm28\" (UID: \"382166c8-355e-407b-9721-3eee34966095\") " pod="openstack/keystone-6c9494f487-fzm28" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.503953 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-credential-keys\") pod \"keystone-6c9494f487-fzm28\" (UID: \"382166c8-355e-407b-9721-3eee34966095\") " pod="openstack/keystone-6c9494f487-fzm28" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.505569 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-combined-ca-bundle\") pod \"keystone-6c9494f487-fzm28\" (UID: \"382166c8-355e-407b-9721-3eee34966095\") " pod="openstack/keystone-6c9494f487-fzm28" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.527975 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-fernet-keys\") pod \"keystone-6c9494f487-fzm28\" (UID: \"382166c8-355e-407b-9721-3eee34966095\") " pod="openstack/keystone-6c9494f487-fzm28" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.528073 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-config-data\") pod \"keystone-6c9494f487-fzm28\" (UID: \"382166c8-355e-407b-9721-3eee34966095\") " pod="openstack/keystone-6c9494f487-fzm28" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.530199 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-public-tls-certs\") pod 
\"keystone-6c9494f487-fzm28\" (UID: \"382166c8-355e-407b-9721-3eee34966095\") " pod="openstack/keystone-6c9494f487-fzm28" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.535661 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-scripts\") pod \"keystone-6c9494f487-fzm28\" (UID: \"382166c8-355e-407b-9721-3eee34966095\") " pod="openstack/keystone-6c9494f487-fzm28" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.536245 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-internal-tls-certs\") pod \"keystone-6c9494f487-fzm28\" (UID: \"382166c8-355e-407b-9721-3eee34966095\") " pod="openstack/keystone-6c9494f487-fzm28" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.552520 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-68c66cf485-7rs8h"] Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.554018 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-68c66cf485-7rs8h" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.559706 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.560058 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.560334 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-jfbdv" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.565339 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-697dc9f75d-t2chc"] Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.566936 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-697dc9f75d-t2chc" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.572132 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khktc\" (UniqueName: \"kubernetes.io/projected/382166c8-355e-407b-9721-3eee34966095-kube-api-access-khktc\") pod \"keystone-6c9494f487-fzm28\" (UID: \"382166c8-355e-407b-9721-3eee34966095\") " pod="openstack/keystone-6c9494f487-fzm28" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.581818 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.598552 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62993e1b-6031-4438-b32a-c0d721d4870d-config-data\") pod \"barbican-worker-68c66cf485-7rs8h\" (UID: \"62993e1b-6031-4438-b32a-c0d721d4870d\") " pod="openstack/barbican-worker-68c66cf485-7rs8h" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 
13:30:29.598616 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62993e1b-6031-4438-b32a-c0d721d4870d-config-data-custom\") pod \"barbican-worker-68c66cf485-7rs8h\" (UID: \"62993e1b-6031-4438-b32a-c0d721d4870d\") " pod="openstack/barbican-worker-68c66cf485-7rs8h" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.598672 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-scripts\") pod \"placement-7549c5f8db-8jjpm\" (UID: \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\") " pod="openstack/placement-7549c5f8db-8jjpm" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.598703 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62993e1b-6031-4438-b32a-c0d721d4870d-combined-ca-bundle\") pod \"barbican-worker-68c66cf485-7rs8h\" (UID: \"62993e1b-6031-4438-b32a-c0d721d4870d\") " pod="openstack/barbican-worker-68c66cf485-7rs8h" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.598782 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-config-data\") pod \"placement-7549c5f8db-8jjpm\" (UID: \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\") " pod="openstack/placement-7549c5f8db-8jjpm" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.598813 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p22sj\" (UniqueName: \"kubernetes.io/projected/62993e1b-6031-4438-b32a-c0d721d4870d-kube-api-access-p22sj\") pod \"barbican-worker-68c66cf485-7rs8h\" (UID: \"62993e1b-6031-4438-b32a-c0d721d4870d\") " pod="openstack/barbican-worker-68c66cf485-7rs8h" Feb 19 13:30:29 crc 
kubenswrapper[4861]: I0219 13:30:29.598837 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62993e1b-6031-4438-b32a-c0d721d4870d-logs\") pod \"barbican-worker-68c66cf485-7rs8h\" (UID: \"62993e1b-6031-4438-b32a-c0d721d4870d\") " pod="openstack/barbican-worker-68c66cf485-7rs8h" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.598857 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-public-tls-certs\") pod \"placement-7549c5f8db-8jjpm\" (UID: \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\") " pod="openstack/placement-7549c5f8db-8jjpm" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.598925 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drn56\" (UniqueName: \"kubernetes.io/projected/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-kube-api-access-drn56\") pod \"placement-7549c5f8db-8jjpm\" (UID: \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\") " pod="openstack/placement-7549c5f8db-8jjpm" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.600978 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-internal-tls-certs\") pod \"placement-7549c5f8db-8jjpm\" (UID: \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\") " pod="openstack/placement-7549c5f8db-8jjpm" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.601033 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-combined-ca-bundle\") pod \"placement-7549c5f8db-8jjpm\" (UID: \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\") " pod="openstack/placement-7549c5f8db-8jjpm" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 
13:30:29.601067 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-logs\") pod \"placement-7549c5f8db-8jjpm\" (UID: \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\") " pod="openstack/placement-7549c5f8db-8jjpm" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.601462 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-logs\") pod \"placement-7549c5f8db-8jjpm\" (UID: \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\") " pod="openstack/placement-7549c5f8db-8jjpm" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.614835 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-scripts\") pod \"placement-7549c5f8db-8jjpm\" (UID: \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\") " pod="openstack/placement-7549c5f8db-8jjpm" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.615806 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-internal-tls-certs\") pod \"placement-7549c5f8db-8jjpm\" (UID: \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\") " pod="openstack/placement-7549c5f8db-8jjpm" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.616343 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-public-tls-certs\") pod \"placement-7549c5f8db-8jjpm\" (UID: \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\") " pod="openstack/placement-7549c5f8db-8jjpm" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.621200 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-config-data\") pod \"placement-7549c5f8db-8jjpm\" (UID: \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\") " pod="openstack/placement-7549c5f8db-8jjpm" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.628389 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-combined-ca-bundle\") pod \"placement-7549c5f8db-8jjpm\" (UID: \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\") " pod="openstack/placement-7549c5f8db-8jjpm" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.666813 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6c9494f487-fzm28" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.687107 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drn56\" (UniqueName: \"kubernetes.io/projected/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-kube-api-access-drn56\") pod \"placement-7549c5f8db-8jjpm\" (UID: \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\") " pod="openstack/placement-7549c5f8db-8jjpm" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.702353 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62993e1b-6031-4438-b32a-c0d721d4870d-config-data-custom\") pod \"barbican-worker-68c66cf485-7rs8h\" (UID: \"62993e1b-6031-4438-b32a-c0d721d4870d\") " pod="openstack/barbican-worker-68c66cf485-7rs8h" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.702433 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05baff31-913d-47c0-92c1-0ec2085039ba-logs\") pod \"barbican-keystone-listener-697dc9f75d-t2chc\" (UID: \"05baff31-913d-47c0-92c1-0ec2085039ba\") " pod="openstack/barbican-keystone-listener-697dc9f75d-t2chc" Feb 19 13:30:29 crc 
kubenswrapper[4861]: I0219 13:30:29.702456 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05baff31-913d-47c0-92c1-0ec2085039ba-config-data-custom\") pod \"barbican-keystone-listener-697dc9f75d-t2chc\" (UID: \"05baff31-913d-47c0-92c1-0ec2085039ba\") " pod="openstack/barbican-keystone-listener-697dc9f75d-t2chc" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.702483 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62993e1b-6031-4438-b32a-c0d721d4870d-combined-ca-bundle\") pod \"barbican-worker-68c66cf485-7rs8h\" (UID: \"62993e1b-6031-4438-b32a-c0d721d4870d\") " pod="openstack/barbican-worker-68c66cf485-7rs8h" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.702540 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p22sj\" (UniqueName: \"kubernetes.io/projected/62993e1b-6031-4438-b32a-c0d721d4870d-kube-api-access-p22sj\") pod \"barbican-worker-68c66cf485-7rs8h\" (UID: \"62993e1b-6031-4438-b32a-c0d721d4870d\") " pod="openstack/barbican-worker-68c66cf485-7rs8h" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.702561 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqvqp\" (UniqueName: \"kubernetes.io/projected/05baff31-913d-47c0-92c1-0ec2085039ba-kube-api-access-tqvqp\") pod \"barbican-keystone-listener-697dc9f75d-t2chc\" (UID: \"05baff31-913d-47c0-92c1-0ec2085039ba\") " pod="openstack/barbican-keystone-listener-697dc9f75d-t2chc" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.702581 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62993e1b-6031-4438-b32a-c0d721d4870d-logs\") pod \"barbican-worker-68c66cf485-7rs8h\" (UID: 
\"62993e1b-6031-4438-b32a-c0d721d4870d\") " pod="openstack/barbican-worker-68c66cf485-7rs8h" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.702620 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05baff31-913d-47c0-92c1-0ec2085039ba-combined-ca-bundle\") pod \"barbican-keystone-listener-697dc9f75d-t2chc\" (UID: \"05baff31-913d-47c0-92c1-0ec2085039ba\") " pod="openstack/barbican-keystone-listener-697dc9f75d-t2chc" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.702675 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62993e1b-6031-4438-b32a-c0d721d4870d-config-data\") pod \"barbican-worker-68c66cf485-7rs8h\" (UID: \"62993e1b-6031-4438-b32a-c0d721d4870d\") " pod="openstack/barbican-worker-68c66cf485-7rs8h" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.702696 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05baff31-913d-47c0-92c1-0ec2085039ba-config-data\") pod \"barbican-keystone-listener-697dc9f75d-t2chc\" (UID: \"05baff31-913d-47c0-92c1-0ec2085039ba\") " pod="openstack/barbican-keystone-listener-697dc9f75d-t2chc" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.703881 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62993e1b-6031-4438-b32a-c0d721d4870d-logs\") pod \"barbican-worker-68c66cf485-7rs8h\" (UID: \"62993e1b-6031-4438-b32a-c0d721d4870d\") " pod="openstack/barbican-worker-68c66cf485-7rs8h" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.707319 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62993e1b-6031-4438-b32a-c0d721d4870d-config-data\") pod 
\"barbican-worker-68c66cf485-7rs8h\" (UID: \"62993e1b-6031-4438-b32a-c0d721d4870d\") " pod="openstack/barbican-worker-68c66cf485-7rs8h" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.714611 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62993e1b-6031-4438-b32a-c0d721d4870d-config-data-custom\") pod \"barbican-worker-68c66cf485-7rs8h\" (UID: \"62993e1b-6031-4438-b32a-c0d721d4870d\") " pod="openstack/barbican-worker-68c66cf485-7rs8h" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.726883 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62993e1b-6031-4438-b32a-c0d721d4870d-combined-ca-bundle\") pod \"barbican-worker-68c66cf485-7rs8h\" (UID: \"62993e1b-6031-4438-b32a-c0d721d4870d\") " pod="openstack/barbican-worker-68c66cf485-7rs8h" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.754884 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7549c5f8db-8jjpm" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.763370 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6799fd8d6-p6tpl"] Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.772238 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6799fd8d6-p6tpl" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.788369 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p22sj\" (UniqueName: \"kubernetes.io/projected/62993e1b-6031-4438-b32a-c0d721d4870d-kube-api-access-p22sj\") pod \"barbican-worker-68c66cf485-7rs8h\" (UID: \"62993e1b-6031-4438-b32a-c0d721d4870d\") " pod="openstack/barbican-worker-68c66cf485-7rs8h" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.789805 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-68c66cf485-7rs8h"] Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.806336 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05baff31-913d-47c0-92c1-0ec2085039ba-logs\") pod \"barbican-keystone-listener-697dc9f75d-t2chc\" (UID: \"05baff31-913d-47c0-92c1-0ec2085039ba\") " pod="openstack/barbican-keystone-listener-697dc9f75d-t2chc" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.806376 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05baff31-913d-47c0-92c1-0ec2085039ba-config-data-custom\") pod \"barbican-keystone-listener-697dc9f75d-t2chc\" (UID: \"05baff31-913d-47c0-92c1-0ec2085039ba\") " pod="openstack/barbican-keystone-listener-697dc9f75d-t2chc" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.806467 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqvqp\" (UniqueName: \"kubernetes.io/projected/05baff31-913d-47c0-92c1-0ec2085039ba-kube-api-access-tqvqp\") pod \"barbican-keystone-listener-697dc9f75d-t2chc\" (UID: \"05baff31-913d-47c0-92c1-0ec2085039ba\") " pod="openstack/barbican-keystone-listener-697dc9f75d-t2chc" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.806525 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05baff31-913d-47c0-92c1-0ec2085039ba-combined-ca-bundle\") pod \"barbican-keystone-listener-697dc9f75d-t2chc\" (UID: \"05baff31-913d-47c0-92c1-0ec2085039ba\") " pod="openstack/barbican-keystone-listener-697dc9f75d-t2chc" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.806590 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05baff31-913d-47c0-92c1-0ec2085039ba-config-data\") pod \"barbican-keystone-listener-697dc9f75d-t2chc\" (UID: \"05baff31-913d-47c0-92c1-0ec2085039ba\") " pod="openstack/barbican-keystone-listener-697dc9f75d-t2chc" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.813974 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05baff31-913d-47c0-92c1-0ec2085039ba-logs\") pod \"barbican-keystone-listener-697dc9f75d-t2chc\" (UID: \"05baff31-913d-47c0-92c1-0ec2085039ba\") " pod="openstack/barbican-keystone-listener-697dc9f75d-t2chc" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.814889 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05baff31-913d-47c0-92c1-0ec2085039ba-config-data\") pod \"barbican-keystone-listener-697dc9f75d-t2chc\" (UID: \"05baff31-913d-47c0-92c1-0ec2085039ba\") " pod="openstack/barbican-keystone-listener-697dc9f75d-t2chc" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.829443 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05baff31-913d-47c0-92c1-0ec2085039ba-combined-ca-bundle\") pod \"barbican-keystone-listener-697dc9f75d-t2chc\" (UID: \"05baff31-913d-47c0-92c1-0ec2085039ba\") " pod="openstack/barbican-keystone-listener-697dc9f75d-t2chc" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 
13:30:29.844706 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05baff31-913d-47c0-92c1-0ec2085039ba-config-data-custom\") pod \"barbican-keystone-listener-697dc9f75d-t2chc\" (UID: \"05baff31-913d-47c0-92c1-0ec2085039ba\") " pod="openstack/barbican-keystone-listener-697dc9f75d-t2chc" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.845655 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-545d79c874-vmrzt"] Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.848764 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-545d79c874-vmrzt" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.859751 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqvqp\" (UniqueName: \"kubernetes.io/projected/05baff31-913d-47c0-92c1-0ec2085039ba-kube-api-access-tqvqp\") pod \"barbican-keystone-listener-697dc9f75d-t2chc\" (UID: \"05baff31-913d-47c0-92c1-0ec2085039ba\") " pod="openstack/barbican-keystone-listener-697dc9f75d-t2chc" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.892850 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6ffbcbb99f-phcxs"] Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.894478 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6ffbcbb99f-phcxs" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.907966 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d002f91-22f1-4ebd-8bc9-04e81e4a00ef-combined-ca-bundle\") pod \"barbican-keystone-listener-545d79c874-vmrzt\" (UID: \"5d002f91-22f1-4ebd-8bc9-04e81e4a00ef\") " pod="openstack/barbican-keystone-listener-545d79c874-vmrzt" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.908012 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d002f91-22f1-4ebd-8bc9-04e81e4a00ef-config-data-custom\") pod \"barbican-keystone-listener-545d79c874-vmrzt\" (UID: \"5d002f91-22f1-4ebd-8bc9-04e81e4a00ef\") " pod="openstack/barbican-keystone-listener-545d79c874-vmrzt" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.908042 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46d0ac5c-1d20-4b80-be1b-21ad2641b215-scripts\") pod \"placement-6799fd8d6-p6tpl\" (UID: \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\") " pod="openstack/placement-6799fd8d6-p6tpl" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.908072 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d002f91-22f1-4ebd-8bc9-04e81e4a00ef-logs\") pod \"barbican-keystone-listener-545d79c874-vmrzt\" (UID: \"5d002f91-22f1-4ebd-8bc9-04e81e4a00ef\") " pod="openstack/barbican-keystone-listener-545d79c874-vmrzt" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.908102 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckddl\" (UniqueName: 
\"kubernetes.io/projected/5d002f91-22f1-4ebd-8bc9-04e81e4a00ef-kube-api-access-ckddl\") pod \"barbican-keystone-listener-545d79c874-vmrzt\" (UID: \"5d002f91-22f1-4ebd-8bc9-04e81e4a00ef\") " pod="openstack/barbican-keystone-listener-545d79c874-vmrzt" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.908138 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46d0ac5c-1d20-4b80-be1b-21ad2641b215-combined-ca-bundle\") pod \"placement-6799fd8d6-p6tpl\" (UID: \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\") " pod="openstack/placement-6799fd8d6-p6tpl" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.908155 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46d0ac5c-1d20-4b80-be1b-21ad2641b215-config-data\") pod \"placement-6799fd8d6-p6tpl\" (UID: \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\") " pod="openstack/placement-6799fd8d6-p6tpl" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.908177 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46d0ac5c-1d20-4b80-be1b-21ad2641b215-logs\") pod \"placement-6799fd8d6-p6tpl\" (UID: \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\") " pod="openstack/placement-6799fd8d6-p6tpl" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.908192 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d002f91-22f1-4ebd-8bc9-04e81e4a00ef-config-data\") pod \"barbican-keystone-listener-545d79c874-vmrzt\" (UID: \"5d002f91-22f1-4ebd-8bc9-04e81e4a00ef\") " pod="openstack/barbican-keystone-listener-545d79c874-vmrzt" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.908210 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46d0ac5c-1d20-4b80-be1b-21ad2641b215-public-tls-certs\") pod \"placement-6799fd8d6-p6tpl\" (UID: \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\") " pod="openstack/placement-6799fd8d6-p6tpl" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.908236 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7flwz\" (UniqueName: \"kubernetes.io/projected/46d0ac5c-1d20-4b80-be1b-21ad2641b215-kube-api-access-7flwz\") pod \"placement-6799fd8d6-p6tpl\" (UID: \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\") " pod="openstack/placement-6799fd8d6-p6tpl" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.908288 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46d0ac5c-1d20-4b80-be1b-21ad2641b215-internal-tls-certs\") pod \"placement-6799fd8d6-p6tpl\" (UID: \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\") " pod="openstack/placement-6799fd8d6-p6tpl" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.948656 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6799fd8d6-p6tpl"] Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.949139 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59d9f88bd7-zq9nt" event={"ID":"8bbff609-754e-4955-8495-e3e1de7c0e05","Type":"ContainerStarted","Data":"202486ab1c04a67d2cc963f93176c1a28491a7ad70e6b1662d3833b2cf389788"} Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.950273 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 13:30:29 crc kubenswrapper[4861]: I0219 13:30:29.950294 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 13:30:29 crc 
kubenswrapper[4861]: I0219 13:30:29.950306 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-59d9f88bd7-zq9nt"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.003673 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-545d79c874-vmrzt"]
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.011563 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-697dc9f75d-t2chc"]
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.012489 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46d0ac5c-1d20-4b80-be1b-21ad2641b215-combined-ca-bundle\") pod \"placement-6799fd8d6-p6tpl\" (UID: \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\") " pod="openstack/placement-6799fd8d6-p6tpl"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.012567 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46d0ac5c-1d20-4b80-be1b-21ad2641b215-config-data\") pod \"placement-6799fd8d6-p6tpl\" (UID: \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\") " pod="openstack/placement-6799fd8d6-p6tpl"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.012603 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4307ff9-78bb-48ec-8096-6e06ff22e19b-combined-ca-bundle\") pod \"barbican-worker-6ffbcbb99f-phcxs\" (UID: \"a4307ff9-78bb-48ec-8096-6e06ff22e19b\") " pod="openstack/barbican-worker-6ffbcbb99f-phcxs"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.012629 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46d0ac5c-1d20-4b80-be1b-21ad2641b215-logs\") pod \"placement-6799fd8d6-p6tpl\" (UID: \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\") " pod="openstack/placement-6799fd8d6-p6tpl"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.012652 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4307ff9-78bb-48ec-8096-6e06ff22e19b-logs\") pod \"barbican-worker-6ffbcbb99f-phcxs\" (UID: \"a4307ff9-78bb-48ec-8096-6e06ff22e19b\") " pod="openstack/barbican-worker-6ffbcbb99f-phcxs"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.012676 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d002f91-22f1-4ebd-8bc9-04e81e4a00ef-config-data\") pod \"barbican-keystone-listener-545d79c874-vmrzt\" (UID: \"5d002f91-22f1-4ebd-8bc9-04e81e4a00ef\") " pod="openstack/barbican-keystone-listener-545d79c874-vmrzt"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.012731 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46d0ac5c-1d20-4b80-be1b-21ad2641b215-public-tls-certs\") pod \"placement-6799fd8d6-p6tpl\" (UID: \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\") " pod="openstack/placement-6799fd8d6-p6tpl"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.012788 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7flwz\" (UniqueName: \"kubernetes.io/projected/46d0ac5c-1d20-4b80-be1b-21ad2641b215-kube-api-access-7flwz\") pod \"placement-6799fd8d6-p6tpl\" (UID: \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\") " pod="openstack/placement-6799fd8d6-p6tpl"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.012924 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpbhx\" (UniqueName: \"kubernetes.io/projected/a4307ff9-78bb-48ec-8096-6e06ff22e19b-kube-api-access-bpbhx\") pod \"barbican-worker-6ffbcbb99f-phcxs\" (UID: \"a4307ff9-78bb-48ec-8096-6e06ff22e19b\") " pod="openstack/barbican-worker-6ffbcbb99f-phcxs"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.012950 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46d0ac5c-1d20-4b80-be1b-21ad2641b215-internal-tls-certs\") pod \"placement-6799fd8d6-p6tpl\" (UID: \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\") " pod="openstack/placement-6799fd8d6-p6tpl"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.013005 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d002f91-22f1-4ebd-8bc9-04e81e4a00ef-combined-ca-bundle\") pod \"barbican-keystone-listener-545d79c874-vmrzt\" (UID: \"5d002f91-22f1-4ebd-8bc9-04e81e4a00ef\") " pod="openstack/barbican-keystone-listener-545d79c874-vmrzt"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.013042 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d002f91-22f1-4ebd-8bc9-04e81e4a00ef-config-data-custom\") pod \"barbican-keystone-listener-545d79c874-vmrzt\" (UID: \"5d002f91-22f1-4ebd-8bc9-04e81e4a00ef\") " pod="openstack/barbican-keystone-listener-545d79c874-vmrzt"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.013077 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46d0ac5c-1d20-4b80-be1b-21ad2641b215-scripts\") pod \"placement-6799fd8d6-p6tpl\" (UID: \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\") " pod="openstack/placement-6799fd8d6-p6tpl"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.013138 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d002f91-22f1-4ebd-8bc9-04e81e4a00ef-logs\") pod \"barbican-keystone-listener-545d79c874-vmrzt\" (UID: \"5d002f91-22f1-4ebd-8bc9-04e81e4a00ef\") " pod="openstack/barbican-keystone-listener-545d79c874-vmrzt"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.013188 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4307ff9-78bb-48ec-8096-6e06ff22e19b-config-data-custom\") pod \"barbican-worker-6ffbcbb99f-phcxs\" (UID: \"a4307ff9-78bb-48ec-8096-6e06ff22e19b\") " pod="openstack/barbican-worker-6ffbcbb99f-phcxs"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.013227 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckddl\" (UniqueName: \"kubernetes.io/projected/5d002f91-22f1-4ebd-8bc9-04e81e4a00ef-kube-api-access-ckddl\") pod \"barbican-keystone-listener-545d79c874-vmrzt\" (UID: \"5d002f91-22f1-4ebd-8bc9-04e81e4a00ef\") " pod="openstack/barbican-keystone-listener-545d79c874-vmrzt"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.013302 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4307ff9-78bb-48ec-8096-6e06ff22e19b-config-data\") pod \"barbican-worker-6ffbcbb99f-phcxs\" (UID: \"a4307ff9-78bb-48ec-8096-6e06ff22e19b\") " pod="openstack/barbican-worker-6ffbcbb99f-phcxs"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.021110 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46d0ac5c-1d20-4b80-be1b-21ad2641b215-internal-tls-certs\") pod \"placement-6799fd8d6-p6tpl\" (UID: \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\") " pod="openstack/placement-6799fd8d6-p6tpl"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.021582 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46d0ac5c-1d20-4b80-be1b-21ad2641b215-public-tls-certs\") pod \"placement-6799fd8d6-p6tpl\" (UID: \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\") " pod="openstack/placement-6799fd8d6-p6tpl"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.022716 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46d0ac5c-1d20-4b80-be1b-21ad2641b215-logs\") pod \"placement-6799fd8d6-p6tpl\" (UID: \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\") " pod="openstack/placement-6799fd8d6-p6tpl"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.023075 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d002f91-22f1-4ebd-8bc9-04e81e4a00ef-logs\") pod \"barbican-keystone-listener-545d79c874-vmrzt\" (UID: \"5d002f91-22f1-4ebd-8bc9-04e81e4a00ef\") " pod="openstack/barbican-keystone-listener-545d79c874-vmrzt"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.024924 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d002f91-22f1-4ebd-8bc9-04e81e4a00ef-combined-ca-bundle\") pod \"barbican-keystone-listener-545d79c874-vmrzt\" (UID: \"5d002f91-22f1-4ebd-8bc9-04e81e4a00ef\") " pod="openstack/barbican-keystone-listener-545d79c874-vmrzt"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.025885 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46d0ac5c-1d20-4b80-be1b-21ad2641b215-combined-ca-bundle\") pod \"placement-6799fd8d6-p6tpl\" (UID: \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\") " pod="openstack/placement-6799fd8d6-p6tpl"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.028887 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46d0ac5c-1d20-4b80-be1b-21ad2641b215-config-data\") pod \"placement-6799fd8d6-p6tpl\" (UID: \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\") " pod="openstack/placement-6799fd8d6-p6tpl"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.032875 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46d0ac5c-1d20-4b80-be1b-21ad2641b215-scripts\") pod \"placement-6799fd8d6-p6tpl\" (UID: \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\") " pod="openstack/placement-6799fd8d6-p6tpl"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.037579 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d002f91-22f1-4ebd-8bc9-04e81e4a00ef-config-data\") pod \"barbican-keystone-listener-545d79c874-vmrzt\" (UID: \"5d002f91-22f1-4ebd-8bc9-04e81e4a00ef\") " pod="openstack/barbican-keystone-listener-545d79c874-vmrzt"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.050197 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77d55b9c69-4bvnr"]
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.057897 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-68c66cf485-7rs8h"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.058658 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6ffbcbb99f-phcxs"]
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.059904 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7flwz\" (UniqueName: \"kubernetes.io/projected/46d0ac5c-1d20-4b80-be1b-21ad2641b215-kube-api-access-7flwz\") pod \"placement-6799fd8d6-p6tpl\" (UID: \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\") " pod="openstack/placement-6799fd8d6-p6tpl"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.059979 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckddl\" (UniqueName: \"kubernetes.io/projected/5d002f91-22f1-4ebd-8bc9-04e81e4a00ef-kube-api-access-ckddl\") pod \"barbican-keystone-listener-545d79c874-vmrzt\" (UID: \"5d002f91-22f1-4ebd-8bc9-04e81e4a00ef\") " pod="openstack/barbican-keystone-listener-545d79c874-vmrzt"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.065124 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d002f91-22f1-4ebd-8bc9-04e81e4a00ef-config-data-custom\") pod \"barbican-keystone-listener-545d79c874-vmrzt\" (UID: \"5d002f91-22f1-4ebd-8bc9-04e81e4a00ef\") " pod="openstack/barbican-keystone-listener-545d79c874-vmrzt"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.067008 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7489f6876c-dzd74"]
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.069083 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-697dc9f75d-t2chc"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.070440 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7489f6876c-dzd74"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.093575 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7489f6876c-dzd74"]
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.138819 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpbhx\" (UniqueName: \"kubernetes.io/projected/a4307ff9-78bb-48ec-8096-6e06ff22e19b-kube-api-access-bpbhx\") pod \"barbican-worker-6ffbcbb99f-phcxs\" (UID: \"a4307ff9-78bb-48ec-8096-6e06ff22e19b\") " pod="openstack/barbican-worker-6ffbcbb99f-phcxs"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.138990 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-ovsdbserver-sb\") pod \"dnsmasq-dns-7489f6876c-dzd74\" (UID: \"ed6f0ad1-4261-4369-9270-bb6f2aabc68a\") " pod="openstack/dnsmasq-dns-7489f6876c-dzd74"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.139016 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-dns-swift-storage-0\") pod \"dnsmasq-dns-7489f6876c-dzd74\" (UID: \"ed6f0ad1-4261-4369-9270-bb6f2aabc68a\") " pod="openstack/dnsmasq-dns-7489f6876c-dzd74"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.139074 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4307ff9-78bb-48ec-8096-6e06ff22e19b-config-data-custom\") pod \"barbican-worker-6ffbcbb99f-phcxs\" (UID: \"a4307ff9-78bb-48ec-8096-6e06ff22e19b\") " pod="openstack/barbican-worker-6ffbcbb99f-phcxs"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.139134 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4307ff9-78bb-48ec-8096-6e06ff22e19b-config-data\") pod \"barbican-worker-6ffbcbb99f-phcxs\" (UID: \"a4307ff9-78bb-48ec-8096-6e06ff22e19b\") " pod="openstack/barbican-worker-6ffbcbb99f-phcxs"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.139165 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-dns-svc\") pod \"dnsmasq-dns-7489f6876c-dzd74\" (UID: \"ed6f0ad1-4261-4369-9270-bb6f2aabc68a\") " pod="openstack/dnsmasq-dns-7489f6876c-dzd74"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.139198 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4307ff9-78bb-48ec-8096-6e06ff22e19b-combined-ca-bundle\") pod \"barbican-worker-6ffbcbb99f-phcxs\" (UID: \"a4307ff9-78bb-48ec-8096-6e06ff22e19b\") " pod="openstack/barbican-worker-6ffbcbb99f-phcxs"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.139227 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4307ff9-78bb-48ec-8096-6e06ff22e19b-logs\") pod \"barbican-worker-6ffbcbb99f-phcxs\" (UID: \"a4307ff9-78bb-48ec-8096-6e06ff22e19b\") " pod="openstack/barbican-worker-6ffbcbb99f-phcxs"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.139339 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-ovsdbserver-nb\") pod \"dnsmasq-dns-7489f6876c-dzd74\" (UID: \"ed6f0ad1-4261-4369-9270-bb6f2aabc68a\") " pod="openstack/dnsmasq-dns-7489f6876c-dzd74"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.139360 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rwk8\" (UniqueName: \"kubernetes.io/projected/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-kube-api-access-7rwk8\") pod \"dnsmasq-dns-7489f6876c-dzd74\" (UID: \"ed6f0ad1-4261-4369-9270-bb6f2aabc68a\") " pod="openstack/dnsmasq-dns-7489f6876c-dzd74"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.139412 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-config\") pod \"dnsmasq-dns-7489f6876c-dzd74\" (UID: \"ed6f0ad1-4261-4369-9270-bb6f2aabc68a\") " pod="openstack/dnsmasq-dns-7489f6876c-dzd74"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.143259 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4307ff9-78bb-48ec-8096-6e06ff22e19b-logs\") pod \"barbican-worker-6ffbcbb99f-phcxs\" (UID: \"a4307ff9-78bb-48ec-8096-6e06ff22e19b\") " pod="openstack/barbican-worker-6ffbcbb99f-phcxs"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.153303 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4307ff9-78bb-48ec-8096-6e06ff22e19b-combined-ca-bundle\") pod \"barbican-worker-6ffbcbb99f-phcxs\" (UID: \"a4307ff9-78bb-48ec-8096-6e06ff22e19b\") " pod="openstack/barbican-worker-6ffbcbb99f-phcxs"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.154917 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4307ff9-78bb-48ec-8096-6e06ff22e19b-config-data\") pod \"barbican-worker-6ffbcbb99f-phcxs\" (UID: \"a4307ff9-78bb-48ec-8096-6e06ff22e19b\") " pod="openstack/barbican-worker-6ffbcbb99f-phcxs"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.157912 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4307ff9-78bb-48ec-8096-6e06ff22e19b-config-data-custom\") pod \"barbican-worker-6ffbcbb99f-phcxs\" (UID: \"a4307ff9-78bb-48ec-8096-6e06ff22e19b\") " pod="openstack/barbican-worker-6ffbcbb99f-phcxs"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.158859 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6799fd8d6-p6tpl"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.159571 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpbhx\" (UniqueName: \"kubernetes.io/projected/a4307ff9-78bb-48ec-8096-6e06ff22e19b-kube-api-access-bpbhx\") pod \"barbican-worker-6ffbcbb99f-phcxs\" (UID: \"a4307ff9-78bb-48ec-8096-6e06ff22e19b\") " pod="openstack/barbican-worker-6ffbcbb99f-phcxs"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.182620 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-545d79c874-vmrzt"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.208941 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5b99dd6c84-fwhqt"]
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.211328 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b99dd6c84-fwhqt"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.215898 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.238005 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b99dd6c84-fwhqt"]
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.244296 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-ovsdbserver-sb\") pod \"dnsmasq-dns-7489f6876c-dzd74\" (UID: \"ed6f0ad1-4261-4369-9270-bb6f2aabc68a\") " pod="openstack/dnsmasq-dns-7489f6876c-dzd74"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.244694 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-dns-swift-storage-0\") pod \"dnsmasq-dns-7489f6876c-dzd74\" (UID: \"ed6f0ad1-4261-4369-9270-bb6f2aabc68a\") " pod="openstack/dnsmasq-dns-7489f6876c-dzd74"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.244830 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-dns-svc\") pod \"dnsmasq-dns-7489f6876c-dzd74\" (UID: \"ed6f0ad1-4261-4369-9270-bb6f2aabc68a\") " pod="openstack/dnsmasq-dns-7489f6876c-dzd74"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.244877 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0-config-data\") pod \"barbican-api-5b99dd6c84-fwhqt\" (UID: \"6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0\") " pod="openstack/barbican-api-5b99dd6c84-fwhqt"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.244990 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0-config-data-custom\") pod \"barbican-api-5b99dd6c84-fwhqt\" (UID: \"6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0\") " pod="openstack/barbican-api-5b99dd6c84-fwhqt"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.245027 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0-logs\") pod \"barbican-api-5b99dd6c84-fwhqt\" (UID: \"6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0\") " pod="openstack/barbican-api-5b99dd6c84-fwhqt"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.245061 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-ovsdbserver-nb\") pod \"dnsmasq-dns-7489f6876c-dzd74\" (UID: \"ed6f0ad1-4261-4369-9270-bb6f2aabc68a\") " pod="openstack/dnsmasq-dns-7489f6876c-dzd74"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.245086 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rwk8\" (UniqueName: \"kubernetes.io/projected/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-kube-api-access-7rwk8\") pod \"dnsmasq-dns-7489f6876c-dzd74\" (UID: \"ed6f0ad1-4261-4369-9270-bb6f2aabc68a\") " pod="openstack/dnsmasq-dns-7489f6876c-dzd74"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.245104 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0-combined-ca-bundle\") pod \"barbican-api-5b99dd6c84-fwhqt\" (UID: \"6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0\") " pod="openstack/barbican-api-5b99dd6c84-fwhqt"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.245153 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-config\") pod \"dnsmasq-dns-7489f6876c-dzd74\" (UID: \"ed6f0ad1-4261-4369-9270-bb6f2aabc68a\") " pod="openstack/dnsmasq-dns-7489f6876c-dzd74"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.245216 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq8ct\" (UniqueName: \"kubernetes.io/projected/6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0-kube-api-access-jq8ct\") pod \"barbican-api-5b99dd6c84-fwhqt\" (UID: \"6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0\") " pod="openstack/barbican-api-5b99dd6c84-fwhqt"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.245381 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-ovsdbserver-sb\") pod \"dnsmasq-dns-7489f6876c-dzd74\" (UID: \"ed6f0ad1-4261-4369-9270-bb6f2aabc68a\") " pod="openstack/dnsmasq-dns-7489f6876c-dzd74"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.246125 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-dns-swift-storage-0\") pod \"dnsmasq-dns-7489f6876c-dzd74\" (UID: \"ed6f0ad1-4261-4369-9270-bb6f2aabc68a\") " pod="openstack/dnsmasq-dns-7489f6876c-dzd74"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.246180 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-dns-svc\") pod \"dnsmasq-dns-7489f6876c-dzd74\" (UID: \"ed6f0ad1-4261-4369-9270-bb6f2aabc68a\") " pod="openstack/dnsmasq-dns-7489f6876c-dzd74"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.246773 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-ovsdbserver-nb\") pod \"dnsmasq-dns-7489f6876c-dzd74\" (UID: \"ed6f0ad1-4261-4369-9270-bb6f2aabc68a\") " pod="openstack/dnsmasq-dns-7489f6876c-dzd74"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.251736 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-config\") pod \"dnsmasq-dns-7489f6876c-dzd74\" (UID: \"ed6f0ad1-4261-4369-9270-bb6f2aabc68a\") " pod="openstack/dnsmasq-dns-7489f6876c-dzd74"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.264025 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-59d9f88bd7-zq9nt" podStartSLOduration=4.263989328 podStartE2EDuration="4.263989328s" podCreationTimestamp="2026-02-19 13:30:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:30:29.966948409 +0000 UTC m=+1244.628051647" watchObservedRunningTime="2026-02-19 13:30:30.263989328 +0000 UTC m=+1244.925092556"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.277376 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rwk8\" (UniqueName: \"kubernetes.io/projected/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-kube-api-access-7rwk8\") pod \"dnsmasq-dns-7489f6876c-dzd74\" (UID: \"ed6f0ad1-4261-4369-9270-bb6f2aabc68a\") " pod="openstack/dnsmasq-dns-7489f6876c-dzd74"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.289505 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6ffbcbb99f-phcxs"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.348695 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0-combined-ca-bundle\") pod \"barbican-api-5b99dd6c84-fwhqt\" (UID: \"6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0\") " pod="openstack/barbican-api-5b99dd6c84-fwhqt"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.348788 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq8ct\" (UniqueName: \"kubernetes.io/projected/6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0-kube-api-access-jq8ct\") pod \"barbican-api-5b99dd6c84-fwhqt\" (UID: \"6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0\") " pod="openstack/barbican-api-5b99dd6c84-fwhqt"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.348910 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0-config-data\") pod \"barbican-api-5b99dd6c84-fwhqt\" (UID: \"6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0\") " pod="openstack/barbican-api-5b99dd6c84-fwhqt"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.348953 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0-config-data-custom\") pod \"barbican-api-5b99dd6c84-fwhqt\" (UID: \"6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0\") " pod="openstack/barbican-api-5b99dd6c84-fwhqt"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.348975 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0-logs\") pod \"barbican-api-5b99dd6c84-fwhqt\" (UID: \"6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0\") " pod="openstack/barbican-api-5b99dd6c84-fwhqt"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.359576 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0-logs\") pod \"barbican-api-5b99dd6c84-fwhqt\" (UID: \"6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0\") " pod="openstack/barbican-api-5b99dd6c84-fwhqt"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.368206 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0-config-data\") pod \"barbican-api-5b99dd6c84-fwhqt\" (UID: \"6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0\") " pod="openstack/barbican-api-5b99dd6c84-fwhqt"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.368400 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0-combined-ca-bundle\") pod \"barbican-api-5b99dd6c84-fwhqt\" (UID: \"6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0\") " pod="openstack/barbican-api-5b99dd6c84-fwhqt"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.368254 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0-config-data-custom\") pod \"barbican-api-5b99dd6c84-fwhqt\" (UID: \"6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0\") " pod="openstack/barbican-api-5b99dd6c84-fwhqt"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.376510 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq8ct\" (UniqueName: \"kubernetes.io/projected/6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0-kube-api-access-jq8ct\") pod \"barbican-api-5b99dd6c84-fwhqt\" (UID: \"6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0\") " pod="openstack/barbican-api-5b99dd6c84-fwhqt"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.442970 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7549c5f8db-8jjpm"]
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.467513 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6c9494f487-fzm28"]
Feb 19 13:30:30 crc kubenswrapper[4861]: W0219 13:30:30.491716 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod382166c8_355e_407b_9721_3eee34966095.slice/crio-8803783c7750bf2430911b3f45cc1b1884f612db8facb09f4253866c4a458a13 WatchSource:0}: Error finding container 8803783c7750bf2430911b3f45cc1b1884f612db8facb09f4253866c4a458a13: Status 404 returned error can't find the container with id 8803783c7750bf2430911b3f45cc1b1884f612db8facb09f4253866c4a458a13
Feb 19 13:30:30 crc kubenswrapper[4861]: W0219 13:30:30.511672 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61eb2f94_44a8_4db5_9d17_9c4bd7bbc551.slice/crio-0affed1b8f1419a02b34fe57e28102de06b58b19207138fb1e042bc63f558d6e WatchSource:0}: Error finding container 0affed1b8f1419a02b34fe57e28102de06b58b19207138fb1e042bc63f558d6e: Status 404 returned error can't find the container with id 0affed1b8f1419a02b34fe57e28102de06b58b19207138fb1e042bc63f558d6e
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.543265 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7489f6876c-dzd74"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.580516 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b99dd6c84-fwhqt"
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.666263 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-697dc9f75d-t2chc"]
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.847206 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-68c66cf485-7rs8h"]
Feb 19 13:30:30 crc kubenswrapper[4861]: W0219 13:30:30.881166 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62993e1b_6031_4438_b32a_c0d721d4870d.slice/crio-da3e479e4b8fd6b70ffeedd244dc885a9bc44a43869c5087ca175323d8d30f20 WatchSource:0}: Error finding container da3e479e4b8fd6b70ffeedd244dc885a9bc44a43869c5087ca175323d8d30f20: Status 404 returned error can't find the container with id da3e479e4b8fd6b70ffeedd244dc885a9bc44a43869c5087ca175323d8d30f20
Feb 19 13:30:30 crc kubenswrapper[4861]: I0219 13:30:30.941345 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-545d79c874-vmrzt"]
Feb 19 13:30:31 crc kubenswrapper[4861]: I0219 13:30:31.008460 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7549c5f8db-8jjpm" event={"ID":"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551","Type":"ContainerStarted","Data":"0affed1b8f1419a02b34fe57e28102de06b58b19207138fb1e042bc63f558d6e"}
Feb 19 13:30:31 crc kubenswrapper[4861]: I0219 13:30:31.010646 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68c66cf485-7rs8h" event={"ID":"62993e1b-6031-4438-b32a-c0d721d4870d","Type":"ContainerStarted","Data":"da3e479e4b8fd6b70ffeedd244dc885a9bc44a43869c5087ca175323d8d30f20"}
Feb 19 13:30:31 crc kubenswrapper[4861]: I0219 13:30:31.016844 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-697dc9f75d-t2chc" event={"ID":"05baff31-913d-47c0-92c1-0ec2085039ba","Type":"ContainerStarted","Data":"b52fef68e7d46ccbfdc0130e9d08b4fe1e45b565aa64803989730117222c8da5"}
Feb 19 13:30:31 crc kubenswrapper[4861]: W0219 13:30:31.024076 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d002f91_22f1_4ebd_8bc9_04e81e4a00ef.slice/crio-6d7a71c53242fcf62e136045d6262d60179ab6d7c9f178bb1105c59863455c6d WatchSource:0}: Error finding container 6d7a71c53242fcf62e136045d6262d60179ab6d7c9f178bb1105c59863455c6d: Status 404 returned error can't find the container with id 6d7a71c53242fcf62e136045d6262d60179ab6d7c9f178bb1105c59863455c6d
Feb 19 13:30:31 crc kubenswrapper[4861]: I0219 13:30:31.037886 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6c9494f487-fzm28" event={"ID":"382166c8-355e-407b-9721-3eee34966095","Type":"ContainerStarted","Data":"8803783c7750bf2430911b3f45cc1b1884f612db8facb09f4253866c4a458a13"}
Feb 19 13:30:31 crc kubenswrapper[4861]: I0219 13:30:31.038819 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77d55b9c69-4bvnr" podUID="1e268516-d31c-4e80-884b-3fc40f8ab3d8" containerName="dnsmasq-dns" containerID="cri-o://f73e324d181ee60400307f66ffe663bbf7497ebdf4a39d8b07581334324a9122" gracePeriod=10
Feb 19 13:30:31 crc kubenswrapper[4861]: I0219 13:30:31.039638 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6c9494f487-fzm28"
Feb 19 13:30:31 crc kubenswrapper[4861]: I0219 13:30:31.119211 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6ffbcbb99f-phcxs"]
Feb 19 13:30:31 crc kubenswrapper[4861]: W0219 13:30:31.122029 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4307ff9_78bb_48ec_8096_6e06ff22e19b.slice/crio-d8175cd89c4d0dc5bc6f2778856c4aced2ad9ae2cf793adc4fd6a8cb21dd7c71 WatchSource:0}: Error finding container d8175cd89c4d0dc5bc6f2778856c4aced2ad9ae2cf793adc4fd6a8cb21dd7c71: Status 404 returned error can't find the container with id d8175cd89c4d0dc5bc6f2778856c4aced2ad9ae2cf793adc4fd6a8cb21dd7c71
Feb 19 13:30:31 crc kubenswrapper[4861]: W0219 13:30:31.172061 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46d0ac5c_1d20_4b80_be1b_21ad2641b215.slice/crio-12c6e860ab8a7e9b8288e87ffad8c5d530e948c3938e5ea126ae27dd896f46d7 WatchSource:0}: Error finding container 12c6e860ab8a7e9b8288e87ffad8c5d530e948c3938e5ea126ae27dd896f46d7: Status 404 returned error can't find the container with id 12c6e860ab8a7e9b8288e87ffad8c5d530e948c3938e5ea126ae27dd896f46d7
Feb 19 13:30:31 crc kubenswrapper[4861]: I0219 13:30:31.180320 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6c9494f487-fzm28" podStartSLOduration=2.180285574 podStartE2EDuration="2.180285574s" podCreationTimestamp="2026-02-19 13:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:30:31.073072836 +0000 UTC m=+1245.734176084" watchObservedRunningTime="2026-02-19 13:30:31.180285574 +0000 UTC m=+1245.841388802"
Feb 19 13:30:31 crc kubenswrapper[4861]: I0219 13:30:31.233956 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6799fd8d6-p6tpl"]
Feb 19 13:30:31 crc kubenswrapper[4861]: I0219 13:30:31.247257 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7489f6876c-dzd74"]
Feb 19 13:30:31 crc kubenswrapper[4861]: I0219 13:30:31.308249 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b99dd6c84-fwhqt"]
Feb 19 13:30:31 crc kubenswrapper[4861]: W0219 13:30:31.339437 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e1d0ec1_d77d_4a8e_ab33_ce11c2115ac0.slice/crio-b8ac2f7703ceac6d1f6c576350c62154b55095a5fb14103ba7d391ae847e4847 WatchSource:0}: Error finding container b8ac2f7703ceac6d1f6c576350c62154b55095a5fb14103ba7d391ae847e4847: Status 404 returned error can't find the container with id b8ac2f7703ceac6d1f6c576350c62154b55095a5fb14103ba7d391ae847e4847
Feb 19 13:30:31 crc kubenswrapper[4861]: I0219 13:30:31.670794 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77d55b9c69-4bvnr"
Feb 19 13:30:31 crc kubenswrapper[4861]: I0219 13:30:31.793334 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpk2k\" (UniqueName: \"kubernetes.io/projected/1e268516-d31c-4e80-884b-3fc40f8ab3d8-kube-api-access-mpk2k\") pod \"1e268516-d31c-4e80-884b-3fc40f8ab3d8\" (UID: \"1e268516-d31c-4e80-884b-3fc40f8ab3d8\") "
Feb 19 13:30:31 crc kubenswrapper[4861]: I0219 13:30:31.793403 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e268516-d31c-4e80-884b-3fc40f8ab3d8-ovsdbserver-sb\") pod \"1e268516-d31c-4e80-884b-3fc40f8ab3d8\" (UID: \"1e268516-d31c-4e80-884b-3fc40f8ab3d8\") "
Feb 19 13:30:31 crc kubenswrapper[4861]: I0219 13:30:31.793496 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e268516-d31c-4e80-884b-3fc40f8ab3d8-ovsdbserver-nb\") pod \"1e268516-d31c-4e80-884b-3fc40f8ab3d8\" (UID: \"1e268516-d31c-4e80-884b-3fc40f8ab3d8\") "
Feb 19 13:30:31 crc kubenswrapper[4861]: I0219 13:30:31.793530 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for
volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e268516-d31c-4e80-884b-3fc40f8ab3d8-config\") pod \"1e268516-d31c-4e80-884b-3fc40f8ab3d8\" (UID: \"1e268516-d31c-4e80-884b-3fc40f8ab3d8\") " Feb 19 13:30:31 crc kubenswrapper[4861]: I0219 13:30:31.793663 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e268516-d31c-4e80-884b-3fc40f8ab3d8-dns-svc\") pod \"1e268516-d31c-4e80-884b-3fc40f8ab3d8\" (UID: \"1e268516-d31c-4e80-884b-3fc40f8ab3d8\") " Feb 19 13:30:31 crc kubenswrapper[4861]: I0219 13:30:31.793696 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e268516-d31c-4e80-884b-3fc40f8ab3d8-dns-swift-storage-0\") pod \"1e268516-d31c-4e80-884b-3fc40f8ab3d8\" (UID: \"1e268516-d31c-4e80-884b-3fc40f8ab3d8\") " Feb 19 13:30:31 crc kubenswrapper[4861]: I0219 13:30:31.838046 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e268516-d31c-4e80-884b-3fc40f8ab3d8-kube-api-access-mpk2k" (OuterVolumeSpecName: "kube-api-access-mpk2k") pod "1e268516-d31c-4e80-884b-3fc40f8ab3d8" (UID: "1e268516-d31c-4e80-884b-3fc40f8ab3d8"). InnerVolumeSpecName "kube-api-access-mpk2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:30:31 crc kubenswrapper[4861]: I0219 13:30:31.898170 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpk2k\" (UniqueName: \"kubernetes.io/projected/1e268516-d31c-4e80-884b-3fc40f8ab3d8-kube-api-access-mpk2k\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.029107 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e268516-d31c-4e80-884b-3fc40f8ab3d8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1e268516-d31c-4e80-884b-3fc40f8ab3d8" (UID: "1e268516-d31c-4e80-884b-3fc40f8ab3d8"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.049269 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e268516-d31c-4e80-884b-3fc40f8ab3d8-config" (OuterVolumeSpecName: "config") pod "1e268516-d31c-4e80-884b-3fc40f8ab3d8" (UID: "1e268516-d31c-4e80-884b-3fc40f8ab3d8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.061270 4861 generic.go:334] "Generic (PLEG): container finished" podID="ed6f0ad1-4261-4369-9270-bb6f2aabc68a" containerID="c7546fec94273eadd02f2c9ea3c7773ed0a984b9ae490224e2ab36203fa2addb" exitCode=0 Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.072973 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e268516-d31c-4e80-884b-3fc40f8ab3d8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1e268516-d31c-4e80-884b-3fc40f8ab3d8" (UID: "1e268516-d31c-4e80-884b-3fc40f8ab3d8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.074501 4861 generic.go:334] "Generic (PLEG): container finished" podID="1e268516-d31c-4e80-884b-3fc40f8ab3d8" containerID="f73e324d181ee60400307f66ffe663bbf7497ebdf4a39d8b07581334324a9122" exitCode=0 Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.074646 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77d55b9c69-4bvnr" Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.080722 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e268516-d31c-4e80-884b-3fc40f8ab3d8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1e268516-d31c-4e80-884b-3fc40f8ab3d8" (UID: "1e268516-d31c-4e80-884b-3fc40f8ab3d8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.108881 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e268516-d31c-4e80-884b-3fc40f8ab3d8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1e268516-d31c-4e80-884b-3fc40f8ab3d8" (UID: "1e268516-d31c-4e80-884b-3fc40f8ab3d8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.151081 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e268516-d31c-4e80-884b-3fc40f8ab3d8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.155739 4861 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e268516-d31c-4e80-884b-3fc40f8ab3d8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.155771 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e268516-d31c-4e80-884b-3fc40f8ab3d8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.155790 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e268516-d31c-4e80-884b-3fc40f8ab3d8-ovsdbserver-nb\") on node 
\"crc\" DevicePath \"\"" Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.155800 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e268516-d31c-4e80-884b-3fc40f8ab3d8-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.199950 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7549c5f8db-8jjpm" podStartSLOduration=3.199928743 podStartE2EDuration="3.199928743s" podCreationTimestamp="2026-02-19 13:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:30:32.162099774 +0000 UTC m=+1246.823202992" watchObservedRunningTime="2026-02-19 13:30:32.199928743 +0000 UTC m=+1246.861031971" Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.234968 4861 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.234997 4861 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.286994 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b99dd6c84-fwhqt" event={"ID":"6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0","Type":"ContainerStarted","Data":"b8ac2f7703ceac6d1f6c576350c62154b55095a5fb14103ba7d391ae847e4847"} Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.287881 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7549c5f8db-8jjpm" Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.287900 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7549c5f8db-8jjpm" Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.287910 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7489f6876c-dzd74" 
event={"ID":"ed6f0ad1-4261-4369-9270-bb6f2aabc68a","Type":"ContainerDied","Data":"c7546fec94273eadd02f2c9ea3c7773ed0a984b9ae490224e2ab36203fa2addb"} Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.287928 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.287939 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.288233 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.288269 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7489f6876c-dzd74" event={"ID":"ed6f0ad1-4261-4369-9270-bb6f2aabc68a","Type":"ContainerStarted","Data":"c8f532ade0da409990458aa6354e0b3af08be7278fbd4857cc3c361db4ac656e"} Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.288386 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77d55b9c69-4bvnr" event={"ID":"1e268516-d31c-4e80-884b-3fc40f8ab3d8","Type":"ContainerDied","Data":"f73e324d181ee60400307f66ffe663bbf7497ebdf4a39d8b07581334324a9122"} Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.288482 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77d55b9c69-4bvnr" event={"ID":"1e268516-d31c-4e80-884b-3fc40f8ab3d8","Type":"ContainerDied","Data":"223e2347315cce19d0efbc01d17249c3f0c092636c2285f4ed65652275c5fbb2"} Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.288496 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7549c5f8db-8jjpm" event={"ID":"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551","Type":"ContainerStarted","Data":"cac3d21c0f1778591b54b4a019bb294954bdf9f1e30a0723d99b86255e4dbfe4"} Feb 19 13:30:32 crc kubenswrapper[4861]: 
I0219 13:30:32.288505 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7549c5f8db-8jjpm" event={"ID":"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551","Type":"ContainerStarted","Data":"858c872d2372a72727c46016fe7cdb02a96c24d499f480e8d0ff260bc99f322f"} Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.288514 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6ffbcbb99f-phcxs" event={"ID":"a4307ff9-78bb-48ec-8096-6e06ff22e19b","Type":"ContainerStarted","Data":"d8175cd89c4d0dc5bc6f2778856c4aced2ad9ae2cf793adc4fd6a8cb21dd7c71"} Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.288528 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6c9494f487-fzm28" event={"ID":"382166c8-355e-407b-9721-3eee34966095","Type":"ContainerStarted","Data":"734b5999b62cb697fa384640a54b84e4030809dc81b17ff593f4ae3a0f78d52d"} Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.288539 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6799fd8d6-p6tpl" event={"ID":"46d0ac5c-1d20-4b80-be1b-21ad2641b215","Type":"ContainerStarted","Data":"12c6e860ab8a7e9b8288e87ffad8c5d530e948c3938e5ea126ae27dd896f46d7"} Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.288548 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-545d79c874-vmrzt" event={"ID":"5d002f91-22f1-4ebd-8bc9-04e81e4a00ef","Type":"ContainerStarted","Data":"6d7a71c53242fcf62e136045d6262d60179ab6d7c9f178bb1105c59863455c6d"} Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.288565 4861 scope.go:117] "RemoveContainer" containerID="f73e324d181ee60400307f66ffe663bbf7497ebdf4a39d8b07581334324a9122" Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.289768 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.291992 4861 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.345460 4861 scope.go:117] "RemoveContainer" containerID="fd2abce8445249833d6f60bf2d66bec27c14c5f1ecc666e03fa81d07416ed894" Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.445235 4861 scope.go:117] "RemoveContainer" containerID="f73e324d181ee60400307f66ffe663bbf7497ebdf4a39d8b07581334324a9122" Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.445311 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77d55b9c69-4bvnr"] Feb 19 13:30:32 crc kubenswrapper[4861]: E0219 13:30:32.447953 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f73e324d181ee60400307f66ffe663bbf7497ebdf4a39d8b07581334324a9122\": container with ID starting with f73e324d181ee60400307f66ffe663bbf7497ebdf4a39d8b07581334324a9122 not found: ID does not exist" containerID="f73e324d181ee60400307f66ffe663bbf7497ebdf4a39d8b07581334324a9122" Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.448005 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f73e324d181ee60400307f66ffe663bbf7497ebdf4a39d8b07581334324a9122"} err="failed to get container status \"f73e324d181ee60400307f66ffe663bbf7497ebdf4a39d8b07581334324a9122\": rpc error: code = NotFound desc = could not find container \"f73e324d181ee60400307f66ffe663bbf7497ebdf4a39d8b07581334324a9122\": container with ID starting with f73e324d181ee60400307f66ffe663bbf7497ebdf4a39d8b07581334324a9122 not found: ID does not exist" Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.448042 4861 scope.go:117] "RemoveContainer" containerID="fd2abce8445249833d6f60bf2d66bec27c14c5f1ecc666e03fa81d07416ed894" Feb 19 13:30:32 crc kubenswrapper[4861]: E0219 13:30:32.448399 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"fd2abce8445249833d6f60bf2d66bec27c14c5f1ecc666e03fa81d07416ed894\": container with ID starting with fd2abce8445249833d6f60bf2d66bec27c14c5f1ecc666e03fa81d07416ed894 not found: ID does not exist" containerID="fd2abce8445249833d6f60bf2d66bec27c14c5f1ecc666e03fa81d07416ed894" Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.448438 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd2abce8445249833d6f60bf2d66bec27c14c5f1ecc666e03fa81d07416ed894"} err="failed to get container status \"fd2abce8445249833d6f60bf2d66bec27c14c5f1ecc666e03fa81d07416ed894\": rpc error: code = NotFound desc = could not find container \"fd2abce8445249833d6f60bf2d66bec27c14c5f1ecc666e03fa81d07416ed894\": container with ID starting with fd2abce8445249833d6f60bf2d66bec27c14c5f1ecc666e03fa81d07416ed894 not found: ID does not exist" Feb 19 13:30:32 crc kubenswrapper[4861]: I0219 13:30:32.469468 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77d55b9c69-4bvnr"] Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.075642 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.135195 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.276210 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b99dd6c84-fwhqt" event={"ID":"6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0","Type":"ContainerStarted","Data":"4b82ebe957f9695e5600d639761e63ec3f73e659615e4708f6fd9df6e83386d0"} Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.276263 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b99dd6c84-fwhqt" 
event={"ID":"6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0","Type":"ContainerStarted","Data":"2e1401071d848d309852eb1fd21f3ce9b4e3618b92c660db367c88e647969d7f"} Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.277374 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b99dd6c84-fwhqt" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.277405 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b99dd6c84-fwhqt" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.292464 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7489f6876c-dzd74" event={"ID":"ed6f0ad1-4261-4369-9270-bb6f2aabc68a","Type":"ContainerStarted","Data":"36095d8dace58542199462d7db345384b3b4a4dfba7fe0ba68d8f43490d87142"} Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.292585 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7489f6876c-dzd74" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.313555 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6799fd8d6-p6tpl" event={"ID":"46d0ac5c-1d20-4b80-be1b-21ad2641b215","Type":"ContainerStarted","Data":"a6147d4413d0fab04021a29f6c8ca99f658d6f9b5f9f258fb48c889b282281d7"} Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.313620 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6799fd8d6-p6tpl" event={"ID":"46d0ac5c-1d20-4b80-be1b-21ad2641b215","Type":"ContainerStarted","Data":"17fcee271c2a499b801142f1f8bd906a26d2c54f3ca073b5f9002a5871100c7a"} Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.314548 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6799fd8d6-p6tpl" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.314580 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6799fd8d6-p6tpl" Feb 19 13:30:33 crc 
kubenswrapper[4861]: I0219 13:30:33.315025 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.315949 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5b99dd6c84-fwhqt" podStartSLOduration=4.3159301469999996 podStartE2EDuration="4.315930147s" podCreationTimestamp="2026-02-19 13:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:30:33.301400796 +0000 UTC m=+1247.962504024" watchObservedRunningTime="2026-02-19 13:30:33.315930147 +0000 UTC m=+1247.977033375" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.332065 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7489f6876c-dzd74" podStartSLOduration=4.332043951 podStartE2EDuration="4.332043951s" podCreationTimestamp="2026-02-19 13:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:30:33.323859731 +0000 UTC m=+1247.984962979" watchObservedRunningTime="2026-02-19 13:30:33.332043951 +0000 UTC m=+1247.993147179" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.349191 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6799fd8d6-p6tpl" podStartSLOduration=4.349168862 podStartE2EDuration="4.349168862s" podCreationTimestamp="2026-02-19 13:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:30:33.345350859 +0000 UTC m=+1248.006454087" watchObservedRunningTime="2026-02-19 13:30:33.349168862 +0000 UTC m=+1248.010272090" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.740605 4861 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/barbican-api-d6c467cc6-ng4wh"] Feb 19 13:30:33 crc kubenswrapper[4861]: E0219 13:30:33.741413 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e268516-d31c-4e80-884b-3fc40f8ab3d8" containerName="dnsmasq-dns" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.741447 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e268516-d31c-4e80-884b-3fc40f8ab3d8" containerName="dnsmasq-dns" Feb 19 13:30:33 crc kubenswrapper[4861]: E0219 13:30:33.741471 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e268516-d31c-4e80-884b-3fc40f8ab3d8" containerName="init" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.741479 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e268516-d31c-4e80-884b-3fc40f8ab3d8" containerName="init" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.741733 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e268516-d31c-4e80-884b-3fc40f8ab3d8" containerName="dnsmasq-dns" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.743909 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-d6c467cc6-ng4wh" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.746245 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.747138 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.762010 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-d6c467cc6-ng4wh"] Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.838007 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.838065 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.889844 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/074e719c-b46b-4f91-ae2d-e7f30368a8ae-public-tls-certs\") pod \"barbican-api-d6c467cc6-ng4wh\" (UID: \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\") " pod="openstack/barbican-api-d6c467cc6-ng4wh" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.889984 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/074e719c-b46b-4f91-ae2d-e7f30368a8ae-config-data-custom\") pod \"barbican-api-d6c467cc6-ng4wh\" (UID: \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\") " pod="openstack/barbican-api-d6c467cc6-ng4wh" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.890027 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/074e719c-b46b-4f91-ae2d-e7f30368a8ae-logs\") pod \"barbican-api-d6c467cc6-ng4wh\" (UID: \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\") " pod="openstack/barbican-api-d6c467cc6-ng4wh" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.890064 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/074e719c-b46b-4f91-ae2d-e7f30368a8ae-internal-tls-certs\") pod \"barbican-api-d6c467cc6-ng4wh\" (UID: \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\") " pod="openstack/barbican-api-d6c467cc6-ng4wh" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.890116 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpqp7\" (UniqueName: \"kubernetes.io/projected/074e719c-b46b-4f91-ae2d-e7f30368a8ae-kube-api-access-cpqp7\") pod \"barbican-api-d6c467cc6-ng4wh\" (UID: \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\") " pod="openstack/barbican-api-d6c467cc6-ng4wh" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.890132 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/074e719c-b46b-4f91-ae2d-e7f30368a8ae-combined-ca-bundle\") pod \"barbican-api-d6c467cc6-ng4wh\" (UID: \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\") " pod="openstack/barbican-api-d6c467cc6-ng4wh" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.890223 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/074e719c-b46b-4f91-ae2d-e7f30368a8ae-config-data\") pod \"barbican-api-d6c467cc6-ng4wh\" (UID: \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\") " pod="openstack/barbican-api-d6c467cc6-ng4wh" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.990657 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e268516-d31c-4e80-884b-3fc40f8ab3d8" path="/var/lib/kubelet/pods/1e268516-d31c-4e80-884b-3fc40f8ab3d8/volumes" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.992359 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/074e719c-b46b-4f91-ae2d-e7f30368a8ae-logs\") pod \"barbican-api-d6c467cc6-ng4wh\" (UID: \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\") " pod="openstack/barbican-api-d6c467cc6-ng4wh" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.992546 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/074e719c-b46b-4f91-ae2d-e7f30368a8ae-internal-tls-certs\") pod \"barbican-api-d6c467cc6-ng4wh\" (UID: \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\") " pod="openstack/barbican-api-d6c467cc6-ng4wh" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.992661 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpqp7\" (UniqueName: \"kubernetes.io/projected/074e719c-b46b-4f91-ae2d-e7f30368a8ae-kube-api-access-cpqp7\") pod \"barbican-api-d6c467cc6-ng4wh\" (UID: \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\") " pod="openstack/barbican-api-d6c467cc6-ng4wh" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.992713 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/074e719c-b46b-4f91-ae2d-e7f30368a8ae-combined-ca-bundle\") pod \"barbican-api-d6c467cc6-ng4wh\" (UID: 
\"074e719c-b46b-4f91-ae2d-e7f30368a8ae\") " pod="openstack/barbican-api-d6c467cc6-ng4wh" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.992878 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/074e719c-b46b-4f91-ae2d-e7f30368a8ae-config-data\") pod \"barbican-api-d6c467cc6-ng4wh\" (UID: \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\") " pod="openstack/barbican-api-d6c467cc6-ng4wh" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.992951 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/074e719c-b46b-4f91-ae2d-e7f30368a8ae-public-tls-certs\") pod \"barbican-api-d6c467cc6-ng4wh\" (UID: \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\") " pod="openstack/barbican-api-d6c467cc6-ng4wh" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.993169 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/074e719c-b46b-4f91-ae2d-e7f30368a8ae-config-data-custom\") pod \"barbican-api-d6c467cc6-ng4wh\" (UID: \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\") " pod="openstack/barbican-api-d6c467cc6-ng4wh" Feb 19 13:30:33 crc kubenswrapper[4861]: I0219 13:30:33.995683 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/074e719c-b46b-4f91-ae2d-e7f30368a8ae-logs\") pod \"barbican-api-d6c467cc6-ng4wh\" (UID: \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\") " pod="openstack/barbican-api-d6c467cc6-ng4wh" Feb 19 13:30:34 crc kubenswrapper[4861]: I0219 13:30:34.003982 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/074e719c-b46b-4f91-ae2d-e7f30368a8ae-config-data-custom\") pod \"barbican-api-d6c467cc6-ng4wh\" (UID: \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\") " pod="openstack/barbican-api-d6c467cc6-ng4wh" 
Feb 19 13:30:34 crc kubenswrapper[4861]: I0219 13:30:34.005500 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/074e719c-b46b-4f91-ae2d-e7f30368a8ae-public-tls-certs\") pod \"barbican-api-d6c467cc6-ng4wh\" (UID: \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\") " pod="openstack/barbican-api-d6c467cc6-ng4wh" Feb 19 13:30:34 crc kubenswrapper[4861]: I0219 13:30:34.020181 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/074e719c-b46b-4f91-ae2d-e7f30368a8ae-internal-tls-certs\") pod \"barbican-api-d6c467cc6-ng4wh\" (UID: \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\") " pod="openstack/barbican-api-d6c467cc6-ng4wh" Feb 19 13:30:34 crc kubenswrapper[4861]: I0219 13:30:34.023921 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/074e719c-b46b-4f91-ae2d-e7f30368a8ae-combined-ca-bundle\") pod \"barbican-api-d6c467cc6-ng4wh\" (UID: \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\") " pod="openstack/barbican-api-d6c467cc6-ng4wh" Feb 19 13:30:34 crc kubenswrapper[4861]: I0219 13:30:34.026791 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpqp7\" (UniqueName: \"kubernetes.io/projected/074e719c-b46b-4f91-ae2d-e7f30368a8ae-kube-api-access-cpqp7\") pod \"barbican-api-d6c467cc6-ng4wh\" (UID: \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\") " pod="openstack/barbican-api-d6c467cc6-ng4wh" Feb 19 13:30:34 crc kubenswrapper[4861]: I0219 13:30:34.027471 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/074e719c-b46b-4f91-ae2d-e7f30368a8ae-config-data\") pod \"barbican-api-d6c467cc6-ng4wh\" (UID: \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\") " pod="openstack/barbican-api-d6c467cc6-ng4wh" Feb 19 13:30:34 crc kubenswrapper[4861]: I0219 13:30:34.116252 4861 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d6c467cc6-ng4wh" Feb 19 13:30:34 crc kubenswrapper[4861]: I0219 13:30:34.326161 4861 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 13:30:34 crc kubenswrapper[4861]: I0219 13:30:34.797005 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-d6c467cc6-ng4wh"] Feb 19 13:30:34 crc kubenswrapper[4861]: I0219 13:30:34.946958 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 13:30:35 crc kubenswrapper[4861]: I0219 13:30:35.342090 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-697dc9f75d-t2chc" event={"ID":"05baff31-913d-47c0-92c1-0ec2085039ba","Type":"ContainerStarted","Data":"a2cf8b339890cbb8ae2b52114bddde477989b368643198fedcbe39f4d710eac0"} Feb 19 13:30:35 crc kubenswrapper[4861]: I0219 13:30:35.345638 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-545d79c874-vmrzt" event={"ID":"5d002f91-22f1-4ebd-8bc9-04e81e4a00ef","Type":"ContainerStarted","Data":"5e5b1d9b0913f678bfeffc302d49785832edca29f1e96866dc26ab6c9f4872d5"} Feb 19 13:30:35 crc kubenswrapper[4861]: I0219 13:30:35.356354 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d6c467cc6-ng4wh" event={"ID":"074e719c-b46b-4f91-ae2d-e7f30368a8ae","Type":"ContainerStarted","Data":"776ff5a17e90ebc21d49721478bc41e7146bdd38de85dd86200078fc345273f3"} Feb 19 13:30:35 crc kubenswrapper[4861]: I0219 13:30:35.356396 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d6c467cc6-ng4wh" event={"ID":"074e719c-b46b-4f91-ae2d-e7f30368a8ae","Type":"ContainerStarted","Data":"90fc7f8f56b065d37a3d8b95fed12c8241808ee1000d5c6034774c368d36148a"} Feb 19 13:30:35 crc kubenswrapper[4861]: I0219 13:30:35.359993 4861 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/barbican-worker-6ffbcbb99f-phcxs" event={"ID":"a4307ff9-78bb-48ec-8096-6e06ff22e19b","Type":"ContainerStarted","Data":"53911b7e6ee036738f82d06e28457c9efb4b7e608b7a20ad34bd125adf651646"} Feb 19 13:30:35 crc kubenswrapper[4861]: I0219 13:30:35.365650 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68c66cf485-7rs8h" event={"ID":"62993e1b-6031-4438-b32a-c0d721d4870d","Type":"ContainerStarted","Data":"0c560ae3f3663266eadd86c691e47c239911f118809b5d64249fa836e184e7b6"} Feb 19 13:30:35 crc kubenswrapper[4861]: I0219 13:30:35.365745 4861 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 13:30:35 crc kubenswrapper[4861]: I0219 13:30:35.925456 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 13:30:36 crc kubenswrapper[4861]: I0219 13:30:36.386026 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-697dc9f75d-t2chc" event={"ID":"05baff31-913d-47c0-92c1-0ec2085039ba","Type":"ContainerStarted","Data":"d2db89610e2d780332bb65e5dfc40f9896b0ec4bf916b9c971555da24ad5c6a7"} Feb 19 13:30:36 crc kubenswrapper[4861]: I0219 13:30:36.390260 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-545d79c874-vmrzt" event={"ID":"5d002f91-22f1-4ebd-8bc9-04e81e4a00ef","Type":"ContainerStarted","Data":"1d619313cf3eb9116f4f061ab19e9d256b6f4c3706035768630ec087a8ab9bd7"} Feb 19 13:30:36 crc kubenswrapper[4861]: I0219 13:30:36.395701 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d6c467cc6-ng4wh" event={"ID":"074e719c-b46b-4f91-ae2d-e7f30368a8ae","Type":"ContainerStarted","Data":"851c26a783d4f2fb239877063c2d4732d081998faf87a9a0897c6af79d389cda"} Feb 19 13:30:36 crc kubenswrapper[4861]: I0219 13:30:36.396758 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/barbican-api-d6c467cc6-ng4wh" Feb 19 13:30:36 crc kubenswrapper[4861]: I0219 13:30:36.396791 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-d6c467cc6-ng4wh" Feb 19 13:30:36 crc kubenswrapper[4861]: I0219 13:30:36.410459 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6ffbcbb99f-phcxs" event={"ID":"a4307ff9-78bb-48ec-8096-6e06ff22e19b","Type":"ContainerStarted","Data":"fc0693a9e1476f2b6d033af8f56ce772a2fac61eb55bf48764b0906664653ac4"} Feb 19 13:30:36 crc kubenswrapper[4861]: I0219 13:30:36.411701 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-697dc9f75d-t2chc" podStartSLOduration=3.99039782 podStartE2EDuration="7.411681776s" podCreationTimestamp="2026-02-19 13:30:29 +0000 UTC" firstStartedPulling="2026-02-19 13:30:30.727090389 +0000 UTC m=+1245.388193617" lastFinishedPulling="2026-02-19 13:30:34.148374345 +0000 UTC m=+1248.809477573" observedRunningTime="2026-02-19 13:30:36.405081718 +0000 UTC m=+1251.066184946" watchObservedRunningTime="2026-02-19 13:30:36.411681776 +0000 UTC m=+1251.072785004" Feb 19 13:30:36 crc kubenswrapper[4861]: I0219 13:30:36.434351 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68c66cf485-7rs8h" event={"ID":"62993e1b-6031-4438-b32a-c0d721d4870d","Type":"ContainerStarted","Data":"4855fa7c43e4f9dfdd4e777e053cd3d6e2c5aab0ca1dc100e0dc0a778340f493"} Feb 19 13:30:36 crc kubenswrapper[4861]: I0219 13:30:36.446496 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-d6c467cc6-ng4wh" podStartSLOduration=3.446475693 podStartE2EDuration="3.446475693s" podCreationTimestamp="2026-02-19 13:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:30:36.440364549 +0000 UTC m=+1251.101467777" 
watchObservedRunningTime="2026-02-19 13:30:36.446475693 +0000 UTC m=+1251.107578921" Feb 19 13:30:36 crc kubenswrapper[4861]: I0219 13:30:36.474047 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-545d79c874-vmrzt" podStartSLOduration=4.407940604 podStartE2EDuration="7.474031705s" podCreationTimestamp="2026-02-19 13:30:29 +0000 UTC" firstStartedPulling="2026-02-19 13:30:31.064034893 +0000 UTC m=+1245.725138131" lastFinishedPulling="2026-02-19 13:30:34.130126004 +0000 UTC m=+1248.791229232" observedRunningTime="2026-02-19 13:30:36.468219349 +0000 UTC m=+1251.129322577" watchObservedRunningTime="2026-02-19 13:30:36.474031705 +0000 UTC m=+1251.135134933" Feb 19 13:30:36 crc kubenswrapper[4861]: I0219 13:30:36.519407 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-697dc9f75d-t2chc"] Feb 19 13:30:36 crc kubenswrapper[4861]: I0219 13:30:36.524277 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6ffbcbb99f-phcxs" podStartSLOduration=4.482805041 podStartE2EDuration="7.524255808s" podCreationTimestamp="2026-02-19 13:30:29 +0000 UTC" firstStartedPulling="2026-02-19 13:30:31.125083977 +0000 UTC m=+1245.786187205" lastFinishedPulling="2026-02-19 13:30:34.166534744 +0000 UTC m=+1248.827637972" observedRunningTime="2026-02-19 13:30:36.506687014 +0000 UTC m=+1251.167790232" watchObservedRunningTime="2026-02-19 13:30:36.524255808 +0000 UTC m=+1251.185359036" Feb 19 13:30:36 crc kubenswrapper[4861]: I0219 13:30:36.540851 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-68c66cf485-7rs8h" podStartSLOduration=4.268296055 podStartE2EDuration="7.540806193s" podCreationTimestamp="2026-02-19 13:30:29 +0000 UTC" firstStartedPulling="2026-02-19 13:30:30.895703381 +0000 UTC m=+1245.556806609" lastFinishedPulling="2026-02-19 13:30:34.168213519 +0000 UTC m=+1248.829316747" 
observedRunningTime="2026-02-19 13:30:36.535159161 +0000 UTC m=+1251.196262389" watchObservedRunningTime="2026-02-19 13:30:36.540806193 +0000 UTC m=+1251.201909421" Feb 19 13:30:36 crc kubenswrapper[4861]: I0219 13:30:36.566299 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-68c66cf485-7rs8h"] Feb 19 13:30:37 crc kubenswrapper[4861]: I0219 13:30:37.456151 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-zc7qs" event={"ID":"cfab12c1-cdb5-415f-8290-4d057a940b1a","Type":"ContainerStarted","Data":"2ebd568b5fee8d92b4bf41bd5662b65019c7fa35b6090be14d419e69dd312ad2"} Feb 19 13:30:37 crc kubenswrapper[4861]: I0219 13:30:37.480795 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-zc7qs" podStartSLOduration=3.421369063 podStartE2EDuration="39.480778237s" podCreationTimestamp="2026-02-19 13:29:58 +0000 UTC" firstStartedPulling="2026-02-19 13:29:59.680768189 +0000 UTC m=+1214.341871417" lastFinishedPulling="2026-02-19 13:30:35.740177363 +0000 UTC m=+1250.401280591" observedRunningTime="2026-02-19 13:30:37.473180512 +0000 UTC m=+1252.134283740" watchObservedRunningTime="2026-02-19 13:30:37.480778237 +0000 UTC m=+1252.141881465" Feb 19 13:30:38 crc kubenswrapper[4861]: I0219 13:30:38.466486 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-697dc9f75d-t2chc" podUID="05baff31-913d-47c0-92c1-0ec2085039ba" containerName="barbican-keystone-listener-log" containerID="cri-o://a2cf8b339890cbb8ae2b52114bddde477989b368643198fedcbe39f4d710eac0" gracePeriod=30 Feb 19 13:30:38 crc kubenswrapper[4861]: I0219 13:30:38.466558 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-697dc9f75d-t2chc" podUID="05baff31-913d-47c0-92c1-0ec2085039ba" containerName="barbican-keystone-listener" 
containerID="cri-o://d2db89610e2d780332bb65e5dfc40f9896b0ec4bf916b9c971555da24ad5c6a7" gracePeriod=30 Feb 19 13:30:38 crc kubenswrapper[4861]: I0219 13:30:38.466826 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-68c66cf485-7rs8h" podUID="62993e1b-6031-4438-b32a-c0d721d4870d" containerName="barbican-worker" containerID="cri-o://4855fa7c43e4f9dfdd4e777e053cd3d6e2c5aab0ca1dc100e0dc0a778340f493" gracePeriod=30 Feb 19 13:30:38 crc kubenswrapper[4861]: I0219 13:30:38.466823 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-68c66cf485-7rs8h" podUID="62993e1b-6031-4438-b32a-c0d721d4870d" containerName="barbican-worker-log" containerID="cri-o://0c560ae3f3663266eadd86c691e47c239911f118809b5d64249fa836e184e7b6" gracePeriod=30 Feb 19 13:30:39 crc kubenswrapper[4861]: I0219 13:30:39.476369 4861 generic.go:334] "Generic (PLEG): container finished" podID="62993e1b-6031-4438-b32a-c0d721d4870d" containerID="0c560ae3f3663266eadd86c691e47c239911f118809b5d64249fa836e184e7b6" exitCode=143 Feb 19 13:30:39 crc kubenswrapper[4861]: I0219 13:30:39.476572 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68c66cf485-7rs8h" event={"ID":"62993e1b-6031-4438-b32a-c0d721d4870d","Type":"ContainerDied","Data":"0c560ae3f3663266eadd86c691e47c239911f118809b5d64249fa836e184e7b6"} Feb 19 13:30:39 crc kubenswrapper[4861]: I0219 13:30:39.478360 4861 generic.go:334] "Generic (PLEG): container finished" podID="05baff31-913d-47c0-92c1-0ec2085039ba" containerID="a2cf8b339890cbb8ae2b52114bddde477989b368643198fedcbe39f4d710eac0" exitCode=143 Feb 19 13:30:39 crc kubenswrapper[4861]: I0219 13:30:39.478391 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-697dc9f75d-t2chc" event={"ID":"05baff31-913d-47c0-92c1-0ec2085039ba","Type":"ContainerDied","Data":"a2cf8b339890cbb8ae2b52114bddde477989b368643198fedcbe39f4d710eac0"} 
Feb 19 13:30:40 crc kubenswrapper[4861]: I0219 13:30:40.500942 4861 generic.go:334] "Generic (PLEG): container finished" podID="62993e1b-6031-4438-b32a-c0d721d4870d" containerID="4855fa7c43e4f9dfdd4e777e053cd3d6e2c5aab0ca1dc100e0dc0a778340f493" exitCode=0 Feb 19 13:30:40 crc kubenswrapper[4861]: I0219 13:30:40.501012 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68c66cf485-7rs8h" event={"ID":"62993e1b-6031-4438-b32a-c0d721d4870d","Type":"ContainerDied","Data":"4855fa7c43e4f9dfdd4e777e053cd3d6e2c5aab0ca1dc100e0dc0a778340f493"} Feb 19 13:30:40 crc kubenswrapper[4861]: I0219 13:30:40.508597 4861 generic.go:334] "Generic (PLEG): container finished" podID="05baff31-913d-47c0-92c1-0ec2085039ba" containerID="d2db89610e2d780332bb65e5dfc40f9896b0ec4bf916b9c971555da24ad5c6a7" exitCode=0 Feb 19 13:30:40 crc kubenswrapper[4861]: I0219 13:30:40.508648 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-697dc9f75d-t2chc" event={"ID":"05baff31-913d-47c0-92c1-0ec2085039ba","Type":"ContainerDied","Data":"d2db89610e2d780332bb65e5dfc40f9896b0ec4bf916b9c971555da24ad5c6a7"} Feb 19 13:30:40 crc kubenswrapper[4861]: I0219 13:30:40.547666 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7489f6876c-dzd74" Feb 19 13:30:40 crc kubenswrapper[4861]: I0219 13:30:40.602775 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68bc8f6695-sx7qd"] Feb 19 13:30:40 crc kubenswrapper[4861]: I0219 13:30:40.603018 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68bc8f6695-sx7qd" podUID="9a1a6b8e-dff3-4107-8742-33a404b1b737" containerName="dnsmasq-dns" containerID="cri-o://1568dacd1144b4f34ab30e12d3fb890b1f9d1c56ab77ff2a6316b084d0bb7730" gracePeriod=10 Feb 19 13:30:41 crc kubenswrapper[4861]: I0219 13:30:41.539030 4861 generic.go:334] "Generic (PLEG): container finished" 
podID="cfab12c1-cdb5-415f-8290-4d057a940b1a" containerID="2ebd568b5fee8d92b4bf41bd5662b65019c7fa35b6090be14d419e69dd312ad2" exitCode=0 Feb 19 13:30:41 crc kubenswrapper[4861]: I0219 13:30:41.539288 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-zc7qs" event={"ID":"cfab12c1-cdb5-415f-8290-4d057a940b1a","Type":"ContainerDied","Data":"2ebd568b5fee8d92b4bf41bd5662b65019c7fa35b6090be14d419e69dd312ad2"} Feb 19 13:30:41 crc kubenswrapper[4861]: I0219 13:30:41.548366 4861 generic.go:334] "Generic (PLEG): container finished" podID="9a1a6b8e-dff3-4107-8742-33a404b1b737" containerID="1568dacd1144b4f34ab30e12d3fb890b1f9d1c56ab77ff2a6316b084d0bb7730" exitCode=0 Feb 19 13:30:41 crc kubenswrapper[4861]: I0219 13:30:41.548421 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68bc8f6695-sx7qd" event={"ID":"9a1a6b8e-dff3-4107-8742-33a404b1b737","Type":"ContainerDied","Data":"1568dacd1144b4f34ab30e12d3fb890b1f9d1c56ab77ff2a6316b084d0bb7730"} Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.260882 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b99dd6c84-fwhqt" Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.433203 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b99dd6c84-fwhqt" Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.469013 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-697dc9f75d-t2chc" Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.496122 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05baff31-913d-47c0-92c1-0ec2085039ba-config-data\") pod \"05baff31-913d-47c0-92c1-0ec2085039ba\" (UID: \"05baff31-913d-47c0-92c1-0ec2085039ba\") " Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.496181 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05baff31-913d-47c0-92c1-0ec2085039ba-combined-ca-bundle\") pod \"05baff31-913d-47c0-92c1-0ec2085039ba\" (UID: \"05baff31-913d-47c0-92c1-0ec2085039ba\") " Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.496227 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqvqp\" (UniqueName: \"kubernetes.io/projected/05baff31-913d-47c0-92c1-0ec2085039ba-kube-api-access-tqvqp\") pod \"05baff31-913d-47c0-92c1-0ec2085039ba\" (UID: \"05baff31-913d-47c0-92c1-0ec2085039ba\") " Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.496252 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05baff31-913d-47c0-92c1-0ec2085039ba-logs\") pod \"05baff31-913d-47c0-92c1-0ec2085039ba\" (UID: \"05baff31-913d-47c0-92c1-0ec2085039ba\") " Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.496283 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05baff31-913d-47c0-92c1-0ec2085039ba-config-data-custom\") pod \"05baff31-913d-47c0-92c1-0ec2085039ba\" (UID: \"05baff31-913d-47c0-92c1-0ec2085039ba\") " Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.498795 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/05baff31-913d-47c0-92c1-0ec2085039ba-logs" (OuterVolumeSpecName: "logs") pod "05baff31-913d-47c0-92c1-0ec2085039ba" (UID: "05baff31-913d-47c0-92c1-0ec2085039ba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.500680 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05baff31-913d-47c0-92c1-0ec2085039ba-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "05baff31-913d-47c0-92c1-0ec2085039ba" (UID: "05baff31-913d-47c0-92c1-0ec2085039ba"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.503995 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05baff31-913d-47c0-92c1-0ec2085039ba-kube-api-access-tqvqp" (OuterVolumeSpecName: "kube-api-access-tqvqp") pod "05baff31-913d-47c0-92c1-0ec2085039ba" (UID: "05baff31-913d-47c0-92c1-0ec2085039ba"). InnerVolumeSpecName "kube-api-access-tqvqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.543494 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05baff31-913d-47c0-92c1-0ec2085039ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05baff31-913d-47c0-92c1-0ec2085039ba" (UID: "05baff31-913d-47c0-92c1-0ec2085039ba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.558581 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-697dc9f75d-t2chc" event={"ID":"05baff31-913d-47c0-92c1-0ec2085039ba","Type":"ContainerDied","Data":"b52fef68e7d46ccbfdc0130e9d08b4fe1e45b565aa64803989730117222c8da5"} Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.558633 4861 scope.go:117] "RemoveContainer" containerID="d2db89610e2d780332bb65e5dfc40f9896b0ec4bf916b9c971555da24ad5c6a7" Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.558740 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-697dc9f75d-t2chc" Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.564265 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05baff31-913d-47c0-92c1-0ec2085039ba-config-data" (OuterVolumeSpecName: "config-data") pod "05baff31-913d-47c0-92c1-0ec2085039ba" (UID: "05baff31-913d-47c0-92c1-0ec2085039ba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.566315 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="afbe9acc-8cc5-48b6-9515-61da01b73fcd" containerName="ceilometer-central-agent" containerID="cri-o://7e849cd5e5366538157296d1d75c1308f3ed372130d98238cf173c54acc5b242" gracePeriod=30 Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.566584 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afbe9acc-8cc5-48b6-9515-61da01b73fcd","Type":"ContainerStarted","Data":"8496a003cbf7c04c59fcce77fc996ee8f099dbeda93aff2ceacf2742e81a4b0f"} Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.567920 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.568173 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="afbe9acc-8cc5-48b6-9515-61da01b73fcd" containerName="proxy-httpd" containerID="cri-o://8496a003cbf7c04c59fcce77fc996ee8f099dbeda93aff2ceacf2742e81a4b0f" gracePeriod=30 Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.568239 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="afbe9acc-8cc5-48b6-9515-61da01b73fcd" containerName="sg-core" containerID="cri-o://4b66180c1790c564b6ebbc0a6a6ada0b48520072f168d8e7342eb13c56b83f78" gracePeriod=30 Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.568273 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="afbe9acc-8cc5-48b6-9515-61da01b73fcd" containerName="ceilometer-notification-agent" containerID="cri-o://563e6034bf68d39843ed6d4c780be5dadc7fac915cb11be14284cd0c542973b1" gracePeriod=30 Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.594864 4861 scope.go:117] 
"RemoveContainer" containerID="a2cf8b339890cbb8ae2b52114bddde477989b368643198fedcbe39f4d710eac0" Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.597839 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05baff31-913d-47c0-92c1-0ec2085039ba-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.597863 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05baff31-913d-47c0-92c1-0ec2085039ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.597871 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqvqp\" (UniqueName: \"kubernetes.io/projected/05baff31-913d-47c0-92c1-0ec2085039ba-kube-api-access-tqvqp\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.597882 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05baff31-913d-47c0-92c1-0ec2085039ba-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.597891 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05baff31-913d-47c0-92c1-0ec2085039ba-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.615742 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.5917468919999997 podStartE2EDuration="44.615714911s" podCreationTimestamp="2026-02-19 13:29:58 +0000 UTC" firstStartedPulling="2026-02-19 13:30:00.202941451 +0000 UTC m=+1214.864044679" lastFinishedPulling="2026-02-19 13:30:42.22690947 +0000 UTC m=+1256.888012698" observedRunningTime="2026-02-19 13:30:42.603527392 +0000 UTC m=+1257.264630640" 
watchObservedRunningTime="2026-02-19 13:30:42.615714911 +0000 UTC m=+1257.276818139" Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.682215 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68bc8f6695-sx7qd" Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.716554 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-68c66cf485-7rs8h" Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.806355 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9a1a6b8e-dff3-4107-8742-33a404b1b737-dns-swift-storage-0\") pod \"9a1a6b8e-dff3-4107-8742-33a404b1b737\" (UID: \"9a1a6b8e-dff3-4107-8742-33a404b1b737\") " Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.806485 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qv7k\" (UniqueName: \"kubernetes.io/projected/9a1a6b8e-dff3-4107-8742-33a404b1b737-kube-api-access-8qv7k\") pod \"9a1a6b8e-dff3-4107-8742-33a404b1b737\" (UID: \"9a1a6b8e-dff3-4107-8742-33a404b1b737\") " Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.806549 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a1a6b8e-dff3-4107-8742-33a404b1b737-dns-svc\") pod \"9a1a6b8e-dff3-4107-8742-33a404b1b737\" (UID: \"9a1a6b8e-dff3-4107-8742-33a404b1b737\") " Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.806601 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a1a6b8e-dff3-4107-8742-33a404b1b737-ovsdbserver-nb\") pod \"9a1a6b8e-dff3-4107-8742-33a404b1b737\" (UID: \"9a1a6b8e-dff3-4107-8742-33a404b1b737\") " Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.806633 4861 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a1a6b8e-dff3-4107-8742-33a404b1b737-ovsdbserver-sb\") pod \"9a1a6b8e-dff3-4107-8742-33a404b1b737\" (UID: \"9a1a6b8e-dff3-4107-8742-33a404b1b737\") " Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.806657 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a1a6b8e-dff3-4107-8742-33a404b1b737-config\") pod \"9a1a6b8e-dff3-4107-8742-33a404b1b737\" (UID: \"9a1a6b8e-dff3-4107-8742-33a404b1b737\") " Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.818565 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a1a6b8e-dff3-4107-8742-33a404b1b737-kube-api-access-8qv7k" (OuterVolumeSpecName: "kube-api-access-8qv7k") pod "9a1a6b8e-dff3-4107-8742-33a404b1b737" (UID: "9a1a6b8e-dff3-4107-8742-33a404b1b737"). InnerVolumeSpecName "kube-api-access-8qv7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.865389 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a1a6b8e-dff3-4107-8742-33a404b1b737-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9a1a6b8e-dff3-4107-8742-33a404b1b737" (UID: "9a1a6b8e-dff3-4107-8742-33a404b1b737"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.889011 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a1a6b8e-dff3-4107-8742-33a404b1b737-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9a1a6b8e-dff3-4107-8742-33a404b1b737" (UID: "9a1a6b8e-dff3-4107-8742-33a404b1b737"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.890363 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a1a6b8e-dff3-4107-8742-33a404b1b737-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9a1a6b8e-dff3-4107-8742-33a404b1b737" (UID: "9a1a6b8e-dff3-4107-8742-33a404b1b737"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.894099 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a1a6b8e-dff3-4107-8742-33a404b1b737-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9a1a6b8e-dff3-4107-8742-33a404b1b737" (UID: "9a1a6b8e-dff3-4107-8742-33a404b1b737"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.906211 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a1a6b8e-dff3-4107-8742-33a404b1b737-config" (OuterVolumeSpecName: "config") pod "9a1a6b8e-dff3-4107-8742-33a404b1b737" (UID: "9a1a6b8e-dff3-4107-8742-33a404b1b737"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.908300 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62993e1b-6031-4438-b32a-c0d721d4870d-config-data-custom\") pod \"62993e1b-6031-4438-b32a-c0d721d4870d\" (UID: \"62993e1b-6031-4438-b32a-c0d721d4870d\") "
Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.908404 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62993e1b-6031-4438-b32a-c0d721d4870d-config-data\") pod \"62993e1b-6031-4438-b32a-c0d721d4870d\" (UID: \"62993e1b-6031-4438-b32a-c0d721d4870d\") "
Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.908486 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62993e1b-6031-4438-b32a-c0d721d4870d-combined-ca-bundle\") pod \"62993e1b-6031-4438-b32a-c0d721d4870d\" (UID: \"62993e1b-6031-4438-b32a-c0d721d4870d\") "
Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.908507 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p22sj\" (UniqueName: \"kubernetes.io/projected/62993e1b-6031-4438-b32a-c0d721d4870d-kube-api-access-p22sj\") pod \"62993e1b-6031-4438-b32a-c0d721d4870d\" (UID: \"62993e1b-6031-4438-b32a-c0d721d4870d\") "
Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.908545 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62993e1b-6031-4438-b32a-c0d721d4870d-logs\") pod \"62993e1b-6031-4438-b32a-c0d721d4870d\" (UID: \"62993e1b-6031-4438-b32a-c0d721d4870d\") "
Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.909006 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qv7k\" (UniqueName: \"kubernetes.io/projected/9a1a6b8e-dff3-4107-8742-33a404b1b737-kube-api-access-8qv7k\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.909021 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a1a6b8e-dff3-4107-8742-33a404b1b737-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.909031 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a1a6b8e-dff3-4107-8742-33a404b1b737-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.909041 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a1a6b8e-dff3-4107-8742-33a404b1b737-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.909053 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a1a6b8e-dff3-4107-8742-33a404b1b737-config\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.909062 4861 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9a1a6b8e-dff3-4107-8742-33a404b1b737-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.909478 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62993e1b-6031-4438-b32a-c0d721d4870d-logs" (OuterVolumeSpecName: "logs") pod "62993e1b-6031-4438-b32a-c0d721d4870d" (UID: "62993e1b-6031-4438-b32a-c0d721d4870d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.911599 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62993e1b-6031-4438-b32a-c0d721d4870d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "62993e1b-6031-4438-b32a-c0d721d4870d" (UID: "62993e1b-6031-4438-b32a-c0d721d4870d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.913110 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-697dc9f75d-t2chc"]
Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.914590 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62993e1b-6031-4438-b32a-c0d721d4870d-kube-api-access-p22sj" (OuterVolumeSpecName: "kube-api-access-p22sj") pod "62993e1b-6031-4438-b32a-c0d721d4870d" (UID: "62993e1b-6031-4438-b32a-c0d721d4870d"). InnerVolumeSpecName "kube-api-access-p22sj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.918250 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-697dc9f75d-t2chc"]
Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.955864 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62993e1b-6031-4438-b32a-c0d721d4870d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62993e1b-6031-4438-b32a-c0d721d4870d" (UID: "62993e1b-6031-4438-b32a-c0d721d4870d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.958790 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62993e1b-6031-4438-b32a-c0d721d4870d-config-data" (OuterVolumeSpecName: "config-data") pod "62993e1b-6031-4438-b32a-c0d721d4870d" (UID: "62993e1b-6031-4438-b32a-c0d721d4870d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:30:42 crc kubenswrapper[4861]: I0219 13:30:42.966875 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-zc7qs"
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.015942 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62993e1b-6031-4438-b32a-c0d721d4870d-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.015975 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62993e1b-6031-4438-b32a-c0d721d4870d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.015986 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p22sj\" (UniqueName: \"kubernetes.io/projected/62993e1b-6031-4438-b32a-c0d721d4870d-kube-api-access-p22sj\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.015995 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62993e1b-6031-4438-b32a-c0d721d4870d-logs\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.016003 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62993e1b-6031-4438-b32a-c0d721d4870d-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.117661 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cfab12c1-cdb5-415f-8290-4d057a940b1a-db-sync-config-data\") pod \"cfab12c1-cdb5-415f-8290-4d057a940b1a\" (UID: \"cfab12c1-cdb5-415f-8290-4d057a940b1a\") "
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.118009 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qphzw\" (UniqueName: \"kubernetes.io/projected/cfab12c1-cdb5-415f-8290-4d057a940b1a-kube-api-access-qphzw\") pod \"cfab12c1-cdb5-415f-8290-4d057a940b1a\" (UID: \"cfab12c1-cdb5-415f-8290-4d057a940b1a\") "
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.118037 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cfab12c1-cdb5-415f-8290-4d057a940b1a-etc-machine-id\") pod \"cfab12c1-cdb5-415f-8290-4d057a940b1a\" (UID: \"cfab12c1-cdb5-415f-8290-4d057a940b1a\") "
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.118085 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfab12c1-cdb5-415f-8290-4d057a940b1a-scripts\") pod \"cfab12c1-cdb5-415f-8290-4d057a940b1a\" (UID: \"cfab12c1-cdb5-415f-8290-4d057a940b1a\") "
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.118215 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfab12c1-cdb5-415f-8290-4d057a940b1a-combined-ca-bundle\") pod \"cfab12c1-cdb5-415f-8290-4d057a940b1a\" (UID: \"cfab12c1-cdb5-415f-8290-4d057a940b1a\") "
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.118243 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfab12c1-cdb5-415f-8290-4d057a940b1a-config-data\") pod \"cfab12c1-cdb5-415f-8290-4d057a940b1a\" (UID: \"cfab12c1-cdb5-415f-8290-4d057a940b1a\") "
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.118361 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfab12c1-cdb5-415f-8290-4d057a940b1a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "cfab12c1-cdb5-415f-8290-4d057a940b1a" (UID: "cfab12c1-cdb5-415f-8290-4d057a940b1a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.118748 4861 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cfab12c1-cdb5-415f-8290-4d057a940b1a-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.123680 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfab12c1-cdb5-415f-8290-4d057a940b1a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "cfab12c1-cdb5-415f-8290-4d057a940b1a" (UID: "cfab12c1-cdb5-415f-8290-4d057a940b1a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.123717 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfab12c1-cdb5-415f-8290-4d057a940b1a-scripts" (OuterVolumeSpecName: "scripts") pod "cfab12c1-cdb5-415f-8290-4d057a940b1a" (UID: "cfab12c1-cdb5-415f-8290-4d057a940b1a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.128707 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfab12c1-cdb5-415f-8290-4d057a940b1a-kube-api-access-qphzw" (OuterVolumeSpecName: "kube-api-access-qphzw") pod "cfab12c1-cdb5-415f-8290-4d057a940b1a" (UID: "cfab12c1-cdb5-415f-8290-4d057a940b1a"). InnerVolumeSpecName "kube-api-access-qphzw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.158363 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfab12c1-cdb5-415f-8290-4d057a940b1a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cfab12c1-cdb5-415f-8290-4d057a940b1a" (UID: "cfab12c1-cdb5-415f-8290-4d057a940b1a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.178643 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfab12c1-cdb5-415f-8290-4d057a940b1a-config-data" (OuterVolumeSpecName: "config-data") pod "cfab12c1-cdb5-415f-8290-4d057a940b1a" (UID: "cfab12c1-cdb5-415f-8290-4d057a940b1a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.220792 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfab12c1-cdb5-415f-8290-4d057a940b1a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.220845 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfab12c1-cdb5-415f-8290-4d057a940b1a-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.220864 4861 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cfab12c1-cdb5-415f-8290-4d057a940b1a-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.220882 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qphzw\" (UniqueName: \"kubernetes.io/projected/cfab12c1-cdb5-415f-8290-4d057a940b1a-kube-api-access-qphzw\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.220901 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfab12c1-cdb5-415f-8290-4d057a940b1a-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.581520 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68c66cf485-7rs8h" event={"ID":"62993e1b-6031-4438-b32a-c0d721d4870d","Type":"ContainerDied","Data":"da3e479e4b8fd6b70ffeedd244dc885a9bc44a43869c5087ca175323d8d30f20"}
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.581577 4861 scope.go:117] "RemoveContainer" containerID="4855fa7c43e4f9dfdd4e777e053cd3d6e2c5aab0ca1dc100e0dc0a778340f493"
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.581673 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-68c66cf485-7rs8h"
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.646746 4861 generic.go:334] "Generic (PLEG): container finished" podID="afbe9acc-8cc5-48b6-9515-61da01b73fcd" containerID="8496a003cbf7c04c59fcce77fc996ee8f099dbeda93aff2ceacf2742e81a4b0f" exitCode=0
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.646791 4861 generic.go:334] "Generic (PLEG): container finished" podID="afbe9acc-8cc5-48b6-9515-61da01b73fcd" containerID="4b66180c1790c564b6ebbc0a6a6ada0b48520072f168d8e7342eb13c56b83f78" exitCode=2
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.646804 4861 generic.go:334] "Generic (PLEG): container finished" podID="afbe9acc-8cc5-48b6-9515-61da01b73fcd" containerID="7e849cd5e5366538157296d1d75c1308f3ed372130d98238cf173c54acc5b242" exitCode=0
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.646893 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afbe9acc-8cc5-48b6-9515-61da01b73fcd","Type":"ContainerDied","Data":"8496a003cbf7c04c59fcce77fc996ee8f099dbeda93aff2ceacf2742e81a4b0f"}
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.646932 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afbe9acc-8cc5-48b6-9515-61da01b73fcd","Type":"ContainerDied","Data":"4b66180c1790c564b6ebbc0a6a6ada0b48520072f168d8e7342eb13c56b83f78"}
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.646947 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afbe9acc-8cc5-48b6-9515-61da01b73fcd","Type":"ContainerDied","Data":"7e849cd5e5366538157296d1d75c1308f3ed372130d98238cf173c54acc5b242"}
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.647061 4861 scope.go:117] "RemoveContainer" containerID="0c560ae3f3663266eadd86c691e47c239911f118809b5d64249fa836e184e7b6"
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.650793 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68bc8f6695-sx7qd" event={"ID":"9a1a6b8e-dff3-4107-8742-33a404b1b737","Type":"ContainerDied","Data":"cebadd93439d33aa97930921e35382236b63cd6cd6c05a21fef5f9718b2f9131"}
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.650827 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68bc8f6695-sx7qd"
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.652309 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-zc7qs" event={"ID":"cfab12c1-cdb5-415f-8290-4d057a940b1a","Type":"ContainerDied","Data":"35fe176f399f7d66e30e462ef4e30ddebba4b1298804b0134aa5d6226f0408e7"}
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.652336 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35fe176f399f7d66e30e462ef4e30ddebba4b1298804b0134aa5d6226f0408e7"
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.652391 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-zc7qs"
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.661842 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-68c66cf485-7rs8h"]
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.689073 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-68c66cf485-7rs8h"]
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.700089 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68bc8f6695-sx7qd"]
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.710062 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68bc8f6695-sx7qd"]
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.715763 4861 scope.go:117] "RemoveContainer" containerID="1568dacd1144b4f34ab30e12d3fb890b1f9d1c56ab77ff2a6316b084d0bb7730"
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.758550 4861 scope.go:117] "RemoveContainer" containerID="45531f344cc2dfeacacd52d4bea2758eac9c1bdd2dd3bef282ac64096bfe5f59"
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.906310 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 19 13:30:43 crc kubenswrapper[4861]: E0219 13:30:43.906975 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a1a6b8e-dff3-4107-8742-33a404b1b737" containerName="dnsmasq-dns"
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.906997 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a1a6b8e-dff3-4107-8742-33a404b1b737" containerName="dnsmasq-dns"
Feb 19 13:30:43 crc kubenswrapper[4861]: E0219 13:30:43.907018 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a1a6b8e-dff3-4107-8742-33a404b1b737" containerName="init"
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.907026 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a1a6b8e-dff3-4107-8742-33a404b1b737" containerName="init"
Feb 19 13:30:43 crc kubenswrapper[4861]: E0219 13:30:43.907042 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfab12c1-cdb5-415f-8290-4d057a940b1a" containerName="cinder-db-sync"
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.907050 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfab12c1-cdb5-415f-8290-4d057a940b1a" containerName="cinder-db-sync"
Feb 19 13:30:43 crc kubenswrapper[4861]: E0219 13:30:43.907073 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62993e1b-6031-4438-b32a-c0d721d4870d" containerName="barbican-worker-log"
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.907081 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="62993e1b-6031-4438-b32a-c0d721d4870d" containerName="barbican-worker-log"
Feb 19 13:30:43 crc kubenswrapper[4861]: E0219 13:30:43.907093 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05baff31-913d-47c0-92c1-0ec2085039ba" containerName="barbican-keystone-listener"
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.907102 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="05baff31-913d-47c0-92c1-0ec2085039ba" containerName="barbican-keystone-listener"
Feb 19 13:30:43 crc kubenswrapper[4861]: E0219 13:30:43.907115 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62993e1b-6031-4438-b32a-c0d721d4870d" containerName="barbican-worker"
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.907123 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="62993e1b-6031-4438-b32a-c0d721d4870d" containerName="barbican-worker"
Feb 19 13:30:43 crc kubenswrapper[4861]: E0219 13:30:43.907136 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05baff31-913d-47c0-92c1-0ec2085039ba" containerName="barbican-keystone-listener-log"
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.907144 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="05baff31-913d-47c0-92c1-0ec2085039ba" containerName="barbican-keystone-listener-log"
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.907340 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfab12c1-cdb5-415f-8290-4d057a940b1a" containerName="cinder-db-sync"
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.907360 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="05baff31-913d-47c0-92c1-0ec2085039ba" containerName="barbican-keystone-listener"
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.907376 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a1a6b8e-dff3-4107-8742-33a404b1b737" containerName="dnsmasq-dns"
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.907386 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="62993e1b-6031-4438-b32a-c0d721d4870d" containerName="barbican-worker"
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.907406 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="05baff31-913d-47c0-92c1-0ec2085039ba" containerName="barbican-keystone-listener-log"
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.909558 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="62993e1b-6031-4438-b32a-c0d721d4870d" containerName="barbican-worker-log"
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.910727 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.920831 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.921018 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-npmhq"
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.921151 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.921263 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Feb 19 13:30:43 crc kubenswrapper[4861]: I0219 13:30:43.933062 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.004302 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05baff31-913d-47c0-92c1-0ec2085039ba" path="/var/lib/kubelet/pods/05baff31-913d-47c0-92c1-0ec2085039ba/volumes"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.005216 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62993e1b-6031-4438-b32a-c0d721d4870d" path="/var/lib/kubelet/pods/62993e1b-6031-4438-b32a-c0d721d4870d/volumes"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.005926 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a1a6b8e-dff3-4107-8742-33a404b1b737" path="/var/lib/kubelet/pods/9a1a6b8e-dff3-4107-8742-33a404b1b737/volumes"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.040832 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8fe74cbc-610e-4331-87db-10858d03af8d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8fe74cbc-610e-4331-87db-10858d03af8d\") " pod="openstack/cinder-scheduler-0"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.040901 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8fe74cbc-610e-4331-87db-10858d03af8d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8fe74cbc-610e-4331-87db-10858d03af8d\") " pod="openstack/cinder-scheduler-0"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.040936 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fe74cbc-610e-4331-87db-10858d03af8d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8fe74cbc-610e-4331-87db-10858d03af8d\") " pod="openstack/cinder-scheduler-0"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.041030 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fe74cbc-610e-4331-87db-10858d03af8d-scripts\") pod \"cinder-scheduler-0\" (UID: \"8fe74cbc-610e-4331-87db-10858d03af8d\") " pod="openstack/cinder-scheduler-0"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.041060 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fe74cbc-610e-4331-87db-10858d03af8d-config-data\") pod \"cinder-scheduler-0\" (UID: \"8fe74cbc-610e-4331-87db-10858d03af8d\") " pod="openstack/cinder-scheduler-0"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.041115 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npjgb\" (UniqueName: \"kubernetes.io/projected/8fe74cbc-610e-4331-87db-10858d03af8d-kube-api-access-npjgb\") pod \"cinder-scheduler-0\" (UID: \"8fe74cbc-610e-4331-87db-10858d03af8d\") " pod="openstack/cinder-scheduler-0"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.054346 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c69c79c7f-gz67q"]
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.055790 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c69c79c7f-gz67q"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.072381 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c69c79c7f-gz67q"]
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.143943 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npjgb\" (UniqueName: \"kubernetes.io/projected/8fe74cbc-610e-4331-87db-10858d03af8d-kube-api-access-npjgb\") pod \"cinder-scheduler-0\" (UID: \"8fe74cbc-610e-4331-87db-10858d03af8d\") " pod="openstack/cinder-scheduler-0"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.144023 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95cf9059-55e3-41b0-8ff3-5cd158fcd643-dns-swift-storage-0\") pod \"dnsmasq-dns-6c69c79c7f-gz67q\" (UID: \"95cf9059-55e3-41b0-8ff3-5cd158fcd643\") " pod="openstack/dnsmasq-dns-6c69c79c7f-gz67q"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.144096 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8fe74cbc-610e-4331-87db-10858d03af8d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8fe74cbc-610e-4331-87db-10858d03af8d\") " pod="openstack/cinder-scheduler-0"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.144121 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8fe74cbc-610e-4331-87db-10858d03af8d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8fe74cbc-610e-4331-87db-10858d03af8d\") " pod="openstack/cinder-scheduler-0"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.144144 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fe74cbc-610e-4331-87db-10858d03af8d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8fe74cbc-610e-4331-87db-10858d03af8d\") " pod="openstack/cinder-scheduler-0"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.144184 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95cf9059-55e3-41b0-8ff3-5cd158fcd643-ovsdbserver-sb\") pod \"dnsmasq-dns-6c69c79c7f-gz67q\" (UID: \"95cf9059-55e3-41b0-8ff3-5cd158fcd643\") " pod="openstack/dnsmasq-dns-6c69c79c7f-gz67q"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.144215 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95cf9059-55e3-41b0-8ff3-5cd158fcd643-dns-svc\") pod \"dnsmasq-dns-6c69c79c7f-gz67q\" (UID: \"95cf9059-55e3-41b0-8ff3-5cd158fcd643\") " pod="openstack/dnsmasq-dns-6c69c79c7f-gz67q"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.144235 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fe74cbc-610e-4331-87db-10858d03af8d-scripts\") pod \"cinder-scheduler-0\" (UID: \"8fe74cbc-610e-4331-87db-10858d03af8d\") " pod="openstack/cinder-scheduler-0"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.144253 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w726h\" (UniqueName: \"kubernetes.io/projected/95cf9059-55e3-41b0-8ff3-5cd158fcd643-kube-api-access-w726h\") pod \"dnsmasq-dns-6c69c79c7f-gz67q\" (UID: \"95cf9059-55e3-41b0-8ff3-5cd158fcd643\") " pod="openstack/dnsmasq-dns-6c69c79c7f-gz67q"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.144274 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95cf9059-55e3-41b0-8ff3-5cd158fcd643-ovsdbserver-nb\") pod \"dnsmasq-dns-6c69c79c7f-gz67q\" (UID: \"95cf9059-55e3-41b0-8ff3-5cd158fcd643\") " pod="openstack/dnsmasq-dns-6c69c79c7f-gz67q"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.144290 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fe74cbc-610e-4331-87db-10858d03af8d-config-data\") pod \"cinder-scheduler-0\" (UID: \"8fe74cbc-610e-4331-87db-10858d03af8d\") " pod="openstack/cinder-scheduler-0"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.144315 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95cf9059-55e3-41b0-8ff3-5cd158fcd643-config\") pod \"dnsmasq-dns-6c69c79c7f-gz67q\" (UID: \"95cf9059-55e3-41b0-8ff3-5cd158fcd643\") " pod="openstack/dnsmasq-dns-6c69c79c7f-gz67q"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.146045 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8fe74cbc-610e-4331-87db-10858d03af8d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8fe74cbc-610e-4331-87db-10858d03af8d\") " pod="openstack/cinder-scheduler-0"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.151205 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fe74cbc-610e-4331-87db-10858d03af8d-scripts\") pod \"cinder-scheduler-0\" (UID: \"8fe74cbc-610e-4331-87db-10858d03af8d\") " pod="openstack/cinder-scheduler-0"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.151749 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8fe74cbc-610e-4331-87db-10858d03af8d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8fe74cbc-610e-4331-87db-10858d03af8d\") " pod="openstack/cinder-scheduler-0"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.161017 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fe74cbc-610e-4331-87db-10858d03af8d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8fe74cbc-610e-4331-87db-10858d03af8d\") " pod="openstack/cinder-scheduler-0"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.161665 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fe74cbc-610e-4331-87db-10858d03af8d-config-data\") pod \"cinder-scheduler-0\" (UID: \"8fe74cbc-610e-4331-87db-10858d03af8d\") " pod="openstack/cinder-scheduler-0"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.178187 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.179953 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.184253 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.184963 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npjgb\" (UniqueName: \"kubernetes.io/projected/8fe74cbc-610e-4331-87db-10858d03af8d-kube-api-access-npjgb\") pod \"cinder-scheduler-0\" (UID: \"8fe74cbc-610e-4331-87db-10858d03af8d\") " pod="openstack/cinder-scheduler-0"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.218213 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.225496 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.246833 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95cf9059-55e3-41b0-8ff3-5cd158fcd643-ovsdbserver-sb\") pod \"dnsmasq-dns-6c69c79c7f-gz67q\" (UID: \"95cf9059-55e3-41b0-8ff3-5cd158fcd643\") " pod="openstack/dnsmasq-dns-6c69c79c7f-gz67q"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.246883 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95cf9059-55e3-41b0-8ff3-5cd158fcd643-dns-svc\") pod \"dnsmasq-dns-6c69c79c7f-gz67q\" (UID: \"95cf9059-55e3-41b0-8ff3-5cd158fcd643\") " pod="openstack/dnsmasq-dns-6c69c79c7f-gz67q"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.246912 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w726h\" (UniqueName: \"kubernetes.io/projected/95cf9059-55e3-41b0-8ff3-5cd158fcd643-kube-api-access-w726h\") pod \"dnsmasq-dns-6c69c79c7f-gz67q\" (UID: \"95cf9059-55e3-41b0-8ff3-5cd158fcd643\") " pod="openstack/dnsmasq-dns-6c69c79c7f-gz67q"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.246933 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95cf9059-55e3-41b0-8ff3-5cd158fcd643-ovsdbserver-nb\") pod \"dnsmasq-dns-6c69c79c7f-gz67q\" (UID: \"95cf9059-55e3-41b0-8ff3-5cd158fcd643\") " pod="openstack/dnsmasq-dns-6c69c79c7f-gz67q"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.246962 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95cf9059-55e3-41b0-8ff3-5cd158fcd643-config\") pod \"dnsmasq-dns-6c69c79c7f-gz67q\" (UID: \"95cf9059-55e3-41b0-8ff3-5cd158fcd643\") " pod="openstack/dnsmasq-dns-6c69c79c7f-gz67q"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.247009 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95cf9059-55e3-41b0-8ff3-5cd158fcd643-dns-swift-storage-0\") pod \"dnsmasq-dns-6c69c79c7f-gz67q\" (UID: \"95cf9059-55e3-41b0-8ff3-5cd158fcd643\") " pod="openstack/dnsmasq-dns-6c69c79c7f-gz67q"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.247920 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95cf9059-55e3-41b0-8ff3-5cd158fcd643-dns-swift-storage-0\") pod \"dnsmasq-dns-6c69c79c7f-gz67q\" (UID: \"95cf9059-55e3-41b0-8ff3-5cd158fcd643\") " pod="openstack/dnsmasq-dns-6c69c79c7f-gz67q"
Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.248474 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95cf9059-55e3-41b0-8ff3-5cd158fcd643-ovsdbserver-sb\") pod \"dnsmasq-dns-6c69c79c7f-gz67q\" (UID: \"95cf9059-55e3-41b0-8ff3-5cd158fcd643\") " 
pod="openstack/dnsmasq-dns-6c69c79c7f-gz67q" Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.248963 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95cf9059-55e3-41b0-8ff3-5cd158fcd643-dns-svc\") pod \"dnsmasq-dns-6c69c79c7f-gz67q\" (UID: \"95cf9059-55e3-41b0-8ff3-5cd158fcd643\") " pod="openstack/dnsmasq-dns-6c69c79c7f-gz67q" Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.249715 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95cf9059-55e3-41b0-8ff3-5cd158fcd643-ovsdbserver-nb\") pod \"dnsmasq-dns-6c69c79c7f-gz67q\" (UID: \"95cf9059-55e3-41b0-8ff3-5cd158fcd643\") " pod="openstack/dnsmasq-dns-6c69c79c7f-gz67q" Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.250338 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95cf9059-55e3-41b0-8ff3-5cd158fcd643-config\") pod \"dnsmasq-dns-6c69c79c7f-gz67q\" (UID: \"95cf9059-55e3-41b0-8ff3-5cd158fcd643\") " pod="openstack/dnsmasq-dns-6c69c79c7f-gz67q" Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.278148 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w726h\" (UniqueName: \"kubernetes.io/projected/95cf9059-55e3-41b0-8ff3-5cd158fcd643-kube-api-access-w726h\") pod \"dnsmasq-dns-6c69c79c7f-gz67q\" (UID: \"95cf9059-55e3-41b0-8ff3-5cd158fcd643\") " pod="openstack/dnsmasq-dns-6c69c79c7f-gz67q" Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.348449 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-config-data\") pod \"cinder-api-0\" (UID: \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\") " pod="openstack/cinder-api-0" Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.348537 4861 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-logs\") pod \"cinder-api-0\" (UID: \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\") " pod="openstack/cinder-api-0" Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.348588 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-scripts\") pod \"cinder-api-0\" (UID: \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\") " pod="openstack/cinder-api-0" Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.348657 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69qv5\" (UniqueName: \"kubernetes.io/projected/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-kube-api-access-69qv5\") pod \"cinder-api-0\" (UID: \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\") " pod="openstack/cinder-api-0" Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.348709 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\") " pod="openstack/cinder-api-0" Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.348765 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\") " pod="openstack/cinder-api-0" Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.348810 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-config-data-custom\") pod \"cinder-api-0\" (UID: \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\") " pod="openstack/cinder-api-0" Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.385640 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c69c79c7f-gz67q" Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.450125 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\") " pod="openstack/cinder-api-0" Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.450193 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-config-data-custom\") pod \"cinder-api-0\" (UID: \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\") " pod="openstack/cinder-api-0" Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.450220 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-config-data\") pod \"cinder-api-0\" (UID: \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\") " pod="openstack/cinder-api-0" Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.450253 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-logs\") pod \"cinder-api-0\" (UID: \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\") " pod="openstack/cinder-api-0" Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.450286 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-scripts\") pod \"cinder-api-0\" (UID: \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\") " pod="openstack/cinder-api-0" Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.450333 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69qv5\" (UniqueName: \"kubernetes.io/projected/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-kube-api-access-69qv5\") pod \"cinder-api-0\" (UID: \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\") " pod="openstack/cinder-api-0" Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.450367 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\") " pod="openstack/cinder-api-0" Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.450752 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\") " pod="openstack/cinder-api-0" Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.453194 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-logs\") pod \"cinder-api-0\" (UID: \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\") " pod="openstack/cinder-api-0" Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.456307 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-config-data\") pod \"cinder-api-0\" (UID: \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\") " pod="openstack/cinder-api-0" Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.456535 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\") " pod="openstack/cinder-api-0" Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.456685 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-scripts\") pod \"cinder-api-0\" (UID: \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\") " pod="openstack/cinder-api-0" Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.461778 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-config-data-custom\") pod \"cinder-api-0\" (UID: \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\") " pod="openstack/cinder-api-0" Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.470227 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69qv5\" (UniqueName: \"kubernetes.io/projected/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-kube-api-access-69qv5\") pod \"cinder-api-0\" (UID: \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\") " pod="openstack/cinder-api-0" Feb 19 13:30:44 crc kubenswrapper[4861]: I0219 13:30:44.547236 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 13:30:45 crc kubenswrapper[4861]: I0219 13:30:45.598849 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c69c79c7f-gz67q"] Feb 19 13:30:45 crc kubenswrapper[4861]: I0219 13:30:45.605143 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 13:30:45 crc kubenswrapper[4861]: I0219 13:30:45.690732 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c69c79c7f-gz67q" event={"ID":"95cf9059-55e3-41b0-8ff3-5cd158fcd643","Type":"ContainerStarted","Data":"bcd6616506ae055b118485b1da843b97c899b6de8d0a865d7015e84b2cc6deb8"} Feb 19 13:30:45 crc kubenswrapper[4861]: I0219 13:30:45.691453 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8fe74cbc-610e-4331-87db-10858d03af8d","Type":"ContainerStarted","Data":"70c667bbdc323d489993f680848e7e6b785fea20e920a01ab4b00d33aeb8617a"} Feb 19 13:30:45 crc kubenswrapper[4861]: I0219 13:30:45.693681 4861 generic.go:334] "Generic (PLEG): container finished" podID="afbe9acc-8cc5-48b6-9515-61da01b73fcd" containerID="563e6034bf68d39843ed6d4c780be5dadc7fac915cb11be14284cd0c542973b1" exitCode=0 Feb 19 13:30:45 crc kubenswrapper[4861]: I0219 13:30:45.693723 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afbe9acc-8cc5-48b6-9515-61da01b73fcd","Type":"ContainerDied","Data":"563e6034bf68d39843ed6d4c780be5dadc7fac915cb11be14284cd0c542973b1"} Feb 19 13:30:45 crc kubenswrapper[4861]: I0219 13:30:45.723100 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 13:30:45 crc kubenswrapper[4861]: I0219 13:30:45.874449 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-d6c467cc6-ng4wh" Feb 19 13:30:45 crc kubenswrapper[4861]: I0219 13:30:45.905171 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:30:45 crc kubenswrapper[4861]: I0219 13:30:45.989159 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/afbe9acc-8cc5-48b6-9515-61da01b73fcd-sg-core-conf-yaml\") pod \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\" (UID: \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\") " Feb 19 13:30:45 crc kubenswrapper[4861]: I0219 13:30:45.989305 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvssf\" (UniqueName: \"kubernetes.io/projected/afbe9acc-8cc5-48b6-9515-61da01b73fcd-kube-api-access-pvssf\") pod \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\" (UID: \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\") " Feb 19 13:30:45 crc kubenswrapper[4861]: I0219 13:30:45.989457 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afbe9acc-8cc5-48b6-9515-61da01b73fcd-scripts\") pod \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\" (UID: \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\") " Feb 19 13:30:45 crc kubenswrapper[4861]: I0219 13:30:45.989486 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afbe9acc-8cc5-48b6-9515-61da01b73fcd-log-httpd\") pod \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\" (UID: \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\") " Feb 19 13:30:45 crc kubenswrapper[4861]: I0219 13:30:45.989520 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afbe9acc-8cc5-48b6-9515-61da01b73fcd-combined-ca-bundle\") pod \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\" (UID: \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\") " Feb 19 13:30:45 crc kubenswrapper[4861]: I0219 13:30:45.989555 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/afbe9acc-8cc5-48b6-9515-61da01b73fcd-config-data\") pod \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\" (UID: \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\") " Feb 19 13:30:45 crc kubenswrapper[4861]: I0219 13:30:45.989604 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afbe9acc-8cc5-48b6-9515-61da01b73fcd-run-httpd\") pod \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\" (UID: \"afbe9acc-8cc5-48b6-9515-61da01b73fcd\") " Feb 19 13:30:45 crc kubenswrapper[4861]: I0219 13:30:45.990771 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afbe9acc-8cc5-48b6-9515-61da01b73fcd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "afbe9acc-8cc5-48b6-9515-61da01b73fcd" (UID: "afbe9acc-8cc5-48b6-9515-61da01b73fcd"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:30:45 crc kubenswrapper[4861]: I0219 13:30:45.991074 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afbe9acc-8cc5-48b6-9515-61da01b73fcd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "afbe9acc-8cc5-48b6-9515-61da01b73fcd" (UID: "afbe9acc-8cc5-48b6-9515-61da01b73fcd"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:30:45 crc kubenswrapper[4861]: I0219 13:30:45.992690 4861 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afbe9acc-8cc5-48b6-9515-61da01b73fcd-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:45 crc kubenswrapper[4861]: I0219 13:30:45.992722 4861 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afbe9acc-8cc5-48b6-9515-61da01b73fcd-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:45 crc kubenswrapper[4861]: I0219 13:30:45.996623 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afbe9acc-8cc5-48b6-9515-61da01b73fcd-kube-api-access-pvssf" (OuterVolumeSpecName: "kube-api-access-pvssf") pod "afbe9acc-8cc5-48b6-9515-61da01b73fcd" (UID: "afbe9acc-8cc5-48b6-9515-61da01b73fcd"). InnerVolumeSpecName "kube-api-access-pvssf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:30:45 crc kubenswrapper[4861]: I0219 13:30:45.996715 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afbe9acc-8cc5-48b6-9515-61da01b73fcd-scripts" (OuterVolumeSpecName: "scripts") pod "afbe9acc-8cc5-48b6-9515-61da01b73fcd" (UID: "afbe9acc-8cc5-48b6-9515-61da01b73fcd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.027039 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afbe9acc-8cc5-48b6-9515-61da01b73fcd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "afbe9acc-8cc5-48b6-9515-61da01b73fcd" (UID: "afbe9acc-8cc5-48b6-9515-61da01b73fcd"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.094678 4861 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/afbe9acc-8cc5-48b6-9515-61da01b73fcd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.094745 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvssf\" (UniqueName: \"kubernetes.io/projected/afbe9acc-8cc5-48b6-9515-61da01b73fcd-kube-api-access-pvssf\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.094762 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afbe9acc-8cc5-48b6-9515-61da01b73fcd-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.100078 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afbe9acc-8cc5-48b6-9515-61da01b73fcd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afbe9acc-8cc5-48b6-9515-61da01b73fcd" (UID: "afbe9acc-8cc5-48b6-9515-61da01b73fcd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.157496 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-d6c467cc6-ng4wh" Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.178196 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afbe9acc-8cc5-48b6-9515-61da01b73fcd-config-data" (OuterVolumeSpecName: "config-data") pod "afbe9acc-8cc5-48b6-9515-61da01b73fcd" (UID: "afbe9acc-8cc5-48b6-9515-61da01b73fcd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.196191 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afbe9acc-8cc5-48b6-9515-61da01b73fcd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.196223 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afbe9acc-8cc5-48b6-9515-61da01b73fcd-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.237451 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5b99dd6c84-fwhqt"] Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.237914 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5b99dd6c84-fwhqt" podUID="6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0" containerName="barbican-api-log" containerID="cri-o://2e1401071d848d309852eb1fd21f3ce9b4e3618b92c660db367c88e647969d7f" gracePeriod=30 Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.238185 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5b99dd6c84-fwhqt" podUID="6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0" containerName="barbican-api" containerID="cri-o://4b82ebe957f9695e5600d639761e63ec3f73e659615e4708f6fd9df6e83386d0" gracePeriod=30 Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.247996 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5b99dd6c84-fwhqt" podUID="6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": EOF" Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.560206 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 
13:30:46.734612 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afbe9acc-8cc5-48b6-9515-61da01b73fcd","Type":"ContainerDied","Data":"e87bd16e408a91639af6f8a779565eea69e2e46eb7855b4596226435cad594cc"} Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.734947 4861 scope.go:117] "RemoveContainer" containerID="8496a003cbf7c04c59fcce77fc996ee8f099dbeda93aff2ceacf2742e81a4b0f" Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.735056 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.743359 4861 generic.go:334] "Generic (PLEG): container finished" podID="95cf9059-55e3-41b0-8ff3-5cd158fcd643" containerID="c59f9b01d824664eda29c257ceee5d17f89433d820bd0b0553e1c744bac8b8e6" exitCode=0 Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.743605 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c69c79c7f-gz67q" event={"ID":"95cf9059-55e3-41b0-8ff3-5cd158fcd643","Type":"ContainerDied","Data":"c59f9b01d824664eda29c257ceee5d17f89433d820bd0b0553e1c744bac8b8e6"} Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.753081 4861 generic.go:334] "Generic (PLEG): container finished" podID="6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0" containerID="2e1401071d848d309852eb1fd21f3ce9b4e3618b92c660db367c88e647969d7f" exitCode=143 Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.753142 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b99dd6c84-fwhqt" event={"ID":"6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0","Type":"ContainerDied","Data":"2e1401071d848d309852eb1fd21f3ce9b4e3618b92c660db367c88e647969d7f"} Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.762861 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a","Type":"ContainerStarted","Data":"cbc496a448fc72cc7cfaf8955211dcef40b645017cd615c9efe322768d3467ea"} Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.814356 4861 scope.go:117] "RemoveContainer" containerID="4b66180c1790c564b6ebbc0a6a6ada0b48520072f168d8e7342eb13c56b83f78" Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.832699 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.841584 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.859150 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:30:46 crc kubenswrapper[4861]: E0219 13:30:46.859646 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbe9acc-8cc5-48b6-9515-61da01b73fcd" containerName="proxy-httpd" Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.859664 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbe9acc-8cc5-48b6-9515-61da01b73fcd" containerName="proxy-httpd" Feb 19 13:30:46 crc kubenswrapper[4861]: E0219 13:30:46.859681 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbe9acc-8cc5-48b6-9515-61da01b73fcd" containerName="ceilometer-central-agent" Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.859688 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbe9acc-8cc5-48b6-9515-61da01b73fcd" containerName="ceilometer-central-agent" Feb 19 13:30:46 crc kubenswrapper[4861]: E0219 13:30:46.859703 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbe9acc-8cc5-48b6-9515-61da01b73fcd" containerName="ceilometer-notification-agent" Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.859708 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbe9acc-8cc5-48b6-9515-61da01b73fcd" 
containerName="ceilometer-notification-agent" Feb 19 13:30:46 crc kubenswrapper[4861]: E0219 13:30:46.859725 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbe9acc-8cc5-48b6-9515-61da01b73fcd" containerName="sg-core" Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.859731 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbe9acc-8cc5-48b6-9515-61da01b73fcd" containerName="sg-core" Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.859906 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbe9acc-8cc5-48b6-9515-61da01b73fcd" containerName="ceilometer-notification-agent" Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.859921 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbe9acc-8cc5-48b6-9515-61da01b73fcd" containerName="sg-core" Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.859936 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbe9acc-8cc5-48b6-9515-61da01b73fcd" containerName="ceilometer-central-agent" Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.859950 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbe9acc-8cc5-48b6-9515-61da01b73fcd" containerName="proxy-httpd" Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.864548 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.866983 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.868306 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.868316 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.899871 4861 scope.go:117] "RemoveContainer" containerID="563e6034bf68d39843ed6d4c780be5dadc7fac915cb11be14284cd0c542973b1" Feb 19 13:30:46 crc kubenswrapper[4861]: I0219 13:30:46.926612 4861 scope.go:117] "RemoveContainer" containerID="7e849cd5e5366538157296d1d75c1308f3ed372130d98238cf173c54acc5b242" Feb 19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.018229 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee4275ce-a2c5-4129-858f-48409cb928d7-run-httpd\") pod \"ceilometer-0\" (UID: \"ee4275ce-a2c5-4129-858f-48409cb928d7\") " pod="openstack/ceilometer-0" Feb 19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.018288 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee4275ce-a2c5-4129-858f-48409cb928d7-scripts\") pod \"ceilometer-0\" (UID: \"ee4275ce-a2c5-4129-858f-48409cb928d7\") " pod="openstack/ceilometer-0" Feb 19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.018314 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee4275ce-a2c5-4129-858f-48409cb928d7-log-httpd\") pod \"ceilometer-0\" (UID: \"ee4275ce-a2c5-4129-858f-48409cb928d7\") " pod="openstack/ceilometer-0" Feb 19 13:30:47 
crc kubenswrapper[4861]: I0219 13:30:47.018328 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee4275ce-a2c5-4129-858f-48409cb928d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ee4275ce-a2c5-4129-858f-48409cb928d7\") " pod="openstack/ceilometer-0" Feb 19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.018398 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4275ce-a2c5-4129-858f-48409cb928d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ee4275ce-a2c5-4129-858f-48409cb928d7\") " pod="openstack/ceilometer-0" Feb 19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.018451 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjkpg\" (UniqueName: \"kubernetes.io/projected/ee4275ce-a2c5-4129-858f-48409cb928d7-kube-api-access-mjkpg\") pod \"ceilometer-0\" (UID: \"ee4275ce-a2c5-4129-858f-48409cb928d7\") " pod="openstack/ceilometer-0" Feb 19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.018488 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee4275ce-a2c5-4129-858f-48409cb928d7-config-data\") pod \"ceilometer-0\" (UID: \"ee4275ce-a2c5-4129-858f-48409cb928d7\") " pod="openstack/ceilometer-0" Feb 19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.120015 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee4275ce-a2c5-4129-858f-48409cb928d7-run-httpd\") pod \"ceilometer-0\" (UID: \"ee4275ce-a2c5-4129-858f-48409cb928d7\") " pod="openstack/ceilometer-0" Feb 19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.120135 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/ee4275ce-a2c5-4129-858f-48409cb928d7-scripts\") pod \"ceilometer-0\" (UID: \"ee4275ce-a2c5-4129-858f-48409cb928d7\") " pod="openstack/ceilometer-0" Feb 19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.120873 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee4275ce-a2c5-4129-858f-48409cb928d7-log-httpd\") pod \"ceilometer-0\" (UID: \"ee4275ce-a2c5-4129-858f-48409cb928d7\") " pod="openstack/ceilometer-0" Feb 19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.120542 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee4275ce-a2c5-4129-858f-48409cb928d7-run-httpd\") pod \"ceilometer-0\" (UID: \"ee4275ce-a2c5-4129-858f-48409cb928d7\") " pod="openstack/ceilometer-0" Feb 19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.120973 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee4275ce-a2c5-4129-858f-48409cb928d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ee4275ce-a2c5-4129-858f-48409cb928d7\") " pod="openstack/ceilometer-0" Feb 19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.121054 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4275ce-a2c5-4129-858f-48409cb928d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ee4275ce-a2c5-4129-858f-48409cb928d7\") " pod="openstack/ceilometer-0" Feb 19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.121179 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjkpg\" (UniqueName: \"kubernetes.io/projected/ee4275ce-a2c5-4129-858f-48409cb928d7-kube-api-access-mjkpg\") pod \"ceilometer-0\" (UID: \"ee4275ce-a2c5-4129-858f-48409cb928d7\") " pod="openstack/ceilometer-0" Feb 19 13:30:47 crc 
kubenswrapper[4861]: I0219 13:30:47.121749 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee4275ce-a2c5-4129-858f-48409cb928d7-config-data\") pod \"ceilometer-0\" (UID: \"ee4275ce-a2c5-4129-858f-48409cb928d7\") " pod="openstack/ceilometer-0" Feb 19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.121899 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee4275ce-a2c5-4129-858f-48409cb928d7-log-httpd\") pod \"ceilometer-0\" (UID: \"ee4275ce-a2c5-4129-858f-48409cb928d7\") " pod="openstack/ceilometer-0" Feb 19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.125097 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee4275ce-a2c5-4129-858f-48409cb928d7-scripts\") pod \"ceilometer-0\" (UID: \"ee4275ce-a2c5-4129-858f-48409cb928d7\") " pod="openstack/ceilometer-0" Feb 19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.125558 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee4275ce-a2c5-4129-858f-48409cb928d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ee4275ce-a2c5-4129-858f-48409cb928d7\") " pod="openstack/ceilometer-0" Feb 19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.126804 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee4275ce-a2c5-4129-858f-48409cb928d7-config-data\") pod \"ceilometer-0\" (UID: \"ee4275ce-a2c5-4129-858f-48409cb928d7\") " pod="openstack/ceilometer-0" Feb 19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.131008 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4275ce-a2c5-4129-858f-48409cb928d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"ee4275ce-a2c5-4129-858f-48409cb928d7\") " pod="openstack/ceilometer-0" Feb 19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.136880 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjkpg\" (UniqueName: \"kubernetes.io/projected/ee4275ce-a2c5-4129-858f-48409cb928d7-kube-api-access-mjkpg\") pod \"ceilometer-0\" (UID: \"ee4275ce-a2c5-4129-858f-48409cb928d7\") " pod="openstack/ceilometer-0" Feb 19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.205319 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.720353 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.805489 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee4275ce-a2c5-4129-858f-48409cb928d7","Type":"ContainerStarted","Data":"c6877cbc3eb0553a8ac2f65907050a2c4622d5c962bde0a9535dcbe0cb66981c"} Feb 19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.807138 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a","Type":"ContainerStarted","Data":"b08318244310bc07c2fdac8035eb3cf117c22c2b34666ccdf97ef1a15589fff9"} Feb 19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.807161 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a","Type":"ContainerStarted","Data":"417d3a4d3982265f6c2a3418f27cd6a183f673fb9f9333334e21c9117e450900"} Feb 19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.807282 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a" containerName="cinder-api-log" containerID="cri-o://b08318244310bc07c2fdac8035eb3cf117c22c2b34666ccdf97ef1a15589fff9" gracePeriod=30 Feb 
19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.807500 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.807524 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a" containerName="cinder-api" containerID="cri-o://417d3a4d3982265f6c2a3418f27cd6a183f673fb9f9333334e21c9117e450900" gracePeriod=30 Feb 19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.850330 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8fe74cbc-610e-4331-87db-10858d03af8d","Type":"ContainerStarted","Data":"e7dae71a45e235fed78e66869bdcb7fb6ead8d73f1b2c3134615b30dea82ddaa"} Feb 19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.897654 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c69c79c7f-gz67q" event={"ID":"95cf9059-55e3-41b0-8ff3-5cd158fcd643","Type":"ContainerStarted","Data":"a094783e5a900406f74c352eec8e844c71c0716d149a980ae2f21d6759c518a9"} Feb 19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.899116 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c69c79c7f-gz67q" Feb 19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.922176 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.922140353 podStartE2EDuration="3.922140353s" podCreationTimestamp="2026-02-19 13:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:30:47.842435457 +0000 UTC m=+1262.503538685" watchObservedRunningTime="2026-02-19 13:30:47.922140353 +0000 UTC m=+1262.583243581" Feb 19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.925744 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-6c69c79c7f-gz67q" podStartSLOduration=3.925737041 podStartE2EDuration="3.925737041s" podCreationTimestamp="2026-02-19 13:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:30:47.919827581 +0000 UTC m=+1262.580930809" watchObservedRunningTime="2026-02-19 13:30:47.925737041 +0000 UTC m=+1262.586840269" Feb 19 13:30:47 crc kubenswrapper[4861]: I0219 13:30:47.989139 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afbe9acc-8cc5-48b6-9515-61da01b73fcd" path="/var/lib/kubelet/pods/afbe9acc-8cc5-48b6-9515-61da01b73fcd/volumes" Feb 19 13:30:48 crc kubenswrapper[4861]: I0219 13:30:48.911533 4861 generic.go:334] "Generic (PLEG): container finished" podID="ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a" containerID="b08318244310bc07c2fdac8035eb3cf117c22c2b34666ccdf97ef1a15589fff9" exitCode=143 Feb 19 13:30:48 crc kubenswrapper[4861]: I0219 13:30:48.911626 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a","Type":"ContainerDied","Data":"b08318244310bc07c2fdac8035eb3cf117c22c2b34666ccdf97ef1a15589fff9"} Feb 19 13:30:48 crc kubenswrapper[4861]: I0219 13:30:48.916655 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8fe74cbc-610e-4331-87db-10858d03af8d","Type":"ContainerStarted","Data":"cdffe44b0c56542d28e03fd2a6612d8d327d01a42e691d5c60eb2ce0448ea7fb"} Feb 19 13:30:48 crc kubenswrapper[4861]: I0219 13:30:48.923309 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee4275ce-a2c5-4129-858f-48409cb928d7","Type":"ContainerStarted","Data":"0ca383abd9d8d7aa0aaec2c634d66bddee4bcc2143ed5e972f87fad39223bd34"} Feb 19 13:30:48 crc kubenswrapper[4861]: I0219 13:30:48.943952 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/cinder-scheduler-0" podStartSLOduration=5.249603922 podStartE2EDuration="5.943935541s" podCreationTimestamp="2026-02-19 13:30:43 +0000 UTC" firstStartedPulling="2026-02-19 13:30:45.619140253 +0000 UTC m=+1260.280243481" lastFinishedPulling="2026-02-19 13:30:46.313471872 +0000 UTC m=+1260.974575100" observedRunningTime="2026-02-19 13:30:48.938990997 +0000 UTC m=+1263.600094225" watchObservedRunningTime="2026-02-19 13:30:48.943935541 +0000 UTC m=+1263.605038769" Feb 19 13:30:49 crc kubenswrapper[4861]: I0219 13:30:49.226244 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 19 13:30:49 crc kubenswrapper[4861]: I0219 13:30:49.943095 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee4275ce-a2c5-4129-858f-48409cb928d7","Type":"ContainerStarted","Data":"de88dd247c3060a693e657e9d8814c568ae40cd7210ab38d92b7095268767f67"} Feb 19 13:30:50 crc kubenswrapper[4861]: I0219 13:30:50.921103 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5b99dd6c84-fwhqt" podUID="6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": read tcp 10.217.0.2:43516->10.217.0.164:9311: read: connection reset by peer" Feb 19 13:30:50 crc kubenswrapper[4861]: I0219 13:30:50.921804 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5b99dd6c84-fwhqt" podUID="6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": read tcp 10.217.0.2:43524->10.217.0.164:9311: read: connection reset by peer" Feb 19 13:30:50 crc kubenswrapper[4861]: I0219 13:30:50.960195 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ee4275ce-a2c5-4129-858f-48409cb928d7","Type":"ContainerStarted","Data":"bbc809fb7dfe766f6565a4c36333e9c1f149cd5154595f371e608aa4a060e813"} Feb 19 13:30:51 crc kubenswrapper[4861]: I0219 13:30:51.415025 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b99dd6c84-fwhqt" Feb 19 13:30:51 crc kubenswrapper[4861]: I0219 13:30:51.452567 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0-combined-ca-bundle\") pod \"6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0\" (UID: \"6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0\") " Feb 19 13:30:51 crc kubenswrapper[4861]: I0219 13:30:51.452698 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jq8ct\" (UniqueName: \"kubernetes.io/projected/6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0-kube-api-access-jq8ct\") pod \"6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0\" (UID: \"6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0\") " Feb 19 13:30:51 crc kubenswrapper[4861]: I0219 13:30:51.452908 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0-config-data-custom\") pod \"6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0\" (UID: \"6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0\") " Feb 19 13:30:51 crc kubenswrapper[4861]: I0219 13:30:51.452986 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0-config-data\") pod \"6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0\" (UID: \"6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0\") " Feb 19 13:30:51 crc kubenswrapper[4861]: I0219 13:30:51.453108 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0-logs\") pod \"6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0\" (UID: \"6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0\") " Feb 19 13:30:51 crc kubenswrapper[4861]: I0219 13:30:51.454766 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0-logs" (OuterVolumeSpecName: "logs") pod "6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0" (UID: "6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:30:51 crc kubenswrapper[4861]: I0219 13:30:51.484496 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0-kube-api-access-jq8ct" (OuterVolumeSpecName: "kube-api-access-jq8ct") pod "6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0" (UID: "6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0"). InnerVolumeSpecName "kube-api-access-jq8ct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:30:51 crc kubenswrapper[4861]: I0219 13:30:51.486883 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0" (UID: "6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:51 crc kubenswrapper[4861]: I0219 13:30:51.492100 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0" (UID: "6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:51 crc kubenswrapper[4861]: I0219 13:30:51.555147 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:51 crc kubenswrapper[4861]: I0219 13:30:51.555183 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:51 crc kubenswrapper[4861]: I0219 13:30:51.555197 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:51 crc kubenswrapper[4861]: I0219 13:30:51.555209 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jq8ct\" (UniqueName: \"kubernetes.io/projected/6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0-kube-api-access-jq8ct\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:51 crc kubenswrapper[4861]: I0219 13:30:51.556051 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0-config-data" (OuterVolumeSpecName: "config-data") pod "6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0" (UID: "6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:51 crc kubenswrapper[4861]: I0219 13:30:51.657498 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:51 crc kubenswrapper[4861]: I0219 13:30:51.994165 4861 generic.go:334] "Generic (PLEG): container finished" podID="6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0" containerID="4b82ebe957f9695e5600d639761e63ec3f73e659615e4708f6fd9df6e83386d0" exitCode=0 Feb 19 13:30:51 crc kubenswrapper[4861]: I0219 13:30:51.994312 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b99dd6c84-fwhqt" Feb 19 13:30:52 crc kubenswrapper[4861]: I0219 13:30:52.009580 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b99dd6c84-fwhqt" event={"ID":"6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0","Type":"ContainerDied","Data":"4b82ebe957f9695e5600d639761e63ec3f73e659615e4708f6fd9df6e83386d0"} Feb 19 13:30:52 crc kubenswrapper[4861]: I0219 13:30:52.009641 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b99dd6c84-fwhqt" event={"ID":"6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0","Type":"ContainerDied","Data":"b8ac2f7703ceac6d1f6c576350c62154b55095a5fb14103ba7d391ae847e4847"} Feb 19 13:30:52 crc kubenswrapper[4861]: I0219 13:30:52.009670 4861 scope.go:117] "RemoveContainer" containerID="4b82ebe957f9695e5600d639761e63ec3f73e659615e4708f6fd9df6e83386d0" Feb 19 13:30:52 crc kubenswrapper[4861]: I0219 13:30:52.092880 4861 scope.go:117] "RemoveContainer" containerID="2e1401071d848d309852eb1fd21f3ce9b4e3618b92c660db367c88e647969d7f" Feb 19 13:30:52 crc kubenswrapper[4861]: I0219 13:30:52.121905 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5b99dd6c84-fwhqt"] Feb 19 13:30:52 crc kubenswrapper[4861]: I0219 13:30:52.126127 4861 scope.go:117] 
"RemoveContainer" containerID="4b82ebe957f9695e5600d639761e63ec3f73e659615e4708f6fd9df6e83386d0" Feb 19 13:30:52 crc kubenswrapper[4861]: E0219 13:30:52.126608 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b82ebe957f9695e5600d639761e63ec3f73e659615e4708f6fd9df6e83386d0\": container with ID starting with 4b82ebe957f9695e5600d639761e63ec3f73e659615e4708f6fd9df6e83386d0 not found: ID does not exist" containerID="4b82ebe957f9695e5600d639761e63ec3f73e659615e4708f6fd9df6e83386d0" Feb 19 13:30:52 crc kubenswrapper[4861]: I0219 13:30:52.126659 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b82ebe957f9695e5600d639761e63ec3f73e659615e4708f6fd9df6e83386d0"} err="failed to get container status \"4b82ebe957f9695e5600d639761e63ec3f73e659615e4708f6fd9df6e83386d0\": rpc error: code = NotFound desc = could not find container \"4b82ebe957f9695e5600d639761e63ec3f73e659615e4708f6fd9df6e83386d0\": container with ID starting with 4b82ebe957f9695e5600d639761e63ec3f73e659615e4708f6fd9df6e83386d0 not found: ID does not exist" Feb 19 13:30:52 crc kubenswrapper[4861]: I0219 13:30:52.126685 4861 scope.go:117] "RemoveContainer" containerID="2e1401071d848d309852eb1fd21f3ce9b4e3618b92c660db367c88e647969d7f" Feb 19 13:30:52 crc kubenswrapper[4861]: E0219 13:30:52.127015 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e1401071d848d309852eb1fd21f3ce9b4e3618b92c660db367c88e647969d7f\": container with ID starting with 2e1401071d848d309852eb1fd21f3ce9b4e3618b92c660db367c88e647969d7f not found: ID does not exist" containerID="2e1401071d848d309852eb1fd21f3ce9b4e3618b92c660db367c88e647969d7f" Feb 19 13:30:52 crc kubenswrapper[4861]: I0219 13:30:52.127049 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2e1401071d848d309852eb1fd21f3ce9b4e3618b92c660db367c88e647969d7f"} err="failed to get container status \"2e1401071d848d309852eb1fd21f3ce9b4e3618b92c660db367c88e647969d7f\": rpc error: code = NotFound desc = could not find container \"2e1401071d848d309852eb1fd21f3ce9b4e3618b92c660db367c88e647969d7f\": container with ID starting with 2e1401071d848d309852eb1fd21f3ce9b4e3618b92c660db367c88e647969d7f not found: ID does not exist" Feb 19 13:30:52 crc kubenswrapper[4861]: I0219 13:30:52.134048 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5b99dd6c84-fwhqt"] Feb 19 13:30:53 crc kubenswrapper[4861]: I0219 13:30:53.014639 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee4275ce-a2c5-4129-858f-48409cb928d7","Type":"ContainerStarted","Data":"1848c1ea92b73faedba64818bb40baac579256e0a56d4c003909ba690ac4045a"} Feb 19 13:30:53 crc kubenswrapper[4861]: I0219 13:30:53.015015 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 13:30:53 crc kubenswrapper[4861]: I0219 13:30:53.048365 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.983870355 podStartE2EDuration="7.048349872s" podCreationTimestamp="2026-02-19 13:30:46 +0000 UTC" firstStartedPulling="2026-02-19 13:30:47.781159316 +0000 UTC m=+1262.442262544" lastFinishedPulling="2026-02-19 13:30:51.845638803 +0000 UTC m=+1266.506742061" observedRunningTime="2026-02-19 13:30:53.044848868 +0000 UTC m=+1267.705952106" watchObservedRunningTime="2026-02-19 13:30:53.048349872 +0000 UTC m=+1267.709453100" Feb 19 13:30:53 crc kubenswrapper[4861]: I0219 13:30:53.990343 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0" path="/var/lib/kubelet/pods/6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0/volumes" Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 
13:30:54.074496 4861 scope.go:117] "RemoveContainer" containerID="425487f97ef4116a4c5c37ee88a4206926a8b0f132f1a460674267c9c882809e" Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.097260 4861 scope.go:117] "RemoveContainer" containerID="9a6b56ba5ab57a9f6009534ebf638e2ef109ea0b72e837976c5c240cde05078c" Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.167228 4861 scope.go:117] "RemoveContainer" containerID="4fc806692ac80325104057f9868f40dfd771f6dbc215f1917bc3b0df676ea190" Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.302781 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5f6c7cb7cb-j5rts" Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.388137 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c69c79c7f-gz67q" Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.454275 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7489f6876c-dzd74"] Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.454586 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7489f6876c-dzd74" podUID="ed6f0ad1-4261-4369-9270-bb6f2aabc68a" containerName="dnsmasq-dns" containerID="cri-o://36095d8dace58542199462d7db345384b3b4a4dfba7fe0ba68d8f43490d87142" gracePeriod=10 Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.584017 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.646042 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.653443 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-59d9f88bd7-zq9nt"] Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.653678 4861 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/neutron-59d9f88bd7-zq9nt" podUID="8bbff609-754e-4955-8495-e3e1de7c0e05" containerName="neutron-api" containerID="cri-o://97fea45c3ade3e0a17678507ba1b8509b11c1b8398e3c984f00dc9c647fdeb6f" gracePeriod=30 Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.653738 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-59d9f88bd7-zq9nt" podUID="8bbff609-754e-4955-8495-e3e1de7c0e05" containerName="neutron-httpd" containerID="cri-o://202486ab1c04a67d2cc963f93176c1a28491a7ad70e6b1662d3833b2cf389788" gracePeriod=30 Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.692188 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7f585bc76f-dg9rf"] Feb 19 13:30:54 crc kubenswrapper[4861]: E0219 13:30:54.692638 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0" containerName="barbican-api-log" Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.692650 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0" containerName="barbican-api-log" Feb 19 13:30:54 crc kubenswrapper[4861]: E0219 13:30:54.692678 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0" containerName="barbican-api" Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.692684 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0" containerName="barbican-api" Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.692857 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0" containerName="barbican-api" Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.692875 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e1d0ec1-d77d-4a8e-ab33-ce11c2115ac0" containerName="barbican-api-log" Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.693806 4861 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7f585bc76f-dg9rf" Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.716678 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7f585bc76f-dg9rf"] Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.828803 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-public-tls-certs\") pod \"neutron-7f585bc76f-dg9rf\" (UID: \"cdaa2d03-6ae0-405a-af42-499d99ec711d\") " pod="openstack/neutron-7f585bc76f-dg9rf" Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.828879 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-internal-tls-certs\") pod \"neutron-7f585bc76f-dg9rf\" (UID: \"cdaa2d03-6ae0-405a-af42-499d99ec711d\") " pod="openstack/neutron-7f585bc76f-dg9rf" Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.828917 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-combined-ca-bundle\") pod \"neutron-7f585bc76f-dg9rf\" (UID: \"cdaa2d03-6ae0-405a-af42-499d99ec711d\") " pod="openstack/neutron-7f585bc76f-dg9rf" Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.828949 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfzrt\" (UniqueName: \"kubernetes.io/projected/cdaa2d03-6ae0-405a-af42-499d99ec711d-kube-api-access-dfzrt\") pod \"neutron-7f585bc76f-dg9rf\" (UID: \"cdaa2d03-6ae0-405a-af42-499d99ec711d\") " pod="openstack/neutron-7f585bc76f-dg9rf" Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.829137 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-config\") pod \"neutron-7f585bc76f-dg9rf\" (UID: \"cdaa2d03-6ae0-405a-af42-499d99ec711d\") " pod="openstack/neutron-7f585bc76f-dg9rf"
Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.829210 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-httpd-config\") pod \"neutron-7f585bc76f-dg9rf\" (UID: \"cdaa2d03-6ae0-405a-af42-499d99ec711d\") " pod="openstack/neutron-7f585bc76f-dg9rf"
Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.829361 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-ovndb-tls-certs\") pod \"neutron-7f585bc76f-dg9rf\" (UID: \"cdaa2d03-6ae0-405a-af42-499d99ec711d\") " pod="openstack/neutron-7f585bc76f-dg9rf"
Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.931301 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-ovndb-tls-certs\") pod \"neutron-7f585bc76f-dg9rf\" (UID: \"cdaa2d03-6ae0-405a-af42-499d99ec711d\") " pod="openstack/neutron-7f585bc76f-dg9rf"
Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.931719 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-public-tls-certs\") pod \"neutron-7f585bc76f-dg9rf\" (UID: \"cdaa2d03-6ae0-405a-af42-499d99ec711d\") " pod="openstack/neutron-7f585bc76f-dg9rf"
Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.931764 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-internal-tls-certs\") pod \"neutron-7f585bc76f-dg9rf\" (UID: \"cdaa2d03-6ae0-405a-af42-499d99ec711d\") " pod="openstack/neutron-7f585bc76f-dg9rf"
Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.931798 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-combined-ca-bundle\") pod \"neutron-7f585bc76f-dg9rf\" (UID: \"cdaa2d03-6ae0-405a-af42-499d99ec711d\") " pod="openstack/neutron-7f585bc76f-dg9rf"
Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.931830 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfzrt\" (UniqueName: \"kubernetes.io/projected/cdaa2d03-6ae0-405a-af42-499d99ec711d-kube-api-access-dfzrt\") pod \"neutron-7f585bc76f-dg9rf\" (UID: \"cdaa2d03-6ae0-405a-af42-499d99ec711d\") " pod="openstack/neutron-7f585bc76f-dg9rf"
Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.931890 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-config\") pod \"neutron-7f585bc76f-dg9rf\" (UID: \"cdaa2d03-6ae0-405a-af42-499d99ec711d\") " pod="openstack/neutron-7f585bc76f-dg9rf"
Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.931927 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-httpd-config\") pod \"neutron-7f585bc76f-dg9rf\" (UID: \"cdaa2d03-6ae0-405a-af42-499d99ec711d\") " pod="openstack/neutron-7f585bc76f-dg9rf"
Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.937912 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-internal-tls-certs\") pod \"neutron-7f585bc76f-dg9rf\" (UID: \"cdaa2d03-6ae0-405a-af42-499d99ec711d\") " pod="openstack/neutron-7f585bc76f-dg9rf"
Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.938436 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-httpd-config\") pod \"neutron-7f585bc76f-dg9rf\" (UID: \"cdaa2d03-6ae0-405a-af42-499d99ec711d\") " pod="openstack/neutron-7f585bc76f-dg9rf"
Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.939173 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-public-tls-certs\") pod \"neutron-7f585bc76f-dg9rf\" (UID: \"cdaa2d03-6ae0-405a-af42-499d99ec711d\") " pod="openstack/neutron-7f585bc76f-dg9rf"
Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.939894 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-combined-ca-bundle\") pod \"neutron-7f585bc76f-dg9rf\" (UID: \"cdaa2d03-6ae0-405a-af42-499d99ec711d\") " pod="openstack/neutron-7f585bc76f-dg9rf"
Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.942102 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-ovndb-tls-certs\") pod \"neutron-7f585bc76f-dg9rf\" (UID: \"cdaa2d03-6ae0-405a-af42-499d99ec711d\") " pod="openstack/neutron-7f585bc76f-dg9rf"
Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.942995 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-config\") pod \"neutron-7f585bc76f-dg9rf\" (UID: \"cdaa2d03-6ae0-405a-af42-499d99ec711d\") " pod="openstack/neutron-7f585bc76f-dg9rf"
Feb 19 13:30:54 crc kubenswrapper[4861]: I0219 13:30:54.953253 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfzrt\" (UniqueName: \"kubernetes.io/projected/cdaa2d03-6ae0-405a-af42-499d99ec711d-kube-api-access-dfzrt\") pod \"neutron-7f585bc76f-dg9rf\" (UID: \"cdaa2d03-6ae0-405a-af42-499d99ec711d\") " pod="openstack/neutron-7f585bc76f-dg9rf"
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.014470 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7f585bc76f-dg9rf"
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.022937 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-59d9f88bd7-zq9nt" podUID="8bbff609-754e-4955-8495-e3e1de7c0e05" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.155:9696/\": read tcp 10.217.0.2:50500->10.217.0.155:9696: read: connection reset by peer"
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.060453 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7489f6876c-dzd74"
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.069414 4861 generic.go:334] "Generic (PLEG): container finished" podID="ed6f0ad1-4261-4369-9270-bb6f2aabc68a" containerID="36095d8dace58542199462d7db345384b3b4a4dfba7fe0ba68d8f43490d87142" exitCode=0
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.069688 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="8fe74cbc-610e-4331-87db-10858d03af8d" containerName="cinder-scheduler" containerID="cri-o://e7dae71a45e235fed78e66869bdcb7fb6ead8d73f1b2c3134615b30dea82ddaa" gracePeriod=30
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.069948 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7489f6876c-dzd74" event={"ID":"ed6f0ad1-4261-4369-9270-bb6f2aabc68a","Type":"ContainerDied","Data":"36095d8dace58542199462d7db345384b3b4a4dfba7fe0ba68d8f43490d87142"}
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.069995 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7489f6876c-dzd74" event={"ID":"ed6f0ad1-4261-4369-9270-bb6f2aabc68a","Type":"ContainerDied","Data":"c8f532ade0da409990458aa6354e0b3af08be7278fbd4857cc3c361db4ac656e"}
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.070032 4861 scope.go:117] "RemoveContainer" containerID="36095d8dace58542199462d7db345384b3b4a4dfba7fe0ba68d8f43490d87142"
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.070717 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="8fe74cbc-610e-4331-87db-10858d03af8d" containerName="probe" containerID="cri-o://cdffe44b0c56542d28e03fd2a6612d8d327d01a42e691d5c60eb2ce0448ea7fb" gracePeriod=30
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.123839 4861 scope.go:117] "RemoveContainer" containerID="c7546fec94273eadd02f2c9ea3c7773ed0a984b9ae490224e2ab36203fa2addb"
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.184410 4861 scope.go:117] "RemoveContainer" containerID="36095d8dace58542199462d7db345384b3b4a4dfba7fe0ba68d8f43490d87142"
Feb 19 13:30:55 crc kubenswrapper[4861]: E0219 13:30:55.186891 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36095d8dace58542199462d7db345384b3b4a4dfba7fe0ba68d8f43490d87142\": container with ID starting with 36095d8dace58542199462d7db345384b3b4a4dfba7fe0ba68d8f43490d87142 not found: ID does not exist" containerID="36095d8dace58542199462d7db345384b3b4a4dfba7fe0ba68d8f43490d87142"
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.186940 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36095d8dace58542199462d7db345384b3b4a4dfba7fe0ba68d8f43490d87142"} err="failed to get container status \"36095d8dace58542199462d7db345384b3b4a4dfba7fe0ba68d8f43490d87142\": rpc error: code = NotFound desc = could not find container \"36095d8dace58542199462d7db345384b3b4a4dfba7fe0ba68d8f43490d87142\": container with ID starting with 36095d8dace58542199462d7db345384b3b4a4dfba7fe0ba68d8f43490d87142 not found: ID does not exist"
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.186973 4861 scope.go:117] "RemoveContainer" containerID="c7546fec94273eadd02f2c9ea3c7773ed0a984b9ae490224e2ab36203fa2addb"
Feb 19 13:30:55 crc kubenswrapper[4861]: E0219 13:30:55.187973 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7546fec94273eadd02f2c9ea3c7773ed0a984b9ae490224e2ab36203fa2addb\": container with ID starting with c7546fec94273eadd02f2c9ea3c7773ed0a984b9ae490224e2ab36203fa2addb not found: ID does not exist" containerID="c7546fec94273eadd02f2c9ea3c7773ed0a984b9ae490224e2ab36203fa2addb"
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.188012 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7546fec94273eadd02f2c9ea3c7773ed0a984b9ae490224e2ab36203fa2addb"} err="failed to get container status \"c7546fec94273eadd02f2c9ea3c7773ed0a984b9ae490224e2ab36203fa2addb\": rpc error: code = NotFound desc = could not find container \"c7546fec94273eadd02f2c9ea3c7773ed0a984b9ae490224e2ab36203fa2addb\": container with ID starting with c7546fec94273eadd02f2c9ea3c7773ed0a984b9ae490224e2ab36203fa2addb not found: ID does not exist"
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.235198 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-ovsdbserver-nb\") pod \"ed6f0ad1-4261-4369-9270-bb6f2aabc68a\" (UID: \"ed6f0ad1-4261-4369-9270-bb6f2aabc68a\") "
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.235258 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rwk8\" (UniqueName: \"kubernetes.io/projected/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-kube-api-access-7rwk8\") pod \"ed6f0ad1-4261-4369-9270-bb6f2aabc68a\" (UID: \"ed6f0ad1-4261-4369-9270-bb6f2aabc68a\") "
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.235315 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-ovsdbserver-sb\") pod \"ed6f0ad1-4261-4369-9270-bb6f2aabc68a\" (UID: \"ed6f0ad1-4261-4369-9270-bb6f2aabc68a\") "
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.235384 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-dns-swift-storage-0\") pod \"ed6f0ad1-4261-4369-9270-bb6f2aabc68a\" (UID: \"ed6f0ad1-4261-4369-9270-bb6f2aabc68a\") "
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.235409 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-dns-svc\") pod \"ed6f0ad1-4261-4369-9270-bb6f2aabc68a\" (UID: \"ed6f0ad1-4261-4369-9270-bb6f2aabc68a\") "
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.235522 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-config\") pod \"ed6f0ad1-4261-4369-9270-bb6f2aabc68a\" (UID: \"ed6f0ad1-4261-4369-9270-bb6f2aabc68a\") "
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.241234 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-kube-api-access-7rwk8" (OuterVolumeSpecName: "kube-api-access-7rwk8") pod "ed6f0ad1-4261-4369-9270-bb6f2aabc68a" (UID: "ed6f0ad1-4261-4369-9270-bb6f2aabc68a"). InnerVolumeSpecName "kube-api-access-7rwk8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.300333 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ed6f0ad1-4261-4369-9270-bb6f2aabc68a" (UID: "ed6f0ad1-4261-4369-9270-bb6f2aabc68a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.322651 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ed6f0ad1-4261-4369-9270-bb6f2aabc68a" (UID: "ed6f0ad1-4261-4369-9270-bb6f2aabc68a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.339015 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-config" (OuterVolumeSpecName: "config") pod "ed6f0ad1-4261-4369-9270-bb6f2aabc68a" (UID: "ed6f0ad1-4261-4369-9270-bb6f2aabc68a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.339300 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ed6f0ad1-4261-4369-9270-bb6f2aabc68a" (UID: "ed6f0ad1-4261-4369-9270-bb6f2aabc68a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.339789 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rwk8\" (UniqueName: \"kubernetes.io/projected/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-kube-api-access-7rwk8\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.339826 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.339837 4861 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.339877 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.339887 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-config\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.346894 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ed6f0ad1-4261-4369-9270-bb6f2aabc68a" (UID: "ed6f0ad1-4261-4369-9270-bb6f2aabc68a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.441299 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed6f0ad1-4261-4369-9270-bb6f2aabc68a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:55 crc kubenswrapper[4861]: I0219 13:30:55.607485 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7f585bc76f-dg9rf"]
Feb 19 13:30:55 crc kubenswrapper[4861]: W0219 13:30:55.610137 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdaa2d03_6ae0_405a_af42_499d99ec711d.slice/crio-7a4d203093c25a8bd89493b0c04ac98a0bc61a46fc5bbeb35c529cd9d56bc9c7 WatchSource:0}: Error finding container 7a4d203093c25a8bd89493b0c04ac98a0bc61a46fc5bbeb35c529cd9d56bc9c7: Status 404 returned error can't find the container with id 7a4d203093c25a8bd89493b0c04ac98a0bc61a46fc5bbeb35c529cd9d56bc9c7
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.088897 4861 generic.go:334] "Generic (PLEG): container finished" podID="8bbff609-754e-4955-8495-e3e1de7c0e05" containerID="202486ab1c04a67d2cc963f93176c1a28491a7ad70e6b1662d3833b2cf389788" exitCode=0
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.089735 4861 generic.go:334] "Generic (PLEG): container finished" podID="8bbff609-754e-4955-8495-e3e1de7c0e05" containerID="97fea45c3ade3e0a17678507ba1b8509b11c1b8398e3c984f00dc9c647fdeb6f" exitCode=0
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.089848 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59d9f88bd7-zq9nt" event={"ID":"8bbff609-754e-4955-8495-e3e1de7c0e05","Type":"ContainerDied","Data":"202486ab1c04a67d2cc963f93176c1a28491a7ad70e6b1662d3833b2cf389788"}
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.089917 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59d9f88bd7-zq9nt" event={"ID":"8bbff609-754e-4955-8495-e3e1de7c0e05","Type":"ContainerDied","Data":"97fea45c3ade3e0a17678507ba1b8509b11c1b8398e3c984f00dc9c647fdeb6f"}
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.093007 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f585bc76f-dg9rf" event={"ID":"cdaa2d03-6ae0-405a-af42-499d99ec711d","Type":"ContainerStarted","Data":"e596ff917ea1fb5095cf558e3c5f097ddc50829b4c61ec3a615a77087e4cd4bb"}
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.093072 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f585bc76f-dg9rf" event={"ID":"cdaa2d03-6ae0-405a-af42-499d99ec711d","Type":"ContainerStarted","Data":"7a4d203093c25a8bd89493b0c04ac98a0bc61a46fc5bbeb35c529cd9d56bc9c7"}
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.094590 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7489f6876c-dzd74"
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.100706 4861 generic.go:334] "Generic (PLEG): container finished" podID="8fe74cbc-610e-4331-87db-10858d03af8d" containerID="cdffe44b0c56542d28e03fd2a6612d8d327d01a42e691d5c60eb2ce0448ea7fb" exitCode=0
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.100772 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8fe74cbc-610e-4331-87db-10858d03af8d","Type":"ContainerDied","Data":"cdffe44b0c56542d28e03fd2a6612d8d327d01a42e691d5c60eb2ce0448ea7fb"}
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.126849 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7489f6876c-dzd74"]
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.137652 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7489f6876c-dzd74"]
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.514701 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59d9f88bd7-zq9nt"
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.666585 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-httpd-config\") pod \"8bbff609-754e-4955-8495-e3e1de7c0e05\" (UID: \"8bbff609-754e-4955-8495-e3e1de7c0e05\") "
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.666699 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-public-tls-certs\") pod \"8bbff609-754e-4955-8495-e3e1de7c0e05\" (UID: \"8bbff609-754e-4955-8495-e3e1de7c0e05\") "
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.666832 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-config\") pod \"8bbff609-754e-4955-8495-e3e1de7c0e05\" (UID: \"8bbff609-754e-4955-8495-e3e1de7c0e05\") "
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.666860 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-ovndb-tls-certs\") pod \"8bbff609-754e-4955-8495-e3e1de7c0e05\" (UID: \"8bbff609-754e-4955-8495-e3e1de7c0e05\") "
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.667867 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5td4l\" (UniqueName: \"kubernetes.io/projected/8bbff609-754e-4955-8495-e3e1de7c0e05-kube-api-access-5td4l\") pod \"8bbff609-754e-4955-8495-e3e1de7c0e05\" (UID: \"8bbff609-754e-4955-8495-e3e1de7c0e05\") "
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.667963 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-internal-tls-certs\") pod \"8bbff609-754e-4955-8495-e3e1de7c0e05\" (UID: \"8bbff609-754e-4955-8495-e3e1de7c0e05\") "
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.668008 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-combined-ca-bundle\") pod \"8bbff609-754e-4955-8495-e3e1de7c0e05\" (UID: \"8bbff609-754e-4955-8495-e3e1de7c0e05\") "
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.684723 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bbff609-754e-4955-8495-e3e1de7c0e05-kube-api-access-5td4l" (OuterVolumeSpecName: "kube-api-access-5td4l") pod "8bbff609-754e-4955-8495-e3e1de7c0e05" (UID: "8bbff609-754e-4955-8495-e3e1de7c0e05"). InnerVolumeSpecName "kube-api-access-5td4l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.687492 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "8bbff609-754e-4955-8495-e3e1de7c0e05" (UID: "8bbff609-754e-4955-8495-e3e1de7c0e05"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.718626 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8bbff609-754e-4955-8495-e3e1de7c0e05" (UID: "8bbff609-754e-4955-8495-e3e1de7c0e05"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.736449 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-config" (OuterVolumeSpecName: "config") pod "8bbff609-754e-4955-8495-e3e1de7c0e05" (UID: "8bbff609-754e-4955-8495-e3e1de7c0e05"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.741074 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8bbff609-754e-4955-8495-e3e1de7c0e05" (UID: "8bbff609-754e-4955-8495-e3e1de7c0e05"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.755506 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8bbff609-754e-4955-8495-e3e1de7c0e05" (UID: "8bbff609-754e-4955-8495-e3e1de7c0e05"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.767330 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.770807 4861 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.770890 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.770951 4861 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.771002 4861 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.771055 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-config\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.771123 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5td4l\" (UniqueName: \"kubernetes.io/projected/8bbff609-754e-4955-8495-e3e1de7c0e05-kube-api-access-5td4l\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.774382 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "8bbff609-754e-4955-8495-e3e1de7c0e05" (UID: "8bbff609-754e-4955-8495-e3e1de7c0e05"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:30:56 crc kubenswrapper[4861]: I0219 13:30:56.873105 4861 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bbff609-754e-4955-8495-e3e1de7c0e05-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:57 crc kubenswrapper[4861]: I0219 13:30:57.112721 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f585bc76f-dg9rf" event={"ID":"cdaa2d03-6ae0-405a-af42-499d99ec711d","Type":"ContainerStarted","Data":"4da13ead2ea9ec2a3cf985ce57e0d64f8641678421b9b3e6a3695e68e35cfeb4"}
Feb 19 13:30:57 crc kubenswrapper[4861]: I0219 13:30:57.112789 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7f585bc76f-dg9rf"
Feb 19 13:30:57 crc kubenswrapper[4861]: I0219 13:30:57.115289 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59d9f88bd7-zq9nt" event={"ID":"8bbff609-754e-4955-8495-e3e1de7c0e05","Type":"ContainerDied","Data":"1cf7aadd098f564ffe1df71b7df0d97d11d71753ecebb0cdd461445946c47612"}
Feb 19 13:30:57 crc kubenswrapper[4861]: I0219 13:30:57.115324 4861 scope.go:117] "RemoveContainer" containerID="202486ab1c04a67d2cc963f93176c1a28491a7ad70e6b1662d3833b2cf389788"
Feb 19 13:30:57 crc kubenswrapper[4861]: I0219 13:30:57.115471 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59d9f88bd7-zq9nt"
Feb 19 13:30:57 crc kubenswrapper[4861]: I0219 13:30:57.139021 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7f585bc76f-dg9rf" podStartSLOduration=3.139001254 podStartE2EDuration="3.139001254s" podCreationTimestamp="2026-02-19 13:30:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:30:57.13773451 +0000 UTC m=+1271.798837758" watchObservedRunningTime="2026-02-19 13:30:57.139001254 +0000 UTC m=+1271.800104502"
Feb 19 13:30:57 crc kubenswrapper[4861]: I0219 13:30:57.166171 4861 scope.go:117] "RemoveContainer" containerID="97fea45c3ade3e0a17678507ba1b8509b11c1b8398e3c984f00dc9c647fdeb6f"
Feb 19 13:30:57 crc kubenswrapper[4861]: I0219 13:30:57.183400 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-59d9f88bd7-zq9nt"]
Feb 19 13:30:57 crc kubenswrapper[4861]: I0219 13:30:57.192042 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-59d9f88bd7-zq9nt"]
Feb 19 13:30:57 crc kubenswrapper[4861]: I0219 13:30:57.995078 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bbff609-754e-4955-8495-e3e1de7c0e05" path="/var/lib/kubelet/pods/8bbff609-754e-4955-8495-e3e1de7c0e05/volumes"
Feb 19 13:30:57 crc kubenswrapper[4861]: I0219 13:30:57.996099 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed6f0ad1-4261-4369-9270-bb6f2aabc68a" path="/var/lib/kubelet/pods/ed6f0ad1-4261-4369-9270-bb6f2aabc68a/volumes"
Feb 19 13:30:58 crc kubenswrapper[4861]: I0219 13:30:58.624480 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 19 13:30:58 crc kubenswrapper[4861]: I0219 13:30:58.707325 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fe74cbc-610e-4331-87db-10858d03af8d-config-data\") pod \"8fe74cbc-610e-4331-87db-10858d03af8d\" (UID: \"8fe74cbc-610e-4331-87db-10858d03af8d\") "
Feb 19 13:30:58 crc kubenswrapper[4861]: I0219 13:30:58.707964 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fe74cbc-610e-4331-87db-10858d03af8d-scripts\") pod \"8fe74cbc-610e-4331-87db-10858d03af8d\" (UID: \"8fe74cbc-610e-4331-87db-10858d03af8d\") "
Feb 19 13:30:58 crc kubenswrapper[4861]: I0219 13:30:58.708018 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8fe74cbc-610e-4331-87db-10858d03af8d-config-data-custom\") pod \"8fe74cbc-610e-4331-87db-10858d03af8d\" (UID: \"8fe74cbc-610e-4331-87db-10858d03af8d\") "
Feb 19 13:30:58 crc kubenswrapper[4861]: I0219 13:30:58.708095 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8fe74cbc-610e-4331-87db-10858d03af8d-etc-machine-id\") pod \"8fe74cbc-610e-4331-87db-10858d03af8d\" (UID: \"8fe74cbc-610e-4331-87db-10858d03af8d\") "
Feb 19 13:30:58 crc kubenswrapper[4861]: I0219 13:30:58.708133 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npjgb\" (UniqueName: \"kubernetes.io/projected/8fe74cbc-610e-4331-87db-10858d03af8d-kube-api-access-npjgb\") pod \"8fe74cbc-610e-4331-87db-10858d03af8d\" (UID: \"8fe74cbc-610e-4331-87db-10858d03af8d\") "
Feb 19 13:30:58 crc kubenswrapper[4861]: I0219 13:30:58.708243 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fe74cbc-610e-4331-87db-10858d03af8d-combined-ca-bundle\") pod \"8fe74cbc-610e-4331-87db-10858d03af8d\" (UID: \"8fe74cbc-610e-4331-87db-10858d03af8d\") "
Feb 19 13:30:58 crc kubenswrapper[4861]: I0219 13:30:58.708881 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fe74cbc-610e-4331-87db-10858d03af8d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8fe74cbc-610e-4331-87db-10858d03af8d" (UID: "8fe74cbc-610e-4331-87db-10858d03af8d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 13:30:58 crc kubenswrapper[4861]: I0219 13:30:58.719819 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fe74cbc-610e-4331-87db-10858d03af8d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8fe74cbc-610e-4331-87db-10858d03af8d" (UID: "8fe74cbc-610e-4331-87db-10858d03af8d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:30:58 crc kubenswrapper[4861]: I0219 13:30:58.720488 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fe74cbc-610e-4331-87db-10858d03af8d-scripts" (OuterVolumeSpecName: "scripts") pod "8fe74cbc-610e-4331-87db-10858d03af8d" (UID: "8fe74cbc-610e-4331-87db-10858d03af8d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:30:58 crc kubenswrapper[4861]: I0219 13:30:58.738395 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fe74cbc-610e-4331-87db-10858d03af8d-kube-api-access-npjgb" (OuterVolumeSpecName: "kube-api-access-npjgb") pod "8fe74cbc-610e-4331-87db-10858d03af8d" (UID: "8fe74cbc-610e-4331-87db-10858d03af8d"). InnerVolumeSpecName "kube-api-access-npjgb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 13:30:58 crc kubenswrapper[4861]: I0219 13:30:58.773786 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fe74cbc-610e-4331-87db-10858d03af8d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8fe74cbc-610e-4331-87db-10858d03af8d" (UID: "8fe74cbc-610e-4331-87db-10858d03af8d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:30:58 crc kubenswrapper[4861]: I0219 13:30:58.810255 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fe74cbc-610e-4331-87db-10858d03af8d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:58 crc kubenswrapper[4861]: I0219 13:30:58.810415 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fe74cbc-610e-4331-87db-10858d03af8d-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:58 crc kubenswrapper[4861]: I0219 13:30:58.810514 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8fe74cbc-610e-4331-87db-10858d03af8d-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:58 crc kubenswrapper[4861]: I0219 13:30:58.810605 4861 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8fe74cbc-610e-4331-87db-10858d03af8d-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:58 crc kubenswrapper[4861]: I0219 13:30:58.810682 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npjgb\" (UniqueName: \"kubernetes.io/projected/8fe74cbc-610e-4331-87db-10858d03af8d-kube-api-access-npjgb\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:58 crc kubenswrapper[4861]: I0219 13:30:58.840407 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fe74cbc-610e-4331-87db-10858d03af8d-config-data" (OuterVolumeSpecName: "config-data") pod "8fe74cbc-610e-4331-87db-10858d03af8d" (UID: "8fe74cbc-610e-4331-87db-10858d03af8d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:30:58 crc kubenswrapper[4861]: I0219 13:30:58.912632 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fe74cbc-610e-4331-87db-10858d03af8d-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.139211 4861 generic.go:334] "Generic (PLEG): container finished" podID="8fe74cbc-610e-4331-87db-10858d03af8d" containerID="e7dae71a45e235fed78e66869bdcb7fb6ead8d73f1b2c3134615b30dea82ddaa" exitCode=0
Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.139283 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.139337 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8fe74cbc-610e-4331-87db-10858d03af8d","Type":"ContainerDied","Data":"e7dae71a45e235fed78e66869bdcb7fb6ead8d73f1b2c3134615b30dea82ddaa"}
Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.139414 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8fe74cbc-610e-4331-87db-10858d03af8d","Type":"ContainerDied","Data":"70c667bbdc323d489993f680848e7e6b785fea20e920a01ab4b00d33aeb8617a"}
Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.139491 4861 scope.go:117] "RemoveContainer" containerID="cdffe44b0c56542d28e03fd2a6612d8d327d01a42e691d5c60eb2ce0448ea7fb"
Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.175568 4861 scope.go:117] "RemoveContainer" containerID="e7dae71a45e235fed78e66869bdcb7fb6ead8d73f1b2c3134615b30dea82ddaa"
Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 
13:30:59.204140 4861 scope.go:117] "RemoveContainer" containerID="cdffe44b0c56542d28e03fd2a6612d8d327d01a42e691d5c60eb2ce0448ea7fb" Feb 19 13:30:59 crc kubenswrapper[4861]: E0219 13:30:59.204841 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdffe44b0c56542d28e03fd2a6612d8d327d01a42e691d5c60eb2ce0448ea7fb\": container with ID starting with cdffe44b0c56542d28e03fd2a6612d8d327d01a42e691d5c60eb2ce0448ea7fb not found: ID does not exist" containerID="cdffe44b0c56542d28e03fd2a6612d8d327d01a42e691d5c60eb2ce0448ea7fb" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.204884 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdffe44b0c56542d28e03fd2a6612d8d327d01a42e691d5c60eb2ce0448ea7fb"} err="failed to get container status \"cdffe44b0c56542d28e03fd2a6612d8d327d01a42e691d5c60eb2ce0448ea7fb\": rpc error: code = NotFound desc = could not find container \"cdffe44b0c56542d28e03fd2a6612d8d327d01a42e691d5c60eb2ce0448ea7fb\": container with ID starting with cdffe44b0c56542d28e03fd2a6612d8d327d01a42e691d5c60eb2ce0448ea7fb not found: ID does not exist" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.204917 4861 scope.go:117] "RemoveContainer" containerID="e7dae71a45e235fed78e66869bdcb7fb6ead8d73f1b2c3134615b30dea82ddaa" Feb 19 13:30:59 crc kubenswrapper[4861]: E0219 13:30:59.205295 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7dae71a45e235fed78e66869bdcb7fb6ead8d73f1b2c3134615b30dea82ddaa\": container with ID starting with e7dae71a45e235fed78e66869bdcb7fb6ead8d73f1b2c3134615b30dea82ddaa not found: ID does not exist" containerID="e7dae71a45e235fed78e66869bdcb7fb6ead8d73f1b2c3134615b30dea82ddaa" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.205321 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e7dae71a45e235fed78e66869bdcb7fb6ead8d73f1b2c3134615b30dea82ddaa"} err="failed to get container status \"e7dae71a45e235fed78e66869bdcb7fb6ead8d73f1b2c3134615b30dea82ddaa\": rpc error: code = NotFound desc = could not find container \"e7dae71a45e235fed78e66869bdcb7fb6ead8d73f1b2c3134615b30dea82ddaa\": container with ID starting with e7dae71a45e235fed78e66869bdcb7fb6ead8d73f1b2c3134615b30dea82ddaa not found: ID does not exist" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.216488 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.249724 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.261509 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 13:30:59 crc kubenswrapper[4861]: E0219 13:30:59.262040 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fe74cbc-610e-4331-87db-10858d03af8d" containerName="probe" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.262104 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fe74cbc-610e-4331-87db-10858d03af8d" containerName="probe" Feb 19 13:30:59 crc kubenswrapper[4861]: E0219 13:30:59.262180 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bbff609-754e-4955-8495-e3e1de7c0e05" containerName="neutron-api" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.262248 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bbff609-754e-4955-8495-e3e1de7c0e05" containerName="neutron-api" Feb 19 13:30:59 crc kubenswrapper[4861]: E0219 13:30:59.262314 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed6f0ad1-4261-4369-9270-bb6f2aabc68a" containerName="init" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.262363 4861 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ed6f0ad1-4261-4369-9270-bb6f2aabc68a" containerName="init" Feb 19 13:30:59 crc kubenswrapper[4861]: E0219 13:30:59.262444 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fe74cbc-610e-4331-87db-10858d03af8d" containerName="cinder-scheduler" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.262497 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fe74cbc-610e-4331-87db-10858d03af8d" containerName="cinder-scheduler" Feb 19 13:30:59 crc kubenswrapper[4861]: E0219 13:30:59.262574 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed6f0ad1-4261-4369-9270-bb6f2aabc68a" containerName="dnsmasq-dns" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.262623 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed6f0ad1-4261-4369-9270-bb6f2aabc68a" containerName="dnsmasq-dns" Feb 19 13:30:59 crc kubenswrapper[4861]: E0219 13:30:59.262689 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bbff609-754e-4955-8495-e3e1de7c0e05" containerName="neutron-httpd" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.262738 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bbff609-754e-4955-8495-e3e1de7c0e05" containerName="neutron-httpd" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.262971 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fe74cbc-610e-4331-87db-10858d03af8d" containerName="cinder-scheduler" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.263043 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fe74cbc-610e-4331-87db-10858d03af8d" containerName="probe" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.263099 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bbff609-754e-4955-8495-e3e1de7c0e05" containerName="neutron-api" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.263156 4861 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8bbff609-754e-4955-8495-e3e1de7c0e05" containerName="neutron-httpd" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.263216 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed6f0ad1-4261-4369-9270-bb6f2aabc68a" containerName="dnsmasq-dns" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.264140 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.266411 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.272065 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.435593 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a\") " pod="openstack/cinder-scheduler-0" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.435649 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a\") " pod="openstack/cinder-scheduler-0" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.435679 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a\") " pod="openstack/cinder-scheduler-0" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 
13:30:59.435720 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jxlz\" (UniqueName: \"kubernetes.io/projected/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-kube-api-access-6jxlz\") pod \"cinder-scheduler-0\" (UID: \"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a\") " pod="openstack/cinder-scheduler-0" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.435770 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-config-data\") pod \"cinder-scheduler-0\" (UID: \"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a\") " pod="openstack/cinder-scheduler-0" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.435825 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-scripts\") pod \"cinder-scheduler-0\" (UID: \"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a\") " pod="openstack/cinder-scheduler-0" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.537673 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jxlz\" (UniqueName: \"kubernetes.io/projected/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-kube-api-access-6jxlz\") pod \"cinder-scheduler-0\" (UID: \"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a\") " pod="openstack/cinder-scheduler-0" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.538054 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-config-data\") pod \"cinder-scheduler-0\" (UID: \"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a\") " pod="openstack/cinder-scheduler-0" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.538866 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-scripts\") pod \"cinder-scheduler-0\" (UID: \"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a\") " pod="openstack/cinder-scheduler-0" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.538925 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a\") " pod="openstack/cinder-scheduler-0" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.538956 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a\") " pod="openstack/cinder-scheduler-0" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.539022 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a\") " pod="openstack/cinder-scheduler-0" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.539067 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a\") " pod="openstack/cinder-scheduler-0" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.545475 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a\") " pod="openstack/cinder-scheduler-0" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.545904 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a\") " pod="openstack/cinder-scheduler-0" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.561113 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a\") " pod="openstack/cinder-scheduler-0" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.568109 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-scripts\") pod \"cinder-scheduler-0\" (UID: \"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a\") " pod="openstack/cinder-scheduler-0" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.569918 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jxlz\" (UniqueName: \"kubernetes.io/projected/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-kube-api-access-6jxlz\") pod \"cinder-scheduler-0\" (UID: \"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a\") " pod="openstack/cinder-scheduler-0" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.580002 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 13:30:59 crc kubenswrapper[4861]: I0219 13:30:59.986176 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fe74cbc-610e-4331-87db-10858d03af8d" path="/var/lib/kubelet/pods/8fe74cbc-610e-4331-87db-10858d03af8d/volumes" Feb 19 13:31:00 crc kubenswrapper[4861]: I0219 13:31:00.050982 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 13:31:00 crc kubenswrapper[4861]: I0219 13:31:00.161450 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a","Type":"ContainerStarted","Data":"dc9e11a4270ec7fafbd8e72206f731cb4ae61a39e2a9eb320a3081a5d10f8c13"} Feb 19 13:31:00 crc kubenswrapper[4861]: I0219 13:31:00.900085 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7549c5f8db-8jjpm" Feb 19 13:31:00 crc kubenswrapper[4861]: I0219 13:31:00.966960 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7549c5f8db-8jjpm" Feb 19 13:31:01 crc kubenswrapper[4861]: I0219 13:31:01.231681 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6799fd8d6-p6tpl" Feb 19 13:31:01 crc kubenswrapper[4861]: I0219 13:31:01.233150 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6799fd8d6-p6tpl" Feb 19 13:31:01 crc kubenswrapper[4861]: I0219 13:31:01.244561 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a","Type":"ContainerStarted","Data":"55e9e83bff1da6a4f3c4c60dbe202de73f4077183db64ccdd0ed5fa347035067"} Feb 19 13:31:01 crc kubenswrapper[4861]: I0219 13:31:01.363278 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7549c5f8db-8jjpm"] Feb 19 13:31:01 crc kubenswrapper[4861]: 
I0219 13:31:01.829889 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6c9494f487-fzm28" Feb 19 13:31:02 crc kubenswrapper[4861]: I0219 13:31:02.253675 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a","Type":"ContainerStarted","Data":"6726e1fe83695be57e870a22efef89e68bbd009b8791859e1a75a341ca4e9ea7"} Feb 19 13:31:02 crc kubenswrapper[4861]: I0219 13:31:02.254138 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7549c5f8db-8jjpm" podUID="61eb2f94-44a8-4db5-9d17-9c4bd7bbc551" containerName="placement-log" containerID="cri-o://858c872d2372a72727c46016fe7cdb02a96c24d499f480e8d0ff260bc99f322f" gracePeriod=30 Feb 19 13:31:02 crc kubenswrapper[4861]: I0219 13:31:02.254625 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7549c5f8db-8jjpm" podUID="61eb2f94-44a8-4db5-9d17-9c4bd7bbc551" containerName="placement-api" containerID="cri-o://cac3d21c0f1778591b54b4a019bb294954bdf9f1e30a0723d99b86255e4dbfe4" gracePeriod=30 Feb 19 13:31:02 crc kubenswrapper[4861]: I0219 13:31:02.281597 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.281578095 podStartE2EDuration="3.281578095s" podCreationTimestamp="2026-02-19 13:30:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:31:02.279961471 +0000 UTC m=+1276.941064709" watchObservedRunningTime="2026-02-19 13:31:02.281578095 +0000 UTC m=+1276.942681323" Feb 19 13:31:03 crc kubenswrapper[4861]: I0219 13:31:03.265582 4861 generic.go:334] "Generic (PLEG): container finished" podID="61eb2f94-44a8-4db5-9d17-9c4bd7bbc551" containerID="858c872d2372a72727c46016fe7cdb02a96c24d499f480e8d0ff260bc99f322f" exitCode=143 Feb 19 13:31:03 crc 
kubenswrapper[4861]: I0219 13:31:03.265689 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7549c5f8db-8jjpm" event={"ID":"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551","Type":"ContainerDied","Data":"858c872d2372a72727c46016fe7cdb02a96c24d499f480e8d0ff260bc99f322f"} Feb 19 13:31:03 crc kubenswrapper[4861]: I0219 13:31:03.834622 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:31:03 crc kubenswrapper[4861]: I0219 13:31:03.835006 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:31:03 crc kubenswrapper[4861]: I0219 13:31:03.835066 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 13:31:03 crc kubenswrapper[4861]: I0219 13:31:03.835786 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a8231b7d6cc8b5ea6124bfdb8ee2cfd7fd221648893a967e4427c88c18dc3ef9"} pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 13:31:03 crc kubenswrapper[4861]: I0219 13:31:03.835876 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" 
containerID="cri-o://a8231b7d6cc8b5ea6124bfdb8ee2cfd7fd221648893a967e4427c88c18dc3ef9" gracePeriod=600 Feb 19 13:31:04 crc kubenswrapper[4861]: I0219 13:31:04.279990 4861 generic.go:334] "Generic (PLEG): container finished" podID="478e6971-05ac-43f2-99a2-cd93644c6227" containerID="a8231b7d6cc8b5ea6124bfdb8ee2cfd7fd221648893a967e4427c88c18dc3ef9" exitCode=0 Feb 19 13:31:04 crc kubenswrapper[4861]: I0219 13:31:04.280014 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerDied","Data":"a8231b7d6cc8b5ea6124bfdb8ee2cfd7fd221648893a967e4427c88c18dc3ef9"} Feb 19 13:31:04 crc kubenswrapper[4861]: I0219 13:31:04.280384 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"8a10bd53a42d4b75d132d094be46db575b69579a23570e97c0f7e4e90137176e"} Feb 19 13:31:04 crc kubenswrapper[4861]: I0219 13:31:04.280438 4861 scope.go:117] "RemoveContainer" containerID="b97bdd517e8a4057d6d42657d06891ca0d7f0204df355e8596a23050ecb1ab6b" Feb 19 13:31:04 crc kubenswrapper[4861]: I0219 13:31:04.582523 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 19 13:31:05 crc kubenswrapper[4861]: I0219 13:31:05.900438 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7549c5f8db-8jjpm" Feb 19 13:31:05 crc kubenswrapper[4861]: I0219 13:31:05.969890 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-internal-tls-certs\") pod \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\" (UID: \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\") " Feb 19 13:31:05 crc kubenswrapper[4861]: I0219 13:31:05.969946 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-public-tls-certs\") pod \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\" (UID: \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\") " Feb 19 13:31:05 crc kubenswrapper[4861]: I0219 13:31:05.970002 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-logs\") pod \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\" (UID: \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\") " Feb 19 13:31:05 crc kubenswrapper[4861]: I0219 13:31:05.970029 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-config-data\") pod \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\" (UID: \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\") " Feb 19 13:31:05 crc kubenswrapper[4861]: I0219 13:31:05.970115 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drn56\" (UniqueName: \"kubernetes.io/projected/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-kube-api-access-drn56\") pod \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\" (UID: \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\") " Feb 19 13:31:05 crc kubenswrapper[4861]: I0219 13:31:05.970207 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-combined-ca-bundle\") pod \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\" (UID: \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\") " Feb 19 13:31:05 crc kubenswrapper[4861]: I0219 13:31:05.970238 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-scripts\") pod \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\" (UID: \"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551\") " Feb 19 13:31:05 crc kubenswrapper[4861]: I0219 13:31:05.972022 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-logs" (OuterVolumeSpecName: "logs") pod "61eb2f94-44a8-4db5-9d17-9c4bd7bbc551" (UID: "61eb2f94-44a8-4db5-9d17-9c4bd7bbc551"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:31:05 crc kubenswrapper[4861]: I0219 13:31:05.974999 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-scripts" (OuterVolumeSpecName: "scripts") pod "61eb2f94-44a8-4db5-9d17-9c4bd7bbc551" (UID: "61eb2f94-44a8-4db5-9d17-9c4bd7bbc551"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.005183 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-kube-api-access-drn56" (OuterVolumeSpecName: "kube-api-access-drn56") pod "61eb2f94-44a8-4db5-9d17-9c4bd7bbc551" (UID: "61eb2f94-44a8-4db5-9d17-9c4bd7bbc551"). InnerVolumeSpecName "kube-api-access-drn56". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.065385 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61eb2f94-44a8-4db5-9d17-9c4bd7bbc551" (UID: "61eb2f94-44a8-4db5-9d17-9c4bd7bbc551"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.072926 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drn56\" (UniqueName: \"kubernetes.io/projected/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-kube-api-access-drn56\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.072950 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.072960 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.072969 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.078969 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 19 13:31:06 crc kubenswrapper[4861]: E0219 13:31:06.079365 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61eb2f94-44a8-4db5-9d17-9c4bd7bbc551" containerName="placement-log" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.079384 4861 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="61eb2f94-44a8-4db5-9d17-9c4bd7bbc551" containerName="placement-log" Feb 19 13:31:06 crc kubenswrapper[4861]: E0219 13:31:06.079415 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61eb2f94-44a8-4db5-9d17-9c4bd7bbc551" containerName="placement-api" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.079436 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="61eb2f94-44a8-4db5-9d17-9c4bd7bbc551" containerName="placement-api" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.079588 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="61eb2f94-44a8-4db5-9d17-9c4bd7bbc551" containerName="placement-log" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.079599 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="61eb2f94-44a8-4db5-9d17-9c4bd7bbc551" containerName="placement-api" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.080316 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.080522 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.081957 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "61eb2f94-44a8-4db5-9d17-9c4bd7bbc551" (UID: "61eb2f94-44a8-4db5-9d17-9c4bd7bbc551"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.084784 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.084981 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-xjhd6" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.085236 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.101573 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-config-data" (OuterVolumeSpecName: "config-data") pod "61eb2f94-44a8-4db5-9d17-9c4bd7bbc551" (UID: "61eb2f94-44a8-4db5-9d17-9c4bd7bbc551"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.109758 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "61eb2f94-44a8-4db5-9d17-9c4bd7bbc551" (UID: "61eb2f94-44a8-4db5-9d17-9c4bd7bbc551"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.176570 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n8d6\" (UniqueName: \"kubernetes.io/projected/b728b185-f17c-4962-affd-10559eb0e88c-kube-api-access-9n8d6\") pod \"openstackclient\" (UID: \"b728b185-f17c-4962-affd-10559eb0e88c\") " pod="openstack/openstackclient" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.176667 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b728b185-f17c-4962-affd-10559eb0e88c-openstack-config\") pod \"openstackclient\" (UID: \"b728b185-f17c-4962-affd-10559eb0e88c\") " pod="openstack/openstackclient" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.176700 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b728b185-f17c-4962-affd-10559eb0e88c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b728b185-f17c-4962-affd-10559eb0e88c\") " pod="openstack/openstackclient" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.176743 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b728b185-f17c-4962-affd-10559eb0e88c-openstack-config-secret\") pod \"openstackclient\" (UID: \"b728b185-f17c-4962-affd-10559eb0e88c\") " pod="openstack/openstackclient" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.176836 4861 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.176852 4861 reconciler_common.go:293] "Volume 
detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.176863 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.278343 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n8d6\" (UniqueName: \"kubernetes.io/projected/b728b185-f17c-4962-affd-10559eb0e88c-kube-api-access-9n8d6\") pod \"openstackclient\" (UID: \"b728b185-f17c-4962-affd-10559eb0e88c\") " pod="openstack/openstackclient" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.278729 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b728b185-f17c-4962-affd-10559eb0e88c-openstack-config\") pod \"openstackclient\" (UID: \"b728b185-f17c-4962-affd-10559eb0e88c\") " pod="openstack/openstackclient" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.278758 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b728b185-f17c-4962-affd-10559eb0e88c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b728b185-f17c-4962-affd-10559eb0e88c\") " pod="openstack/openstackclient" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.278802 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b728b185-f17c-4962-affd-10559eb0e88c-openstack-config-secret\") pod \"openstackclient\" (UID: \"b728b185-f17c-4962-affd-10559eb0e88c\") " pod="openstack/openstackclient" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.279810 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b728b185-f17c-4962-affd-10559eb0e88c-openstack-config\") pod \"openstackclient\" (UID: \"b728b185-f17c-4962-affd-10559eb0e88c\") " pod="openstack/openstackclient" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.283910 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b728b185-f17c-4962-affd-10559eb0e88c-openstack-config-secret\") pod \"openstackclient\" (UID: \"b728b185-f17c-4962-affd-10559eb0e88c\") " pod="openstack/openstackclient" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.284194 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b728b185-f17c-4962-affd-10559eb0e88c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b728b185-f17c-4962-affd-10559eb0e88c\") " pod="openstack/openstackclient" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.294335 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n8d6\" (UniqueName: \"kubernetes.io/projected/b728b185-f17c-4962-affd-10559eb0e88c-kube-api-access-9n8d6\") pod \"openstackclient\" (UID: \"b728b185-f17c-4962-affd-10559eb0e88c\") " pod="openstack/openstackclient" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.308709 4861 generic.go:334] "Generic (PLEG): container finished" podID="61eb2f94-44a8-4db5-9d17-9c4bd7bbc551" containerID="cac3d21c0f1778591b54b4a019bb294954bdf9f1e30a0723d99b86255e4dbfe4" exitCode=0 Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.308753 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7549c5f8db-8jjpm" event={"ID":"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551","Type":"ContainerDied","Data":"cac3d21c0f1778591b54b4a019bb294954bdf9f1e30a0723d99b86255e4dbfe4"} Feb 19 13:31:06 crc kubenswrapper[4861]: 
I0219 13:31:06.308780 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7549c5f8db-8jjpm" event={"ID":"61eb2f94-44a8-4db5-9d17-9c4bd7bbc551","Type":"ContainerDied","Data":"0affed1b8f1419a02b34fe57e28102de06b58b19207138fb1e042bc63f558d6e"} Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.308797 4861 scope.go:117] "RemoveContainer" containerID="cac3d21c0f1778591b54b4a019bb294954bdf9f1e30a0723d99b86255e4dbfe4" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.308921 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7549c5f8db-8jjpm" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.347463 4861 scope.go:117] "RemoveContainer" containerID="858c872d2372a72727c46016fe7cdb02a96c24d499f480e8d0ff260bc99f322f" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.375185 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.376074 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.410297 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7549c5f8db-8jjpm"] Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.410350 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7549c5f8db-8jjpm"] Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.410484 4861 scope.go:117] "RemoveContainer" containerID="cac3d21c0f1778591b54b4a019bb294954bdf9f1e30a0723d99b86255e4dbfe4" Feb 19 13:31:06 crc kubenswrapper[4861]: E0219 13:31:06.411798 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cac3d21c0f1778591b54b4a019bb294954bdf9f1e30a0723d99b86255e4dbfe4\": container with ID starting with cac3d21c0f1778591b54b4a019bb294954bdf9f1e30a0723d99b86255e4dbfe4 not found: ID does not exist" containerID="cac3d21c0f1778591b54b4a019bb294954bdf9f1e30a0723d99b86255e4dbfe4" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.411908 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cac3d21c0f1778591b54b4a019bb294954bdf9f1e30a0723d99b86255e4dbfe4"} err="failed to get container status \"cac3d21c0f1778591b54b4a019bb294954bdf9f1e30a0723d99b86255e4dbfe4\": rpc error: code = NotFound desc = could not find container \"cac3d21c0f1778591b54b4a019bb294954bdf9f1e30a0723d99b86255e4dbfe4\": container with ID starting with cac3d21c0f1778591b54b4a019bb294954bdf9f1e30a0723d99b86255e4dbfe4 not found: ID does not exist" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.411947 4861 scope.go:117] "RemoveContainer" containerID="858c872d2372a72727c46016fe7cdb02a96c24d499f480e8d0ff260bc99f322f" Feb 19 13:31:06 crc kubenswrapper[4861]: E0219 13:31:06.412389 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"858c872d2372a72727c46016fe7cdb02a96c24d499f480e8d0ff260bc99f322f\": container with ID starting with 858c872d2372a72727c46016fe7cdb02a96c24d499f480e8d0ff260bc99f322f not found: ID does not exist" containerID="858c872d2372a72727c46016fe7cdb02a96c24d499f480e8d0ff260bc99f322f" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.412504 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"858c872d2372a72727c46016fe7cdb02a96c24d499f480e8d0ff260bc99f322f"} err="failed to get container status \"858c872d2372a72727c46016fe7cdb02a96c24d499f480e8d0ff260bc99f322f\": rpc error: code = NotFound desc = could not find container \"858c872d2372a72727c46016fe7cdb02a96c24d499f480e8d0ff260bc99f322f\": container with ID starting with 858c872d2372a72727c46016fe7cdb02a96c24d499f480e8d0ff260bc99f322f not found: ID does not exist" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.419015 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.426098 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.427269 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.433317 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.481008 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4524053f-0367-4216-8916-1b5315dbe8d8-openstack-config\") pod \"openstackclient\" (UID: \"4524053f-0367-4216-8916-1b5315dbe8d8\") " pod="openstack/openstackclient" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.481055 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsnvs\" (UniqueName: \"kubernetes.io/projected/4524053f-0367-4216-8916-1b5315dbe8d8-kube-api-access-xsnvs\") pod \"openstackclient\" (UID: \"4524053f-0367-4216-8916-1b5315dbe8d8\") " pod="openstack/openstackclient" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.481091 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4524053f-0367-4216-8916-1b5315dbe8d8-openstack-config-secret\") pod \"openstackclient\" (UID: \"4524053f-0367-4216-8916-1b5315dbe8d8\") " pod="openstack/openstackclient" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.481177 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4524053f-0367-4216-8916-1b5315dbe8d8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4524053f-0367-4216-8916-1b5315dbe8d8\") " pod="openstack/openstackclient" Feb 19 13:31:06 crc kubenswrapper[4861]: E0219 13:31:06.536247 4861 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 19 13:31:06 crc kubenswrapper[4861]: rpc error: code = Unknown desc = failed to create pod 
network sandbox k8s_openstackclient_openstack_b728b185-f17c-4962-affd-10559eb0e88c_0(4f9c2bf6593263043b1d6052efde03759b0689f3f13ce2d0483c809af91a353f): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4f9c2bf6593263043b1d6052efde03759b0689f3f13ce2d0483c809af91a353f" Netns:"/var/run/netns/f582f8f5-24d6-4d84-abed-694847c9355d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=4f9c2bf6593263043b1d6052efde03759b0689f3f13ce2d0483c809af91a353f;K8S_POD_UID=b728b185-f17c-4962-affd-10559eb0e88c" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/b728b185-f17c-4962-affd-10559eb0e88c]: expected pod UID "b728b185-f17c-4962-affd-10559eb0e88c" but got "4524053f-0367-4216-8916-1b5315dbe8d8" from Kube API Feb 19 13:31:06 crc kubenswrapper[4861]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 13:31:06 crc kubenswrapper[4861]: > Feb 19 13:31:06 crc kubenswrapper[4861]: E0219 13:31:06.536322 4861 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 19 13:31:06 crc kubenswrapper[4861]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_b728b185-f17c-4962-affd-10559eb0e88c_0(4f9c2bf6593263043b1d6052efde03759b0689f3f13ce2d0483c809af91a353f): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd 
(shim): CNI request failed with status 400: 'ContainerID:"4f9c2bf6593263043b1d6052efde03759b0689f3f13ce2d0483c809af91a353f" Netns:"/var/run/netns/f582f8f5-24d6-4d84-abed-694847c9355d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=4f9c2bf6593263043b1d6052efde03759b0689f3f13ce2d0483c809af91a353f;K8S_POD_UID=b728b185-f17c-4962-affd-10559eb0e88c" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/b728b185-f17c-4962-affd-10559eb0e88c]: expected pod UID "b728b185-f17c-4962-affd-10559eb0e88c" but got "4524053f-0367-4216-8916-1b5315dbe8d8" from Kube API Feb 19 13:31:06 crc kubenswrapper[4861]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 13:31:06 crc kubenswrapper[4861]: > pod="openstack/openstackclient" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.582916 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsnvs\" (UniqueName: \"kubernetes.io/projected/4524053f-0367-4216-8916-1b5315dbe8d8-kube-api-access-xsnvs\") pod \"openstackclient\" (UID: \"4524053f-0367-4216-8916-1b5315dbe8d8\") " pod="openstack/openstackclient" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.582993 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4524053f-0367-4216-8916-1b5315dbe8d8-openstack-config-secret\") pod \"openstackclient\" (UID: \"4524053f-0367-4216-8916-1b5315dbe8d8\") " pod="openstack/openstackclient" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 
13:31:06.583091 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4524053f-0367-4216-8916-1b5315dbe8d8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4524053f-0367-4216-8916-1b5315dbe8d8\") " pod="openstack/openstackclient" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.583156 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4524053f-0367-4216-8916-1b5315dbe8d8-openstack-config\") pod \"openstackclient\" (UID: \"4524053f-0367-4216-8916-1b5315dbe8d8\") " pod="openstack/openstackclient" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.584368 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4524053f-0367-4216-8916-1b5315dbe8d8-openstack-config\") pod \"openstackclient\" (UID: \"4524053f-0367-4216-8916-1b5315dbe8d8\") " pod="openstack/openstackclient" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.589010 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4524053f-0367-4216-8916-1b5315dbe8d8-openstack-config-secret\") pod \"openstackclient\" (UID: \"4524053f-0367-4216-8916-1b5315dbe8d8\") " pod="openstack/openstackclient" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.590639 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4524053f-0367-4216-8916-1b5315dbe8d8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4524053f-0367-4216-8916-1b5315dbe8d8\") " pod="openstack/openstackclient" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.601605 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsnvs\" (UniqueName: 
\"kubernetes.io/projected/4524053f-0367-4216-8916-1b5315dbe8d8-kube-api-access-xsnvs\") pod \"openstackclient\" (UID: \"4524053f-0367-4216-8916-1b5315dbe8d8\") " pod="openstack/openstackclient" Feb 19 13:31:06 crc kubenswrapper[4861]: I0219 13:31:06.752835 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 13:31:07 crc kubenswrapper[4861]: I0219 13:31:07.200633 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 13:31:07 crc kubenswrapper[4861]: I0219 13:31:07.335652 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 13:31:07 crc kubenswrapper[4861]: I0219 13:31:07.337181 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4524053f-0367-4216-8916-1b5315dbe8d8","Type":"ContainerStarted","Data":"397b32fa4b225e1bf839bbe4f55e3d196520f3912d7df859717d736623fd1961"} Feb 19 13:31:07 crc kubenswrapper[4861]: I0219 13:31:07.341909 4861 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="b728b185-f17c-4962-affd-10559eb0e88c" podUID="4524053f-0367-4216-8916-1b5315dbe8d8" Feb 19 13:31:07 crc kubenswrapper[4861]: I0219 13:31:07.350038 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 13:31:07 crc kubenswrapper[4861]: I0219 13:31:07.406273 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b728b185-f17c-4962-affd-10559eb0e88c-openstack-config-secret\") pod \"b728b185-f17c-4962-affd-10559eb0e88c\" (UID: \"b728b185-f17c-4962-affd-10559eb0e88c\") " Feb 19 13:31:07 crc kubenswrapper[4861]: I0219 13:31:07.406326 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b728b185-f17c-4962-affd-10559eb0e88c-combined-ca-bundle\") pod \"b728b185-f17c-4962-affd-10559eb0e88c\" (UID: \"b728b185-f17c-4962-affd-10559eb0e88c\") " Feb 19 13:31:07 crc kubenswrapper[4861]: I0219 13:31:07.406359 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n8d6\" (UniqueName: \"kubernetes.io/projected/b728b185-f17c-4962-affd-10559eb0e88c-kube-api-access-9n8d6\") pod \"b728b185-f17c-4962-affd-10559eb0e88c\" (UID: \"b728b185-f17c-4962-affd-10559eb0e88c\") " Feb 19 13:31:07 crc kubenswrapper[4861]: I0219 13:31:07.406384 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b728b185-f17c-4962-affd-10559eb0e88c-openstack-config\") pod \"b728b185-f17c-4962-affd-10559eb0e88c\" (UID: \"b728b185-f17c-4962-affd-10559eb0e88c\") " Feb 19 13:31:07 crc kubenswrapper[4861]: I0219 13:31:07.407438 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b728b185-f17c-4962-affd-10559eb0e88c-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "b728b185-f17c-4962-affd-10559eb0e88c" (UID: "b728b185-f17c-4962-affd-10559eb0e88c"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:31:07 crc kubenswrapper[4861]: I0219 13:31:07.411974 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b728b185-f17c-4962-affd-10559eb0e88c-kube-api-access-9n8d6" (OuterVolumeSpecName: "kube-api-access-9n8d6") pod "b728b185-f17c-4962-affd-10559eb0e88c" (UID: "b728b185-f17c-4962-affd-10559eb0e88c"). InnerVolumeSpecName "kube-api-access-9n8d6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:31:07 crc kubenswrapper[4861]: I0219 13:31:07.412289 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b728b185-f17c-4962-affd-10559eb0e88c-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "b728b185-f17c-4962-affd-10559eb0e88c" (UID: "b728b185-f17c-4962-affd-10559eb0e88c"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:07 crc kubenswrapper[4861]: I0219 13:31:07.412627 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b728b185-f17c-4962-affd-10559eb0e88c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b728b185-f17c-4962-affd-10559eb0e88c" (UID: "b728b185-f17c-4962-affd-10559eb0e88c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:07 crc kubenswrapper[4861]: I0219 13:31:07.510926 4861 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b728b185-f17c-4962-affd-10559eb0e88c-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:07 crc kubenswrapper[4861]: I0219 13:31:07.510966 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b728b185-f17c-4962-affd-10559eb0e88c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:07 crc kubenswrapper[4861]: I0219 13:31:07.510978 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n8d6\" (UniqueName: \"kubernetes.io/projected/b728b185-f17c-4962-affd-10559eb0e88c-kube-api-access-9n8d6\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:07 crc kubenswrapper[4861]: I0219 13:31:07.510990 4861 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b728b185-f17c-4962-affd-10559eb0e88c-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:07 crc kubenswrapper[4861]: I0219 13:31:07.986879 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61eb2f94-44a8-4db5-9d17-9c4bd7bbc551" path="/var/lib/kubelet/pods/61eb2f94-44a8-4db5-9d17-9c4bd7bbc551/volumes" Feb 19 13:31:07 crc kubenswrapper[4861]: I0219 13:31:07.987992 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b728b185-f17c-4962-affd-10559eb0e88c" path="/var/lib/kubelet/pods/b728b185-f17c-4962-affd-10559eb0e88c/volumes" Feb 19 13:31:08 crc kubenswrapper[4861]: I0219 13:31:08.346484 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 13:31:08 crc kubenswrapper[4861]: I0219 13:31:08.354913 4861 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="b728b185-f17c-4962-affd-10559eb0e88c" podUID="4524053f-0367-4216-8916-1b5315dbe8d8" Feb 19 13:31:09 crc kubenswrapper[4861]: I0219 13:31:09.590483 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-679b4d4449-j6f75"] Feb 19 13:31:09 crc kubenswrapper[4861]: I0219 13:31:09.594450 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-679b4d4449-j6f75" Feb 19 13:31:09 crc kubenswrapper[4861]: I0219 13:31:09.596696 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 19 13:31:09 crc kubenswrapper[4861]: I0219 13:31:09.596799 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 19 13:31:09 crc kubenswrapper[4861]: I0219 13:31:09.596884 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 19 13:31:09 crc kubenswrapper[4861]: I0219 13:31:09.607897 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-679b4d4449-j6f75"] Feb 19 13:31:09 crc kubenswrapper[4861]: I0219 13:31:09.649397 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwxxx\" (UniqueName: \"kubernetes.io/projected/c3559fea-5929-4904-9be2-136f10ea1023-kube-api-access-vwxxx\") pod \"swift-proxy-679b4d4449-j6f75\" (UID: \"c3559fea-5929-4904-9be2-136f10ea1023\") " pod="openstack/swift-proxy-679b4d4449-j6f75" Feb 19 13:31:09 crc kubenswrapper[4861]: I0219 13:31:09.649461 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/c3559fea-5929-4904-9be2-136f10ea1023-etc-swift\") pod \"swift-proxy-679b4d4449-j6f75\" (UID: \"c3559fea-5929-4904-9be2-136f10ea1023\") " pod="openstack/swift-proxy-679b4d4449-j6f75" Feb 19 13:31:09 crc kubenswrapper[4861]: I0219 13:31:09.649509 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3559fea-5929-4904-9be2-136f10ea1023-combined-ca-bundle\") pod \"swift-proxy-679b4d4449-j6f75\" (UID: \"c3559fea-5929-4904-9be2-136f10ea1023\") " pod="openstack/swift-proxy-679b4d4449-j6f75" Feb 19 13:31:09 crc kubenswrapper[4861]: I0219 13:31:09.649534 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3559fea-5929-4904-9be2-136f10ea1023-internal-tls-certs\") pod \"swift-proxy-679b4d4449-j6f75\" (UID: \"c3559fea-5929-4904-9be2-136f10ea1023\") " pod="openstack/swift-proxy-679b4d4449-j6f75" Feb 19 13:31:09 crc kubenswrapper[4861]: I0219 13:31:09.649552 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3559fea-5929-4904-9be2-136f10ea1023-public-tls-certs\") pod \"swift-proxy-679b4d4449-j6f75\" (UID: \"c3559fea-5929-4904-9be2-136f10ea1023\") " pod="openstack/swift-proxy-679b4d4449-j6f75" Feb 19 13:31:09 crc kubenswrapper[4861]: I0219 13:31:09.649578 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3559fea-5929-4904-9be2-136f10ea1023-run-httpd\") pod \"swift-proxy-679b4d4449-j6f75\" (UID: \"c3559fea-5929-4904-9be2-136f10ea1023\") " pod="openstack/swift-proxy-679b4d4449-j6f75" Feb 19 13:31:09 crc kubenswrapper[4861]: I0219 13:31:09.649600 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3559fea-5929-4904-9be2-136f10ea1023-config-data\") pod \"swift-proxy-679b4d4449-j6f75\" (UID: \"c3559fea-5929-4904-9be2-136f10ea1023\") " pod="openstack/swift-proxy-679b4d4449-j6f75" Feb 19 13:31:09 crc kubenswrapper[4861]: I0219 13:31:09.649682 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3559fea-5929-4904-9be2-136f10ea1023-log-httpd\") pod \"swift-proxy-679b4d4449-j6f75\" (UID: \"c3559fea-5929-4904-9be2-136f10ea1023\") " pod="openstack/swift-proxy-679b4d4449-j6f75" Feb 19 13:31:09 crc kubenswrapper[4861]: I0219 13:31:09.751404 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3559fea-5929-4904-9be2-136f10ea1023-combined-ca-bundle\") pod \"swift-proxy-679b4d4449-j6f75\" (UID: \"c3559fea-5929-4904-9be2-136f10ea1023\") " pod="openstack/swift-proxy-679b4d4449-j6f75" Feb 19 13:31:09 crc kubenswrapper[4861]: I0219 13:31:09.751493 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3559fea-5929-4904-9be2-136f10ea1023-internal-tls-certs\") pod \"swift-proxy-679b4d4449-j6f75\" (UID: \"c3559fea-5929-4904-9be2-136f10ea1023\") " pod="openstack/swift-proxy-679b4d4449-j6f75" Feb 19 13:31:09 crc kubenswrapper[4861]: I0219 13:31:09.751522 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3559fea-5929-4904-9be2-136f10ea1023-public-tls-certs\") pod \"swift-proxy-679b4d4449-j6f75\" (UID: \"c3559fea-5929-4904-9be2-136f10ea1023\") " pod="openstack/swift-proxy-679b4d4449-j6f75" Feb 19 13:31:09 crc kubenswrapper[4861]: I0219 13:31:09.751569 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c3559fea-5929-4904-9be2-136f10ea1023-run-httpd\") pod \"swift-proxy-679b4d4449-j6f75\" (UID: \"c3559fea-5929-4904-9be2-136f10ea1023\") " pod="openstack/swift-proxy-679b4d4449-j6f75" Feb 19 13:31:09 crc kubenswrapper[4861]: I0219 13:31:09.751591 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3559fea-5929-4904-9be2-136f10ea1023-config-data\") pod \"swift-proxy-679b4d4449-j6f75\" (UID: \"c3559fea-5929-4904-9be2-136f10ea1023\") " pod="openstack/swift-proxy-679b4d4449-j6f75" Feb 19 13:31:09 crc kubenswrapper[4861]: I0219 13:31:09.751674 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3559fea-5929-4904-9be2-136f10ea1023-log-httpd\") pod \"swift-proxy-679b4d4449-j6f75\" (UID: \"c3559fea-5929-4904-9be2-136f10ea1023\") " pod="openstack/swift-proxy-679b4d4449-j6f75" Feb 19 13:31:09 crc kubenswrapper[4861]: I0219 13:31:09.751707 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwxxx\" (UniqueName: \"kubernetes.io/projected/c3559fea-5929-4904-9be2-136f10ea1023-kube-api-access-vwxxx\") pod \"swift-proxy-679b4d4449-j6f75\" (UID: \"c3559fea-5929-4904-9be2-136f10ea1023\") " pod="openstack/swift-proxy-679b4d4449-j6f75" Feb 19 13:31:09 crc kubenswrapper[4861]: I0219 13:31:09.751742 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3559fea-5929-4904-9be2-136f10ea1023-etc-swift\") pod \"swift-proxy-679b4d4449-j6f75\" (UID: \"c3559fea-5929-4904-9be2-136f10ea1023\") " pod="openstack/swift-proxy-679b4d4449-j6f75" Feb 19 13:31:09 crc kubenswrapper[4861]: I0219 13:31:09.752372 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3559fea-5929-4904-9be2-136f10ea1023-run-httpd\") pod 
\"swift-proxy-679b4d4449-j6f75\" (UID: \"c3559fea-5929-4904-9be2-136f10ea1023\") " pod="openstack/swift-proxy-679b4d4449-j6f75" Feb 19 13:31:09 crc kubenswrapper[4861]: I0219 13:31:09.752928 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3559fea-5929-4904-9be2-136f10ea1023-log-httpd\") pod \"swift-proxy-679b4d4449-j6f75\" (UID: \"c3559fea-5929-4904-9be2-136f10ea1023\") " pod="openstack/swift-proxy-679b4d4449-j6f75" Feb 19 13:31:09 crc kubenswrapper[4861]: I0219 13:31:09.757032 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3559fea-5929-4904-9be2-136f10ea1023-internal-tls-certs\") pod \"swift-proxy-679b4d4449-j6f75\" (UID: \"c3559fea-5929-4904-9be2-136f10ea1023\") " pod="openstack/swift-proxy-679b4d4449-j6f75" Feb 19 13:31:09 crc kubenswrapper[4861]: I0219 13:31:09.757682 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3559fea-5929-4904-9be2-136f10ea1023-public-tls-certs\") pod \"swift-proxy-679b4d4449-j6f75\" (UID: \"c3559fea-5929-4904-9be2-136f10ea1023\") " pod="openstack/swift-proxy-679b4d4449-j6f75" Feb 19 13:31:09 crc kubenswrapper[4861]: I0219 13:31:09.758461 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3559fea-5929-4904-9be2-136f10ea1023-etc-swift\") pod \"swift-proxy-679b4d4449-j6f75\" (UID: \"c3559fea-5929-4904-9be2-136f10ea1023\") " pod="openstack/swift-proxy-679b4d4449-j6f75" Feb 19 13:31:09 crc kubenswrapper[4861]: I0219 13:31:09.762108 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3559fea-5929-4904-9be2-136f10ea1023-config-data\") pod \"swift-proxy-679b4d4449-j6f75\" (UID: \"c3559fea-5929-4904-9be2-136f10ea1023\") " 
pod="openstack/swift-proxy-679b4d4449-j6f75" Feb 19 13:31:09 crc kubenswrapper[4861]: I0219 13:31:09.765931 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3559fea-5929-4904-9be2-136f10ea1023-combined-ca-bundle\") pod \"swift-proxy-679b4d4449-j6f75\" (UID: \"c3559fea-5929-4904-9be2-136f10ea1023\") " pod="openstack/swift-proxy-679b4d4449-j6f75" Feb 19 13:31:09 crc kubenswrapper[4861]: I0219 13:31:09.775023 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwxxx\" (UniqueName: \"kubernetes.io/projected/c3559fea-5929-4904-9be2-136f10ea1023-kube-api-access-vwxxx\") pod \"swift-proxy-679b4d4449-j6f75\" (UID: \"c3559fea-5929-4904-9be2-136f10ea1023\") " pod="openstack/swift-proxy-679b4d4449-j6f75" Feb 19 13:31:09 crc kubenswrapper[4861]: I0219 13:31:09.802876 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 19 13:31:09 crc kubenswrapper[4861]: I0219 13:31:09.929865 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-679b4d4449-j6f75" Feb 19 13:31:10 crc kubenswrapper[4861]: I0219 13:31:10.504714 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-679b4d4449-j6f75"] Feb 19 13:31:11 crc kubenswrapper[4861]: I0219 13:31:11.406765 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-679b4d4449-j6f75" event={"ID":"c3559fea-5929-4904-9be2-136f10ea1023","Type":"ContainerStarted","Data":"2b9f559ff1565336f7300a263865547c2e19ce23a755a63c72cc55c3888d6f1a"} Feb 19 13:31:11 crc kubenswrapper[4861]: I0219 13:31:11.407575 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-679b4d4449-j6f75" event={"ID":"c3559fea-5929-4904-9be2-136f10ea1023","Type":"ContainerStarted","Data":"01610bb6a7e62a122f0414517568d090b81343b2906e902965fa53da6cb1417b"} Feb 19 13:31:12 crc kubenswrapper[4861]: I0219 13:31:12.421456 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-679b4d4449-j6f75" event={"ID":"c3559fea-5929-4904-9be2-136f10ea1023","Type":"ContainerStarted","Data":"a06e4f44d0e8390c8484158ea064a59374a08883feefb91477c32c4689fb3d14"} Feb 19 13:31:13 crc kubenswrapper[4861]: I0219 13:31:13.430147 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-679b4d4449-j6f75" Feb 19 13:31:13 crc kubenswrapper[4861]: I0219 13:31:13.430565 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-679b4d4449-j6f75" Feb 19 13:31:13 crc kubenswrapper[4861]: I0219 13:31:13.466041 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-679b4d4449-j6f75" podStartSLOduration=4.466018712 podStartE2EDuration="4.466018712s" podCreationTimestamp="2026-02-19 13:31:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:31:13.45886177 +0000 UTC 
m=+1288.119964998" watchObservedRunningTime="2026-02-19 13:31:13.466018712 +0000 UTC m=+1288.127121930" Feb 19 13:31:14 crc kubenswrapper[4861]: I0219 13:31:14.591224 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.168:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 13:31:16 crc kubenswrapper[4861]: I0219 13:31:16.193658 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:31:16 crc kubenswrapper[4861]: I0219 13:31:16.194577 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ee4275ce-a2c5-4129-858f-48409cb928d7" containerName="ceilometer-central-agent" containerID="cri-o://0ca383abd9d8d7aa0aaec2c634d66bddee4bcc2143ed5e972f87fad39223bd34" gracePeriod=30 Feb 19 13:31:16 crc kubenswrapper[4861]: I0219 13:31:16.194728 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ee4275ce-a2c5-4129-858f-48409cb928d7" containerName="proxy-httpd" containerID="cri-o://1848c1ea92b73faedba64818bb40baac579256e0a56d4c003909ba690ac4045a" gracePeriod=30 Feb 19 13:31:16 crc kubenswrapper[4861]: I0219 13:31:16.194784 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ee4275ce-a2c5-4129-858f-48409cb928d7" containerName="sg-core" containerID="cri-o://bbc809fb7dfe766f6565a4c36333e9c1f149cd5154595f371e608aa4a060e813" gracePeriod=30 Feb 19 13:31:16 crc kubenswrapper[4861]: I0219 13:31:16.194822 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ee4275ce-a2c5-4129-858f-48409cb928d7" containerName="ceilometer-notification-agent" 
containerID="cri-o://de88dd247c3060a693e657e9d8814c568ae40cd7210ab38d92b7095268767f67" gracePeriod=30 Feb 19 13:31:16 crc kubenswrapper[4861]: I0219 13:31:16.213989 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ee4275ce-a2c5-4129-858f-48409cb928d7" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.169:3000/\": EOF" Feb 19 13:31:16 crc kubenswrapper[4861]: I0219 13:31:16.462203 4861 generic.go:334] "Generic (PLEG): container finished" podID="ee4275ce-a2c5-4129-858f-48409cb928d7" containerID="bbc809fb7dfe766f6565a4c36333e9c1f149cd5154595f371e608aa4a060e813" exitCode=2 Feb 19 13:31:16 crc kubenswrapper[4861]: I0219 13:31:16.462520 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee4275ce-a2c5-4129-858f-48409cb928d7","Type":"ContainerDied","Data":"bbc809fb7dfe766f6565a4c36333e9c1f149cd5154595f371e608aa4a060e813"} Feb 19 13:31:17 crc kubenswrapper[4861]: I0219 13:31:17.206959 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ee4275ce-a2c5-4129-858f-48409cb928d7" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.169:3000/\": dial tcp 10.217.0.169:3000: connect: connection refused" Feb 19 13:31:17 crc kubenswrapper[4861]: I0219 13:31:17.474733 4861 generic.go:334] "Generic (PLEG): container finished" podID="ee4275ce-a2c5-4129-858f-48409cb928d7" containerID="1848c1ea92b73faedba64818bb40baac579256e0a56d4c003909ba690ac4045a" exitCode=0 Feb 19 13:31:17 crc kubenswrapper[4861]: I0219 13:31:17.474769 4861 generic.go:334] "Generic (PLEG): container finished" podID="ee4275ce-a2c5-4129-858f-48409cb928d7" containerID="0ca383abd9d8d7aa0aaec2c634d66bddee4bcc2143ed5e972f87fad39223bd34" exitCode=0 Feb 19 13:31:17 crc kubenswrapper[4861]: I0219 13:31:17.474779 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ee4275ce-a2c5-4129-858f-48409cb928d7","Type":"ContainerDied","Data":"1848c1ea92b73faedba64818bb40baac579256e0a56d4c003909ba690ac4045a"} Feb 19 13:31:17 crc kubenswrapper[4861]: I0219 13:31:17.474848 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee4275ce-a2c5-4129-858f-48409cb928d7","Type":"ContainerDied","Data":"0ca383abd9d8d7aa0aaec2c634d66bddee4bcc2143ed5e972f87fad39223bd34"} Feb 19 13:31:18 crc kubenswrapper[4861]: I0219 13:31:18.491155 4861 generic.go:334] "Generic (PLEG): container finished" podID="ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a" containerID="417d3a4d3982265f6c2a3418f27cd6a183f673fb9f9333334e21c9117e450900" exitCode=137 Feb 19 13:31:18 crc kubenswrapper[4861]: I0219 13:31:18.491298 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a","Type":"ContainerDied","Data":"417d3a4d3982265f6c2a3418f27cd6a183f673fb9f9333334e21c9117e450900"} Feb 19 13:31:19 crc kubenswrapper[4861]: I0219 13:31:19.548022 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.168:8776/healthcheck\": dial tcp 10.217.0.168:8776: connect: connection refused" Feb 19 13:31:19 crc kubenswrapper[4861]: I0219 13:31:19.939812 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-679b4d4449-j6f75" Feb 19 13:31:19 crc kubenswrapper[4861]: I0219 13:31:19.943745 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-679b4d4449-j6f75" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.035071 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.096822 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-etc-machine-id\") pod \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\" (UID: \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\") " Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.096984 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69qv5\" (UniqueName: \"kubernetes.io/projected/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-kube-api-access-69qv5\") pod \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\" (UID: \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\") " Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.097091 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-config-data\") pod \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\" (UID: \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\") " Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.097110 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-combined-ca-bundle\") pod \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\" (UID: \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\") " Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.097156 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-config-data-custom\") pod \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\" (UID: \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\") " Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.097203 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-scripts\") pod \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\" (UID: \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\") " Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.097225 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-logs\") pod \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\" (UID: \"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a\") " Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.097819 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a" (UID: "ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.101081 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-logs" (OuterVolumeSpecName: "logs") pod "ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a" (UID: "ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.104777 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-scripts" (OuterVolumeSpecName: "scripts") pod "ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a" (UID: "ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.105516 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a" (UID: "ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.106611 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-kube-api-access-69qv5" (OuterVolumeSpecName: "kube-api-access-69qv5") pod "ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a" (UID: "ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a"). InnerVolumeSpecName "kube-api-access-69qv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.143441 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a" (UID: "ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.197890 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-config-data" (OuterVolumeSpecName: "config-data") pod "ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a" (UID: "ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.199925 4861 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.199963 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69qv5\" (UniqueName: \"kubernetes.io/projected/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-kube-api-access-69qv5\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.199979 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.199990 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.200001 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.200011 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.200022 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.520821 4861 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4524053f-0367-4216-8916-1b5315dbe8d8","Type":"ContainerStarted","Data":"5ed4b552bcd4114fdac5d241534550b1d5ddde0aea8ceaa83bb843a83f153d3e"} Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.524559 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a","Type":"ContainerDied","Data":"cbc496a448fc72cc7cfaf8955211dcef40b645017cd615c9efe322768d3467ea"} Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.524599 4861 scope.go:117] "RemoveContainer" containerID="417d3a4d3982265f6c2a3418f27cd6a183f673fb9f9333334e21c9117e450900" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.524696 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.531508 4861 generic.go:334] "Generic (PLEG): container finished" podID="ee4275ce-a2c5-4129-858f-48409cb928d7" containerID="de88dd247c3060a693e657e9d8814c568ae40cd7210ab38d92b7095268767f67" exitCode=0 Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.531599 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee4275ce-a2c5-4129-858f-48409cb928d7","Type":"ContainerDied","Data":"de88dd247c3060a693e657e9d8814c568ae40cd7210ab38d92b7095268767f67"} Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.542625 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.9366881870000001 podStartE2EDuration="14.542599105s" podCreationTimestamp="2026-02-19 13:31:06 +0000 UTC" firstStartedPulling="2026-02-19 13:31:07.204832168 +0000 UTC m=+1281.865935396" lastFinishedPulling="2026-02-19 13:31:19.810743046 +0000 UTC m=+1294.471846314" observedRunningTime="2026-02-19 13:31:20.533087789 +0000 UTC m=+1295.194191017" watchObservedRunningTime="2026-02-19 
13:31:20.542599105 +0000 UTC m=+1295.203702333" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.560025 4861 scope.go:117] "RemoveContainer" containerID="b08318244310bc07c2fdac8035eb3cf117c22c2b34666ccdf97ef1a15589fff9" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.579553 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.613278 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.635091 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 19 13:31:20 crc kubenswrapper[4861]: E0219 13:31:20.635491 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a" containerName="cinder-api" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.635503 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a" containerName="cinder-api" Feb 19 13:31:20 crc kubenswrapper[4861]: E0219 13:31:20.635533 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a" containerName="cinder-api-log" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.635540 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a" containerName="cinder-api-log" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.635921 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a" containerName="cinder-api-log" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.635948 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a" containerName="cinder-api" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.637208 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.645044 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.645614 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.645817 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.645926 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.709518 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " pod="openstack/cinder-api-0" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.709575 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b863561a-440f-4e92-a8f3-4786a24d0a5f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " pod="openstack/cinder-api-0" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.709802 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-config-data\") pod \"cinder-api-0\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " pod="openstack/cinder-api-0" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.709897 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " pod="openstack/cinder-api-0" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.709955 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b863561a-440f-4e92-a8f3-4786a24d0a5f-logs\") pod \"cinder-api-0\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " pod="openstack/cinder-api-0" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.709992 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbsmr\" (UniqueName: \"kubernetes.io/projected/b863561a-440f-4e92-a8f3-4786a24d0a5f-kube-api-access-vbsmr\") pod \"cinder-api-0\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " pod="openstack/cinder-api-0" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.710201 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-config-data-custom\") pod \"cinder-api-0\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " pod="openstack/cinder-api-0" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.710229 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " pod="openstack/cinder-api-0" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.710278 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-scripts\") pod 
\"cinder-api-0\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " pod="openstack/cinder-api-0" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.716656 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.811334 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee4275ce-a2c5-4129-858f-48409cb928d7-run-httpd\") pod \"ee4275ce-a2c5-4129-858f-48409cb928d7\" (UID: \"ee4275ce-a2c5-4129-858f-48409cb928d7\") " Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.811396 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee4275ce-a2c5-4129-858f-48409cb928d7-config-data\") pod \"ee4275ce-a2c5-4129-858f-48409cb928d7\" (UID: \"ee4275ce-a2c5-4129-858f-48409cb928d7\") " Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.811434 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee4275ce-a2c5-4129-858f-48409cb928d7-scripts\") pod \"ee4275ce-a2c5-4129-858f-48409cb928d7\" (UID: \"ee4275ce-a2c5-4129-858f-48409cb928d7\") " Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.811482 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee4275ce-a2c5-4129-858f-48409cb928d7-sg-core-conf-yaml\") pod \"ee4275ce-a2c5-4129-858f-48409cb928d7\" (UID: \"ee4275ce-a2c5-4129-858f-48409cb928d7\") " Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.811577 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjkpg\" (UniqueName: \"kubernetes.io/projected/ee4275ce-a2c5-4129-858f-48409cb928d7-kube-api-access-mjkpg\") pod \"ee4275ce-a2c5-4129-858f-48409cb928d7\" (UID: 
\"ee4275ce-a2c5-4129-858f-48409cb928d7\") " Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.811604 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee4275ce-a2c5-4129-858f-48409cb928d7-log-httpd\") pod \"ee4275ce-a2c5-4129-858f-48409cb928d7\" (UID: \"ee4275ce-a2c5-4129-858f-48409cb928d7\") " Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.811638 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4275ce-a2c5-4129-858f-48409cb928d7-combined-ca-bundle\") pod \"ee4275ce-a2c5-4129-858f-48409cb928d7\" (UID: \"ee4275ce-a2c5-4129-858f-48409cb928d7\") " Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.811857 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " pod="openstack/cinder-api-0" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.811893 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b863561a-440f-4e92-a8f3-4786a24d0a5f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " pod="openstack/cinder-api-0" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.811882 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee4275ce-a2c5-4129-858f-48409cb928d7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ee4275ce-a2c5-4129-858f-48409cb928d7" (UID: "ee4275ce-a2c5-4129-858f-48409cb928d7"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.812004 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b863561a-440f-4e92-a8f3-4786a24d0a5f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " pod="openstack/cinder-api-0" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.812132 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-config-data\") pod \"cinder-api-0\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " pod="openstack/cinder-api-0" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.812184 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " pod="openstack/cinder-api-0" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.812226 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b863561a-440f-4e92-a8f3-4786a24d0a5f-logs\") pod \"cinder-api-0\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " pod="openstack/cinder-api-0" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.812265 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbsmr\" (UniqueName: \"kubernetes.io/projected/b863561a-440f-4e92-a8f3-4786a24d0a5f-kube-api-access-vbsmr\") pod \"cinder-api-0\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " pod="openstack/cinder-api-0" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.812316 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ee4275ce-a2c5-4129-858f-48409cb928d7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ee4275ce-a2c5-4129-858f-48409cb928d7" (UID: "ee4275ce-a2c5-4129-858f-48409cb928d7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.812382 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-config-data-custom\") pod \"cinder-api-0\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " pod="openstack/cinder-api-0" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.812440 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " pod="openstack/cinder-api-0" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.812497 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-scripts\") pod \"cinder-api-0\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " pod="openstack/cinder-api-0" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.812690 4861 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee4275ce-a2c5-4129-858f-48409cb928d7-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.812710 4861 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee4275ce-a2c5-4129-858f-48409cb928d7-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.812863 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/b863561a-440f-4e92-a8f3-4786a24d0a5f-logs\") pod \"cinder-api-0\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " pod="openstack/cinder-api-0" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.817235 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee4275ce-a2c5-4129-858f-48409cb928d7-kube-api-access-mjkpg" (OuterVolumeSpecName: "kube-api-access-mjkpg") pod "ee4275ce-a2c5-4129-858f-48409cb928d7" (UID: "ee4275ce-a2c5-4129-858f-48409cb928d7"). InnerVolumeSpecName "kube-api-access-mjkpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.817656 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee4275ce-a2c5-4129-858f-48409cb928d7-scripts" (OuterVolumeSpecName: "scripts") pod "ee4275ce-a2c5-4129-858f-48409cb928d7" (UID: "ee4275ce-a2c5-4129-858f-48409cb928d7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.818163 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " pod="openstack/cinder-api-0" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.819783 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-config-data\") pod \"cinder-api-0\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " pod="openstack/cinder-api-0" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.820579 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " pod="openstack/cinder-api-0" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.820975 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " pod="openstack/cinder-api-0" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.823784 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-scripts\") pod \"cinder-api-0\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " pod="openstack/cinder-api-0" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.832266 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-config-data-custom\") pod \"cinder-api-0\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " pod="openstack/cinder-api-0" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.833118 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbsmr\" (UniqueName: \"kubernetes.io/projected/b863561a-440f-4e92-a8f3-4786a24d0a5f-kube-api-access-vbsmr\") pod \"cinder-api-0\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " pod="openstack/cinder-api-0" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.854698 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee4275ce-a2c5-4129-858f-48409cb928d7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ee4275ce-a2c5-4129-858f-48409cb928d7" (UID: "ee4275ce-a2c5-4129-858f-48409cb928d7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.915754 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee4275ce-a2c5-4129-858f-48409cb928d7-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.915955 4861 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee4275ce-a2c5-4129-858f-48409cb928d7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.915965 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjkpg\" (UniqueName: \"kubernetes.io/projected/ee4275ce-a2c5-4129-858f-48409cb928d7-kube-api-access-mjkpg\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.928905 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/ee4275ce-a2c5-4129-858f-48409cb928d7-config-data" (OuterVolumeSpecName: "config-data") pod "ee4275ce-a2c5-4129-858f-48409cb928d7" (UID: "ee4275ce-a2c5-4129-858f-48409cb928d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.931802 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee4275ce-a2c5-4129-858f-48409cb928d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee4275ce-a2c5-4129-858f-48409cb928d7" (UID: "ee4275ce-a2c5-4129-858f-48409cb928d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:20 crc kubenswrapper[4861]: I0219 13:31:20.959430 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.018006 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee4275ce-a2c5-4129-858f-48409cb928d7-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.018042 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4275ce-a2c5-4129-858f-48409cb928d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.440398 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.539970 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b863561a-440f-4e92-a8f3-4786a24d0a5f","Type":"ContainerStarted","Data":"1764f4e8bd03c92dac341d90c365de2ad0abb7ef260faf7b324ad9a4fac67542"} Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.543095 4861 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.544701 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee4275ce-a2c5-4129-858f-48409cb928d7","Type":"ContainerDied","Data":"c6877cbc3eb0553a8ac2f65907050a2c4622d5c962bde0a9535dcbe0cb66981c"} Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.544741 4861 scope.go:117] "RemoveContainer" containerID="1848c1ea92b73faedba64818bb40baac579256e0a56d4c003909ba690ac4045a" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.574884 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.578092 4861 scope.go:117] "RemoveContainer" containerID="bbc809fb7dfe766f6565a4c36333e9c1f149cd5154595f371e608aa4a060e813" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.586786 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.599570 4861 scope.go:117] "RemoveContainer" containerID="de88dd247c3060a693e657e9d8814c568ae40cd7210ab38d92b7095268767f67" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.601489 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:31:21 crc kubenswrapper[4861]: E0219 13:31:21.601815 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee4275ce-a2c5-4129-858f-48409cb928d7" containerName="sg-core" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.601831 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee4275ce-a2c5-4129-858f-48409cb928d7" containerName="sg-core" Feb 19 13:31:21 crc kubenswrapper[4861]: E0219 13:31:21.601842 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee4275ce-a2c5-4129-858f-48409cb928d7" containerName="ceilometer-central-agent" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.601848 4861 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ee4275ce-a2c5-4129-858f-48409cb928d7" containerName="ceilometer-central-agent" Feb 19 13:31:21 crc kubenswrapper[4861]: E0219 13:31:21.601864 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee4275ce-a2c5-4129-858f-48409cb928d7" containerName="ceilometer-notification-agent" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.601871 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee4275ce-a2c5-4129-858f-48409cb928d7" containerName="ceilometer-notification-agent" Feb 19 13:31:21 crc kubenswrapper[4861]: E0219 13:31:21.601889 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee4275ce-a2c5-4129-858f-48409cb928d7" containerName="proxy-httpd" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.601894 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee4275ce-a2c5-4129-858f-48409cb928d7" containerName="proxy-httpd" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.602071 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee4275ce-a2c5-4129-858f-48409cb928d7" containerName="ceilometer-notification-agent" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.602082 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee4275ce-a2c5-4129-858f-48409cb928d7" containerName="proxy-httpd" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.602093 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee4275ce-a2c5-4129-858f-48409cb928d7" containerName="sg-core" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.602103 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee4275ce-a2c5-4129-858f-48409cb928d7" containerName="ceilometer-central-agent" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.603765 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.610022 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.610849 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.617629 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.633840 4861 scope.go:117] "RemoveContainer" containerID="0ca383abd9d8d7aa0aaec2c634d66bddee4bcc2143ed5e972f87fad39223bd34" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.732107 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fcfada1-405f-4c06-a7a8-749daafe8297-scripts\") pod \"ceilometer-0\" (UID: \"9fcfada1-405f-4c06-a7a8-749daafe8297\") " pod="openstack/ceilometer-0" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.732430 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fcfada1-405f-4c06-a7a8-749daafe8297-config-data\") pod \"ceilometer-0\" (UID: \"9fcfada1-405f-4c06-a7a8-749daafe8297\") " pod="openstack/ceilometer-0" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.732517 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8rtr\" (UniqueName: \"kubernetes.io/projected/9fcfada1-405f-4c06-a7a8-749daafe8297-kube-api-access-v8rtr\") pod \"ceilometer-0\" (UID: \"9fcfada1-405f-4c06-a7a8-749daafe8297\") " pod="openstack/ceilometer-0" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.732652 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fcfada1-405f-4c06-a7a8-749daafe8297-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9fcfada1-405f-4c06-a7a8-749daafe8297\") " pod="openstack/ceilometer-0" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.732735 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9fcfada1-405f-4c06-a7a8-749daafe8297-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9fcfada1-405f-4c06-a7a8-749daafe8297\") " pod="openstack/ceilometer-0" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.732858 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9fcfada1-405f-4c06-a7a8-749daafe8297-log-httpd\") pod \"ceilometer-0\" (UID: \"9fcfada1-405f-4c06-a7a8-749daafe8297\") " pod="openstack/ceilometer-0" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.732934 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9fcfada1-405f-4c06-a7a8-749daafe8297-run-httpd\") pod \"ceilometer-0\" (UID: \"9fcfada1-405f-4c06-a7a8-749daafe8297\") " pod="openstack/ceilometer-0" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.834835 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9fcfada1-405f-4c06-a7a8-749daafe8297-log-httpd\") pod \"ceilometer-0\" (UID: \"9fcfada1-405f-4c06-a7a8-749daafe8297\") " pod="openstack/ceilometer-0" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.834895 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9fcfada1-405f-4c06-a7a8-749daafe8297-run-httpd\") pod \"ceilometer-0\" (UID: \"9fcfada1-405f-4c06-a7a8-749daafe8297\") " 
pod="openstack/ceilometer-0" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.834930 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fcfada1-405f-4c06-a7a8-749daafe8297-scripts\") pod \"ceilometer-0\" (UID: \"9fcfada1-405f-4c06-a7a8-749daafe8297\") " pod="openstack/ceilometer-0" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.834974 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fcfada1-405f-4c06-a7a8-749daafe8297-config-data\") pod \"ceilometer-0\" (UID: \"9fcfada1-405f-4c06-a7a8-749daafe8297\") " pod="openstack/ceilometer-0" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.834997 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8rtr\" (UniqueName: \"kubernetes.io/projected/9fcfada1-405f-4c06-a7a8-749daafe8297-kube-api-access-v8rtr\") pod \"ceilometer-0\" (UID: \"9fcfada1-405f-4c06-a7a8-749daafe8297\") " pod="openstack/ceilometer-0" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.835053 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fcfada1-405f-4c06-a7a8-749daafe8297-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9fcfada1-405f-4c06-a7a8-749daafe8297\") " pod="openstack/ceilometer-0" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.835075 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9fcfada1-405f-4c06-a7a8-749daafe8297-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9fcfada1-405f-4c06-a7a8-749daafe8297\") " pod="openstack/ceilometer-0" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.835379 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/9fcfada1-405f-4c06-a7a8-749daafe8297-run-httpd\") pod \"ceilometer-0\" (UID: \"9fcfada1-405f-4c06-a7a8-749daafe8297\") " pod="openstack/ceilometer-0" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.835559 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9fcfada1-405f-4c06-a7a8-749daafe8297-log-httpd\") pod \"ceilometer-0\" (UID: \"9fcfada1-405f-4c06-a7a8-749daafe8297\") " pod="openstack/ceilometer-0" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.840926 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fcfada1-405f-4c06-a7a8-749daafe8297-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9fcfada1-405f-4c06-a7a8-749daafe8297\") " pod="openstack/ceilometer-0" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.844625 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fcfada1-405f-4c06-a7a8-749daafe8297-scripts\") pod \"ceilometer-0\" (UID: \"9fcfada1-405f-4c06-a7a8-749daafe8297\") " pod="openstack/ceilometer-0" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.856989 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9fcfada1-405f-4c06-a7a8-749daafe8297-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9fcfada1-405f-4c06-a7a8-749daafe8297\") " pod="openstack/ceilometer-0" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.858575 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fcfada1-405f-4c06-a7a8-749daafe8297-config-data\") pod \"ceilometer-0\" (UID: \"9fcfada1-405f-4c06-a7a8-749daafe8297\") " pod="openstack/ceilometer-0" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.865245 4861 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-v8rtr\" (UniqueName: \"kubernetes.io/projected/9fcfada1-405f-4c06-a7a8-749daafe8297-kube-api-access-v8rtr\") pod \"ceilometer-0\" (UID: \"9fcfada1-405f-4c06-a7a8-749daafe8297\") " pod="openstack/ceilometer-0" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.925406 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.993292 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee4275ce-a2c5-4129-858f-48409cb928d7" path="/var/lib/kubelet/pods/ee4275ce-a2c5-4129-858f-48409cb928d7/volumes" Feb 19 13:31:21 crc kubenswrapper[4861]: I0219 13:31:21.994681 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a" path="/var/lib/kubelet/pods/ffdea62d-aea8-4ae6-a1d3-445b6f4e1f8a/volumes" Feb 19 13:31:22 crc kubenswrapper[4861]: I0219 13:31:22.363567 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:31:22 crc kubenswrapper[4861]: W0219 13:31:22.373409 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9fcfada1_405f_4c06_a7a8_749daafe8297.slice/crio-4ca3a0f33a48f8e1f037e9ed4fe87ce81ecee55638cac99bd39ea4b438483725 WatchSource:0}: Error finding container 4ca3a0f33a48f8e1f037e9ed4fe87ce81ecee55638cac99bd39ea4b438483725: Status 404 returned error can't find the container with id 4ca3a0f33a48f8e1f037e9ed4fe87ce81ecee55638cac99bd39ea4b438483725 Feb 19 13:31:22 crc kubenswrapper[4861]: I0219 13:31:22.556616 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b863561a-440f-4e92-a8f3-4786a24d0a5f","Type":"ContainerStarted","Data":"08f5ede146101abfdbe72fa01b651ee0b64dd6fc80f2a9cb3fa76ff9918744f3"} Feb 19 13:31:22 crc kubenswrapper[4861]: I0219 13:31:22.560293 4861 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9fcfada1-405f-4c06-a7a8-749daafe8297","Type":"ContainerStarted","Data":"4ca3a0f33a48f8e1f037e9ed4fe87ce81ecee55638cac99bd39ea4b438483725"} Feb 19 13:31:23 crc kubenswrapper[4861]: I0219 13:31:23.570167 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b863561a-440f-4e92-a8f3-4786a24d0a5f","Type":"ContainerStarted","Data":"d730aafec31ebf1d1d4d0bbbdd71e711bc2fd55423001647b8861204d4936465"} Feb 19 13:31:23 crc kubenswrapper[4861]: I0219 13:31:23.571476 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 19 13:31:23 crc kubenswrapper[4861]: I0219 13:31:23.575054 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9fcfada1-405f-4c06-a7a8-749daafe8297","Type":"ContainerStarted","Data":"b08cf09ad9865db2e345a7da4039e6e73b3a6dda5e4d2819ca75f7b0f3649642"} Feb 19 13:31:23 crc kubenswrapper[4861]: I0219 13:31:23.594930 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.594909054 podStartE2EDuration="3.594909054s" podCreationTimestamp="2026-02-19 13:31:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:31:23.58770015 +0000 UTC m=+1298.248803378" watchObservedRunningTime="2026-02-19 13:31:23.594909054 +0000 UTC m=+1298.256012282" Feb 19 13:31:24 crc kubenswrapper[4861]: I0219 13:31:24.589003 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9fcfada1-405f-4c06-a7a8-749daafe8297","Type":"ContainerStarted","Data":"f7fccdd4dcccda119f0ec29fed9740c41d8b57f40d3c18e04e7c4f4c32731f53"} Feb 19 13:31:24 crc kubenswrapper[4861]: I0219 13:31:24.589332 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9fcfada1-405f-4c06-a7a8-749daafe8297","Type":"ContainerStarted","Data":"3dde2559924410ba8c24a16ce9076f6fbf099a131a05e9cdfe45097e54c6ccaa"} Feb 19 13:31:25 crc kubenswrapper[4861]: I0219 13:31:25.028076 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7f585bc76f-dg9rf" Feb 19 13:31:25 crc kubenswrapper[4861]: I0219 13:31:25.117919 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5f6c7cb7cb-j5rts"] Feb 19 13:31:25 crc kubenswrapper[4861]: I0219 13:31:25.118294 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5f6c7cb7cb-j5rts" podUID="274eb243-db3d-4ad8-b2cd-2ff23017ac82" containerName="neutron-api" containerID="cri-o://f0e6f82bcb7ad21d253aa3c7c3261f5590962a4dc4158811c6572695c774d0dc" gracePeriod=30 Feb 19 13:31:25 crc kubenswrapper[4861]: I0219 13:31:25.118496 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5f6c7cb7cb-j5rts" podUID="274eb243-db3d-4ad8-b2cd-2ff23017ac82" containerName="neutron-httpd" containerID="cri-o://9145adafce0a5981761b601ce041e4352346c5b58735195c99676fafdb13bd2d" gracePeriod=30 Feb 19 13:31:25 crc kubenswrapper[4861]: I0219 13:31:25.600718 4861 generic.go:334] "Generic (PLEG): container finished" podID="274eb243-db3d-4ad8-b2cd-2ff23017ac82" containerID="9145adafce0a5981761b601ce041e4352346c5b58735195c99676fafdb13bd2d" exitCode=0 Feb 19 13:31:25 crc kubenswrapper[4861]: I0219 13:31:25.601116 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f6c7cb7cb-j5rts" event={"ID":"274eb243-db3d-4ad8-b2cd-2ff23017ac82","Type":"ContainerDied","Data":"9145adafce0a5981761b601ce041e4352346c5b58735195c99676fafdb13bd2d"} Feb 19 13:31:25 crc kubenswrapper[4861]: I0219 13:31:25.604742 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9fcfada1-405f-4c06-a7a8-749daafe8297","Type":"ContainerStarted","Data":"e996398f4aa45d1224962e6f79cc441a2cbdc7f500240b52994e4a3444a870ae"} Feb 19 13:31:25 crc kubenswrapper[4861]: I0219 13:31:25.604900 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 13:31:25 crc kubenswrapper[4861]: I0219 13:31:25.635340 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.691365622 podStartE2EDuration="4.635315692s" podCreationTimestamp="2026-02-19 13:31:21 +0000 UTC" firstStartedPulling="2026-02-19 13:31:22.378976829 +0000 UTC m=+1297.040080067" lastFinishedPulling="2026-02-19 13:31:25.322926899 +0000 UTC m=+1299.984030137" observedRunningTime="2026-02-19 13:31:25.6296624 +0000 UTC m=+1300.290765668" watchObservedRunningTime="2026-02-19 13:31:25.635315692 +0000 UTC m=+1300.296418920" Feb 19 13:31:27 crc kubenswrapper[4861]: I0219 13:31:27.780356 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 13:31:27 crc kubenswrapper[4861]: I0219 13:31:27.781103 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3bebc25b-fd66-4fca-9a39-d54671b7492d" containerName="glance-log" containerID="cri-o://5b5531cdf673855e434a4cf5ab1491222c90ed35fa38ff9e1bf2292d59c66214" gracePeriod=30 Feb 19 13:31:27 crc kubenswrapper[4861]: I0219 13:31:27.781191 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3bebc25b-fd66-4fca-9a39-d54671b7492d" containerName="glance-httpd" containerID="cri-o://5269f73f32021b13592b2d4e295c1866edb2dd35493225237d3e3049293a6e74" gracePeriod=30 Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.439904 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5f6c7cb7cb-j5rts" Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.553371 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/274eb243-db3d-4ad8-b2cd-2ff23017ac82-combined-ca-bundle\") pod \"274eb243-db3d-4ad8-b2cd-2ff23017ac82\" (UID: \"274eb243-db3d-4ad8-b2cd-2ff23017ac82\") " Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.553458 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtt45\" (UniqueName: \"kubernetes.io/projected/274eb243-db3d-4ad8-b2cd-2ff23017ac82-kube-api-access-xtt45\") pod \"274eb243-db3d-4ad8-b2cd-2ff23017ac82\" (UID: \"274eb243-db3d-4ad8-b2cd-2ff23017ac82\") " Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.553484 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/274eb243-db3d-4ad8-b2cd-2ff23017ac82-ovndb-tls-certs\") pod \"274eb243-db3d-4ad8-b2cd-2ff23017ac82\" (UID: \"274eb243-db3d-4ad8-b2cd-2ff23017ac82\") " Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.553504 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/274eb243-db3d-4ad8-b2cd-2ff23017ac82-config\") pod \"274eb243-db3d-4ad8-b2cd-2ff23017ac82\" (UID: \"274eb243-db3d-4ad8-b2cd-2ff23017ac82\") " Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.553524 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/274eb243-db3d-4ad8-b2cd-2ff23017ac82-httpd-config\") pod \"274eb243-db3d-4ad8-b2cd-2ff23017ac82\" (UID: \"274eb243-db3d-4ad8-b2cd-2ff23017ac82\") " Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.558940 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/274eb243-db3d-4ad8-b2cd-2ff23017ac82-kube-api-access-xtt45" (OuterVolumeSpecName: "kube-api-access-xtt45") pod "274eb243-db3d-4ad8-b2cd-2ff23017ac82" (UID: "274eb243-db3d-4ad8-b2cd-2ff23017ac82"). InnerVolumeSpecName "kube-api-access-xtt45". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.577683 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/274eb243-db3d-4ad8-b2cd-2ff23017ac82-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "274eb243-db3d-4ad8-b2cd-2ff23017ac82" (UID: "274eb243-db3d-4ad8-b2cd-2ff23017ac82"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.598243 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.598545 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9fcfada1-405f-4c06-a7a8-749daafe8297" containerName="ceilometer-central-agent" containerID="cri-o://b08cf09ad9865db2e345a7da4039e6e73b3a6dda5e4d2819ca75f7b0f3649642" gracePeriod=30 Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.598673 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9fcfada1-405f-4c06-a7a8-749daafe8297" containerName="proxy-httpd" containerID="cri-o://e996398f4aa45d1224962e6f79cc441a2cbdc7f500240b52994e4a3444a870ae" gracePeriod=30 Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.598717 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9fcfada1-405f-4c06-a7a8-749daafe8297" containerName="sg-core" containerID="cri-o://f7fccdd4dcccda119f0ec29fed9740c41d8b57f40d3c18e04e7c4f4c32731f53" gracePeriod=30 Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 
13:31:28.598747 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9fcfada1-405f-4c06-a7a8-749daafe8297" containerName="ceilometer-notification-agent" containerID="cri-o://3dde2559924410ba8c24a16ce9076f6fbf099a131a05e9cdfe45097e54c6ccaa" gracePeriod=30 Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.633824 4861 generic.go:334] "Generic (PLEG): container finished" podID="3bebc25b-fd66-4fca-9a39-d54671b7492d" containerID="5b5531cdf673855e434a4cf5ab1491222c90ed35fa38ff9e1bf2292d59c66214" exitCode=143 Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.633924 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3bebc25b-fd66-4fca-9a39-d54671b7492d","Type":"ContainerDied","Data":"5b5531cdf673855e434a4cf5ab1491222c90ed35fa38ff9e1bf2292d59c66214"} Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.638065 4861 generic.go:334] "Generic (PLEG): container finished" podID="274eb243-db3d-4ad8-b2cd-2ff23017ac82" containerID="f0e6f82bcb7ad21d253aa3c7c3261f5590962a4dc4158811c6572695c774d0dc" exitCode=0 Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.638111 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f6c7cb7cb-j5rts" event={"ID":"274eb243-db3d-4ad8-b2cd-2ff23017ac82","Type":"ContainerDied","Data":"f0e6f82bcb7ad21d253aa3c7c3261f5590962a4dc4158811c6572695c774d0dc"} Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.638138 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f6c7cb7cb-j5rts" event={"ID":"274eb243-db3d-4ad8-b2cd-2ff23017ac82","Type":"ContainerDied","Data":"14f7e3c2a46854cc6166efc5500aa811c8356a8b962e981c0dbe82271f0ce5aa"} Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.638157 4861 scope.go:117] "RemoveContainer" containerID="9145adafce0a5981761b601ce041e4352346c5b58735195c99676fafdb13bd2d" Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.638280 
4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f6c7cb7cb-j5rts" Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.643710 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/274eb243-db3d-4ad8-b2cd-2ff23017ac82-config" (OuterVolumeSpecName: "config") pod "274eb243-db3d-4ad8-b2cd-2ff23017ac82" (UID: "274eb243-db3d-4ad8-b2cd-2ff23017ac82"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.659091 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtt45\" (UniqueName: \"kubernetes.io/projected/274eb243-db3d-4ad8-b2cd-2ff23017ac82-kube-api-access-xtt45\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.659135 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/274eb243-db3d-4ad8-b2cd-2ff23017ac82-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.659151 4861 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/274eb243-db3d-4ad8-b2cd-2ff23017ac82-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.679031 4861 scope.go:117] "RemoveContainer" containerID="f0e6f82bcb7ad21d253aa3c7c3261f5590962a4dc4158811c6572695c774d0dc" Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.679656 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/274eb243-db3d-4ad8-b2cd-2ff23017ac82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "274eb243-db3d-4ad8-b2cd-2ff23017ac82" (UID: "274eb243-db3d-4ad8-b2cd-2ff23017ac82"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.692601 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/274eb243-db3d-4ad8-b2cd-2ff23017ac82-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "274eb243-db3d-4ad8-b2cd-2ff23017ac82" (UID: "274eb243-db3d-4ad8-b2cd-2ff23017ac82"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.705972 4861 scope.go:117] "RemoveContainer" containerID="9145adafce0a5981761b601ce041e4352346c5b58735195c99676fafdb13bd2d" Feb 19 13:31:28 crc kubenswrapper[4861]: E0219 13:31:28.706518 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9145adafce0a5981761b601ce041e4352346c5b58735195c99676fafdb13bd2d\": container with ID starting with 9145adafce0a5981761b601ce041e4352346c5b58735195c99676fafdb13bd2d not found: ID does not exist" containerID="9145adafce0a5981761b601ce041e4352346c5b58735195c99676fafdb13bd2d" Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.706594 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9145adafce0a5981761b601ce041e4352346c5b58735195c99676fafdb13bd2d"} err="failed to get container status \"9145adafce0a5981761b601ce041e4352346c5b58735195c99676fafdb13bd2d\": rpc error: code = NotFound desc = could not find container \"9145adafce0a5981761b601ce041e4352346c5b58735195c99676fafdb13bd2d\": container with ID starting with 9145adafce0a5981761b601ce041e4352346c5b58735195c99676fafdb13bd2d not found: ID does not exist" Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.706637 4861 scope.go:117] "RemoveContainer" containerID="f0e6f82bcb7ad21d253aa3c7c3261f5590962a4dc4158811c6572695c774d0dc" Feb 19 13:31:28 crc kubenswrapper[4861]: E0219 13:31:28.707146 4861 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0e6f82bcb7ad21d253aa3c7c3261f5590962a4dc4158811c6572695c774d0dc\": container with ID starting with f0e6f82bcb7ad21d253aa3c7c3261f5590962a4dc4158811c6572695c774d0dc not found: ID does not exist" containerID="f0e6f82bcb7ad21d253aa3c7c3261f5590962a4dc4158811c6572695c774d0dc" Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.707199 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0e6f82bcb7ad21d253aa3c7c3261f5590962a4dc4158811c6572695c774d0dc"} err="failed to get container status \"f0e6f82bcb7ad21d253aa3c7c3261f5590962a4dc4158811c6572695c774d0dc\": rpc error: code = NotFound desc = could not find container \"f0e6f82bcb7ad21d253aa3c7c3261f5590962a4dc4158811c6572695c774d0dc\": container with ID starting with f0e6f82bcb7ad21d253aa3c7c3261f5590962a4dc4158811c6572695c774d0dc not found: ID does not exist" Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.761081 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/274eb243-db3d-4ad8-b2cd-2ff23017ac82-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.761127 4861 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/274eb243-db3d-4ad8-b2cd-2ff23017ac82-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.976065 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5f6c7cb7cb-j5rts"] Feb 19 13:31:28 crc kubenswrapper[4861]: I0219 13:31:28.984008 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5f6c7cb7cb-j5rts"] Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.363119 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.460823 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.461047 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e69db72c-2c51-47ba-b2a1-037d6b259176" containerName="glance-log" containerID="cri-o://6a550e43b1af7f7d21f03635868d9dd6ef349c47147f1011f4a120cce15d8b69" gracePeriod=30 Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.461176 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e69db72c-2c51-47ba-b2a1-037d6b259176" containerName="glance-httpd" containerID="cri-o://c862500249e4abdc0b9a4e4969de8d52ba9eeb7e3d69d071ec6f4c12254fee12" gracePeriod=30 Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.474536 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9fcfada1-405f-4c06-a7a8-749daafe8297-run-httpd\") pod \"9fcfada1-405f-4c06-a7a8-749daafe8297\" (UID: \"9fcfada1-405f-4c06-a7a8-749daafe8297\") " Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.474652 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9fcfada1-405f-4c06-a7a8-749daafe8297-sg-core-conf-yaml\") pod \"9fcfada1-405f-4c06-a7a8-749daafe8297\" (UID: \"9fcfada1-405f-4c06-a7a8-749daafe8297\") " Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.474722 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fcfada1-405f-4c06-a7a8-749daafe8297-scripts\") pod \"9fcfada1-405f-4c06-a7a8-749daafe8297\" (UID: \"9fcfada1-405f-4c06-a7a8-749daafe8297\") " Feb 19 13:31:29 crc 
kubenswrapper[4861]: I0219 13:31:29.474796 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8rtr\" (UniqueName: \"kubernetes.io/projected/9fcfada1-405f-4c06-a7a8-749daafe8297-kube-api-access-v8rtr\") pod \"9fcfada1-405f-4c06-a7a8-749daafe8297\" (UID: \"9fcfada1-405f-4c06-a7a8-749daafe8297\") " Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.474814 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fcfada1-405f-4c06-a7a8-749daafe8297-config-data\") pod \"9fcfada1-405f-4c06-a7a8-749daafe8297\" (UID: \"9fcfada1-405f-4c06-a7a8-749daafe8297\") " Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.474845 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fcfada1-405f-4c06-a7a8-749daafe8297-combined-ca-bundle\") pod \"9fcfada1-405f-4c06-a7a8-749daafe8297\" (UID: \"9fcfada1-405f-4c06-a7a8-749daafe8297\") " Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.474865 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9fcfada1-405f-4c06-a7a8-749daafe8297-log-httpd\") pod \"9fcfada1-405f-4c06-a7a8-749daafe8297\" (UID: \"9fcfada1-405f-4c06-a7a8-749daafe8297\") " Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.476617 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fcfada1-405f-4c06-a7a8-749daafe8297-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9fcfada1-405f-4c06-a7a8-749daafe8297" (UID: "9fcfada1-405f-4c06-a7a8-749daafe8297"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.482466 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fcfada1-405f-4c06-a7a8-749daafe8297-scripts" (OuterVolumeSpecName: "scripts") pod "9fcfada1-405f-4c06-a7a8-749daafe8297" (UID: "9fcfada1-405f-4c06-a7a8-749daafe8297"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.482729 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fcfada1-405f-4c06-a7a8-749daafe8297-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9fcfada1-405f-4c06-a7a8-749daafe8297" (UID: "9fcfada1-405f-4c06-a7a8-749daafe8297"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.482822 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fcfada1-405f-4c06-a7a8-749daafe8297-kube-api-access-v8rtr" (OuterVolumeSpecName: "kube-api-access-v8rtr") pod "9fcfada1-405f-4c06-a7a8-749daafe8297" (UID: "9fcfada1-405f-4c06-a7a8-749daafe8297"). InnerVolumeSpecName "kube-api-access-v8rtr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.517095 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fcfada1-405f-4c06-a7a8-749daafe8297-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9fcfada1-405f-4c06-a7a8-749daafe8297" (UID: "9fcfada1-405f-4c06-a7a8-749daafe8297"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.555144 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fcfada1-405f-4c06-a7a8-749daafe8297-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9fcfada1-405f-4c06-a7a8-749daafe8297" (UID: "9fcfada1-405f-4c06-a7a8-749daafe8297"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.574873 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fcfada1-405f-4c06-a7a8-749daafe8297-config-data" (OuterVolumeSpecName: "config-data") pod "9fcfada1-405f-4c06-a7a8-749daafe8297" (UID: "9fcfada1-405f-4c06-a7a8-749daafe8297"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.576615 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fcfada1-405f-4c06-a7a8-749daafe8297-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.576703 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8rtr\" (UniqueName: \"kubernetes.io/projected/9fcfada1-405f-4c06-a7a8-749daafe8297-kube-api-access-v8rtr\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.576773 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fcfada1-405f-4c06-a7a8-749daafe8297-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.576827 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fcfada1-405f-4c06-a7a8-749daafe8297-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 
13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.576877 4861 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9fcfada1-405f-4c06-a7a8-749daafe8297-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.576932 4861 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9fcfada1-405f-4c06-a7a8-749daafe8297-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.576982 4861 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9fcfada1-405f-4c06-a7a8-749daafe8297-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.646797 4861 generic.go:334] "Generic (PLEG): container finished" podID="9fcfada1-405f-4c06-a7a8-749daafe8297" containerID="e996398f4aa45d1224962e6f79cc441a2cbdc7f500240b52994e4a3444a870ae" exitCode=0 Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.647012 4861 generic.go:334] "Generic (PLEG): container finished" podID="9fcfada1-405f-4c06-a7a8-749daafe8297" containerID="f7fccdd4dcccda119f0ec29fed9740c41d8b57f40d3c18e04e7c4f4c32731f53" exitCode=2 Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.647068 4861 generic.go:334] "Generic (PLEG): container finished" podID="9fcfada1-405f-4c06-a7a8-749daafe8297" containerID="3dde2559924410ba8c24a16ce9076f6fbf099a131a05e9cdfe45097e54c6ccaa" exitCode=0 Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.647124 4861 generic.go:334] "Generic (PLEG): container finished" podID="9fcfada1-405f-4c06-a7a8-749daafe8297" containerID="b08cf09ad9865db2e345a7da4039e6e73b3a6dda5e4d2819ca75f7b0f3649642" exitCode=0 Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.646917 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9fcfada1-405f-4c06-a7a8-749daafe8297","Type":"ContainerDied","Data":"e996398f4aa45d1224962e6f79cc441a2cbdc7f500240b52994e4a3444a870ae"} Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.647300 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9fcfada1-405f-4c06-a7a8-749daafe8297","Type":"ContainerDied","Data":"f7fccdd4dcccda119f0ec29fed9740c41d8b57f40d3c18e04e7c4f4c32731f53"} Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.646900 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.647407 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9fcfada1-405f-4c06-a7a8-749daafe8297","Type":"ContainerDied","Data":"3dde2559924410ba8c24a16ce9076f6fbf099a131a05e9cdfe45097e54c6ccaa"} Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.647469 4861 scope.go:117] "RemoveContainer" containerID="e996398f4aa45d1224962e6f79cc441a2cbdc7f500240b52994e4a3444a870ae" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.647482 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9fcfada1-405f-4c06-a7a8-749daafe8297","Type":"ContainerDied","Data":"b08cf09ad9865db2e345a7da4039e6e73b3a6dda5e4d2819ca75f7b0f3649642"} Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.647688 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9fcfada1-405f-4c06-a7a8-749daafe8297","Type":"ContainerDied","Data":"4ca3a0f33a48f8e1f037e9ed4fe87ce81ecee55638cac99bd39ea4b438483725"} Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.653651 4861 generic.go:334] "Generic (PLEG): container finished" podID="e69db72c-2c51-47ba-b2a1-037d6b259176" containerID="6a550e43b1af7f7d21f03635868d9dd6ef349c47147f1011f4a120cce15d8b69" exitCode=143 Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 
13:31:29.653720 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e69db72c-2c51-47ba-b2a1-037d6b259176","Type":"ContainerDied","Data":"6a550e43b1af7f7d21f03635868d9dd6ef349c47147f1011f4a120cce15d8b69"} Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.671156 4861 scope.go:117] "RemoveContainer" containerID="f7fccdd4dcccda119f0ec29fed9740c41d8b57f40d3c18e04e7c4f4c32731f53" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.685695 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.694461 4861 scope.go:117] "RemoveContainer" containerID="3dde2559924410ba8c24a16ce9076f6fbf099a131a05e9cdfe45097e54c6ccaa" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.696172 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.710909 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:31:29 crc kubenswrapper[4861]: E0219 13:31:29.711272 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fcfada1-405f-4c06-a7a8-749daafe8297" containerName="proxy-httpd" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.711291 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fcfada1-405f-4c06-a7a8-749daafe8297" containerName="proxy-httpd" Feb 19 13:31:29 crc kubenswrapper[4861]: E0219 13:31:29.711311 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fcfada1-405f-4c06-a7a8-749daafe8297" containerName="ceilometer-notification-agent" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.711318 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fcfada1-405f-4c06-a7a8-749daafe8297" containerName="ceilometer-notification-agent" Feb 19 13:31:29 crc kubenswrapper[4861]: E0219 13:31:29.711341 4861 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="274eb243-db3d-4ad8-b2cd-2ff23017ac82" containerName="neutron-api" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.711347 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="274eb243-db3d-4ad8-b2cd-2ff23017ac82" containerName="neutron-api" Feb 19 13:31:29 crc kubenswrapper[4861]: E0219 13:31:29.711360 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fcfada1-405f-4c06-a7a8-749daafe8297" containerName="ceilometer-central-agent" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.711366 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fcfada1-405f-4c06-a7a8-749daafe8297" containerName="ceilometer-central-agent" Feb 19 13:31:29 crc kubenswrapper[4861]: E0219 13:31:29.711382 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fcfada1-405f-4c06-a7a8-749daafe8297" containerName="sg-core" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.711388 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fcfada1-405f-4c06-a7a8-749daafe8297" containerName="sg-core" Feb 19 13:31:29 crc kubenswrapper[4861]: E0219 13:31:29.711404 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="274eb243-db3d-4ad8-b2cd-2ff23017ac82" containerName="neutron-httpd" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.711409 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="274eb243-db3d-4ad8-b2cd-2ff23017ac82" containerName="neutron-httpd" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.711574 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fcfada1-405f-4c06-a7a8-749daafe8297" containerName="sg-core" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.711591 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="274eb243-db3d-4ad8-b2cd-2ff23017ac82" containerName="neutron-httpd" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.711601 4861 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="274eb243-db3d-4ad8-b2cd-2ff23017ac82" containerName="neutron-api" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.711613 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fcfada1-405f-4c06-a7a8-749daafe8297" containerName="ceilometer-central-agent" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.711622 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fcfada1-405f-4c06-a7a8-749daafe8297" containerName="proxy-httpd" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.711632 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fcfada1-405f-4c06-a7a8-749daafe8297" containerName="ceilometer-notification-agent" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.713087 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.716714 4861 scope.go:117] "RemoveContainer" containerID="b08cf09ad9865db2e345a7da4039e6e73b3a6dda5e4d2819ca75f7b0f3649642" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.717064 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.717143 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.718024 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.754545 4861 scope.go:117] "RemoveContainer" containerID="e996398f4aa45d1224962e6f79cc441a2cbdc7f500240b52994e4a3444a870ae" Feb 19 13:31:29 crc kubenswrapper[4861]: E0219 13:31:29.754961 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e996398f4aa45d1224962e6f79cc441a2cbdc7f500240b52994e4a3444a870ae\": container with ID starting with 
e996398f4aa45d1224962e6f79cc441a2cbdc7f500240b52994e4a3444a870ae not found: ID does not exist" containerID="e996398f4aa45d1224962e6f79cc441a2cbdc7f500240b52994e4a3444a870ae" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.755007 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e996398f4aa45d1224962e6f79cc441a2cbdc7f500240b52994e4a3444a870ae"} err="failed to get container status \"e996398f4aa45d1224962e6f79cc441a2cbdc7f500240b52994e4a3444a870ae\": rpc error: code = NotFound desc = could not find container \"e996398f4aa45d1224962e6f79cc441a2cbdc7f500240b52994e4a3444a870ae\": container with ID starting with e996398f4aa45d1224962e6f79cc441a2cbdc7f500240b52994e4a3444a870ae not found: ID does not exist" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.755038 4861 scope.go:117] "RemoveContainer" containerID="f7fccdd4dcccda119f0ec29fed9740c41d8b57f40d3c18e04e7c4f4c32731f53" Feb 19 13:31:29 crc kubenswrapper[4861]: E0219 13:31:29.755513 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7fccdd4dcccda119f0ec29fed9740c41d8b57f40d3c18e04e7c4f4c32731f53\": container with ID starting with f7fccdd4dcccda119f0ec29fed9740c41d8b57f40d3c18e04e7c4f4c32731f53 not found: ID does not exist" containerID="f7fccdd4dcccda119f0ec29fed9740c41d8b57f40d3c18e04e7c4f4c32731f53" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.755563 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7fccdd4dcccda119f0ec29fed9740c41d8b57f40d3c18e04e7c4f4c32731f53"} err="failed to get container status \"f7fccdd4dcccda119f0ec29fed9740c41d8b57f40d3c18e04e7c4f4c32731f53\": rpc error: code = NotFound desc = could not find container \"f7fccdd4dcccda119f0ec29fed9740c41d8b57f40d3c18e04e7c4f4c32731f53\": container with ID starting with f7fccdd4dcccda119f0ec29fed9740c41d8b57f40d3c18e04e7c4f4c32731f53 not found: ID does not 
exist" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.755587 4861 scope.go:117] "RemoveContainer" containerID="3dde2559924410ba8c24a16ce9076f6fbf099a131a05e9cdfe45097e54c6ccaa" Feb 19 13:31:29 crc kubenswrapper[4861]: E0219 13:31:29.756178 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dde2559924410ba8c24a16ce9076f6fbf099a131a05e9cdfe45097e54c6ccaa\": container with ID starting with 3dde2559924410ba8c24a16ce9076f6fbf099a131a05e9cdfe45097e54c6ccaa not found: ID does not exist" containerID="3dde2559924410ba8c24a16ce9076f6fbf099a131a05e9cdfe45097e54c6ccaa" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.756204 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dde2559924410ba8c24a16ce9076f6fbf099a131a05e9cdfe45097e54c6ccaa"} err="failed to get container status \"3dde2559924410ba8c24a16ce9076f6fbf099a131a05e9cdfe45097e54c6ccaa\": rpc error: code = NotFound desc = could not find container \"3dde2559924410ba8c24a16ce9076f6fbf099a131a05e9cdfe45097e54c6ccaa\": container with ID starting with 3dde2559924410ba8c24a16ce9076f6fbf099a131a05e9cdfe45097e54c6ccaa not found: ID does not exist" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.756219 4861 scope.go:117] "RemoveContainer" containerID="b08cf09ad9865db2e345a7da4039e6e73b3a6dda5e4d2819ca75f7b0f3649642" Feb 19 13:31:29 crc kubenswrapper[4861]: E0219 13:31:29.756396 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b08cf09ad9865db2e345a7da4039e6e73b3a6dda5e4d2819ca75f7b0f3649642\": container with ID starting with b08cf09ad9865db2e345a7da4039e6e73b3a6dda5e4d2819ca75f7b0f3649642 not found: ID does not exist" containerID="b08cf09ad9865db2e345a7da4039e6e73b3a6dda5e4d2819ca75f7b0f3649642" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.756415 4861 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b08cf09ad9865db2e345a7da4039e6e73b3a6dda5e4d2819ca75f7b0f3649642"} err="failed to get container status \"b08cf09ad9865db2e345a7da4039e6e73b3a6dda5e4d2819ca75f7b0f3649642\": rpc error: code = NotFound desc = could not find container \"b08cf09ad9865db2e345a7da4039e6e73b3a6dda5e4d2819ca75f7b0f3649642\": container with ID starting with b08cf09ad9865db2e345a7da4039e6e73b3a6dda5e4d2819ca75f7b0f3649642 not found: ID does not exist" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.756550 4861 scope.go:117] "RemoveContainer" containerID="e996398f4aa45d1224962e6f79cc441a2cbdc7f500240b52994e4a3444a870ae" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.756739 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e996398f4aa45d1224962e6f79cc441a2cbdc7f500240b52994e4a3444a870ae"} err="failed to get container status \"e996398f4aa45d1224962e6f79cc441a2cbdc7f500240b52994e4a3444a870ae\": rpc error: code = NotFound desc = could not find container \"e996398f4aa45d1224962e6f79cc441a2cbdc7f500240b52994e4a3444a870ae\": container with ID starting with e996398f4aa45d1224962e6f79cc441a2cbdc7f500240b52994e4a3444a870ae not found: ID does not exist" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.756758 4861 scope.go:117] "RemoveContainer" containerID="f7fccdd4dcccda119f0ec29fed9740c41d8b57f40d3c18e04e7c4f4c32731f53" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.756905 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7fccdd4dcccda119f0ec29fed9740c41d8b57f40d3c18e04e7c4f4c32731f53"} err="failed to get container status \"f7fccdd4dcccda119f0ec29fed9740c41d8b57f40d3c18e04e7c4f4c32731f53\": rpc error: code = NotFound desc = could not find container \"f7fccdd4dcccda119f0ec29fed9740c41d8b57f40d3c18e04e7c4f4c32731f53\": container with ID starting with 
f7fccdd4dcccda119f0ec29fed9740c41d8b57f40d3c18e04e7c4f4c32731f53 not found: ID does not exist" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.756920 4861 scope.go:117] "RemoveContainer" containerID="3dde2559924410ba8c24a16ce9076f6fbf099a131a05e9cdfe45097e54c6ccaa" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.757045 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dde2559924410ba8c24a16ce9076f6fbf099a131a05e9cdfe45097e54c6ccaa"} err="failed to get container status \"3dde2559924410ba8c24a16ce9076f6fbf099a131a05e9cdfe45097e54c6ccaa\": rpc error: code = NotFound desc = could not find container \"3dde2559924410ba8c24a16ce9076f6fbf099a131a05e9cdfe45097e54c6ccaa\": container with ID starting with 3dde2559924410ba8c24a16ce9076f6fbf099a131a05e9cdfe45097e54c6ccaa not found: ID does not exist" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.757063 4861 scope.go:117] "RemoveContainer" containerID="b08cf09ad9865db2e345a7da4039e6e73b3a6dda5e4d2819ca75f7b0f3649642" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.757194 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b08cf09ad9865db2e345a7da4039e6e73b3a6dda5e4d2819ca75f7b0f3649642"} err="failed to get container status \"b08cf09ad9865db2e345a7da4039e6e73b3a6dda5e4d2819ca75f7b0f3649642\": rpc error: code = NotFound desc = could not find container \"b08cf09ad9865db2e345a7da4039e6e73b3a6dda5e4d2819ca75f7b0f3649642\": container with ID starting with b08cf09ad9865db2e345a7da4039e6e73b3a6dda5e4d2819ca75f7b0f3649642 not found: ID does not exist" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.757211 4861 scope.go:117] "RemoveContainer" containerID="e996398f4aa45d1224962e6f79cc441a2cbdc7f500240b52994e4a3444a870ae" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.757335 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e996398f4aa45d1224962e6f79cc441a2cbdc7f500240b52994e4a3444a870ae"} err="failed to get container status \"e996398f4aa45d1224962e6f79cc441a2cbdc7f500240b52994e4a3444a870ae\": rpc error: code = NotFound desc = could not find container \"e996398f4aa45d1224962e6f79cc441a2cbdc7f500240b52994e4a3444a870ae\": container with ID starting with e996398f4aa45d1224962e6f79cc441a2cbdc7f500240b52994e4a3444a870ae not found: ID does not exist" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.757350 4861 scope.go:117] "RemoveContainer" containerID="f7fccdd4dcccda119f0ec29fed9740c41d8b57f40d3c18e04e7c4f4c32731f53" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.757561 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7fccdd4dcccda119f0ec29fed9740c41d8b57f40d3c18e04e7c4f4c32731f53"} err="failed to get container status \"f7fccdd4dcccda119f0ec29fed9740c41d8b57f40d3c18e04e7c4f4c32731f53\": rpc error: code = NotFound desc = could not find container \"f7fccdd4dcccda119f0ec29fed9740c41d8b57f40d3c18e04e7c4f4c32731f53\": container with ID starting with f7fccdd4dcccda119f0ec29fed9740c41d8b57f40d3c18e04e7c4f4c32731f53 not found: ID does not exist" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.757578 4861 scope.go:117] "RemoveContainer" containerID="3dde2559924410ba8c24a16ce9076f6fbf099a131a05e9cdfe45097e54c6ccaa" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.757698 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dde2559924410ba8c24a16ce9076f6fbf099a131a05e9cdfe45097e54c6ccaa"} err="failed to get container status \"3dde2559924410ba8c24a16ce9076f6fbf099a131a05e9cdfe45097e54c6ccaa\": rpc error: code = NotFound desc = could not find container \"3dde2559924410ba8c24a16ce9076f6fbf099a131a05e9cdfe45097e54c6ccaa\": container with ID starting with 3dde2559924410ba8c24a16ce9076f6fbf099a131a05e9cdfe45097e54c6ccaa not found: ID does not 
exist" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.757714 4861 scope.go:117] "RemoveContainer" containerID="b08cf09ad9865db2e345a7da4039e6e73b3a6dda5e4d2819ca75f7b0f3649642" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.757835 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b08cf09ad9865db2e345a7da4039e6e73b3a6dda5e4d2819ca75f7b0f3649642"} err="failed to get container status \"b08cf09ad9865db2e345a7da4039e6e73b3a6dda5e4d2819ca75f7b0f3649642\": rpc error: code = NotFound desc = could not find container \"b08cf09ad9865db2e345a7da4039e6e73b3a6dda5e4d2819ca75f7b0f3649642\": container with ID starting with b08cf09ad9865db2e345a7da4039e6e73b3a6dda5e4d2819ca75f7b0f3649642 not found: ID does not exist" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.757849 4861 scope.go:117] "RemoveContainer" containerID="e996398f4aa45d1224962e6f79cc441a2cbdc7f500240b52994e4a3444a870ae" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.757971 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e996398f4aa45d1224962e6f79cc441a2cbdc7f500240b52994e4a3444a870ae"} err="failed to get container status \"e996398f4aa45d1224962e6f79cc441a2cbdc7f500240b52994e4a3444a870ae\": rpc error: code = NotFound desc = could not find container \"e996398f4aa45d1224962e6f79cc441a2cbdc7f500240b52994e4a3444a870ae\": container with ID starting with e996398f4aa45d1224962e6f79cc441a2cbdc7f500240b52994e4a3444a870ae not found: ID does not exist" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.757987 4861 scope.go:117] "RemoveContainer" containerID="f7fccdd4dcccda119f0ec29fed9740c41d8b57f40d3c18e04e7c4f4c32731f53" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.758115 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7fccdd4dcccda119f0ec29fed9740c41d8b57f40d3c18e04e7c4f4c32731f53"} err="failed to get container status 
\"f7fccdd4dcccda119f0ec29fed9740c41d8b57f40d3c18e04e7c4f4c32731f53\": rpc error: code = NotFound desc = could not find container \"f7fccdd4dcccda119f0ec29fed9740c41d8b57f40d3c18e04e7c4f4c32731f53\": container with ID starting with f7fccdd4dcccda119f0ec29fed9740c41d8b57f40d3c18e04e7c4f4c32731f53 not found: ID does not exist" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.758132 4861 scope.go:117] "RemoveContainer" containerID="3dde2559924410ba8c24a16ce9076f6fbf099a131a05e9cdfe45097e54c6ccaa" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.758254 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dde2559924410ba8c24a16ce9076f6fbf099a131a05e9cdfe45097e54c6ccaa"} err="failed to get container status \"3dde2559924410ba8c24a16ce9076f6fbf099a131a05e9cdfe45097e54c6ccaa\": rpc error: code = NotFound desc = could not find container \"3dde2559924410ba8c24a16ce9076f6fbf099a131a05e9cdfe45097e54c6ccaa\": container with ID starting with 3dde2559924410ba8c24a16ce9076f6fbf099a131a05e9cdfe45097e54c6ccaa not found: ID does not exist" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.758269 4861 scope.go:117] "RemoveContainer" containerID="b08cf09ad9865db2e345a7da4039e6e73b3a6dda5e4d2819ca75f7b0f3649642" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.758388 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b08cf09ad9865db2e345a7da4039e6e73b3a6dda5e4d2819ca75f7b0f3649642"} err="failed to get container status \"b08cf09ad9865db2e345a7da4039e6e73b3a6dda5e4d2819ca75f7b0f3649642\": rpc error: code = NotFound desc = could not find container \"b08cf09ad9865db2e345a7da4039e6e73b3a6dda5e4d2819ca75f7b0f3649642\": container with ID starting with b08cf09ad9865db2e345a7da4039e6e73b3a6dda5e4d2819ca75f7b0f3649642 not found: ID does not exist" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.780128 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-scripts\") pod \"ceilometer-0\" (UID: \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\") " pod="openstack/ceilometer-0" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.780218 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\") " pod="openstack/ceilometer-0" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.780284 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\") " pod="openstack/ceilometer-0" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.780308 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2jkp\" (UniqueName: \"kubernetes.io/projected/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-kube-api-access-k2jkp\") pod \"ceilometer-0\" (UID: \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\") " pod="openstack/ceilometer-0" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.780328 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-config-data\") pod \"ceilometer-0\" (UID: \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\") " pod="openstack/ceilometer-0" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.780371 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-run-httpd\") pod \"ceilometer-0\" (UID: \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\") " pod="openstack/ceilometer-0" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.780394 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-log-httpd\") pod \"ceilometer-0\" (UID: \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\") " pod="openstack/ceilometer-0" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.881472 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\") " pod="openstack/ceilometer-0" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.881552 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\") " pod="openstack/ceilometer-0" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.881578 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2jkp\" (UniqueName: \"kubernetes.io/projected/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-kube-api-access-k2jkp\") pod \"ceilometer-0\" (UID: \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\") " pod="openstack/ceilometer-0" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.881599 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-config-data\") pod \"ceilometer-0\" (UID: \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\") " pod="openstack/ceilometer-0" Feb 19 13:31:29 
crc kubenswrapper[4861]: I0219 13:31:29.881638 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-run-httpd\") pod \"ceilometer-0\" (UID: \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\") " pod="openstack/ceilometer-0" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.881665 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-log-httpd\") pod \"ceilometer-0\" (UID: \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\") " pod="openstack/ceilometer-0" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.881701 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-scripts\") pod \"ceilometer-0\" (UID: \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\") " pod="openstack/ceilometer-0" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.882306 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-run-httpd\") pod \"ceilometer-0\" (UID: \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\") " pod="openstack/ceilometer-0" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.882400 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-log-httpd\") pod \"ceilometer-0\" (UID: \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\") " pod="openstack/ceilometer-0" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.885403 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\") " pod="openstack/ceilometer-0" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.885521 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-scripts\") pod \"ceilometer-0\" (UID: \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\") " pod="openstack/ceilometer-0" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.885538 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\") " pod="openstack/ceilometer-0" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.887241 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-config-data\") pod \"ceilometer-0\" (UID: \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\") " pod="openstack/ceilometer-0" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.897116 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2jkp\" (UniqueName: \"kubernetes.io/projected/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-kube-api-access-k2jkp\") pod \"ceilometer-0\" (UID: \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\") " pod="openstack/ceilometer-0" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.990131 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="274eb243-db3d-4ad8-b2cd-2ff23017ac82" path="/var/lib/kubelet/pods/274eb243-db3d-4ad8-b2cd-2ff23017ac82/volumes" Feb 19 13:31:29 crc kubenswrapper[4861]: I0219 13:31:29.991014 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fcfada1-405f-4c06-a7a8-749daafe8297" path="/var/lib/kubelet/pods/9fcfada1-405f-4c06-a7a8-749daafe8297/volumes" Feb 19 13:31:30 crc 
kubenswrapper[4861]: I0219 13:31:30.043855 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:31:30 crc kubenswrapper[4861]: I0219 13:31:30.572249 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:31:30 crc kubenswrapper[4861]: I0219 13:31:30.664994 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce","Type":"ContainerStarted","Data":"e7222908f20fd6c3a3f944cd7932218f984a2b87a89fb92d25d57357faf003fc"} Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.552521 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.562888 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.615582 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bebc25b-fd66-4fca-9a39-d54671b7492d-config-data\") pod \"3bebc25b-fd66-4fca-9a39-d54671b7492d\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") " Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.615669 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"3bebc25b-fd66-4fca-9a39-d54671b7492d\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") " Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.615720 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bebc25b-fd66-4fca-9a39-d54671b7492d-logs\") pod \"3bebc25b-fd66-4fca-9a39-d54671b7492d\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") " Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.615741 4861 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7jx4\" (UniqueName: \"kubernetes.io/projected/3bebc25b-fd66-4fca-9a39-d54671b7492d-kube-api-access-b7jx4\") pod \"3bebc25b-fd66-4fca-9a39-d54671b7492d\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") " Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.615760 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3bebc25b-fd66-4fca-9a39-d54671b7492d-httpd-run\") pod \"3bebc25b-fd66-4fca-9a39-d54671b7492d\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") " Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.615799 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bebc25b-fd66-4fca-9a39-d54671b7492d-scripts\") pod \"3bebc25b-fd66-4fca-9a39-d54671b7492d\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") " Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.615828 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bebc25b-fd66-4fca-9a39-d54671b7492d-combined-ca-bundle\") pod \"3bebc25b-fd66-4fca-9a39-d54671b7492d\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") " Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.615935 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bebc25b-fd66-4fca-9a39-d54671b7492d-public-tls-certs\") pod \"3bebc25b-fd66-4fca-9a39-d54671b7492d\" (UID: \"3bebc25b-fd66-4fca-9a39-d54671b7492d\") " Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.621808 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bebc25b-fd66-4fca-9a39-d54671b7492d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3bebc25b-fd66-4fca-9a39-d54671b7492d" (UID: 
"3bebc25b-fd66-4fca-9a39-d54671b7492d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.623952 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "3bebc25b-fd66-4fca-9a39-d54671b7492d" (UID: "3bebc25b-fd66-4fca-9a39-d54671b7492d"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.627088 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bebc25b-fd66-4fca-9a39-d54671b7492d-kube-api-access-b7jx4" (OuterVolumeSpecName: "kube-api-access-b7jx4") pod "3bebc25b-fd66-4fca-9a39-d54671b7492d" (UID: "3bebc25b-fd66-4fca-9a39-d54671b7492d"). InnerVolumeSpecName "kube-api-access-b7jx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.627771 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bebc25b-fd66-4fca-9a39-d54671b7492d-logs" (OuterVolumeSpecName: "logs") pod "3bebc25b-fd66-4fca-9a39-d54671b7492d" (UID: "3bebc25b-fd66-4fca-9a39-d54671b7492d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.644566 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bebc25b-fd66-4fca-9a39-d54671b7492d-scripts" (OuterVolumeSpecName: "scripts") pod "3bebc25b-fd66-4fca-9a39-d54671b7492d" (UID: "3bebc25b-fd66-4fca-9a39-d54671b7492d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.658098 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bebc25b-fd66-4fca-9a39-d54671b7492d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3bebc25b-fd66-4fca-9a39-d54671b7492d" (UID: "3bebc25b-fd66-4fca-9a39-d54671b7492d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.699223 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce","Type":"ContainerStarted","Data":"38afb6f502107d3a3b2b50e43e9293b58f354dc95beabb063b0bc1395aba92a0"} Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.705357 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bebc25b-fd66-4fca-9a39-d54671b7492d-config-data" (OuterVolumeSpecName: "config-data") pod "3bebc25b-fd66-4fca-9a39-d54671b7492d" (UID: "3bebc25b-fd66-4fca-9a39-d54671b7492d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.707861 4861 generic.go:334] "Generic (PLEG): container finished" podID="3bebc25b-fd66-4fca-9a39-d54671b7492d" containerID="5269f73f32021b13592b2d4e295c1866edb2dd35493225237d3e3049293a6e74" exitCode=0 Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.708020 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3bebc25b-fd66-4fca-9a39-d54671b7492d","Type":"ContainerDied","Data":"5269f73f32021b13592b2d4e295c1866edb2dd35493225237d3e3049293a6e74"} Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.708098 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3bebc25b-fd66-4fca-9a39-d54671b7492d","Type":"ContainerDied","Data":"40bfd4c50d461cd0f765a1acb134bfc938422607370f99e1214bf615f877c4c4"} Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.708169 4861 scope.go:117] "RemoveContainer" containerID="5269f73f32021b13592b2d4e295c1866edb2dd35493225237d3e3049293a6e74" Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.708345 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.717771 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bebc25b-fd66-4fca-9a39-d54671b7492d-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.717812 4861 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.717823 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bebc25b-fd66-4fca-9a39-d54671b7492d-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.717832 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7jx4\" (UniqueName: \"kubernetes.io/projected/3bebc25b-fd66-4fca-9a39-d54671b7492d-kube-api-access-b7jx4\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.717842 4861 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3bebc25b-fd66-4fca-9a39-d54671b7492d-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.717852 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bebc25b-fd66-4fca-9a39-d54671b7492d-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.717861 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bebc25b-fd66-4fca-9a39-d54671b7492d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.734253 4861 operation_generator.go:917] 
UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.734698 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bebc25b-fd66-4fca-9a39-d54671b7492d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3bebc25b-fd66-4fca-9a39-d54671b7492d" (UID: "3bebc25b-fd66-4fca-9a39-d54671b7492d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.757860 4861 scope.go:117] "RemoveContainer" containerID="5b5531cdf673855e434a4cf5ab1491222c90ed35fa38ff9e1bf2292d59c66214" Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.781830 4861 scope.go:117] "RemoveContainer" containerID="5269f73f32021b13592b2d4e295c1866edb2dd35493225237d3e3049293a6e74" Feb 19 13:31:31 crc kubenswrapper[4861]: E0219 13:31:31.783062 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5269f73f32021b13592b2d4e295c1866edb2dd35493225237d3e3049293a6e74\": container with ID starting with 5269f73f32021b13592b2d4e295c1866edb2dd35493225237d3e3049293a6e74 not found: ID does not exist" containerID="5269f73f32021b13592b2d4e295c1866edb2dd35493225237d3e3049293a6e74" Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.783099 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5269f73f32021b13592b2d4e295c1866edb2dd35493225237d3e3049293a6e74"} err="failed to get container status \"5269f73f32021b13592b2d4e295c1866edb2dd35493225237d3e3049293a6e74\": rpc error: code = NotFound desc = could not find container \"5269f73f32021b13592b2d4e295c1866edb2dd35493225237d3e3049293a6e74\": container with ID starting with 5269f73f32021b13592b2d4e295c1866edb2dd35493225237d3e3049293a6e74 not found: ID does not exist" Feb 19 
13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.783122 4861 scope.go:117] "RemoveContainer" containerID="5b5531cdf673855e434a4cf5ab1491222c90ed35fa38ff9e1bf2292d59c66214" Feb 19 13:31:31 crc kubenswrapper[4861]: E0219 13:31:31.783491 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b5531cdf673855e434a4cf5ab1491222c90ed35fa38ff9e1bf2292d59c66214\": container with ID starting with 5b5531cdf673855e434a4cf5ab1491222c90ed35fa38ff9e1bf2292d59c66214 not found: ID does not exist" containerID="5b5531cdf673855e434a4cf5ab1491222c90ed35fa38ff9e1bf2292d59c66214" Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.783511 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b5531cdf673855e434a4cf5ab1491222c90ed35fa38ff9e1bf2292d59c66214"} err="failed to get container status \"5b5531cdf673855e434a4cf5ab1491222c90ed35fa38ff9e1bf2292d59c66214\": rpc error: code = NotFound desc = could not find container \"5b5531cdf673855e434a4cf5ab1491222c90ed35fa38ff9e1bf2292d59c66214\": container with ID starting with 5b5531cdf673855e434a4cf5ab1491222c90ed35fa38ff9e1bf2292d59c66214 not found: ID does not exist" Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.819510 4861 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bebc25b-fd66-4fca-9a39-d54671b7492d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:31 crc kubenswrapper[4861]: I0219 13:31:31.819538 4861 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.061570 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.069526 4861 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.084713 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 13:31:32 crc kubenswrapper[4861]: E0219 13:31:32.085168 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bebc25b-fd66-4fca-9a39-d54671b7492d" containerName="glance-httpd" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.085190 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bebc25b-fd66-4fca-9a39-d54671b7492d" containerName="glance-httpd" Feb 19 13:31:32 crc kubenswrapper[4861]: E0219 13:31:32.085212 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bebc25b-fd66-4fca-9a39-d54671b7492d" containerName="glance-log" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.085219 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bebc25b-fd66-4fca-9a39-d54671b7492d" containerName="glance-log" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.085387 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bebc25b-fd66-4fca-9a39-d54671b7492d" containerName="glance-httpd" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.085462 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bebc25b-fd66-4fca-9a39-d54671b7492d" containerName="glance-log" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.086350 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.088935 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.089043 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.094302 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.126249 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bce14944-29de-44e7-9ad4-bb056cc6d656-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bce14944-29de-44e7-9ad4-bb056cc6d656\") " pod="openstack/glance-default-external-api-0" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.126296 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bce14944-29de-44e7-9ad4-bb056cc6d656-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bce14944-29de-44e7-9ad4-bb056cc6d656\") " pod="openstack/glance-default-external-api-0" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.126342 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bce14944-29de-44e7-9ad4-bb056cc6d656-config-data\") pod \"glance-default-external-api-0\" (UID: \"bce14944-29de-44e7-9ad4-bb056cc6d656\") " pod="openstack/glance-default-external-api-0" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.126374 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/bce14944-29de-44e7-9ad4-bb056cc6d656-scripts\") pod \"glance-default-external-api-0\" (UID: \"bce14944-29de-44e7-9ad4-bb056cc6d656\") " pod="openstack/glance-default-external-api-0" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.126479 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gtqc\" (UniqueName: \"kubernetes.io/projected/bce14944-29de-44e7-9ad4-bb056cc6d656-kube-api-access-5gtqc\") pod \"glance-default-external-api-0\" (UID: \"bce14944-29de-44e7-9ad4-bb056cc6d656\") " pod="openstack/glance-default-external-api-0" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.126513 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"bce14944-29de-44e7-9ad4-bb056cc6d656\") " pod="openstack/glance-default-external-api-0" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.126556 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bce14944-29de-44e7-9ad4-bb056cc6d656-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bce14944-29de-44e7-9ad4-bb056cc6d656\") " pod="openstack/glance-default-external-api-0" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.126581 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bce14944-29de-44e7-9ad4-bb056cc6d656-logs\") pod \"glance-default-external-api-0\" (UID: \"bce14944-29de-44e7-9ad4-bb056cc6d656\") " pod="openstack/glance-default-external-api-0" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.228619 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bce14944-29de-44e7-9ad4-bb056cc6d656-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bce14944-29de-44e7-9ad4-bb056cc6d656\") " pod="openstack/glance-default-external-api-0" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.228671 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bce14944-29de-44e7-9ad4-bb056cc6d656-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bce14944-29de-44e7-9ad4-bb056cc6d656\") " pod="openstack/glance-default-external-api-0" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.228698 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bce14944-29de-44e7-9ad4-bb056cc6d656-config-data\") pod \"glance-default-external-api-0\" (UID: \"bce14944-29de-44e7-9ad4-bb056cc6d656\") " pod="openstack/glance-default-external-api-0" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.228724 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bce14944-29de-44e7-9ad4-bb056cc6d656-scripts\") pod \"glance-default-external-api-0\" (UID: \"bce14944-29de-44e7-9ad4-bb056cc6d656\") " pod="openstack/glance-default-external-api-0" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.228794 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gtqc\" (UniqueName: \"kubernetes.io/projected/bce14944-29de-44e7-9ad4-bb056cc6d656-kube-api-access-5gtqc\") pod \"glance-default-external-api-0\" (UID: \"bce14944-29de-44e7-9ad4-bb056cc6d656\") " pod="openstack/glance-default-external-api-0" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.228820 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"bce14944-29de-44e7-9ad4-bb056cc6d656\") " pod="openstack/glance-default-external-api-0" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.228834 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bce14944-29de-44e7-9ad4-bb056cc6d656-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bce14944-29de-44e7-9ad4-bb056cc6d656\") " pod="openstack/glance-default-external-api-0" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.228858 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bce14944-29de-44e7-9ad4-bb056cc6d656-logs\") pod \"glance-default-external-api-0\" (UID: \"bce14944-29de-44e7-9ad4-bb056cc6d656\") " pod="openstack/glance-default-external-api-0" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.229350 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bce14944-29de-44e7-9ad4-bb056cc6d656-logs\") pod \"glance-default-external-api-0\" (UID: \"bce14944-29de-44e7-9ad4-bb056cc6d656\") " pod="openstack/glance-default-external-api-0" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.230411 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"bce14944-29de-44e7-9ad4-bb056cc6d656\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.230514 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bce14944-29de-44e7-9ad4-bb056cc6d656-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"bce14944-29de-44e7-9ad4-bb056cc6d656\") " pod="openstack/glance-default-external-api-0" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.233014 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bce14944-29de-44e7-9ad4-bb056cc6d656-scripts\") pod \"glance-default-external-api-0\" (UID: \"bce14944-29de-44e7-9ad4-bb056cc6d656\") " pod="openstack/glance-default-external-api-0" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.235105 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bce14944-29de-44e7-9ad4-bb056cc6d656-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bce14944-29de-44e7-9ad4-bb056cc6d656\") " pod="openstack/glance-default-external-api-0" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.236232 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bce14944-29de-44e7-9ad4-bb056cc6d656-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bce14944-29de-44e7-9ad4-bb056cc6d656\") " pod="openstack/glance-default-external-api-0" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.238092 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bce14944-29de-44e7-9ad4-bb056cc6d656-config-data\") pod \"glance-default-external-api-0\" (UID: \"bce14944-29de-44e7-9ad4-bb056cc6d656\") " pod="openstack/glance-default-external-api-0" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.252281 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gtqc\" (UniqueName: \"kubernetes.io/projected/bce14944-29de-44e7-9ad4-bb056cc6d656-kube-api-access-5gtqc\") pod \"glance-default-external-api-0\" (UID: \"bce14944-29de-44e7-9ad4-bb056cc6d656\") " pod="openstack/glance-default-external-api-0" 
Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.259596 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"bce14944-29de-44e7-9ad4-bb056cc6d656\") " pod="openstack/glance-default-external-api-0" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.403412 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.719957 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce","Type":"ContainerStarted","Data":"f74ddd4acb24114e03cbddeb40f2d0098c3223e3c57590dfbd14852e0820902a"} Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.720201 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce","Type":"ContainerStarted","Data":"69cfab983b383145fc4e9d40244936e702215b794d2f7d72d67cbb11233cbcaf"} Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.721957 4861 generic.go:334] "Generic (PLEG): container finished" podID="e69db72c-2c51-47ba-b2a1-037d6b259176" containerID="c862500249e4abdc0b9a4e4969de8d52ba9eeb7e3d69d071ec6f4c12254fee12" exitCode=0 Feb 19 13:31:32 crc kubenswrapper[4861]: I0219 13:31:32.722000 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e69db72c-2c51-47ba-b2a1-037d6b259176","Type":"ContainerDied","Data":"c862500249e4abdc0b9a4e4969de8d52ba9eeb7e3d69d071ec6f4c12254fee12"} Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.058324 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.193670 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.246716 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e69db72c-2c51-47ba-b2a1-037d6b259176-httpd-run\") pod \"e69db72c-2c51-47ba-b2a1-037d6b259176\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") " Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.246779 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e69db72c-2c51-47ba-b2a1-037d6b259176-logs\") pod \"e69db72c-2c51-47ba-b2a1-037d6b259176\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") " Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.246837 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"e69db72c-2c51-47ba-b2a1-037d6b259176\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") " Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.246873 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ggb7\" (UniqueName: \"kubernetes.io/projected/e69db72c-2c51-47ba-b2a1-037d6b259176-kube-api-access-7ggb7\") pod \"e69db72c-2c51-47ba-b2a1-037d6b259176\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") " Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.246892 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e69db72c-2c51-47ba-b2a1-037d6b259176-internal-tls-certs\") pod \"e69db72c-2c51-47ba-b2a1-037d6b259176\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") " Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.246930 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e69db72c-2c51-47ba-b2a1-037d6b259176-scripts\") pod \"e69db72c-2c51-47ba-b2a1-037d6b259176\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") " Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.246951 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e69db72c-2c51-47ba-b2a1-037d6b259176-combined-ca-bundle\") pod \"e69db72c-2c51-47ba-b2a1-037d6b259176\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") " Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.246975 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e69db72c-2c51-47ba-b2a1-037d6b259176-config-data\") pod \"e69db72c-2c51-47ba-b2a1-037d6b259176\" (UID: \"e69db72c-2c51-47ba-b2a1-037d6b259176\") " Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.247767 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e69db72c-2c51-47ba-b2a1-037d6b259176-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e69db72c-2c51-47ba-b2a1-037d6b259176" (UID: "e69db72c-2c51-47ba-b2a1-037d6b259176"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.248297 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e69db72c-2c51-47ba-b2a1-037d6b259176-logs" (OuterVolumeSpecName: "logs") pod "e69db72c-2c51-47ba-b2a1-037d6b259176" (UID: "e69db72c-2c51-47ba-b2a1-037d6b259176"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.295566 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e69db72c-2c51-47ba-b2a1-037d6b259176-kube-api-access-7ggb7" (OuterVolumeSpecName: "kube-api-access-7ggb7") pod "e69db72c-2c51-47ba-b2a1-037d6b259176" (UID: "e69db72c-2c51-47ba-b2a1-037d6b259176"). InnerVolumeSpecName "kube-api-access-7ggb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.295598 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "e69db72c-2c51-47ba-b2a1-037d6b259176" (UID: "e69db72c-2c51-47ba-b2a1-037d6b259176"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.301141 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e69db72c-2c51-47ba-b2a1-037d6b259176-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e69db72c-2c51-47ba-b2a1-037d6b259176" (UID: "e69db72c-2c51-47ba-b2a1-037d6b259176"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.301189 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e69db72c-2c51-47ba-b2a1-037d6b259176-scripts" (OuterVolumeSpecName: "scripts") pod "e69db72c-2c51-47ba-b2a1-037d6b259176" (UID: "e69db72c-2c51-47ba-b2a1-037d6b259176"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.324148 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e69db72c-2c51-47ba-b2a1-037d6b259176-config-data" (OuterVolumeSpecName: "config-data") pod "e69db72c-2c51-47ba-b2a1-037d6b259176" (UID: "e69db72c-2c51-47ba-b2a1-037d6b259176"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.349120 4861 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.349169 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ggb7\" (UniqueName: \"kubernetes.io/projected/e69db72c-2c51-47ba-b2a1-037d6b259176-kube-api-access-7ggb7\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.349179 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e69db72c-2c51-47ba-b2a1-037d6b259176-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.349189 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e69db72c-2c51-47ba-b2a1-037d6b259176-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.349197 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e69db72c-2c51-47ba-b2a1-037d6b259176-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.349220 4861 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/e69db72c-2c51-47ba-b2a1-037d6b259176-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.349228 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e69db72c-2c51-47ba-b2a1-037d6b259176-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.376635 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e69db72c-2c51-47ba-b2a1-037d6b259176-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e69db72c-2c51-47ba-b2a1-037d6b259176" (UID: "e69db72c-2c51-47ba-b2a1-037d6b259176"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.386265 4861 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.404983 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.459757 4861 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.459809 4861 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e69db72c-2c51-47ba-b2a1-037d6b259176-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.811150 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"e69db72c-2c51-47ba-b2a1-037d6b259176","Type":"ContainerDied","Data":"04b78d7c58cef08bd7aa4566c7122087bddee8a5883fbdf28664670b483dec7b"} Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.811204 4861 scope.go:117] "RemoveContainer" containerID="c862500249e4abdc0b9a4e4969de8d52ba9eeb7e3d69d071ec6f4c12254fee12" Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.811336 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.824959 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bce14944-29de-44e7-9ad4-bb056cc6d656","Type":"ContainerStarted","Data":"7d214cfacb8e7e0527d5e12ab0c121c203d401295edb904324813230f0ea46b9"} Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.864702 4861 scope.go:117] "RemoveContainer" containerID="6a550e43b1af7f7d21f03635868d9dd6ef349c47147f1011f4a120cce15d8b69" Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.883071 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.919468 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.961045 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 13:31:33 crc kubenswrapper[4861]: E0219 13:31:33.961450 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e69db72c-2c51-47ba-b2a1-037d6b259176" containerName="glance-log" Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.961463 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="e69db72c-2c51-47ba-b2a1-037d6b259176" containerName="glance-log" Feb 19 13:31:33 crc kubenswrapper[4861]: E0219 13:31:33.961493 4861 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="e69db72c-2c51-47ba-b2a1-037d6b259176" containerName="glance-httpd" Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.961500 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="e69db72c-2c51-47ba-b2a1-037d6b259176" containerName="glance-httpd" Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.961675 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="e69db72c-2c51-47ba-b2a1-037d6b259176" containerName="glance-log" Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.961691 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="e69db72c-2c51-47ba-b2a1-037d6b259176" containerName="glance-httpd" Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.962568 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.966537 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.966692 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 13:31:33 crc kubenswrapper[4861]: I0219 13:31:33.975654 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.067491 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bebc25b-fd66-4fca-9a39-d54671b7492d" path="/var/lib/kubelet/pods/3bebc25b-fd66-4fca-9a39-d54671b7492d/volumes" Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.068310 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e69db72c-2c51-47ba-b2a1-037d6b259176" path="/var/lib/kubelet/pods/e69db72c-2c51-47ba-b2a1-037d6b259176/volumes" Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.093940 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da21583-02a3-4a99-a05c-976f017fb31c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.093984 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvjtx\" (UniqueName: \"kubernetes.io/projected/1da21583-02a3-4a99-a05c-976f017fb31c-kube-api-access-hvjtx\") pod \"glance-default-internal-api-0\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.094005 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1da21583-02a3-4a99-a05c-976f017fb31c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.094041 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1da21583-02a3-4a99-a05c-976f017fb31c-logs\") pod \"glance-default-internal-api-0\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.094123 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1da21583-02a3-4a99-a05c-976f017fb31c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.094160 
4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da21583-02a3-4a99-a05c-976f017fb31c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.094198 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1da21583-02a3-4a99-a05c-976f017fb31c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.094222 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.198706 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1da21583-02a3-4a99-a05c-976f017fb31c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.204814 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1da21583-02a3-4a99-a05c-976f017fb31c-logs\") pod \"glance-default-internal-api-0\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.205186 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1da21583-02a3-4a99-a05c-976f017fb31c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.205230 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da21583-02a3-4a99-a05c-976f017fb31c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.205294 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1da21583-02a3-4a99-a05c-976f017fb31c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.205334 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.205387 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da21583-02a3-4a99-a05c-976f017fb31c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.205406 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvjtx\" 
(UniqueName: \"kubernetes.io/projected/1da21583-02a3-4a99-a05c-976f017fb31c-kube-api-access-hvjtx\") pod \"glance-default-internal-api-0\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.204748 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1da21583-02a3-4a99-a05c-976f017fb31c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.205899 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1da21583-02a3-4a99-a05c-976f017fb31c-logs\") pod \"glance-default-internal-api-0\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.210652 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1da21583-02a3-4a99-a05c-976f017fb31c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.210901 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.212097 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1da21583-02a3-4a99-a05c-976f017fb31c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.214592 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1da21583-02a3-4a99-a05c-976f017fb31c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.234089 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da21583-02a3-4a99-a05c-976f017fb31c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.253890 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvjtx\" (UniqueName: \"kubernetes.io/projected/1da21583-02a3-4a99-a05c-976f017fb31c-kube-api-access-hvjtx\") pod \"glance-default-internal-api-0\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.257772 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.393642 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.836450 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce","Type":"ContainerStarted","Data":"2e2228cd01971526f07fc9a2ee98edf18c8b8f5bd990d0b59acd0e2274858b31"} Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.837006 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.836635 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce" containerName="sg-core" containerID="cri-o://f74ddd4acb24114e03cbddeb40f2d0098c3223e3c57590dfbd14852e0820902a" gracePeriod=30 Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.836602 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce" containerName="ceilometer-central-agent" containerID="cri-o://38afb6f502107d3a3b2b50e43e9293b58f354dc95beabb063b0bc1395aba92a0" gracePeriod=30 Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.836705 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce" containerName="proxy-httpd" containerID="cri-o://2e2228cd01971526f07fc9a2ee98edf18c8b8f5bd990d0b59acd0e2274858b31" gracePeriod=30 Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.836705 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce" containerName="ceilometer-notification-agent" containerID="cri-o://69cfab983b383145fc4e9d40244936e702215b794d2f7d72d67cbb11233cbcaf" gracePeriod=30 Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.845187 4861 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bce14944-29de-44e7-9ad4-bb056cc6d656","Type":"ContainerStarted","Data":"87675e94528e8f6860c18ad3e351c725b983857e00216b5245b6f315c839cf6f"} Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.845240 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bce14944-29de-44e7-9ad4-bb056cc6d656","Type":"ContainerStarted","Data":"a630b33298ce3a0c3f99e814f4e69c3048e04eafde31a67dbaedf03ba600019a"} Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.888040 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=2.888024899 podStartE2EDuration="2.888024899s" podCreationTimestamp="2026-02-19 13:31:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:31:34.886092256 +0000 UTC m=+1309.547195484" watchObservedRunningTime="2026-02-19 13:31:34.888024899 +0000 UTC m=+1309.549128127" Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.889128 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.893114775 podStartE2EDuration="5.889121377s" podCreationTimestamp="2026-02-19 13:31:29 +0000 UTC" firstStartedPulling="2026-02-19 13:31:30.578307998 +0000 UTC m=+1305.239411226" lastFinishedPulling="2026-02-19 13:31:33.5743146 +0000 UTC m=+1308.235417828" observedRunningTime="2026-02-19 13:31:34.862265214 +0000 UTC m=+1309.523368442" watchObservedRunningTime="2026-02-19 13:31:34.889121377 +0000 UTC m=+1309.550224605" Feb 19 13:31:34 crc kubenswrapper[4861]: I0219 13:31:34.986759 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 13:31:35 crc kubenswrapper[4861]: W0219 13:31:35.002542 4861 manager.go:1169] Failed 
to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1da21583_02a3_4a99_a05c_976f017fb31c.slice/crio-ee863a1e4655f1635d841df5adf694e972df1bca70fb37cec46e268037b5e258 WatchSource:0}: Error finding container ee863a1e4655f1635d841df5adf694e972df1bca70fb37cec46e268037b5e258: Status 404 returned error can't find the container with id ee863a1e4655f1635d841df5adf694e972df1bca70fb37cec46e268037b5e258 Feb 19 13:31:35 crc kubenswrapper[4861]: I0219 13:31:35.855817 4861 generic.go:334] "Generic (PLEG): container finished" podID="a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce" containerID="2e2228cd01971526f07fc9a2ee98edf18c8b8f5bd990d0b59acd0e2274858b31" exitCode=0 Feb 19 13:31:35 crc kubenswrapper[4861]: I0219 13:31:35.856481 4861 generic.go:334] "Generic (PLEG): container finished" podID="a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce" containerID="f74ddd4acb24114e03cbddeb40f2d0098c3223e3c57590dfbd14852e0820902a" exitCode=2 Feb 19 13:31:35 crc kubenswrapper[4861]: I0219 13:31:35.856496 4861 generic.go:334] "Generic (PLEG): container finished" podID="a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce" containerID="69cfab983b383145fc4e9d40244936e702215b794d2f7d72d67cbb11233cbcaf" exitCode=0 Feb 19 13:31:35 crc kubenswrapper[4861]: I0219 13:31:35.855883 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce","Type":"ContainerDied","Data":"2e2228cd01971526f07fc9a2ee98edf18c8b8f5bd990d0b59acd0e2274858b31"} Feb 19 13:31:35 crc kubenswrapper[4861]: I0219 13:31:35.856576 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce","Type":"ContainerDied","Data":"f74ddd4acb24114e03cbddeb40f2d0098c3223e3c57590dfbd14852e0820902a"} Feb 19 13:31:35 crc kubenswrapper[4861]: I0219 13:31:35.856594 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce","Type":"ContainerDied","Data":"69cfab983b383145fc4e9d40244936e702215b794d2f7d72d67cbb11233cbcaf"} Feb 19 13:31:35 crc kubenswrapper[4861]: I0219 13:31:35.857894 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1da21583-02a3-4a99-a05c-976f017fb31c","Type":"ContainerStarted","Data":"1a80821a8e4670f6f32d88965fc76093208185ba4852a863d5ea299f7223e873"} Feb 19 13:31:35 crc kubenswrapper[4861]: I0219 13:31:35.857932 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1da21583-02a3-4a99-a05c-976f017fb31c","Type":"ContainerStarted","Data":"ee863a1e4655f1635d841df5adf694e972df1bca70fb37cec46e268037b5e258"} Feb 19 13:31:36 crc kubenswrapper[4861]: I0219 13:31:36.867975 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1da21583-02a3-4a99-a05c-976f017fb31c","Type":"ContainerStarted","Data":"cf188110f03d910f2a512942393ddfa01853575e4682c8b6c95037df3b2b616f"} Feb 19 13:31:36 crc kubenswrapper[4861]: I0219 13:31:36.906131 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.9061169060000003 podStartE2EDuration="3.906116906s" podCreationTimestamp="2026-02-19 13:31:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:31:36.905397956 +0000 UTC m=+1311.566501184" watchObservedRunningTime="2026-02-19 13:31:36.906116906 +0000 UTC m=+1311.567220134" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.511252 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-ae68-account-create-update-qhg57"] Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.513808 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ae68-account-create-update-qhg57" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.515645 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.521788 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-hng7s"] Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.523157 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hng7s" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.531029 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hng7s"] Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.538860 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ae68-account-create-update-qhg57"] Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.603280 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-4xvh5"] Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.604474 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-4xvh5" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.612571 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-4xvh5"] Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.663170 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms42n\" (UniqueName: \"kubernetes.io/projected/5ddffdeb-5390-498e-bed8-e72fe5934034-kube-api-access-ms42n\") pod \"nova-api-db-create-hng7s\" (UID: \"5ddffdeb-5390-498e-bed8-e72fe5934034\") " pod="openstack/nova-api-db-create-hng7s" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.663239 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx8pz\" (UniqueName: \"kubernetes.io/projected/a9d690b0-57b6-4544-9181-32144adaaef5-kube-api-access-dx8pz\") pod \"nova-api-ae68-account-create-update-qhg57\" (UID: \"a9d690b0-57b6-4544-9181-32144adaaef5\") " pod="openstack/nova-api-ae68-account-create-update-qhg57" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.663299 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9d690b0-57b6-4544-9181-32144adaaef5-operator-scripts\") pod \"nova-api-ae68-account-create-update-qhg57\" (UID: \"a9d690b0-57b6-4544-9181-32144adaaef5\") " pod="openstack/nova-api-ae68-account-create-update-qhg57" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.663450 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ddffdeb-5390-498e-bed8-e72fe5934034-operator-scripts\") pod \"nova-api-db-create-hng7s\" (UID: \"5ddffdeb-5390-498e-bed8-e72fe5934034\") " pod="openstack/nova-api-db-create-hng7s" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.711527 
4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-9594-account-create-update-vdzdj"] Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.712823 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9594-account-create-update-vdzdj" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.715275 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.723255 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9594-account-create-update-vdzdj"] Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.765710 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9d690b0-57b6-4544-9181-32144adaaef5-operator-scripts\") pod \"nova-api-ae68-account-create-update-qhg57\" (UID: \"a9d690b0-57b6-4544-9181-32144adaaef5\") " pod="openstack/nova-api-ae68-account-create-update-qhg57" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.765835 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ddffdeb-5390-498e-bed8-e72fe5934034-operator-scripts\") pod \"nova-api-db-create-hng7s\" (UID: \"5ddffdeb-5390-498e-bed8-e72fe5934034\") " pod="openstack/nova-api-db-create-hng7s" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.765943 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qlqz\" (UniqueName: \"kubernetes.io/projected/34b85ee6-f9f6-4f1e-8fc9-23072e437a14-kube-api-access-4qlqz\") pod \"nova-cell0-db-create-4xvh5\" (UID: \"34b85ee6-f9f6-4f1e-8fc9-23072e437a14\") " pod="openstack/nova-cell0-db-create-4xvh5" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.766000 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ms42n\" (UniqueName: \"kubernetes.io/projected/5ddffdeb-5390-498e-bed8-e72fe5934034-kube-api-access-ms42n\") pod \"nova-api-db-create-hng7s\" (UID: \"5ddffdeb-5390-498e-bed8-e72fe5934034\") " pod="openstack/nova-api-db-create-hng7s" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.766064 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx8pz\" (UniqueName: \"kubernetes.io/projected/a9d690b0-57b6-4544-9181-32144adaaef5-kube-api-access-dx8pz\") pod \"nova-api-ae68-account-create-update-qhg57\" (UID: \"a9d690b0-57b6-4544-9181-32144adaaef5\") " pod="openstack/nova-api-ae68-account-create-update-qhg57" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.766121 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34b85ee6-f9f6-4f1e-8fc9-23072e437a14-operator-scripts\") pod \"nova-cell0-db-create-4xvh5\" (UID: \"34b85ee6-f9f6-4f1e-8fc9-23072e437a14\") " pod="openstack/nova-cell0-db-create-4xvh5" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.766848 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ddffdeb-5390-498e-bed8-e72fe5934034-operator-scripts\") pod \"nova-api-db-create-hng7s\" (UID: \"5ddffdeb-5390-498e-bed8-e72fe5934034\") " pod="openstack/nova-api-db-create-hng7s" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.767630 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9d690b0-57b6-4544-9181-32144adaaef5-operator-scripts\") pod \"nova-api-ae68-account-create-update-qhg57\" (UID: \"a9d690b0-57b6-4544-9181-32144adaaef5\") " pod="openstack/nova-api-ae68-account-create-update-qhg57" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.787556 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms42n\" (UniqueName: \"kubernetes.io/projected/5ddffdeb-5390-498e-bed8-e72fe5934034-kube-api-access-ms42n\") pod \"nova-api-db-create-hng7s\" (UID: \"5ddffdeb-5390-498e-bed8-e72fe5934034\") " pod="openstack/nova-api-db-create-hng7s" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.793299 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx8pz\" (UniqueName: \"kubernetes.io/projected/a9d690b0-57b6-4544-9181-32144adaaef5-kube-api-access-dx8pz\") pod \"nova-api-ae68-account-create-update-qhg57\" (UID: \"a9d690b0-57b6-4544-9181-32144adaaef5\") " pod="openstack/nova-api-ae68-account-create-update-qhg57" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.807198 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-92l52"] Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.808257 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-92l52" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.817527 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-92l52"] Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.851124 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ae68-account-create-update-qhg57" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.854439 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-hng7s" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.867703 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2b04719-3c5c-48e9-b2d0-84e8111b020b-operator-scripts\") pod \"nova-cell0-9594-account-create-update-vdzdj\" (UID: \"d2b04719-3c5c-48e9-b2d0-84e8111b020b\") " pod="openstack/nova-cell0-9594-account-create-update-vdzdj" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.867776 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rtsd\" (UniqueName: \"kubernetes.io/projected/d2b04719-3c5c-48e9-b2d0-84e8111b020b-kube-api-access-2rtsd\") pod \"nova-cell0-9594-account-create-update-vdzdj\" (UID: \"d2b04719-3c5c-48e9-b2d0-84e8111b020b\") " pod="openstack/nova-cell0-9594-account-create-update-vdzdj" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.867817 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qlqz\" (UniqueName: \"kubernetes.io/projected/34b85ee6-f9f6-4f1e-8fc9-23072e437a14-kube-api-access-4qlqz\") pod \"nova-cell0-db-create-4xvh5\" (UID: \"34b85ee6-f9f6-4f1e-8fc9-23072e437a14\") " pod="openstack/nova-cell0-db-create-4xvh5" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.868214 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34b85ee6-f9f6-4f1e-8fc9-23072e437a14-operator-scripts\") pod \"nova-cell0-db-create-4xvh5\" (UID: \"34b85ee6-f9f6-4f1e-8fc9-23072e437a14\") " pod="openstack/nova-cell0-db-create-4xvh5" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.869705 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/34b85ee6-f9f6-4f1e-8fc9-23072e437a14-operator-scripts\") pod \"nova-cell0-db-create-4xvh5\" (UID: \"34b85ee6-f9f6-4f1e-8fc9-23072e437a14\") " pod="openstack/nova-cell0-db-create-4xvh5" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.895094 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qlqz\" (UniqueName: \"kubernetes.io/projected/34b85ee6-f9f6-4f1e-8fc9-23072e437a14-kube-api-access-4qlqz\") pod \"nova-cell0-db-create-4xvh5\" (UID: \"34b85ee6-f9f6-4f1e-8fc9-23072e437a14\") " pod="openstack/nova-cell0-db-create-4xvh5" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.916388 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-6dfc-account-create-update-n5s9g"] Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.917529 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6dfc-account-create-update-n5s9g" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.919957 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.925069 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6dfc-account-create-update-n5s9g"] Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.928987 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-4xvh5" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.969567 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/411cd56f-4fb3-4f9b-9cfe-e287f22a4609-operator-scripts\") pod \"nova-cell1-db-create-92l52\" (UID: \"411cd56f-4fb3-4f9b-9cfe-e287f22a4609\") " pod="openstack/nova-cell1-db-create-92l52" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.969603 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b547r\" (UniqueName: \"kubernetes.io/projected/411cd56f-4fb3-4f9b-9cfe-e287f22a4609-kube-api-access-b547r\") pod \"nova-cell1-db-create-92l52\" (UID: \"411cd56f-4fb3-4f9b-9cfe-e287f22a4609\") " pod="openstack/nova-cell1-db-create-92l52" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.974507 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2b04719-3c5c-48e9-b2d0-84e8111b020b-operator-scripts\") pod \"nova-cell0-9594-account-create-update-vdzdj\" (UID: \"d2b04719-3c5c-48e9-b2d0-84e8111b020b\") " pod="openstack/nova-cell0-9594-account-create-update-vdzdj" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.974795 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rtsd\" (UniqueName: \"kubernetes.io/projected/d2b04719-3c5c-48e9-b2d0-84e8111b020b-kube-api-access-2rtsd\") pod \"nova-cell0-9594-account-create-update-vdzdj\" (UID: \"d2b04719-3c5c-48e9-b2d0-84e8111b020b\") " pod="openstack/nova-cell0-9594-account-create-update-vdzdj" Feb 19 13:31:41 crc kubenswrapper[4861]: I0219 13:31:41.975706 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d2b04719-3c5c-48e9-b2d0-84e8111b020b-operator-scripts\") pod \"nova-cell0-9594-account-create-update-vdzdj\" (UID: \"d2b04719-3c5c-48e9-b2d0-84e8111b020b\") " pod="openstack/nova-cell0-9594-account-create-update-vdzdj" Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.006026 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rtsd\" (UniqueName: \"kubernetes.io/projected/d2b04719-3c5c-48e9-b2d0-84e8111b020b-kube-api-access-2rtsd\") pod \"nova-cell0-9594-account-create-update-vdzdj\" (UID: \"d2b04719-3c5c-48e9-b2d0-84e8111b020b\") " pod="openstack/nova-cell0-9594-account-create-update-vdzdj" Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.027346 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9594-account-create-update-vdzdj" Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.076980 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xws2x\" (UniqueName: \"kubernetes.io/projected/0543bf80-4d09-4c45-897d-3b2ae4291861-kube-api-access-xws2x\") pod \"nova-cell1-6dfc-account-create-update-n5s9g\" (UID: \"0543bf80-4d09-4c45-897d-3b2ae4291861\") " pod="openstack/nova-cell1-6dfc-account-create-update-n5s9g" Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.077065 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0543bf80-4d09-4c45-897d-3b2ae4291861-operator-scripts\") pod \"nova-cell1-6dfc-account-create-update-n5s9g\" (UID: \"0543bf80-4d09-4c45-897d-3b2ae4291861\") " pod="openstack/nova-cell1-6dfc-account-create-update-n5s9g" Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.077174 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/411cd56f-4fb3-4f9b-9cfe-e287f22a4609-operator-scripts\") pod \"nova-cell1-db-create-92l52\" (UID: \"411cd56f-4fb3-4f9b-9cfe-e287f22a4609\") " pod="openstack/nova-cell1-db-create-92l52" Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.077195 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b547r\" (UniqueName: \"kubernetes.io/projected/411cd56f-4fb3-4f9b-9cfe-e287f22a4609-kube-api-access-b547r\") pod \"nova-cell1-db-create-92l52\" (UID: \"411cd56f-4fb3-4f9b-9cfe-e287f22a4609\") " pod="openstack/nova-cell1-db-create-92l52" Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.081320 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/411cd56f-4fb3-4f9b-9cfe-e287f22a4609-operator-scripts\") pod \"nova-cell1-db-create-92l52\" (UID: \"411cd56f-4fb3-4f9b-9cfe-e287f22a4609\") " pod="openstack/nova-cell1-db-create-92l52" Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.097477 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b547r\" (UniqueName: \"kubernetes.io/projected/411cd56f-4fb3-4f9b-9cfe-e287f22a4609-kube-api-access-b547r\") pod \"nova-cell1-db-create-92l52\" (UID: \"411cd56f-4fb3-4f9b-9cfe-e287f22a4609\") " pod="openstack/nova-cell1-db-create-92l52" Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.172154 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-92l52" Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.178579 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xws2x\" (UniqueName: \"kubernetes.io/projected/0543bf80-4d09-4c45-897d-3b2ae4291861-kube-api-access-xws2x\") pod \"nova-cell1-6dfc-account-create-update-n5s9g\" (UID: \"0543bf80-4d09-4c45-897d-3b2ae4291861\") " pod="openstack/nova-cell1-6dfc-account-create-update-n5s9g" Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.178662 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0543bf80-4d09-4c45-897d-3b2ae4291861-operator-scripts\") pod \"nova-cell1-6dfc-account-create-update-n5s9g\" (UID: \"0543bf80-4d09-4c45-897d-3b2ae4291861\") " pod="openstack/nova-cell1-6dfc-account-create-update-n5s9g" Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.179340 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0543bf80-4d09-4c45-897d-3b2ae4291861-operator-scripts\") pod \"nova-cell1-6dfc-account-create-update-n5s9g\" (UID: \"0543bf80-4d09-4c45-897d-3b2ae4291861\") " pod="openstack/nova-cell1-6dfc-account-create-update-n5s9g" Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.195043 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xws2x\" (UniqueName: \"kubernetes.io/projected/0543bf80-4d09-4c45-897d-3b2ae4291861-kube-api-access-xws2x\") pod \"nova-cell1-6dfc-account-create-update-n5s9g\" (UID: \"0543bf80-4d09-4c45-897d-3b2ae4291861\") " pod="openstack/nova-cell1-6dfc-account-create-update-n5s9g" Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.346714 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hng7s"] Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.346871 4861 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6dfc-account-create-update-n5s9g" Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.404562 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.404633 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.444403 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ae68-account-create-update-qhg57"] Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.453718 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.498445 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.608590 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9594-account-create-update-vdzdj"] Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.616392 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-4xvh5"] Feb 19 13:31:42 crc kubenswrapper[4861]: W0219 13:31:42.622772 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2b04719_3c5c_48e9_b2d0_84e8111b020b.slice/crio-f9b9bd6f9b0f4cb75d6c1a47c44214ecc2e98fb0deb05d754829f9bb9214de32 WatchSource:0}: Error finding container f9b9bd6f9b0f4cb75d6c1a47c44214ecc2e98fb0deb05d754829f9bb9214de32: Status 404 returned error can't find the container with id f9b9bd6f9b0f4cb75d6c1a47c44214ecc2e98fb0deb05d754829f9bb9214de32 Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.728097 4861 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-92l52"] Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.907100 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6dfc-account-create-update-n5s9g"] Feb 19 13:31:42 crc kubenswrapper[4861]: W0219 13:31:42.940731 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0543bf80_4d09_4c45_897d_3b2ae4291861.slice/crio-64f85be0bea461a4207e38b5a90a66445fbb4cf3374c359218c3a2137dbf7e32 WatchSource:0}: Error finding container 64f85be0bea461a4207e38b5a90a66445fbb4cf3374c359218c3a2137dbf7e32: Status 404 returned error can't find the container with id 64f85be0bea461a4207e38b5a90a66445fbb4cf3374c359218c3a2137dbf7e32 Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.941122 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9594-account-create-update-vdzdj" event={"ID":"d2b04719-3c5c-48e9-b2d0-84e8111b020b","Type":"ContainerStarted","Data":"6379d90fb7d2e327bbfc0bbe51c70ca3c8fdf3884260c546325c64753da97180"} Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.941172 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9594-account-create-update-vdzdj" event={"ID":"d2b04719-3c5c-48e9-b2d0-84e8111b020b","Type":"ContainerStarted","Data":"f9b9bd6f9b0f4cb75d6c1a47c44214ecc2e98fb0deb05d754829f9bb9214de32"} Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.943587 4861 generic.go:334] "Generic (PLEG): container finished" podID="5ddffdeb-5390-498e-bed8-e72fe5934034" containerID="e7d391aacd8499dedae619c2280ad3ec2ac0938d27356e8042553e2777b7c08a" exitCode=0 Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.943661 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hng7s" 
event={"ID":"5ddffdeb-5390-498e-bed8-e72fe5934034","Type":"ContainerDied","Data":"e7d391aacd8499dedae619c2280ad3ec2ac0938d27356e8042553e2777b7c08a"} Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.943680 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hng7s" event={"ID":"5ddffdeb-5390-498e-bed8-e72fe5934034","Type":"ContainerStarted","Data":"c32a3d848f76cf7c1ad827ae4957b4e19d33e188ad2fe3fc72cd4113a5b5ee8f"} Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.945128 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4xvh5" event={"ID":"34b85ee6-f9f6-4f1e-8fc9-23072e437a14","Type":"ContainerStarted","Data":"0c061181cb17f27f6ef8b1302359d02e2616c5bc5016fb409d477a8a73115613"} Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.945162 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4xvh5" event={"ID":"34b85ee6-f9f6-4f1e-8fc9-23072e437a14","Type":"ContainerStarted","Data":"86d03f9877a302613d02e0647c9e08add5a9daf43faf279a42b790a09fcdf0c6"} Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.946033 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-92l52" event={"ID":"411cd56f-4fb3-4f9b-9cfe-e287f22a4609","Type":"ContainerStarted","Data":"f456e9cbe8ce9bfb9aedf4df9a26b3a3c2600950bcf49d4ed5994dc477702e04"} Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.949468 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ae68-account-create-update-qhg57" event={"ID":"a9d690b0-57b6-4544-9181-32144adaaef5","Type":"ContainerStarted","Data":"a508d171f352d010ce973346dc81c47126d3ac2c349161f151df000ba6cd1e90"} Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.949626 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ae68-account-create-update-qhg57" 
event={"ID":"a9d690b0-57b6-4544-9181-32144adaaef5","Type":"ContainerStarted","Data":"1818d2160903abcb5e1911cefa13158edb7a212a9c2e980166a29e5ab20e5693"} Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.949693 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.949763 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 13:31:42 crc kubenswrapper[4861]: I0219 13:31:42.963971 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-9594-account-create-update-vdzdj" podStartSLOduration=1.9639505339999999 podStartE2EDuration="1.963950534s" podCreationTimestamp="2026-02-19 13:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:31:42.962489924 +0000 UTC m=+1317.623593162" watchObservedRunningTime="2026-02-19 13:31:42.963950534 +0000 UTC m=+1317.625053762" Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.010761 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-4xvh5" podStartSLOduration=2.010728893 podStartE2EDuration="2.010728893s" podCreationTimestamp="2026-02-19 13:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:31:43.005465861 +0000 UTC m=+1317.666569089" watchObservedRunningTime="2026-02-19 13:31:43.010728893 +0000 UTC m=+1317.671832121" Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.035759 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-ae68-account-create-update-qhg57" podStartSLOduration=2.035732507 podStartE2EDuration="2.035732507s" podCreationTimestamp="2026-02-19 13:31:41 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:31:43.018990576 +0000 UTC m=+1317.680093814" watchObservedRunningTime="2026-02-19 13:31:43.035732507 +0000 UTC m=+1317.696835735" Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.543395 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.645925 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-sg-core-conf-yaml\") pod \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\" (UID: \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\") " Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.646767 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2jkp\" (UniqueName: \"kubernetes.io/projected/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-kube-api-access-k2jkp\") pod \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\" (UID: \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\") " Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.646948 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-log-httpd\") pod \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\" (UID: \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\") " Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.646991 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-config-data\") pod \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\" (UID: \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\") " Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.647009 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-run-httpd\") pod \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\" (UID: \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\") " Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.647060 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-scripts\") pod \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\" (UID: \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\") " Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.647076 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-combined-ca-bundle\") pod \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\" (UID: \"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce\") " Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.647603 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce" (UID: "a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.648177 4861 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.648874 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce" (UID: "a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.654964 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-scripts" (OuterVolumeSpecName: "scripts") pod "a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce" (UID: "a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.654979 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-kube-api-access-k2jkp" (OuterVolumeSpecName: "kube-api-access-k2jkp") pod "a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce" (UID: "a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce"). InnerVolumeSpecName "kube-api-access-k2jkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.692548 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce" (UID: "a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.724806 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce" (UID: "a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.742323 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-config-data" (OuterVolumeSpecName: "config-data") pod "a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce" (UID: "a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.750277 4861 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.750305 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.750316 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.750324 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.750334 4861 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.750342 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2jkp\" (UniqueName: 
\"kubernetes.io/projected/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce-kube-api-access-k2jkp\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.982628 4861 generic.go:334] "Generic (PLEG): container finished" podID="d2b04719-3c5c-48e9-b2d0-84e8111b020b" containerID="6379d90fb7d2e327bbfc0bbe51c70ca3c8fdf3884260c546325c64753da97180" exitCode=0 Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.986269 4861 generic.go:334] "Generic (PLEG): container finished" podID="34b85ee6-f9f6-4f1e-8fc9-23072e437a14" containerID="0c061181cb17f27f6ef8b1302359d02e2616c5bc5016fb409d477a8a73115613" exitCode=0 Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.989979 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9594-account-create-update-vdzdj" event={"ID":"d2b04719-3c5c-48e9-b2d0-84e8111b020b","Type":"ContainerDied","Data":"6379d90fb7d2e327bbfc0bbe51c70ca3c8fdf3884260c546325c64753da97180"} Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.990015 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4xvh5" event={"ID":"34b85ee6-f9f6-4f1e-8fc9-23072e437a14","Type":"ContainerDied","Data":"0c061181cb17f27f6ef8b1302359d02e2616c5bc5016fb409d477a8a73115613"} Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.996798 4861 generic.go:334] "Generic (PLEG): container finished" podID="a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce" containerID="38afb6f502107d3a3b2b50e43e9293b58f354dc95beabb063b0bc1395aba92a0" exitCode=0 Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.997042 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce","Type":"ContainerDied","Data":"38afb6f502107d3a3b2b50e43e9293b58f354dc95beabb063b0bc1395aba92a0"} Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.997173 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce","Type":"ContainerDied","Data":"e7222908f20fd6c3a3f944cd7932218f984a2b87a89fb92d25d57357faf003fc"} Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.997196 4861 scope.go:117] "RemoveContainer" containerID="2e2228cd01971526f07fc9a2ee98edf18c8b8f5bd990d0b59acd0e2274858b31" Feb 19 13:31:43 crc kubenswrapper[4861]: I0219 13:31:43.998108 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.020458 4861 generic.go:334] "Generic (PLEG): container finished" podID="0543bf80-4d09-4c45-897d-3b2ae4291861" containerID="8354b2098a238ae5e4107f91ec8969097ba6d956d629a96860766ef3f619f4ab" exitCode=0 Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.020633 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6dfc-account-create-update-n5s9g" event={"ID":"0543bf80-4d09-4c45-897d-3b2ae4291861","Type":"ContainerDied","Data":"8354b2098a238ae5e4107f91ec8969097ba6d956d629a96860766ef3f619f4ab"} Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.020656 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6dfc-account-create-update-n5s9g" event={"ID":"0543bf80-4d09-4c45-897d-3b2ae4291861","Type":"ContainerStarted","Data":"64f85be0bea461a4207e38b5a90a66445fbb4cf3374c359218c3a2137dbf7e32"} Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.026528 4861 generic.go:334] "Generic (PLEG): container finished" podID="411cd56f-4fb3-4f9b-9cfe-e287f22a4609" containerID="78fdee9081121e0ab0ba03328fa869438307e3d0a9114e86bc894de3b9889286" exitCode=0 Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.026933 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-92l52" event={"ID":"411cd56f-4fb3-4f9b-9cfe-e287f22a4609","Type":"ContainerDied","Data":"78fdee9081121e0ab0ba03328fa869438307e3d0a9114e86bc894de3b9889286"} Feb 19 13:31:44 
crc kubenswrapper[4861]: I0219 13:31:44.031392 4861 generic.go:334] "Generic (PLEG): container finished" podID="a9d690b0-57b6-4544-9181-32144adaaef5" containerID="a508d171f352d010ce973346dc81c47126d3ac2c349161f151df000ba6cd1e90" exitCode=0 Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.031558 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ae68-account-create-update-qhg57" event={"ID":"a9d690b0-57b6-4544-9181-32144adaaef5","Type":"ContainerDied","Data":"a508d171f352d010ce973346dc81c47126d3ac2c349161f151df000ba6cd1e90"} Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.034219 4861 scope.go:117] "RemoveContainer" containerID="f74ddd4acb24114e03cbddeb40f2d0098c3223e3c57590dfbd14852e0820902a" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.087063 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.098185 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.098358 4861 scope.go:117] "RemoveContainer" containerID="69cfab983b383145fc4e9d40244936e702215b794d2f7d72d67cbb11233cbcaf" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.133149 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:31:44 crc kubenswrapper[4861]: E0219 13:31:44.133784 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce" containerName="ceilometer-notification-agent" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.133806 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce" containerName="ceilometer-notification-agent" Feb 19 13:31:44 crc kubenswrapper[4861]: E0219 13:31:44.133828 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce" containerName="proxy-httpd" Feb 19 
13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.133838 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce" containerName="proxy-httpd" Feb 19 13:31:44 crc kubenswrapper[4861]: E0219 13:31:44.133853 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce" containerName="sg-core" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.133863 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce" containerName="sg-core" Feb 19 13:31:44 crc kubenswrapper[4861]: E0219 13:31:44.133889 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce" containerName="ceilometer-central-agent" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.133899 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce" containerName="ceilometer-central-agent" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.133977 4861 scope.go:117] "RemoveContainer" containerID="38afb6f502107d3a3b2b50e43e9293b58f354dc95beabb063b0bc1395aba92a0" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.134176 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce" containerName="ceilometer-notification-agent" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.134221 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce" containerName="ceilometer-central-agent" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.134239 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce" containerName="sg-core" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.134256 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce" containerName="proxy-httpd" Feb 19 13:31:44 crc 
kubenswrapper[4861]: I0219 13:31:44.137208 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.140730 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.153937 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.157103 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.171098 4861 scope.go:117] "RemoveContainer" containerID="2e2228cd01971526f07fc9a2ee98edf18c8b8f5bd990d0b59acd0e2274858b31" Feb 19 13:31:44 crc kubenswrapper[4861]: E0219 13:31:44.177184 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e2228cd01971526f07fc9a2ee98edf18c8b8f5bd990d0b59acd0e2274858b31\": container with ID starting with 2e2228cd01971526f07fc9a2ee98edf18c8b8f5bd990d0b59acd0e2274858b31 not found: ID does not exist" containerID="2e2228cd01971526f07fc9a2ee98edf18c8b8f5bd990d0b59acd0e2274858b31" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.177240 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e2228cd01971526f07fc9a2ee98edf18c8b8f5bd990d0b59acd0e2274858b31"} err="failed to get container status \"2e2228cd01971526f07fc9a2ee98edf18c8b8f5bd990d0b59acd0e2274858b31\": rpc error: code = NotFound desc = could not find container \"2e2228cd01971526f07fc9a2ee98edf18c8b8f5bd990d0b59acd0e2274858b31\": container with ID starting with 2e2228cd01971526f07fc9a2ee98edf18c8b8f5bd990d0b59acd0e2274858b31 not found: ID does not exist" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.177273 4861 scope.go:117] "RemoveContainer" 
containerID="f74ddd4acb24114e03cbddeb40f2d0098c3223e3c57590dfbd14852e0820902a" Feb 19 13:31:44 crc kubenswrapper[4861]: E0219 13:31:44.180536 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f74ddd4acb24114e03cbddeb40f2d0098c3223e3c57590dfbd14852e0820902a\": container with ID starting with f74ddd4acb24114e03cbddeb40f2d0098c3223e3c57590dfbd14852e0820902a not found: ID does not exist" containerID="f74ddd4acb24114e03cbddeb40f2d0098c3223e3c57590dfbd14852e0820902a" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.180578 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f74ddd4acb24114e03cbddeb40f2d0098c3223e3c57590dfbd14852e0820902a"} err="failed to get container status \"f74ddd4acb24114e03cbddeb40f2d0098c3223e3c57590dfbd14852e0820902a\": rpc error: code = NotFound desc = could not find container \"f74ddd4acb24114e03cbddeb40f2d0098c3223e3c57590dfbd14852e0820902a\": container with ID starting with f74ddd4acb24114e03cbddeb40f2d0098c3223e3c57590dfbd14852e0820902a not found: ID does not exist" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.180609 4861 scope.go:117] "RemoveContainer" containerID="69cfab983b383145fc4e9d40244936e702215b794d2f7d72d67cbb11233cbcaf" Feb 19 13:31:44 crc kubenswrapper[4861]: E0219 13:31:44.180918 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69cfab983b383145fc4e9d40244936e702215b794d2f7d72d67cbb11233cbcaf\": container with ID starting with 69cfab983b383145fc4e9d40244936e702215b794d2f7d72d67cbb11233cbcaf not found: ID does not exist" containerID="69cfab983b383145fc4e9d40244936e702215b794d2f7d72d67cbb11233cbcaf" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.180948 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"69cfab983b383145fc4e9d40244936e702215b794d2f7d72d67cbb11233cbcaf"} err="failed to get container status \"69cfab983b383145fc4e9d40244936e702215b794d2f7d72d67cbb11233cbcaf\": rpc error: code = NotFound desc = could not find container \"69cfab983b383145fc4e9d40244936e702215b794d2f7d72d67cbb11233cbcaf\": container with ID starting with 69cfab983b383145fc4e9d40244936e702215b794d2f7d72d67cbb11233cbcaf not found: ID does not exist" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.180965 4861 scope.go:117] "RemoveContainer" containerID="38afb6f502107d3a3b2b50e43e9293b58f354dc95beabb063b0bc1395aba92a0" Feb 19 13:31:44 crc kubenswrapper[4861]: E0219 13:31:44.181447 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38afb6f502107d3a3b2b50e43e9293b58f354dc95beabb063b0bc1395aba92a0\": container with ID starting with 38afb6f502107d3a3b2b50e43e9293b58f354dc95beabb063b0bc1395aba92a0 not found: ID does not exist" containerID="38afb6f502107d3a3b2b50e43e9293b58f354dc95beabb063b0bc1395aba92a0" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.181488 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38afb6f502107d3a3b2b50e43e9293b58f354dc95beabb063b0bc1395aba92a0"} err="failed to get container status \"38afb6f502107d3a3b2b50e43e9293b58f354dc95beabb063b0bc1395aba92a0\": rpc error: code = NotFound desc = could not find container \"38afb6f502107d3a3b2b50e43e9293b58f354dc95beabb063b0bc1395aba92a0\": container with ID starting with 38afb6f502107d3a3b2b50e43e9293b58f354dc95beabb063b0bc1395aba92a0 not found: ID does not exist" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.261245 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08808261-a739-4e69-8547-a8a20728dcae-run-httpd\") pod \"ceilometer-0\" (UID: 
\"08808261-a739-4e69-8547-a8a20728dcae\") " pod="openstack/ceilometer-0" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.261304 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08808261-a739-4e69-8547-a8a20728dcae-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"08808261-a739-4e69-8547-a8a20728dcae\") " pod="openstack/ceilometer-0" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.261335 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08808261-a739-4e69-8547-a8a20728dcae-log-httpd\") pod \"ceilometer-0\" (UID: \"08808261-a739-4e69-8547-a8a20728dcae\") " pod="openstack/ceilometer-0" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.261358 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08808261-a739-4e69-8547-a8a20728dcae-scripts\") pod \"ceilometer-0\" (UID: \"08808261-a739-4e69-8547-a8a20728dcae\") " pod="openstack/ceilometer-0" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.261390 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08808261-a739-4e69-8547-a8a20728dcae-config-data\") pod \"ceilometer-0\" (UID: \"08808261-a739-4e69-8547-a8a20728dcae\") " pod="openstack/ceilometer-0" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.261467 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prxz8\" (UniqueName: \"kubernetes.io/projected/08808261-a739-4e69-8547-a8a20728dcae-kube-api-access-prxz8\") pod \"ceilometer-0\" (UID: \"08808261-a739-4e69-8547-a8a20728dcae\") " pod="openstack/ceilometer-0" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.261562 
4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/08808261-a739-4e69-8547-a8a20728dcae-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"08808261-a739-4e69-8547-a8a20728dcae\") " pod="openstack/ceilometer-0" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.362971 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08808261-a739-4e69-8547-a8a20728dcae-run-httpd\") pod \"ceilometer-0\" (UID: \"08808261-a739-4e69-8547-a8a20728dcae\") " pod="openstack/ceilometer-0" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.363006 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08808261-a739-4e69-8547-a8a20728dcae-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"08808261-a739-4e69-8547-a8a20728dcae\") " pod="openstack/ceilometer-0" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.363029 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08808261-a739-4e69-8547-a8a20728dcae-log-httpd\") pod \"ceilometer-0\" (UID: \"08808261-a739-4e69-8547-a8a20728dcae\") " pod="openstack/ceilometer-0" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.363047 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08808261-a739-4e69-8547-a8a20728dcae-scripts\") pod \"ceilometer-0\" (UID: \"08808261-a739-4e69-8547-a8a20728dcae\") " pod="openstack/ceilometer-0" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.363073 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08808261-a739-4e69-8547-a8a20728dcae-config-data\") pod \"ceilometer-0\" (UID: 
\"08808261-a739-4e69-8547-a8a20728dcae\") " pod="openstack/ceilometer-0" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.363117 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prxz8\" (UniqueName: \"kubernetes.io/projected/08808261-a739-4e69-8547-a8a20728dcae-kube-api-access-prxz8\") pod \"ceilometer-0\" (UID: \"08808261-a739-4e69-8547-a8a20728dcae\") " pod="openstack/ceilometer-0" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.363185 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/08808261-a739-4e69-8547-a8a20728dcae-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"08808261-a739-4e69-8547-a8a20728dcae\") " pod="openstack/ceilometer-0" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.367836 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/08808261-a739-4e69-8547-a8a20728dcae-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"08808261-a739-4e69-8547-a8a20728dcae\") " pod="openstack/ceilometer-0" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.368104 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08808261-a739-4e69-8547-a8a20728dcae-run-httpd\") pod \"ceilometer-0\" (UID: \"08808261-a739-4e69-8547-a8a20728dcae\") " pod="openstack/ceilometer-0" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.368133 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08808261-a739-4e69-8547-a8a20728dcae-log-httpd\") pod \"ceilometer-0\" (UID: \"08808261-a739-4e69-8547-a8a20728dcae\") " pod="openstack/ceilometer-0" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.368907 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/08808261-a739-4e69-8547-a8a20728dcae-scripts\") pod \"ceilometer-0\" (UID: \"08808261-a739-4e69-8547-a8a20728dcae\") " pod="openstack/ceilometer-0" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.369658 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08808261-a739-4e69-8547-a8a20728dcae-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"08808261-a739-4e69-8547-a8a20728dcae\") " pod="openstack/ceilometer-0" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.372571 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08808261-a739-4e69-8547-a8a20728dcae-config-data\") pod \"ceilometer-0\" (UID: \"08808261-a739-4e69-8547-a8a20728dcae\") " pod="openstack/ceilometer-0" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.393715 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.399744 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.399896 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prxz8\" (UniqueName: \"kubernetes.io/projected/08808261-a739-4e69-8547-a8a20728dcae-kube-api-access-prxz8\") pod \"ceilometer-0\" (UID: \"08808261-a739-4e69-8547-a8a20728dcae\") " pod="openstack/ceilometer-0" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.461116 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.468679 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 13:31:44 crc 
kubenswrapper[4861]: I0219 13:31:44.469803 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.478575 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hng7s" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.566161 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ddffdeb-5390-498e-bed8-e72fe5934034-operator-scripts\") pod \"5ddffdeb-5390-498e-bed8-e72fe5934034\" (UID: \"5ddffdeb-5390-498e-bed8-e72fe5934034\") " Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.566306 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms42n\" (UniqueName: \"kubernetes.io/projected/5ddffdeb-5390-498e-bed8-e72fe5934034-kube-api-access-ms42n\") pod \"5ddffdeb-5390-498e-bed8-e72fe5934034\" (UID: \"5ddffdeb-5390-498e-bed8-e72fe5934034\") " Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.568644 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ddffdeb-5390-498e-bed8-e72fe5934034-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ddffdeb-5390-498e-bed8-e72fe5934034" (UID: "5ddffdeb-5390-498e-bed8-e72fe5934034"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.571487 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ddffdeb-5390-498e-bed8-e72fe5934034-kube-api-access-ms42n" (OuterVolumeSpecName: "kube-api-access-ms42n") pod "5ddffdeb-5390-498e-bed8-e72fe5934034" (UID: "5ddffdeb-5390-498e-bed8-e72fe5934034"). InnerVolumeSpecName "kube-api-access-ms42n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.668603 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ddffdeb-5390-498e-bed8-e72fe5934034-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.668640 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms42n\" (UniqueName: \"kubernetes.io/projected/5ddffdeb-5390-498e-bed8-e72fe5934034-kube-api-access-ms42n\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:44 crc kubenswrapper[4861]: I0219 13:31:44.958642 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.061094 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hng7s" event={"ID":"5ddffdeb-5390-498e-bed8-e72fe5934034","Type":"ContainerDied","Data":"c32a3d848f76cf7c1ad827ae4957b4e19d33e188ad2fe3fc72cd4113a5b5ee8f"} Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.061139 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c32a3d848f76cf7c1ad827ae4957b4e19d33e188ad2fe3fc72cd4113a5b5ee8f" Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.061199 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-hng7s" Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.064594 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08808261-a739-4e69-8547-a8a20728dcae","Type":"ContainerStarted","Data":"2dd79d2e974196f0a0f6c5a8ed1a657833df5da3781df13e925a986b668f29b2"} Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.064686 4861 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.064702 4861 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.065931 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.065998 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.233921 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.256270 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.604405 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6dfc-account-create-update-n5s9g" Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.651289 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-4xvh5" Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.661753 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ae68-account-create-update-qhg57" Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.691973 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9594-account-create-update-vdzdj" Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.702387 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-92l52" Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.789598 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2b04719-3c5c-48e9-b2d0-84e8111b020b-operator-scripts\") pod \"d2b04719-3c5c-48e9-b2d0-84e8111b020b\" (UID: \"d2b04719-3c5c-48e9-b2d0-84e8111b020b\") " Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.789709 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0543bf80-4d09-4c45-897d-3b2ae4291861-operator-scripts\") pod \"0543bf80-4d09-4c45-897d-3b2ae4291861\" (UID: \"0543bf80-4d09-4c45-897d-3b2ae4291861\") " Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.789821 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xws2x\" (UniqueName: \"kubernetes.io/projected/0543bf80-4d09-4c45-897d-3b2ae4291861-kube-api-access-xws2x\") pod \"0543bf80-4d09-4c45-897d-3b2ae4291861\" (UID: \"0543bf80-4d09-4c45-897d-3b2ae4291861\") " Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.789856 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qlqz\" (UniqueName: \"kubernetes.io/projected/34b85ee6-f9f6-4f1e-8fc9-23072e437a14-kube-api-access-4qlqz\") pod \"34b85ee6-f9f6-4f1e-8fc9-23072e437a14\" (UID: \"34b85ee6-f9f6-4f1e-8fc9-23072e437a14\") " Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.789885 4861 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rtsd\" (UniqueName: \"kubernetes.io/projected/d2b04719-3c5c-48e9-b2d0-84e8111b020b-kube-api-access-2rtsd\") pod \"d2b04719-3c5c-48e9-b2d0-84e8111b020b\" (UID: \"d2b04719-3c5c-48e9-b2d0-84e8111b020b\") " Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.789958 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx8pz\" (UniqueName: \"kubernetes.io/projected/a9d690b0-57b6-4544-9181-32144adaaef5-kube-api-access-dx8pz\") pod \"a9d690b0-57b6-4544-9181-32144adaaef5\" (UID: \"a9d690b0-57b6-4544-9181-32144adaaef5\") " Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.790054 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34b85ee6-f9f6-4f1e-8fc9-23072e437a14-operator-scripts\") pod \"34b85ee6-f9f6-4f1e-8fc9-23072e437a14\" (UID: \"34b85ee6-f9f6-4f1e-8fc9-23072e437a14\") " Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.790092 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9d690b0-57b6-4544-9181-32144adaaef5-operator-scripts\") pod \"a9d690b0-57b6-4544-9181-32144adaaef5\" (UID: \"a9d690b0-57b6-4544-9181-32144adaaef5\") " Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.790584 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2b04719-3c5c-48e9-b2d0-84e8111b020b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d2b04719-3c5c-48e9-b2d0-84e8111b020b" (UID: "d2b04719-3c5c-48e9-b2d0-84e8111b020b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.791214 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9d690b0-57b6-4544-9181-32144adaaef5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9d690b0-57b6-4544-9181-32144adaaef5" (UID: "a9d690b0-57b6-4544-9181-32144adaaef5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.791354 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0543bf80-4d09-4c45-897d-3b2ae4291861-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0543bf80-4d09-4c45-897d-3b2ae4291861" (UID: "0543bf80-4d09-4c45-897d-3b2ae4291861"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.792024 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0543bf80-4d09-4c45-897d-3b2ae4291861-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.792045 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9d690b0-57b6-4544-9181-32144adaaef5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.792056 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2b04719-3c5c-48e9-b2d0-84e8111b020b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.792354 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34b85ee6-f9f6-4f1e-8fc9-23072e437a14-operator-scripts" 
(OuterVolumeSpecName: "operator-scripts") pod "34b85ee6-f9f6-4f1e-8fc9-23072e437a14" (UID: "34b85ee6-f9f6-4f1e-8fc9-23072e437a14"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.793792 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9d690b0-57b6-4544-9181-32144adaaef5-kube-api-access-dx8pz" (OuterVolumeSpecName: "kube-api-access-dx8pz") pod "a9d690b0-57b6-4544-9181-32144adaaef5" (UID: "a9d690b0-57b6-4544-9181-32144adaaef5"). InnerVolumeSpecName "kube-api-access-dx8pz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.796925 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0543bf80-4d09-4c45-897d-3b2ae4291861-kube-api-access-xws2x" (OuterVolumeSpecName: "kube-api-access-xws2x") pod "0543bf80-4d09-4c45-897d-3b2ae4291861" (UID: "0543bf80-4d09-4c45-897d-3b2ae4291861"). InnerVolumeSpecName "kube-api-access-xws2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.805449 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34b85ee6-f9f6-4f1e-8fc9-23072e437a14-kube-api-access-4qlqz" (OuterVolumeSpecName: "kube-api-access-4qlqz") pod "34b85ee6-f9f6-4f1e-8fc9-23072e437a14" (UID: "34b85ee6-f9f6-4f1e-8fc9-23072e437a14"). InnerVolumeSpecName "kube-api-access-4qlqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.810651 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2b04719-3c5c-48e9-b2d0-84e8111b020b-kube-api-access-2rtsd" (OuterVolumeSpecName: "kube-api-access-2rtsd") pod "d2b04719-3c5c-48e9-b2d0-84e8111b020b" (UID: "d2b04719-3c5c-48e9-b2d0-84e8111b020b"). 
InnerVolumeSpecName "kube-api-access-2rtsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.893199 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/411cd56f-4fb3-4f9b-9cfe-e287f22a4609-operator-scripts\") pod \"411cd56f-4fb3-4f9b-9cfe-e287f22a4609\" (UID: \"411cd56f-4fb3-4f9b-9cfe-e287f22a4609\") " Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.893325 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b547r\" (UniqueName: \"kubernetes.io/projected/411cd56f-4fb3-4f9b-9cfe-e287f22a4609-kube-api-access-b547r\") pod \"411cd56f-4fb3-4f9b-9cfe-e287f22a4609\" (UID: \"411cd56f-4fb3-4f9b-9cfe-e287f22a4609\") " Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.893662 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/411cd56f-4fb3-4f9b-9cfe-e287f22a4609-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "411cd56f-4fb3-4f9b-9cfe-e287f22a4609" (UID: "411cd56f-4fb3-4f9b-9cfe-e287f22a4609"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.894136 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xws2x\" (UniqueName: \"kubernetes.io/projected/0543bf80-4d09-4c45-897d-3b2ae4291861-kube-api-access-xws2x\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.894158 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qlqz\" (UniqueName: \"kubernetes.io/projected/34b85ee6-f9f6-4f1e-8fc9-23072e437a14-kube-api-access-4qlqz\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.894167 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rtsd\" (UniqueName: \"kubernetes.io/projected/d2b04719-3c5c-48e9-b2d0-84e8111b020b-kube-api-access-2rtsd\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.894177 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx8pz\" (UniqueName: \"kubernetes.io/projected/a9d690b0-57b6-4544-9181-32144adaaef5-kube-api-access-dx8pz\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.894188 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/411cd56f-4fb3-4f9b-9cfe-e287f22a4609-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.894197 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34b85ee6-f9f6-4f1e-8fc9-23072e437a14-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.901259 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/411cd56f-4fb3-4f9b-9cfe-e287f22a4609-kube-api-access-b547r" (OuterVolumeSpecName: "kube-api-access-b547r") 
pod "411cd56f-4fb3-4f9b-9cfe-e287f22a4609" (UID: "411cd56f-4fb3-4f9b-9cfe-e287f22a4609"). InnerVolumeSpecName "kube-api-access-b547r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:31:45 crc kubenswrapper[4861]: I0219 13:31:45.995971 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b547r\" (UniqueName: \"kubernetes.io/projected/411cd56f-4fb3-4f9b-9cfe-e287f22a4609-kube-api-access-b547r\") on node \"crc\" DevicePath \"\"" Feb 19 13:31:46 crc kubenswrapper[4861]: I0219 13:31:46.002676 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce" path="/var/lib/kubelet/pods/a6f048a9-1f06-458f-a8ac-3ce0fc3e3fce/volumes" Feb 19 13:31:46 crc kubenswrapper[4861]: I0219 13:31:46.095858 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9594-account-create-update-vdzdj" event={"ID":"d2b04719-3c5c-48e9-b2d0-84e8111b020b","Type":"ContainerDied","Data":"f9b9bd6f9b0f4cb75d6c1a47c44214ecc2e98fb0deb05d754829f9bb9214de32"} Feb 19 13:31:46 crc kubenswrapper[4861]: I0219 13:31:46.096528 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9b9bd6f9b0f4cb75d6c1a47c44214ecc2e98fb0deb05d754829f9bb9214de32" Feb 19 13:31:46 crc kubenswrapper[4861]: I0219 13:31:46.095886 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9594-account-create-update-vdzdj" Feb 19 13:31:46 crc kubenswrapper[4861]: I0219 13:31:46.097911 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08808261-a739-4e69-8547-a8a20728dcae","Type":"ContainerStarted","Data":"75fb92997e4b9d9c55385da21547c0f0684680d166b3b9530e53a4126c909390"} Feb 19 13:31:46 crc kubenswrapper[4861]: I0219 13:31:46.108271 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-4xvh5" Feb 19 13:31:46 crc kubenswrapper[4861]: I0219 13:31:46.108382 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4xvh5" event={"ID":"34b85ee6-f9f6-4f1e-8fc9-23072e437a14","Type":"ContainerDied","Data":"86d03f9877a302613d02e0647c9e08add5a9daf43faf279a42b790a09fcdf0c6"} Feb 19 13:31:46 crc kubenswrapper[4861]: I0219 13:31:46.108410 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86d03f9877a302613d02e0647c9e08add5a9daf43faf279a42b790a09fcdf0c6" Feb 19 13:31:46 crc kubenswrapper[4861]: I0219 13:31:46.110394 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6dfc-account-create-update-n5s9g" Feb 19 13:31:46 crc kubenswrapper[4861]: I0219 13:31:46.110541 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6dfc-account-create-update-n5s9g" event={"ID":"0543bf80-4d09-4c45-897d-3b2ae4291861","Type":"ContainerDied","Data":"64f85be0bea461a4207e38b5a90a66445fbb4cf3374c359218c3a2137dbf7e32"} Feb 19 13:31:46 crc kubenswrapper[4861]: I0219 13:31:46.110579 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64f85be0bea461a4207e38b5a90a66445fbb4cf3374c359218c3a2137dbf7e32" Feb 19 13:31:46 crc kubenswrapper[4861]: I0219 13:31:46.112657 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-92l52" event={"ID":"411cd56f-4fb3-4f9b-9cfe-e287f22a4609","Type":"ContainerDied","Data":"f456e9cbe8ce9bfb9aedf4df9a26b3a3c2600950bcf49d4ed5994dc477702e04"} Feb 19 13:31:46 crc kubenswrapper[4861]: I0219 13:31:46.112693 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f456e9cbe8ce9bfb9aedf4df9a26b3a3c2600950bcf49d4ed5994dc477702e04" Feb 19 13:31:46 crc kubenswrapper[4861]: I0219 13:31:46.112752 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-92l52" Feb 19 13:31:46 crc kubenswrapper[4861]: I0219 13:31:46.116524 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ae68-account-create-update-qhg57" event={"ID":"a9d690b0-57b6-4544-9181-32144adaaef5","Type":"ContainerDied","Data":"1818d2160903abcb5e1911cefa13158edb7a212a9c2e980166a29e5ab20e5693"} Feb 19 13:31:46 crc kubenswrapper[4861]: I0219 13:31:46.116573 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1818d2160903abcb5e1911cefa13158edb7a212a9c2e980166a29e5ab20e5693" Feb 19 13:31:46 crc kubenswrapper[4861]: I0219 13:31:46.116650 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ae68-account-create-update-qhg57" Feb 19 13:31:46 crc kubenswrapper[4861]: E0219 13:31:46.881764 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod411cd56f_4fb3_4f9b_9cfe_e287f22a4609.slice/crio-f456e9cbe8ce9bfb9aedf4df9a26b3a3c2600950bcf49d4ed5994dc477702e04\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0543bf80_4d09_4c45_897d_3b2ae4291861.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod411cd56f_4fb3_4f9b_9cfe_e287f22a4609.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34b85ee6_f9f6_4f1e_8fc9_23072e437a14.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9d690b0_57b6_4544_9181_32144adaaef5.slice\": RecentStats: unable to find data in memory cache]" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.138747 4861 prober_manager.go:312] "Failed to trigger a 
manual run" probe="Readiness" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.139065 4861 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.175541 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4q89d"] Feb 19 13:31:47 crc kubenswrapper[4861]: E0219 13:31:47.175909 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="411cd56f-4fb3-4f9b-9cfe-e287f22a4609" containerName="mariadb-database-create" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.175926 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="411cd56f-4fb3-4f9b-9cfe-e287f22a4609" containerName="mariadb-database-create" Feb 19 13:31:47 crc kubenswrapper[4861]: E0219 13:31:47.175939 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ddffdeb-5390-498e-bed8-e72fe5934034" containerName="mariadb-database-create" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.175945 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ddffdeb-5390-498e-bed8-e72fe5934034" containerName="mariadb-database-create" Feb 19 13:31:47 crc kubenswrapper[4861]: E0219 13:31:47.175960 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b85ee6-f9f6-4f1e-8fc9-23072e437a14" containerName="mariadb-database-create" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.175966 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b85ee6-f9f6-4f1e-8fc9-23072e437a14" containerName="mariadb-database-create" Feb 19 13:31:47 crc kubenswrapper[4861]: E0219 13:31:47.175979 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2b04719-3c5c-48e9-b2d0-84e8111b020b" containerName="mariadb-account-create-update" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.175987 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2b04719-3c5c-48e9-b2d0-84e8111b020b" 
containerName="mariadb-account-create-update" Feb 19 13:31:47 crc kubenswrapper[4861]: E0219 13:31:47.176002 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0543bf80-4d09-4c45-897d-3b2ae4291861" containerName="mariadb-account-create-update" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.176009 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="0543bf80-4d09-4c45-897d-3b2ae4291861" containerName="mariadb-account-create-update" Feb 19 13:31:47 crc kubenswrapper[4861]: E0219 13:31:47.176026 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9d690b0-57b6-4544-9181-32144adaaef5" containerName="mariadb-account-create-update" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.176034 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9d690b0-57b6-4544-9181-32144adaaef5" containerName="mariadb-account-create-update" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.176192 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2b04719-3c5c-48e9-b2d0-84e8111b020b" containerName="mariadb-account-create-update" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.176204 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ddffdeb-5390-498e-bed8-e72fe5934034" containerName="mariadb-database-create" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.176213 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9d690b0-57b6-4544-9181-32144adaaef5" containerName="mariadb-account-create-update" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.176225 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b85ee6-f9f6-4f1e-8fc9-23072e437a14" containerName="mariadb-database-create" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.176235 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="411cd56f-4fb3-4f9b-9cfe-e287f22a4609" containerName="mariadb-database-create" Feb 19 13:31:47 crc 
kubenswrapper[4861]: I0219 13:31:47.176246 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="0543bf80-4d09-4c45-897d-3b2ae4291861" containerName="mariadb-account-create-update" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.177781 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4q89d" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.180956 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.181192 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-t9k9n" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.181544 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.188121 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4q89d"] Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.288011 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.290880 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.323960 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffcfa3dd-97f3-4975-bb69-25a4031896a7-config-data\") pod \"nova-cell0-conductor-db-sync-4q89d\" (UID: \"ffcfa3dd-97f3-4975-bb69-25a4031896a7\") " pod="openstack/nova-cell0-conductor-db-sync-4q89d" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.324023 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffcfa3dd-97f3-4975-bb69-25a4031896a7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4q89d\" (UID: \"ffcfa3dd-97f3-4975-bb69-25a4031896a7\") " pod="openstack/nova-cell0-conductor-db-sync-4q89d" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.324076 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffcfa3dd-97f3-4975-bb69-25a4031896a7-scripts\") pod \"nova-cell0-conductor-db-sync-4q89d\" (UID: \"ffcfa3dd-97f3-4975-bb69-25a4031896a7\") " pod="openstack/nova-cell0-conductor-db-sync-4q89d" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.324116 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlbxc\" (UniqueName: \"kubernetes.io/projected/ffcfa3dd-97f3-4975-bb69-25a4031896a7-kube-api-access-rlbxc\") pod \"nova-cell0-conductor-db-sync-4q89d\" (UID: \"ffcfa3dd-97f3-4975-bb69-25a4031896a7\") " pod="openstack/nova-cell0-conductor-db-sync-4q89d" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.426485 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffcfa3dd-97f3-4975-bb69-25a4031896a7-config-data\") pod \"nova-cell0-conductor-db-sync-4q89d\" (UID: \"ffcfa3dd-97f3-4975-bb69-25a4031896a7\") " pod="openstack/nova-cell0-conductor-db-sync-4q89d" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.426544 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffcfa3dd-97f3-4975-bb69-25a4031896a7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4q89d\" (UID: \"ffcfa3dd-97f3-4975-bb69-25a4031896a7\") " pod="openstack/nova-cell0-conductor-db-sync-4q89d" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.426586 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffcfa3dd-97f3-4975-bb69-25a4031896a7-scripts\") pod \"nova-cell0-conductor-db-sync-4q89d\" (UID: \"ffcfa3dd-97f3-4975-bb69-25a4031896a7\") " pod="openstack/nova-cell0-conductor-db-sync-4q89d" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.426624 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlbxc\" (UniqueName: \"kubernetes.io/projected/ffcfa3dd-97f3-4975-bb69-25a4031896a7-kube-api-access-rlbxc\") pod \"nova-cell0-conductor-db-sync-4q89d\" (UID: \"ffcfa3dd-97f3-4975-bb69-25a4031896a7\") " pod="openstack/nova-cell0-conductor-db-sync-4q89d" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.432757 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffcfa3dd-97f3-4975-bb69-25a4031896a7-config-data\") pod \"nova-cell0-conductor-db-sync-4q89d\" (UID: \"ffcfa3dd-97f3-4975-bb69-25a4031896a7\") " pod="openstack/nova-cell0-conductor-db-sync-4q89d" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.433343 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffcfa3dd-97f3-4975-bb69-25a4031896a7-scripts\") pod \"nova-cell0-conductor-db-sync-4q89d\" (UID: \"ffcfa3dd-97f3-4975-bb69-25a4031896a7\") " pod="openstack/nova-cell0-conductor-db-sync-4q89d" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.442357 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffcfa3dd-97f3-4975-bb69-25a4031896a7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4q89d\" (UID: \"ffcfa3dd-97f3-4975-bb69-25a4031896a7\") " pod="openstack/nova-cell0-conductor-db-sync-4q89d" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.458576 4861 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rlbxc\" (UniqueName: \"kubernetes.io/projected/ffcfa3dd-97f3-4975-bb69-25a4031896a7-kube-api-access-rlbxc\") pod \"nova-cell0-conductor-db-sync-4q89d\" (UID: \"ffcfa3dd-97f3-4975-bb69-25a4031896a7\") " pod="openstack/nova-cell0-conductor-db-sync-4q89d" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.508834 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4q89d" Feb 19 13:31:47 crc kubenswrapper[4861]: I0219 13:31:47.794615 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4q89d"] Feb 19 13:31:47 crc kubenswrapper[4861]: W0219 13:31:47.812693 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffcfa3dd_97f3_4975_bb69_25a4031896a7.slice/crio-91533982b33247bc64612af7a4b6a6afa1c517de9f00dd25240c7887e6ecbe68 WatchSource:0}: Error finding container 91533982b33247bc64612af7a4b6a6afa1c517de9f00dd25240c7887e6ecbe68: Status 404 returned error can't find the container with id 91533982b33247bc64612af7a4b6a6afa1c517de9f00dd25240c7887e6ecbe68 Feb 19 13:31:48 crc kubenswrapper[4861]: I0219 13:31:48.177449 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08808261-a739-4e69-8547-a8a20728dcae","Type":"ContainerStarted","Data":"c5e73447a5275a2e532fba8b444a153c866b2666841dc1c149e1c398bf49f3ea"} Feb 19 13:31:48 crc kubenswrapper[4861]: I0219 13:31:48.185744 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4q89d" event={"ID":"ffcfa3dd-97f3-4975-bb69-25a4031896a7","Type":"ContainerStarted","Data":"91533982b33247bc64612af7a4b6a6afa1c517de9f00dd25240c7887e6ecbe68"} Feb 19 13:31:49 crc kubenswrapper[4861]: I0219 13:31:49.198239 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"08808261-a739-4e69-8547-a8a20728dcae","Type":"ContainerStarted","Data":"e2eb4761e8f00d5f9507af670310e358ff25fb6d9817dc3703734430192af0d7"} Feb 19 13:31:50 crc kubenswrapper[4861]: I0219 13:31:50.213700 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08808261-a739-4e69-8547-a8a20728dcae","Type":"ContainerStarted","Data":"08813d4f0c6a2870f323e14c470dd433d13ca939fe79bb473ed94a4f0b550428"} Feb 19 13:31:50 crc kubenswrapper[4861]: I0219 13:31:50.214351 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 13:31:50 crc kubenswrapper[4861]: I0219 13:31:50.245048 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.64560093 podStartE2EDuration="6.245026263s" podCreationTimestamp="2026-02-19 13:31:44 +0000 UTC" firstStartedPulling="2026-02-19 13:31:44.963243544 +0000 UTC m=+1319.624346772" lastFinishedPulling="2026-02-19 13:31:49.562668877 +0000 UTC m=+1324.223772105" observedRunningTime="2026-02-19 13:31:50.23970549 +0000 UTC m=+1324.900808738" watchObservedRunningTime="2026-02-19 13:31:50.245026263 +0000 UTC m=+1324.906129502" Feb 19 13:31:56 crc kubenswrapper[4861]: I0219 13:31:56.282054 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4q89d" event={"ID":"ffcfa3dd-97f3-4975-bb69-25a4031896a7","Type":"ContainerStarted","Data":"82ceb2b562584835e768713baafa8271c5b94b45905d32c6aecbc0eec99445bc"} Feb 19 13:31:56 crc kubenswrapper[4861]: I0219 13:31:56.313187 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-4q89d" podStartSLOduration=1.6605325130000002 podStartE2EDuration="9.313166429s" podCreationTimestamp="2026-02-19 13:31:47 +0000 UTC" firstStartedPulling="2026-02-19 13:31:47.824896479 +0000 UTC m=+1322.485999707" lastFinishedPulling="2026-02-19 13:31:55.477530395 +0000 UTC 
m=+1330.138633623" observedRunningTime="2026-02-19 13:31:56.299186943 +0000 UTC m=+1330.960290251" watchObservedRunningTime="2026-02-19 13:31:56.313166429 +0000 UTC m=+1330.974269667" Feb 19 13:31:57 crc kubenswrapper[4861]: E0219 13:31:57.134267 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34b85ee6_f9f6_4f1e_8fc9_23072e437a14.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0543bf80_4d09_4c45_897d_3b2ae4291861.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod411cd56f_4fb3_4f9b_9cfe_e287f22a4609.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9d690b0_57b6_4544_9181_32144adaaef5.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod411cd56f_4fb3_4f9b_9cfe_e287f22a4609.slice/crio-f456e9cbe8ce9bfb9aedf4df9a26b3a3c2600950bcf49d4ed5994dc477702e04\": RecentStats: unable to find data in memory cache]" Feb 19 13:32:00 crc kubenswrapper[4861]: I0219 13:32:00.396683 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:32:00 crc kubenswrapper[4861]: I0219 13:32:00.397207 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="08808261-a739-4e69-8547-a8a20728dcae" containerName="ceilometer-central-agent" containerID="cri-o://75fb92997e4b9d9c55385da21547c0f0684680d166b3b9530e53a4126c909390" gracePeriod=30 Feb 19 13:32:00 crc kubenswrapper[4861]: I0219 13:32:00.397815 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="08808261-a739-4e69-8547-a8a20728dcae" containerName="proxy-httpd" 
containerID="cri-o://08813d4f0c6a2870f323e14c470dd433d13ca939fe79bb473ed94a4f0b550428" gracePeriod=30 Feb 19 13:32:00 crc kubenswrapper[4861]: I0219 13:32:00.397885 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="08808261-a739-4e69-8547-a8a20728dcae" containerName="ceilometer-notification-agent" containerID="cri-o://c5e73447a5275a2e532fba8b444a153c866b2666841dc1c149e1c398bf49f3ea" gracePeriod=30 Feb 19 13:32:00 crc kubenswrapper[4861]: I0219 13:32:00.398021 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="08808261-a739-4e69-8547-a8a20728dcae" containerName="sg-core" containerID="cri-o://e2eb4761e8f00d5f9507af670310e358ff25fb6d9817dc3703734430192af0d7" gracePeriod=30 Feb 19 13:32:00 crc kubenswrapper[4861]: I0219 13:32:00.404035 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 13:32:01 crc kubenswrapper[4861]: I0219 13:32:01.326740 4861 generic.go:334] "Generic (PLEG): container finished" podID="08808261-a739-4e69-8547-a8a20728dcae" containerID="08813d4f0c6a2870f323e14c470dd433d13ca939fe79bb473ed94a4f0b550428" exitCode=0 Feb 19 13:32:01 crc kubenswrapper[4861]: I0219 13:32:01.327072 4861 generic.go:334] "Generic (PLEG): container finished" podID="08808261-a739-4e69-8547-a8a20728dcae" containerID="e2eb4761e8f00d5f9507af670310e358ff25fb6d9817dc3703734430192af0d7" exitCode=2 Feb 19 13:32:01 crc kubenswrapper[4861]: I0219 13:32:01.327083 4861 generic.go:334] "Generic (PLEG): container finished" podID="08808261-a739-4e69-8547-a8a20728dcae" containerID="75fb92997e4b9d9c55385da21547c0f0684680d166b3b9530e53a4126c909390" exitCode=0 Feb 19 13:32:01 crc kubenswrapper[4861]: I0219 13:32:01.326830 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"08808261-a739-4e69-8547-a8a20728dcae","Type":"ContainerDied","Data":"08813d4f0c6a2870f323e14c470dd433d13ca939fe79bb473ed94a4f0b550428"} Feb 19 13:32:01 crc kubenswrapper[4861]: I0219 13:32:01.327131 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08808261-a739-4e69-8547-a8a20728dcae","Type":"ContainerDied","Data":"e2eb4761e8f00d5f9507af670310e358ff25fb6d9817dc3703734430192af0d7"} Feb 19 13:32:01 crc kubenswrapper[4861]: I0219 13:32:01.327147 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08808261-a739-4e69-8547-a8a20728dcae","Type":"ContainerDied","Data":"75fb92997e4b9d9c55385da21547c0f0684680d166b3b9530e53a4126c909390"} Feb 19 13:32:03 crc kubenswrapper[4861]: I0219 13:32:03.994775 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.097736 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08808261-a739-4e69-8547-a8a20728dcae-combined-ca-bundle\") pod \"08808261-a739-4e69-8547-a8a20728dcae\" (UID: \"08808261-a739-4e69-8547-a8a20728dcae\") " Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.097919 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/08808261-a739-4e69-8547-a8a20728dcae-sg-core-conf-yaml\") pod \"08808261-a739-4e69-8547-a8a20728dcae\" (UID: \"08808261-a739-4e69-8547-a8a20728dcae\") " Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.098042 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08808261-a739-4e69-8547-a8a20728dcae-config-data\") pod \"08808261-a739-4e69-8547-a8a20728dcae\" (UID: \"08808261-a739-4e69-8547-a8a20728dcae\") " Feb 19 13:32:04 crc 
kubenswrapper[4861]: I0219 13:32:04.098077 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08808261-a739-4e69-8547-a8a20728dcae-scripts\") pod \"08808261-a739-4e69-8547-a8a20728dcae\" (UID: \"08808261-a739-4e69-8547-a8a20728dcae\") " Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.099163 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08808261-a739-4e69-8547-a8a20728dcae-log-httpd\") pod \"08808261-a739-4e69-8547-a8a20728dcae\" (UID: \"08808261-a739-4e69-8547-a8a20728dcae\") " Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.099229 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08808261-a739-4e69-8547-a8a20728dcae-run-httpd\") pod \"08808261-a739-4e69-8547-a8a20728dcae\" (UID: \"08808261-a739-4e69-8547-a8a20728dcae\") " Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.099271 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prxz8\" (UniqueName: \"kubernetes.io/projected/08808261-a739-4e69-8547-a8a20728dcae-kube-api-access-prxz8\") pod \"08808261-a739-4e69-8547-a8a20728dcae\" (UID: \"08808261-a739-4e69-8547-a8a20728dcae\") " Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.100165 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08808261-a739-4e69-8547-a8a20728dcae-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "08808261-a739-4e69-8547-a8a20728dcae" (UID: "08808261-a739-4e69-8547-a8a20728dcae"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.100357 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08808261-a739-4e69-8547-a8a20728dcae-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "08808261-a739-4e69-8547-a8a20728dcae" (UID: "08808261-a739-4e69-8547-a8a20728dcae"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.101493 4861 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08808261-a739-4e69-8547-a8a20728dcae-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.101525 4861 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08808261-a739-4e69-8547-a8a20728dcae-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.132698 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08808261-a739-4e69-8547-a8a20728dcae-kube-api-access-prxz8" (OuterVolumeSpecName: "kube-api-access-prxz8") pod "08808261-a739-4e69-8547-a8a20728dcae" (UID: "08808261-a739-4e69-8547-a8a20728dcae"). InnerVolumeSpecName "kube-api-access-prxz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.138458 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08808261-a739-4e69-8547-a8a20728dcae-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "08808261-a739-4e69-8547-a8a20728dcae" (UID: "08808261-a739-4e69-8547-a8a20728dcae"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.146553 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08808261-a739-4e69-8547-a8a20728dcae-scripts" (OuterVolumeSpecName: "scripts") pod "08808261-a739-4e69-8547-a8a20728dcae" (UID: "08808261-a739-4e69-8547-a8a20728dcae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.182970 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08808261-a739-4e69-8547-a8a20728dcae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08808261-a739-4e69-8547-a8a20728dcae" (UID: "08808261-a739-4e69-8547-a8a20728dcae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.203414 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08808261-a739-4e69-8547-a8a20728dcae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.203469 4861 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/08808261-a739-4e69-8547-a8a20728dcae-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.203491 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08808261-a739-4e69-8547-a8a20728dcae-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.203519 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prxz8\" (UniqueName: \"kubernetes.io/projected/08808261-a739-4e69-8547-a8a20728dcae-kube-api-access-prxz8\") on node \"crc\" DevicePath \"\"" Feb 19 
13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.204770 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08808261-a739-4e69-8547-a8a20728dcae-config-data" (OuterVolumeSpecName: "config-data") pod "08808261-a739-4e69-8547-a8a20728dcae" (UID: "08808261-a739-4e69-8547-a8a20728dcae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.305271 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08808261-a739-4e69-8547-a8a20728dcae-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.364005 4861 generic.go:334] "Generic (PLEG): container finished" podID="08808261-a739-4e69-8547-a8a20728dcae" containerID="c5e73447a5275a2e532fba8b444a153c866b2666841dc1c149e1c398bf49f3ea" exitCode=0 Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.364101 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.364113 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08808261-a739-4e69-8547-a8a20728dcae","Type":"ContainerDied","Data":"c5e73447a5275a2e532fba8b444a153c866b2666841dc1c149e1c398bf49f3ea"} Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.364578 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08808261-a739-4e69-8547-a8a20728dcae","Type":"ContainerDied","Data":"2dd79d2e974196f0a0f6c5a8ed1a657833df5da3781df13e925a986b668f29b2"} Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.364609 4861 scope.go:117] "RemoveContainer" containerID="08813d4f0c6a2870f323e14c470dd433d13ca939fe79bb473ed94a4f0b550428" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.389529 4861 scope.go:117] "RemoveContainer" containerID="e2eb4761e8f00d5f9507af670310e358ff25fb6d9817dc3703734430192af0d7" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.415188 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.434081 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.455491 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:32:04 crc kubenswrapper[4861]: E0219 13:32:04.455947 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08808261-a739-4e69-8547-a8a20728dcae" containerName="sg-core" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.455969 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="08808261-a739-4e69-8547-a8a20728dcae" containerName="sg-core" Feb 19 13:32:04 crc kubenswrapper[4861]: E0219 13:32:04.455993 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08808261-a739-4e69-8547-a8a20728dcae" 
containerName="ceilometer-central-agent" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.456001 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="08808261-a739-4e69-8547-a8a20728dcae" containerName="ceilometer-central-agent" Feb 19 13:32:04 crc kubenswrapper[4861]: E0219 13:32:04.456009 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08808261-a739-4e69-8547-a8a20728dcae" containerName="proxy-httpd" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.456014 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="08808261-a739-4e69-8547-a8a20728dcae" containerName="proxy-httpd" Feb 19 13:32:04 crc kubenswrapper[4861]: E0219 13:32:04.456029 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08808261-a739-4e69-8547-a8a20728dcae" containerName="ceilometer-notification-agent" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.456036 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="08808261-a739-4e69-8547-a8a20728dcae" containerName="ceilometer-notification-agent" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.456190 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="08808261-a739-4e69-8547-a8a20728dcae" containerName="sg-core" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.456203 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="08808261-a739-4e69-8547-a8a20728dcae" containerName="ceilometer-central-agent" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.456217 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="08808261-a739-4e69-8547-a8a20728dcae" containerName="proxy-httpd" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.456229 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="08808261-a739-4e69-8547-a8a20728dcae" containerName="ceilometer-notification-agent" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.458076 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.462852 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.463080 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.479379 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.492519 4861 scope.go:117] "RemoveContainer" containerID="c5e73447a5275a2e532fba8b444a153c866b2666841dc1c149e1c398bf49f3ea" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.535951 4861 scope.go:117] "RemoveContainer" containerID="75fb92997e4b9d9c55385da21547c0f0684680d166b3b9530e53a4126c909390" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.558658 4861 scope.go:117] "RemoveContainer" containerID="08813d4f0c6a2870f323e14c470dd433d13ca939fe79bb473ed94a4f0b550428" Feb 19 13:32:04 crc kubenswrapper[4861]: E0219 13:32:04.559149 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08813d4f0c6a2870f323e14c470dd433d13ca939fe79bb473ed94a4f0b550428\": container with ID starting with 08813d4f0c6a2870f323e14c470dd433d13ca939fe79bb473ed94a4f0b550428 not found: ID does not exist" containerID="08813d4f0c6a2870f323e14c470dd433d13ca939fe79bb473ed94a4f0b550428" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.559202 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08813d4f0c6a2870f323e14c470dd433d13ca939fe79bb473ed94a4f0b550428"} err="failed to get container status \"08813d4f0c6a2870f323e14c470dd433d13ca939fe79bb473ed94a4f0b550428\": rpc error: code = NotFound desc = could not find container \"08813d4f0c6a2870f323e14c470dd433d13ca939fe79bb473ed94a4f0b550428\": 
container with ID starting with 08813d4f0c6a2870f323e14c470dd433d13ca939fe79bb473ed94a4f0b550428 not found: ID does not exist" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.559241 4861 scope.go:117] "RemoveContainer" containerID="e2eb4761e8f00d5f9507af670310e358ff25fb6d9817dc3703734430192af0d7" Feb 19 13:32:04 crc kubenswrapper[4861]: E0219 13:32:04.560993 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2eb4761e8f00d5f9507af670310e358ff25fb6d9817dc3703734430192af0d7\": container with ID starting with e2eb4761e8f00d5f9507af670310e358ff25fb6d9817dc3703734430192af0d7 not found: ID does not exist" containerID="e2eb4761e8f00d5f9507af670310e358ff25fb6d9817dc3703734430192af0d7" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.561022 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2eb4761e8f00d5f9507af670310e358ff25fb6d9817dc3703734430192af0d7"} err="failed to get container status \"e2eb4761e8f00d5f9507af670310e358ff25fb6d9817dc3703734430192af0d7\": rpc error: code = NotFound desc = could not find container \"e2eb4761e8f00d5f9507af670310e358ff25fb6d9817dc3703734430192af0d7\": container with ID starting with e2eb4761e8f00d5f9507af670310e358ff25fb6d9817dc3703734430192af0d7 not found: ID does not exist" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.561044 4861 scope.go:117] "RemoveContainer" containerID="c5e73447a5275a2e532fba8b444a153c866b2666841dc1c149e1c398bf49f3ea" Feb 19 13:32:04 crc kubenswrapper[4861]: E0219 13:32:04.561522 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5e73447a5275a2e532fba8b444a153c866b2666841dc1c149e1c398bf49f3ea\": container with ID starting with c5e73447a5275a2e532fba8b444a153c866b2666841dc1c149e1c398bf49f3ea not found: ID does not exist" 
containerID="c5e73447a5275a2e532fba8b444a153c866b2666841dc1c149e1c398bf49f3ea" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.561561 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5e73447a5275a2e532fba8b444a153c866b2666841dc1c149e1c398bf49f3ea"} err="failed to get container status \"c5e73447a5275a2e532fba8b444a153c866b2666841dc1c149e1c398bf49f3ea\": rpc error: code = NotFound desc = could not find container \"c5e73447a5275a2e532fba8b444a153c866b2666841dc1c149e1c398bf49f3ea\": container with ID starting with c5e73447a5275a2e532fba8b444a153c866b2666841dc1c149e1c398bf49f3ea not found: ID does not exist" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.561593 4861 scope.go:117] "RemoveContainer" containerID="75fb92997e4b9d9c55385da21547c0f0684680d166b3b9530e53a4126c909390" Feb 19 13:32:04 crc kubenswrapper[4861]: E0219 13:32:04.561932 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75fb92997e4b9d9c55385da21547c0f0684680d166b3b9530e53a4126c909390\": container with ID starting with 75fb92997e4b9d9c55385da21547c0f0684680d166b3b9530e53a4126c909390 not found: ID does not exist" containerID="75fb92997e4b9d9c55385da21547c0f0684680d166b3b9530e53a4126c909390" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.561962 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75fb92997e4b9d9c55385da21547c0f0684680d166b3b9530e53a4126c909390"} err="failed to get container status \"75fb92997e4b9d9c55385da21547c0f0684680d166b3b9530e53a4126c909390\": rpc error: code = NotFound desc = could not find container \"75fb92997e4b9d9c55385da21547c0f0684680d166b3b9530e53a4126c909390\": container with ID starting with 75fb92997e4b9d9c55385da21547c0f0684680d166b3b9530e53a4126c909390 not found: ID does not exist" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.612030 4861 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b6831f5-40f9-460d-a745-a809871692f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2b6831f5-40f9-460d-a745-a809871692f0\") " pod="openstack/ceilometer-0" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.612104 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b6831f5-40f9-460d-a745-a809871692f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2b6831f5-40f9-460d-a745-a809871692f0\") " pod="openstack/ceilometer-0" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.612248 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b6831f5-40f9-460d-a745-a809871692f0-run-httpd\") pod \"ceilometer-0\" (UID: \"2b6831f5-40f9-460d-a745-a809871692f0\") " pod="openstack/ceilometer-0" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.612452 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b6831f5-40f9-460d-a745-a809871692f0-log-httpd\") pod \"ceilometer-0\" (UID: \"2b6831f5-40f9-460d-a745-a809871692f0\") " pod="openstack/ceilometer-0" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.612503 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b6831f5-40f9-460d-a745-a809871692f0-scripts\") pod \"ceilometer-0\" (UID: \"2b6831f5-40f9-460d-a745-a809871692f0\") " pod="openstack/ceilometer-0" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.612616 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2b6831f5-40f9-460d-a745-a809871692f0-config-data\") pod \"ceilometer-0\" (UID: \"2b6831f5-40f9-460d-a745-a809871692f0\") " pod="openstack/ceilometer-0" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.612786 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nngf\" (UniqueName: \"kubernetes.io/projected/2b6831f5-40f9-460d-a745-a809871692f0-kube-api-access-4nngf\") pod \"ceilometer-0\" (UID: \"2b6831f5-40f9-460d-a745-a809871692f0\") " pod="openstack/ceilometer-0" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.714583 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nngf\" (UniqueName: \"kubernetes.io/projected/2b6831f5-40f9-460d-a745-a809871692f0-kube-api-access-4nngf\") pod \"ceilometer-0\" (UID: \"2b6831f5-40f9-460d-a745-a809871692f0\") " pod="openstack/ceilometer-0" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.714777 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b6831f5-40f9-460d-a745-a809871692f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2b6831f5-40f9-460d-a745-a809871692f0\") " pod="openstack/ceilometer-0" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.715624 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b6831f5-40f9-460d-a745-a809871692f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2b6831f5-40f9-460d-a745-a809871692f0\") " pod="openstack/ceilometer-0" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.715660 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b6831f5-40f9-460d-a745-a809871692f0-run-httpd\") pod \"ceilometer-0\" (UID: \"2b6831f5-40f9-460d-a745-a809871692f0\") " 
pod="openstack/ceilometer-0" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.715728 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b6831f5-40f9-460d-a745-a809871692f0-log-httpd\") pod \"ceilometer-0\" (UID: \"2b6831f5-40f9-460d-a745-a809871692f0\") " pod="openstack/ceilometer-0" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.715753 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b6831f5-40f9-460d-a745-a809871692f0-scripts\") pod \"ceilometer-0\" (UID: \"2b6831f5-40f9-460d-a745-a809871692f0\") " pod="openstack/ceilometer-0" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.715801 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b6831f5-40f9-460d-a745-a809871692f0-config-data\") pod \"ceilometer-0\" (UID: \"2b6831f5-40f9-460d-a745-a809871692f0\") " pod="openstack/ceilometer-0" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.717492 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b6831f5-40f9-460d-a745-a809871692f0-run-httpd\") pod \"ceilometer-0\" (UID: \"2b6831f5-40f9-460d-a745-a809871692f0\") " pod="openstack/ceilometer-0" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.718085 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b6831f5-40f9-460d-a745-a809871692f0-log-httpd\") pod \"ceilometer-0\" (UID: \"2b6831f5-40f9-460d-a745-a809871692f0\") " pod="openstack/ceilometer-0" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.718292 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.720687 4861 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="32264ced-78d0-432c-8dba-6b312fc09f77" containerName="kube-state-metrics" containerID="cri-o://62f0a4a189e08ccc12765f4ced72626a238814e917ebc946e838985fe50407f9" gracePeriod=30 Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.721482 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b6831f5-40f9-460d-a745-a809871692f0-config-data\") pod \"ceilometer-0\" (UID: \"2b6831f5-40f9-460d-a745-a809871692f0\") " pod="openstack/ceilometer-0" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.722179 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b6831f5-40f9-460d-a745-a809871692f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2b6831f5-40f9-460d-a745-a809871692f0\") " pod="openstack/ceilometer-0" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.722526 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b6831f5-40f9-460d-a745-a809871692f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2b6831f5-40f9-460d-a745-a809871692f0\") " pod="openstack/ceilometer-0" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.722950 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b6831f5-40f9-460d-a745-a809871692f0-scripts\") pod \"ceilometer-0\" (UID: \"2b6831f5-40f9-460d-a745-a809871692f0\") " pod="openstack/ceilometer-0" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.739193 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nngf\" (UniqueName: \"kubernetes.io/projected/2b6831f5-40f9-460d-a745-a809871692f0-kube-api-access-4nngf\") pod \"ceilometer-0\" (UID: \"2b6831f5-40f9-460d-a745-a809871692f0\") " 
pod="openstack/ceilometer-0" Feb 19 13:32:04 crc kubenswrapper[4861]: I0219 13:32:04.804683 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:32:05 crc kubenswrapper[4861]: I0219 13:32:05.259188 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:32:05 crc kubenswrapper[4861]: I0219 13:32:05.307320 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 13:32:05 crc kubenswrapper[4861]: I0219 13:32:05.386543 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b6831f5-40f9-460d-a745-a809871692f0","Type":"ContainerStarted","Data":"f89e7380dd15a952b46762276bd9452943dc17ef3f59c3118e8d643813a1341e"} Feb 19 13:32:05 crc kubenswrapper[4861]: I0219 13:32:05.391074 4861 generic.go:334] "Generic (PLEG): container finished" podID="32264ced-78d0-432c-8dba-6b312fc09f77" containerID="62f0a4a189e08ccc12765f4ced72626a238814e917ebc946e838985fe50407f9" exitCode=2 Feb 19 13:32:05 crc kubenswrapper[4861]: I0219 13:32:05.391135 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 13:32:05 crc kubenswrapper[4861]: I0219 13:32:05.391177 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"32264ced-78d0-432c-8dba-6b312fc09f77","Type":"ContainerDied","Data":"62f0a4a189e08ccc12765f4ced72626a238814e917ebc946e838985fe50407f9"} Feb 19 13:32:05 crc kubenswrapper[4861]: I0219 13:32:05.391217 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"32264ced-78d0-432c-8dba-6b312fc09f77","Type":"ContainerDied","Data":"57d910cb094ac9896f74ae53dabe0533e9460862653d2ae68745d105c8885e01"} Feb 19 13:32:05 crc kubenswrapper[4861]: I0219 13:32:05.391236 4861 scope.go:117] "RemoveContainer" containerID="62f0a4a189e08ccc12765f4ced72626a238814e917ebc946e838985fe50407f9" Feb 19 13:32:05 crc kubenswrapper[4861]: I0219 13:32:05.427773 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kjfl\" (UniqueName: \"kubernetes.io/projected/32264ced-78d0-432c-8dba-6b312fc09f77-kube-api-access-6kjfl\") pod \"32264ced-78d0-432c-8dba-6b312fc09f77\" (UID: \"32264ced-78d0-432c-8dba-6b312fc09f77\") " Feb 19 13:32:05 crc kubenswrapper[4861]: I0219 13:32:05.429645 4861 scope.go:117] "RemoveContainer" containerID="62f0a4a189e08ccc12765f4ced72626a238814e917ebc946e838985fe50407f9" Feb 19 13:32:05 crc kubenswrapper[4861]: E0219 13:32:05.430027 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62f0a4a189e08ccc12765f4ced72626a238814e917ebc946e838985fe50407f9\": container with ID starting with 62f0a4a189e08ccc12765f4ced72626a238814e917ebc946e838985fe50407f9 not found: ID does not exist" containerID="62f0a4a189e08ccc12765f4ced72626a238814e917ebc946e838985fe50407f9" Feb 19 13:32:05 crc kubenswrapper[4861]: I0219 13:32:05.430054 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"62f0a4a189e08ccc12765f4ced72626a238814e917ebc946e838985fe50407f9"} err="failed to get container status \"62f0a4a189e08ccc12765f4ced72626a238814e917ebc946e838985fe50407f9\": rpc error: code = NotFound desc = could not find container \"62f0a4a189e08ccc12765f4ced72626a238814e917ebc946e838985fe50407f9\": container with ID starting with 62f0a4a189e08ccc12765f4ced72626a238814e917ebc946e838985fe50407f9 not found: ID does not exist" Feb 19 13:32:05 crc kubenswrapper[4861]: I0219 13:32:05.433528 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32264ced-78d0-432c-8dba-6b312fc09f77-kube-api-access-6kjfl" (OuterVolumeSpecName: "kube-api-access-6kjfl") pod "32264ced-78d0-432c-8dba-6b312fc09f77" (UID: "32264ced-78d0-432c-8dba-6b312fc09f77"). InnerVolumeSpecName "kube-api-access-6kjfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:32:05 crc kubenswrapper[4861]: I0219 13:32:05.529525 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kjfl\" (UniqueName: \"kubernetes.io/projected/32264ced-78d0-432c-8dba-6b312fc09f77-kube-api-access-6kjfl\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:05 crc kubenswrapper[4861]: I0219 13:32:05.831361 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 13:32:05 crc kubenswrapper[4861]: I0219 13:32:05.841326 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 13:32:05 crc kubenswrapper[4861]: I0219 13:32:05.849164 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 13:32:05 crc kubenswrapper[4861]: E0219 13:32:05.849587 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32264ced-78d0-432c-8dba-6b312fc09f77" containerName="kube-state-metrics" Feb 19 13:32:05 crc kubenswrapper[4861]: I0219 13:32:05.849603 4861 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="32264ced-78d0-432c-8dba-6b312fc09f77" containerName="kube-state-metrics" Feb 19 13:32:05 crc kubenswrapper[4861]: I0219 13:32:05.849787 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="32264ced-78d0-432c-8dba-6b312fc09f77" containerName="kube-state-metrics" Feb 19 13:32:05 crc kubenswrapper[4861]: I0219 13:32:05.850387 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 13:32:05 crc kubenswrapper[4861]: I0219 13:32:05.851979 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 19 13:32:05 crc kubenswrapper[4861]: I0219 13:32:05.852545 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 19 13:32:05 crc kubenswrapper[4861]: I0219 13:32:05.857337 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 13:32:05 crc kubenswrapper[4861]: I0219 13:32:05.935604 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44q94\" (UniqueName: \"kubernetes.io/projected/26816cde-a8b6-41a2-ab12-46f8aeebbb0d-kube-api-access-44q94\") pod \"kube-state-metrics-0\" (UID: \"26816cde-a8b6-41a2-ab12-46f8aeebbb0d\") " pod="openstack/kube-state-metrics-0" Feb 19 13:32:05 crc kubenswrapper[4861]: I0219 13:32:05.935666 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/26816cde-a8b6-41a2-ab12-46f8aeebbb0d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"26816cde-a8b6-41a2-ab12-46f8aeebbb0d\") " pod="openstack/kube-state-metrics-0" Feb 19 13:32:05 crc kubenswrapper[4861]: I0219 13:32:05.935837 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/26816cde-a8b6-41a2-ab12-46f8aeebbb0d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"26816cde-a8b6-41a2-ab12-46f8aeebbb0d\") " pod="openstack/kube-state-metrics-0" Feb 19 13:32:05 crc kubenswrapper[4861]: I0219 13:32:05.935929 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/26816cde-a8b6-41a2-ab12-46f8aeebbb0d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"26816cde-a8b6-41a2-ab12-46f8aeebbb0d\") " pod="openstack/kube-state-metrics-0" Feb 19 13:32:05 crc kubenswrapper[4861]: I0219 13:32:05.990730 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08808261-a739-4e69-8547-a8a20728dcae" path="/var/lib/kubelet/pods/08808261-a739-4e69-8547-a8a20728dcae/volumes" Feb 19 13:32:05 crc kubenswrapper[4861]: I0219 13:32:05.991495 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32264ced-78d0-432c-8dba-6b312fc09f77" path="/var/lib/kubelet/pods/32264ced-78d0-432c-8dba-6b312fc09f77/volumes" Feb 19 13:32:06 crc kubenswrapper[4861]: I0219 13:32:06.037514 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/26816cde-a8b6-41a2-ab12-46f8aeebbb0d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"26816cde-a8b6-41a2-ab12-46f8aeebbb0d\") " pod="openstack/kube-state-metrics-0" Feb 19 13:32:06 crc kubenswrapper[4861]: I0219 13:32:06.037715 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44q94\" (UniqueName: \"kubernetes.io/projected/26816cde-a8b6-41a2-ab12-46f8aeebbb0d-kube-api-access-44q94\") pod \"kube-state-metrics-0\" (UID: \"26816cde-a8b6-41a2-ab12-46f8aeebbb0d\") " pod="openstack/kube-state-metrics-0" Feb 19 13:32:06 crc kubenswrapper[4861]: I0219 13:32:06.038465 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/26816cde-a8b6-41a2-ab12-46f8aeebbb0d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"26816cde-a8b6-41a2-ab12-46f8aeebbb0d\") " pod="openstack/kube-state-metrics-0" Feb 19 13:32:06 crc kubenswrapper[4861]: I0219 13:32:06.039095 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26816cde-a8b6-41a2-ab12-46f8aeebbb0d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"26816cde-a8b6-41a2-ab12-46f8aeebbb0d\") " pod="openstack/kube-state-metrics-0" Feb 19 13:32:06 crc kubenswrapper[4861]: I0219 13:32:06.044010 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/26816cde-a8b6-41a2-ab12-46f8aeebbb0d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"26816cde-a8b6-41a2-ab12-46f8aeebbb0d\") " pod="openstack/kube-state-metrics-0" Feb 19 13:32:06 crc kubenswrapper[4861]: I0219 13:32:06.048733 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/26816cde-a8b6-41a2-ab12-46f8aeebbb0d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"26816cde-a8b6-41a2-ab12-46f8aeebbb0d\") " pod="openstack/kube-state-metrics-0" Feb 19 13:32:06 crc kubenswrapper[4861]: I0219 13:32:06.056682 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44q94\" (UniqueName: \"kubernetes.io/projected/26816cde-a8b6-41a2-ab12-46f8aeebbb0d-kube-api-access-44q94\") pod \"kube-state-metrics-0\" (UID: \"26816cde-a8b6-41a2-ab12-46f8aeebbb0d\") " pod="openstack/kube-state-metrics-0" Feb 19 13:32:06 crc kubenswrapper[4861]: I0219 13:32:06.057213 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26816cde-a8b6-41a2-ab12-46f8aeebbb0d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"26816cde-a8b6-41a2-ab12-46f8aeebbb0d\") " pod="openstack/kube-state-metrics-0" Feb 19 13:32:06 crc kubenswrapper[4861]: I0219 13:32:06.177975 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 13:32:06 crc kubenswrapper[4861]: I0219 13:32:06.423931 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b6831f5-40f9-460d-a745-a809871692f0","Type":"ContainerStarted","Data":"9d71d0b269a7833f2c3be42defe9c14965a894d4396f19f4f9cd7448355069a6"} Feb 19 13:32:06 crc kubenswrapper[4861]: I0219 13:32:06.722323 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 13:32:06 crc kubenswrapper[4861]: W0219 13:32:06.724116 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26816cde_a8b6_41a2_ab12_46f8aeebbb0d.slice/crio-7cd39315016fcfc501940883ccb5c45657e85891f3e447edd02bda2d924716d5 WatchSource:0}: Error finding container 7cd39315016fcfc501940883ccb5c45657e85891f3e447edd02bda2d924716d5: Status 404 returned error can't find the container with id 7cd39315016fcfc501940883ccb5c45657e85891f3e447edd02bda2d924716d5 Feb 19 13:32:06 crc kubenswrapper[4861]: I0219 13:32:06.891656 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:32:07 crc kubenswrapper[4861]: E0219 13:32:07.366307 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod411cd56f_4fb3_4f9b_9cfe_e287f22a4609.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34b85ee6_f9f6_4f1e_8fc9_23072e437a14.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9d690b0_57b6_4544_9181_32144adaaef5.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0543bf80_4d09_4c45_897d_3b2ae4291861.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod411cd56f_4fb3_4f9b_9cfe_e287f22a4609.slice/crio-f456e9cbe8ce9bfb9aedf4df9a26b3a3c2600950bcf49d4ed5994dc477702e04\": RecentStats: unable to find data in memory cache]" Feb 19 13:32:07 crc kubenswrapper[4861]: I0219 13:32:07.433208 4861 generic.go:334] "Generic (PLEG): container finished" podID="ffcfa3dd-97f3-4975-bb69-25a4031896a7" containerID="82ceb2b562584835e768713baafa8271c5b94b45905d32c6aecbc0eec99445bc" exitCode=0 Feb 19 13:32:07 crc kubenswrapper[4861]: I0219 13:32:07.433279 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4q89d" event={"ID":"ffcfa3dd-97f3-4975-bb69-25a4031896a7","Type":"ContainerDied","Data":"82ceb2b562584835e768713baafa8271c5b94b45905d32c6aecbc0eec99445bc"} Feb 19 13:32:07 crc kubenswrapper[4861]: I0219 13:32:07.434589 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"26816cde-a8b6-41a2-ab12-46f8aeebbb0d","Type":"ContainerStarted","Data":"2440c29ebc8715777e8f388a09da85afbe4b4ab4c17d69eb424c41362f7ff115"} Feb 19 13:32:07 crc kubenswrapper[4861]: I0219 13:32:07.434622 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"26816cde-a8b6-41a2-ab12-46f8aeebbb0d","Type":"ContainerStarted","Data":"7cd39315016fcfc501940883ccb5c45657e85891f3e447edd02bda2d924716d5"} Feb 19 13:32:07 crc kubenswrapper[4861]: I0219 13:32:07.434730 4861 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 19 13:32:07 crc kubenswrapper[4861]: I0219 13:32:07.436488 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b6831f5-40f9-460d-a745-a809871692f0","Type":"ContainerStarted","Data":"399694809b20257ae4831c9e64a43eea5838376badd354db995943be04c9209d"} Feb 19 13:32:07 crc kubenswrapper[4861]: I0219 13:32:07.436514 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b6831f5-40f9-460d-a745-a809871692f0","Type":"ContainerStarted","Data":"56680e4e683c83c2bc8c3fbbc296c00cb25d005a428723b5b2ce66d2198aedb7"} Feb 19 13:32:07 crc kubenswrapper[4861]: I0219 13:32:07.477479 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.978902208 podStartE2EDuration="2.477459694s" podCreationTimestamp="2026-02-19 13:32:05 +0000 UTC" firstStartedPulling="2026-02-19 13:32:06.736170281 +0000 UTC m=+1341.397273509" lastFinishedPulling="2026-02-19 13:32:07.234727767 +0000 UTC m=+1341.895830995" observedRunningTime="2026-02-19 13:32:07.466777287 +0000 UTC m=+1342.127880525" watchObservedRunningTime="2026-02-19 13:32:07.477459694 +0000 UTC m=+1342.138562922" Feb 19 13:32:08 crc kubenswrapper[4861]: I0219 13:32:08.941662 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4q89d" Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.001025 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffcfa3dd-97f3-4975-bb69-25a4031896a7-scripts\") pod \"ffcfa3dd-97f3-4975-bb69-25a4031896a7\" (UID: \"ffcfa3dd-97f3-4975-bb69-25a4031896a7\") " Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.001527 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffcfa3dd-97f3-4975-bb69-25a4031896a7-combined-ca-bundle\") pod \"ffcfa3dd-97f3-4975-bb69-25a4031896a7\" (UID: \"ffcfa3dd-97f3-4975-bb69-25a4031896a7\") " Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.001610 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlbxc\" (UniqueName: \"kubernetes.io/projected/ffcfa3dd-97f3-4975-bb69-25a4031896a7-kube-api-access-rlbxc\") pod \"ffcfa3dd-97f3-4975-bb69-25a4031896a7\" (UID: \"ffcfa3dd-97f3-4975-bb69-25a4031896a7\") " Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.001642 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffcfa3dd-97f3-4975-bb69-25a4031896a7-config-data\") pod \"ffcfa3dd-97f3-4975-bb69-25a4031896a7\" (UID: \"ffcfa3dd-97f3-4975-bb69-25a4031896a7\") " Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.005630 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffcfa3dd-97f3-4975-bb69-25a4031896a7-scripts" (OuterVolumeSpecName: "scripts") pod "ffcfa3dd-97f3-4975-bb69-25a4031896a7" (UID: "ffcfa3dd-97f3-4975-bb69-25a4031896a7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.011303 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffcfa3dd-97f3-4975-bb69-25a4031896a7-kube-api-access-rlbxc" (OuterVolumeSpecName: "kube-api-access-rlbxc") pod "ffcfa3dd-97f3-4975-bb69-25a4031896a7" (UID: "ffcfa3dd-97f3-4975-bb69-25a4031896a7"). InnerVolumeSpecName "kube-api-access-rlbxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.040477 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffcfa3dd-97f3-4975-bb69-25a4031896a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffcfa3dd-97f3-4975-bb69-25a4031896a7" (UID: "ffcfa3dd-97f3-4975-bb69-25a4031896a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.065760 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffcfa3dd-97f3-4975-bb69-25a4031896a7-config-data" (OuterVolumeSpecName: "config-data") pod "ffcfa3dd-97f3-4975-bb69-25a4031896a7" (UID: "ffcfa3dd-97f3-4975-bb69-25a4031896a7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.103464 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffcfa3dd-97f3-4975-bb69-25a4031896a7-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.103536 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffcfa3dd-97f3-4975-bb69-25a4031896a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.103557 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlbxc\" (UniqueName: \"kubernetes.io/projected/ffcfa3dd-97f3-4975-bb69-25a4031896a7-kube-api-access-rlbxc\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.103575 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffcfa3dd-97f3-4975-bb69-25a4031896a7-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.456885 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4q89d" Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.457341 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4q89d" event={"ID":"ffcfa3dd-97f3-4975-bb69-25a4031896a7","Type":"ContainerDied","Data":"91533982b33247bc64612af7a4b6a6afa1c517de9f00dd25240c7887e6ecbe68"} Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.457393 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91533982b33247bc64612af7a4b6a6afa1c517de9f00dd25240c7887e6ecbe68" Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.461337 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b6831f5-40f9-460d-a745-a809871692f0","Type":"ContainerStarted","Data":"dec008e399d297a3e4b901e208416d8204aec495804ab06e01ce2705a214bd82"} Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.461545 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.461605 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b6831f5-40f9-460d-a745-a809871692f0" containerName="sg-core" containerID="cri-o://399694809b20257ae4831c9e64a43eea5838376badd354db995943be04c9209d" gracePeriod=30 Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.461600 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b6831f5-40f9-460d-a745-a809871692f0" containerName="proxy-httpd" containerID="cri-o://dec008e399d297a3e4b901e208416d8204aec495804ab06e01ce2705a214bd82" gracePeriod=30 Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.461643 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b6831f5-40f9-460d-a745-a809871692f0" 
containerName="ceilometer-notification-agent" containerID="cri-o://56680e4e683c83c2bc8c3fbbc296c00cb25d005a428723b5b2ce66d2198aedb7" gracePeriod=30 Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.461803 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b6831f5-40f9-460d-a745-a809871692f0" containerName="ceilometer-central-agent" containerID="cri-o://9d71d0b269a7833f2c3be42defe9c14965a894d4396f19f4f9cd7448355069a6" gracePeriod=30 Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.494274 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.911685988 podStartE2EDuration="5.494256997s" podCreationTimestamp="2026-02-19 13:32:04 +0000 UTC" firstStartedPulling="2026-02-19 13:32:05.261638612 +0000 UTC m=+1339.922741840" lastFinishedPulling="2026-02-19 13:32:08.844209621 +0000 UTC m=+1343.505312849" observedRunningTime="2026-02-19 13:32:09.48807783 +0000 UTC m=+1344.149181058" watchObservedRunningTime="2026-02-19 13:32:09.494256997 +0000 UTC m=+1344.155360225" Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.584006 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 13:32:09 crc kubenswrapper[4861]: E0219 13:32:09.584388 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffcfa3dd-97f3-4975-bb69-25a4031896a7" containerName="nova-cell0-conductor-db-sync" Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.584406 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffcfa3dd-97f3-4975-bb69-25a4031896a7" containerName="nova-cell0-conductor-db-sync" Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.584638 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffcfa3dd-97f3-4975-bb69-25a4031896a7" containerName="nova-cell0-conductor-db-sync" Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.585324 4861 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.587622 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-t9k9n" Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.588727 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.599671 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.716232 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26363be-cfa7-49f5-82a2-709c67b44622-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c26363be-cfa7-49f5-82a2-709c67b44622\") " pod="openstack/nova-cell0-conductor-0" Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.716322 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs78t\" (UniqueName: \"kubernetes.io/projected/c26363be-cfa7-49f5-82a2-709c67b44622-kube-api-access-hs78t\") pod \"nova-cell0-conductor-0\" (UID: \"c26363be-cfa7-49f5-82a2-709c67b44622\") " pod="openstack/nova-cell0-conductor-0" Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.716429 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c26363be-cfa7-49f5-82a2-709c67b44622-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c26363be-cfa7-49f5-82a2-709c67b44622\") " pod="openstack/nova-cell0-conductor-0" Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.818838 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs78t\" (UniqueName: 
\"kubernetes.io/projected/c26363be-cfa7-49f5-82a2-709c67b44622-kube-api-access-hs78t\") pod \"nova-cell0-conductor-0\" (UID: \"c26363be-cfa7-49f5-82a2-709c67b44622\") " pod="openstack/nova-cell0-conductor-0" Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.818992 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c26363be-cfa7-49f5-82a2-709c67b44622-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c26363be-cfa7-49f5-82a2-709c67b44622\") " pod="openstack/nova-cell0-conductor-0" Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.819967 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26363be-cfa7-49f5-82a2-709c67b44622-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c26363be-cfa7-49f5-82a2-709c67b44622\") " pod="openstack/nova-cell0-conductor-0" Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.825128 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26363be-cfa7-49f5-82a2-709c67b44622-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c26363be-cfa7-49f5-82a2-709c67b44622\") " pod="openstack/nova-cell0-conductor-0" Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.825603 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c26363be-cfa7-49f5-82a2-709c67b44622-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c26363be-cfa7-49f5-82a2-709c67b44622\") " pod="openstack/nova-cell0-conductor-0" Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.835206 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs78t\" (UniqueName: \"kubernetes.io/projected/c26363be-cfa7-49f5-82a2-709c67b44622-kube-api-access-hs78t\") pod \"nova-cell0-conductor-0\" (UID: 
\"c26363be-cfa7-49f5-82a2-709c67b44622\") " pod="openstack/nova-cell0-conductor-0" Feb 19 13:32:09 crc kubenswrapper[4861]: I0219 13:32:09.901785 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 13:32:10 crc kubenswrapper[4861]: I0219 13:32:10.406364 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 13:32:10 crc kubenswrapper[4861]: W0219 13:32:10.414129 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc26363be_cfa7_49f5_82a2_709c67b44622.slice/crio-eb2c8eb1382307580653b47152351783d1a3bc4dea4d1676e9ad1afc136cd805 WatchSource:0}: Error finding container eb2c8eb1382307580653b47152351783d1a3bc4dea4d1676e9ad1afc136cd805: Status 404 returned error can't find the container with id eb2c8eb1382307580653b47152351783d1a3bc4dea4d1676e9ad1afc136cd805 Feb 19 13:32:10 crc kubenswrapper[4861]: I0219 13:32:10.485276 4861 generic.go:334] "Generic (PLEG): container finished" podID="2b6831f5-40f9-460d-a745-a809871692f0" containerID="dec008e399d297a3e4b901e208416d8204aec495804ab06e01ce2705a214bd82" exitCode=0 Feb 19 13:32:10 crc kubenswrapper[4861]: I0219 13:32:10.485369 4861 generic.go:334] "Generic (PLEG): container finished" podID="2b6831f5-40f9-460d-a745-a809871692f0" containerID="399694809b20257ae4831c9e64a43eea5838376badd354db995943be04c9209d" exitCode=2 Feb 19 13:32:10 crc kubenswrapper[4861]: I0219 13:32:10.485390 4861 generic.go:334] "Generic (PLEG): container finished" podID="2b6831f5-40f9-460d-a745-a809871692f0" containerID="56680e4e683c83c2bc8c3fbbc296c00cb25d005a428723b5b2ce66d2198aedb7" exitCode=0 Feb 19 13:32:10 crc kubenswrapper[4861]: I0219 13:32:10.485506 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2b6831f5-40f9-460d-a745-a809871692f0","Type":"ContainerDied","Data":"dec008e399d297a3e4b901e208416d8204aec495804ab06e01ce2705a214bd82"} Feb 19 13:32:10 crc kubenswrapper[4861]: I0219 13:32:10.485599 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b6831f5-40f9-460d-a745-a809871692f0","Type":"ContainerDied","Data":"399694809b20257ae4831c9e64a43eea5838376badd354db995943be04c9209d"} Feb 19 13:32:10 crc kubenswrapper[4861]: I0219 13:32:10.485626 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b6831f5-40f9-460d-a745-a809871692f0","Type":"ContainerDied","Data":"56680e4e683c83c2bc8c3fbbc296c00cb25d005a428723b5b2ce66d2198aedb7"} Feb 19 13:32:10 crc kubenswrapper[4861]: I0219 13:32:10.488106 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c26363be-cfa7-49f5-82a2-709c67b44622","Type":"ContainerStarted","Data":"eb2c8eb1382307580653b47152351783d1a3bc4dea4d1676e9ad1afc136cd805"} Feb 19 13:32:11 crc kubenswrapper[4861]: I0219 13:32:11.498209 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c26363be-cfa7-49f5-82a2-709c67b44622","Type":"ContainerStarted","Data":"b32d612f73bbe86a4cf54136585330c0e0f86c939e661fda48a99f88a3862277"} Feb 19 13:32:11 crc kubenswrapper[4861]: I0219 13:32:11.499951 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 19 13:32:11 crc kubenswrapper[4861]: I0219 13:32:11.527853 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.527829871 podStartE2EDuration="2.527829871s" podCreationTimestamp="2026-02-19 13:32:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:32:11.520016651 +0000 UTC 
m=+1346.181119909" watchObservedRunningTime="2026-02-19 13:32:11.527829871 +0000 UTC m=+1346.188933119" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.515260 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.519510 4861 generic.go:334] "Generic (PLEG): container finished" podID="2b6831f5-40f9-460d-a745-a809871692f0" containerID="9d71d0b269a7833f2c3be42defe9c14965a894d4396f19f4f9cd7448355069a6" exitCode=0 Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.520273 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.520381 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b6831f5-40f9-460d-a745-a809871692f0","Type":"ContainerDied","Data":"9d71d0b269a7833f2c3be42defe9c14965a894d4396f19f4f9cd7448355069a6"} Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.520426 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b6831f5-40f9-460d-a745-a809871692f0","Type":"ContainerDied","Data":"f89e7380dd15a952b46762276bd9452943dc17ef3f59c3118e8d643813a1341e"} Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.520468 4861 scope.go:117] "RemoveContainer" containerID="dec008e399d297a3e4b901e208416d8204aec495804ab06e01ce2705a214bd82" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.561269 4861 scope.go:117] "RemoveContainer" containerID="399694809b20257ae4831c9e64a43eea5838376badd354db995943be04c9209d" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.583230 4861 scope.go:117] "RemoveContainer" containerID="56680e4e683c83c2bc8c3fbbc296c00cb25d005a428723b5b2ce66d2198aedb7" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.589478 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/2b6831f5-40f9-460d-a745-a809871692f0-log-httpd\") pod \"2b6831f5-40f9-460d-a745-a809871692f0\" (UID: \"2b6831f5-40f9-460d-a745-a809871692f0\") " Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.589514 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nngf\" (UniqueName: \"kubernetes.io/projected/2b6831f5-40f9-460d-a745-a809871692f0-kube-api-access-4nngf\") pod \"2b6831f5-40f9-460d-a745-a809871692f0\" (UID: \"2b6831f5-40f9-460d-a745-a809871692f0\") " Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.589617 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b6831f5-40f9-460d-a745-a809871692f0-combined-ca-bundle\") pod \"2b6831f5-40f9-460d-a745-a809871692f0\" (UID: \"2b6831f5-40f9-460d-a745-a809871692f0\") " Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.589638 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b6831f5-40f9-460d-a745-a809871692f0-scripts\") pod \"2b6831f5-40f9-460d-a745-a809871692f0\" (UID: \"2b6831f5-40f9-460d-a745-a809871692f0\") " Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.589696 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b6831f5-40f9-460d-a745-a809871692f0-config-data\") pod \"2b6831f5-40f9-460d-a745-a809871692f0\" (UID: \"2b6831f5-40f9-460d-a745-a809871692f0\") " Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.589766 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b6831f5-40f9-460d-a745-a809871692f0-sg-core-conf-yaml\") pod \"2b6831f5-40f9-460d-a745-a809871692f0\" (UID: \"2b6831f5-40f9-460d-a745-a809871692f0\") " Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 
13:32:13.589810 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b6831f5-40f9-460d-a745-a809871692f0-run-httpd\") pod \"2b6831f5-40f9-460d-a745-a809871692f0\" (UID: \"2b6831f5-40f9-460d-a745-a809871692f0\") " Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.590060 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b6831f5-40f9-460d-a745-a809871692f0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2b6831f5-40f9-460d-a745-a809871692f0" (UID: "2b6831f5-40f9-460d-a745-a809871692f0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.590573 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b6831f5-40f9-460d-a745-a809871692f0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2b6831f5-40f9-460d-a745-a809871692f0" (UID: "2b6831f5-40f9-460d-a745-a809871692f0"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.591277 4861 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b6831f5-40f9-460d-a745-a809871692f0-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.591294 4861 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b6831f5-40f9-460d-a745-a809871692f0-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.595505 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b6831f5-40f9-460d-a745-a809871692f0-kube-api-access-4nngf" (OuterVolumeSpecName: "kube-api-access-4nngf") pod "2b6831f5-40f9-460d-a745-a809871692f0" (UID: "2b6831f5-40f9-460d-a745-a809871692f0"). InnerVolumeSpecName "kube-api-access-4nngf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.596561 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b6831f5-40f9-460d-a745-a809871692f0-scripts" (OuterVolumeSpecName: "scripts") pod "2b6831f5-40f9-460d-a745-a809871692f0" (UID: "2b6831f5-40f9-460d-a745-a809871692f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.606297 4861 scope.go:117] "RemoveContainer" containerID="9d71d0b269a7833f2c3be42defe9c14965a894d4396f19f4f9cd7448355069a6" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.635163 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b6831f5-40f9-460d-a745-a809871692f0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2b6831f5-40f9-460d-a745-a809871692f0" (UID: "2b6831f5-40f9-460d-a745-a809871692f0"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.668920 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b6831f5-40f9-460d-a745-a809871692f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b6831f5-40f9-460d-a745-a809871692f0" (UID: "2b6831f5-40f9-460d-a745-a809871692f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.694924 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nngf\" (UniqueName: \"kubernetes.io/projected/2b6831f5-40f9-460d-a745-a809871692f0-kube-api-access-4nngf\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.695010 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b6831f5-40f9-460d-a745-a809871692f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.695033 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b6831f5-40f9-460d-a745-a809871692f0-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.695051 4861 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b6831f5-40f9-460d-a745-a809871692f0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.713414 4861 scope.go:117] "RemoveContainer" containerID="dec008e399d297a3e4b901e208416d8204aec495804ab06e01ce2705a214bd82" Feb 19 13:32:13 crc kubenswrapper[4861]: E0219 13:32:13.713984 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"dec008e399d297a3e4b901e208416d8204aec495804ab06e01ce2705a214bd82\": container with ID starting with dec008e399d297a3e4b901e208416d8204aec495804ab06e01ce2705a214bd82 not found: ID does not exist" containerID="dec008e399d297a3e4b901e208416d8204aec495804ab06e01ce2705a214bd82" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.714035 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dec008e399d297a3e4b901e208416d8204aec495804ab06e01ce2705a214bd82"} err="failed to get container status \"dec008e399d297a3e4b901e208416d8204aec495804ab06e01ce2705a214bd82\": rpc error: code = NotFound desc = could not find container \"dec008e399d297a3e4b901e208416d8204aec495804ab06e01ce2705a214bd82\": container with ID starting with dec008e399d297a3e4b901e208416d8204aec495804ab06e01ce2705a214bd82 not found: ID does not exist" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.714066 4861 scope.go:117] "RemoveContainer" containerID="399694809b20257ae4831c9e64a43eea5838376badd354db995943be04c9209d" Feb 19 13:32:13 crc kubenswrapper[4861]: E0219 13:32:13.714355 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"399694809b20257ae4831c9e64a43eea5838376badd354db995943be04c9209d\": container with ID starting with 399694809b20257ae4831c9e64a43eea5838376badd354db995943be04c9209d not found: ID does not exist" containerID="399694809b20257ae4831c9e64a43eea5838376badd354db995943be04c9209d" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.714380 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"399694809b20257ae4831c9e64a43eea5838376badd354db995943be04c9209d"} err="failed to get container status \"399694809b20257ae4831c9e64a43eea5838376badd354db995943be04c9209d\": rpc error: code = NotFound desc = could not find container \"399694809b20257ae4831c9e64a43eea5838376badd354db995943be04c9209d\": container with ID 
starting with 399694809b20257ae4831c9e64a43eea5838376badd354db995943be04c9209d not found: ID does not exist" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.714397 4861 scope.go:117] "RemoveContainer" containerID="56680e4e683c83c2bc8c3fbbc296c00cb25d005a428723b5b2ce66d2198aedb7" Feb 19 13:32:13 crc kubenswrapper[4861]: E0219 13:32:13.714682 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56680e4e683c83c2bc8c3fbbc296c00cb25d005a428723b5b2ce66d2198aedb7\": container with ID starting with 56680e4e683c83c2bc8c3fbbc296c00cb25d005a428723b5b2ce66d2198aedb7 not found: ID does not exist" containerID="56680e4e683c83c2bc8c3fbbc296c00cb25d005a428723b5b2ce66d2198aedb7" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.714701 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56680e4e683c83c2bc8c3fbbc296c00cb25d005a428723b5b2ce66d2198aedb7"} err="failed to get container status \"56680e4e683c83c2bc8c3fbbc296c00cb25d005a428723b5b2ce66d2198aedb7\": rpc error: code = NotFound desc = could not find container \"56680e4e683c83c2bc8c3fbbc296c00cb25d005a428723b5b2ce66d2198aedb7\": container with ID starting with 56680e4e683c83c2bc8c3fbbc296c00cb25d005a428723b5b2ce66d2198aedb7 not found: ID does not exist" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.714716 4861 scope.go:117] "RemoveContainer" containerID="9d71d0b269a7833f2c3be42defe9c14965a894d4396f19f4f9cd7448355069a6" Feb 19 13:32:13 crc kubenswrapper[4861]: E0219 13:32:13.714910 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d71d0b269a7833f2c3be42defe9c14965a894d4396f19f4f9cd7448355069a6\": container with ID starting with 9d71d0b269a7833f2c3be42defe9c14965a894d4396f19f4f9cd7448355069a6 not found: ID does not exist" containerID="9d71d0b269a7833f2c3be42defe9c14965a894d4396f19f4f9cd7448355069a6" Feb 19 
13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.714939 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d71d0b269a7833f2c3be42defe9c14965a894d4396f19f4f9cd7448355069a6"} err="failed to get container status \"9d71d0b269a7833f2c3be42defe9c14965a894d4396f19f4f9cd7448355069a6\": rpc error: code = NotFound desc = could not find container \"9d71d0b269a7833f2c3be42defe9c14965a894d4396f19f4f9cd7448355069a6\": container with ID starting with 9d71d0b269a7833f2c3be42defe9c14965a894d4396f19f4f9cd7448355069a6 not found: ID does not exist" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.728931 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b6831f5-40f9-460d-a745-a809871692f0-config-data" (OuterVolumeSpecName: "config-data") pod "2b6831f5-40f9-460d-a745-a809871692f0" (UID: "2b6831f5-40f9-460d-a745-a809871692f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.797253 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b6831f5-40f9-460d-a745-a809871692f0-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.897042 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.915540 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.925564 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:32:13 crc kubenswrapper[4861]: E0219 13:32:13.926006 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b6831f5-40f9-460d-a745-a809871692f0" containerName="sg-core" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.926028 4861 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="2b6831f5-40f9-460d-a745-a809871692f0" containerName="sg-core" Feb 19 13:32:13 crc kubenswrapper[4861]: E0219 13:32:13.926056 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b6831f5-40f9-460d-a745-a809871692f0" containerName="ceilometer-notification-agent" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.926065 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b6831f5-40f9-460d-a745-a809871692f0" containerName="ceilometer-notification-agent" Feb 19 13:32:13 crc kubenswrapper[4861]: E0219 13:32:13.926081 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b6831f5-40f9-460d-a745-a809871692f0" containerName="proxy-httpd" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.926089 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b6831f5-40f9-460d-a745-a809871692f0" containerName="proxy-httpd" Feb 19 13:32:13 crc kubenswrapper[4861]: E0219 13:32:13.926118 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b6831f5-40f9-460d-a745-a809871692f0" containerName="ceilometer-central-agent" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.926125 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b6831f5-40f9-460d-a745-a809871692f0" containerName="ceilometer-central-agent" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.926349 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b6831f5-40f9-460d-a745-a809871692f0" containerName="ceilometer-central-agent" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.926372 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b6831f5-40f9-460d-a745-a809871692f0" containerName="ceilometer-notification-agent" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.926395 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b6831f5-40f9-460d-a745-a809871692f0" containerName="proxy-httpd" Feb 19 13:32:13 crc kubenswrapper[4861]: 
I0219 13:32:13.926410 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b6831f5-40f9-460d-a745-a809871692f0" containerName="sg-core" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.928553 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.933711 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.933928 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.934195 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.934605 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:32:13 crc kubenswrapper[4861]: I0219 13:32:13.999113 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b6831f5-40f9-460d-a745-a809871692f0" path="/var/lib/kubelet/pods/2b6831f5-40f9-460d-a745-a809871692f0/volumes" Feb 19 13:32:14 crc kubenswrapper[4861]: I0219 13:32:14.000831 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2e04a54-7f65-46b4-9e5c-97e6d553064b-scripts\") pod \"ceilometer-0\" (UID: \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\") " pod="openstack/ceilometer-0" Feb 19 13:32:14 crc kubenswrapper[4861]: I0219 13:32:14.001002 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2e04a54-7f65-46b4-9e5c-97e6d553064b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\") " pod="openstack/ceilometer-0" Feb 19 13:32:14 crc 
kubenswrapper[4861]: I0219 13:32:14.001049 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2e04a54-7f65-46b4-9e5c-97e6d553064b-run-httpd\") pod \"ceilometer-0\" (UID: \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\") " pod="openstack/ceilometer-0" Feb 19 13:32:14 crc kubenswrapper[4861]: I0219 13:32:14.001150 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2e04a54-7f65-46b4-9e5c-97e6d553064b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\") " pod="openstack/ceilometer-0" Feb 19 13:32:14 crc kubenswrapper[4861]: I0219 13:32:14.001193 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm72m\" (UniqueName: \"kubernetes.io/projected/e2e04a54-7f65-46b4-9e5c-97e6d553064b-kube-api-access-qm72m\") pod \"ceilometer-0\" (UID: \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\") " pod="openstack/ceilometer-0" Feb 19 13:32:14 crc kubenswrapper[4861]: I0219 13:32:14.001284 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2e04a54-7f65-46b4-9e5c-97e6d553064b-log-httpd\") pod \"ceilometer-0\" (UID: \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\") " pod="openstack/ceilometer-0" Feb 19 13:32:14 crc kubenswrapper[4861]: I0219 13:32:14.001389 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e04a54-7f65-46b4-9e5c-97e6d553064b-config-data\") pod \"ceilometer-0\" (UID: \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\") " pod="openstack/ceilometer-0" Feb 19 13:32:14 crc kubenswrapper[4861]: I0219 13:32:14.001461 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2e04a54-7f65-46b4-9e5c-97e6d553064b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\") " pod="openstack/ceilometer-0" Feb 19 13:32:14 crc kubenswrapper[4861]: I0219 13:32:14.103512 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2e04a54-7f65-46b4-9e5c-97e6d553064b-scripts\") pod \"ceilometer-0\" (UID: \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\") " pod="openstack/ceilometer-0" Feb 19 13:32:14 crc kubenswrapper[4861]: I0219 13:32:14.103582 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2e04a54-7f65-46b4-9e5c-97e6d553064b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\") " pod="openstack/ceilometer-0" Feb 19 13:32:14 crc kubenswrapper[4861]: I0219 13:32:14.103646 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2e04a54-7f65-46b4-9e5c-97e6d553064b-run-httpd\") pod \"ceilometer-0\" (UID: \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\") " pod="openstack/ceilometer-0" Feb 19 13:32:14 crc kubenswrapper[4861]: I0219 13:32:14.103764 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2e04a54-7f65-46b4-9e5c-97e6d553064b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\") " pod="openstack/ceilometer-0" Feb 19 13:32:14 crc kubenswrapper[4861]: I0219 13:32:14.103803 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm72m\" (UniqueName: \"kubernetes.io/projected/e2e04a54-7f65-46b4-9e5c-97e6d553064b-kube-api-access-qm72m\") pod \"ceilometer-0\" (UID: \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\") " 
pod="openstack/ceilometer-0" Feb 19 13:32:14 crc kubenswrapper[4861]: I0219 13:32:14.103839 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2e04a54-7f65-46b4-9e5c-97e6d553064b-log-httpd\") pod \"ceilometer-0\" (UID: \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\") " pod="openstack/ceilometer-0" Feb 19 13:32:14 crc kubenswrapper[4861]: I0219 13:32:14.103903 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e04a54-7f65-46b4-9e5c-97e6d553064b-config-data\") pod \"ceilometer-0\" (UID: \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\") " pod="openstack/ceilometer-0" Feb 19 13:32:14 crc kubenswrapper[4861]: I0219 13:32:14.103932 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2e04a54-7f65-46b4-9e5c-97e6d553064b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\") " pod="openstack/ceilometer-0" Feb 19 13:32:14 crc kubenswrapper[4861]: I0219 13:32:14.104860 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2e04a54-7f65-46b4-9e5c-97e6d553064b-log-httpd\") pod \"ceilometer-0\" (UID: \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\") " pod="openstack/ceilometer-0" Feb 19 13:32:14 crc kubenswrapper[4861]: I0219 13:32:14.104928 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2e04a54-7f65-46b4-9e5c-97e6d553064b-run-httpd\") pod \"ceilometer-0\" (UID: \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\") " pod="openstack/ceilometer-0" Feb 19 13:32:14 crc kubenswrapper[4861]: I0219 13:32:14.109110 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e2e04a54-7f65-46b4-9e5c-97e6d553064b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\") " pod="openstack/ceilometer-0" Feb 19 13:32:14 crc kubenswrapper[4861]: I0219 13:32:14.109603 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e04a54-7f65-46b4-9e5c-97e6d553064b-config-data\") pod \"ceilometer-0\" (UID: \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\") " pod="openstack/ceilometer-0" Feb 19 13:32:14 crc kubenswrapper[4861]: I0219 13:32:14.110223 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2e04a54-7f65-46b4-9e5c-97e6d553064b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\") " pod="openstack/ceilometer-0" Feb 19 13:32:14 crc kubenswrapper[4861]: I0219 13:32:14.110781 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2e04a54-7f65-46b4-9e5c-97e6d553064b-scripts\") pod \"ceilometer-0\" (UID: \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\") " pod="openstack/ceilometer-0" Feb 19 13:32:14 crc kubenswrapper[4861]: I0219 13:32:14.115805 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2e04a54-7f65-46b4-9e5c-97e6d553064b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\") " pod="openstack/ceilometer-0" Feb 19 13:32:14 crc kubenswrapper[4861]: I0219 13:32:14.130732 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm72m\" (UniqueName: \"kubernetes.io/projected/e2e04a54-7f65-46b4-9e5c-97e6d553064b-kube-api-access-qm72m\") pod \"ceilometer-0\" (UID: \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\") " pod="openstack/ceilometer-0" Feb 19 13:32:14 crc kubenswrapper[4861]: I0219 13:32:14.246780 4861 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:32:14 crc kubenswrapper[4861]: I0219 13:32:14.764020 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:32:15 crc kubenswrapper[4861]: I0219 13:32:15.540773 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2e04a54-7f65-46b4-9e5c-97e6d553064b","Type":"ContainerStarted","Data":"a40fb6522e9c85eedfa2a5f0a8f67014e57718cb9119d3ddf75d154b4c8ff4df"} Feb 19 13:32:15 crc kubenswrapper[4861]: I0219 13:32:15.541177 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2e04a54-7f65-46b4-9e5c-97e6d553064b","Type":"ContainerStarted","Data":"00c4e1410e31416525418b8f499f41013714761b0290eae6e13671d13a09424a"} Feb 19 13:32:16 crc kubenswrapper[4861]: I0219 13:32:16.204292 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 19 13:32:16 crc kubenswrapper[4861]: I0219 13:32:16.550749 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2e04a54-7f65-46b4-9e5c-97e6d553064b","Type":"ContainerStarted","Data":"3cd6eda07ebc0e699a84741425f20a433109b5b8fdf987994b6da72dda475590"} Feb 19 13:32:17 crc kubenswrapper[4861]: I0219 13:32:17.564049 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2e04a54-7f65-46b4-9e5c-97e6d553064b","Type":"ContainerStarted","Data":"cdfdd5e58b281aa0060b79241e3032905f1974b12b62735a53a6ec38a4775fac"} Feb 19 13:32:17 crc kubenswrapper[4861]: E0219 13:32:17.572137 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0543bf80_4d09_4c45_897d_3b2ae4291861.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod411cd56f_4fb3_4f9b_9cfe_e287f22a4609.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod411cd56f_4fb3_4f9b_9cfe_e287f22a4609.slice/crio-f456e9cbe8ce9bfb9aedf4df9a26b3a3c2600950bcf49d4ed5994dc477702e04\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34b85ee6_f9f6_4f1e_8fc9_23072e437a14.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9d690b0_57b6_4544_9181_32144adaaef5.slice\": RecentStats: unable to find data in memory cache]" Feb 19 13:32:18 crc kubenswrapper[4861]: I0219 13:32:18.583993 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2e04a54-7f65-46b4-9e5c-97e6d553064b","Type":"ContainerStarted","Data":"d37e0f1410c63068592aa1a81ee85989589e22db3e41306607b8fa7828882334"} Feb 19 13:32:18 crc kubenswrapper[4861]: I0219 13:32:18.584617 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 13:32:18 crc kubenswrapper[4861]: I0219 13:32:18.607093 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.21252193 podStartE2EDuration="5.607078266s" podCreationTimestamp="2026-02-19 13:32:13 +0000 UTC" firstStartedPulling="2026-02-19 13:32:14.776172089 +0000 UTC m=+1349.437275317" lastFinishedPulling="2026-02-19 13:32:18.170728415 +0000 UTC m=+1352.831831653" observedRunningTime="2026-02-19 13:32:18.599879772 +0000 UTC m=+1353.260983010" watchObservedRunningTime="2026-02-19 13:32:18.607078266 +0000 UTC m=+1353.268181494" Feb 19 13:32:19 crc kubenswrapper[4861]: I0219 13:32:19.953181 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 19 13:32:20 crc 
kubenswrapper[4861]: I0219 13:32:20.467699 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-v5kmf"] Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.470081 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-v5kmf" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.475450 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-v5kmf"] Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.479914 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.480559 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.536849 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e55896ea-935d-4340-a4d6-5429eb546e83-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-v5kmf\" (UID: \"e55896ea-935d-4340-a4d6-5429eb546e83\") " pod="openstack/nova-cell0-cell-mapping-v5kmf" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.536985 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vh5r\" (UniqueName: \"kubernetes.io/projected/e55896ea-935d-4340-a4d6-5429eb546e83-kube-api-access-9vh5r\") pod \"nova-cell0-cell-mapping-v5kmf\" (UID: \"e55896ea-935d-4340-a4d6-5429eb546e83\") " pod="openstack/nova-cell0-cell-mapping-v5kmf" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.537107 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e55896ea-935d-4340-a4d6-5429eb546e83-scripts\") pod \"nova-cell0-cell-mapping-v5kmf\" (UID: 
\"e55896ea-935d-4340-a4d6-5429eb546e83\") " pod="openstack/nova-cell0-cell-mapping-v5kmf" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.537146 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e55896ea-935d-4340-a4d6-5429eb546e83-config-data\") pod \"nova-cell0-cell-mapping-v5kmf\" (UID: \"e55896ea-935d-4340-a4d6-5429eb546e83\") " pod="openstack/nova-cell0-cell-mapping-v5kmf" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.638628 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e55896ea-935d-4340-a4d6-5429eb546e83-scripts\") pod \"nova-cell0-cell-mapping-v5kmf\" (UID: \"e55896ea-935d-4340-a4d6-5429eb546e83\") " pod="openstack/nova-cell0-cell-mapping-v5kmf" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.638692 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e55896ea-935d-4340-a4d6-5429eb546e83-config-data\") pod \"nova-cell0-cell-mapping-v5kmf\" (UID: \"e55896ea-935d-4340-a4d6-5429eb546e83\") " pod="openstack/nova-cell0-cell-mapping-v5kmf" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.638761 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e55896ea-935d-4340-a4d6-5429eb546e83-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-v5kmf\" (UID: \"e55896ea-935d-4340-a4d6-5429eb546e83\") " pod="openstack/nova-cell0-cell-mapping-v5kmf" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.638855 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vh5r\" (UniqueName: \"kubernetes.io/projected/e55896ea-935d-4340-a4d6-5429eb546e83-kube-api-access-9vh5r\") pod \"nova-cell0-cell-mapping-v5kmf\" (UID: \"e55896ea-935d-4340-a4d6-5429eb546e83\") 
" pod="openstack/nova-cell0-cell-mapping-v5kmf" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.653312 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e55896ea-935d-4340-a4d6-5429eb546e83-config-data\") pod \"nova-cell0-cell-mapping-v5kmf\" (UID: \"e55896ea-935d-4340-a4d6-5429eb546e83\") " pod="openstack/nova-cell0-cell-mapping-v5kmf" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.656052 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e55896ea-935d-4340-a4d6-5429eb546e83-scripts\") pod \"nova-cell0-cell-mapping-v5kmf\" (UID: \"e55896ea-935d-4340-a4d6-5429eb546e83\") " pod="openstack/nova-cell0-cell-mapping-v5kmf" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.668154 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.669394 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.683381 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.684094 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e55896ea-935d-4340-a4d6-5429eb546e83-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-v5kmf\" (UID: \"e55896ea-935d-4340-a4d6-5429eb546e83\") " pod="openstack/nova-cell0-cell-mapping-v5kmf" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.716302 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vh5r\" (UniqueName: \"kubernetes.io/projected/e55896ea-935d-4340-a4d6-5429eb546e83-kube-api-access-9vh5r\") pod \"nova-cell0-cell-mapping-v5kmf\" (UID: \"e55896ea-935d-4340-a4d6-5429eb546e83\") " pod="openstack/nova-cell0-cell-mapping-v5kmf" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.716369 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.735477 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.736828 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.740749 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acfb5225-4c50-4d7e-8008-50f204360fab-config-data\") pod \"nova-scheduler-0\" (UID: \"acfb5225-4c50-4d7e-8008-50f204360fab\") " pod="openstack/nova-scheduler-0" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.740843 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acfb5225-4c50-4d7e-8008-50f204360fab-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"acfb5225-4c50-4d7e-8008-50f204360fab\") " pod="openstack/nova-scheduler-0" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.740876 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n5n5\" (UniqueName: \"kubernetes.io/projected/acfb5225-4c50-4d7e-8008-50f204360fab-kube-api-access-9n5n5\") pod \"nova-scheduler-0\" (UID: \"acfb5225-4c50-4d7e-8008-50f204360fab\") " pod="openstack/nova-scheduler-0" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.741165 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.741310 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.813394 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-v5kmf" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.835495 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.837040 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.842313 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51868560-2fb1-43dc-bdb0-0ebf4b4f173e-logs\") pod \"nova-metadata-0\" (UID: \"51868560-2fb1-43dc-bdb0-0ebf4b4f173e\") " pod="openstack/nova-metadata-0" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.842367 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51868560-2fb1-43dc-bdb0-0ebf4b4f173e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"51868560-2fb1-43dc-bdb0-0ebf4b4f173e\") " pod="openstack/nova-metadata-0" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.842421 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb75w\" (UniqueName: \"kubernetes.io/projected/51868560-2fb1-43dc-bdb0-0ebf4b4f173e-kube-api-access-vb75w\") pod \"nova-metadata-0\" (UID: \"51868560-2fb1-43dc-bdb0-0ebf4b4f173e\") " pod="openstack/nova-metadata-0" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.842465 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acfb5225-4c50-4d7e-8008-50f204360fab-config-data\") pod \"nova-scheduler-0\" (UID: \"acfb5225-4c50-4d7e-8008-50f204360fab\") " pod="openstack/nova-scheduler-0" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.842522 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51868560-2fb1-43dc-bdb0-0ebf4b4f173e-config-data\") pod \"nova-metadata-0\" (UID: \"51868560-2fb1-43dc-bdb0-0ebf4b4f173e\") " pod="openstack/nova-metadata-0" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.842556 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acfb5225-4c50-4d7e-8008-50f204360fab-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"acfb5225-4c50-4d7e-8008-50f204360fab\") " pod="openstack/nova-scheduler-0" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.842584 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n5n5\" (UniqueName: \"kubernetes.io/projected/acfb5225-4c50-4d7e-8008-50f204360fab-kube-api-access-9n5n5\") pod \"nova-scheduler-0\" (UID: \"acfb5225-4c50-4d7e-8008-50f204360fab\") " pod="openstack/nova-scheduler-0" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.843550 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.845310 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.856506 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acfb5225-4c50-4d7e-8008-50f204360fab-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"acfb5225-4c50-4d7e-8008-50f204360fab\") " pod="openstack/nova-scheduler-0" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.856513 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acfb5225-4c50-4d7e-8008-50f204360fab-config-data\") pod \"nova-scheduler-0\" (UID: \"acfb5225-4c50-4d7e-8008-50f204360fab\") " pod="openstack/nova-scheduler-0" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.880300 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n5n5\" (UniqueName: \"kubernetes.io/projected/acfb5225-4c50-4d7e-8008-50f204360fab-kube-api-access-9n5n5\") pod \"nova-scheduler-0\" (UID: 
\"acfb5225-4c50-4d7e-8008-50f204360fab\") " pod="openstack/nova-scheduler-0" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.880363 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-849fff7679-bmrrh"] Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.881739 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-849fff7679-bmrrh" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.910045 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-849fff7679-bmrrh"] Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.930502 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.931842 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.933243 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.947565 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb75w\" (UniqueName: \"kubernetes.io/projected/51868560-2fb1-43dc-bdb0-0ebf4b4f173e-kube-api-access-vb75w\") pod \"nova-metadata-0\" (UID: \"51868560-2fb1-43dc-bdb0-0ebf4b4f173e\") " pod="openstack/nova-metadata-0" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.947612 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11eb9e69-eca0-40ab-8757-258d34511ba5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"11eb9e69-eca0-40ab-8757-258d34511ba5\") " pod="openstack/nova-api-0" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.947662 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11eb9e69-eca0-40ab-8757-258d34511ba5-config-data\") pod \"nova-api-0\" (UID: \"11eb9e69-eca0-40ab-8757-258d34511ba5\") " pod="openstack/nova-api-0" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.947698 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d937689a-e796-41e2-baeb-b5e29f737093-dns-swift-storage-0\") pod \"dnsmasq-dns-849fff7679-bmrrh\" (UID: \"d937689a-e796-41e2-baeb-b5e29f737093\") " pod="openstack/dnsmasq-dns-849fff7679-bmrrh" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.947723 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d937689a-e796-41e2-baeb-b5e29f737093-ovsdbserver-nb\") pod \"dnsmasq-dns-849fff7679-bmrrh\" (UID: \"d937689a-e796-41e2-baeb-b5e29f737093\") " pod="openstack/dnsmasq-dns-849fff7679-bmrrh" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.947747 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51868560-2fb1-43dc-bdb0-0ebf4b4f173e-config-data\") pod \"nova-metadata-0\" (UID: \"51868560-2fb1-43dc-bdb0-0ebf4b4f173e\") " pod="openstack/nova-metadata-0" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.947765 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pnnw\" (UniqueName: \"kubernetes.io/projected/d937689a-e796-41e2-baeb-b5e29f737093-kube-api-access-9pnnw\") pod \"dnsmasq-dns-849fff7679-bmrrh\" (UID: \"d937689a-e796-41e2-baeb-b5e29f737093\") " pod="openstack/dnsmasq-dns-849fff7679-bmrrh" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.947810 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d937689a-e796-41e2-baeb-b5e29f737093-ovsdbserver-sb\") pod \"dnsmasq-dns-849fff7679-bmrrh\" (UID: \"d937689a-e796-41e2-baeb-b5e29f737093\") " pod="openstack/dnsmasq-dns-849fff7679-bmrrh" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.947833 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k79t6\" (UniqueName: \"kubernetes.io/projected/11eb9e69-eca0-40ab-8757-258d34511ba5-kube-api-access-k79t6\") pod \"nova-api-0\" (UID: \"11eb9e69-eca0-40ab-8757-258d34511ba5\") " pod="openstack/nova-api-0" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.947849 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11eb9e69-eca0-40ab-8757-258d34511ba5-logs\") pod \"nova-api-0\" (UID: \"11eb9e69-eca0-40ab-8757-258d34511ba5\") " pod="openstack/nova-api-0" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.947896 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51868560-2fb1-43dc-bdb0-0ebf4b4f173e-logs\") pod \"nova-metadata-0\" (UID: \"51868560-2fb1-43dc-bdb0-0ebf4b4f173e\") " pod="openstack/nova-metadata-0" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.947917 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d937689a-e796-41e2-baeb-b5e29f737093-config\") pod \"dnsmasq-dns-849fff7679-bmrrh\" (UID: \"d937689a-e796-41e2-baeb-b5e29f737093\") " pod="openstack/dnsmasq-dns-849fff7679-bmrrh" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.947947 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51868560-2fb1-43dc-bdb0-0ebf4b4f173e-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"51868560-2fb1-43dc-bdb0-0ebf4b4f173e\") " pod="openstack/nova-metadata-0" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.947968 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d937689a-e796-41e2-baeb-b5e29f737093-dns-svc\") pod \"dnsmasq-dns-849fff7679-bmrrh\" (UID: \"d937689a-e796-41e2-baeb-b5e29f737093\") " pod="openstack/dnsmasq-dns-849fff7679-bmrrh" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.949947 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51868560-2fb1-43dc-bdb0-0ebf4b4f173e-logs\") pod \"nova-metadata-0\" (UID: \"51868560-2fb1-43dc-bdb0-0ebf4b4f173e\") " pod="openstack/nova-metadata-0" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.959429 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51868560-2fb1-43dc-bdb0-0ebf4b4f173e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"51868560-2fb1-43dc-bdb0-0ebf4b4f173e\") " pod="openstack/nova-metadata-0" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.969901 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51868560-2fb1-43dc-bdb0-0ebf4b4f173e-config-data\") pod \"nova-metadata-0\" (UID: \"51868560-2fb1-43dc-bdb0-0ebf4b4f173e\") " pod="openstack/nova-metadata-0" Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.976498 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 13:32:20 crc kubenswrapper[4861]: I0219 13:32:20.980033 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb75w\" (UniqueName: \"kubernetes.io/projected/51868560-2fb1-43dc-bdb0-0ebf4b4f173e-kube-api-access-vb75w\") pod \"nova-metadata-0\" (UID: 
\"51868560-2fb1-43dc-bdb0-0ebf4b4f173e\") " pod="openstack/nova-metadata-0" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.055170 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11eb9e69-eca0-40ab-8757-258d34511ba5-config-data\") pod \"nova-api-0\" (UID: \"11eb9e69-eca0-40ab-8757-258d34511ba5\") " pod="openstack/nova-api-0" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.055285 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d937689a-e796-41e2-baeb-b5e29f737093-dns-swift-storage-0\") pod \"dnsmasq-dns-849fff7679-bmrrh\" (UID: \"d937689a-e796-41e2-baeb-b5e29f737093\") " pod="openstack/dnsmasq-dns-849fff7679-bmrrh" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.055328 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk96v\" (UniqueName: \"kubernetes.io/projected/505721c7-99f1-42f1-9c9c-3ddf8eab6e3d-kube-api-access-fk96v\") pod \"nova-cell1-novncproxy-0\" (UID: \"505721c7-99f1-42f1-9c9c-3ddf8eab6e3d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.055350 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d937689a-e796-41e2-baeb-b5e29f737093-ovsdbserver-nb\") pod \"dnsmasq-dns-849fff7679-bmrrh\" (UID: \"d937689a-e796-41e2-baeb-b5e29f737093\") " pod="openstack/dnsmasq-dns-849fff7679-bmrrh" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.055389 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pnnw\" (UniqueName: \"kubernetes.io/projected/d937689a-e796-41e2-baeb-b5e29f737093-kube-api-access-9pnnw\") pod \"dnsmasq-dns-849fff7679-bmrrh\" (UID: \"d937689a-e796-41e2-baeb-b5e29f737093\") " 
pod="openstack/dnsmasq-dns-849fff7679-bmrrh" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.055519 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d937689a-e796-41e2-baeb-b5e29f737093-ovsdbserver-sb\") pod \"dnsmasq-dns-849fff7679-bmrrh\" (UID: \"d937689a-e796-41e2-baeb-b5e29f737093\") " pod="openstack/dnsmasq-dns-849fff7679-bmrrh" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.055560 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k79t6\" (UniqueName: \"kubernetes.io/projected/11eb9e69-eca0-40ab-8757-258d34511ba5-kube-api-access-k79t6\") pod \"nova-api-0\" (UID: \"11eb9e69-eca0-40ab-8757-258d34511ba5\") " pod="openstack/nova-api-0" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.055576 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11eb9e69-eca0-40ab-8757-258d34511ba5-logs\") pod \"nova-api-0\" (UID: \"11eb9e69-eca0-40ab-8757-258d34511ba5\") " pod="openstack/nova-api-0" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.055654 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505721c7-99f1-42f1-9c9c-3ddf8eab6e3d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"505721c7-99f1-42f1-9c9c-3ddf8eab6e3d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.055679 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d937689a-e796-41e2-baeb-b5e29f737093-config\") pod \"dnsmasq-dns-849fff7679-bmrrh\" (UID: \"d937689a-e796-41e2-baeb-b5e29f737093\") " pod="openstack/dnsmasq-dns-849fff7679-bmrrh" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.055723 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d937689a-e796-41e2-baeb-b5e29f737093-dns-svc\") pod \"dnsmasq-dns-849fff7679-bmrrh\" (UID: \"d937689a-e796-41e2-baeb-b5e29f737093\") " pod="openstack/dnsmasq-dns-849fff7679-bmrrh" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.055836 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11eb9e69-eca0-40ab-8757-258d34511ba5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"11eb9e69-eca0-40ab-8757-258d34511ba5\") " pod="openstack/nova-api-0" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.055876 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/505721c7-99f1-42f1-9c9c-3ddf8eab6e3d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"505721c7-99f1-42f1-9c9c-3ddf8eab6e3d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.056926 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11eb9e69-eca0-40ab-8757-258d34511ba5-logs\") pod \"nova-api-0\" (UID: \"11eb9e69-eca0-40ab-8757-258d34511ba5\") " pod="openstack/nova-api-0" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.057802 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d937689a-e796-41e2-baeb-b5e29f737093-dns-svc\") pod \"dnsmasq-dns-849fff7679-bmrrh\" (UID: \"d937689a-e796-41e2-baeb-b5e29f737093\") " pod="openstack/dnsmasq-dns-849fff7679-bmrrh" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.058767 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d937689a-e796-41e2-baeb-b5e29f737093-ovsdbserver-nb\") pod 
\"dnsmasq-dns-849fff7679-bmrrh\" (UID: \"d937689a-e796-41e2-baeb-b5e29f737093\") " pod="openstack/dnsmasq-dns-849fff7679-bmrrh" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.059924 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d937689a-e796-41e2-baeb-b5e29f737093-config\") pod \"dnsmasq-dns-849fff7679-bmrrh\" (UID: \"d937689a-e796-41e2-baeb-b5e29f737093\") " pod="openstack/dnsmasq-dns-849fff7679-bmrrh" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.061316 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11eb9e69-eca0-40ab-8757-258d34511ba5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"11eb9e69-eca0-40ab-8757-258d34511ba5\") " pod="openstack/nova-api-0" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.061443 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11eb9e69-eca0-40ab-8757-258d34511ba5-config-data\") pod \"nova-api-0\" (UID: \"11eb9e69-eca0-40ab-8757-258d34511ba5\") " pod="openstack/nova-api-0" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.062169 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d937689a-e796-41e2-baeb-b5e29f737093-dns-swift-storage-0\") pod \"dnsmasq-dns-849fff7679-bmrrh\" (UID: \"d937689a-e796-41e2-baeb-b5e29f737093\") " pod="openstack/dnsmasq-dns-849fff7679-bmrrh" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.063873 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.065852 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d937689a-e796-41e2-baeb-b5e29f737093-ovsdbserver-sb\") pod \"dnsmasq-dns-849fff7679-bmrrh\" (UID: \"d937689a-e796-41e2-baeb-b5e29f737093\") " pod="openstack/dnsmasq-dns-849fff7679-bmrrh" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.084787 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.089052 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k79t6\" (UniqueName: \"kubernetes.io/projected/11eb9e69-eca0-40ab-8757-258d34511ba5-kube-api-access-k79t6\") pod \"nova-api-0\" (UID: \"11eb9e69-eca0-40ab-8757-258d34511ba5\") " pod="openstack/nova-api-0" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.089215 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pnnw\" (UniqueName: \"kubernetes.io/projected/d937689a-e796-41e2-baeb-b5e29f737093-kube-api-access-9pnnw\") pod \"dnsmasq-dns-849fff7679-bmrrh\" (UID: \"d937689a-e796-41e2-baeb-b5e29f737093\") " pod="openstack/dnsmasq-dns-849fff7679-bmrrh" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.157421 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505721c7-99f1-42f1-9c9c-3ddf8eab6e3d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"505721c7-99f1-42f1-9c9c-3ddf8eab6e3d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.157740 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/505721c7-99f1-42f1-9c9c-3ddf8eab6e3d-config-data\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"505721c7-99f1-42f1-9c9c-3ddf8eab6e3d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.157789 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk96v\" (UniqueName: \"kubernetes.io/projected/505721c7-99f1-42f1-9c9c-3ddf8eab6e3d-kube-api-access-fk96v\") pod \"nova-cell1-novncproxy-0\" (UID: \"505721c7-99f1-42f1-9c9c-3ddf8eab6e3d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.164315 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/505721c7-99f1-42f1-9c9c-3ddf8eab6e3d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"505721c7-99f1-42f1-9c9c-3ddf8eab6e3d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.170955 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505721c7-99f1-42f1-9c9c-3ddf8eab6e3d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"505721c7-99f1-42f1-9c9c-3ddf8eab6e3d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.181909 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk96v\" (UniqueName: \"kubernetes.io/projected/505721c7-99f1-42f1-9c9c-3ddf8eab6e3d-kube-api-access-fk96v\") pod \"nova-cell1-novncproxy-0\" (UID: \"505721c7-99f1-42f1-9c9c-3ddf8eab6e3d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.306312 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.320045 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-849fff7679-bmrrh" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.335413 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.368097 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-v5kmf"] Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.566174 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.615039 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.630583 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-v5kmf" event={"ID":"e55896ea-935d-4340-a4d6-5429eb546e83","Type":"ContainerStarted","Data":"2753675c87b73c505b2e8b6f36f239494fdc27de18fed5c4b6f1fd24bdb69ee4"} Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.633486 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51868560-2fb1-43dc-bdb0-0ebf4b4f173e","Type":"ContainerStarted","Data":"ec5df1a42fd97612067b47c6a75b0cb45636ce844ed5b397f8509624bec77ab9"} Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.802240 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bsd6v"] Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.803611 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bsd6v" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.807326 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.807848 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.816453 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bsd6v"] Feb 19 13:32:21 crc kubenswrapper[4861]: W0219 13:32:21.852679 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11eb9e69_eca0_40ab_8757_258d34511ba5.slice/crio-fc08ef10d81f3fdfd6aa9582af2a38a569023017f60f59218037a400b7092bdd WatchSource:0}: Error finding container fc08ef10d81f3fdfd6aa9582af2a38a569023017f60f59218037a400b7092bdd: Status 404 returned error can't find the container with id fc08ef10d81f3fdfd6aa9582af2a38a569023017f60f59218037a400b7092bdd Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.858495 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.877140 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfptt\" (UniqueName: \"kubernetes.io/projected/afe8c746-882a-4160-b840-a00f2a2f267c-kube-api-access-bfptt\") pod \"nova-cell1-conductor-db-sync-bsd6v\" (UID: \"afe8c746-882a-4160-b840-a00f2a2f267c\") " pod="openstack/nova-cell1-conductor-db-sync-bsd6v" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.877219 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe8c746-882a-4160-b840-a00f2a2f267c-scripts\") pod 
\"nova-cell1-conductor-db-sync-bsd6v\" (UID: \"afe8c746-882a-4160-b840-a00f2a2f267c\") " pod="openstack/nova-cell1-conductor-db-sync-bsd6v" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.877335 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe8c746-882a-4160-b840-a00f2a2f267c-config-data\") pod \"nova-cell1-conductor-db-sync-bsd6v\" (UID: \"afe8c746-882a-4160-b840-a00f2a2f267c\") " pod="openstack/nova-cell1-conductor-db-sync-bsd6v" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.877521 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe8c746-882a-4160-b840-a00f2a2f267c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bsd6v\" (UID: \"afe8c746-882a-4160-b840-a00f2a2f267c\") " pod="openstack/nova-cell1-conductor-db-sync-bsd6v" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.959181 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-849fff7679-bmrrh"] Feb 19 13:32:21 crc kubenswrapper[4861]: W0219 13:32:21.963099 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd937689a_e796_41e2_baeb_b5e29f737093.slice/crio-46b943208540f8bf1e1e5963eb54c7bc11d3e7dc20403cdbfdcee17edb28f6ea WatchSource:0}: Error finding container 46b943208540f8bf1e1e5963eb54c7bc11d3e7dc20403cdbfdcee17edb28f6ea: Status 404 returned error can't find the container with id 46b943208540f8bf1e1e5963eb54c7bc11d3e7dc20403cdbfdcee17edb28f6ea Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.981656 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfptt\" (UniqueName: \"kubernetes.io/projected/afe8c746-882a-4160-b840-a00f2a2f267c-kube-api-access-bfptt\") pod \"nova-cell1-conductor-db-sync-bsd6v\" 
(UID: \"afe8c746-882a-4160-b840-a00f2a2f267c\") " pod="openstack/nova-cell1-conductor-db-sync-bsd6v" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.981716 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe8c746-882a-4160-b840-a00f2a2f267c-scripts\") pod \"nova-cell1-conductor-db-sync-bsd6v\" (UID: \"afe8c746-882a-4160-b840-a00f2a2f267c\") " pod="openstack/nova-cell1-conductor-db-sync-bsd6v" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.981780 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe8c746-882a-4160-b840-a00f2a2f267c-config-data\") pod \"nova-cell1-conductor-db-sync-bsd6v\" (UID: \"afe8c746-882a-4160-b840-a00f2a2f267c\") " pod="openstack/nova-cell1-conductor-db-sync-bsd6v" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.981851 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe8c746-882a-4160-b840-a00f2a2f267c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bsd6v\" (UID: \"afe8c746-882a-4160-b840-a00f2a2f267c\") " pod="openstack/nova-cell1-conductor-db-sync-bsd6v" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.987891 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe8c746-882a-4160-b840-a00f2a2f267c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bsd6v\" (UID: \"afe8c746-882a-4160-b840-a00f2a2f267c\") " pod="openstack/nova-cell1-conductor-db-sync-bsd6v" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.989745 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe8c746-882a-4160-b840-a00f2a2f267c-scripts\") pod \"nova-cell1-conductor-db-sync-bsd6v\" (UID: 
\"afe8c746-882a-4160-b840-a00f2a2f267c\") " pod="openstack/nova-cell1-conductor-db-sync-bsd6v" Feb 19 13:32:21 crc kubenswrapper[4861]: I0219 13:32:21.990285 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe8c746-882a-4160-b840-a00f2a2f267c-config-data\") pod \"nova-cell1-conductor-db-sync-bsd6v\" (UID: \"afe8c746-882a-4160-b840-a00f2a2f267c\") " pod="openstack/nova-cell1-conductor-db-sync-bsd6v" Feb 19 13:32:22 crc kubenswrapper[4861]: I0219 13:32:22.003953 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfptt\" (UniqueName: \"kubernetes.io/projected/afe8c746-882a-4160-b840-a00f2a2f267c-kube-api-access-bfptt\") pod \"nova-cell1-conductor-db-sync-bsd6v\" (UID: \"afe8c746-882a-4160-b840-a00f2a2f267c\") " pod="openstack/nova-cell1-conductor-db-sync-bsd6v" Feb 19 13:32:22 crc kubenswrapper[4861]: I0219 13:32:22.062300 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 13:32:22 crc kubenswrapper[4861]: I0219 13:32:22.135858 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bsd6v" Feb 19 13:32:22 crc kubenswrapper[4861]: I0219 13:32:22.648566 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-v5kmf" event={"ID":"e55896ea-935d-4340-a4d6-5429eb546e83","Type":"ContainerStarted","Data":"4dc3b378113db0a85067b6aacfc5cca8ae80446fc65f308cc2cd9c792fc8ef5d"} Feb 19 13:32:22 crc kubenswrapper[4861]: I0219 13:32:22.650875 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"505721c7-99f1-42f1-9c9c-3ddf8eab6e3d","Type":"ContainerStarted","Data":"f12226cf719945f56c3a6262acfd18a5912a92502017058c06521a357d0d8ea6"} Feb 19 13:32:22 crc kubenswrapper[4861]: I0219 13:32:22.651879 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"acfb5225-4c50-4d7e-8008-50f204360fab","Type":"ContainerStarted","Data":"3185fc47539260ec1148630c1dc7a6bda1292282310cf6d1b56ea132878008e8"} Feb 19 13:32:22 crc kubenswrapper[4861]: I0219 13:32:22.654391 4861 generic.go:334] "Generic (PLEG): container finished" podID="d937689a-e796-41e2-baeb-b5e29f737093" containerID="8cde963501d7b1836dc52d2fac09c21cf31f9d101ad1a755e8110d70cde7bf8f" exitCode=0 Feb 19 13:32:22 crc kubenswrapper[4861]: I0219 13:32:22.655094 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-849fff7679-bmrrh" event={"ID":"d937689a-e796-41e2-baeb-b5e29f737093","Type":"ContainerDied","Data":"8cde963501d7b1836dc52d2fac09c21cf31f9d101ad1a755e8110d70cde7bf8f"} Feb 19 13:32:22 crc kubenswrapper[4861]: I0219 13:32:22.655151 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-849fff7679-bmrrh" event={"ID":"d937689a-e796-41e2-baeb-b5e29f737093","Type":"ContainerStarted","Data":"46b943208540f8bf1e1e5963eb54c7bc11d3e7dc20403cdbfdcee17edb28f6ea"} Feb 19 13:32:22 crc kubenswrapper[4861]: I0219 13:32:22.668874 4861 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-api-0" event={"ID":"11eb9e69-eca0-40ab-8757-258d34511ba5","Type":"ContainerStarted","Data":"fc08ef10d81f3fdfd6aa9582af2a38a569023017f60f59218037a400b7092bdd"} Feb 19 13:32:22 crc kubenswrapper[4861]: I0219 13:32:22.687400 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-v5kmf" podStartSLOduration=2.6873762389999998 podStartE2EDuration="2.687376239s" podCreationTimestamp="2026-02-19 13:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:32:22.667173515 +0000 UTC m=+1357.328276743" watchObservedRunningTime="2026-02-19 13:32:22.687376239 +0000 UTC m=+1357.348479467" Feb 19 13:32:22 crc kubenswrapper[4861]: I0219 13:32:22.723456 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bsd6v"] Feb 19 13:32:22 crc kubenswrapper[4861]: W0219 13:32:22.732515 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafe8c746_882a_4160_b840_a00f2a2f267c.slice/crio-24a47fc4d2dd7145d598ded463c8c325b16b568861eea52ca26e0a5263c49e1b WatchSource:0}: Error finding container 24a47fc4d2dd7145d598ded463c8c325b16b568861eea52ca26e0a5263c49e1b: Status 404 returned error can't find the container with id 24a47fc4d2dd7145d598ded463c8c325b16b568861eea52ca26e0a5263c49e1b Feb 19 13:32:23 crc kubenswrapper[4861]: I0219 13:32:23.684170 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bsd6v" event={"ID":"afe8c746-882a-4160-b840-a00f2a2f267c","Type":"ContainerStarted","Data":"51e69d4688e12a168b13767859d4e6da331e3a50228ed704c968b8fd8f39c623"} Feb 19 13:32:23 crc kubenswrapper[4861]: I0219 13:32:23.684461 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bsd6v" 
event={"ID":"afe8c746-882a-4160-b840-a00f2a2f267c","Type":"ContainerStarted","Data":"24a47fc4d2dd7145d598ded463c8c325b16b568861eea52ca26e0a5263c49e1b"} Feb 19 13:32:23 crc kubenswrapper[4861]: I0219 13:32:23.701330 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-849fff7679-bmrrh" event={"ID":"d937689a-e796-41e2-baeb-b5e29f737093","Type":"ContainerStarted","Data":"f9b9d3b8ad83221536664dd8553d659f7808141c620c64adeb9bdfac11d98f10"} Feb 19 13:32:23 crc kubenswrapper[4861]: I0219 13:32:23.702087 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-849fff7679-bmrrh" Feb 19 13:32:23 crc kubenswrapper[4861]: I0219 13:32:23.722683 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-bsd6v" podStartSLOduration=2.722661679 podStartE2EDuration="2.722661679s" podCreationTimestamp="2026-02-19 13:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:32:23.717721906 +0000 UTC m=+1358.378825154" watchObservedRunningTime="2026-02-19 13:32:23.722661679 +0000 UTC m=+1358.383764917" Feb 19 13:32:23 crc kubenswrapper[4861]: I0219 13:32:23.750333 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-849fff7679-bmrrh" podStartSLOduration=3.750315024 podStartE2EDuration="3.750315024s" podCreationTimestamp="2026-02-19 13:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:32:23.746059199 +0000 UTC m=+1358.407162437" watchObservedRunningTime="2026-02-19 13:32:23.750315024 +0000 UTC m=+1358.411418262" Feb 19 13:32:24 crc kubenswrapper[4861]: I0219 13:32:24.241564 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 13:32:24 crc kubenswrapper[4861]: I0219 
13:32:24.259798 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 13:32:25 crc kubenswrapper[4861]: I0219 13:32:25.734035 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"acfb5225-4c50-4d7e-8008-50f204360fab","Type":"ContainerStarted","Data":"6f4c022a28f4b0d2812661cbfc568c01f36a11f0b62808c2873c2b8399988d39"} Feb 19 13:32:25 crc kubenswrapper[4861]: I0219 13:32:25.736361 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51868560-2fb1-43dc-bdb0-0ebf4b4f173e","Type":"ContainerStarted","Data":"1962e3d41ea810aad3ab4a831253aac921b27407d79269bdb28287cd3a3df113"} Feb 19 13:32:25 crc kubenswrapper[4861]: I0219 13:32:25.736408 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51868560-2fb1-43dc-bdb0-0ebf4b4f173e","Type":"ContainerStarted","Data":"bed581941d63c6a624659be5dcbcd47fff7f7421027cf26d8792299321714174"} Feb 19 13:32:25 crc kubenswrapper[4861]: I0219 13:32:25.736732 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="51868560-2fb1-43dc-bdb0-0ebf4b4f173e" containerName="nova-metadata-log" containerID="cri-o://bed581941d63c6a624659be5dcbcd47fff7f7421027cf26d8792299321714174" gracePeriod=30 Feb 19 13:32:25 crc kubenswrapper[4861]: I0219 13:32:25.736931 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="51868560-2fb1-43dc-bdb0-0ebf4b4f173e" containerName="nova-metadata-metadata" containerID="cri-o://1962e3d41ea810aad3ab4a831253aac921b27407d79269bdb28287cd3a3df113" gracePeriod=30 Feb 19 13:32:25 crc kubenswrapper[4861]: I0219 13:32:25.747183 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"11eb9e69-eca0-40ab-8757-258d34511ba5","Type":"ContainerStarted","Data":"4e5155c1b59be8a964a77d35c4a29aa61758ea8fe568c18f2ee6d27b787d420a"} Feb 19 13:32:25 crc kubenswrapper[4861]: I0219 13:32:25.747245 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11eb9e69-eca0-40ab-8757-258d34511ba5","Type":"ContainerStarted","Data":"756fe74c0d31f25148d640be0fb3146364c55db227cf2c7f362fb5f2f80f6077"} Feb 19 13:32:25 crc kubenswrapper[4861]: I0219 13:32:25.749407 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"505721c7-99f1-42f1-9c9c-3ddf8eab6e3d","Type":"ContainerStarted","Data":"d62a07316db7ac318a2b912a443e0afd62e0ac6270c9486dac904b25715ae4d5"} Feb 19 13:32:25 crc kubenswrapper[4861]: I0219 13:32:25.749541 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="505721c7-99f1-42f1-9c9c-3ddf8eab6e3d" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://d62a07316db7ac318a2b912a443e0afd62e0ac6270c9486dac904b25715ae4d5" gracePeriod=30 Feb 19 13:32:25 crc kubenswrapper[4861]: I0219 13:32:25.758748 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.357843284 podStartE2EDuration="5.7587284s" podCreationTimestamp="2026-02-19 13:32:20 +0000 UTC" firstStartedPulling="2026-02-19 13:32:21.62802055 +0000 UTC m=+1356.289123778" lastFinishedPulling="2026-02-19 13:32:25.028905666 +0000 UTC m=+1359.690008894" observedRunningTime="2026-02-19 13:32:25.755625667 +0000 UTC m=+1360.416728895" watchObservedRunningTime="2026-02-19 13:32:25.7587284 +0000 UTC m=+1360.419831628" Feb 19 13:32:25 crc kubenswrapper[4861]: I0219 13:32:25.782183 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.60460843 podStartE2EDuration="5.782158902s" 
podCreationTimestamp="2026-02-19 13:32:20 +0000 UTC" firstStartedPulling="2026-02-19 13:32:21.854212951 +0000 UTC m=+1356.515316179" lastFinishedPulling="2026-02-19 13:32:25.031763423 +0000 UTC m=+1359.692866651" observedRunningTime="2026-02-19 13:32:25.769496671 +0000 UTC m=+1360.430599919" watchObservedRunningTime="2026-02-19 13:32:25.782158902 +0000 UTC m=+1360.443262150" Feb 19 13:32:25 crc kubenswrapper[4861]: I0219 13:32:25.812262 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.385869209 podStartE2EDuration="5.812246951s" podCreationTimestamp="2026-02-19 13:32:20 +0000 UTC" firstStartedPulling="2026-02-19 13:32:21.597594801 +0000 UTC m=+1356.258698029" lastFinishedPulling="2026-02-19 13:32:25.023972523 +0000 UTC m=+1359.685075771" observedRunningTime="2026-02-19 13:32:25.806064815 +0000 UTC m=+1360.467168043" watchObservedRunningTime="2026-02-19 13:32:25.812246951 +0000 UTC m=+1360.473350179" Feb 19 13:32:25 crc kubenswrapper[4861]: I0219 13:32:25.828871 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.863688367 podStartE2EDuration="5.828855879s" podCreationTimestamp="2026-02-19 13:32:20 +0000 UTC" firstStartedPulling="2026-02-19 13:32:22.060641341 +0000 UTC m=+1356.721744569" lastFinishedPulling="2026-02-19 13:32:25.025808843 +0000 UTC m=+1359.686912081" observedRunningTime="2026-02-19 13:32:25.8188838 +0000 UTC m=+1360.479987028" watchObservedRunningTime="2026-02-19 13:32:25.828855879 +0000 UTC m=+1360.489959107" Feb 19 13:32:26 crc kubenswrapper[4861]: I0219 13:32:26.065981 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 13:32:26 crc kubenswrapper[4861]: I0219 13:32:26.085243 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 13:32:26 crc kubenswrapper[4861]: I0219 
13:32:26.085290 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 13:32:26 crc kubenswrapper[4861]: I0219 13:32:26.336720 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:32:26 crc kubenswrapper[4861]: I0219 13:32:26.762717 4861 generic.go:334] "Generic (PLEG): container finished" podID="51868560-2fb1-43dc-bdb0-0ebf4b4f173e" containerID="bed581941d63c6a624659be5dcbcd47fff7f7421027cf26d8792299321714174" exitCode=143 Feb 19 13:32:26 crc kubenswrapper[4861]: I0219 13:32:26.762809 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51868560-2fb1-43dc-bdb0-0ebf4b4f173e","Type":"ContainerDied","Data":"bed581941d63c6a624659be5dcbcd47fff7f7421027cf26d8792299321714174"} Feb 19 13:32:27 crc kubenswrapper[4861]: E0219 13:32:27.807552 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod411cd56f_4fb3_4f9b_9cfe_e287f22a4609.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34b85ee6_f9f6_4f1e_8fc9_23072e437a14.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9d690b0_57b6_4544_9181_32144adaaef5.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0543bf80_4d09_4c45_897d_3b2ae4291861.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod411cd56f_4fb3_4f9b_9cfe_e287f22a4609.slice/crio-f456e9cbe8ce9bfb9aedf4df9a26b3a3c2600950bcf49d4ed5994dc477702e04\": RecentStats: unable to find data in memory cache]" Feb 19 13:32:28 crc kubenswrapper[4861]: I0219 13:32:28.811044 4861 generic.go:334] 
"Generic (PLEG): container finished" podID="e55896ea-935d-4340-a4d6-5429eb546e83" containerID="4dc3b378113db0a85067b6aacfc5cca8ae80446fc65f308cc2cd9c792fc8ef5d" exitCode=0 Feb 19 13:32:28 crc kubenswrapper[4861]: I0219 13:32:28.811257 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-v5kmf" event={"ID":"e55896ea-935d-4340-a4d6-5429eb546e83","Type":"ContainerDied","Data":"4dc3b378113db0a85067b6aacfc5cca8ae80446fc65f308cc2cd9c792fc8ef5d"} Feb 19 13:32:30 crc kubenswrapper[4861]: I0219 13:32:30.323019 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-v5kmf" Feb 19 13:32:30 crc kubenswrapper[4861]: I0219 13:32:30.472799 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e55896ea-935d-4340-a4d6-5429eb546e83-combined-ca-bundle\") pod \"e55896ea-935d-4340-a4d6-5429eb546e83\" (UID: \"e55896ea-935d-4340-a4d6-5429eb546e83\") " Feb 19 13:32:30 crc kubenswrapper[4861]: I0219 13:32:30.472897 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e55896ea-935d-4340-a4d6-5429eb546e83-config-data\") pod \"e55896ea-935d-4340-a4d6-5429eb546e83\" (UID: \"e55896ea-935d-4340-a4d6-5429eb546e83\") " Feb 19 13:32:30 crc kubenswrapper[4861]: I0219 13:32:30.472989 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vh5r\" (UniqueName: \"kubernetes.io/projected/e55896ea-935d-4340-a4d6-5429eb546e83-kube-api-access-9vh5r\") pod \"e55896ea-935d-4340-a4d6-5429eb546e83\" (UID: \"e55896ea-935d-4340-a4d6-5429eb546e83\") " Feb 19 13:32:30 crc kubenswrapper[4861]: I0219 13:32:30.473311 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e55896ea-935d-4340-a4d6-5429eb546e83-scripts\") pod 
\"e55896ea-935d-4340-a4d6-5429eb546e83\" (UID: \"e55896ea-935d-4340-a4d6-5429eb546e83\") " Feb 19 13:32:30 crc kubenswrapper[4861]: I0219 13:32:30.480053 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e55896ea-935d-4340-a4d6-5429eb546e83-scripts" (OuterVolumeSpecName: "scripts") pod "e55896ea-935d-4340-a4d6-5429eb546e83" (UID: "e55896ea-935d-4340-a4d6-5429eb546e83"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:32:30 crc kubenswrapper[4861]: I0219 13:32:30.481345 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e55896ea-935d-4340-a4d6-5429eb546e83-kube-api-access-9vh5r" (OuterVolumeSpecName: "kube-api-access-9vh5r") pod "e55896ea-935d-4340-a4d6-5429eb546e83" (UID: "e55896ea-935d-4340-a4d6-5429eb546e83"). InnerVolumeSpecName "kube-api-access-9vh5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:32:30 crc kubenswrapper[4861]: I0219 13:32:30.511186 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e55896ea-935d-4340-a4d6-5429eb546e83-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e55896ea-935d-4340-a4d6-5429eb546e83" (UID: "e55896ea-935d-4340-a4d6-5429eb546e83"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:32:30 crc kubenswrapper[4861]: I0219 13:32:30.574543 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e55896ea-935d-4340-a4d6-5429eb546e83-config-data" (OuterVolumeSpecName: "config-data") pod "e55896ea-935d-4340-a4d6-5429eb546e83" (UID: "e55896ea-935d-4340-a4d6-5429eb546e83"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:32:30 crc kubenswrapper[4861]: I0219 13:32:30.574849 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e55896ea-935d-4340-a4d6-5429eb546e83-config-data\") pod \"e55896ea-935d-4340-a4d6-5429eb546e83\" (UID: \"e55896ea-935d-4340-a4d6-5429eb546e83\") " Feb 19 13:32:30 crc kubenswrapper[4861]: W0219 13:32:30.575121 4861 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/e55896ea-935d-4340-a4d6-5429eb546e83/volumes/kubernetes.io~secret/config-data Feb 19 13:32:30 crc kubenswrapper[4861]: I0219 13:32:30.575350 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e55896ea-935d-4340-a4d6-5429eb546e83-config-data" (OuterVolumeSpecName: "config-data") pod "e55896ea-935d-4340-a4d6-5429eb546e83" (UID: "e55896ea-935d-4340-a4d6-5429eb546e83"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:32:30 crc kubenswrapper[4861]: I0219 13:32:30.575365 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e55896ea-935d-4340-a4d6-5429eb546e83-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:30 crc kubenswrapper[4861]: I0219 13:32:30.575383 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e55896ea-935d-4340-a4d6-5429eb546e83-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:30 crc kubenswrapper[4861]: I0219 13:32:30.575397 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vh5r\" (UniqueName: \"kubernetes.io/projected/e55896ea-935d-4340-a4d6-5429eb546e83-kube-api-access-9vh5r\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:30 crc kubenswrapper[4861]: I0219 13:32:30.575412 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e55896ea-935d-4340-a4d6-5429eb546e83-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:30 crc kubenswrapper[4861]: I0219 13:32:30.837095 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-v5kmf" event={"ID":"e55896ea-935d-4340-a4d6-5429eb546e83","Type":"ContainerDied","Data":"2753675c87b73c505b2e8b6f36f239494fdc27de18fed5c4b6f1fd24bdb69ee4"} Feb 19 13:32:30 crc kubenswrapper[4861]: I0219 13:32:30.837457 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2753675c87b73c505b2e8b6f36f239494fdc27de18fed5c4b6f1fd24bdb69ee4" Feb 19 13:32:30 crc kubenswrapper[4861]: I0219 13:32:30.837642 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-v5kmf" Feb 19 13:32:30 crc kubenswrapper[4861]: I0219 13:32:30.840512 4861 generic.go:334] "Generic (PLEG): container finished" podID="afe8c746-882a-4160-b840-a00f2a2f267c" containerID="51e69d4688e12a168b13767859d4e6da331e3a50228ed704c968b8fd8f39c623" exitCode=0 Feb 19 13:32:30 crc kubenswrapper[4861]: I0219 13:32:30.840562 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bsd6v" event={"ID":"afe8c746-882a-4160-b840-a00f2a2f267c","Type":"ContainerDied","Data":"51e69d4688e12a168b13767859d4e6da331e3a50228ed704c968b8fd8f39c623"} Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.022818 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.023152 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="acfb5225-4c50-4d7e-8008-50f204360fab" containerName="nova-scheduler-scheduler" containerID="cri-o://6f4c022a28f4b0d2812661cbfc568c01f36a11f0b62808c2873c2b8399988d39" gracePeriod=30 Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.034394 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.034715 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="11eb9e69-eca0-40ab-8757-258d34511ba5" containerName="nova-api-log" containerID="cri-o://756fe74c0d31f25148d640be0fb3146364c55db227cf2c7f362fb5f2f80f6077" gracePeriod=30 Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.034795 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="11eb9e69-eca0-40ab-8757-258d34511ba5" containerName="nova-api-api" containerID="cri-o://4e5155c1b59be8a964a77d35c4a29aa61758ea8fe568c18f2ee6d27b787d420a" gracePeriod=30 Feb 19 
13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.322940 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-849fff7679-bmrrh" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.401866 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c69c79c7f-gz67q"] Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.402134 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c69c79c7f-gz67q" podUID="95cf9059-55e3-41b0-8ff3-5cd158fcd643" containerName="dnsmasq-dns" containerID="cri-o://a094783e5a900406f74c352eec8e844c71c0716d149a980ae2f21d6759c518a9" gracePeriod=10 Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.672846 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.696978 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11eb9e69-eca0-40ab-8757-258d34511ba5-combined-ca-bundle\") pod \"11eb9e69-eca0-40ab-8757-258d34511ba5\" (UID: \"11eb9e69-eca0-40ab-8757-258d34511ba5\") " Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.697059 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11eb9e69-eca0-40ab-8757-258d34511ba5-logs\") pod \"11eb9e69-eca0-40ab-8757-258d34511ba5\" (UID: \"11eb9e69-eca0-40ab-8757-258d34511ba5\") " Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.697082 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11eb9e69-eca0-40ab-8757-258d34511ba5-config-data\") pod \"11eb9e69-eca0-40ab-8757-258d34511ba5\" (UID: \"11eb9e69-eca0-40ab-8757-258d34511ba5\") " Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.697115 4861 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k79t6\" (UniqueName: \"kubernetes.io/projected/11eb9e69-eca0-40ab-8757-258d34511ba5-kube-api-access-k79t6\") pod \"11eb9e69-eca0-40ab-8757-258d34511ba5\" (UID: \"11eb9e69-eca0-40ab-8757-258d34511ba5\") " Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.698664 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11eb9e69-eca0-40ab-8757-258d34511ba5-logs" (OuterVolumeSpecName: "logs") pod "11eb9e69-eca0-40ab-8757-258d34511ba5" (UID: "11eb9e69-eca0-40ab-8757-258d34511ba5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.716097 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11eb9e69-eca0-40ab-8757-258d34511ba5-kube-api-access-k79t6" (OuterVolumeSpecName: "kube-api-access-k79t6") pod "11eb9e69-eca0-40ab-8757-258d34511ba5" (UID: "11eb9e69-eca0-40ab-8757-258d34511ba5"). InnerVolumeSpecName "kube-api-access-k79t6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.728709 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11eb9e69-eca0-40ab-8757-258d34511ba5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11eb9e69-eca0-40ab-8757-258d34511ba5" (UID: "11eb9e69-eca0-40ab-8757-258d34511ba5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.730926 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11eb9e69-eca0-40ab-8757-258d34511ba5-config-data" (OuterVolumeSpecName: "config-data") pod "11eb9e69-eca0-40ab-8757-258d34511ba5" (UID: "11eb9e69-eca0-40ab-8757-258d34511ba5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.799015 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k79t6\" (UniqueName: \"kubernetes.io/projected/11eb9e69-eca0-40ab-8757-258d34511ba5-kube-api-access-k79t6\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.799051 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11eb9e69-eca0-40ab-8757-258d34511ba5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.799063 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11eb9e69-eca0-40ab-8757-258d34511ba5-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.799076 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11eb9e69-eca0-40ab-8757-258d34511ba5-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.862976 4861 generic.go:334] "Generic (PLEG): container finished" podID="11eb9e69-eca0-40ab-8757-258d34511ba5" containerID="4e5155c1b59be8a964a77d35c4a29aa61758ea8fe568c18f2ee6d27b787d420a" exitCode=0 Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.863006 4861 generic.go:334] "Generic (PLEG): container finished" podID="11eb9e69-eca0-40ab-8757-258d34511ba5" containerID="756fe74c0d31f25148d640be0fb3146364c55db227cf2c7f362fb5f2f80f6077" exitCode=143 Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.863042 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.863051 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11eb9e69-eca0-40ab-8757-258d34511ba5","Type":"ContainerDied","Data":"4e5155c1b59be8a964a77d35c4a29aa61758ea8fe568c18f2ee6d27b787d420a"} Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.863074 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11eb9e69-eca0-40ab-8757-258d34511ba5","Type":"ContainerDied","Data":"756fe74c0d31f25148d640be0fb3146364c55db227cf2c7f362fb5f2f80f6077"} Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.863084 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11eb9e69-eca0-40ab-8757-258d34511ba5","Type":"ContainerDied","Data":"fc08ef10d81f3fdfd6aa9582af2a38a569023017f60f59218037a400b7092bdd"} Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.863121 4861 scope.go:117] "RemoveContainer" containerID="4e5155c1b59be8a964a77d35c4a29aa61758ea8fe568c18f2ee6d27b787d420a" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.870582 4861 generic.go:334] "Generic (PLEG): container finished" podID="95cf9059-55e3-41b0-8ff3-5cd158fcd643" containerID="a094783e5a900406f74c352eec8e844c71c0716d149a980ae2f21d6759c518a9" exitCode=0 Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.870669 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c69c79c7f-gz67q" event={"ID":"95cf9059-55e3-41b0-8ff3-5cd158fcd643","Type":"ContainerDied","Data":"a094783e5a900406f74c352eec8e844c71c0716d149a980ae2f21d6759c518a9"} Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.903011 4861 scope.go:117] "RemoveContainer" containerID="756fe74c0d31f25148d640be0fb3146364c55db227cf2c7f362fb5f2f80f6077" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.912226 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-api-0"] Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.925543 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c69c79c7f-gz67q" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.928390 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.929865 4861 scope.go:117] "RemoveContainer" containerID="4e5155c1b59be8a964a77d35c4a29aa61758ea8fe568c18f2ee6d27b787d420a" Feb 19 13:32:31 crc kubenswrapper[4861]: E0219 13:32:31.930213 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e5155c1b59be8a964a77d35c4a29aa61758ea8fe568c18f2ee6d27b787d420a\": container with ID starting with 4e5155c1b59be8a964a77d35c4a29aa61758ea8fe568c18f2ee6d27b787d420a not found: ID does not exist" containerID="4e5155c1b59be8a964a77d35c4a29aa61758ea8fe568c18f2ee6d27b787d420a" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.930239 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e5155c1b59be8a964a77d35c4a29aa61758ea8fe568c18f2ee6d27b787d420a"} err="failed to get container status \"4e5155c1b59be8a964a77d35c4a29aa61758ea8fe568c18f2ee6d27b787d420a\": rpc error: code = NotFound desc = could not find container \"4e5155c1b59be8a964a77d35c4a29aa61758ea8fe568c18f2ee6d27b787d420a\": container with ID starting with 4e5155c1b59be8a964a77d35c4a29aa61758ea8fe568c18f2ee6d27b787d420a not found: ID does not exist" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.930256 4861 scope.go:117] "RemoveContainer" containerID="756fe74c0d31f25148d640be0fb3146364c55db227cf2c7f362fb5f2f80f6077" Feb 19 13:32:31 crc kubenswrapper[4861]: E0219 13:32:31.930600 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"756fe74c0d31f25148d640be0fb3146364c55db227cf2c7f362fb5f2f80f6077\": container with ID starting with 756fe74c0d31f25148d640be0fb3146364c55db227cf2c7f362fb5f2f80f6077 not found: ID does not exist" containerID="756fe74c0d31f25148d640be0fb3146364c55db227cf2c7f362fb5f2f80f6077" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.930619 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"756fe74c0d31f25148d640be0fb3146364c55db227cf2c7f362fb5f2f80f6077"} err="failed to get container status \"756fe74c0d31f25148d640be0fb3146364c55db227cf2c7f362fb5f2f80f6077\": rpc error: code = NotFound desc = could not find container \"756fe74c0d31f25148d640be0fb3146364c55db227cf2c7f362fb5f2f80f6077\": container with ID starting with 756fe74c0d31f25148d640be0fb3146364c55db227cf2c7f362fb5f2f80f6077 not found: ID does not exist" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.930633 4861 scope.go:117] "RemoveContainer" containerID="4e5155c1b59be8a964a77d35c4a29aa61758ea8fe568c18f2ee6d27b787d420a" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.931483 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e5155c1b59be8a964a77d35c4a29aa61758ea8fe568c18f2ee6d27b787d420a"} err="failed to get container status \"4e5155c1b59be8a964a77d35c4a29aa61758ea8fe568c18f2ee6d27b787d420a\": rpc error: code = NotFound desc = could not find container \"4e5155c1b59be8a964a77d35c4a29aa61758ea8fe568c18f2ee6d27b787d420a\": container with ID starting with 4e5155c1b59be8a964a77d35c4a29aa61758ea8fe568c18f2ee6d27b787d420a not found: ID does not exist" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.931556 4861 scope.go:117] "RemoveContainer" containerID="756fe74c0d31f25148d640be0fb3146364c55db227cf2c7f362fb5f2f80f6077" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.932628 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"756fe74c0d31f25148d640be0fb3146364c55db227cf2c7f362fb5f2f80f6077"} err="failed to get container status \"756fe74c0d31f25148d640be0fb3146364c55db227cf2c7f362fb5f2f80f6077\": rpc error: code = NotFound desc = could not find container \"756fe74c0d31f25148d640be0fb3146364c55db227cf2c7f362fb5f2f80f6077\": container with ID starting with 756fe74c0d31f25148d640be0fb3146364c55db227cf2c7f362fb5f2f80f6077 not found: ID does not exist" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.942538 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 13:32:31 crc kubenswrapper[4861]: E0219 13:32:31.942943 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95cf9059-55e3-41b0-8ff3-5cd158fcd643" containerName="init" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.942955 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="95cf9059-55e3-41b0-8ff3-5cd158fcd643" containerName="init" Feb 19 13:32:31 crc kubenswrapper[4861]: E0219 13:32:31.942963 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95cf9059-55e3-41b0-8ff3-5cd158fcd643" containerName="dnsmasq-dns" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.942969 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="95cf9059-55e3-41b0-8ff3-5cd158fcd643" containerName="dnsmasq-dns" Feb 19 13:32:31 crc kubenswrapper[4861]: E0219 13:32:31.942983 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11eb9e69-eca0-40ab-8757-258d34511ba5" containerName="nova-api-log" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.942989 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="11eb9e69-eca0-40ab-8757-258d34511ba5" containerName="nova-api-log" Feb 19 13:32:31 crc kubenswrapper[4861]: E0219 13:32:31.943004 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e55896ea-935d-4340-a4d6-5429eb546e83" containerName="nova-manage" Feb 19 13:32:31 crc 
kubenswrapper[4861]: I0219 13:32:31.943010 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="e55896ea-935d-4340-a4d6-5429eb546e83" containerName="nova-manage" Feb 19 13:32:31 crc kubenswrapper[4861]: E0219 13:32:31.943022 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11eb9e69-eca0-40ab-8757-258d34511ba5" containerName="nova-api-api" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.943027 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="11eb9e69-eca0-40ab-8757-258d34511ba5" containerName="nova-api-api" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.943202 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="e55896ea-935d-4340-a4d6-5429eb546e83" containerName="nova-manage" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.943225 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="95cf9059-55e3-41b0-8ff3-5cd158fcd643" containerName="dnsmasq-dns" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.943236 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="11eb9e69-eca0-40ab-8757-258d34511ba5" containerName="nova-api-api" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.943248 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="11eb9e69-eca0-40ab-8757-258d34511ba5" containerName="nova-api-log" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.944195 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.946097 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 13:32:31 crc kubenswrapper[4861]: I0219 13:32:31.956572 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.008502 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11eb9e69-eca0-40ab-8757-258d34511ba5" path="/var/lib/kubelet/pods/11eb9e69-eca0-40ab-8757-258d34511ba5/volumes" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.107095 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95cf9059-55e3-41b0-8ff3-5cd158fcd643-ovsdbserver-sb\") pod \"95cf9059-55e3-41b0-8ff3-5cd158fcd643\" (UID: \"95cf9059-55e3-41b0-8ff3-5cd158fcd643\") " Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.107205 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95cf9059-55e3-41b0-8ff3-5cd158fcd643-config\") pod \"95cf9059-55e3-41b0-8ff3-5cd158fcd643\" (UID: \"95cf9059-55e3-41b0-8ff3-5cd158fcd643\") " Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.107311 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95cf9059-55e3-41b0-8ff3-5cd158fcd643-ovsdbserver-nb\") pod \"95cf9059-55e3-41b0-8ff3-5cd158fcd643\" (UID: \"95cf9059-55e3-41b0-8ff3-5cd158fcd643\") " Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.107510 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95cf9059-55e3-41b0-8ff3-5cd158fcd643-dns-svc\") pod \"95cf9059-55e3-41b0-8ff3-5cd158fcd643\" (UID: \"95cf9059-55e3-41b0-8ff3-5cd158fcd643\") " 
Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.107650 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w726h\" (UniqueName: \"kubernetes.io/projected/95cf9059-55e3-41b0-8ff3-5cd158fcd643-kube-api-access-w726h\") pod \"95cf9059-55e3-41b0-8ff3-5cd158fcd643\" (UID: \"95cf9059-55e3-41b0-8ff3-5cd158fcd643\") " Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.107713 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95cf9059-55e3-41b0-8ff3-5cd158fcd643-dns-swift-storage-0\") pod \"95cf9059-55e3-41b0-8ff3-5cd158fcd643\" (UID: \"95cf9059-55e3-41b0-8ff3-5cd158fcd643\") " Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.108128 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da848cb3-a17d-465d-801f-b1bd1a0aabb5-config-data\") pod \"nova-api-0\" (UID: \"da848cb3-a17d-465d-801f-b1bd1a0aabb5\") " pod="openstack/nova-api-0" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.108182 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drt9p\" (UniqueName: \"kubernetes.io/projected/da848cb3-a17d-465d-801f-b1bd1a0aabb5-kube-api-access-drt9p\") pod \"nova-api-0\" (UID: \"da848cb3-a17d-465d-801f-b1bd1a0aabb5\") " pod="openstack/nova-api-0" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.108230 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da848cb3-a17d-465d-801f-b1bd1a0aabb5-logs\") pod \"nova-api-0\" (UID: \"da848cb3-a17d-465d-801f-b1bd1a0aabb5\") " pod="openstack/nova-api-0" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.108263 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da848cb3-a17d-465d-801f-b1bd1a0aabb5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"da848cb3-a17d-465d-801f-b1bd1a0aabb5\") " pod="openstack/nova-api-0" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.120589 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95cf9059-55e3-41b0-8ff3-5cd158fcd643-kube-api-access-w726h" (OuterVolumeSpecName: "kube-api-access-w726h") pod "95cf9059-55e3-41b0-8ff3-5cd158fcd643" (UID: "95cf9059-55e3-41b0-8ff3-5cd158fcd643"). InnerVolumeSpecName "kube-api-access-w726h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.154714 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95cf9059-55e3-41b0-8ff3-5cd158fcd643-config" (OuterVolumeSpecName: "config") pod "95cf9059-55e3-41b0-8ff3-5cd158fcd643" (UID: "95cf9059-55e3-41b0-8ff3-5cd158fcd643"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.164900 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95cf9059-55e3-41b0-8ff3-5cd158fcd643-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "95cf9059-55e3-41b0-8ff3-5cd158fcd643" (UID: "95cf9059-55e3-41b0-8ff3-5cd158fcd643"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.166010 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95cf9059-55e3-41b0-8ff3-5cd158fcd643-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "95cf9059-55e3-41b0-8ff3-5cd158fcd643" (UID: "95cf9059-55e3-41b0-8ff3-5cd158fcd643"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.175898 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95cf9059-55e3-41b0-8ff3-5cd158fcd643-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "95cf9059-55e3-41b0-8ff3-5cd158fcd643" (UID: "95cf9059-55e3-41b0-8ff3-5cd158fcd643"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.198734 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bsd6v" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.200467 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95cf9059-55e3-41b0-8ff3-5cd158fcd643-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "95cf9059-55e3-41b0-8ff3-5cd158fcd643" (UID: "95cf9059-55e3-41b0-8ff3-5cd158fcd643"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.211312 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drt9p\" (UniqueName: \"kubernetes.io/projected/da848cb3-a17d-465d-801f-b1bd1a0aabb5-kube-api-access-drt9p\") pod \"nova-api-0\" (UID: \"da848cb3-a17d-465d-801f-b1bd1a0aabb5\") " pod="openstack/nova-api-0" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.211405 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da848cb3-a17d-465d-801f-b1bd1a0aabb5-logs\") pod \"nova-api-0\" (UID: \"da848cb3-a17d-465d-801f-b1bd1a0aabb5\") " pod="openstack/nova-api-0" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.211450 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da848cb3-a17d-465d-801f-b1bd1a0aabb5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"da848cb3-a17d-465d-801f-b1bd1a0aabb5\") " pod="openstack/nova-api-0" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.211663 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da848cb3-a17d-465d-801f-b1bd1a0aabb5-config-data\") pod \"nova-api-0\" (UID: \"da848cb3-a17d-465d-801f-b1bd1a0aabb5\") " pod="openstack/nova-api-0" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.211712 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95cf9059-55e3-41b0-8ff3-5cd158fcd643-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.211724 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w726h\" (UniqueName: \"kubernetes.io/projected/95cf9059-55e3-41b0-8ff3-5cd158fcd643-kube-api-access-w726h\") on node \"crc\" DevicePath \"\"" Feb 
19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.211733 4861 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95cf9059-55e3-41b0-8ff3-5cd158fcd643-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.211741 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95cf9059-55e3-41b0-8ff3-5cd158fcd643-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.211758 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95cf9059-55e3-41b0-8ff3-5cd158fcd643-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.211766 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95cf9059-55e3-41b0-8ff3-5cd158fcd643-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.213409 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da848cb3-a17d-465d-801f-b1bd1a0aabb5-logs\") pod \"nova-api-0\" (UID: \"da848cb3-a17d-465d-801f-b1bd1a0aabb5\") " pod="openstack/nova-api-0" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.218475 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da848cb3-a17d-465d-801f-b1bd1a0aabb5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"da848cb3-a17d-465d-801f-b1bd1a0aabb5\") " pod="openstack/nova-api-0" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.221294 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da848cb3-a17d-465d-801f-b1bd1a0aabb5-config-data\") pod \"nova-api-0\" 
(UID: \"da848cb3-a17d-465d-801f-b1bd1a0aabb5\") " pod="openstack/nova-api-0" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.234712 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drt9p\" (UniqueName: \"kubernetes.io/projected/da848cb3-a17d-465d-801f-b1bd1a0aabb5-kube-api-access-drt9p\") pod \"nova-api-0\" (UID: \"da848cb3-a17d-465d-801f-b1bd1a0aabb5\") " pod="openstack/nova-api-0" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.269851 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.312689 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfptt\" (UniqueName: \"kubernetes.io/projected/afe8c746-882a-4160-b840-a00f2a2f267c-kube-api-access-bfptt\") pod \"afe8c746-882a-4160-b840-a00f2a2f267c\" (UID: \"afe8c746-882a-4160-b840-a00f2a2f267c\") " Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.312745 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe8c746-882a-4160-b840-a00f2a2f267c-config-data\") pod \"afe8c746-882a-4160-b840-a00f2a2f267c\" (UID: \"afe8c746-882a-4160-b840-a00f2a2f267c\") " Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.312768 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe8c746-882a-4160-b840-a00f2a2f267c-scripts\") pod \"afe8c746-882a-4160-b840-a00f2a2f267c\" (UID: \"afe8c746-882a-4160-b840-a00f2a2f267c\") " Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.312956 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe8c746-882a-4160-b840-a00f2a2f267c-combined-ca-bundle\") pod \"afe8c746-882a-4160-b840-a00f2a2f267c\" (UID: 
\"afe8c746-882a-4160-b840-a00f2a2f267c\") " Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.318338 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afe8c746-882a-4160-b840-a00f2a2f267c-kube-api-access-bfptt" (OuterVolumeSpecName: "kube-api-access-bfptt") pod "afe8c746-882a-4160-b840-a00f2a2f267c" (UID: "afe8c746-882a-4160-b840-a00f2a2f267c"). InnerVolumeSpecName "kube-api-access-bfptt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.323108 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe8c746-882a-4160-b840-a00f2a2f267c-scripts" (OuterVolumeSpecName: "scripts") pod "afe8c746-882a-4160-b840-a00f2a2f267c" (UID: "afe8c746-882a-4160-b840-a00f2a2f267c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.343720 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe8c746-882a-4160-b840-a00f2a2f267c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afe8c746-882a-4160-b840-a00f2a2f267c" (UID: "afe8c746-882a-4160-b840-a00f2a2f267c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.345729 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe8c746-882a-4160-b840-a00f2a2f267c-config-data" (OuterVolumeSpecName: "config-data") pod "afe8c746-882a-4160-b840-a00f2a2f267c" (UID: "afe8c746-882a-4160-b840-a00f2a2f267c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.414850 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe8c746-882a-4160-b840-a00f2a2f267c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.414880 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfptt\" (UniqueName: \"kubernetes.io/projected/afe8c746-882a-4160-b840-a00f2a2f267c-kube-api-access-bfptt\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.414892 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe8c746-882a-4160-b840-a00f2a2f267c-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.414901 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe8c746-882a-4160-b840-a00f2a2f267c-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.713803 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:32:32 crc kubenswrapper[4861]: W0219 13:32:32.724115 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda848cb3_a17d_465d_801f_b1bd1a0aabb5.slice/crio-96ce2eea4a2e63e087a5a3fdbcb879abcd6a1a869caa24598ce1fce44be47813 WatchSource:0}: Error finding container 96ce2eea4a2e63e087a5a3fdbcb879abcd6a1a869caa24598ce1fce44be47813: Status 404 returned error can't find the container with id 96ce2eea4a2e63e087a5a3fdbcb879abcd6a1a869caa24598ce1fce44be47813 Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.897011 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c69c79c7f-gz67q" 
event={"ID":"95cf9059-55e3-41b0-8ff3-5cd158fcd643","Type":"ContainerDied","Data":"bcd6616506ae055b118485b1da843b97c899b6de8d0a865d7015e84b2cc6deb8"} Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.897151 4861 scope.go:117] "RemoveContainer" containerID="a094783e5a900406f74c352eec8e844c71c0716d149a980ae2f21d6759c518a9" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.897565 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c69c79c7f-gz67q" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.905179 4861 generic.go:334] "Generic (PLEG): container finished" podID="acfb5225-4c50-4d7e-8008-50f204360fab" containerID="6f4c022a28f4b0d2812661cbfc568c01f36a11f0b62808c2873c2b8399988d39" exitCode=0 Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.905219 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"acfb5225-4c50-4d7e-8008-50f204360fab","Type":"ContainerDied","Data":"6f4c022a28f4b0d2812661cbfc568c01f36a11f0b62808c2873c2b8399988d39"} Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.907327 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bsd6v" event={"ID":"afe8c746-882a-4160-b840-a00f2a2f267c","Type":"ContainerDied","Data":"24a47fc4d2dd7145d598ded463c8c325b16b568861eea52ca26e0a5263c49e1b"} Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.907351 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24a47fc4d2dd7145d598ded463c8c325b16b568861eea52ca26e0a5263c49e1b" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.907396 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bsd6v" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.910377 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"da848cb3-a17d-465d-801f-b1bd1a0aabb5","Type":"ContainerStarted","Data":"b30e3af4ead3273a64705a09521c9c3b133110a497bc2c12a616416e8764d69b"} Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.910465 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"da848cb3-a17d-465d-801f-b1bd1a0aabb5","Type":"ContainerStarted","Data":"96ce2eea4a2e63e087a5a3fdbcb879abcd6a1a869caa24598ce1fce44be47813"} Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.927305 4861 scope.go:117] "RemoveContainer" containerID="c59f9b01d824664eda29c257ceee5d17f89433d820bd0b0553e1c744bac8b8e6" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.950719 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 13:32:32 crc kubenswrapper[4861]: E0219 13:32:32.951244 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe8c746-882a-4160-b840-a00f2a2f267c" containerName="nova-cell1-conductor-db-sync" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.951280 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe8c746-882a-4160-b840-a00f2a2f267c" containerName="nova-cell1-conductor-db-sync" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.951593 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe8c746-882a-4160-b840-a00f2a2f267c" containerName="nova-cell1-conductor-db-sync" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.952346 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.956503 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.959652 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.982148 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c69c79c7f-gz67q"] Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.982993 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 13:32:32 crc kubenswrapper[4861]: I0219 13:32:32.989979 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c69c79c7f-gz67q"] Feb 19 13:32:33 crc kubenswrapper[4861]: I0219 13:32:33.127947 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n5n5\" (UniqueName: \"kubernetes.io/projected/acfb5225-4c50-4d7e-8008-50f204360fab-kube-api-access-9n5n5\") pod \"acfb5225-4c50-4d7e-8008-50f204360fab\" (UID: \"acfb5225-4c50-4d7e-8008-50f204360fab\") " Feb 19 13:32:33 crc kubenswrapper[4861]: I0219 13:32:33.128403 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acfb5225-4c50-4d7e-8008-50f204360fab-config-data\") pod \"acfb5225-4c50-4d7e-8008-50f204360fab\" (UID: \"acfb5225-4c50-4d7e-8008-50f204360fab\") " Feb 19 13:32:33 crc kubenswrapper[4861]: I0219 13:32:33.128496 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acfb5225-4c50-4d7e-8008-50f204360fab-combined-ca-bundle\") pod \"acfb5225-4c50-4d7e-8008-50f204360fab\" (UID: \"acfb5225-4c50-4d7e-8008-50f204360fab\") " Feb 19 13:32:33 crc 
kubenswrapper[4861]: I0219 13:32:33.128725 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-998lv\" (UniqueName: \"kubernetes.io/projected/27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245-kube-api-access-998lv\") pod \"nova-cell1-conductor-0\" (UID: \"27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245\") " pod="openstack/nova-cell1-conductor-0" Feb 19 13:32:33 crc kubenswrapper[4861]: I0219 13:32:33.129644 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245\") " pod="openstack/nova-cell1-conductor-0" Feb 19 13:32:33 crc kubenswrapper[4861]: I0219 13:32:33.129934 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245\") " pod="openstack/nova-cell1-conductor-0" Feb 19 13:32:33 crc kubenswrapper[4861]: I0219 13:32:33.132893 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acfb5225-4c50-4d7e-8008-50f204360fab-kube-api-access-9n5n5" (OuterVolumeSpecName: "kube-api-access-9n5n5") pod "acfb5225-4c50-4d7e-8008-50f204360fab" (UID: "acfb5225-4c50-4d7e-8008-50f204360fab"). InnerVolumeSpecName "kube-api-access-9n5n5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:32:33 crc kubenswrapper[4861]: I0219 13:32:33.160878 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acfb5225-4c50-4d7e-8008-50f204360fab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "acfb5225-4c50-4d7e-8008-50f204360fab" (UID: "acfb5225-4c50-4d7e-8008-50f204360fab"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:32:33 crc kubenswrapper[4861]: I0219 13:32:33.174364 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acfb5225-4c50-4d7e-8008-50f204360fab-config-data" (OuterVolumeSpecName: "config-data") pod "acfb5225-4c50-4d7e-8008-50f204360fab" (UID: "acfb5225-4c50-4d7e-8008-50f204360fab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:32:33 crc kubenswrapper[4861]: I0219 13:32:33.232594 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-998lv\" (UniqueName: \"kubernetes.io/projected/27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245-kube-api-access-998lv\") pod \"nova-cell1-conductor-0\" (UID: \"27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245\") " pod="openstack/nova-cell1-conductor-0" Feb 19 13:32:33 crc kubenswrapper[4861]: I0219 13:32:33.232776 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245\") " pod="openstack/nova-cell1-conductor-0" Feb 19 13:32:33 crc kubenswrapper[4861]: I0219 13:32:33.232907 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245\") " pod="openstack/nova-cell1-conductor-0" Feb 19 13:32:33 crc kubenswrapper[4861]: I0219 13:32:33.233018 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acfb5225-4c50-4d7e-8008-50f204360fab-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:33 crc kubenswrapper[4861]: I0219 13:32:33.233048 4861 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acfb5225-4c50-4d7e-8008-50f204360fab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:33 crc kubenswrapper[4861]: I0219 13:32:33.233070 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n5n5\" (UniqueName: \"kubernetes.io/projected/acfb5225-4c50-4d7e-8008-50f204360fab-kube-api-access-9n5n5\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:33 crc kubenswrapper[4861]: I0219 13:32:33.236109 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245\") " pod="openstack/nova-cell1-conductor-0" Feb 19 13:32:33 crc kubenswrapper[4861]: I0219 13:32:33.236909 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245\") " pod="openstack/nova-cell1-conductor-0" Feb 19 13:32:33 crc kubenswrapper[4861]: I0219 13:32:33.249266 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-998lv\" (UniqueName: \"kubernetes.io/projected/27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245-kube-api-access-998lv\") pod \"nova-cell1-conductor-0\" (UID: \"27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245\") " pod="openstack/nova-cell1-conductor-0" Feb 19 13:32:33 crc kubenswrapper[4861]: I0219 13:32:33.280603 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 13:32:33 crc kubenswrapper[4861]: I0219 13:32:33.788506 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 13:32:33 crc kubenswrapper[4861]: W0219 13:32:33.804608 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27cfe279_5bf2_4ea6_9fb3_cf1fcb1f8245.slice/crio-e0ccd9d20249ffa43e258c1eb24a12f2f9b4bbb300cd486e80e18f584e72646b WatchSource:0}: Error finding container e0ccd9d20249ffa43e258c1eb24a12f2f9b4bbb300cd486e80e18f584e72646b: Status 404 returned error can't find the container with id e0ccd9d20249ffa43e258c1eb24a12f2f9b4bbb300cd486e80e18f584e72646b Feb 19 13:32:33 crc kubenswrapper[4861]: I0219 13:32:33.924980 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"da848cb3-a17d-465d-801f-b1bd1a0aabb5","Type":"ContainerStarted","Data":"3373bef681cf8699a88426032318bb418228f79ef9a8a5f01741385b0ead430b"} Feb 19 13:32:33 crc kubenswrapper[4861]: I0219 13:32:33.929739 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245","Type":"ContainerStarted","Data":"e0ccd9d20249ffa43e258c1eb24a12f2f9b4bbb300cd486e80e18f584e72646b"} Feb 19 13:32:33 crc kubenswrapper[4861]: I0219 13:32:33.934252 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"acfb5225-4c50-4d7e-8008-50f204360fab","Type":"ContainerDied","Data":"3185fc47539260ec1148630c1dc7a6bda1292282310cf6d1b56ea132878008e8"} Feb 19 13:32:33 crc kubenswrapper[4861]: I0219 13:32:33.934284 4861 scope.go:117] "RemoveContainer" containerID="6f4c022a28f4b0d2812661cbfc568c01f36a11f0b62808c2873c2b8399988d39" Feb 19 13:32:33 crc kubenswrapper[4861]: I0219 13:32:33.934329 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 13:32:33 crc kubenswrapper[4861]: I0219 13:32:33.961503 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.961479002 podStartE2EDuration="2.961479002s" podCreationTimestamp="2026-02-19 13:32:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:32:33.947549046 +0000 UTC m=+1368.608652274" watchObservedRunningTime="2026-02-19 13:32:33.961479002 +0000 UTC m=+1368.622582240" Feb 19 13:32:33 crc kubenswrapper[4861]: I0219 13:32:33.997233 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95cf9059-55e3-41b0-8ff3-5cd158fcd643" path="/var/lib/kubelet/pods/95cf9059-55e3-41b0-8ff3-5cd158fcd643/volumes" Feb 19 13:32:34 crc kubenswrapper[4861]: I0219 13:32:34.003593 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 13:32:34 crc kubenswrapper[4861]: I0219 13:32:34.015714 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 13:32:34 crc kubenswrapper[4861]: I0219 13:32:34.021088 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 13:32:34 crc kubenswrapper[4861]: E0219 13:32:34.021651 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acfb5225-4c50-4d7e-8008-50f204360fab" containerName="nova-scheduler-scheduler" Feb 19 13:32:34 crc kubenswrapper[4861]: I0219 13:32:34.021677 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="acfb5225-4c50-4d7e-8008-50f204360fab" containerName="nova-scheduler-scheduler" Feb 19 13:32:34 crc kubenswrapper[4861]: I0219 13:32:34.021929 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="acfb5225-4c50-4d7e-8008-50f204360fab" containerName="nova-scheduler-scheduler" Feb 19 13:32:34 crc kubenswrapper[4861]: I0219 
13:32:34.022686 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 13:32:34 crc kubenswrapper[4861]: I0219 13:32:34.024780 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 13:32:34 crc kubenswrapper[4861]: I0219 13:32:34.029102 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 13:32:34 crc kubenswrapper[4861]: I0219 13:32:34.156208 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e510220-b836-49e8-a959-f3597f7bca70-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4e510220-b836-49e8-a959-f3597f7bca70\") " pod="openstack/nova-scheduler-0" Feb 19 13:32:34 crc kubenswrapper[4861]: I0219 13:32:34.156308 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e510220-b836-49e8-a959-f3597f7bca70-config-data\") pod \"nova-scheduler-0\" (UID: \"4e510220-b836-49e8-a959-f3597f7bca70\") " pod="openstack/nova-scheduler-0" Feb 19 13:32:34 crc kubenswrapper[4861]: I0219 13:32:34.156387 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pfxh\" (UniqueName: \"kubernetes.io/projected/4e510220-b836-49e8-a959-f3597f7bca70-kube-api-access-2pfxh\") pod \"nova-scheduler-0\" (UID: \"4e510220-b836-49e8-a959-f3597f7bca70\") " pod="openstack/nova-scheduler-0" Feb 19 13:32:34 crc kubenswrapper[4861]: I0219 13:32:34.258541 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e510220-b836-49e8-a959-f3597f7bca70-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4e510220-b836-49e8-a959-f3597f7bca70\") " pod="openstack/nova-scheduler-0" Feb 19 13:32:34 crc 
kubenswrapper[4861]: I0219 13:32:34.258641 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e510220-b836-49e8-a959-f3597f7bca70-config-data\") pod \"nova-scheduler-0\" (UID: \"4e510220-b836-49e8-a959-f3597f7bca70\") " pod="openstack/nova-scheduler-0" Feb 19 13:32:34 crc kubenswrapper[4861]: I0219 13:32:34.258717 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pfxh\" (UniqueName: \"kubernetes.io/projected/4e510220-b836-49e8-a959-f3597f7bca70-kube-api-access-2pfxh\") pod \"nova-scheduler-0\" (UID: \"4e510220-b836-49e8-a959-f3597f7bca70\") " pod="openstack/nova-scheduler-0" Feb 19 13:32:34 crc kubenswrapper[4861]: I0219 13:32:34.264110 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e510220-b836-49e8-a959-f3597f7bca70-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4e510220-b836-49e8-a959-f3597f7bca70\") " pod="openstack/nova-scheduler-0" Feb 19 13:32:34 crc kubenswrapper[4861]: I0219 13:32:34.264651 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e510220-b836-49e8-a959-f3597f7bca70-config-data\") pod \"nova-scheduler-0\" (UID: \"4e510220-b836-49e8-a959-f3597f7bca70\") " pod="openstack/nova-scheduler-0" Feb 19 13:32:34 crc kubenswrapper[4861]: I0219 13:32:34.275111 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pfxh\" (UniqueName: \"kubernetes.io/projected/4e510220-b836-49e8-a959-f3597f7bca70-kube-api-access-2pfxh\") pod \"nova-scheduler-0\" (UID: \"4e510220-b836-49e8-a959-f3597f7bca70\") " pod="openstack/nova-scheduler-0" Feb 19 13:32:34 crc kubenswrapper[4861]: I0219 13:32:34.351916 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 13:32:34 crc kubenswrapper[4861]: I0219 13:32:34.833164 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 13:32:34 crc kubenswrapper[4861]: I0219 13:32:34.945258 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245","Type":"ContainerStarted","Data":"571d42d778e6270495bc22f93e5351b8b9d48a25f59d4657b6f6c001aa52541f"} Feb 19 13:32:34 crc kubenswrapper[4861]: I0219 13:32:34.945601 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 19 13:32:34 crc kubenswrapper[4861]: I0219 13:32:34.950059 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4e510220-b836-49e8-a959-f3597f7bca70","Type":"ContainerStarted","Data":"ead84061f96b51b14e143af41372b6f67221c7922586db67987dce20c8d2e157"} Feb 19 13:32:34 crc kubenswrapper[4861]: I0219 13:32:34.964934 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.964916944 podStartE2EDuration="2.964916944s" podCreationTimestamp="2026-02-19 13:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:32:34.962256212 +0000 UTC m=+1369.623359440" watchObservedRunningTime="2026-02-19 13:32:34.964916944 +0000 UTC m=+1369.626020172" Feb 19 13:32:35 crc kubenswrapper[4861]: I0219 13:32:35.967091 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4e510220-b836-49e8-a959-f3597f7bca70","Type":"ContainerStarted","Data":"f707f0eec01b8d1a02a2586cd54e08570ecec1e2786c06fc34861c19c9582682"} Feb 19 13:32:35 crc kubenswrapper[4861]: I0219 13:32:35.993445 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="acfb5225-4c50-4d7e-8008-50f204360fab" path="/var/lib/kubelet/pods/acfb5225-4c50-4d7e-8008-50f204360fab/volumes" Feb 19 13:32:35 crc kubenswrapper[4861]: I0219 13:32:35.993506 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.993477753 podStartE2EDuration="2.993477753s" podCreationTimestamp="2026-02-19 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:32:35.985321933 +0000 UTC m=+1370.646425161" watchObservedRunningTime="2026-02-19 13:32:35.993477753 +0000 UTC m=+1370.654581011" Feb 19 13:32:38 crc kubenswrapper[4861]: E0219 13:32:38.105453 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod411cd56f_4fb3_4f9b_9cfe_e287f22a4609.slice/crio-f456e9cbe8ce9bfb9aedf4df9a26b3a3c2600950bcf49d4ed5994dc477702e04\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod411cd56f_4fb3_4f9b_9cfe_e287f22a4609.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9d690b0_57b6_4544_9181_32144adaaef5.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0543bf80_4d09_4c45_897d_3b2ae4291861.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34b85ee6_f9f6_4f1e_8fc9_23072e437a14.slice\": RecentStats: unable to find data in memory cache]" Feb 19 13:32:39 crc kubenswrapper[4861]: I0219 13:32:39.353067 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 13:32:42 crc kubenswrapper[4861]: I0219 13:32:42.271026 4861 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 13:32:42 crc kubenswrapper[4861]: I0219 13:32:42.271524 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 13:32:43 crc kubenswrapper[4861]: I0219 13:32:43.323315 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 19 13:32:43 crc kubenswrapper[4861]: I0219 13:32:43.353668 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="da848cb3-a17d-465d-801f-b1bd1a0aabb5" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.199:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 13:32:43 crc kubenswrapper[4861]: I0219 13:32:43.354160 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="da848cb3-a17d-465d-801f-b1bd1a0aabb5" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.199:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 13:32:44 crc kubenswrapper[4861]: I0219 13:32:44.265651 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 13:32:44 crc kubenswrapper[4861]: I0219 13:32:44.352634 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 13:32:44 crc kubenswrapper[4861]: I0219 13:32:44.408190 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 13:32:45 crc kubenswrapper[4861]: I0219 13:32:45.143565 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 13:32:52 crc kubenswrapper[4861]: I0219 13:32:52.279671 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 
13:32:52 crc kubenswrapper[4861]: I0219 13:32:52.280810 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 13:32:52 crc kubenswrapper[4861]: I0219 13:32:52.282725 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 13:32:52 crc kubenswrapper[4861]: I0219 13:32:52.286780 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 13:32:53 crc kubenswrapper[4861]: I0219 13:32:53.196078 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 13:32:53 crc kubenswrapper[4861]: I0219 13:32:53.200529 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 13:32:53 crc kubenswrapper[4861]: I0219 13:32:53.413492 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58f6456c9f-znr64"] Feb 19 13:32:53 crc kubenswrapper[4861]: I0219 13:32:53.414977 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58f6456c9f-znr64" Feb 19 13:32:53 crc kubenswrapper[4861]: I0219 13:32:53.426879 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58f6456c9f-znr64"] Feb 19 13:32:53 crc kubenswrapper[4861]: I0219 13:32:53.492354 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-dns-svc\") pod \"dnsmasq-dns-58f6456c9f-znr64\" (UID: \"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0\") " pod="openstack/dnsmasq-dns-58f6456c9f-znr64" Feb 19 13:32:53 crc kubenswrapper[4861]: I0219 13:32:53.492414 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpj2s\" (UniqueName: \"kubernetes.io/projected/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-kube-api-access-cpj2s\") pod \"dnsmasq-dns-58f6456c9f-znr64\" (UID: \"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0\") " pod="openstack/dnsmasq-dns-58f6456c9f-znr64" Feb 19 13:32:53 crc kubenswrapper[4861]: I0219 13:32:53.492472 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-dns-swift-storage-0\") pod \"dnsmasq-dns-58f6456c9f-znr64\" (UID: \"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0\") " pod="openstack/dnsmasq-dns-58f6456c9f-znr64" Feb 19 13:32:53 crc kubenswrapper[4861]: I0219 13:32:53.492548 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-config\") pod \"dnsmasq-dns-58f6456c9f-znr64\" (UID: \"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0\") " pod="openstack/dnsmasq-dns-58f6456c9f-znr64" Feb 19 13:32:53 crc kubenswrapper[4861]: I0219 13:32:53.492582 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-ovsdbserver-sb\") pod \"dnsmasq-dns-58f6456c9f-znr64\" (UID: \"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0\") " pod="openstack/dnsmasq-dns-58f6456c9f-znr64" Feb 19 13:32:53 crc kubenswrapper[4861]: I0219 13:32:53.492639 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-ovsdbserver-nb\") pod \"dnsmasq-dns-58f6456c9f-znr64\" (UID: \"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0\") " pod="openstack/dnsmasq-dns-58f6456c9f-znr64" Feb 19 13:32:53 crc kubenswrapper[4861]: I0219 13:32:53.594791 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-dns-svc\") pod \"dnsmasq-dns-58f6456c9f-znr64\" (UID: \"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0\") " pod="openstack/dnsmasq-dns-58f6456c9f-znr64" Feb 19 13:32:53 crc kubenswrapper[4861]: I0219 13:32:53.594839 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpj2s\" (UniqueName: \"kubernetes.io/projected/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-kube-api-access-cpj2s\") pod \"dnsmasq-dns-58f6456c9f-znr64\" (UID: \"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0\") " pod="openstack/dnsmasq-dns-58f6456c9f-znr64" Feb 19 13:32:53 crc kubenswrapper[4861]: I0219 13:32:53.594865 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-dns-swift-storage-0\") pod \"dnsmasq-dns-58f6456c9f-znr64\" (UID: \"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0\") " pod="openstack/dnsmasq-dns-58f6456c9f-znr64" Feb 19 13:32:53 crc kubenswrapper[4861]: I0219 13:32:53.594911 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-config\") pod \"dnsmasq-dns-58f6456c9f-znr64\" (UID: \"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0\") " pod="openstack/dnsmasq-dns-58f6456c9f-znr64" Feb 19 13:32:53 crc kubenswrapper[4861]: I0219 13:32:53.594933 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-ovsdbserver-sb\") pod \"dnsmasq-dns-58f6456c9f-znr64\" (UID: \"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0\") " pod="openstack/dnsmasq-dns-58f6456c9f-znr64" Feb 19 13:32:53 crc kubenswrapper[4861]: I0219 13:32:53.594968 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-ovsdbserver-nb\") pod \"dnsmasq-dns-58f6456c9f-znr64\" (UID: \"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0\") " pod="openstack/dnsmasq-dns-58f6456c9f-znr64" Feb 19 13:32:53 crc kubenswrapper[4861]: I0219 13:32:53.595784 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-ovsdbserver-nb\") pod \"dnsmasq-dns-58f6456c9f-znr64\" (UID: \"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0\") " pod="openstack/dnsmasq-dns-58f6456c9f-znr64" Feb 19 13:32:53 crc kubenswrapper[4861]: I0219 13:32:53.596378 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-dns-svc\") pod \"dnsmasq-dns-58f6456c9f-znr64\" (UID: \"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0\") " pod="openstack/dnsmasq-dns-58f6456c9f-znr64" Feb 19 13:32:53 crc kubenswrapper[4861]: I0219 13:32:53.597446 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-dns-swift-storage-0\") pod \"dnsmasq-dns-58f6456c9f-znr64\" (UID: \"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0\") " pod="openstack/dnsmasq-dns-58f6456c9f-znr64" Feb 19 13:32:53 crc kubenswrapper[4861]: I0219 13:32:53.597958 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-config\") pod \"dnsmasq-dns-58f6456c9f-znr64\" (UID: \"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0\") " pod="openstack/dnsmasq-dns-58f6456c9f-znr64" Feb 19 13:32:53 crc kubenswrapper[4861]: I0219 13:32:53.598525 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-ovsdbserver-sb\") pod \"dnsmasq-dns-58f6456c9f-znr64\" (UID: \"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0\") " pod="openstack/dnsmasq-dns-58f6456c9f-znr64" Feb 19 13:32:53 crc kubenswrapper[4861]: I0219 13:32:53.616843 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpj2s\" (UniqueName: \"kubernetes.io/projected/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-kube-api-access-cpj2s\") pod \"dnsmasq-dns-58f6456c9f-znr64\" (UID: \"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0\") " pod="openstack/dnsmasq-dns-58f6456c9f-znr64" Feb 19 13:32:53 crc kubenswrapper[4861]: I0219 13:32:53.735576 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58f6456c9f-znr64" Feb 19 13:32:54 crc kubenswrapper[4861]: I0219 13:32:54.238691 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58f6456c9f-znr64"] Feb 19 13:32:54 crc kubenswrapper[4861]: W0219 13:32:54.242356 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8feb0ac_449f_4cd5_9cb9_f497fb09f8e0.slice/crio-788d4b4f82b0a608949e40ba653f9e5543d3c723af89e22de1431d976fe0ac11 WatchSource:0}: Error finding container 788d4b4f82b0a608949e40ba653f9e5543d3c723af89e22de1431d976fe0ac11: Status 404 returned error can't find the container with id 788d4b4f82b0a608949e40ba653f9e5543d3c723af89e22de1431d976fe0ac11 Feb 19 13:32:55 crc kubenswrapper[4861]: I0219 13:32:55.215121 4861 generic.go:334] "Generic (PLEG): container finished" podID="e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0" containerID="2d9219abd0b0183e6794131291b2740df409ae7257dd6d6efe7affed7f803e8b" exitCode=0 Feb 19 13:32:55 crc kubenswrapper[4861]: I0219 13:32:55.215284 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58f6456c9f-znr64" event={"ID":"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0","Type":"ContainerDied","Data":"2d9219abd0b0183e6794131291b2740df409ae7257dd6d6efe7affed7f803e8b"} Feb 19 13:32:55 crc kubenswrapper[4861]: I0219 13:32:55.215604 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58f6456c9f-znr64" event={"ID":"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0","Type":"ContainerStarted","Data":"788d4b4f82b0a608949e40ba653f9e5543d3c723af89e22de1431d976fe0ac11"} Feb 19 13:32:55 crc kubenswrapper[4861]: I0219 13:32:55.368409 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:32:55 crc kubenswrapper[4861]: I0219 13:32:55.369208 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="e2e04a54-7f65-46b4-9e5c-97e6d553064b" containerName="ceilometer-central-agent" containerID="cri-o://a40fb6522e9c85eedfa2a5f0a8f67014e57718cb9119d3ddf75d154b4c8ff4df" gracePeriod=30 Feb 19 13:32:55 crc kubenswrapper[4861]: I0219 13:32:55.369281 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e2e04a54-7f65-46b4-9e5c-97e6d553064b" containerName="sg-core" containerID="cri-o://cdfdd5e58b281aa0060b79241e3032905f1974b12b62735a53a6ec38a4775fac" gracePeriod=30 Feb 19 13:32:55 crc kubenswrapper[4861]: I0219 13:32:55.369358 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e2e04a54-7f65-46b4-9e5c-97e6d553064b" containerName="proxy-httpd" containerID="cri-o://d37e0f1410c63068592aa1a81ee85989589e22db3e41306607b8fa7828882334" gracePeriod=30 Feb 19 13:32:55 crc kubenswrapper[4861]: I0219 13:32:55.369402 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e2e04a54-7f65-46b4-9e5c-97e6d553064b" containerName="ceilometer-notification-agent" containerID="cri-o://3cd6eda07ebc0e699a84741425f20a433109b5b8fdf987994b6da72dda475590" gracePeriod=30 Feb 19 13:32:55 crc kubenswrapper[4861]: I0219 13:32:55.624478 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:32:55 crc kubenswrapper[4861]: I0219 13:32:55.826346 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n6g4t"] Feb 19 13:32:55 crc kubenswrapper[4861]: I0219 13:32:55.858161 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n6g4t"] Feb 19 13:32:55 crc kubenswrapper[4861]: I0219 13:32:55.858280 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n6g4t" Feb 19 13:32:55 crc kubenswrapper[4861]: I0219 13:32:55.944268 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aea8b77b-5275-4d64-b75c-f3ea30035661-catalog-content\") pod \"redhat-operators-n6g4t\" (UID: \"aea8b77b-5275-4d64-b75c-f3ea30035661\") " pod="openshift-marketplace/redhat-operators-n6g4t" Feb 19 13:32:55 crc kubenswrapper[4861]: I0219 13:32:55.944341 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aea8b77b-5275-4d64-b75c-f3ea30035661-utilities\") pod \"redhat-operators-n6g4t\" (UID: \"aea8b77b-5275-4d64-b75c-f3ea30035661\") " pod="openshift-marketplace/redhat-operators-n6g4t" Feb 19 13:32:55 crc kubenswrapper[4861]: I0219 13:32:55.944921 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f278f\" (UniqueName: \"kubernetes.io/projected/aea8b77b-5275-4d64-b75c-f3ea30035661-kube-api-access-f278f\") pod \"redhat-operators-n6g4t\" (UID: \"aea8b77b-5275-4d64-b75c-f3ea30035661\") " pod="openshift-marketplace/redhat-operators-n6g4t" Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.047589 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f278f\" (UniqueName: \"kubernetes.io/projected/aea8b77b-5275-4d64-b75c-f3ea30035661-kube-api-access-f278f\") pod \"redhat-operators-n6g4t\" (UID: \"aea8b77b-5275-4d64-b75c-f3ea30035661\") " pod="openshift-marketplace/redhat-operators-n6g4t" Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.047805 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aea8b77b-5275-4d64-b75c-f3ea30035661-catalog-content\") pod \"redhat-operators-n6g4t\" 
(UID: \"aea8b77b-5275-4d64-b75c-f3ea30035661\") " pod="openshift-marketplace/redhat-operators-n6g4t" Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.047849 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aea8b77b-5275-4d64-b75c-f3ea30035661-utilities\") pod \"redhat-operators-n6g4t\" (UID: \"aea8b77b-5275-4d64-b75c-f3ea30035661\") " pod="openshift-marketplace/redhat-operators-n6g4t" Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.050193 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aea8b77b-5275-4d64-b75c-f3ea30035661-utilities\") pod \"redhat-operators-n6g4t\" (UID: \"aea8b77b-5275-4d64-b75c-f3ea30035661\") " pod="openshift-marketplace/redhat-operators-n6g4t" Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.051328 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aea8b77b-5275-4d64-b75c-f3ea30035661-catalog-content\") pod \"redhat-operators-n6g4t\" (UID: \"aea8b77b-5275-4d64-b75c-f3ea30035661\") " pod="openshift-marketplace/redhat-operators-n6g4t" Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.098326 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f278f\" (UniqueName: \"kubernetes.io/projected/aea8b77b-5275-4d64-b75c-f3ea30035661-kube-api-access-f278f\") pod \"redhat-operators-n6g4t\" (UID: \"aea8b77b-5275-4d64-b75c-f3ea30035661\") " pod="openshift-marketplace/redhat-operators-n6g4t" Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.227295 4861 generic.go:334] "Generic (PLEG): container finished" podID="51868560-2fb1-43dc-bdb0-0ebf4b4f173e" containerID="1962e3d41ea810aad3ab4a831253aac921b27407d79269bdb28287cd3a3df113" exitCode=137 Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.227364 4861 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-metadata-0" event={"ID":"51868560-2fb1-43dc-bdb0-0ebf4b4f173e","Type":"ContainerDied","Data":"1962e3d41ea810aad3ab4a831253aac921b27407d79269bdb28287cd3a3df113"} Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.227410 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51868560-2fb1-43dc-bdb0-0ebf4b4f173e","Type":"ContainerDied","Data":"ec5df1a42fd97612067b47c6a75b0cb45636ce844ed5b397f8509624bec77ab9"} Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.227433 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec5df1a42fd97612067b47c6a75b0cb45636ce844ed5b397f8509624bec77ab9" Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.231796 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58f6456c9f-znr64" event={"ID":"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0","Type":"ContainerStarted","Data":"c09621d9ce982c78679bd413fb7cb9573fc6a8affe3cae2b67c7d40d0b7c5cd2"} Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.232849 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58f6456c9f-znr64" Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.237250 4861 generic.go:334] "Generic (PLEG): container finished" podID="505721c7-99f1-42f1-9c9c-3ddf8eab6e3d" containerID="d62a07316db7ac318a2b912a443e0afd62e0ac6270c9486dac904b25715ae4d5" exitCode=137 Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.237299 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"505721c7-99f1-42f1-9c9c-3ddf8eab6e3d","Type":"ContainerDied","Data":"d62a07316db7ac318a2b912a443e0afd62e0ac6270c9486dac904b25715ae4d5"} Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.240227 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n6g4t" Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.240646 4861 generic.go:334] "Generic (PLEG): container finished" podID="e2e04a54-7f65-46b4-9e5c-97e6d553064b" containerID="d37e0f1410c63068592aa1a81ee85989589e22db3e41306607b8fa7828882334" exitCode=0 Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.240683 4861 generic.go:334] "Generic (PLEG): container finished" podID="e2e04a54-7f65-46b4-9e5c-97e6d553064b" containerID="cdfdd5e58b281aa0060b79241e3032905f1974b12b62735a53a6ec38a4775fac" exitCode=2 Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.240694 4861 generic.go:334] "Generic (PLEG): container finished" podID="e2e04a54-7f65-46b4-9e5c-97e6d553064b" containerID="a40fb6522e9c85eedfa2a5f0a8f67014e57718cb9119d3ddf75d154b4c8ff4df" exitCode=0 Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.240711 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2e04a54-7f65-46b4-9e5c-97e6d553064b","Type":"ContainerDied","Data":"d37e0f1410c63068592aa1a81ee85989589e22db3e41306607b8fa7828882334"} Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.240736 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2e04a54-7f65-46b4-9e5c-97e6d553064b","Type":"ContainerDied","Data":"cdfdd5e58b281aa0060b79241e3032905f1974b12b62735a53a6ec38a4775fac"} Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.240749 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2e04a54-7f65-46b4-9e5c-97e6d553064b","Type":"ContainerDied","Data":"a40fb6522e9c85eedfa2a5f0a8f67014e57718cb9119d3ddf75d154b4c8ff4df"} Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.240908 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="da848cb3-a17d-465d-801f-b1bd1a0aabb5" containerName="nova-api-log" 
containerID="cri-o://b30e3af4ead3273a64705a09521c9c3b133110a497bc2c12a616416e8764d69b" gracePeriod=30 Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.241036 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="da848cb3-a17d-465d-801f-b1bd1a0aabb5" containerName="nova-api-api" containerID="cri-o://3373bef681cf8699a88426032318bb418228f79ef9a8a5f01741385b0ead430b" gracePeriod=30 Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.253931 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58f6456c9f-znr64" podStartSLOduration=3.253913968 podStartE2EDuration="3.253913968s" podCreationTimestamp="2026-02-19 13:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:32:56.253817876 +0000 UTC m=+1390.914921114" watchObservedRunningTime="2026-02-19 13:32:56.253913968 +0000 UTC m=+1390.915017186" Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.308544 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.317468 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.464776 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb75w\" (UniqueName: \"kubernetes.io/projected/51868560-2fb1-43dc-bdb0-0ebf4b4f173e-kube-api-access-vb75w\") pod \"51868560-2fb1-43dc-bdb0-0ebf4b4f173e\" (UID: \"51868560-2fb1-43dc-bdb0-0ebf4b4f173e\") " Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.464900 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk96v\" (UniqueName: \"kubernetes.io/projected/505721c7-99f1-42f1-9c9c-3ddf8eab6e3d-kube-api-access-fk96v\") pod \"505721c7-99f1-42f1-9c9c-3ddf8eab6e3d\" (UID: \"505721c7-99f1-42f1-9c9c-3ddf8eab6e3d\") " Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.464942 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/505721c7-99f1-42f1-9c9c-3ddf8eab6e3d-config-data\") pod \"505721c7-99f1-42f1-9c9c-3ddf8eab6e3d\" (UID: \"505721c7-99f1-42f1-9c9c-3ddf8eab6e3d\") " Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.464979 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51868560-2fb1-43dc-bdb0-0ebf4b4f173e-logs\") pod \"51868560-2fb1-43dc-bdb0-0ebf4b4f173e\" (UID: \"51868560-2fb1-43dc-bdb0-0ebf4b4f173e\") " Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.465019 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505721c7-99f1-42f1-9c9c-3ddf8eab6e3d-combined-ca-bundle\") pod \"505721c7-99f1-42f1-9c9c-3ddf8eab6e3d\" (UID: \"505721c7-99f1-42f1-9c9c-3ddf8eab6e3d\") " Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.465059 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51868560-2fb1-43dc-bdb0-0ebf4b4f173e-combined-ca-bundle\") pod \"51868560-2fb1-43dc-bdb0-0ebf4b4f173e\" (UID: \"51868560-2fb1-43dc-bdb0-0ebf4b4f173e\") " Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.465099 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51868560-2fb1-43dc-bdb0-0ebf4b4f173e-config-data\") pod \"51868560-2fb1-43dc-bdb0-0ebf4b4f173e\" (UID: \"51868560-2fb1-43dc-bdb0-0ebf4b4f173e\") " Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.479364 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51868560-2fb1-43dc-bdb0-0ebf4b4f173e-kube-api-access-vb75w" (OuterVolumeSpecName: "kube-api-access-vb75w") pod "51868560-2fb1-43dc-bdb0-0ebf4b4f173e" (UID: "51868560-2fb1-43dc-bdb0-0ebf4b4f173e"). InnerVolumeSpecName "kube-api-access-vb75w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.479632 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51868560-2fb1-43dc-bdb0-0ebf4b4f173e-logs" (OuterVolumeSpecName: "logs") pod "51868560-2fb1-43dc-bdb0-0ebf4b4f173e" (UID: "51868560-2fb1-43dc-bdb0-0ebf4b4f173e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.483748 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/505721c7-99f1-42f1-9c9c-3ddf8eab6e3d-kube-api-access-fk96v" (OuterVolumeSpecName: "kube-api-access-fk96v") pod "505721c7-99f1-42f1-9c9c-3ddf8eab6e3d" (UID: "505721c7-99f1-42f1-9c9c-3ddf8eab6e3d"). InnerVolumeSpecName "kube-api-access-fk96v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.553725 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/505721c7-99f1-42f1-9c9c-3ddf8eab6e3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "505721c7-99f1-42f1-9c9c-3ddf8eab6e3d" (UID: "505721c7-99f1-42f1-9c9c-3ddf8eab6e3d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.555545 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51868560-2fb1-43dc-bdb0-0ebf4b4f173e-config-data" (OuterVolumeSpecName: "config-data") pod "51868560-2fb1-43dc-bdb0-0ebf4b4f173e" (UID: "51868560-2fb1-43dc-bdb0-0ebf4b4f173e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.561415 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51868560-2fb1-43dc-bdb0-0ebf4b4f173e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51868560-2fb1-43dc-bdb0-0ebf4b4f173e" (UID: "51868560-2fb1-43dc-bdb0-0ebf4b4f173e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.566893 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fk96v\" (UniqueName: \"kubernetes.io/projected/505721c7-99f1-42f1-9c9c-3ddf8eab6e3d-kube-api-access-fk96v\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.566932 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51868560-2fb1-43dc-bdb0-0ebf4b4f173e-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.566943 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505721c7-99f1-42f1-9c9c-3ddf8eab6e3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.566953 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51868560-2fb1-43dc-bdb0-0ebf4b4f173e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.566962 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51868560-2fb1-43dc-bdb0-0ebf4b4f173e-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.566972 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb75w\" (UniqueName: \"kubernetes.io/projected/51868560-2fb1-43dc-bdb0-0ebf4b4f173e-kube-api-access-vb75w\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.571229 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/505721c7-99f1-42f1-9c9c-3ddf8eab6e3d-config-data" (OuterVolumeSpecName: "config-data") pod "505721c7-99f1-42f1-9c9c-3ddf8eab6e3d" (UID: 
"505721c7-99f1-42f1-9c9c-3ddf8eab6e3d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.668443 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/505721c7-99f1-42f1-9c9c-3ddf8eab6e3d-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:56 crc kubenswrapper[4861]: W0219 13:32:56.894721 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaea8b77b_5275_4d64_b75c_f3ea30035661.slice/crio-a3c62bf38b8d6a232826fe1323b5c715e404eb67d8788afb940ee964ad4b0feb WatchSource:0}: Error finding container a3c62bf38b8d6a232826fe1323b5c715e404eb67d8788afb940ee964ad4b0feb: Status 404 returned error can't find the container with id a3c62bf38b8d6a232826fe1323b5c715e404eb67d8788afb940ee964ad4b0feb Feb 19 13:32:56 crc kubenswrapper[4861]: I0219 13:32:56.896014 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n6g4t"] Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.252760 4861 generic.go:334] "Generic (PLEG): container finished" podID="aea8b77b-5275-4d64-b75c-f3ea30035661" containerID="cc76fc052de82f7e79740cf2a5454e18d3b441fcd055fb0a7780b547a3548b3b" exitCode=0 Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.252866 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n6g4t" event={"ID":"aea8b77b-5275-4d64-b75c-f3ea30035661","Type":"ContainerDied","Data":"cc76fc052de82f7e79740cf2a5454e18d3b441fcd055fb0a7780b547a3548b3b"} Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.253103 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n6g4t" event={"ID":"aea8b77b-5275-4d64-b75c-f3ea30035661","Type":"ContainerStarted","Data":"a3c62bf38b8d6a232826fe1323b5c715e404eb67d8788afb940ee964ad4b0feb"} Feb 19 
13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.255499 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.255622 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.255612 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"505721c7-99f1-42f1-9c9c-3ddf8eab6e3d","Type":"ContainerDied","Data":"f12226cf719945f56c3a6262acfd18a5912a92502017058c06521a357d0d8ea6"} Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.255805 4861 scope.go:117] "RemoveContainer" containerID="d62a07316db7ac318a2b912a443e0afd62e0ac6270c9486dac904b25715ae4d5" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.258930 4861 generic.go:334] "Generic (PLEG): container finished" podID="da848cb3-a17d-465d-801f-b1bd1a0aabb5" containerID="b30e3af4ead3273a64705a09521c9c3b133110a497bc2c12a616416e8764d69b" exitCode=143 Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.259007 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"da848cb3-a17d-465d-801f-b1bd1a0aabb5","Type":"ContainerDied","Data":"b30e3af4ead3273a64705a09521c9c3b133110a497bc2c12a616416e8764d69b"} Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.259022 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.322944 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.341701 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.375199 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.384025 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.404213 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 13:32:57 crc kubenswrapper[4861]: E0219 13:32:57.404737 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51868560-2fb1-43dc-bdb0-0ebf4b4f173e" containerName="nova-metadata-metadata" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.404758 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="51868560-2fb1-43dc-bdb0-0ebf4b4f173e" containerName="nova-metadata-metadata" Feb 19 13:32:57 crc kubenswrapper[4861]: E0219 13:32:57.404791 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51868560-2fb1-43dc-bdb0-0ebf4b4f173e" containerName="nova-metadata-log" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.404799 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="51868560-2fb1-43dc-bdb0-0ebf4b4f173e" containerName="nova-metadata-log" Feb 19 13:32:57 crc kubenswrapper[4861]: E0219 13:32:57.404829 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="505721c7-99f1-42f1-9c9c-3ddf8eab6e3d" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.404838 4861 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="505721c7-99f1-42f1-9c9c-3ddf8eab6e3d" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.405042 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="51868560-2fb1-43dc-bdb0-0ebf4b4f173e" containerName="nova-metadata-metadata" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.405071 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="505721c7-99f1-42f1-9c9c-3ddf8eab6e3d" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.405086 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="51868560-2fb1-43dc-bdb0-0ebf4b4f173e" containerName="nova-metadata-log" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.406388 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.408917 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.413775 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.418484 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.428805 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.430033 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.432801 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.433062 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.433217 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.439638 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.586930 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54334d55-b01e-4310-a442-08b922e35f7c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"54334d55-b01e-4310-a442-08b922e35f7c\") " pod="openstack/nova-metadata-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.586981 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjdnt\" (UniqueName: \"kubernetes.io/projected/54334d55-b01e-4310-a442-08b922e35f7c-kube-api-access-gjdnt\") pod \"nova-metadata-0\" (UID: \"54334d55-b01e-4310-a442-08b922e35f7c\") " pod="openstack/nova-metadata-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.587013 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:32:57 crc 
kubenswrapper[4861]: I0219 13:32:57.587158 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54334d55-b01e-4310-a442-08b922e35f7c-config-data\") pod \"nova-metadata-0\" (UID: \"54334d55-b01e-4310-a442-08b922e35f7c\") " pod="openstack/nova-metadata-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.587250 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54334d55-b01e-4310-a442-08b922e35f7c-logs\") pod \"nova-metadata-0\" (UID: \"54334d55-b01e-4310-a442-08b922e35f7c\") " pod="openstack/nova-metadata-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.587332 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slqvd\" (UniqueName: \"kubernetes.io/projected/9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532-kube-api-access-slqvd\") pod \"nova-cell1-novncproxy-0\" (UID: \"9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.587504 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/54334d55-b01e-4310-a442-08b922e35f7c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"54334d55-b01e-4310-a442-08b922e35f7c\") " pod="openstack/nova-metadata-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.587537 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.587600 4861 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.587698 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.689688 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/54334d55-b01e-4310-a442-08b922e35f7c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"54334d55-b01e-4310-a442-08b922e35f7c\") " pod="openstack/nova-metadata-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.689736 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.689764 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.689824 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.689864 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54334d55-b01e-4310-a442-08b922e35f7c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"54334d55-b01e-4310-a442-08b922e35f7c\") " pod="openstack/nova-metadata-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.689884 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjdnt\" (UniqueName: \"kubernetes.io/projected/54334d55-b01e-4310-a442-08b922e35f7c-kube-api-access-gjdnt\") pod \"nova-metadata-0\" (UID: \"54334d55-b01e-4310-a442-08b922e35f7c\") " pod="openstack/nova-metadata-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.689906 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.689930 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54334d55-b01e-4310-a442-08b922e35f7c-config-data\") pod \"nova-metadata-0\" (UID: \"54334d55-b01e-4310-a442-08b922e35f7c\") " pod="openstack/nova-metadata-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.689953 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54334d55-b01e-4310-a442-08b922e35f7c-logs\") pod \"nova-metadata-0\" 
(UID: \"54334d55-b01e-4310-a442-08b922e35f7c\") " pod="openstack/nova-metadata-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.689981 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slqvd\" (UniqueName: \"kubernetes.io/projected/9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532-kube-api-access-slqvd\") pod \"nova-cell1-novncproxy-0\" (UID: \"9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.697239 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.698930 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54334d55-b01e-4310-a442-08b922e35f7c-logs\") pod \"nova-metadata-0\" (UID: \"54334d55-b01e-4310-a442-08b922e35f7c\") " pod="openstack/nova-metadata-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.699726 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.699971 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.700491 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/54334d55-b01e-4310-a442-08b922e35f7c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"54334d55-b01e-4310-a442-08b922e35f7c\") " pod="openstack/nova-metadata-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.701411 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54334d55-b01e-4310-a442-08b922e35f7c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"54334d55-b01e-4310-a442-08b922e35f7c\") " pod="openstack/nova-metadata-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.705570 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54334d55-b01e-4310-a442-08b922e35f7c-config-data\") pod \"nova-metadata-0\" (UID: \"54334d55-b01e-4310-a442-08b922e35f7c\") " pod="openstack/nova-metadata-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.706214 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.722682 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjdnt\" (UniqueName: \"kubernetes.io/projected/54334d55-b01e-4310-a442-08b922e35f7c-kube-api-access-gjdnt\") pod \"nova-metadata-0\" (UID: \"54334d55-b01e-4310-a442-08b922e35f7c\") " pod="openstack/nova-metadata-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.726972 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slqvd\" (UniqueName: 
\"kubernetes.io/projected/9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532-kube-api-access-slqvd\") pod \"nova-cell1-novncproxy-0\" (UID: \"9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:32:57 crc kubenswrapper[4861]: I0219 13:32:57.748072 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:32:58 crc kubenswrapper[4861]: I0219 13:32:58.008246 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="505721c7-99f1-42f1-9c9c-3ddf8eab6e3d" path="/var/lib/kubelet/pods/505721c7-99f1-42f1-9c9c-3ddf8eab6e3d/volumes" Feb 19 13:32:58 crc kubenswrapper[4861]: I0219 13:32:58.009255 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51868560-2fb1-43dc-bdb0-0ebf4b4f173e" path="/var/lib/kubelet/pods/51868560-2fb1-43dc-bdb0-0ebf4b4f173e/volumes" Feb 19 13:32:58 crc kubenswrapper[4861]: I0219 13:32:58.022583 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 13:32:58 crc kubenswrapper[4861]: I0219 13:32:58.216791 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 13:32:58 crc kubenswrapper[4861]: W0219 13:32:58.225617 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f0bdae2_ebf0_4f9d_a9af_d1b40f8d5532.slice/crio-96991be9cc29d3f2467ec29101612fcc2720daf6748a4aa099802361c1822b8b WatchSource:0}: Error finding container 96991be9cc29d3f2467ec29101612fcc2720daf6748a4aa099802361c1822b8b: Status 404 returned error can't find the container with id 96991be9cc29d3f2467ec29101612fcc2720daf6748a4aa099802361c1822b8b Feb 19 13:32:58 crc kubenswrapper[4861]: I0219 13:32:58.271892 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532","Type":"ContainerStarted","Data":"96991be9cc29d3f2467ec29101612fcc2720daf6748a4aa099802361c1822b8b"} Feb 19 13:32:58 crc kubenswrapper[4861]: I0219 13:32:58.283120 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n6g4t" event={"ID":"aea8b77b-5275-4d64-b75c-f3ea30035661","Type":"ContainerStarted","Data":"a81172a19e9f16370430b0679ca1d4500af31b38c713341cd98b7b73a2013da3"} Feb 19 13:32:58 crc kubenswrapper[4861]: I0219 13:32:58.512046 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 13:32:59 crc kubenswrapper[4861]: I0219 13:32:59.305392 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"54334d55-b01e-4310-a442-08b922e35f7c","Type":"ContainerStarted","Data":"d4b16db9327b4962b513bb2f6fa13c8694a4187cd19ee9f8fb0f40fb3444206a"} Feb 19 13:32:59 crc kubenswrapper[4861]: I0219 13:32:59.305795 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"54334d55-b01e-4310-a442-08b922e35f7c","Type":"ContainerStarted","Data":"59233cce678088a96bb8b27b79b2df708e9988a8ca341c52ef79179bb1a31be0"} Feb 19 13:32:59 crc kubenswrapper[4861]: I0219 13:32:59.305840 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"54334d55-b01e-4310-a442-08b922e35f7c","Type":"ContainerStarted","Data":"aa499393ad247d4e6a861e07c06d4d590593a8bff4090765e7ab5dc259627d41"} Feb 19 13:32:59 crc kubenswrapper[4861]: I0219 13:32:59.313235 4861 generic.go:334] "Generic (PLEG): container finished" podID="e2e04a54-7f65-46b4-9e5c-97e6d553064b" containerID="3cd6eda07ebc0e699a84741425f20a433109b5b8fdf987994b6da72dda475590" exitCode=0 Feb 19 13:32:59 crc kubenswrapper[4861]: I0219 13:32:59.313288 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2e04a54-7f65-46b4-9e5c-97e6d553064b","Type":"ContainerDied","Data":"3cd6eda07ebc0e699a84741425f20a433109b5b8fdf987994b6da72dda475590"} Feb 19 13:32:59 crc kubenswrapper[4861]: I0219 13:32:59.317394 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532","Type":"ContainerStarted","Data":"531ccdafb0d0612a0c1e667d57b0657cf8216937bc4a152030d92ea096cbe11a"} Feb 19 13:32:59 crc kubenswrapper[4861]: I0219 13:32:59.347511 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.3474863790000002 podStartE2EDuration="2.347486379s" podCreationTimestamp="2026-02-19 13:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:32:59.334956402 +0000 UTC m=+1393.996059640" watchObservedRunningTime="2026-02-19 13:32:59.347486379 +0000 UTC m=+1394.008589617" Feb 19 13:32:59 crc kubenswrapper[4861]: I0219 13:32:59.371227 4861 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.371200928 podStartE2EDuration="2.371200928s" podCreationTimestamp="2026-02-19 13:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:32:59.36053307 +0000 UTC m=+1394.021636318" watchObservedRunningTime="2026-02-19 13:32:59.371200928 +0000 UTC m=+1394.032304166" Feb 19 13:32:59 crc kubenswrapper[4861]: I0219 13:32:59.688682 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:32:59 crc kubenswrapper[4861]: I0219 13:32:59.851870 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2e04a54-7f65-46b4-9e5c-97e6d553064b-log-httpd\") pod \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\" (UID: \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\") " Feb 19 13:32:59 crc kubenswrapper[4861]: I0219 13:32:59.851985 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2e04a54-7f65-46b4-9e5c-97e6d553064b-scripts\") pod \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\" (UID: \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\") " Feb 19 13:32:59 crc kubenswrapper[4861]: I0219 13:32:59.852012 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2e04a54-7f65-46b4-9e5c-97e6d553064b-ceilometer-tls-certs\") pod \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\" (UID: \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\") " Feb 19 13:32:59 crc kubenswrapper[4861]: I0219 13:32:59.852072 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2e04a54-7f65-46b4-9e5c-97e6d553064b-combined-ca-bundle\") pod \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\" (UID: 
\"e2e04a54-7f65-46b4-9e5c-97e6d553064b\") " Feb 19 13:32:59 crc kubenswrapper[4861]: I0219 13:32:59.852124 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm72m\" (UniqueName: \"kubernetes.io/projected/e2e04a54-7f65-46b4-9e5c-97e6d553064b-kube-api-access-qm72m\") pod \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\" (UID: \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\") " Feb 19 13:32:59 crc kubenswrapper[4861]: I0219 13:32:59.852150 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2e04a54-7f65-46b4-9e5c-97e6d553064b-sg-core-conf-yaml\") pod \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\" (UID: \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\") " Feb 19 13:32:59 crc kubenswrapper[4861]: I0219 13:32:59.852238 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2e04a54-7f65-46b4-9e5c-97e6d553064b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e2e04a54-7f65-46b4-9e5c-97e6d553064b" (UID: "e2e04a54-7f65-46b4-9e5c-97e6d553064b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:32:59 crc kubenswrapper[4861]: I0219 13:32:59.852296 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2e04a54-7f65-46b4-9e5c-97e6d553064b-run-httpd\") pod \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\" (UID: \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\") " Feb 19 13:32:59 crc kubenswrapper[4861]: I0219 13:32:59.852546 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e04a54-7f65-46b4-9e5c-97e6d553064b-config-data\") pod \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\" (UID: \"e2e04a54-7f65-46b4-9e5c-97e6d553064b\") " Feb 19 13:32:59 crc kubenswrapper[4861]: I0219 13:32:59.852920 4861 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2e04a54-7f65-46b4-9e5c-97e6d553064b-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:59 crc kubenswrapper[4861]: I0219 13:32:59.854591 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2e04a54-7f65-46b4-9e5c-97e6d553064b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e2e04a54-7f65-46b4-9e5c-97e6d553064b" (UID: "e2e04a54-7f65-46b4-9e5c-97e6d553064b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:32:59 crc kubenswrapper[4861]: I0219 13:32:59.873109 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2e04a54-7f65-46b4-9e5c-97e6d553064b-kube-api-access-qm72m" (OuterVolumeSpecName: "kube-api-access-qm72m") pod "e2e04a54-7f65-46b4-9e5c-97e6d553064b" (UID: "e2e04a54-7f65-46b4-9e5c-97e6d553064b"). InnerVolumeSpecName "kube-api-access-qm72m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:32:59 crc kubenswrapper[4861]: I0219 13:32:59.877732 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2e04a54-7f65-46b4-9e5c-97e6d553064b-scripts" (OuterVolumeSpecName: "scripts") pod "e2e04a54-7f65-46b4-9e5c-97e6d553064b" (UID: "e2e04a54-7f65-46b4-9e5c-97e6d553064b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:32:59 crc kubenswrapper[4861]: I0219 13:32:59.885109 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2e04a54-7f65-46b4-9e5c-97e6d553064b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e2e04a54-7f65-46b4-9e5c-97e6d553064b" (UID: "e2e04a54-7f65-46b4-9e5c-97e6d553064b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:32:59 crc kubenswrapper[4861]: I0219 13:32:59.910539 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2e04a54-7f65-46b4-9e5c-97e6d553064b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "e2e04a54-7f65-46b4-9e5c-97e6d553064b" (UID: "e2e04a54-7f65-46b4-9e5c-97e6d553064b"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:32:59 crc kubenswrapper[4861]: I0219 13:32:59.955638 4861 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2e04a54-7f65-46b4-9e5c-97e6d553064b-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:59 crc kubenswrapper[4861]: I0219 13:32:59.955903 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2e04a54-7f65-46b4-9e5c-97e6d553064b-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:59 crc kubenswrapper[4861]: I0219 13:32:59.955973 4861 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2e04a54-7f65-46b4-9e5c-97e6d553064b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:59 crc kubenswrapper[4861]: I0219 13:32:59.956038 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qm72m\" (UniqueName: \"kubernetes.io/projected/e2e04a54-7f65-46b4-9e5c-97e6d553064b-kube-api-access-qm72m\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:59 crc kubenswrapper[4861]: I0219 13:32:59.956094 4861 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2e04a54-7f65-46b4-9e5c-97e6d553064b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 13:32:59 crc kubenswrapper[4861]: I0219 13:32:59.972269 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2e04a54-7f65-46b4-9e5c-97e6d553064b-config-data" (OuterVolumeSpecName: "config-data") pod "e2e04a54-7f65-46b4-9e5c-97e6d553064b" (UID: "e2e04a54-7f65-46b4-9e5c-97e6d553064b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:32:59 crc kubenswrapper[4861]: I0219 13:32:59.981754 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2e04a54-7f65-46b4-9e5c-97e6d553064b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2e04a54-7f65-46b4-9e5c-97e6d553064b" (UID: "e2e04a54-7f65-46b4-9e5c-97e6d553064b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.062644 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2e04a54-7f65-46b4-9e5c-97e6d553064b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.062676 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e04a54-7f65-46b4-9e5c-97e6d553064b-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.247716 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.330727 4861 generic.go:334] "Generic (PLEG): container finished" podID="da848cb3-a17d-465d-801f-b1bd1a0aabb5" containerID="3373bef681cf8699a88426032318bb418228f79ef9a8a5f01741385b0ead430b" exitCode=0 Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.330796 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"da848cb3-a17d-465d-801f-b1bd1a0aabb5","Type":"ContainerDied","Data":"3373bef681cf8699a88426032318bb418228f79ef9a8a5f01741385b0ead430b"} Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.330826 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"da848cb3-a17d-465d-801f-b1bd1a0aabb5","Type":"ContainerDied","Data":"96ce2eea4a2e63e087a5a3fdbcb879abcd6a1a869caa24598ce1fce44be47813"} Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.330845 4861 scope.go:117] "RemoveContainer" containerID="3373bef681cf8699a88426032318bb418228f79ef9a8a5f01741385b0ead430b" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.330978 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.337024 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2e04a54-7f65-46b4-9e5c-97e6d553064b","Type":"ContainerDied","Data":"00c4e1410e31416525418b8f499f41013714761b0290eae6e13671d13a09424a"} Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.337093 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.340353 4861 generic.go:334] "Generic (PLEG): container finished" podID="aea8b77b-5275-4d64-b75c-f3ea30035661" containerID="a81172a19e9f16370430b0679ca1d4500af31b38c713341cd98b7b73a2013da3" exitCode=0 Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.341236 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n6g4t" event={"ID":"aea8b77b-5275-4d64-b75c-f3ea30035661","Type":"ContainerDied","Data":"a81172a19e9f16370430b0679ca1d4500af31b38c713341cd98b7b73a2013da3"} Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.361963 4861 scope.go:117] "RemoveContainer" containerID="b30e3af4ead3273a64705a09521c9c3b133110a497bc2c12a616416e8764d69b" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.369517 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da848cb3-a17d-465d-801f-b1bd1a0aabb5-combined-ca-bundle\") pod \"da848cb3-a17d-465d-801f-b1bd1a0aabb5\" (UID: \"da848cb3-a17d-465d-801f-b1bd1a0aabb5\") " Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.369582 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drt9p\" (UniqueName: \"kubernetes.io/projected/da848cb3-a17d-465d-801f-b1bd1a0aabb5-kube-api-access-drt9p\") pod \"da848cb3-a17d-465d-801f-b1bd1a0aabb5\" (UID: \"da848cb3-a17d-465d-801f-b1bd1a0aabb5\") " Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.369630 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da848cb3-a17d-465d-801f-b1bd1a0aabb5-config-data\") pod \"da848cb3-a17d-465d-801f-b1bd1a0aabb5\" (UID: \"da848cb3-a17d-465d-801f-b1bd1a0aabb5\") " Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.369738 4861 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da848cb3-a17d-465d-801f-b1bd1a0aabb5-logs\") pod \"da848cb3-a17d-465d-801f-b1bd1a0aabb5\" (UID: \"da848cb3-a17d-465d-801f-b1bd1a0aabb5\") " Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.370536 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da848cb3-a17d-465d-801f-b1bd1a0aabb5-logs" (OuterVolumeSpecName: "logs") pod "da848cb3-a17d-465d-801f-b1bd1a0aabb5" (UID: "da848cb3-a17d-465d-801f-b1bd1a0aabb5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.370933 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da848cb3-a17d-465d-801f-b1bd1a0aabb5-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.385575 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da848cb3-a17d-465d-801f-b1bd1a0aabb5-kube-api-access-drt9p" (OuterVolumeSpecName: "kube-api-access-drt9p") pod "da848cb3-a17d-465d-801f-b1bd1a0aabb5" (UID: "da848cb3-a17d-465d-801f-b1bd1a0aabb5"). InnerVolumeSpecName "kube-api-access-drt9p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.388412 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.399560 4861 scope.go:117] "RemoveContainer" containerID="3373bef681cf8699a88426032318bb418228f79ef9a8a5f01741385b0ead430b" Feb 19 13:33:00 crc kubenswrapper[4861]: E0219 13:33:00.402662 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3373bef681cf8699a88426032318bb418228f79ef9a8a5f01741385b0ead430b\": container with ID starting with 3373bef681cf8699a88426032318bb418228f79ef9a8a5f01741385b0ead430b not found: ID does not exist" containerID="3373bef681cf8699a88426032318bb418228f79ef9a8a5f01741385b0ead430b" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.402715 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3373bef681cf8699a88426032318bb418228f79ef9a8a5f01741385b0ead430b"} err="failed to get container status \"3373bef681cf8699a88426032318bb418228f79ef9a8a5f01741385b0ead430b\": rpc error: code = NotFound desc = could not find container \"3373bef681cf8699a88426032318bb418228f79ef9a8a5f01741385b0ead430b\": container with ID starting with 3373bef681cf8699a88426032318bb418228f79ef9a8a5f01741385b0ead430b not found: ID does not exist" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.402743 4861 scope.go:117] "RemoveContainer" containerID="b30e3af4ead3273a64705a09521c9c3b133110a497bc2c12a616416e8764d69b" Feb 19 13:33:00 crc kubenswrapper[4861]: E0219 13:33:00.403235 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b30e3af4ead3273a64705a09521c9c3b133110a497bc2c12a616416e8764d69b\": container with ID starting with b30e3af4ead3273a64705a09521c9c3b133110a497bc2c12a616416e8764d69b not found: ID does not 
exist" containerID="b30e3af4ead3273a64705a09521c9c3b133110a497bc2c12a616416e8764d69b" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.403260 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b30e3af4ead3273a64705a09521c9c3b133110a497bc2c12a616416e8764d69b"} err="failed to get container status \"b30e3af4ead3273a64705a09521c9c3b133110a497bc2c12a616416e8764d69b\": rpc error: code = NotFound desc = could not find container \"b30e3af4ead3273a64705a09521c9c3b133110a497bc2c12a616416e8764d69b\": container with ID starting with b30e3af4ead3273a64705a09521c9c3b133110a497bc2c12a616416e8764d69b not found: ID does not exist" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.403274 4861 scope.go:117] "RemoveContainer" containerID="d37e0f1410c63068592aa1a81ee85989589e22db3e41306607b8fa7828882334" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.408673 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da848cb3-a17d-465d-801f-b1bd1a0aabb5-config-data" (OuterVolumeSpecName: "config-data") pod "da848cb3-a17d-465d-801f-b1bd1a0aabb5" (UID: "da848cb3-a17d-465d-801f-b1bd1a0aabb5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.420935 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.430304 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da848cb3-a17d-465d-801f-b1bd1a0aabb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da848cb3-a17d-465d-801f-b1bd1a0aabb5" (UID: "da848cb3-a17d-465d-801f-b1bd1a0aabb5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.438163 4861 scope.go:117] "RemoveContainer" containerID="cdfdd5e58b281aa0060b79241e3032905f1974b12b62735a53a6ec38a4775fac" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.452697 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:33:00 crc kubenswrapper[4861]: E0219 13:33:00.453169 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e04a54-7f65-46b4-9e5c-97e6d553064b" containerName="ceilometer-notification-agent" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.453192 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e04a54-7f65-46b4-9e5c-97e6d553064b" containerName="ceilometer-notification-agent" Feb 19 13:33:00 crc kubenswrapper[4861]: E0219 13:33:00.453207 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e04a54-7f65-46b4-9e5c-97e6d553064b" containerName="sg-core" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.453215 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e04a54-7f65-46b4-9e5c-97e6d553064b" containerName="sg-core" Feb 19 13:33:00 crc kubenswrapper[4861]: E0219 13:33:00.453235 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e04a54-7f65-46b4-9e5c-97e6d553064b" containerName="ceilometer-central-agent" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.453242 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e04a54-7f65-46b4-9e5c-97e6d553064b" containerName="ceilometer-central-agent" Feb 19 13:33:00 crc kubenswrapper[4861]: E0219 13:33:00.453260 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da848cb3-a17d-465d-801f-b1bd1a0aabb5" containerName="nova-api-log" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.453265 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="da848cb3-a17d-465d-801f-b1bd1a0aabb5" containerName="nova-api-log" Feb 19 
13:33:00 crc kubenswrapper[4861]: E0219 13:33:00.453273 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da848cb3-a17d-465d-801f-b1bd1a0aabb5" containerName="nova-api-api" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.453278 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="da848cb3-a17d-465d-801f-b1bd1a0aabb5" containerName="nova-api-api" Feb 19 13:33:00 crc kubenswrapper[4861]: E0219 13:33:00.453291 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e04a54-7f65-46b4-9e5c-97e6d553064b" containerName="proxy-httpd" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.453298 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e04a54-7f65-46b4-9e5c-97e6d553064b" containerName="proxy-httpd" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.453540 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="da848cb3-a17d-465d-801f-b1bd1a0aabb5" containerName="nova-api-log" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.453553 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2e04a54-7f65-46b4-9e5c-97e6d553064b" containerName="ceilometer-notification-agent" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.453566 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2e04a54-7f65-46b4-9e5c-97e6d553064b" containerName="sg-core" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.453577 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2e04a54-7f65-46b4-9e5c-97e6d553064b" containerName="ceilometer-central-agent" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.453588 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2e04a54-7f65-46b4-9e5c-97e6d553064b" containerName="proxy-httpd" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.453598 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="da848cb3-a17d-465d-801f-b1bd1a0aabb5" containerName="nova-api-api" Feb 
19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.455257 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.457541 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.460077 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.460229 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.463936 4861 scope.go:117] "RemoveContainer" containerID="3cd6eda07ebc0e699a84741425f20a433109b5b8fdf987994b6da72dda475590" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.468565 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.472592 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da848cb3-a17d-465d-801f-b1bd1a0aabb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.474623 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drt9p\" (UniqueName: \"kubernetes.io/projected/da848cb3-a17d-465d-801f-b1bd1a0aabb5-kube-api-access-drt9p\") on node \"crc\" DevicePath \"\"" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.474712 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da848cb3-a17d-465d-801f-b1bd1a0aabb5-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.490680 4861 scope.go:117] "RemoveContainer" containerID="a40fb6522e9c85eedfa2a5f0a8f67014e57718cb9119d3ddf75d154b4c8ff4df" Feb 19 
13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.576297 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11e264a8-32df-4980-a6b8-eb1964d644b9-scripts\") pod \"ceilometer-0\" (UID: \"11e264a8-32df-4980-a6b8-eb1964d644b9\") " pod="openstack/ceilometer-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.576437 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11e264a8-32df-4980-a6b8-eb1964d644b9-run-httpd\") pod \"ceilometer-0\" (UID: \"11e264a8-32df-4980-a6b8-eb1964d644b9\") " pod="openstack/ceilometer-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.576485 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/11e264a8-32df-4980-a6b8-eb1964d644b9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"11e264a8-32df-4980-a6b8-eb1964d644b9\") " pod="openstack/ceilometer-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.576669 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11e264a8-32df-4980-a6b8-eb1964d644b9-log-httpd\") pod \"ceilometer-0\" (UID: \"11e264a8-32df-4980-a6b8-eb1964d644b9\") " pod="openstack/ceilometer-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.576764 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt2vf\" (UniqueName: \"kubernetes.io/projected/11e264a8-32df-4980-a6b8-eb1964d644b9-kube-api-access-nt2vf\") pod \"ceilometer-0\" (UID: \"11e264a8-32df-4980-a6b8-eb1964d644b9\") " pod="openstack/ceilometer-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.576819 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11e264a8-32df-4980-a6b8-eb1964d644b9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"11e264a8-32df-4980-a6b8-eb1964d644b9\") " pod="openstack/ceilometer-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.576849 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/11e264a8-32df-4980-a6b8-eb1964d644b9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"11e264a8-32df-4980-a6b8-eb1964d644b9\") " pod="openstack/ceilometer-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.576864 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11e264a8-32df-4980-a6b8-eb1964d644b9-config-data\") pod \"ceilometer-0\" (UID: \"11e264a8-32df-4980-a6b8-eb1964d644b9\") " pod="openstack/ceilometer-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.664836 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.682270 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11e264a8-32df-4980-a6b8-eb1964d644b9-log-httpd\") pod \"ceilometer-0\" (UID: \"11e264a8-32df-4980-a6b8-eb1964d644b9\") " pod="openstack/ceilometer-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.682339 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt2vf\" (UniqueName: \"kubernetes.io/projected/11e264a8-32df-4980-a6b8-eb1964d644b9-kube-api-access-nt2vf\") pod \"ceilometer-0\" (UID: \"11e264a8-32df-4980-a6b8-eb1964d644b9\") " pod="openstack/ceilometer-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.682400 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11e264a8-32df-4980-a6b8-eb1964d644b9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"11e264a8-32df-4980-a6b8-eb1964d644b9\") " pod="openstack/ceilometer-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.682456 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11e264a8-32df-4980-a6b8-eb1964d644b9-config-data\") pod \"ceilometer-0\" (UID: \"11e264a8-32df-4980-a6b8-eb1964d644b9\") " pod="openstack/ceilometer-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.682476 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/11e264a8-32df-4980-a6b8-eb1964d644b9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"11e264a8-32df-4980-a6b8-eb1964d644b9\") " pod="openstack/ceilometer-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.682539 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11e264a8-32df-4980-a6b8-eb1964d644b9-scripts\") pod \"ceilometer-0\" (UID: \"11e264a8-32df-4980-a6b8-eb1964d644b9\") " pod="openstack/ceilometer-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.682619 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11e264a8-32df-4980-a6b8-eb1964d644b9-run-httpd\") pod \"ceilometer-0\" (UID: \"11e264a8-32df-4980-a6b8-eb1964d644b9\") " pod="openstack/ceilometer-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.682639 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/11e264a8-32df-4980-a6b8-eb1964d644b9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"11e264a8-32df-4980-a6b8-eb1964d644b9\") " 
pod="openstack/ceilometer-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.685389 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11e264a8-32df-4980-a6b8-eb1964d644b9-log-httpd\") pod \"ceilometer-0\" (UID: \"11e264a8-32df-4980-a6b8-eb1964d644b9\") " pod="openstack/ceilometer-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.685952 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11e264a8-32df-4980-a6b8-eb1964d644b9-run-httpd\") pod \"ceilometer-0\" (UID: \"11e264a8-32df-4980-a6b8-eb1964d644b9\") " pod="openstack/ceilometer-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.687912 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.689060 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11e264a8-32df-4980-a6b8-eb1964d644b9-scripts\") pod \"ceilometer-0\" (UID: \"11e264a8-32df-4980-a6b8-eb1964d644b9\") " pod="openstack/ceilometer-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.690101 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11e264a8-32df-4980-a6b8-eb1964d644b9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"11e264a8-32df-4980-a6b8-eb1964d644b9\") " pod="openstack/ceilometer-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.690435 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/11e264a8-32df-4980-a6b8-eb1964d644b9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"11e264a8-32df-4980-a6b8-eb1964d644b9\") " pod="openstack/ceilometer-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.694393 4861 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11e264a8-32df-4980-a6b8-eb1964d644b9-config-data\") pod \"ceilometer-0\" (UID: \"11e264a8-32df-4980-a6b8-eb1964d644b9\") " pod="openstack/ceilometer-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.718071 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.720446 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.721127 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt2vf\" (UniqueName: \"kubernetes.io/projected/11e264a8-32df-4980-a6b8-eb1964d644b9-kube-api-access-nt2vf\") pod \"ceilometer-0\" (UID: \"11e264a8-32df-4980-a6b8-eb1964d644b9\") " pod="openstack/ceilometer-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.722048 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.722671 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.723858 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/11e264a8-32df-4980-a6b8-eb1964d644b9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"11e264a8-32df-4980-a6b8-eb1964d644b9\") " pod="openstack/ceilometer-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.723867 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.725651 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.781500 4861 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.887074 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54407f31-965d-4510-a875-e96ef076ec7a-logs\") pod \"nova-api-0\" (UID: \"54407f31-965d-4510-a875-e96ef076ec7a\") " pod="openstack/nova-api-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.887443 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54407f31-965d-4510-a875-e96ef076ec7a-config-data\") pod \"nova-api-0\" (UID: \"54407f31-965d-4510-a875-e96ef076ec7a\") " pod="openstack/nova-api-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.887523 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54407f31-965d-4510-a875-e96ef076ec7a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"54407f31-965d-4510-a875-e96ef076ec7a\") " pod="openstack/nova-api-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.887549 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwnbp\" (UniqueName: \"kubernetes.io/projected/54407f31-965d-4510-a875-e96ef076ec7a-kube-api-access-wwnbp\") pod \"nova-api-0\" (UID: \"54407f31-965d-4510-a875-e96ef076ec7a\") " pod="openstack/nova-api-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.887591 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54407f31-965d-4510-a875-e96ef076ec7a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"54407f31-965d-4510-a875-e96ef076ec7a\") " pod="openstack/nova-api-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 
13:33:00.887630 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54407f31-965d-4510-a875-e96ef076ec7a-public-tls-certs\") pod \"nova-api-0\" (UID: \"54407f31-965d-4510-a875-e96ef076ec7a\") " pod="openstack/nova-api-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.989595 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54407f31-965d-4510-a875-e96ef076ec7a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"54407f31-965d-4510-a875-e96ef076ec7a\") " pod="openstack/nova-api-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.989645 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwnbp\" (UniqueName: \"kubernetes.io/projected/54407f31-965d-4510-a875-e96ef076ec7a-kube-api-access-wwnbp\") pod \"nova-api-0\" (UID: \"54407f31-965d-4510-a875-e96ef076ec7a\") " pod="openstack/nova-api-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.989696 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54407f31-965d-4510-a875-e96ef076ec7a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"54407f31-965d-4510-a875-e96ef076ec7a\") " pod="openstack/nova-api-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.989733 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54407f31-965d-4510-a875-e96ef076ec7a-public-tls-certs\") pod \"nova-api-0\" (UID: \"54407f31-965d-4510-a875-e96ef076ec7a\") " pod="openstack/nova-api-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.989853 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54407f31-965d-4510-a875-e96ef076ec7a-logs\") 
pod \"nova-api-0\" (UID: \"54407f31-965d-4510-a875-e96ef076ec7a\") " pod="openstack/nova-api-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.989889 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54407f31-965d-4510-a875-e96ef076ec7a-config-data\") pod \"nova-api-0\" (UID: \"54407f31-965d-4510-a875-e96ef076ec7a\") " pod="openstack/nova-api-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.991277 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54407f31-965d-4510-a875-e96ef076ec7a-logs\") pod \"nova-api-0\" (UID: \"54407f31-965d-4510-a875-e96ef076ec7a\") " pod="openstack/nova-api-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.994634 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54407f31-965d-4510-a875-e96ef076ec7a-public-tls-certs\") pod \"nova-api-0\" (UID: \"54407f31-965d-4510-a875-e96ef076ec7a\") " pod="openstack/nova-api-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.994825 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54407f31-965d-4510-a875-e96ef076ec7a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"54407f31-965d-4510-a875-e96ef076ec7a\") " pod="openstack/nova-api-0" Feb 19 13:33:00 crc kubenswrapper[4861]: I0219 13:33:00.997560 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54407f31-965d-4510-a875-e96ef076ec7a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"54407f31-965d-4510-a875-e96ef076ec7a\") " pod="openstack/nova-api-0" Feb 19 13:33:01 crc kubenswrapper[4861]: I0219 13:33:01.008190 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwnbp\" (UniqueName: 
\"kubernetes.io/projected/54407f31-965d-4510-a875-e96ef076ec7a-kube-api-access-wwnbp\") pod \"nova-api-0\" (UID: \"54407f31-965d-4510-a875-e96ef076ec7a\") " pod="openstack/nova-api-0" Feb 19 13:33:01 crc kubenswrapper[4861]: I0219 13:33:01.008628 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54407f31-965d-4510-a875-e96ef076ec7a-config-data\") pod \"nova-api-0\" (UID: \"54407f31-965d-4510-a875-e96ef076ec7a\") " pod="openstack/nova-api-0" Feb 19 13:33:01 crc kubenswrapper[4861]: I0219 13:33:01.092471 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 13:33:01 crc kubenswrapper[4861]: I0219 13:33:01.293073 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:33:01 crc kubenswrapper[4861]: I0219 13:33:01.352267 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n6g4t" event={"ID":"aea8b77b-5275-4d64-b75c-f3ea30035661","Type":"ContainerStarted","Data":"bfd3107cedf758705fca580b0830425a888fbab078f8a083ec9eaf1956ed11dd"} Feb 19 13:33:01 crc kubenswrapper[4861]: I0219 13:33:01.358868 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11e264a8-32df-4980-a6b8-eb1964d644b9","Type":"ContainerStarted","Data":"ba74838fa8dd8f959bd0f528028aa4f632e59f8de7d472dc75e581aa76bfb1ab"} Feb 19 13:33:01 crc kubenswrapper[4861]: I0219 13:33:01.383384 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n6g4t" podStartSLOduration=2.594391928 podStartE2EDuration="6.383361715s" podCreationTimestamp="2026-02-19 13:32:55 +0000 UTC" firstStartedPulling="2026-02-19 13:32:57.255239344 +0000 UTC m=+1391.916342572" lastFinishedPulling="2026-02-19 13:33:01.044209131 +0000 UTC m=+1395.705312359" observedRunningTime="2026-02-19 13:33:01.372530583 +0000 UTC 
m=+1396.033633841" watchObservedRunningTime="2026-02-19 13:33:01.383361715 +0000 UTC m=+1396.044464953" Feb 19 13:33:01 crc kubenswrapper[4861]: I0219 13:33:01.578532 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:33:01 crc kubenswrapper[4861]: I0219 13:33:01.987243 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da848cb3-a17d-465d-801f-b1bd1a0aabb5" path="/var/lib/kubelet/pods/da848cb3-a17d-465d-801f-b1bd1a0aabb5/volumes" Feb 19 13:33:01 crc kubenswrapper[4861]: I0219 13:33:01.988819 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2e04a54-7f65-46b4-9e5c-97e6d553064b" path="/var/lib/kubelet/pods/e2e04a54-7f65-46b4-9e5c-97e6d553064b/volumes" Feb 19 13:33:02 crc kubenswrapper[4861]: I0219 13:33:02.370731 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11e264a8-32df-4980-a6b8-eb1964d644b9","Type":"ContainerStarted","Data":"2a3f1ab0fa078c08628336e0493b5bea70b63c6f50cb31d94e0bdcc4c18bb468"} Feb 19 13:33:02 crc kubenswrapper[4861]: I0219 13:33:02.373477 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54407f31-965d-4510-a875-e96ef076ec7a","Type":"ContainerStarted","Data":"b9dd73985abcf6b0994a059c4de169b8800caaf2ddac06d00b271cb0b65a8bf8"} Feb 19 13:33:02 crc kubenswrapper[4861]: I0219 13:33:02.373505 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54407f31-965d-4510-a875-e96ef076ec7a","Type":"ContainerStarted","Data":"b65cc219340048126cd184592a0e3df0cd31c91ff34381d80364c19b17b60e0e"} Feb 19 13:33:02 crc kubenswrapper[4861]: I0219 13:33:02.373519 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54407f31-965d-4510-a875-e96ef076ec7a","Type":"ContainerStarted","Data":"430350e3ade87bbdb2e10169231c187270e223d9a4db4a0395c85554ba6bd4a7"} Feb 19 13:33:02 crc kubenswrapper[4861]: I0219 
13:33:02.401623 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.401604227 podStartE2EDuration="2.401604227s" podCreationTimestamp="2026-02-19 13:33:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:33:02.398887653 +0000 UTC m=+1397.059990891" watchObservedRunningTime="2026-02-19 13:33:02.401604227 +0000 UTC m=+1397.062707455" Feb 19 13:33:02 crc kubenswrapper[4861]: I0219 13:33:02.749295 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:33:03 crc kubenswrapper[4861]: I0219 13:33:03.023287 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 13:33:03 crc kubenswrapper[4861]: I0219 13:33:03.023403 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 13:33:03 crc kubenswrapper[4861]: I0219 13:33:03.389160 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11e264a8-32df-4980-a6b8-eb1964d644b9","Type":"ContainerStarted","Data":"9f6c4de0d600a26e221ad9f08bb51644c618ab4dae788de65e8a67c29ebd791c"} Feb 19 13:33:03 crc kubenswrapper[4861]: I0219 13:33:03.738669 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58f6456c9f-znr64" Feb 19 13:33:03 crc kubenswrapper[4861]: I0219 13:33:03.811885 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-849fff7679-bmrrh"] Feb 19 13:33:03 crc kubenswrapper[4861]: I0219 13:33:03.812135 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-849fff7679-bmrrh" podUID="d937689a-e796-41e2-baeb-b5e29f737093" containerName="dnsmasq-dns" containerID="cri-o://f9b9d3b8ad83221536664dd8553d659f7808141c620c64adeb9bdfac11d98f10" 
gracePeriod=10 Feb 19 13:33:04 crc kubenswrapper[4861]: I0219 13:33:04.402641 4861 generic.go:334] "Generic (PLEG): container finished" podID="d937689a-e796-41e2-baeb-b5e29f737093" containerID="f9b9d3b8ad83221536664dd8553d659f7808141c620c64adeb9bdfac11d98f10" exitCode=0 Feb 19 13:33:04 crc kubenswrapper[4861]: I0219 13:33:04.402722 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-849fff7679-bmrrh" event={"ID":"d937689a-e796-41e2-baeb-b5e29f737093","Type":"ContainerDied","Data":"f9b9d3b8ad83221536664dd8553d659f7808141c620c64adeb9bdfac11d98f10"} Feb 19 13:33:04 crc kubenswrapper[4861]: I0219 13:33:04.411383 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11e264a8-32df-4980-a6b8-eb1964d644b9","Type":"ContainerStarted","Data":"96e096da5cc28914cdb4a90e4edba37087790b9cc227d970c910751b7cd84e84"} Feb 19 13:33:04 crc kubenswrapper[4861]: I0219 13:33:04.827274 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-849fff7679-bmrrh" Feb 19 13:33:04 crc kubenswrapper[4861]: I0219 13:33:04.997710 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d937689a-e796-41e2-baeb-b5e29f737093-dns-svc\") pod \"d937689a-e796-41e2-baeb-b5e29f737093\" (UID: \"d937689a-e796-41e2-baeb-b5e29f737093\") " Feb 19 13:33:04 crc kubenswrapper[4861]: I0219 13:33:04.997842 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d937689a-e796-41e2-baeb-b5e29f737093-ovsdbserver-sb\") pod \"d937689a-e796-41e2-baeb-b5e29f737093\" (UID: \"d937689a-e796-41e2-baeb-b5e29f737093\") " Feb 19 13:33:04 crc kubenswrapper[4861]: I0219 13:33:04.997872 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d937689a-e796-41e2-baeb-b5e29f737093-config\") pod \"d937689a-e796-41e2-baeb-b5e29f737093\" (UID: \"d937689a-e796-41e2-baeb-b5e29f737093\") " Feb 19 13:33:04 crc kubenswrapper[4861]: I0219 13:33:04.997895 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d937689a-e796-41e2-baeb-b5e29f737093-dns-swift-storage-0\") pod \"d937689a-e796-41e2-baeb-b5e29f737093\" (UID: \"d937689a-e796-41e2-baeb-b5e29f737093\") " Feb 19 13:33:04 crc kubenswrapper[4861]: I0219 13:33:04.997918 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d937689a-e796-41e2-baeb-b5e29f737093-ovsdbserver-nb\") pod \"d937689a-e796-41e2-baeb-b5e29f737093\" (UID: \"d937689a-e796-41e2-baeb-b5e29f737093\") " Feb 19 13:33:04 crc kubenswrapper[4861]: I0219 13:33:04.997995 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pnnw\" 
(UniqueName: \"kubernetes.io/projected/d937689a-e796-41e2-baeb-b5e29f737093-kube-api-access-9pnnw\") pod \"d937689a-e796-41e2-baeb-b5e29f737093\" (UID: \"d937689a-e796-41e2-baeb-b5e29f737093\") " Feb 19 13:33:05 crc kubenswrapper[4861]: I0219 13:33:05.003881 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d937689a-e796-41e2-baeb-b5e29f737093-kube-api-access-9pnnw" (OuterVolumeSpecName: "kube-api-access-9pnnw") pod "d937689a-e796-41e2-baeb-b5e29f737093" (UID: "d937689a-e796-41e2-baeb-b5e29f737093"). InnerVolumeSpecName "kube-api-access-9pnnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:33:05 crc kubenswrapper[4861]: I0219 13:33:05.070942 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d937689a-e796-41e2-baeb-b5e29f737093-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d937689a-e796-41e2-baeb-b5e29f737093" (UID: "d937689a-e796-41e2-baeb-b5e29f737093"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:33:05 crc kubenswrapper[4861]: I0219 13:33:05.073823 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d937689a-e796-41e2-baeb-b5e29f737093-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d937689a-e796-41e2-baeb-b5e29f737093" (UID: "d937689a-e796-41e2-baeb-b5e29f737093"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:33:05 crc kubenswrapper[4861]: I0219 13:33:05.076954 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d937689a-e796-41e2-baeb-b5e29f737093-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d937689a-e796-41e2-baeb-b5e29f737093" (UID: "d937689a-e796-41e2-baeb-b5e29f737093"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:33:05 crc kubenswrapper[4861]: I0219 13:33:05.095002 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d937689a-e796-41e2-baeb-b5e29f737093-config" (OuterVolumeSpecName: "config") pod "d937689a-e796-41e2-baeb-b5e29f737093" (UID: "d937689a-e796-41e2-baeb-b5e29f737093"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:33:05 crc kubenswrapper[4861]: I0219 13:33:05.100928 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d937689a-e796-41e2-baeb-b5e29f737093-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 13:33:05 crc kubenswrapper[4861]: I0219 13:33:05.100953 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d937689a-e796-41e2-baeb-b5e29f737093-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 13:33:05 crc kubenswrapper[4861]: I0219 13:33:05.100969 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d937689a-e796-41e2-baeb-b5e29f737093-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:33:05 crc kubenswrapper[4861]: I0219 13:33:05.100981 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d937689a-e796-41e2-baeb-b5e29f737093-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 13:33:05 crc kubenswrapper[4861]: I0219 13:33:05.100992 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pnnw\" (UniqueName: \"kubernetes.io/projected/d937689a-e796-41e2-baeb-b5e29f737093-kube-api-access-9pnnw\") on node \"crc\" DevicePath \"\"" Feb 19 13:33:05 crc kubenswrapper[4861]: I0219 13:33:05.103926 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/d937689a-e796-41e2-baeb-b5e29f737093-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d937689a-e796-41e2-baeb-b5e29f737093" (UID: "d937689a-e796-41e2-baeb-b5e29f737093"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:33:05 crc kubenswrapper[4861]: I0219 13:33:05.203158 4861 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d937689a-e796-41e2-baeb-b5e29f737093-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 13:33:05 crc kubenswrapper[4861]: I0219 13:33:05.421190 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11e264a8-32df-4980-a6b8-eb1964d644b9","Type":"ContainerStarted","Data":"3cd0ac3239cff1282edbee81f3c0d497e15f737c6290e568ced9914379dbaec6"} Feb 19 13:33:05 crc kubenswrapper[4861]: I0219 13:33:05.422438 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 13:33:05 crc kubenswrapper[4861]: I0219 13:33:05.424457 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-849fff7679-bmrrh" event={"ID":"d937689a-e796-41e2-baeb-b5e29f737093","Type":"ContainerDied","Data":"46b943208540f8bf1e1e5963eb54c7bc11d3e7dc20403cdbfdcee17edb28f6ea"} Feb 19 13:33:05 crc kubenswrapper[4861]: I0219 13:33:05.424488 4861 scope.go:117] "RemoveContainer" containerID="f9b9d3b8ad83221536664dd8553d659f7808141c620c64adeb9bdfac11d98f10" Feb 19 13:33:05 crc kubenswrapper[4861]: I0219 13:33:05.424570 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-849fff7679-bmrrh" Feb 19 13:33:05 crc kubenswrapper[4861]: I0219 13:33:05.450865 4861 scope.go:117] "RemoveContainer" containerID="8cde963501d7b1836dc52d2fac09c21cf31f9d101ad1a755e8110d70cde7bf8f" Feb 19 13:33:05 crc kubenswrapper[4861]: I0219 13:33:05.460765 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.032275189 podStartE2EDuration="5.460740209s" podCreationTimestamp="2026-02-19 13:33:00 +0000 UTC" firstStartedPulling="2026-02-19 13:33:01.304062749 +0000 UTC m=+1395.965165977" lastFinishedPulling="2026-02-19 13:33:04.732527779 +0000 UTC m=+1399.393630997" observedRunningTime="2026-02-19 13:33:05.45112175 +0000 UTC m=+1400.112224988" watchObservedRunningTime="2026-02-19 13:33:05.460740209 +0000 UTC m=+1400.121843427" Feb 19 13:33:05 crc kubenswrapper[4861]: I0219 13:33:05.472982 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-849fff7679-bmrrh"] Feb 19 13:33:05 crc kubenswrapper[4861]: I0219 13:33:05.492704 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-849fff7679-bmrrh"] Feb 19 13:33:05 crc kubenswrapper[4861]: I0219 13:33:05.989979 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d937689a-e796-41e2-baeb-b5e29f737093" path="/var/lib/kubelet/pods/d937689a-e796-41e2-baeb-b5e29f737093/volumes" Feb 19 13:33:06 crc kubenswrapper[4861]: I0219 13:33:06.241264 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n6g4t" Feb 19 13:33:06 crc kubenswrapper[4861]: I0219 13:33:06.241315 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n6g4t" Feb 19 13:33:07 crc kubenswrapper[4861]: I0219 13:33:07.292502 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n6g4t" 
podUID="aea8b77b-5275-4d64-b75c-f3ea30035661" containerName="registry-server" probeResult="failure" output=< Feb 19 13:33:07 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Feb 19 13:33:07 crc kubenswrapper[4861]: > Feb 19 13:33:07 crc kubenswrapper[4861]: I0219 13:33:07.749239 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:33:07 crc kubenswrapper[4861]: I0219 13:33:07.780852 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:33:08 crc kubenswrapper[4861]: I0219 13:33:08.024163 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 13:33:08 crc kubenswrapper[4861]: I0219 13:33:08.024241 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 13:33:08 crc kubenswrapper[4861]: I0219 13:33:08.479762 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:33:08 crc kubenswrapper[4861]: I0219 13:33:08.830270 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-sqhgs"] Feb 19 13:33:08 crc kubenswrapper[4861]: E0219 13:33:08.830915 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d937689a-e796-41e2-baeb-b5e29f737093" containerName="init" Feb 19 13:33:08 crc kubenswrapper[4861]: I0219 13:33:08.830932 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d937689a-e796-41e2-baeb-b5e29f737093" containerName="init" Feb 19 13:33:08 crc kubenswrapper[4861]: E0219 13:33:08.830944 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d937689a-e796-41e2-baeb-b5e29f737093" containerName="dnsmasq-dns" Feb 19 13:33:08 crc kubenswrapper[4861]: I0219 13:33:08.830951 4861 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d937689a-e796-41e2-baeb-b5e29f737093" containerName="dnsmasq-dns" Feb 19 13:33:08 crc kubenswrapper[4861]: I0219 13:33:08.831124 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="d937689a-e796-41e2-baeb-b5e29f737093" containerName="dnsmasq-dns" Feb 19 13:33:08 crc kubenswrapper[4861]: I0219 13:33:08.832759 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sqhgs" Feb 19 13:33:08 crc kubenswrapper[4861]: I0219 13:33:08.839206 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 19 13:33:08 crc kubenswrapper[4861]: I0219 13:33:08.839460 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 19 13:33:08 crc kubenswrapper[4861]: I0219 13:33:08.844652 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-sqhgs"] Feb 19 13:33:08 crc kubenswrapper[4861]: I0219 13:33:08.977071 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67656319-0c92-41c6-a2f5-97ec64fc2ec6-scripts\") pod \"nova-cell1-cell-mapping-sqhgs\" (UID: \"67656319-0c92-41c6-a2f5-97ec64fc2ec6\") " pod="openstack/nova-cell1-cell-mapping-sqhgs" Feb 19 13:33:08 crc kubenswrapper[4861]: I0219 13:33:08.978206 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67656319-0c92-41c6-a2f5-97ec64fc2ec6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sqhgs\" (UID: \"67656319-0c92-41c6-a2f5-97ec64fc2ec6\") " pod="openstack/nova-cell1-cell-mapping-sqhgs" Feb 19 13:33:08 crc kubenswrapper[4861]: I0219 13:33:08.978335 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4sr2\" (UniqueName: 
\"kubernetes.io/projected/67656319-0c92-41c6-a2f5-97ec64fc2ec6-kube-api-access-h4sr2\") pod \"nova-cell1-cell-mapping-sqhgs\" (UID: \"67656319-0c92-41c6-a2f5-97ec64fc2ec6\") " pod="openstack/nova-cell1-cell-mapping-sqhgs" Feb 19 13:33:08 crc kubenswrapper[4861]: I0219 13:33:08.978366 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67656319-0c92-41c6-a2f5-97ec64fc2ec6-config-data\") pod \"nova-cell1-cell-mapping-sqhgs\" (UID: \"67656319-0c92-41c6-a2f5-97ec64fc2ec6\") " pod="openstack/nova-cell1-cell-mapping-sqhgs" Feb 19 13:33:09 crc kubenswrapper[4861]: I0219 13:33:09.034699 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="54334d55-b01e-4310-a442-08b922e35f7c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 13:33:09 crc kubenswrapper[4861]: I0219 13:33:09.034717 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="54334d55-b01e-4310-a442-08b922e35f7c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 13:33:09 crc kubenswrapper[4861]: I0219 13:33:09.080036 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67656319-0c92-41c6-a2f5-97ec64fc2ec6-scripts\") pod \"nova-cell1-cell-mapping-sqhgs\" (UID: \"67656319-0c92-41c6-a2f5-97ec64fc2ec6\") " pod="openstack/nova-cell1-cell-mapping-sqhgs" Feb 19 13:33:09 crc kubenswrapper[4861]: I0219 13:33:09.080157 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/67656319-0c92-41c6-a2f5-97ec64fc2ec6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sqhgs\" (UID: \"67656319-0c92-41c6-a2f5-97ec64fc2ec6\") " pod="openstack/nova-cell1-cell-mapping-sqhgs" Feb 19 13:33:09 crc kubenswrapper[4861]: I0219 13:33:09.080205 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4sr2\" (UniqueName: \"kubernetes.io/projected/67656319-0c92-41c6-a2f5-97ec64fc2ec6-kube-api-access-h4sr2\") pod \"nova-cell1-cell-mapping-sqhgs\" (UID: \"67656319-0c92-41c6-a2f5-97ec64fc2ec6\") " pod="openstack/nova-cell1-cell-mapping-sqhgs" Feb 19 13:33:09 crc kubenswrapper[4861]: I0219 13:33:09.080225 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67656319-0c92-41c6-a2f5-97ec64fc2ec6-config-data\") pod \"nova-cell1-cell-mapping-sqhgs\" (UID: \"67656319-0c92-41c6-a2f5-97ec64fc2ec6\") " pod="openstack/nova-cell1-cell-mapping-sqhgs" Feb 19 13:33:09 crc kubenswrapper[4861]: I0219 13:33:09.085744 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67656319-0c92-41c6-a2f5-97ec64fc2ec6-scripts\") pod \"nova-cell1-cell-mapping-sqhgs\" (UID: \"67656319-0c92-41c6-a2f5-97ec64fc2ec6\") " pod="openstack/nova-cell1-cell-mapping-sqhgs" Feb 19 13:33:09 crc kubenswrapper[4861]: I0219 13:33:09.087220 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67656319-0c92-41c6-a2f5-97ec64fc2ec6-config-data\") pod \"nova-cell1-cell-mapping-sqhgs\" (UID: \"67656319-0c92-41c6-a2f5-97ec64fc2ec6\") " pod="openstack/nova-cell1-cell-mapping-sqhgs" Feb 19 13:33:09 crc kubenswrapper[4861]: I0219 13:33:09.090149 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67656319-0c92-41c6-a2f5-97ec64fc2ec6-combined-ca-bundle\") 
pod \"nova-cell1-cell-mapping-sqhgs\" (UID: \"67656319-0c92-41c6-a2f5-97ec64fc2ec6\") " pod="openstack/nova-cell1-cell-mapping-sqhgs" Feb 19 13:33:09 crc kubenswrapper[4861]: I0219 13:33:09.099512 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4sr2\" (UniqueName: \"kubernetes.io/projected/67656319-0c92-41c6-a2f5-97ec64fc2ec6-kube-api-access-h4sr2\") pod \"nova-cell1-cell-mapping-sqhgs\" (UID: \"67656319-0c92-41c6-a2f5-97ec64fc2ec6\") " pod="openstack/nova-cell1-cell-mapping-sqhgs" Feb 19 13:33:09 crc kubenswrapper[4861]: I0219 13:33:09.155200 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sqhgs" Feb 19 13:33:09 crc kubenswrapper[4861]: W0219 13:33:09.656844 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67656319_0c92_41c6_a2f5_97ec64fc2ec6.slice/crio-5275a16e6f6c724693ce8220477eacbd055a37d86372bacfb33da32ef6cefad7 WatchSource:0}: Error finding container 5275a16e6f6c724693ce8220477eacbd055a37d86372bacfb33da32ef6cefad7: Status 404 returned error can't find the container with id 5275a16e6f6c724693ce8220477eacbd055a37d86372bacfb33da32ef6cefad7 Feb 19 13:33:09 crc kubenswrapper[4861]: I0219 13:33:09.659822 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-sqhgs"] Feb 19 13:33:10 crc kubenswrapper[4861]: I0219 13:33:10.490020 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sqhgs" event={"ID":"67656319-0c92-41c6-a2f5-97ec64fc2ec6","Type":"ContainerStarted","Data":"54e7903255472247bd9622ba8ff4c16f8318f6b3c691e3d683ae6aab3638869a"} Feb 19 13:33:10 crc kubenswrapper[4861]: I0219 13:33:10.490498 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sqhgs" 
event={"ID":"67656319-0c92-41c6-a2f5-97ec64fc2ec6","Type":"ContainerStarted","Data":"5275a16e6f6c724693ce8220477eacbd055a37d86372bacfb33da32ef6cefad7"} Feb 19 13:33:11 crc kubenswrapper[4861]: I0219 13:33:11.093869 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 13:33:11 crc kubenswrapper[4861]: I0219 13:33:11.094230 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 13:33:12 crc kubenswrapper[4861]: I0219 13:33:12.113592 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="54407f31-965d-4510-a875-e96ef076ec7a" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 13:33:12 crc kubenswrapper[4861]: I0219 13:33:12.113654 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="54407f31-965d-4510-a875-e96ef076ec7a" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 13:33:14 crc kubenswrapper[4861]: I0219 13:33:14.542929 4861 generic.go:334] "Generic (PLEG): container finished" podID="67656319-0c92-41c6-a2f5-97ec64fc2ec6" containerID="54e7903255472247bd9622ba8ff4c16f8318f6b3c691e3d683ae6aab3638869a" exitCode=0 Feb 19 13:33:14 crc kubenswrapper[4861]: I0219 13:33:14.543054 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sqhgs" event={"ID":"67656319-0c92-41c6-a2f5-97ec64fc2ec6","Type":"ContainerDied","Data":"54e7903255472247bd9622ba8ff4c16f8318f6b3c691e3d683ae6aab3638869a"} Feb 19 13:33:15 crc kubenswrapper[4861]: I0219 13:33:15.993588 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sqhgs" Feb 19 13:33:16 crc kubenswrapper[4861]: I0219 13:33:16.168188 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4sr2\" (UniqueName: \"kubernetes.io/projected/67656319-0c92-41c6-a2f5-97ec64fc2ec6-kube-api-access-h4sr2\") pod \"67656319-0c92-41c6-a2f5-97ec64fc2ec6\" (UID: \"67656319-0c92-41c6-a2f5-97ec64fc2ec6\") " Feb 19 13:33:16 crc kubenswrapper[4861]: I0219 13:33:16.168306 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67656319-0c92-41c6-a2f5-97ec64fc2ec6-config-data\") pod \"67656319-0c92-41c6-a2f5-97ec64fc2ec6\" (UID: \"67656319-0c92-41c6-a2f5-97ec64fc2ec6\") " Feb 19 13:33:16 crc kubenswrapper[4861]: I0219 13:33:16.168344 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67656319-0c92-41c6-a2f5-97ec64fc2ec6-scripts\") pod \"67656319-0c92-41c6-a2f5-97ec64fc2ec6\" (UID: \"67656319-0c92-41c6-a2f5-97ec64fc2ec6\") " Feb 19 13:33:16 crc kubenswrapper[4861]: I0219 13:33:16.168375 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67656319-0c92-41c6-a2f5-97ec64fc2ec6-combined-ca-bundle\") pod \"67656319-0c92-41c6-a2f5-97ec64fc2ec6\" (UID: \"67656319-0c92-41c6-a2f5-97ec64fc2ec6\") " Feb 19 13:33:16 crc kubenswrapper[4861]: I0219 13:33:16.178049 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67656319-0c92-41c6-a2f5-97ec64fc2ec6-kube-api-access-h4sr2" (OuterVolumeSpecName: "kube-api-access-h4sr2") pod "67656319-0c92-41c6-a2f5-97ec64fc2ec6" (UID: "67656319-0c92-41c6-a2f5-97ec64fc2ec6"). InnerVolumeSpecName "kube-api-access-h4sr2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:33:16 crc kubenswrapper[4861]: I0219 13:33:16.178462 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67656319-0c92-41c6-a2f5-97ec64fc2ec6-scripts" (OuterVolumeSpecName: "scripts") pod "67656319-0c92-41c6-a2f5-97ec64fc2ec6" (UID: "67656319-0c92-41c6-a2f5-97ec64fc2ec6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:33:16 crc kubenswrapper[4861]: I0219 13:33:16.200492 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67656319-0c92-41c6-a2f5-97ec64fc2ec6-config-data" (OuterVolumeSpecName: "config-data") pod "67656319-0c92-41c6-a2f5-97ec64fc2ec6" (UID: "67656319-0c92-41c6-a2f5-97ec64fc2ec6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:33:16 crc kubenswrapper[4861]: I0219 13:33:16.221081 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67656319-0c92-41c6-a2f5-97ec64fc2ec6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67656319-0c92-41c6-a2f5-97ec64fc2ec6" (UID: "67656319-0c92-41c6-a2f5-97ec64fc2ec6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:33:16 crc kubenswrapper[4861]: I0219 13:33:16.273176 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4sr2\" (UniqueName: \"kubernetes.io/projected/67656319-0c92-41c6-a2f5-97ec64fc2ec6-kube-api-access-h4sr2\") on node \"crc\" DevicePath \"\"" Feb 19 13:33:16 crc kubenswrapper[4861]: I0219 13:33:16.273233 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67656319-0c92-41c6-a2f5-97ec64fc2ec6-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:33:16 crc kubenswrapper[4861]: I0219 13:33:16.273251 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67656319-0c92-41c6-a2f5-97ec64fc2ec6-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:33:16 crc kubenswrapper[4861]: I0219 13:33:16.273268 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67656319-0c92-41c6-a2f5-97ec64fc2ec6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:33:16 crc kubenswrapper[4861]: I0219 13:33:16.306650 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n6g4t" Feb 19 13:33:16 crc kubenswrapper[4861]: I0219 13:33:16.363742 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n6g4t" Feb 19 13:33:16 crc kubenswrapper[4861]: I0219 13:33:16.546774 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n6g4t"] Feb 19 13:33:16 crc kubenswrapper[4861]: I0219 13:33:16.568205 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sqhgs" event={"ID":"67656319-0c92-41c6-a2f5-97ec64fc2ec6","Type":"ContainerDied","Data":"5275a16e6f6c724693ce8220477eacbd055a37d86372bacfb33da32ef6cefad7"} Feb 19 13:33:16 crc 
kubenswrapper[4861]: I0219 13:33:16.568256 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5275a16e6f6c724693ce8220477eacbd055a37d86372bacfb33da32ef6cefad7" Feb 19 13:33:16 crc kubenswrapper[4861]: I0219 13:33:16.568218 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sqhgs" Feb 19 13:33:16 crc kubenswrapper[4861]: I0219 13:33:16.785899 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:33:16 crc kubenswrapper[4861]: I0219 13:33:16.786982 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="54407f31-965d-4510-a875-e96ef076ec7a" containerName="nova-api-log" containerID="cri-o://b65cc219340048126cd184592a0e3df0cd31c91ff34381d80364c19b17b60e0e" gracePeriod=30 Feb 19 13:33:16 crc kubenswrapper[4861]: I0219 13:33:16.787085 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="54407f31-965d-4510-a875-e96ef076ec7a" containerName="nova-api-api" containerID="cri-o://b9dd73985abcf6b0994a059c4de169b8800caaf2ddac06d00b271cb0b65a8bf8" gracePeriod=30 Feb 19 13:33:16 crc kubenswrapper[4861]: I0219 13:33:16.864772 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 13:33:16 crc kubenswrapper[4861]: I0219 13:33:16.865667 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="54334d55-b01e-4310-a442-08b922e35f7c" containerName="nova-metadata-log" containerID="cri-o://59233cce678088a96bb8b27b79b2df708e9988a8ca341c52ef79179bb1a31be0" gracePeriod=30 Feb 19 13:33:16 crc kubenswrapper[4861]: I0219 13:33:16.865943 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="54334d55-b01e-4310-a442-08b922e35f7c" containerName="nova-metadata-metadata" 
containerID="cri-o://d4b16db9327b4962b513bb2f6fa13c8694a4187cd19ee9f8fb0f40fb3444206a" gracePeriod=30 Feb 19 13:33:16 crc kubenswrapper[4861]: I0219 13:33:16.893606 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 13:33:16 crc kubenswrapper[4861]: I0219 13:33:16.893840 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="4e510220-b836-49e8-a959-f3597f7bca70" containerName="nova-scheduler-scheduler" containerID="cri-o://f707f0eec01b8d1a02a2586cd54e08570ecec1e2786c06fc34861c19c9582682" gracePeriod=30 Feb 19 13:33:17 crc kubenswrapper[4861]: I0219 13:33:17.588993 4861 generic.go:334] "Generic (PLEG): container finished" podID="54334d55-b01e-4310-a442-08b922e35f7c" containerID="59233cce678088a96bb8b27b79b2df708e9988a8ca341c52ef79179bb1a31be0" exitCode=143 Feb 19 13:33:17 crc kubenswrapper[4861]: I0219 13:33:17.589112 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"54334d55-b01e-4310-a442-08b922e35f7c","Type":"ContainerDied","Data":"59233cce678088a96bb8b27b79b2df708e9988a8ca341c52ef79179bb1a31be0"} Feb 19 13:33:17 crc kubenswrapper[4861]: I0219 13:33:17.595311 4861 generic.go:334] "Generic (PLEG): container finished" podID="54407f31-965d-4510-a875-e96ef076ec7a" containerID="b65cc219340048126cd184592a0e3df0cd31c91ff34381d80364c19b17b60e0e" exitCode=143 Feb 19 13:33:17 crc kubenswrapper[4861]: I0219 13:33:17.595600 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54407f31-965d-4510-a875-e96ef076ec7a","Type":"ContainerDied","Data":"b65cc219340048126cd184592a0e3df0cd31c91ff34381d80364c19b17b60e0e"} Feb 19 13:33:17 crc kubenswrapper[4861]: I0219 13:33:17.595676 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n6g4t" podUID="aea8b77b-5275-4d64-b75c-f3ea30035661" containerName="registry-server" 
containerID="cri-o://bfd3107cedf758705fca580b0830425a888fbab078f8a083ec9eaf1956ed11dd" gracePeriod=2 Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.286635 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n6g4t" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.294523 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.414705 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f278f\" (UniqueName: \"kubernetes.io/projected/aea8b77b-5275-4d64-b75c-f3ea30035661-kube-api-access-f278f\") pod \"aea8b77b-5275-4d64-b75c-f3ea30035661\" (UID: \"aea8b77b-5275-4d64-b75c-f3ea30035661\") " Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.414754 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aea8b77b-5275-4d64-b75c-f3ea30035661-utilities\") pod \"aea8b77b-5275-4d64-b75c-f3ea30035661\" (UID: \"aea8b77b-5275-4d64-b75c-f3ea30035661\") " Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.414828 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e510220-b836-49e8-a959-f3597f7bca70-config-data\") pod \"4e510220-b836-49e8-a959-f3597f7bca70\" (UID: \"4e510220-b836-49e8-a959-f3597f7bca70\") " Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.414860 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pfxh\" (UniqueName: \"kubernetes.io/projected/4e510220-b836-49e8-a959-f3597f7bca70-kube-api-access-2pfxh\") pod \"4e510220-b836-49e8-a959-f3597f7bca70\" (UID: \"4e510220-b836-49e8-a959-f3597f7bca70\") " Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.414996 4861 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aea8b77b-5275-4d64-b75c-f3ea30035661-catalog-content\") pod \"aea8b77b-5275-4d64-b75c-f3ea30035661\" (UID: \"aea8b77b-5275-4d64-b75c-f3ea30035661\") " Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.415034 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e510220-b836-49e8-a959-f3597f7bca70-combined-ca-bundle\") pod \"4e510220-b836-49e8-a959-f3597f7bca70\" (UID: \"4e510220-b836-49e8-a959-f3597f7bca70\") " Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.416003 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aea8b77b-5275-4d64-b75c-f3ea30035661-utilities" (OuterVolumeSpecName: "utilities") pod "aea8b77b-5275-4d64-b75c-f3ea30035661" (UID: "aea8b77b-5275-4d64-b75c-f3ea30035661"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.425804 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aea8b77b-5275-4d64-b75c-f3ea30035661-kube-api-access-f278f" (OuterVolumeSpecName: "kube-api-access-f278f") pod "aea8b77b-5275-4d64-b75c-f3ea30035661" (UID: "aea8b77b-5275-4d64-b75c-f3ea30035661"). InnerVolumeSpecName "kube-api-access-f278f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.425920 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e510220-b836-49e8-a959-f3597f7bca70-kube-api-access-2pfxh" (OuterVolumeSpecName: "kube-api-access-2pfxh") pod "4e510220-b836-49e8-a959-f3597f7bca70" (UID: "4e510220-b836-49e8-a959-f3597f7bca70"). InnerVolumeSpecName "kube-api-access-2pfxh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.450461 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e510220-b836-49e8-a959-f3597f7bca70-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e510220-b836-49e8-a959-f3597f7bca70" (UID: "4e510220-b836-49e8-a959-f3597f7bca70"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.451833 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e510220-b836-49e8-a959-f3597f7bca70-config-data" (OuterVolumeSpecName: "config-data") pod "4e510220-b836-49e8-a959-f3597f7bca70" (UID: "4e510220-b836-49e8-a959-f3597f7bca70"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.517874 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e510220-b836-49e8-a959-f3597f7bca70-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.517999 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f278f\" (UniqueName: \"kubernetes.io/projected/aea8b77b-5275-4d64-b75c-f3ea30035661-kube-api-access-f278f\") on node \"crc\" DevicePath \"\"" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.518016 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aea8b77b-5275-4d64-b75c-f3ea30035661-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.518026 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e510220-b836-49e8-a959-f3597f7bca70-config-data\") on node \"crc\" DevicePath 
\"\"" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.518034 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pfxh\" (UniqueName: \"kubernetes.io/projected/4e510220-b836-49e8-a959-f3597f7bca70-kube-api-access-2pfxh\") on node \"crc\" DevicePath \"\"" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.545634 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aea8b77b-5275-4d64-b75c-f3ea30035661-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aea8b77b-5275-4d64-b75c-f3ea30035661" (UID: "aea8b77b-5275-4d64-b75c-f3ea30035661"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.608397 4861 generic.go:334] "Generic (PLEG): container finished" podID="4e510220-b836-49e8-a959-f3597f7bca70" containerID="f707f0eec01b8d1a02a2586cd54e08570ecec1e2786c06fc34861c19c9582682" exitCode=0 Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.608510 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4e510220-b836-49e8-a959-f3597f7bca70","Type":"ContainerDied","Data":"f707f0eec01b8d1a02a2586cd54e08570ecec1e2786c06fc34861c19c9582682"} Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.608550 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4e510220-b836-49e8-a959-f3597f7bca70","Type":"ContainerDied","Data":"ead84061f96b51b14e143af41372b6f67221c7922586db67987dce20c8d2e157"} Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.608577 4861 scope.go:117] "RemoveContainer" containerID="f707f0eec01b8d1a02a2586cd54e08570ecec1e2786c06fc34861c19c9582682" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.608739 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.615668 4861 generic.go:334] "Generic (PLEG): container finished" podID="aea8b77b-5275-4d64-b75c-f3ea30035661" containerID="bfd3107cedf758705fca580b0830425a888fbab078f8a083ec9eaf1956ed11dd" exitCode=0 Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.615740 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n6g4t" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.615748 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n6g4t" event={"ID":"aea8b77b-5275-4d64-b75c-f3ea30035661","Type":"ContainerDied","Data":"bfd3107cedf758705fca580b0830425a888fbab078f8a083ec9eaf1956ed11dd"} Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.615830 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n6g4t" event={"ID":"aea8b77b-5275-4d64-b75c-f3ea30035661","Type":"ContainerDied","Data":"a3c62bf38b8d6a232826fe1323b5c715e404eb67d8788afb940ee964ad4b0feb"} Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.619767 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aea8b77b-5275-4d64-b75c-f3ea30035661-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.646023 4861 scope.go:117] "RemoveContainer" containerID="f707f0eec01b8d1a02a2586cd54e08570ecec1e2786c06fc34861c19c9582682" Feb 19 13:33:18 crc kubenswrapper[4861]: E0219 13:33:18.646538 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f707f0eec01b8d1a02a2586cd54e08570ecec1e2786c06fc34861c19c9582682\": container with ID starting with f707f0eec01b8d1a02a2586cd54e08570ecec1e2786c06fc34861c19c9582682 not found: ID does not exist" 
containerID="f707f0eec01b8d1a02a2586cd54e08570ecec1e2786c06fc34861c19c9582682" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.646586 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f707f0eec01b8d1a02a2586cd54e08570ecec1e2786c06fc34861c19c9582682"} err="failed to get container status \"f707f0eec01b8d1a02a2586cd54e08570ecec1e2786c06fc34861c19c9582682\": rpc error: code = NotFound desc = could not find container \"f707f0eec01b8d1a02a2586cd54e08570ecec1e2786c06fc34861c19c9582682\": container with ID starting with f707f0eec01b8d1a02a2586cd54e08570ecec1e2786c06fc34861c19c9582682 not found: ID does not exist" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.646640 4861 scope.go:117] "RemoveContainer" containerID="bfd3107cedf758705fca580b0830425a888fbab078f8a083ec9eaf1956ed11dd" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.650181 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.659176 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.688486 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 13:33:18 crc kubenswrapper[4861]: E0219 13:33:18.688902 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aea8b77b-5275-4d64-b75c-f3ea30035661" containerName="extract-utilities" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.688914 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="aea8b77b-5275-4d64-b75c-f3ea30035661" containerName="extract-utilities" Feb 19 13:33:18 crc kubenswrapper[4861]: E0219 13:33:18.688928 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aea8b77b-5275-4d64-b75c-f3ea30035661" containerName="registry-server" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.688934 4861 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="aea8b77b-5275-4d64-b75c-f3ea30035661" containerName="registry-server" Feb 19 13:33:18 crc kubenswrapper[4861]: E0219 13:33:18.688941 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aea8b77b-5275-4d64-b75c-f3ea30035661" containerName="extract-content" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.688948 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="aea8b77b-5275-4d64-b75c-f3ea30035661" containerName="extract-content" Feb 19 13:33:18 crc kubenswrapper[4861]: E0219 13:33:18.688958 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67656319-0c92-41c6-a2f5-97ec64fc2ec6" containerName="nova-manage" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.688964 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="67656319-0c92-41c6-a2f5-97ec64fc2ec6" containerName="nova-manage" Feb 19 13:33:18 crc kubenswrapper[4861]: E0219 13:33:18.688972 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e510220-b836-49e8-a959-f3597f7bca70" containerName="nova-scheduler-scheduler" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.688978 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e510220-b836-49e8-a959-f3597f7bca70" containerName="nova-scheduler-scheduler" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.689147 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="aea8b77b-5275-4d64-b75c-f3ea30035661" containerName="registry-server" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.689177 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e510220-b836-49e8-a959-f3597f7bca70" containerName="nova-scheduler-scheduler" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.689194 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="67656319-0c92-41c6-a2f5-97ec64fc2ec6" containerName="nova-manage" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.689805 4861 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.694859 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.702295 4861 scope.go:117] "RemoveContainer" containerID="a81172a19e9f16370430b0679ca1d4500af31b38c713341cd98b7b73a2013da3" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.712932 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n6g4t"] Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.721638 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9211a2d8-8917-464d-a790-efc469302556-config-data\") pod \"nova-scheduler-0\" (UID: \"9211a2d8-8917-464d-a790-efc469302556\") " pod="openstack/nova-scheduler-0" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.721747 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb6tp\" (UniqueName: \"kubernetes.io/projected/9211a2d8-8917-464d-a790-efc469302556-kube-api-access-rb6tp\") pod \"nova-scheduler-0\" (UID: \"9211a2d8-8917-464d-a790-efc469302556\") " pod="openstack/nova-scheduler-0" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.721899 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9211a2d8-8917-464d-a790-efc469302556-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9211a2d8-8917-464d-a790-efc469302556\") " pod="openstack/nova-scheduler-0" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.735466 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n6g4t"] Feb 19 13:33:18 crc kubenswrapper[4861]: 
I0219 13:33:18.743821 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.748388 4861 scope.go:117] "RemoveContainer" containerID="cc76fc052de82f7e79740cf2a5454e18d3b441fcd055fb0a7780b547a3548b3b" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.768353 4861 scope.go:117] "RemoveContainer" containerID="bfd3107cedf758705fca580b0830425a888fbab078f8a083ec9eaf1956ed11dd" Feb 19 13:33:18 crc kubenswrapper[4861]: E0219 13:33:18.770057 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfd3107cedf758705fca580b0830425a888fbab078f8a083ec9eaf1956ed11dd\": container with ID starting with bfd3107cedf758705fca580b0830425a888fbab078f8a083ec9eaf1956ed11dd not found: ID does not exist" containerID="bfd3107cedf758705fca580b0830425a888fbab078f8a083ec9eaf1956ed11dd" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.770094 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfd3107cedf758705fca580b0830425a888fbab078f8a083ec9eaf1956ed11dd"} err="failed to get container status \"bfd3107cedf758705fca580b0830425a888fbab078f8a083ec9eaf1956ed11dd\": rpc error: code = NotFound desc = could not find container \"bfd3107cedf758705fca580b0830425a888fbab078f8a083ec9eaf1956ed11dd\": container with ID starting with bfd3107cedf758705fca580b0830425a888fbab078f8a083ec9eaf1956ed11dd not found: ID does not exist" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.770126 4861 scope.go:117] "RemoveContainer" containerID="a81172a19e9f16370430b0679ca1d4500af31b38c713341cd98b7b73a2013da3" Feb 19 13:33:18 crc kubenswrapper[4861]: E0219 13:33:18.770649 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a81172a19e9f16370430b0679ca1d4500af31b38c713341cd98b7b73a2013da3\": container with ID starting with 
a81172a19e9f16370430b0679ca1d4500af31b38c713341cd98b7b73a2013da3 not found: ID does not exist" containerID="a81172a19e9f16370430b0679ca1d4500af31b38c713341cd98b7b73a2013da3" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.770693 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a81172a19e9f16370430b0679ca1d4500af31b38c713341cd98b7b73a2013da3"} err="failed to get container status \"a81172a19e9f16370430b0679ca1d4500af31b38c713341cd98b7b73a2013da3\": rpc error: code = NotFound desc = could not find container \"a81172a19e9f16370430b0679ca1d4500af31b38c713341cd98b7b73a2013da3\": container with ID starting with a81172a19e9f16370430b0679ca1d4500af31b38c713341cd98b7b73a2013da3 not found: ID does not exist" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.770722 4861 scope.go:117] "RemoveContainer" containerID="cc76fc052de82f7e79740cf2a5454e18d3b441fcd055fb0a7780b547a3548b3b" Feb 19 13:33:18 crc kubenswrapper[4861]: E0219 13:33:18.771167 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc76fc052de82f7e79740cf2a5454e18d3b441fcd055fb0a7780b547a3548b3b\": container with ID starting with cc76fc052de82f7e79740cf2a5454e18d3b441fcd055fb0a7780b547a3548b3b not found: ID does not exist" containerID="cc76fc052de82f7e79740cf2a5454e18d3b441fcd055fb0a7780b547a3548b3b" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.771211 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc76fc052de82f7e79740cf2a5454e18d3b441fcd055fb0a7780b547a3548b3b"} err="failed to get container status \"cc76fc052de82f7e79740cf2a5454e18d3b441fcd055fb0a7780b547a3548b3b\": rpc error: code = NotFound desc = could not find container \"cc76fc052de82f7e79740cf2a5454e18d3b441fcd055fb0a7780b547a3548b3b\": container with ID starting with cc76fc052de82f7e79740cf2a5454e18d3b441fcd055fb0a7780b547a3548b3b not found: ID does not 
exist" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.824496 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9211a2d8-8917-464d-a790-efc469302556-config-data\") pod \"nova-scheduler-0\" (UID: \"9211a2d8-8917-464d-a790-efc469302556\") " pod="openstack/nova-scheduler-0" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.824555 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb6tp\" (UniqueName: \"kubernetes.io/projected/9211a2d8-8917-464d-a790-efc469302556-kube-api-access-rb6tp\") pod \"nova-scheduler-0\" (UID: \"9211a2d8-8917-464d-a790-efc469302556\") " pod="openstack/nova-scheduler-0" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.824627 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9211a2d8-8917-464d-a790-efc469302556-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9211a2d8-8917-464d-a790-efc469302556\") " pod="openstack/nova-scheduler-0" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.829983 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9211a2d8-8917-464d-a790-efc469302556-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9211a2d8-8917-464d-a790-efc469302556\") " pod="openstack/nova-scheduler-0" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.831526 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9211a2d8-8917-464d-a790-efc469302556-config-data\") pod \"nova-scheduler-0\" (UID: \"9211a2d8-8917-464d-a790-efc469302556\") " pod="openstack/nova-scheduler-0" Feb 19 13:33:18 crc kubenswrapper[4861]: I0219 13:33:18.840529 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb6tp\" (UniqueName: 
\"kubernetes.io/projected/9211a2d8-8917-464d-a790-efc469302556-kube-api-access-rb6tp\") pod \"nova-scheduler-0\" (UID: \"9211a2d8-8917-464d-a790-efc469302556\") " pod="openstack/nova-scheduler-0" Feb 19 13:33:19 crc kubenswrapper[4861]: I0219 13:33:19.025972 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 13:33:19 crc kubenswrapper[4861]: W0219 13:33:19.512960 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9211a2d8_8917_464d_a790_efc469302556.slice/crio-51f4b662f96e909619e3c94722083f0117a176c16a55ed299c080923c23416cf WatchSource:0}: Error finding container 51f4b662f96e909619e3c94722083f0117a176c16a55ed299c080923c23416cf: Status 404 returned error can't find the container with id 51f4b662f96e909619e3c94722083f0117a176c16a55ed299c080923c23416cf Feb 19 13:33:19 crc kubenswrapper[4861]: I0219 13:33:19.514673 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 13:33:19 crc kubenswrapper[4861]: I0219 13:33:19.628672 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9211a2d8-8917-464d-a790-efc469302556","Type":"ContainerStarted","Data":"51f4b662f96e909619e3c94722083f0117a176c16a55ed299c080923c23416cf"} Feb 19 13:33:19 crc kubenswrapper[4861]: I0219 13:33:19.990306 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e510220-b836-49e8-a959-f3597f7bca70" path="/var/lib/kubelet/pods/4e510220-b836-49e8-a959-f3597f7bca70/volumes" Feb 19 13:33:19 crc kubenswrapper[4861]: I0219 13:33:19.991439 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aea8b77b-5275-4d64-b75c-f3ea30035661" path="/var/lib/kubelet/pods/aea8b77b-5275-4d64-b75c-f3ea30035661/volumes" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.488924 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.504226 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.560111 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54407f31-965d-4510-a875-e96ef076ec7a-combined-ca-bundle\") pod \"54407f31-965d-4510-a875-e96ef076ec7a\" (UID: \"54407f31-965d-4510-a875-e96ef076ec7a\") " Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.560437 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54334d55-b01e-4310-a442-08b922e35f7c-logs\") pod \"54334d55-b01e-4310-a442-08b922e35f7c\" (UID: \"54334d55-b01e-4310-a442-08b922e35f7c\") " Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.560478 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/54334d55-b01e-4310-a442-08b922e35f7c-nova-metadata-tls-certs\") pod \"54334d55-b01e-4310-a442-08b922e35f7c\" (UID: \"54334d55-b01e-4310-a442-08b922e35f7c\") " Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.560505 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjdnt\" (UniqueName: \"kubernetes.io/projected/54334d55-b01e-4310-a442-08b922e35f7c-kube-api-access-gjdnt\") pod \"54334d55-b01e-4310-a442-08b922e35f7c\" (UID: \"54334d55-b01e-4310-a442-08b922e35f7c\") " Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.560551 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54334d55-b01e-4310-a442-08b922e35f7c-combined-ca-bundle\") pod \"54334d55-b01e-4310-a442-08b922e35f7c\" (UID: 
\"54334d55-b01e-4310-a442-08b922e35f7c\") " Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.560626 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54334d55-b01e-4310-a442-08b922e35f7c-config-data\") pod \"54334d55-b01e-4310-a442-08b922e35f7c\" (UID: \"54334d55-b01e-4310-a442-08b922e35f7c\") " Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.560662 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwnbp\" (UniqueName: \"kubernetes.io/projected/54407f31-965d-4510-a875-e96ef076ec7a-kube-api-access-wwnbp\") pod \"54407f31-965d-4510-a875-e96ef076ec7a\" (UID: \"54407f31-965d-4510-a875-e96ef076ec7a\") " Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.560687 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54407f31-965d-4510-a875-e96ef076ec7a-public-tls-certs\") pod \"54407f31-965d-4510-a875-e96ef076ec7a\" (UID: \"54407f31-965d-4510-a875-e96ef076ec7a\") " Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.560774 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54407f31-965d-4510-a875-e96ef076ec7a-internal-tls-certs\") pod \"54407f31-965d-4510-a875-e96ef076ec7a\" (UID: \"54407f31-965d-4510-a875-e96ef076ec7a\") " Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.560826 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54407f31-965d-4510-a875-e96ef076ec7a-config-data\") pod \"54407f31-965d-4510-a875-e96ef076ec7a\" (UID: \"54407f31-965d-4510-a875-e96ef076ec7a\") " Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.560850 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/54407f31-965d-4510-a875-e96ef076ec7a-logs\") pod \"54407f31-965d-4510-a875-e96ef076ec7a\" (UID: \"54407f31-965d-4510-a875-e96ef076ec7a\") " Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.561891 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54407f31-965d-4510-a875-e96ef076ec7a-logs" (OuterVolumeSpecName: "logs") pod "54407f31-965d-4510-a875-e96ef076ec7a" (UID: "54407f31-965d-4510-a875-e96ef076ec7a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.562993 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54334d55-b01e-4310-a442-08b922e35f7c-logs" (OuterVolumeSpecName: "logs") pod "54334d55-b01e-4310-a442-08b922e35f7c" (UID: "54334d55-b01e-4310-a442-08b922e35f7c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.573557 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54334d55-b01e-4310-a442-08b922e35f7c-kube-api-access-gjdnt" (OuterVolumeSpecName: "kube-api-access-gjdnt") pod "54334d55-b01e-4310-a442-08b922e35f7c" (UID: "54334d55-b01e-4310-a442-08b922e35f7c"). InnerVolumeSpecName "kube-api-access-gjdnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.573635 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54407f31-965d-4510-a875-e96ef076ec7a-kube-api-access-wwnbp" (OuterVolumeSpecName: "kube-api-access-wwnbp") pod "54407f31-965d-4510-a875-e96ef076ec7a" (UID: "54407f31-965d-4510-a875-e96ef076ec7a"). InnerVolumeSpecName "kube-api-access-wwnbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.589854 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54334d55-b01e-4310-a442-08b922e35f7c-config-data" (OuterVolumeSpecName: "config-data") pod "54334d55-b01e-4310-a442-08b922e35f7c" (UID: "54334d55-b01e-4310-a442-08b922e35f7c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.592771 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54407f31-965d-4510-a875-e96ef076ec7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54407f31-965d-4510-a875-e96ef076ec7a" (UID: "54407f31-965d-4510-a875-e96ef076ec7a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.593548 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54334d55-b01e-4310-a442-08b922e35f7c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54334d55-b01e-4310-a442-08b922e35f7c" (UID: "54334d55-b01e-4310-a442-08b922e35f7c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.601735 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54407f31-965d-4510-a875-e96ef076ec7a-config-data" (OuterVolumeSpecName: "config-data") pod "54407f31-965d-4510-a875-e96ef076ec7a" (UID: "54407f31-965d-4510-a875-e96ef076ec7a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.615761 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54334d55-b01e-4310-a442-08b922e35f7c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "54334d55-b01e-4310-a442-08b922e35f7c" (UID: "54334d55-b01e-4310-a442-08b922e35f7c"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.620988 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54407f31-965d-4510-a875-e96ef076ec7a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "54407f31-965d-4510-a875-e96ef076ec7a" (UID: "54407f31-965d-4510-a875-e96ef076ec7a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.640704 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54407f31-965d-4510-a875-e96ef076ec7a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "54407f31-965d-4510-a875-e96ef076ec7a" (UID: "54407f31-965d-4510-a875-e96ef076ec7a"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.641934 4861 generic.go:334] "Generic (PLEG): container finished" podID="54407f31-965d-4510-a875-e96ef076ec7a" containerID="b9dd73985abcf6b0994a059c4de169b8800caaf2ddac06d00b271cb0b65a8bf8" exitCode=0 Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.641991 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54407f31-965d-4510-a875-e96ef076ec7a","Type":"ContainerDied","Data":"b9dd73985abcf6b0994a059c4de169b8800caaf2ddac06d00b271cb0b65a8bf8"} Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.642018 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54407f31-965d-4510-a875-e96ef076ec7a","Type":"ContainerDied","Data":"430350e3ade87bbdb2e10169231c187270e223d9a4db4a0395c85554ba6bd4a7"} Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.642034 4861 scope.go:117] "RemoveContainer" containerID="b9dd73985abcf6b0994a059c4de169b8800caaf2ddac06d00b271cb0b65a8bf8" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.642165 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.645865 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9211a2d8-8917-464d-a790-efc469302556","Type":"ContainerStarted","Data":"51828dec5ed3bf469238caf516c39172b72277a44dd7431a9ff705c60186eff0"} Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.647607 4861 generic.go:334] "Generic (PLEG): container finished" podID="54334d55-b01e-4310-a442-08b922e35f7c" containerID="d4b16db9327b4962b513bb2f6fa13c8694a4187cd19ee9f8fb0f40fb3444206a" exitCode=0 Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.647636 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"54334d55-b01e-4310-a442-08b922e35f7c","Type":"ContainerDied","Data":"d4b16db9327b4962b513bb2f6fa13c8694a4187cd19ee9f8fb0f40fb3444206a"} Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.647651 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"54334d55-b01e-4310-a442-08b922e35f7c","Type":"ContainerDied","Data":"aa499393ad247d4e6a861e07c06d4d590593a8bff4090765e7ab5dc259627d41"} Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.647691 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.666741 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54334d55-b01e-4310-a442-08b922e35f7c-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.666804 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwnbp\" (UniqueName: \"kubernetes.io/projected/54407f31-965d-4510-a875-e96ef076ec7a-kube-api-access-wwnbp\") on node \"crc\" DevicePath \"\"" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.666824 4861 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54407f31-965d-4510-a875-e96ef076ec7a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.666844 4861 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54407f31-965d-4510-a875-e96ef076ec7a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.666860 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54407f31-965d-4510-a875-e96ef076ec7a-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.666878 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54407f31-965d-4510-a875-e96ef076ec7a-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.666894 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54407f31-965d-4510-a875-e96ef076ec7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.666908 4861 
reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54334d55-b01e-4310-a442-08b922e35f7c-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.666925 4861 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/54334d55-b01e-4310-a442-08b922e35f7c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.666942 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjdnt\" (UniqueName: \"kubernetes.io/projected/54334d55-b01e-4310-a442-08b922e35f7c-kube-api-access-gjdnt\") on node \"crc\" DevicePath \"\"" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.666988 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54334d55-b01e-4310-a442-08b922e35f7c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.670768 4861 scope.go:117] "RemoveContainer" containerID="b65cc219340048126cd184592a0e3df0cd31c91ff34381d80364c19b17b60e0e" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.676838 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.676815389 podStartE2EDuration="2.676815389s" podCreationTimestamp="2026-02-19 13:33:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:33:20.665070413 +0000 UTC m=+1415.326173641" watchObservedRunningTime="2026-02-19 13:33:20.676815389 +0000 UTC m=+1415.337918617" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.695560 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.708856 4861 scope.go:117] 
"RemoveContainer" containerID="b9dd73985abcf6b0994a059c4de169b8800caaf2ddac06d00b271cb0b65a8bf8" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.708969 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 13:33:20 crc kubenswrapper[4861]: E0219 13:33:20.710113 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9dd73985abcf6b0994a059c4de169b8800caaf2ddac06d00b271cb0b65a8bf8\": container with ID starting with b9dd73985abcf6b0994a059c4de169b8800caaf2ddac06d00b271cb0b65a8bf8 not found: ID does not exist" containerID="b9dd73985abcf6b0994a059c4de169b8800caaf2ddac06d00b271cb0b65a8bf8" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.710146 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9dd73985abcf6b0994a059c4de169b8800caaf2ddac06d00b271cb0b65a8bf8"} err="failed to get container status \"b9dd73985abcf6b0994a059c4de169b8800caaf2ddac06d00b271cb0b65a8bf8\": rpc error: code = NotFound desc = could not find container \"b9dd73985abcf6b0994a059c4de169b8800caaf2ddac06d00b271cb0b65a8bf8\": container with ID starting with b9dd73985abcf6b0994a059c4de169b8800caaf2ddac06d00b271cb0b65a8bf8 not found: ID does not exist" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.710169 4861 scope.go:117] "RemoveContainer" containerID="b65cc219340048126cd184592a0e3df0cd31c91ff34381d80364c19b17b60e0e" Feb 19 13:33:20 crc kubenswrapper[4861]: E0219 13:33:20.710636 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b65cc219340048126cd184592a0e3df0cd31c91ff34381d80364c19b17b60e0e\": container with ID starting with b65cc219340048126cd184592a0e3df0cd31c91ff34381d80364c19b17b60e0e not found: ID does not exist" containerID="b65cc219340048126cd184592a0e3df0cd31c91ff34381d80364c19b17b60e0e" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 
13:33:20.710709 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b65cc219340048126cd184592a0e3df0cd31c91ff34381d80364c19b17b60e0e"} err="failed to get container status \"b65cc219340048126cd184592a0e3df0cd31c91ff34381d80364c19b17b60e0e\": rpc error: code = NotFound desc = could not find container \"b65cc219340048126cd184592a0e3df0cd31c91ff34381d80364c19b17b60e0e\": container with ID starting with b65cc219340048126cd184592a0e3df0cd31c91ff34381d80364c19b17b60e0e not found: ID does not exist" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.710749 4861 scope.go:117] "RemoveContainer" containerID="d4b16db9327b4962b513bb2f6fa13c8694a4187cd19ee9f8fb0f40fb3444206a" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.713415 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.724291 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.733307 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 13:33:20 crc kubenswrapper[4861]: E0219 13:33:20.733797 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54407f31-965d-4510-a875-e96ef076ec7a" containerName="nova-api-api" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.733815 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="54407f31-965d-4510-a875-e96ef076ec7a" containerName="nova-api-api" Feb 19 13:33:20 crc kubenswrapper[4861]: E0219 13:33:20.733830 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54334d55-b01e-4310-a442-08b922e35f7c" containerName="nova-metadata-log" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.733837 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="54334d55-b01e-4310-a442-08b922e35f7c" containerName="nova-metadata-log" Feb 19 13:33:20 crc kubenswrapper[4861]: 
E0219 13:33:20.733872 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54334d55-b01e-4310-a442-08b922e35f7c" containerName="nova-metadata-metadata" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.733880 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="54334d55-b01e-4310-a442-08b922e35f7c" containerName="nova-metadata-metadata" Feb 19 13:33:20 crc kubenswrapper[4861]: E0219 13:33:20.733896 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54407f31-965d-4510-a875-e96ef076ec7a" containerName="nova-api-log" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.733902 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="54407f31-965d-4510-a875-e96ef076ec7a" containerName="nova-api-log" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.734077 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="54407f31-965d-4510-a875-e96ef076ec7a" containerName="nova-api-log" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.734092 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="54407f31-965d-4510-a875-e96ef076ec7a" containerName="nova-api-api" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.734114 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="54334d55-b01e-4310-a442-08b922e35f7c" containerName="nova-metadata-metadata" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.734123 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="54334d55-b01e-4310-a442-08b922e35f7c" containerName="nova-metadata-log" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.735121 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.737565 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.737852 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.738063 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.739740 4861 scope.go:117] "RemoveContainer" containerID="59233cce678088a96bb8b27b79b2df708e9988a8ca341c52ef79179bb1a31be0" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.755135 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.757300 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.769299 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.769529 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07533556-6a9f-4844-be7d-f9c9cf8c53a4-logs\") pod \"nova-metadata-0\" (UID: \"07533556-6a9f-4844-be7d-f9c9cf8c53a4\") " pod="openstack/nova-metadata-0" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.769599 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07533556-6a9f-4844-be7d-f9c9cf8c53a4-config-data\") pod \"nova-metadata-0\" (UID: \"07533556-6a9f-4844-be7d-f9c9cf8c53a4\") " pod="openstack/nova-metadata-0" Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 
13:33:20.769661 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/07533556-6a9f-4844-be7d-f9c9cf8c53a4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"07533556-6a9f-4844-be7d-f9c9cf8c53a4\") " pod="openstack/nova-metadata-0"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.769689 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nj6w\" (UniqueName: \"kubernetes.io/projected/07533556-6a9f-4844-be7d-f9c9cf8c53a4-kube-api-access-6nj6w\") pod \"nova-metadata-0\" (UID: \"07533556-6a9f-4844-be7d-f9c9cf8c53a4\") " pod="openstack/nova-metadata-0"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.769767 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07533556-6a9f-4844-be7d-f9c9cf8c53a4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"07533556-6a9f-4844-be7d-f9c9cf8c53a4\") " pod="openstack/nova-metadata-0"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.769926 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.770086 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.780787 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.791055 4861 scope.go:117] "RemoveContainer" containerID="d4b16db9327b4962b513bb2f6fa13c8694a4187cd19ee9f8fb0f40fb3444206a"
Feb 19 13:33:20 crc kubenswrapper[4861]: E0219 13:33:20.791541 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4b16db9327b4962b513bb2f6fa13c8694a4187cd19ee9f8fb0f40fb3444206a\": container with ID starting with d4b16db9327b4962b513bb2f6fa13c8694a4187cd19ee9f8fb0f40fb3444206a not found: ID does not exist" containerID="d4b16db9327b4962b513bb2f6fa13c8694a4187cd19ee9f8fb0f40fb3444206a"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.791578 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4b16db9327b4962b513bb2f6fa13c8694a4187cd19ee9f8fb0f40fb3444206a"} err="failed to get container status \"d4b16db9327b4962b513bb2f6fa13c8694a4187cd19ee9f8fb0f40fb3444206a\": rpc error: code = NotFound desc = could not find container \"d4b16db9327b4962b513bb2f6fa13c8694a4187cd19ee9f8fb0f40fb3444206a\": container with ID starting with d4b16db9327b4962b513bb2f6fa13c8694a4187cd19ee9f8fb0f40fb3444206a not found: ID does not exist"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.791604 4861 scope.go:117] "RemoveContainer" containerID="59233cce678088a96bb8b27b79b2df708e9988a8ca341c52ef79179bb1a31be0"
Feb 19 13:33:20 crc kubenswrapper[4861]: E0219 13:33:20.792027 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59233cce678088a96bb8b27b79b2df708e9988a8ca341c52ef79179bb1a31be0\": container with ID starting with 59233cce678088a96bb8b27b79b2df708e9988a8ca341c52ef79179bb1a31be0 not found: ID does not exist" containerID="59233cce678088a96bb8b27b79b2df708e9988a8ca341c52ef79179bb1a31be0"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.792057 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59233cce678088a96bb8b27b79b2df708e9988a8ca341c52ef79179bb1a31be0"} err="failed to get container status \"59233cce678088a96bb8b27b79b2df708e9988a8ca341c52ef79179bb1a31be0\": rpc error: code = NotFound desc = could not find container \"59233cce678088a96bb8b27b79b2df708e9988a8ca341c52ef79179bb1a31be0\": container with ID starting with 59233cce678088a96bb8b27b79b2df708e9988a8ca341c52ef79179bb1a31be0 not found: ID does not exist"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.871774 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07533556-6a9f-4844-be7d-f9c9cf8c53a4-logs\") pod \"nova-metadata-0\" (UID: \"07533556-6a9f-4844-be7d-f9c9cf8c53a4\") " pod="openstack/nova-metadata-0"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.871841 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/707836a9-478e-4110-b5f5-9ee7e6b46e21-internal-tls-certs\") pod \"nova-api-0\" (UID: \"707836a9-478e-4110-b5f5-9ee7e6b46e21\") " pod="openstack/nova-api-0"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.871890 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/707836a9-478e-4110-b5f5-9ee7e6b46e21-public-tls-certs\") pod \"nova-api-0\" (UID: \"707836a9-478e-4110-b5f5-9ee7e6b46e21\") " pod="openstack/nova-api-0"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.871928 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07533556-6a9f-4844-be7d-f9c9cf8c53a4-config-data\") pod \"nova-metadata-0\" (UID: \"07533556-6a9f-4844-be7d-f9c9cf8c53a4\") " pod="openstack/nova-metadata-0"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.872024 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/707836a9-478e-4110-b5f5-9ee7e6b46e21-logs\") pod \"nova-api-0\" (UID: \"707836a9-478e-4110-b5f5-9ee7e6b46e21\") " pod="openstack/nova-api-0"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.872230 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/07533556-6a9f-4844-be7d-f9c9cf8c53a4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"07533556-6a9f-4844-be7d-f9c9cf8c53a4\") " pod="openstack/nova-metadata-0"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.872262 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07533556-6a9f-4844-be7d-f9c9cf8c53a4-logs\") pod \"nova-metadata-0\" (UID: \"07533556-6a9f-4844-be7d-f9c9cf8c53a4\") " pod="openstack/nova-metadata-0"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.872348 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nj6w\" (UniqueName: \"kubernetes.io/projected/07533556-6a9f-4844-be7d-f9c9cf8c53a4-kube-api-access-6nj6w\") pod \"nova-metadata-0\" (UID: \"07533556-6a9f-4844-be7d-f9c9cf8c53a4\") " pod="openstack/nova-metadata-0"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.872728 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/707836a9-478e-4110-b5f5-9ee7e6b46e21-config-data\") pod \"nova-api-0\" (UID: \"707836a9-478e-4110-b5f5-9ee7e6b46e21\") " pod="openstack/nova-api-0"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.872780 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07533556-6a9f-4844-be7d-f9c9cf8c53a4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"07533556-6a9f-4844-be7d-f9c9cf8c53a4\") " pod="openstack/nova-metadata-0"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.872805 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/707836a9-478e-4110-b5f5-9ee7e6b46e21-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"707836a9-478e-4110-b5f5-9ee7e6b46e21\") " pod="openstack/nova-api-0"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.872842 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8892k\" (UniqueName: \"kubernetes.io/projected/707836a9-478e-4110-b5f5-9ee7e6b46e21-kube-api-access-8892k\") pod \"nova-api-0\" (UID: \"707836a9-478e-4110-b5f5-9ee7e6b46e21\") " pod="openstack/nova-api-0"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.875770 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/07533556-6a9f-4844-be7d-f9c9cf8c53a4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"07533556-6a9f-4844-be7d-f9c9cf8c53a4\") " pod="openstack/nova-metadata-0"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.875902 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07533556-6a9f-4844-be7d-f9c9cf8c53a4-config-data\") pod \"nova-metadata-0\" (UID: \"07533556-6a9f-4844-be7d-f9c9cf8c53a4\") " pod="openstack/nova-metadata-0"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.876914 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07533556-6a9f-4844-be7d-f9c9cf8c53a4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"07533556-6a9f-4844-be7d-f9c9cf8c53a4\") " pod="openstack/nova-metadata-0"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.890649 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nj6w\" (UniqueName: \"kubernetes.io/projected/07533556-6a9f-4844-be7d-f9c9cf8c53a4-kube-api-access-6nj6w\") pod \"nova-metadata-0\" (UID: \"07533556-6a9f-4844-be7d-f9c9cf8c53a4\") " pod="openstack/nova-metadata-0"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.974123 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/707836a9-478e-4110-b5f5-9ee7e6b46e21-config-data\") pod \"nova-api-0\" (UID: \"707836a9-478e-4110-b5f5-9ee7e6b46e21\") " pod="openstack/nova-api-0"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.974201 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/707836a9-478e-4110-b5f5-9ee7e6b46e21-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"707836a9-478e-4110-b5f5-9ee7e6b46e21\") " pod="openstack/nova-api-0"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.974232 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8892k\" (UniqueName: \"kubernetes.io/projected/707836a9-478e-4110-b5f5-9ee7e6b46e21-kube-api-access-8892k\") pod \"nova-api-0\" (UID: \"707836a9-478e-4110-b5f5-9ee7e6b46e21\") " pod="openstack/nova-api-0"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.974279 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/707836a9-478e-4110-b5f5-9ee7e6b46e21-internal-tls-certs\") pod \"nova-api-0\" (UID: \"707836a9-478e-4110-b5f5-9ee7e6b46e21\") " pod="openstack/nova-api-0"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.974323 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/707836a9-478e-4110-b5f5-9ee7e6b46e21-public-tls-certs\") pod \"nova-api-0\" (UID: \"707836a9-478e-4110-b5f5-9ee7e6b46e21\") " pod="openstack/nova-api-0"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.974367 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/707836a9-478e-4110-b5f5-9ee7e6b46e21-logs\") pod \"nova-api-0\" (UID: \"707836a9-478e-4110-b5f5-9ee7e6b46e21\") " pod="openstack/nova-api-0"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.974967 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/707836a9-478e-4110-b5f5-9ee7e6b46e21-logs\") pod \"nova-api-0\" (UID: \"707836a9-478e-4110-b5f5-9ee7e6b46e21\") " pod="openstack/nova-api-0"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.978538 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/707836a9-478e-4110-b5f5-9ee7e6b46e21-public-tls-certs\") pod \"nova-api-0\" (UID: \"707836a9-478e-4110-b5f5-9ee7e6b46e21\") " pod="openstack/nova-api-0"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.978765 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/707836a9-478e-4110-b5f5-9ee7e6b46e21-internal-tls-certs\") pod \"nova-api-0\" (UID: \"707836a9-478e-4110-b5f5-9ee7e6b46e21\") " pod="openstack/nova-api-0"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.978990 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/707836a9-478e-4110-b5f5-9ee7e6b46e21-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"707836a9-478e-4110-b5f5-9ee7e6b46e21\") " pod="openstack/nova-api-0"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.980593 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/707836a9-478e-4110-b5f5-9ee7e6b46e21-config-data\") pod \"nova-api-0\" (UID: \"707836a9-478e-4110-b5f5-9ee7e6b46e21\") " pod="openstack/nova-api-0"
Feb 19 13:33:20 crc kubenswrapper[4861]: I0219 13:33:20.993056 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8892k\" (UniqueName: \"kubernetes.io/projected/707836a9-478e-4110-b5f5-9ee7e6b46e21-kube-api-access-8892k\") pod \"nova-api-0\" (UID: \"707836a9-478e-4110-b5f5-9ee7e6b46e21\") " pod="openstack/nova-api-0"
Feb 19 13:33:21 crc kubenswrapper[4861]: I0219 13:33:21.069638 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 19 13:33:21 crc kubenswrapper[4861]: I0219 13:33:21.089852 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 19 13:33:21 crc kubenswrapper[4861]: I0219 13:33:21.644412 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 19 13:33:21 crc kubenswrapper[4861]: W0219 13:33:21.650710 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod707836a9_478e_4110_b5f5_9ee7e6b46e21.slice/crio-d62489fee003ad3856feb1155bf068125930e2a162ff64fde39bc1e2574255ea WatchSource:0}: Error finding container d62489fee003ad3856feb1155bf068125930e2a162ff64fde39bc1e2574255ea: Status 404 returned error can't find the container with id d62489fee003ad3856feb1155bf068125930e2a162ff64fde39bc1e2574255ea
Feb 19 13:33:21 crc kubenswrapper[4861]: W0219 13:33:21.747380 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07533556_6a9f_4844_be7d_f9c9cf8c53a4.slice/crio-69706adfca530a88e2e664cc0cda132903e4a72603bd642b34361ec4ade81785 WatchSource:0}: Error finding container 69706adfca530a88e2e664cc0cda132903e4a72603bd642b34361ec4ade81785: Status 404 returned error can't find the container with id 69706adfca530a88e2e664cc0cda132903e4a72603bd642b34361ec4ade81785
Feb 19 13:33:21 crc kubenswrapper[4861]: I0219 13:33:21.753283 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 13:33:21 crc kubenswrapper[4861]: I0219 13:33:21.989410 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54334d55-b01e-4310-a442-08b922e35f7c" path="/var/lib/kubelet/pods/54334d55-b01e-4310-a442-08b922e35f7c/volumes"
Feb 19 13:33:21 crc kubenswrapper[4861]: I0219 13:33:21.990162 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54407f31-965d-4510-a875-e96ef076ec7a" path="/var/lib/kubelet/pods/54407f31-965d-4510-a875-e96ef076ec7a/volumes"
Feb 19 13:33:22 crc kubenswrapper[4861]: I0219 13:33:22.677400 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"07533556-6a9f-4844-be7d-f9c9cf8c53a4","Type":"ContainerStarted","Data":"d751c5da783be93739c9cde1c6a879f363a6be171c0437955267e7fbe56355ab"}
Feb 19 13:33:22 crc kubenswrapper[4861]: I0219 13:33:22.677910 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"07533556-6a9f-4844-be7d-f9c9cf8c53a4","Type":"ContainerStarted","Data":"fc2859ece05740938f3f6ba76783e5c0e1cd3f33027c89196eab75df118d742b"}
Feb 19 13:33:22 crc kubenswrapper[4861]: I0219 13:33:22.677935 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"07533556-6a9f-4844-be7d-f9c9cf8c53a4","Type":"ContainerStarted","Data":"69706adfca530a88e2e664cc0cda132903e4a72603bd642b34361ec4ade81785"}
Feb 19 13:33:22 crc kubenswrapper[4861]: I0219 13:33:22.683372 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"707836a9-478e-4110-b5f5-9ee7e6b46e21","Type":"ContainerStarted","Data":"7955b99165b940ef4462fa1d533ec8daadebf57e1422d1ab3180cb3a66fc27cb"}
Feb 19 13:33:22 crc kubenswrapper[4861]: I0219 13:33:22.683591 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"707836a9-478e-4110-b5f5-9ee7e6b46e21","Type":"ContainerStarted","Data":"ec45a853d859de7712ce8a8df163d92e6be4aa74bd810bb6e043ce0ce739485a"}
Feb 19 13:33:22 crc kubenswrapper[4861]: I0219 13:33:22.683634 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"707836a9-478e-4110-b5f5-9ee7e6b46e21","Type":"ContainerStarted","Data":"d62489fee003ad3856feb1155bf068125930e2a162ff64fde39bc1e2574255ea"}
Feb 19 13:33:22 crc kubenswrapper[4861]: I0219 13:33:22.725154 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.7251266100000002 podStartE2EDuration="2.72512661s" podCreationTimestamp="2026-02-19 13:33:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:33:22.716727244 +0000 UTC m=+1417.377830472" watchObservedRunningTime="2026-02-19 13:33:22.72512661 +0000 UTC m=+1417.386229868"
Feb 19 13:33:22 crc kubenswrapper[4861]: I0219 13:33:22.767369 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.767345177 podStartE2EDuration="2.767345177s" podCreationTimestamp="2026-02-19 13:33:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:33:22.747747049 +0000 UTC m=+1417.408850317" watchObservedRunningTime="2026-02-19 13:33:22.767345177 +0000 UTC m=+1417.428448445"
Feb 19 13:33:24 crc kubenswrapper[4861]: I0219 13:33:24.026878 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 19 13:33:26 crc kubenswrapper[4861]: I0219 13:33:26.070413 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 19 13:33:26 crc kubenswrapper[4861]: I0219 13:33:26.070898 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 19 13:33:29 crc kubenswrapper[4861]: I0219 13:33:29.026750 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 19 13:33:29 crc kubenswrapper[4861]: I0219 13:33:29.075785 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 19 13:33:29 crc kubenswrapper[4861]: I0219 13:33:29.860660 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 19 13:33:30 crc kubenswrapper[4861]: I0219 13:33:30.805152 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 19 13:33:31 crc kubenswrapper[4861]: I0219 13:33:31.070663 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 19 13:33:31 crc kubenswrapper[4861]: I0219 13:33:31.070709 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 19 13:33:31 crc kubenswrapper[4861]: I0219 13:33:31.091457 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 19 13:33:31 crc kubenswrapper[4861]: I0219 13:33:31.091508 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 19 13:33:32 crc kubenswrapper[4861]: I0219 13:33:32.082628 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="07533556-6a9f-4844-be7d-f9c9cf8c53a4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.210:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 19 13:33:32 crc kubenswrapper[4861]: I0219 13:33:32.082654 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="07533556-6a9f-4844-be7d-f9c9cf8c53a4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.210:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 19 13:33:32 crc kubenswrapper[4861]: I0219 13:33:32.103621 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="707836a9-478e-4110-b5f5-9ee7e6b46e21" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.211:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 19 13:33:32 crc kubenswrapper[4861]: I0219 13:33:32.103621 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="707836a9-478e-4110-b5f5-9ee7e6b46e21" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.211:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 19 13:33:33 crc kubenswrapper[4861]: I0219 13:33:33.834473 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 13:33:33 crc kubenswrapper[4861]: I0219 13:33:33.834533 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 13:33:41 crc kubenswrapper[4861]: I0219 13:33:41.077779 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 19 13:33:41 crc kubenswrapper[4861]: I0219 13:33:41.078448 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 19 13:33:41 crc kubenswrapper[4861]: I0219 13:33:41.086849 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 19 13:33:41 crc kubenswrapper[4861]: I0219 13:33:41.087904 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 19 13:33:41 crc kubenswrapper[4861]: I0219 13:33:41.099023 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 19 13:33:41 crc kubenswrapper[4861]: I0219 13:33:41.099332 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 19 13:33:41 crc kubenswrapper[4861]: I0219 13:33:41.102457 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 19 13:33:41 crc kubenswrapper[4861]: I0219 13:33:41.117838 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 19 13:33:41 crc kubenswrapper[4861]: I0219 13:33:41.940254 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 19 13:33:41 crc kubenswrapper[4861]: I0219 13:33:41.947613 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 19 13:34:01 crc kubenswrapper[4861]: I0219 13:34:01.452943 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-m8zt6"]
Feb 19 13:34:01 crc kubenswrapper[4861]: I0219 13:34:01.480959 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-hxvnj"]
Feb 19 13:34:01 crc kubenswrapper[4861]: I0219 13:34:01.482074 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hxvnj"
Feb 19 13:34:01 crc kubenswrapper[4861]: I0219 13:34:01.488408 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-m8zt6"]
Feb 19 13:34:01 crc kubenswrapper[4861]: I0219 13:34:01.499352 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Feb 19 13:34:01 crc kubenswrapper[4861]: I0219 13:34:01.509596 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hxvnj"]
Feb 19 13:34:01 crc kubenswrapper[4861]: I0219 13:34:01.578656 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b129-account-create-update-dpq4s"]
Feb 19 13:34:01 crc kubenswrapper[4861]: I0219 13:34:01.580229 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b129-account-create-update-dpq4s"
Feb 19 13:34:01 crc kubenswrapper[4861]: I0219 13:34:01.591671 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Feb 19 13:34:01 crc kubenswrapper[4861]: I0219 13:34:01.618387 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff11b43a-9b7c-42c8-afac-6f66908975dc-operator-scripts\") pod \"root-account-create-update-hxvnj\" (UID: \"ff11b43a-9b7c-42c8-afac-6f66908975dc\") " pod="openstack/root-account-create-update-hxvnj"
Feb 19 13:34:01 crc kubenswrapper[4861]: I0219 13:34:01.618537 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdhrq\" (UniqueName: \"kubernetes.io/projected/ff11b43a-9b7c-42c8-afac-6f66908975dc-kube-api-access-qdhrq\") pod \"root-account-create-update-hxvnj\" (UID: \"ff11b43a-9b7c-42c8-afac-6f66908975dc\") " pod="openstack/root-account-create-update-hxvnj"
Feb 19 13:34:01 crc kubenswrapper[4861]: I0219 13:34:01.664520 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b129-account-create-update-dpq4s"]
Feb 19 13:34:01 crc kubenswrapper[4861]: I0219 13:34:01.698183 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b129-account-create-update-4fk4r"]
Feb 19 13:34:01 crc kubenswrapper[4861]: I0219 13:34:01.727127 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff11b43a-9b7c-42c8-afac-6f66908975dc-operator-scripts\") pod \"root-account-create-update-hxvnj\" (UID: \"ff11b43a-9b7c-42c8-afac-6f66908975dc\") " pod="openstack/root-account-create-update-hxvnj"
Feb 19 13:34:01 crc kubenswrapper[4861]: I0219 13:34:01.727186 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30a1e51d-a60d-4f7f-8300-9ef99e3da2a6-operator-scripts\") pod \"glance-b129-account-create-update-dpq4s\" (UID: \"30a1e51d-a60d-4f7f-8300-9ef99e3da2a6\") " pod="openstack/glance-b129-account-create-update-dpq4s"
Feb 19 13:34:01 crc kubenswrapper[4861]: I0219 13:34:01.727295 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdhrq\" (UniqueName: \"kubernetes.io/projected/ff11b43a-9b7c-42c8-afac-6f66908975dc-kube-api-access-qdhrq\") pod \"root-account-create-update-hxvnj\" (UID: \"ff11b43a-9b7c-42c8-afac-6f66908975dc\") " pod="openstack/root-account-create-update-hxvnj"
Feb 19 13:34:01 crc kubenswrapper[4861]: I0219 13:34:01.727350 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r26hw\" (UniqueName: \"kubernetes.io/projected/30a1e51d-a60d-4f7f-8300-9ef99e3da2a6-kube-api-access-r26hw\") pod \"glance-b129-account-create-update-dpq4s\" (UID: \"30a1e51d-a60d-4f7f-8300-9ef99e3da2a6\") " pod="openstack/glance-b129-account-create-update-dpq4s"
Feb 19 13:34:01 crc kubenswrapper[4861]: I0219 13:34:01.728038 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff11b43a-9b7c-42c8-afac-6f66908975dc-operator-scripts\") pod \"root-account-create-update-hxvnj\" (UID: \"ff11b43a-9b7c-42c8-afac-6f66908975dc\") " pod="openstack/root-account-create-update-hxvnj"
Feb 19 13:34:01 crc kubenswrapper[4861]: I0219 13:34:01.771727 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdhrq\" (UniqueName: \"kubernetes.io/projected/ff11b43a-9b7c-42c8-afac-6f66908975dc-kube-api-access-qdhrq\") pod \"root-account-create-update-hxvnj\" (UID: \"ff11b43a-9b7c-42c8-afac-6f66908975dc\") " pod="openstack/root-account-create-update-hxvnj"
Feb 19 13:34:01 crc kubenswrapper[4861]: I0219 13:34:01.812078 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hxvnj"
Feb 19 13:34:01 crc kubenswrapper[4861]: I0219 13:34:01.865132 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r26hw\" (UniqueName: \"kubernetes.io/projected/30a1e51d-a60d-4f7f-8300-9ef99e3da2a6-kube-api-access-r26hw\") pod \"glance-b129-account-create-update-dpq4s\" (UID: \"30a1e51d-a60d-4f7f-8300-9ef99e3da2a6\") " pod="openstack/glance-b129-account-create-update-dpq4s"
Feb 19 13:34:01 crc kubenswrapper[4861]: I0219 13:34:01.865281 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30a1e51d-a60d-4f7f-8300-9ef99e3da2a6-operator-scripts\") pod \"glance-b129-account-create-update-dpq4s\" (UID: \"30a1e51d-a60d-4f7f-8300-9ef99e3da2a6\") " pod="openstack/glance-b129-account-create-update-dpq4s"
Feb 19 13:34:01 crc kubenswrapper[4861]: I0219 13:34:01.866097 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30a1e51d-a60d-4f7f-8300-9ef99e3da2a6-operator-scripts\") pod \"glance-b129-account-create-update-dpq4s\" (UID: \"30a1e51d-a60d-4f7f-8300-9ef99e3da2a6\") " pod="openstack/glance-b129-account-create-update-dpq4s"
Feb 19 13:34:01 crc kubenswrapper[4861]: I0219 13:34:01.871828 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b129-account-create-update-4fk4r"]
Feb 19 13:34:01 crc kubenswrapper[4861]: I0219 13:34:01.909093 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r26hw\" (UniqueName: \"kubernetes.io/projected/30a1e51d-a60d-4f7f-8300-9ef99e3da2a6-kube-api-access-r26hw\") pod \"glance-b129-account-create-update-dpq4s\" (UID: \"30a1e51d-a60d-4f7f-8300-9ef99e3da2a6\") " pod="openstack/glance-b129-account-create-update-dpq4s"
Feb 19 13:34:01 crc kubenswrapper[4861]: I0219 13:34:01.937116 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-b86e-account-create-update-mp5sk"]
Feb 19 13:34:01 crc kubenswrapper[4861]: I0219 13:34:01.959232 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b129-account-create-update-dpq4s"
Feb 19 13:34:01 crc kubenswrapper[4861]: I0219 13:34:01.975747 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6mrv\" (UniqueName: \"kubernetes.io/projected/3aca02ec-903d-4ddd-a7df-25d323ed6dc1-kube-api-access-v6mrv\") pod \"placement-b86e-account-create-update-mp5sk\" (UID: \"3aca02ec-903d-4ddd-a7df-25d323ed6dc1\") " pod="openstack/placement-b86e-account-create-update-mp5sk"
Feb 19 13:34:01 crc kubenswrapper[4861]: I0219 13:34:01.975797 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aca02ec-903d-4ddd-a7df-25d323ed6dc1-operator-scripts\") pod \"placement-b86e-account-create-update-mp5sk\" (UID: \"3aca02ec-903d-4ddd-a7df-25d323ed6dc1\") " pod="openstack/placement-b86e-account-create-update-mp5sk"
Feb 19 13:34:01 crc kubenswrapper[4861]: I0219 13:34:01.981436 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b86e-account-create-update-mp5sk"
Feb 19 13:34:01 crc kubenswrapper[4861]: I0219 13:34:01.986141 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.034761 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb" path="/var/lib/kubelet/pods/c5bc6a04-8a3f-47e4-b3d6-faa4d60914fb/volumes"
Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.035328 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f31587d6-13d9-4e37-9273-423ee0fa9684" path="/var/lib/kubelet/pods/f31587d6-13d9-4e37-9273-423ee0fa9684/volumes"
Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.035860 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-cd89-account-create-update-bnqdw"]
Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.039604 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cd89-account-create-update-bnqdw"
Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.044509 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-81eb-account-create-update-g7j6p"]
Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.045986 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-81eb-account-create-update-g7j6p"
Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.059153 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.059383 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.059497 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cd89-account-create-update-bnqdw"]
Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.073365 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b86e-account-create-update-mp5sk"]
Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.077295 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aca02ec-903d-4ddd-a7df-25d323ed6dc1-operator-scripts\") pod \"placement-b86e-account-create-update-mp5sk\" (UID: \"3aca02ec-903d-4ddd-a7df-25d323ed6dc1\") " pod="openstack/placement-b86e-account-create-update-mp5sk"
Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.077478 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6mrv\" (UniqueName: \"kubernetes.io/projected/3aca02ec-903d-4ddd-a7df-25d323ed6dc1-kube-api-access-v6mrv\") pod \"placement-b86e-account-create-update-mp5sk\" (UID: \"3aca02ec-903d-4ddd-a7df-25d323ed6dc1\") " pod="openstack/placement-b86e-account-create-update-mp5sk"
Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.082627 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.088540 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aca02ec-903d-4ddd-a7df-25d323ed6dc1-operator-scripts\") pod \"placement-b86e-account-create-update-mp5sk\" (UID: \"3aca02ec-903d-4ddd-a7df-25d323ed6dc1\") " pod="openstack/placement-b86e-account-create-update-mp5sk"
Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.097481 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-81eb-account-create-update-g7j6p"]
Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.104099 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.104383 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="4524053f-0367-4216-8916-1b5315dbe8d8" containerName="openstackclient" containerID="cri-o://5ed4b552bcd4114fdac5d241534550b1d5ddde0aea8ceaa83bb843a83f153d3e" gracePeriod=2
Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.131907 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6mrv\" (UniqueName: \"kubernetes.io/projected/3aca02ec-903d-4ddd-a7df-25d323ed6dc1-kube-api-access-v6mrv\") pod \"placement-b86e-account-create-update-mp5sk\" (UID: \"3aca02ec-903d-4ddd-a7df-25d323ed6dc1\") " pod="openstack/placement-b86e-account-create-update-mp5sk"
Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.153744 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-5b8e-account-create-update-5mwlh"]
Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.155141 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5b8e-account-create-update-5mwlh"
Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.172334 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.182779 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/279a265d-0cc8-45af-82ba-b8a485796fae-operator-scripts\") pod \"neutron-cd89-account-create-update-bnqdw\" (UID: \"279a265d-0cc8-45af-82ba-b8a485796fae\") " pod="openstack/neutron-cd89-account-create-update-bnqdw"
Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.182852 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afc3eb95-9d8a-449a-937d-7db3d3dd3d69-operator-scripts\") pod \"barbican-81eb-account-create-update-g7j6p\" (UID: \"afc3eb95-9d8a-449a-937d-7db3d3dd3d69\") " pod="openstack/barbican-81eb-account-create-update-g7j6p"
Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.182879 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89kqs\" (UniqueName: \"kubernetes.io/projected/afc3eb95-9d8a-449a-937d-7db3d3dd3d69-kube-api-access-89kqs\") pod \"barbican-81eb-account-create-update-g7j6p\" (UID: \"afc3eb95-9d8a-449a-937d-7db3d3dd3d69\") " pod="openstack/barbican-81eb-account-create-update-g7j6p"
Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.183010 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf4bv\" (UniqueName: \"kubernetes.io/projected/279a265d-0cc8-45af-82ba-b8a485796fae-kube-api-access-bf4bv\") pod \"neutron-cd89-account-create-update-bnqdw\" (UID: \"279a265d-0cc8-45af-82ba-b8a485796fae\") "
pod="openstack/neutron-cd89-account-create-update-bnqdw" Feb 19 13:34:02 crc kubenswrapper[4861]: E0219 13:34:02.183373 4861 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 19 13:34:02 crc kubenswrapper[4861]: E0219 13:34:02.183442 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fe64a04b-1266-4b02-88e5-191f4a974422-config-data podName:fe64a04b-1266-4b02-88e5-191f4a974422 nodeName:}" failed. No retries permitted until 2026-02-19 13:34:02.683402572 +0000 UTC m=+1457.344506050 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/fe64a04b-1266-4b02-88e5-191f4a974422-config-data") pod "rabbitmq-cell1-server-0" (UID: "fe64a04b-1266-4b02-88e5-191f4a974422") : configmap "rabbitmq-cell1-config-data" not found Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.211659 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.242517 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5b8e-account-create-update-5mwlh"] Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.273493 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cd89-account-create-update-n89tm"] Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.285348 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd101633-19ce-4277-8ab9-b19319febd08-operator-scripts\") pod \"cinder-5b8e-account-create-update-5mwlh\" (UID: \"bd101633-19ce-4277-8ab9-b19319febd08\") " pod="openstack/cinder-5b8e-account-create-update-5mwlh" Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.285527 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bf4bv\" (UniqueName: \"kubernetes.io/projected/279a265d-0cc8-45af-82ba-b8a485796fae-kube-api-access-bf4bv\") pod \"neutron-cd89-account-create-update-bnqdw\" (UID: \"279a265d-0cc8-45af-82ba-b8a485796fae\") " pod="openstack/neutron-cd89-account-create-update-bnqdw" Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.285657 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/279a265d-0cc8-45af-82ba-b8a485796fae-operator-scripts\") pod \"neutron-cd89-account-create-update-bnqdw\" (UID: \"279a265d-0cc8-45af-82ba-b8a485796fae\") " pod="openstack/neutron-cd89-account-create-update-bnqdw" Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.285733 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afc3eb95-9d8a-449a-937d-7db3d3dd3d69-operator-scripts\") pod \"barbican-81eb-account-create-update-g7j6p\" (UID: \"afc3eb95-9d8a-449a-937d-7db3d3dd3d69\") " pod="openstack/barbican-81eb-account-create-update-g7j6p" Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.285801 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89kqs\" (UniqueName: \"kubernetes.io/projected/afc3eb95-9d8a-449a-937d-7db3d3dd3d69-kube-api-access-89kqs\") pod \"barbican-81eb-account-create-update-g7j6p\" (UID: \"afc3eb95-9d8a-449a-937d-7db3d3dd3d69\") " pod="openstack/barbican-81eb-account-create-update-g7j6p" Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.285951 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bcdd\" (UniqueName: \"kubernetes.io/projected/bd101633-19ce-4277-8ab9-b19319febd08-kube-api-access-5bcdd\") pod \"cinder-5b8e-account-create-update-5mwlh\" (UID: \"bd101633-19ce-4277-8ab9-b19319febd08\") " pod="openstack/cinder-5b8e-account-create-update-5mwlh" Feb 19 
13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.287764 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-cd89-account-create-update-n89tm"] Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.294275 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/279a265d-0cc8-45af-82ba-b8a485796fae-operator-scripts\") pod \"neutron-cd89-account-create-update-bnqdw\" (UID: \"279a265d-0cc8-45af-82ba-b8a485796fae\") " pod="openstack/neutron-cd89-account-create-update-bnqdw" Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.298126 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afc3eb95-9d8a-449a-937d-7db3d3dd3d69-operator-scripts\") pod \"barbican-81eb-account-create-update-g7j6p\" (UID: \"afc3eb95-9d8a-449a-937d-7db3d3dd3d69\") " pod="openstack/barbican-81eb-account-create-update-g7j6p" Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.316690 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.317644 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="2c645ced-1599-4f62-ab9b-0e109a7e02c3" containerName="openstack-network-exporter" containerID="cri-o://0285cd049de2db17e1e6886951ab9077d608a79a2d4f313cbf4d2ecae7759cb4" gracePeriod=300 Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.337706 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-b86e-account-create-update-mp5sk" Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.342249 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf4bv\" (UniqueName: \"kubernetes.io/projected/279a265d-0cc8-45af-82ba-b8a485796fae-kube-api-access-bf4bv\") pod \"neutron-cd89-account-create-update-bnqdw\" (UID: \"279a265d-0cc8-45af-82ba-b8a485796fae\") " pod="openstack/neutron-cd89-account-create-update-bnqdw" Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.343175 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89kqs\" (UniqueName: \"kubernetes.io/projected/afc3eb95-9d8a-449a-937d-7db3d3dd3d69-kube-api-access-89kqs\") pod \"barbican-81eb-account-create-update-g7j6p\" (UID: \"afc3eb95-9d8a-449a-937d-7db3d3dd3d69\") " pod="openstack/barbican-81eb-account-create-update-g7j6p" Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.352013 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-b86e-account-create-update-24h55"] Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.377548 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-b86e-account-create-update-24h55"] Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.387616 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bcdd\" (UniqueName: \"kubernetes.io/projected/bd101633-19ce-4277-8ab9-b19319febd08-kube-api-access-5bcdd\") pod \"cinder-5b8e-account-create-update-5mwlh\" (UID: \"bd101633-19ce-4277-8ab9-b19319febd08\") " pod="openstack/cinder-5b8e-account-create-update-5mwlh" Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.387671 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd101633-19ce-4277-8ab9-b19319febd08-operator-scripts\") pod 
\"cinder-5b8e-account-create-update-5mwlh\" (UID: \"bd101633-19ce-4277-8ab9-b19319febd08\") " pod="openstack/cinder-5b8e-account-create-update-5mwlh" Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.388448 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd101633-19ce-4277-8ab9-b19319febd08-operator-scripts\") pod \"cinder-5b8e-account-create-update-5mwlh\" (UID: \"bd101633-19ce-4277-8ab9-b19319febd08\") " pod="openstack/cinder-5b8e-account-create-update-5mwlh" Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.415117 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.415767 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="8a4affa6-9b49-416a-9887-fdffab32916c" containerName="openstack-network-exporter" containerID="cri-o://4ee4dd2ad3f07f52f88fc12ff7db447e44a9a47a1087d34bbe4cd277664e32c4" gracePeriod=300 Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.434213 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.434632 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="92ee6ab7-feb7-4dbd-881a-b8250652aef9" containerName="ovn-northd" containerID="cri-o://37b5efa091b40af32f79760c01361587fb0151c53e6b051790c4ce833de471f6" gracePeriod=30 Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.435049 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="92ee6ab7-feb7-4dbd-881a-b8250652aef9" containerName="openstack-network-exporter" containerID="cri-o://66481bc6acbb5d53d8e31bc7da07ae265932f327cf54cd5cb7411c629205684f" gracePeriod=30 Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.447300 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bcdd\" (UniqueName: \"kubernetes.io/projected/bd101633-19ce-4277-8ab9-b19319febd08-kube-api-access-5bcdd\") pod \"cinder-5b8e-account-create-update-5mwlh\" (UID: \"bd101633-19ce-4277-8ab9-b19319febd08\") " pod="openstack/cinder-5b8e-account-create-update-5mwlh" Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.460485 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-81eb-account-create-update-7g4tb"] Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.461604 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cd89-account-create-update-bnqdw" Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.462188 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-81eb-account-create-update-g7j6p" Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.481615 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-81eb-account-create-update-7g4tb"] Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.514705 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-5b8e-account-create-update-5mwlh" Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.533294 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-5b8e-account-create-update-2jw4d"] Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.572459 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-8q5tv"] Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.610900 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-5b8e-account-create-update-2jw4d"] Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.640345 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-8q5tv"] Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.656954 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="2c645ced-1599-4f62-ab9b-0e109a7e02c3" containerName="ovsdbserver-nb" containerID="cri-o://295a17d0f366d47e60a15aeae0d2bc62d87b6fb0521772cb5ddf7947d3727a19" gracePeriod=300 Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.661851 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="8a4affa6-9b49-416a-9887-fdffab32916c" containerName="ovsdbserver-sb" containerID="cri-o://5c28b6840c393e68a345c69df7ef556c1f82a9902c2ad4d548998a391a2216a8" gracePeriod=300 Feb 19 13:34:02 crc kubenswrapper[4861]: E0219 13:34:02.694950 4861 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 19 13:34:02 crc kubenswrapper[4861]: E0219 13:34:02.695026 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fe64a04b-1266-4b02-88e5-191f4a974422-config-data podName:fe64a04b-1266-4b02-88e5-191f4a974422 nodeName:}" failed. 
No retries permitted until 2026-02-19 13:34:03.695009934 +0000 UTC m=+1458.356113162 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/fe64a04b-1266-4b02-88e5-191f4a974422-config-data") pod "rabbitmq-cell1-server-0" (UID: "fe64a04b-1266-4b02-88e5-191f4a974422") : configmap "rabbitmq-cell1-config-data" not found Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.741960 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-ae68-account-create-update-xbl4c"] Feb 19 13:34:02 crc kubenswrapper[4861]: E0219 13:34:02.742568 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4524053f-0367-4216-8916-1b5315dbe8d8" containerName="openstackclient" Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.742726 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="4524053f-0367-4216-8916-1b5315dbe8d8" containerName="openstackclient" Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.742988 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="4524053f-0367-4216-8916-1b5315dbe8d8" containerName="openstackclient" Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.743759 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ae68-account-create-update-xbl4c" Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.866899 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.885155 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ae68-account-create-update-xbl4c"] Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.974451 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-9594-account-create-update-bzndk"] Feb 19 13:34:02 crc kubenswrapper[4861]: I0219 13:34:02.976929 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-9594-account-create-update-bzndk" Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:02.998270 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1eb2f892-b8f2-423d-b7fa-bed98cf7683a-operator-scripts\") pod \"nova-api-ae68-account-create-update-xbl4c\" (UID: \"1eb2f892-b8f2-423d-b7fa-bed98cf7683a\") " pod="openstack/nova-api-ae68-account-create-update-xbl4c" Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.001873 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-594dr\" (UniqueName: \"kubernetes.io/projected/1eb2f892-b8f2-423d-b7fa-bed98cf7683a-kube-api-access-594dr\") pod \"nova-api-ae68-account-create-update-xbl4c\" (UID: \"1eb2f892-b8f2-423d-b7fa-bed98cf7683a\") " pod="openstack/nova-api-ae68-account-create-update-xbl4c" Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.006864 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.012792 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-6dfc-account-create-update-c6fh8"] Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.046845 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6dfc-account-create-update-c6fh8" Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.052191 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.052392 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9594-account-create-update-bzndk"] Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.091042 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6dfc-account-create-update-c6fh8"] Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.104613 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkpn2\" (UniqueName: \"kubernetes.io/projected/722c1574-0ce6-4d70-87ee-da04a01d79ad-kube-api-access-lkpn2\") pod \"nova-cell0-9594-account-create-update-bzndk\" (UID: \"722c1574-0ce6-4d70-87ee-da04a01d79ad\") " pod="openstack/nova-cell0-9594-account-create-update-bzndk" Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.104697 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-594dr\" (UniqueName: \"kubernetes.io/projected/1eb2f892-b8f2-423d-b7fa-bed98cf7683a-kube-api-access-594dr\") pod \"nova-api-ae68-account-create-update-xbl4c\" (UID: \"1eb2f892-b8f2-423d-b7fa-bed98cf7683a\") " pod="openstack/nova-api-ae68-account-create-update-xbl4c" Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.104785 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1eb2f892-b8f2-423d-b7fa-bed98cf7683a-operator-scripts\") pod \"nova-api-ae68-account-create-update-xbl4c\" (UID: \"1eb2f892-b8f2-423d-b7fa-bed98cf7683a\") " pod="openstack/nova-api-ae68-account-create-update-xbl4c" Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.104809 4861 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/722c1574-0ce6-4d70-87ee-da04a01d79ad-operator-scripts\") pod \"nova-cell0-9594-account-create-update-bzndk\" (UID: \"722c1574-0ce6-4d70-87ee-da04a01d79ad\") " pod="openstack/nova-cell0-9594-account-create-update-bzndk" Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.117718 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1eb2f892-b8f2-423d-b7fa-bed98cf7683a-operator-scripts\") pod \"nova-api-ae68-account-create-update-xbl4c\" (UID: \"1eb2f892-b8f2-423d-b7fa-bed98cf7683a\") " pod="openstack/nova-api-ae68-account-create-update-xbl4c" Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.131299 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.155285 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-594dr\" (UniqueName: \"kubernetes.io/projected/1eb2f892-b8f2-423d-b7fa-bed98cf7683a-kube-api-access-594dr\") pod \"nova-api-ae68-account-create-update-xbl4c\" (UID: \"1eb2f892-b8f2-423d-b7fa-bed98cf7683a\") " pod="openstack/nova-api-ae68-account-create-update-xbl4c" Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.184341 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-kwkgq"] Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.208696 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/722c1574-0ce6-4d70-87ee-da04a01d79ad-operator-scripts\") pod \"nova-cell0-9594-account-create-update-bzndk\" (UID: \"722c1574-0ce6-4d70-87ee-da04a01d79ad\") " pod="openstack/nova-cell0-9594-account-create-update-bzndk" Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 
13:34:03.208783 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13af2f92-2b45-4bca-925f-91e0d4102a56-operator-scripts\") pod \"nova-cell1-6dfc-account-create-update-c6fh8\" (UID: \"13af2f92-2b45-4bca-925f-91e0d4102a56\") " pod="openstack/nova-cell1-6dfc-account-create-update-c6fh8" Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.208874 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmhsh\" (UniqueName: \"kubernetes.io/projected/13af2f92-2b45-4bca-925f-91e0d4102a56-kube-api-access-bmhsh\") pod \"nova-cell1-6dfc-account-create-update-c6fh8\" (UID: \"13af2f92-2b45-4bca-925f-91e0d4102a56\") " pod="openstack/nova-cell1-6dfc-account-create-update-c6fh8" Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.208907 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkpn2\" (UniqueName: \"kubernetes.io/projected/722c1574-0ce6-4d70-87ee-da04a01d79ad-kube-api-access-lkpn2\") pod \"nova-cell0-9594-account-create-update-bzndk\" (UID: \"722c1574-0ce6-4d70-87ee-da04a01d79ad\") " pod="openstack/nova-cell0-9594-account-create-update-bzndk" Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.213981 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/722c1574-0ce6-4d70-87ee-da04a01d79ad-operator-scripts\") pod \"nova-cell0-9594-account-create-update-bzndk\" (UID: \"722c1574-0ce6-4d70-87ee-da04a01d79ad\") " pod="openstack/nova-cell0-9594-account-create-update-bzndk" Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.233515 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-kwkgq"] Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.245470 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-api-ae68-account-create-update-qhg57"] Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.251907 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkpn2\" (UniqueName: \"kubernetes.io/projected/722c1574-0ce6-4d70-87ee-da04a01d79ad-kube-api-access-lkpn2\") pod \"nova-cell0-9594-account-create-update-bzndk\" (UID: \"722c1574-0ce6-4d70-87ee-da04a01d79ad\") " pod="openstack/nova-cell0-9594-account-create-update-bzndk" Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.262261 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ae68-account-create-update-xbl4c" Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.265206 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-ae68-account-create-update-qhg57"] Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.289243 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-f9zrp"] Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.290448 4861 generic.go:334] "Generic (PLEG): container finished" podID="92ee6ab7-feb7-4dbd-881a-b8250652aef9" containerID="66481bc6acbb5d53d8e31bc7da07ae265932f327cf54cd5cb7411c629205684f" exitCode=2 Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.290490 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"92ee6ab7-feb7-4dbd-881a-b8250652aef9","Type":"ContainerDied","Data":"66481bc6acbb5d53d8e31bc7da07ae265932f327cf54cd5cb7411c629205684f"} Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.293155 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_8a4affa6-9b49-416a-9887-fdffab32916c/ovsdbserver-sb/0.log" Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.293176 4861 generic.go:334] "Generic (PLEG): container finished" podID="8a4affa6-9b49-416a-9887-fdffab32916c" 
containerID="4ee4dd2ad3f07f52f88fc12ff7db447e44a9a47a1087d34bbe4cd277664e32c4" exitCode=2 Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.293189 4861 generic.go:334] "Generic (PLEG): container finished" podID="8a4affa6-9b49-416a-9887-fdffab32916c" containerID="5c28b6840c393e68a345c69df7ef556c1f82a9902c2ad4d548998a391a2216a8" exitCode=143 Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.293217 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"8a4affa6-9b49-416a-9887-fdffab32916c","Type":"ContainerDied","Data":"4ee4dd2ad3f07f52f88fc12ff7db447e44a9a47a1087d34bbe4cd277664e32c4"} Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.293262 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"8a4affa6-9b49-416a-9887-fdffab32916c","Type":"ContainerDied","Data":"5c28b6840c393e68a345c69df7ef556c1f82a9902c2ad4d548998a391a2216a8"} Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.311863 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13af2f92-2b45-4bca-925f-91e0d4102a56-operator-scripts\") pod \"nova-cell1-6dfc-account-create-update-c6fh8\" (UID: \"13af2f92-2b45-4bca-925f-91e0d4102a56\") " pod="openstack/nova-cell1-6dfc-account-create-update-c6fh8" Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.311931 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmhsh\" (UniqueName: \"kubernetes.io/projected/13af2f92-2b45-4bca-925f-91e0d4102a56-kube-api-access-bmhsh\") pod \"nova-cell1-6dfc-account-create-update-c6fh8\" (UID: \"13af2f92-2b45-4bca-925f-91e0d4102a56\") " pod="openstack/nova-cell1-6dfc-account-create-update-c6fh8" Feb 19 13:34:03 crc kubenswrapper[4861]: E0219 13:34:03.322723 4861 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Feb 
19 13:34:03 crc kubenswrapper[4861]: E0219 13:34:03.322805 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13af2f92-2b45-4bca-925f-91e0d4102a56-operator-scripts podName:13af2f92-2b45-4bca-925f-91e0d4102a56 nodeName:}" failed. No retries permitted until 2026-02-19 13:34:03.822784713 +0000 UTC m=+1458.483887941 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/13af2f92-2b45-4bca-925f-91e0d4102a56-operator-scripts") pod "nova-cell1-6dfc-account-create-update-c6fh8" (UID: "13af2f92-2b45-4bca-925f-91e0d4102a56") : configmap "openstack-cell1-scripts" not found Feb 19 13:34:03 crc kubenswrapper[4861]: E0219 13:34:03.323232 4861 projected.go:194] Error preparing data for projected volume kube-api-access-bmhsh for pod openstack/nova-cell1-6dfc-account-create-update-c6fh8: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Feb 19 13:34:03 crc kubenswrapper[4861]: E0219 13:34:03.323273 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13af2f92-2b45-4bca-925f-91e0d4102a56-kube-api-access-bmhsh podName:13af2f92-2b45-4bca-925f-91e0d4102a56 nodeName:}" failed. No retries permitted until 2026-02-19 13:34:03.823266056 +0000 UTC m=+1458.484369284 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bmhsh" (UniqueName: "kubernetes.io/projected/13af2f92-2b45-4bca-925f-91e0d4102a56-kube-api-access-bmhsh") pod "nova-cell1-6dfc-account-create-update-c6fh8" (UID: "13af2f92-2b45-4bca-925f-91e0d4102a56") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.323571 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2c645ced-1599-4f62-ab9b-0e109a7e02c3/ovsdbserver-nb/0.log" Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.323614 4861 generic.go:334] "Generic (PLEG): container finished" podID="2c645ced-1599-4f62-ab9b-0e109a7e02c3" containerID="0285cd049de2db17e1e6886951ab9077d608a79a2d4f313cbf4d2ecae7759cb4" exitCode=2 Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.323657 4861 generic.go:334] "Generic (PLEG): container finished" podID="2c645ced-1599-4f62-ab9b-0e109a7e02c3" containerID="295a17d0f366d47e60a15aeae0d2bc62d87b6fb0521772cb5ddf7947d3727a19" exitCode=143 Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.323748 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2c645ced-1599-4f62-ab9b-0e109a7e02c3","Type":"ContainerDied","Data":"0285cd049de2db17e1e6886951ab9077d608a79a2d4f313cbf4d2ecae7759cb4"} Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.323776 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2c645ced-1599-4f62-ab9b-0e109a7e02c3","Type":"ContainerDied","Data":"295a17d0f366d47e60a15aeae0d2bc62d87b6fb0521772cb5ddf7947d3727a19"} Feb 19 13:34:03 crc kubenswrapper[4861]: E0219 13:34:03.325472 4861 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 19 13:34:03 crc kubenswrapper[4861]: E0219 13:34:03.325550 4861 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/b117524a-eaad-4666-9e0e-bda909b2ad30-config-data podName:b117524a-eaad-4666-9e0e-bda909b2ad30 nodeName:}" failed. No retries permitted until 2026-02-19 13:34:03.825531047 +0000 UTC m=+1458.486634275 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b117524a-eaad-4666-9e0e-bda909b2ad30-config-data") pod "rabbitmq-server-0" (UID: "b117524a-eaad-4666-9e0e-bda909b2ad30") : configmap "rabbitmq-config-data" not found Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.341848 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-f9zrp"] Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.343170 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hxvnj" event={"ID":"ff11b43a-9b7c-42c8-afac-6f66908975dc","Type":"ContainerStarted","Data":"5c82cea8953d58d2faeed28c754ecc5b9c2f27d71b990cffa6cb46748abd6d56"} Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.389063 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-9594-account-create-update-bzndk" Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.396718 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-zc7qs"] Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.411913 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-zc7qs"] Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.421699 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-hblgk"] Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.428520 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-hblgk"] Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.436592 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-9594-account-create-update-vdzdj"] Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.453982 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-d4skq"] Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.469093 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-9594-account-create-update-vdzdj"] Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.485690 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-8bt78"] Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.485939 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-8bt78" podUID="99cc42c8-6836-4d0e-8d13-6b0a44b2583f" containerName="openstack-network-exporter" containerID="cri-o://9ecb2a5a0a2e89a60f5cb3038c9b421573c3c77bb1ddc2f92c091a4a5d3709ac" gracePeriod=30 Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.492684 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-q996h"] Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.518909 4861 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hxvnj"] Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.536747 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.537037 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bce14944-29de-44e7-9ad4-bb056cc6d656" containerName="glance-log" containerID="cri-o://a630b33298ce3a0c3f99e814f4e69c3048e04eafde31a67dbaedf03ba600019a" gracePeriod=30 Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.537201 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bce14944-29de-44e7-9ad4-bb056cc6d656" containerName="glance-httpd" containerID="cri-o://87675e94528e8f6860c18ad3e351c725b983857e00216b5245b6f315c839cf6f" gracePeriod=30 Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.556512 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-6dfc-account-create-update-n5s9g"] Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.571673 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-6dfc-account-create-update-n5s9g"] Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.587160 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58f6456c9f-znr64"] Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.587732 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58f6456c9f-znr64" podUID="e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0" containerName="dnsmasq-dns" containerID="cri-o://c09621d9ce982c78679bd413fb7cb9573fc6a8affe3cae2b67c7d40d0b7c5cd2" gracePeriod=10 Feb 19 13:34:03 crc kubenswrapper[4861]: E0219 13:34:03.650637 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 
13:34:03 crc kubenswrapper[4861]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 19 13:34:03 crc kubenswrapper[4861]: Feb 19 13:34:03 crc kubenswrapper[4861]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 19 13:34:03 crc kubenswrapper[4861]: Feb 19 13:34:03 crc kubenswrapper[4861]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 19 13:34:03 crc kubenswrapper[4861]: Feb 19 13:34:03 crc kubenswrapper[4861]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 19 13:34:03 crc kubenswrapper[4861]: Feb 19 13:34:03 crc kubenswrapper[4861]: if [ -n "glance" ]; then Feb 19 13:34:03 crc kubenswrapper[4861]: GRANT_DATABASE="glance" Feb 19 13:34:03 crc kubenswrapper[4861]: else Feb 19 13:34:03 crc kubenswrapper[4861]: GRANT_DATABASE="*" Feb 19 13:34:03 crc kubenswrapper[4861]: fi Feb 19 13:34:03 crc kubenswrapper[4861]: Feb 19 13:34:03 crc kubenswrapper[4861]: # going for maximum compatibility here: Feb 19 13:34:03 crc kubenswrapper[4861]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 19 13:34:03 crc kubenswrapper[4861]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 19 13:34:03 crc kubenswrapper[4861]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 19 13:34:03 crc kubenswrapper[4861]: # support updates Feb 19 13:34:03 crc kubenswrapper[4861]: Feb 19 13:34:03 crc kubenswrapper[4861]: $MYSQL_CMD < logger="UnhandledError" Feb 19 13:34:03 crc kubenswrapper[4861]: E0219 13:34:03.651759 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-b129-account-create-update-dpq4s" podUID="30a1e51d-a60d-4f7f-8300-9ef99e3da2a6" Feb 19 13:34:03 crc kubenswrapper[4861]: E0219 13:34:03.688856 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c28b6840c393e68a345c69df7ef556c1f82a9902c2ad4d548998a391a2216a8 is running failed: container process not found" containerID="5c28b6840c393e68a345c69df7ef556c1f82a9902c2ad4d548998a391a2216a8" cmd=["/usr/bin/pidof","ovsdb-server"] Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.689045 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.689562 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="account-server" containerID="cri-o://2923944d19b38687e829c0ad91d45d3fa58c574c1be4562c0a63aa47c65877b8" gracePeriod=30 Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.689973 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="swift-recon-cron" containerID="cri-o://0ccff9712c241358663a5e8a3f82a05a5ae9907961c0054adefa2af45c1b18a1" gracePeriod=30 Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.690028 4861 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="rsync" containerID="cri-o://e8e2594239d50333d43b08ef764dd15a54e631448517f1f5b7fde345bc50b2f2" gracePeriod=30 Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.690074 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="object-expirer" containerID="cri-o://ef03fa4021e6c6150fbd214140c01f05e06bcc41b0e5602e90af2b70524e58cb" gracePeriod=30 Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.690106 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="object-updater" containerID="cri-o://6c5b796933349019a3e6caaca60e24876d6caee6a8db308216a144cc2b4550b5" gracePeriod=30 Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.690138 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="object-auditor" containerID="cri-o://da5f6285aca4973ac5f8147c034649ed6d304113cb58cb64d2cca749a0aa466b" gracePeriod=30 Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.690184 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="object-replicator" containerID="cri-o://bb2e6ac221defcb0c3b930773236fdbf8bc57fd77635bc49c98f98a881dddf14" gracePeriod=30 Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.690229 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="object-server" 
containerID="cri-o://5695ae030dccbdeb118599a52129cc9b7894cfbff564817156a7fdbf305aa0f0" gracePeriod=30 Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.690285 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="container-updater" containerID="cri-o://471372c27c39aaaf08c9ce1b7cd61b51c8ece5bab05fb8d039a85d6af20abc96" gracePeriod=30 Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.690325 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="container-auditor" containerID="cri-o://b01b12c10d17a6ede4967ef04b12864fea0898dee251a363634450764aacdd72" gracePeriod=30 Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.690367 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="container-replicator" containerID="cri-o://f9f19bdef3fa838ce4b4f8189100aaf397a995f17cf865eefb4002eeec03180e" gracePeriod=30 Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.690408 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="container-server" containerID="cri-o://65e1cc2bfc85c23034b910f4d14189e38743412528a40dc9a26a2ca6c1041afb" gracePeriod=30 Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.690469 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="account-reaper" containerID="cri-o://70048cfeceaa1bd3b11260d15e755776cfbd5fd6d7ef0d9d90e3d8c6f4261932" gracePeriod=30 Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.690515 4861 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/swift-storage-0" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="account-auditor" containerID="cri-o://39d17545ee3cebeec079609c07c099110ffc65d4ddb3e462d92e98a8b967e616" gracePeriod=30 Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.690562 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="account-replicator" containerID="cri-o://74434874608f684b61cedbd97fbdb2a90894f4bed2db7e66ba332e8b322c05b2" gracePeriod=30 Feb 19 13:34:03 crc kubenswrapper[4861]: E0219 13:34:03.691035 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c28b6840c393e68a345c69df7ef556c1f82a9902c2ad4d548998a391a2216a8 is running failed: container process not found" containerID="5c28b6840c393e68a345c69df7ef556c1f82a9902c2ad4d548998a391a2216a8" cmd=["/usr/bin/pidof","ovsdb-server"] Feb 19 13:34:03 crc kubenswrapper[4861]: E0219 13:34:03.719685 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c28b6840c393e68a345c69df7ef556c1f82a9902c2ad4d548998a391a2216a8 is running failed: container process not found" containerID="5c28b6840c393e68a345c69df7ef556c1f82a9902c2ad4d548998a391a2216a8" cmd=["/usr/bin/pidof","ovsdb-server"] Feb 19 13:34:03 crc kubenswrapper[4861]: E0219 13:34:03.719794 4861 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c28b6840c393e68a345c69df7ef556c1f82a9902c2ad4d548998a391a2216a8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-sb-0" podUID="8a4affa6-9b49-416a-9887-fdffab32916c" containerName="ovsdbserver-sb" Feb 19 13:34:03 crc kubenswrapper[4861]: E0219 13:34:03.724824 4861 configmap.go:193] Couldn't get 
configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 19 13:34:03 crc kubenswrapper[4861]: E0219 13:34:03.724881 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fe64a04b-1266-4b02-88e5-191f4a974422-config-data podName:fe64a04b-1266-4b02-88e5-191f4a974422 nodeName:}" failed. No retries permitted until 2026-02-19 13:34:05.724865278 +0000 UTC m=+1460.385968506 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/fe64a04b-1266-4b02-88e5-191f4a974422-config-data") pod "rabbitmq-cell1-server-0" (UID: "fe64a04b-1266-4b02-88e5-191f4a974422") : configmap "rabbitmq-cell1-config-data" not found Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.752543 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58f6456c9f-znr64" podUID="e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.202:5353: connect: connection refused" Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.769559 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.769844 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1da21583-02a3-4a99-a05c-976f017fb31c" containerName="glance-log" containerID="cri-o://1a80821a8e4670f6f32d88965fc76093208185ba4852a863d5ea299f7223e873" gracePeriod=30 Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.770277 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1da21583-02a3-4a99-a05c-976f017fb31c" containerName="glance-httpd" containerID="cri-o://cf188110f03d910f2a512942393ddfa01853575e4682c8b6c95037df3b2b616f" gracePeriod=30 Feb 19 13:34:03 crc kubenswrapper[4861]: 
I0219 13:34:03.827611 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13af2f92-2b45-4bca-925f-91e0d4102a56-operator-scripts\") pod \"nova-cell1-6dfc-account-create-update-c6fh8\" (UID: \"13af2f92-2b45-4bca-925f-91e0d4102a56\") " pod="openstack/nova-cell1-6dfc-account-create-update-c6fh8" Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.827884 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmhsh\" (UniqueName: \"kubernetes.io/projected/13af2f92-2b45-4bca-925f-91e0d4102a56-kube-api-access-bmhsh\") pod \"nova-cell1-6dfc-account-create-update-c6fh8\" (UID: \"13af2f92-2b45-4bca-925f-91e0d4102a56\") " pod="openstack/nova-cell1-6dfc-account-create-update-c6fh8" Feb 19 13:34:03 crc kubenswrapper[4861]: E0219 13:34:03.828274 4861 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 19 13:34:03 crc kubenswrapper[4861]: E0219 13:34:03.828373 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b117524a-eaad-4666-9e0e-bda909b2ad30-config-data podName:b117524a-eaad-4666-9e0e-bda909b2ad30 nodeName:}" failed. No retries permitted until 2026-02-19 13:34:04.828355752 +0000 UTC m=+1459.489458980 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b117524a-eaad-4666-9e0e-bda909b2ad30-config-data") pod "rabbitmq-server-0" (UID: "b117524a-eaad-4666-9e0e-bda909b2ad30") : configmap "rabbitmq-config-data" not found Feb 19 13:34:03 crc kubenswrapper[4861]: E0219 13:34:03.828688 4861 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Feb 19 13:34:03 crc kubenswrapper[4861]: E0219 13:34:03.833606 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13af2f92-2b45-4bca-925f-91e0d4102a56-operator-scripts podName:13af2f92-2b45-4bca-925f-91e0d4102a56 nodeName:}" failed. No retries permitted until 2026-02-19 13:34:04.833589973 +0000 UTC m=+1459.494693201 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/13af2f92-2b45-4bca-925f-91e0d4102a56-operator-scripts") pod "nova-cell1-6dfc-account-create-update-c6fh8" (UID: "13af2f92-2b45-4bca-925f-91e0d4102a56") : configmap "openstack-cell1-scripts" not found Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.833960 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.834025 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:34:03 crc kubenswrapper[4861]: E0219 13:34:03.863598 4861 projected.go:194] Error preparing data for projected 
volume kube-api-access-bmhsh for pod openstack/nova-cell1-6dfc-account-create-update-c6fh8: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Feb 19 13:34:03 crc kubenswrapper[4861]: E0219 13:34:03.863669 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13af2f92-2b45-4bca-925f-91e0d4102a56-kube-api-access-bmhsh podName:13af2f92-2b45-4bca-925f-91e0d4102a56 nodeName:}" failed. No retries permitted until 2026-02-19 13:34:04.863647935 +0000 UTC m=+1459.524751163 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-bmhsh" (UniqueName: "kubernetes.io/projected/13af2f92-2b45-4bca-925f-91e0d4102a56-kube-api-access-bmhsh") pod "nova-cell1-6dfc-account-create-update-c6fh8" (UID: "13af2f92-2b45-4bca-925f-91e0d4102a56") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.890454 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-8c6n5"] Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.934925 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-8c6n5"] Feb 19 13:34:03 crc kubenswrapper[4861]: I0219 13:34:03.963453 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-tl2p4"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.008303 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0543bf80-4d09-4c45-897d-3b2ae4291861" path="/var/lib/kubelet/pods/0543bf80-4d09-4c45-897d-3b2ae4291861/volumes" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.009617 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="085bf4b3-5af6-47a0-93b3-0d604f524213" path="/var/lib/kubelet/pods/085bf4b3-5af6-47a0-93b3-0d604f524213/volumes" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.010555 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="0c15a88f-af04-496c-bc54-f001ba15580a" path="/var/lib/kubelet/pods/0c15a88f-af04-496c-bc54-f001ba15580a/volumes" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.011213 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48707538-eeb6-42d9-918f-6b22a07cae71" path="/var/lib/kubelet/pods/48707538-eeb6-42d9-918f-6b22a07cae71/volumes" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.016544 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d97e359-59fa-474e-9ee0-7306cf96cb15" path="/var/lib/kubelet/pods/5d97e359-59fa-474e-9ee0-7306cf96cb15/volumes" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.017673 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d9124e9-fceb-43bc-8199-73428ac7e733" path="/var/lib/kubelet/pods/7d9124e9-fceb-43bc-8199-73428ac7e733/volumes" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.018302 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9d690b0-57b6-4544-9181-32144adaaef5" path="/var/lib/kubelet/pods/a9d690b0-57b6-4544-9181-32144adaaef5/volumes" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.019991 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9f98e55-e80f-4615-b98a-fffbfc9d19f1" path="/var/lib/kubelet/pods/b9f98e55-e80f-4615-b98a-fffbfc9d19f1/volumes" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.022347 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c707a7f2-3143-4979-96e4-23177b810c9e" path="/var/lib/kubelet/pods/c707a7f2-3143-4979-96e4-23177b810c9e/volumes" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.029941 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbc6223b-76c7-40be-8245-81263bc7c6c6" path="/var/lib/kubelet/pods/cbc6223b-76c7-40be-8245-81263bc7c6c6/volumes" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.043646 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="cfab12c1-cdb5-415f-8290-4d057a940b1a" path="/var/lib/kubelet/pods/cfab12c1-cdb5-415f-8290-4d057a940b1a/volumes" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.044865 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2b04719-3c5c-48e9-b2d0-84e8111b020b" path="/var/lib/kubelet/pods/d2b04719-3c5c-48e9-b2d0-84e8111b020b/volumes" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.054032 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdd07d83-8801-49de-a338-879cea293629" path="/var/lib/kubelet/pods/fdd07d83-8801-49de-a338-879cea293629/volumes" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.055000 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-tl2p4"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.055043 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b129-account-create-update-dpq4s"] Feb 19 13:34:04 crc kubenswrapper[4861]: E0219 13:34:04.061363 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 13:34:04 crc kubenswrapper[4861]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: if [ -n "placement" ]; then Feb 19 13:34:04 crc kubenswrapper[4861]: GRANT_DATABASE="placement" Feb 19 13:34:04 crc 
kubenswrapper[4861]: else Feb 19 13:34:04 crc kubenswrapper[4861]: GRANT_DATABASE="*" Feb 19 13:34:04 crc kubenswrapper[4861]: fi Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: # going for maximum compatibility here: Feb 19 13:34:04 crc kubenswrapper[4861]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 19 13:34:04 crc kubenswrapper[4861]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 19 13:34:04 crc kubenswrapper[4861]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 19 13:34:04 crc kubenswrapper[4861]: # support updates Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: $MYSQL_CMD < logger="UnhandledError" Feb 19 13:34:04 crc kubenswrapper[4861]: E0219 13:34:04.073535 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-b86e-account-create-update-mp5sk" podUID="3aca02ec-903d-4ddd-a7df-25d323ed6dc1" Feb 19 13:34:04 crc kubenswrapper[4861]: E0219 13:34:04.134994 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 13:34:04 crc kubenswrapper[4861]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 19 13:34:04 crc 
kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: if [ -n "neutron" ]; then Feb 19 13:34:04 crc kubenswrapper[4861]: GRANT_DATABASE="neutron" Feb 19 13:34:04 crc kubenswrapper[4861]: else Feb 19 13:34:04 crc kubenswrapper[4861]: GRANT_DATABASE="*" Feb 19 13:34:04 crc kubenswrapper[4861]: fi Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: # going for maximum compatibility here: Feb 19 13:34:04 crc kubenswrapper[4861]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 19 13:34:04 crc kubenswrapper[4861]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 19 13:34:04 crc kubenswrapper[4861]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 19 13:34:04 crc kubenswrapper[4861]: # support updates Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: $MYSQL_CMD < logger="UnhandledError" Feb 19 13:34:04 crc kubenswrapper[4861]: E0219 13:34:04.146632 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"neutron-db-secret\\\" not found\"" pod="openstack/neutron-cd89-account-create-update-bnqdw" podUID="279a265d-0cc8-45af-82ba-b8a485796fae" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.177482 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-v5kmf"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.192598 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-v5kmf"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.229495 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-sqhgs"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.229542 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-sqhgs"] Feb 19 13:34:04 crc 
kubenswrapper[4861]: I0219 13:34:04.250966 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6799fd8d6-p6tpl"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.251348 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6799fd8d6-p6tpl" podUID="46d0ac5c-1d20-4b80-be1b-21ad2641b215" containerName="placement-log" containerID="cri-o://a6147d4413d0fab04021a29f6c8ca99f658d6f9b5f9f258fb48c889b282281d7" gracePeriod=30 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.251996 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6799fd8d6-p6tpl" podUID="46d0ac5c-1d20-4b80-be1b-21ad2641b215" containerName="placement-api" containerID="cri-o://17fcee271c2a499b801142f1f8bd906a26d2c54f3ca073b5f9002a5871100c7a" gracePeriod=30 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.282164 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b129-account-create-update-dpq4s"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.301902 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7f585bc76f-dg9rf"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.302151 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7f585bc76f-dg9rf" podUID="cdaa2d03-6ae0-405a-af42-499d99ec711d" containerName="neutron-api" containerID="cri-o://e596ff917ea1fb5095cf558e3c5f097ddc50829b4c61ec3a615a77087e4cd4bb" gracePeriod=30 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.302496 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7f585bc76f-dg9rf" podUID="cdaa2d03-6ae0-405a-af42-499d99ec711d" containerName="neutron-httpd" containerID="cri-o://4da13ead2ea9ec2a3cf985ce57e0d64f8641678421b9b3e6a3695e68e35cfeb4" gracePeriod=30 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.312680 4861 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.312906 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="b863561a-440f-4e92-a8f3-4786a24d0a5f" containerName="cinder-api-log" containerID="cri-o://08f5ede146101abfdbe72fa01b651ee0b64dd6fc80f2a9cb3fa76ff9918744f3" gracePeriod=30 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.313119 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="b863561a-440f-4e92-a8f3-4786a24d0a5f" containerName="cinder-api" containerID="cri-o://d730aafec31ebf1d1d4d0bbbdd71e711bc2fd55423001647b8861204d4936465" gracePeriod=30 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.345196 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-k5lwj"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.371461 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-k5lwj"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.383957 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-4qhvt"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.397251 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-mphtc"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.408662 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-679b4d4449-j6f75"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.408958 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-679b4d4449-j6f75" podUID="c3559fea-5929-4904-9be2-136f10ea1023" containerName="proxy-httpd" containerID="cri-o://2b9f559ff1565336f7300a263865547c2e19ce23a755a63c72cc55c3888d6f1a" gracePeriod=30 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.409039 4861 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-679b4d4449-j6f75" podUID="c3559fea-5929-4904-9be2-136f10ea1023" containerName="proxy-server" containerID="cri-o://a06e4f44d0e8390c8484158ea064a59374a08883feefb91477c32c4689fb3d14" gracePeriod=30 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.419271 4861 generic.go:334] "Generic (PLEG): container finished" podID="bce14944-29de-44e7-9ad4-bb056cc6d656" containerID="a630b33298ce3a0c3f99e814f4e69c3048e04eafde31a67dbaedf03ba600019a" exitCode=143 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.419339 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bce14944-29de-44e7-9ad4-bb056cc6d656","Type":"ContainerDied","Data":"a630b33298ce3a0c3f99e814f4e69c3048e04eafde31a67dbaedf03ba600019a"} Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.430540 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-5b8e-account-create-update-5mwlh"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.444300 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-mphtc"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.450432 4861 generic.go:334] "Generic (PLEG): container finished" podID="1da21583-02a3-4a99-a05c-976f017fb31c" containerID="1a80821a8e4670f6f32d88965fc76093208185ba4852a863d5ea299f7223e873" exitCode=143 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.450489 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1da21583-02a3-4a99-a05c-976f017fb31c","Type":"ContainerDied","Data":"1a80821a8e4670f6f32d88965fc76093208185ba4852a863d5ea299f7223e873"} Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.451843 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cd89-account-create-update-bnqdw" 
event={"ID":"279a265d-0cc8-45af-82ba-b8a485796fae","Type":"ContainerStarted","Data":"cd6db58d45893f044a771b75c49a1fff9b6d49879fb68fa4936a3c7d3c633863"} Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.460584 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-4qhvt"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.485470 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.485726 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ea376614-9f7c-4d27-aa0b-a0dba5c99a6a" containerName="cinder-scheduler" containerID="cri-o://55e9e83bff1da6a4f3c4c60dbe202de73f4077183db64ccdd0ed5fa347035067" gracePeriod=30 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.486147 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ea376614-9f7c-4d27-aa0b-a0dba5c99a6a" containerName="probe" containerID="cri-o://6726e1fe83695be57e870a22efef89e68bbd009b8791859e1a75a341ca4e9ea7" gracePeriod=30 Feb 19 13:34:04 crc kubenswrapper[4861]: E0219 13:34:04.486527 4861 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Feb 19 13:34:04 crc kubenswrapper[4861]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Feb 19 13:34:04 crc kubenswrapper[4861]: + source /usr/local/bin/container-scripts/functions Feb 19 13:34:04 crc kubenswrapper[4861]: ++ OVNBridge=br-int Feb 19 13:34:04 crc kubenswrapper[4861]: ++ OVNRemote=tcp:localhost:6642 Feb 19 13:34:04 crc kubenswrapper[4861]: ++ OVNEncapType=geneve Feb 19 13:34:04 crc kubenswrapper[4861]: ++ OVNAvailabilityZones= Feb 19 13:34:04 crc kubenswrapper[4861]: ++ EnableChassisAsGateway=true Feb 19 13:34:04 crc kubenswrapper[4861]: ++ PhysicalNetworks= Feb 19 13:34:04 crc 
kubenswrapper[4861]: ++ OVNHostName= Feb 19 13:34:04 crc kubenswrapper[4861]: ++ DB_FILE=/etc/openvswitch/conf.db Feb 19 13:34:04 crc kubenswrapper[4861]: ++ ovs_dir=/var/lib/openvswitch Feb 19 13:34:04 crc kubenswrapper[4861]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Feb 19 13:34:04 crc kubenswrapper[4861]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Feb 19 13:34:04 crc kubenswrapper[4861]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 19 13:34:04 crc kubenswrapper[4861]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 19 13:34:04 crc kubenswrapper[4861]: + sleep 0.5 Feb 19 13:34:04 crc kubenswrapper[4861]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 19 13:34:04 crc kubenswrapper[4861]: + cleanup_ovsdb_server_semaphore Feb 19 13:34:04 crc kubenswrapper[4861]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 19 13:34:04 crc kubenswrapper[4861]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Feb 19 13:34:04 crc kubenswrapper[4861]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-d4skq" message=< Feb 19 13:34:04 crc kubenswrapper[4861]: Exiting ovsdb-server (5) ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Feb 19 13:34:04 crc kubenswrapper[4861]: + source /usr/local/bin/container-scripts/functions Feb 19 13:34:04 crc kubenswrapper[4861]: ++ OVNBridge=br-int Feb 19 13:34:04 crc kubenswrapper[4861]: ++ OVNRemote=tcp:localhost:6642 Feb 19 13:34:04 crc kubenswrapper[4861]: ++ OVNEncapType=geneve Feb 19 13:34:04 crc kubenswrapper[4861]: ++ OVNAvailabilityZones= Feb 19 13:34:04 crc kubenswrapper[4861]: ++ EnableChassisAsGateway=true Feb 19 13:34:04 crc kubenswrapper[4861]: ++ PhysicalNetworks= Feb 19 13:34:04 crc kubenswrapper[4861]: ++ OVNHostName= Feb 19 13:34:04 crc kubenswrapper[4861]: ++ 
DB_FILE=/etc/openvswitch/conf.db Feb 19 13:34:04 crc kubenswrapper[4861]: ++ ovs_dir=/var/lib/openvswitch Feb 19 13:34:04 crc kubenswrapper[4861]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Feb 19 13:34:04 crc kubenswrapper[4861]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Feb 19 13:34:04 crc kubenswrapper[4861]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 19 13:34:04 crc kubenswrapper[4861]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 19 13:34:04 crc kubenswrapper[4861]: + sleep 0.5 Feb 19 13:34:04 crc kubenswrapper[4861]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 19 13:34:04 crc kubenswrapper[4861]: + cleanup_ovsdb_server_semaphore Feb 19 13:34:04 crc kubenswrapper[4861]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 19 13:34:04 crc kubenswrapper[4861]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Feb 19 13:34:04 crc kubenswrapper[4861]: > Feb 19 13:34:04 crc kubenswrapper[4861]: E0219 13:34:04.488472 4861 kuberuntime_container.go:691] "PreStop hook failed" err=< Feb 19 13:34:04 crc kubenswrapper[4861]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Feb 19 13:34:04 crc kubenswrapper[4861]: + source /usr/local/bin/container-scripts/functions Feb 19 13:34:04 crc kubenswrapper[4861]: ++ OVNBridge=br-int Feb 19 13:34:04 crc kubenswrapper[4861]: ++ OVNRemote=tcp:localhost:6642 Feb 19 13:34:04 crc kubenswrapper[4861]: ++ OVNEncapType=geneve Feb 19 13:34:04 crc kubenswrapper[4861]: ++ OVNAvailabilityZones= Feb 19 13:34:04 crc kubenswrapper[4861]: ++ EnableChassisAsGateway=true Feb 19 13:34:04 crc kubenswrapper[4861]: ++ PhysicalNetworks= Feb 19 13:34:04 crc kubenswrapper[4861]: ++ OVNHostName= Feb 19 13:34:04 crc kubenswrapper[4861]: ++ DB_FILE=/etc/openvswitch/conf.db Feb 19 13:34:04 crc kubenswrapper[4861]: 
++ ovs_dir=/var/lib/openvswitch Feb 19 13:34:04 crc kubenswrapper[4861]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Feb 19 13:34:04 crc kubenswrapper[4861]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Feb 19 13:34:04 crc kubenswrapper[4861]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 19 13:34:04 crc kubenswrapper[4861]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 19 13:34:04 crc kubenswrapper[4861]: + sleep 0.5 Feb 19 13:34:04 crc kubenswrapper[4861]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 19 13:34:04 crc kubenswrapper[4861]: + cleanup_ovsdb_server_semaphore Feb 19 13:34:04 crc kubenswrapper[4861]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 19 13:34:04 crc kubenswrapper[4861]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Feb 19 13:34:04 crc kubenswrapper[4861]: > pod="openstack/ovn-controller-ovs-d4skq" podUID="c45bf5fa-a71c-4221-89a9-9c4965821c63" containerName="ovsdb-server" containerID="cri-o://a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.489089 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-d4skq" podUID="c45bf5fa-a71c-4221-89a9-9c4965821c63" containerName="ovsdb-server" containerID="cri-o://a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358" gracePeriod=29 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.493486 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-81eb-account-create-update-g7j6p"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.509413 4861 generic.go:334] "Generic (PLEG): container finished" podID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerID="e8e2594239d50333d43b08ef764dd15a54e631448517f1f5b7fde345bc50b2f2" exitCode=0 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 
13:34:04.509477 4861 generic.go:334] "Generic (PLEG): container finished" podID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerID="ef03fa4021e6c6150fbd214140c01f05e06bcc41b0e5602e90af2b70524e58cb" exitCode=0 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.509487 4861 generic.go:334] "Generic (PLEG): container finished" podID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerID="6c5b796933349019a3e6caaca60e24876d6caee6a8db308216a144cc2b4550b5" exitCode=0 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.509495 4861 generic.go:334] "Generic (PLEG): container finished" podID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerID="da5f6285aca4973ac5f8147c034649ed6d304113cb58cb64d2cca749a0aa466b" exitCode=0 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.509504 4861 generic.go:334] "Generic (PLEG): container finished" podID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerID="bb2e6ac221defcb0c3b930773236fdbf8bc57fd77635bc49c98f98a881dddf14" exitCode=0 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.509513 4861 generic.go:334] "Generic (PLEG): container finished" podID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerID="5695ae030dccbdeb118599a52129cc9b7894cfbff564817156a7fdbf305aa0f0" exitCode=0 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.509523 4861 generic.go:334] "Generic (PLEG): container finished" podID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerID="471372c27c39aaaf08c9ce1b7cd61b51c8ece5bab05fb8d039a85d6af20abc96" exitCode=0 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.509532 4861 generic.go:334] "Generic (PLEG): container finished" podID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerID="b01b12c10d17a6ede4967ef04b12864fea0898dee251a363634450764aacdd72" exitCode=0 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.509540 4861 generic.go:334] "Generic (PLEG): container finished" podID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerID="f9f19bdef3fa838ce4b4f8189100aaf397a995f17cf865eefb4002eeec03180e" 
exitCode=0 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.509549 4861 generic.go:334] "Generic (PLEG): container finished" podID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerID="65e1cc2bfc85c23034b910f4d14189e38743412528a40dc9a26a2ca6c1041afb" exitCode=0 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.509559 4861 generic.go:334] "Generic (PLEG): container finished" podID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerID="70048cfeceaa1bd3b11260d15e755776cfbd5fd6d7ef0d9d90e3d8c6f4261932" exitCode=0 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.509567 4861 generic.go:334] "Generic (PLEG): container finished" podID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerID="39d17545ee3cebeec079609c07c099110ffc65d4ddb3e462d92e98a8b967e616" exitCode=0 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.509577 4861 generic.go:334] "Generic (PLEG): container finished" podID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerID="74434874608f684b61cedbd97fbdb2a90894f4bed2db7e66ba332e8b322c05b2" exitCode=0 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.509585 4861 generic.go:334] "Generic (PLEG): container finished" podID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerID="2923944d19b38687e829c0ad91d45d3fa58c574c1be4562c0a63aa47c65877b8" exitCode=0 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.509658 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f7c9197d-43d5-4c72-a7c3-c2e435368dd2","Type":"ContainerDied","Data":"e8e2594239d50333d43b08ef764dd15a54e631448517f1f5b7fde345bc50b2f2"} Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.509695 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-b86e-account-create-update-mp5sk"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.509716 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"f7c9197d-43d5-4c72-a7c3-c2e435368dd2","Type":"ContainerDied","Data":"ef03fa4021e6c6150fbd214140c01f05e06bcc41b0e5602e90af2b70524e58cb"} Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.509730 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f7c9197d-43d5-4c72-a7c3-c2e435368dd2","Type":"ContainerDied","Data":"6c5b796933349019a3e6caaca60e24876d6caee6a8db308216a144cc2b4550b5"} Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.509742 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f7c9197d-43d5-4c72-a7c3-c2e435368dd2","Type":"ContainerDied","Data":"da5f6285aca4973ac5f8147c034649ed6d304113cb58cb64d2cca749a0aa466b"} Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.509753 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f7c9197d-43d5-4c72-a7c3-c2e435368dd2","Type":"ContainerDied","Data":"bb2e6ac221defcb0c3b930773236fdbf8bc57fd77635bc49c98f98a881dddf14"} Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.509766 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f7c9197d-43d5-4c72-a7c3-c2e435368dd2","Type":"ContainerDied","Data":"5695ae030dccbdeb118599a52129cc9b7894cfbff564817156a7fdbf305aa0f0"} Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.509781 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f7c9197d-43d5-4c72-a7c3-c2e435368dd2","Type":"ContainerDied","Data":"471372c27c39aaaf08c9ce1b7cd61b51c8ece5bab05fb8d039a85d6af20abc96"} Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.509793 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f7c9197d-43d5-4c72-a7c3-c2e435368dd2","Type":"ContainerDied","Data":"b01b12c10d17a6ede4967ef04b12864fea0898dee251a363634450764aacdd72"} Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 
13:34:04.509804 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f7c9197d-43d5-4c72-a7c3-c2e435368dd2","Type":"ContainerDied","Data":"f9f19bdef3fa838ce4b4f8189100aaf397a995f17cf865eefb4002eeec03180e"} Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.509815 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f7c9197d-43d5-4c72-a7c3-c2e435368dd2","Type":"ContainerDied","Data":"65e1cc2bfc85c23034b910f4d14189e38743412528a40dc9a26a2ca6c1041afb"} Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.509828 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f7c9197d-43d5-4c72-a7c3-c2e435368dd2","Type":"ContainerDied","Data":"70048cfeceaa1bd3b11260d15e755776cfbd5fd6d7ef0d9d90e3d8c6f4261932"} Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.509839 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f7c9197d-43d5-4c72-a7c3-c2e435368dd2","Type":"ContainerDied","Data":"39d17545ee3cebeec079609c07c099110ffc65d4ddb3e462d92e98a8b967e616"} Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.509849 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f7c9197d-43d5-4c72-a7c3-c2e435368dd2","Type":"ContainerDied","Data":"74434874608f684b61cedbd97fbdb2a90894f4bed2db7e66ba332e8b322c05b2"} Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.509862 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f7c9197d-43d5-4c72-a7c3-c2e435368dd2","Type":"ContainerDied","Data":"2923944d19b38687e829c0ad91d45d3fa58c574c1be4562c0a63aa47c65877b8"} Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.517668 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.530018 4861 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-xw5x9"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.530580 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-d4skq" podUID="c45bf5fa-a71c-4221-89a9-9c4965821c63" containerName="ovs-vswitchd" probeResult="failure" output=< Feb 19 13:34:04 crc kubenswrapper[4861]: cat: /var/run/openvswitch/ovs-vswitchd.pid: No such file or directory Feb 19 13:34:04 crc kubenswrapper[4861]: ERROR - Failed to get pid for ovs-vswitchd, exit status: 0 Feb 19 13:34:04 crc kubenswrapper[4861]: > Feb 19 13:34:04 crc kubenswrapper[4861]: E0219 13:34:04.532643 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 13:34:04 crc kubenswrapper[4861]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: if [ -n "neutron" ]; then Feb 19 13:34:04 crc kubenswrapper[4861]: GRANT_DATABASE="neutron" Feb 19 13:34:04 crc kubenswrapper[4861]: else Feb 19 13:34:04 crc kubenswrapper[4861]: GRANT_DATABASE="*" Feb 19 13:34:04 crc kubenswrapper[4861]: fi Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: # going for maximum compatibility here: Feb 19 13:34:04 crc kubenswrapper[4861]: # 1. 
MySQL 8 no longer allows implicit create user when GRANT is used Feb 19 13:34:04 crc kubenswrapper[4861]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 19 13:34:04 crc kubenswrapper[4861]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 19 13:34:04 crc kubenswrapper[4861]: # support updates Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: $MYSQL_CMD < logger="UnhandledError" Feb 19 13:34:04 crc kubenswrapper[4861]: E0219 13:34:04.533922 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"neutron-db-secret\\\" not found\"" pod="openstack/neutron-cd89-account-create-update-bnqdw" podUID="279a265d-0cc8-45af-82ba-b8a485796fae" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.535635 4861 generic.go:334] "Generic (PLEG): container finished" podID="e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0" containerID="c09621d9ce982c78679bd413fb7cb9573fc6a8affe3cae2b67c7d40d0b7c5cd2" exitCode=0 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.535699 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58f6456c9f-znr64" event={"ID":"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0","Type":"ContainerDied","Data":"c09621d9ce982c78679bd413fb7cb9573fc6a8affe3cae2b67c7d40d0b7c5cd2"} Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.542464 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-xw5x9"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.558443 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.567785 4861 generic.go:334] "Generic (PLEG): container finished" podID="4524053f-0367-4216-8916-1b5315dbe8d8" containerID="5ed4b552bcd4114fdac5d241534550b1d5ddde0aea8ceaa83bb843a83f153d3e" exitCode=137 Feb 19 13:34:04 crc 
kubenswrapper[4861]: I0219 13:34:04.576785 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b129-account-create-update-dpq4s" event={"ID":"30a1e51d-a60d-4f7f-8300-9ef99e3da2a6","Type":"ContainerStarted","Data":"a08803a521a4310cbea351afd5bf97a7a21ab8d212eacae3860716347a664ebc"} Feb 19 13:34:04 crc kubenswrapper[4861]: E0219 13:34:04.582468 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 13:34:04 crc kubenswrapper[4861]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: if [ -n "glance" ]; then Feb 19 13:34:04 crc kubenswrapper[4861]: GRANT_DATABASE="glance" Feb 19 13:34:04 crc kubenswrapper[4861]: else Feb 19 13:34:04 crc kubenswrapper[4861]: GRANT_DATABASE="*" Feb 19 13:34:04 crc kubenswrapper[4861]: fi Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: # going for maximum compatibility here: Feb 19 13:34:04 crc kubenswrapper[4861]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 19 13:34:04 crc kubenswrapper[4861]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 19 13:34:04 crc kubenswrapper[4861]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 19 13:34:04 crc kubenswrapper[4861]: # support updates Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: $MYSQL_CMD < logger="UnhandledError" Feb 19 13:34:04 crc kubenswrapper[4861]: E0219 13:34:04.582567 4861 log.go:32] "ExecSync cmd from runtime service failed" err=< Feb 19 13:34:04 crc kubenswrapper[4861]: rpc error: code = Unknown desc = command error: setns `mnt`: Bad file descriptor Feb 19 13:34:04 crc kubenswrapper[4861]: fail startup Feb 19 13:34:04 crc kubenswrapper[4861]: , stdout: , stderr: , exit code -1 Feb 19 13:34:04 crc kubenswrapper[4861]: > containerID="a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.583717 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cd89-account-create-update-bnqdw"] Feb 19 13:34:04 crc kubenswrapper[4861]: E0219 13:34:04.584626 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-b129-account-create-update-dpq4s" podUID="30a1e51d-a60d-4f7f-8300-9ef99e3da2a6" Feb 19 13:34:04 crc kubenswrapper[4861]: E0219 13:34:04.584826 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358 is running failed: container process not found" containerID="a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 13:34:04 crc kubenswrapper[4861]: E0219 13:34:04.585123 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound 
desc = container is not created or running: checking if PID of a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358 is running failed: container process not found" containerID="a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 13:34:04 crc kubenswrapper[4861]: E0219 13:34:04.585151 4861 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-d4skq" podUID="c45bf5fa-a71c-4221-89a9-9c4965821c63" containerName="ovsdb-server" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.585368 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2c645ced-1599-4f62-ab9b-0e109a7e02c3/ovsdbserver-nb/0.log" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.585529 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2c645ced-1599-4f62-ab9b-0e109a7e02c3","Type":"ContainerDied","Data":"fb8b9a12794a41cc360324059eea31d8acab394a4b5ab59bcaf457ba030ee34c"} Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.585602 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb8b9a12794a41cc360324059eea31d8acab394a4b5ab59bcaf457ba030ee34c" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.600538 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="fe64a04b-1266-4b02-88e5-191f4a974422" containerName="rabbitmq" containerID="cri-o://d3405400dc8ba912020a46ce0cfbf537ec4969bbed50585c5f4aa5006d304d81" gracePeriod=604800 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.615335 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-db-create-92l52"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.615443 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-92l52"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.624166 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.624694 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="07533556-6a9f-4844-be7d-f9c9cf8c53a4" containerName="nova-metadata-log" containerID="cri-o://fc2859ece05740938f3f6ba76783e5c0e1cd3f33027c89196eab75df118d742b" gracePeriod=30 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.627149 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="07533556-6a9f-4844-be7d-f9c9cf8c53a4" containerName="nova-metadata-metadata" containerID="cri-o://d751c5da783be93739c9cde1c6a879f363a6be171c0437955267e7fbe56355ab" gracePeriod=30 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.629513 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-6dfc-account-create-update-c6fh8"] Feb 19 13:34:04 crc kubenswrapper[4861]: E0219 13:34:04.635640 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-bmhsh operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/nova-cell1-6dfc-account-create-update-c6fh8" podUID="13af2f92-2b45-4bca-925f-91e0d4102a56" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.639534 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.639838 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="707836a9-478e-4110-b5f5-9ee7e6b46e21" containerName="nova-api-log" 
containerID="cri-o://ec45a853d859de7712ce8a8df163d92e6be4aa74bd810bb6e043ce0ce739485a" gracePeriod=30 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.639946 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-d4skq" podUID="c45bf5fa-a71c-4221-89a9-9c4965821c63" containerName="ovs-vswitchd" containerID="cri-o://2e0a3d8a266b39b1b104e8f5f77eb2f154403b01173a2de09f73d89c1d7c3a1a" gracePeriod=29 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.640288 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="707836a9-478e-4110-b5f5-9ee7e6b46e21" containerName="nova-api-api" containerID="cri-o://7955b99165b940ef4462fa1d533ec8daadebf57e1422d1ab3180cb3a66fc27cb" gracePeriod=30 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.647222 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-b86e-account-create-update-mp5sk"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.660398 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-8bt78_99cc42c8-6836-4d0e-8d13-6b0a44b2583f/openstack-network-exporter/0.log" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.660456 4861 generic.go:334] "Generic (PLEG): container finished" podID="99cc42c8-6836-4d0e-8d13-6b0a44b2583f" containerID="9ecb2a5a0a2e89a60f5cb3038c9b421573c3c77bb1ddc2f92c091a4a5d3709ac" exitCode=2 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.660516 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-8bt78" event={"ID":"99cc42c8-6836-4d0e-8d13-6b0a44b2583f","Type":"ContainerDied","Data":"9ecb2a5a0a2e89a60f5cb3038c9b421573c3c77bb1ddc2f92c091a4a5d3709ac"} Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.670664 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-ae68-account-create-update-xbl4c"] Feb 19 13:34:04 crc 
kubenswrapper[4861]: W0219 13:34:04.674637 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafc3eb95_9d8a_449a_937d_7db3d3dd3d69.slice/crio-3ff6b769d3d07f03d352a6b7158708a709a0f425884599355ea85adb2648a1bc WatchSource:0}: Error finding container 3ff6b769d3d07f03d352a6b7158708a709a0f425884599355ea85adb2648a1bc: Status 404 returned error can't find the container with id 3ff6b769d3d07f03d352a6b7158708a709a0f425884599355ea85adb2648a1bc Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.674962 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2c645ced-1599-4f62-ab9b-0e109a7e02c3/ovsdbserver-nb/0.log" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.675029 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.675114 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_8a4affa6-9b49-416a-9887-fdffab32916c/ovsdbserver-sb/0.log" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.675195 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.676790 4861 generic.go:334] "Generic (PLEG): container finished" podID="ff11b43a-9b7c-42c8-afac-6f66908975dc" containerID="664e37dab2fb0274c430251d7675644e7a1659b00349bf1be78d0e03e076d739" exitCode=1 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.676845 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hxvnj" event={"ID":"ff11b43a-9b7c-42c8-afac-6f66908975dc","Type":"ContainerDied","Data":"664e37dab2fb0274c430251d7675644e7a1659b00349bf1be78d0e03e076d739"} Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.676953 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58f6456c9f-znr64" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.677590 4861 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-hxvnj" secret="" err="secret \"galera-openstack-cell1-dockercfg-kz52f\" not found" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.677624 4861 scope.go:117] "RemoveContainer" containerID="664e37dab2fb0274c430251d7675644e7a1659b00349bf1be78d0e03e076d739" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.680788 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b86e-account-create-update-mp5sk" event={"ID":"3aca02ec-903d-4ddd-a7df-25d323ed6dc1","Type":"ContainerStarted","Data":"b39eb0baa81572d0890054255fb8e31771e7a7a69f4994059a646e592bbedfc6"} Feb 19 13:34:04 crc kubenswrapper[4861]: E0219 13:34:04.681815 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 13:34:04 crc kubenswrapper[4861]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: if [ -n "placement" ]; then Feb 19 13:34:04 crc kubenswrapper[4861]: GRANT_DATABASE="placement" Feb 19 13:34:04 crc kubenswrapper[4861]: else Feb 19 13:34:04 crc kubenswrapper[4861]: GRANT_DATABASE="*" Feb 19 13:34:04 crc 
kubenswrapper[4861]: fi Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: # going for maximum compatibility here: Feb 19 13:34:04 crc kubenswrapper[4861]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 19 13:34:04 crc kubenswrapper[4861]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 19 13:34:04 crc kubenswrapper[4861]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 19 13:34:04 crc kubenswrapper[4861]: # support updates Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: $MYSQL_CMD < logger="UnhandledError" Feb 19 13:34:04 crc kubenswrapper[4861]: E0219 13:34:04.686975 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-b86e-account-create-update-mp5sk" podUID="3aca02ec-903d-4ddd-a7df-25d323ed6dc1" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.687043 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-9594-account-create-update-bzndk"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.712062 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cd89-account-create-update-bnqdw"] Feb 19 13:34:04 crc kubenswrapper[4861]: E0219 13:34:04.721034 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 13:34:04 crc kubenswrapper[4861]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc 
kubenswrapper[4861]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: if [ -n "cinder" ]; then Feb 19 13:34:04 crc kubenswrapper[4861]: GRANT_DATABASE="cinder" Feb 19 13:34:04 crc kubenswrapper[4861]: else Feb 19 13:34:04 crc kubenswrapper[4861]: GRANT_DATABASE="*" Feb 19 13:34:04 crc kubenswrapper[4861]: fi Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: # going for maximum compatibility here: Feb 19 13:34:04 crc kubenswrapper[4861]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 19 13:34:04 crc kubenswrapper[4861]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 19 13:34:04 crc kubenswrapper[4861]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 19 13:34:04 crc kubenswrapper[4861]: # support updates Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: $MYSQL_CMD < logger="UnhandledError" Feb 19 13:34:04 crc kubenswrapper[4861]: E0219 13:34:04.731126 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 13:34:04 crc kubenswrapper[4861]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: MYSQL_CMD="mysql -h -u root -P 
3306" Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: if [ -n "barbican" ]; then Feb 19 13:34:04 crc kubenswrapper[4861]: GRANT_DATABASE="barbican" Feb 19 13:34:04 crc kubenswrapper[4861]: else Feb 19 13:34:04 crc kubenswrapper[4861]: GRANT_DATABASE="*" Feb 19 13:34:04 crc kubenswrapper[4861]: fi Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: # going for maximum compatibility here: Feb 19 13:34:04 crc kubenswrapper[4861]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 19 13:34:04 crc kubenswrapper[4861]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 19 13:34:04 crc kubenswrapper[4861]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 19 13:34:04 crc kubenswrapper[4861]: # support updates Feb 19 13:34:04 crc kubenswrapper[4861]: Feb 19 13:34:04 crc kubenswrapper[4861]: $MYSQL_CMD < logger="UnhandledError" Feb 19 13:34:04 crc kubenswrapper[4861]: E0219 13:34:04.731193 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"cinder-db-secret\\\" not found\"" pod="openstack/cinder-5b8e-account-create-update-5mwlh" podUID="bd101633-19ce-4277-8ab9-b19319febd08" Feb 19 13:34:04 crc kubenswrapper[4861]: E0219 13:34:04.732869 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-81eb-account-create-update-g7j6p" podUID="afc3eb95-9d8a-449a-937d-7db3d3dd3d69" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.733289 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-dns-swift-storage-0\") pod 
\"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0\" (UID: \"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0\") " Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.733309 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c645ced-1599-4f62-ab9b-0e109a7e02c3-scripts\") pod \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") " Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.733326 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a4affa6-9b49-416a-9887-fdffab32916c-combined-ca-bundle\") pod \"8a4affa6-9b49-416a-9887-fdffab32916c\" (UID: \"8a4affa6-9b49-416a-9887-fdffab32916c\") " Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.733344 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb64k\" (UniqueName: \"kubernetes.io/projected/8a4affa6-9b49-416a-9887-fdffab32916c-kube-api-access-rb64k\") pod \"8a4affa6-9b49-416a-9887-fdffab32916c\" (UID: \"8a4affa6-9b49-416a-9887-fdffab32916c\") " Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.733369 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-ovsdbserver-nb\") pod \"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0\" (UID: \"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0\") " Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.733387 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") " Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.733409 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"8a4affa6-9b49-416a-9887-fdffab32916c\" (UID: \"8a4affa6-9b49-416a-9887-fdffab32916c\") " Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.733449 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c645ced-1599-4f62-ab9b-0e109a7e02c3-config\") pod \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") " Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.733467 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-config\") pod \"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0\" (UID: \"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0\") " Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.733490 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a4affa6-9b49-416a-9887-fdffab32916c-config\") pod \"8a4affa6-9b49-416a-9887-fdffab32916c\" (UID: \"8a4affa6-9b49-416a-9887-fdffab32916c\") " Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.733507 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c645ced-1599-4f62-ab9b-0e109a7e02c3-combined-ca-bundle\") pod \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") " Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.733533 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2c645ced-1599-4f62-ab9b-0e109a7e02c3-ovsdb-rundir\") pod \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") " Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.733561 4861 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c645ced-1599-4f62-ab9b-0e109a7e02c3-metrics-certs-tls-certs\") pod \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") " Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.733575 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-dns-svc\") pod \"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0\" (UID: \"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0\") " Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.733597 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a4affa6-9b49-416a-9887-fdffab32916c-scripts\") pod \"8a4affa6-9b49-416a-9887-fdffab32916c\" (UID: \"8a4affa6-9b49-416a-9887-fdffab32916c\") " Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.733611 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-ovsdbserver-sb\") pod \"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0\" (UID: \"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0\") " Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.733628 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4affa6-9b49-416a-9887-fdffab32916c-metrics-certs-tls-certs\") pod \"8a4affa6-9b49-416a-9887-fdffab32916c\" (UID: \"8a4affa6-9b49-416a-9887-fdffab32916c\") " Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.733643 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvdpg\" (UniqueName: \"kubernetes.io/projected/2c645ced-1599-4f62-ab9b-0e109a7e02c3-kube-api-access-fvdpg\") pod 
\"2c645ced-1599-4f62-ab9b-0e109a7e02c3\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") " Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.733670 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8a4affa6-9b49-416a-9887-fdffab32916c-ovsdb-rundir\") pod \"8a4affa6-9b49-416a-9887-fdffab32916c\" (UID: \"8a4affa6-9b49-416a-9887-fdffab32916c\") " Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.733697 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4affa6-9b49-416a-9887-fdffab32916c-ovsdbserver-sb-tls-certs\") pod \"8a4affa6-9b49-416a-9887-fdffab32916c\" (UID: \"8a4affa6-9b49-416a-9887-fdffab32916c\") " Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.733723 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpj2s\" (UniqueName: \"kubernetes.io/projected/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-kube-api-access-cpj2s\") pod \"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0\" (UID: \"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0\") " Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.733738 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c645ced-1599-4f62-ab9b-0e109a7e02c3-ovsdbserver-nb-tls-certs\") pod \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\" (UID: \"2c645ced-1599-4f62-ab9b-0e109a7e02c3\") " Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.746384 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c645ced-1599-4f62-ab9b-0e109a7e02c3-config" (OuterVolumeSpecName: "config") pod "2c645ced-1599-4f62-ab9b-0e109a7e02c3" (UID: "2c645ced-1599-4f62-ab9b-0e109a7e02c3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.749255 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c645ced-1599-4f62-ab9b-0e109a7e02c3-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:04 crc kubenswrapper[4861]: E0219 13:34:04.749689 4861 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Feb 19 13:34:04 crc kubenswrapper[4861]: E0219 13:34:04.749734 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff11b43a-9b7c-42c8-afac-6f66908975dc-operator-scripts podName:ff11b43a-9b7c-42c8-afac-6f66908975dc nodeName:}" failed. No retries permitted until 2026-02-19 13:34:05.249717368 +0000 UTC m=+1459.910820596 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/ff11b43a-9b7c-42c8-afac-6f66908975dc-operator-scripts") pod "root-account-create-update-hxvnj" (UID: "ff11b43a-9b7c-42c8-afac-6f66908975dc") : configmap "openstack-cell1-scripts" not found Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.751119 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c645ced-1599-4f62-ab9b-0e109a7e02c3-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "2c645ced-1599-4f62-ab9b-0e109a7e02c3" (UID: "2c645ced-1599-4f62-ab9b-0e109a7e02c3"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.751774 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a4affa6-9b49-416a-9887-fdffab32916c-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "8a4affa6-9b49-416a-9887-fdffab32916c" (UID: "8a4affa6-9b49-416a-9887-fdffab32916c"). InnerVolumeSpecName "ovsdb-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.755309 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "8a4affa6-9b49-416a-9887-fdffab32916c" (UID: "8a4affa6-9b49-416a-9887-fdffab32916c"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.756489 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a4affa6-9b49-416a-9887-fdffab32916c-scripts" (OuterVolumeSpecName: "scripts") pod "8a4affa6-9b49-416a-9887-fdffab32916c" (UID: "8a4affa6-9b49-416a-9887-fdffab32916c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.769020 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c645ced-1599-4f62-ab9b-0e109a7e02c3-scripts" (OuterVolumeSpecName: "scripts") pod "2c645ced-1599-4f62-ab9b-0e109a7e02c3" (UID: "2c645ced-1599-4f62-ab9b-0e109a7e02c3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.771181 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-6ffbcbb99f-phcxs"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.771400 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-6ffbcbb99f-phcxs" podUID="a4307ff9-78bb-48ec-8096-6e06ff22e19b" containerName="barbican-worker-log" containerID="cri-o://53911b7e6ee036738f82d06e28457c9efb4b7e608b7a20ad34bd125adf651646" gracePeriod=30 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.771536 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-6ffbcbb99f-phcxs" podUID="a4307ff9-78bb-48ec-8096-6e06ff22e19b" containerName="barbican-worker" containerID="cri-o://fc0693a9e1476f2b6d033af8f56ce772a2fac61eb55bf48764b0906664653ac4" gracePeriod=30 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.772039 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a4affa6-9b49-416a-9887-fdffab32916c-config" (OuterVolumeSpecName: "config") pod "8a4affa6-9b49-416a-9887-fdffab32916c" (UID: "8a4affa6-9b49-416a-9887-fdffab32916c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.821977 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "2c645ced-1599-4f62-ab9b-0e109a7e02c3" (UID: "2c645ced-1599-4f62-ab9b-0e109a7e02c3"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.826681 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a4affa6-9b49-416a-9887-fdffab32916c-kube-api-access-rb64k" (OuterVolumeSpecName: "kube-api-access-rb64k") pod "8a4affa6-9b49-416a-9887-fdffab32916c" (UID: "8a4affa6-9b49-416a-9887-fdffab32916c"). InnerVolumeSpecName "kube-api-access-rb64k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.833797 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-8bt78_99cc42c8-6836-4d0e-8d13-6b0a44b2583f/openstack-network-exporter/0.log" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.833867 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-8bt78" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.838721 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-545d79c874-vmrzt"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.839096 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-545d79c874-vmrzt" podUID="5d002f91-22f1-4ebd-8bc9-04e81e4a00ef" containerName="barbican-keystone-listener-log" containerID="cri-o://5e5b1d9b0913f678bfeffc302d49785832edca29f1e96866dc26ab6c9f4872d5" gracePeriod=30 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.839293 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-545d79c874-vmrzt" podUID="5d002f91-22f1-4ebd-8bc9-04e81e4a00ef" containerName="barbican-keystone-listener" containerID="cri-o://1d619313cf3eb9116f4f061ab19e9d256b6f4c3706035768630ec087a8ab9bd7" gracePeriod=30 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.843987 4861 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-kube-api-access-cpj2s" (OuterVolumeSpecName: "kube-api-access-cpj2s") pod "e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0" (UID: "e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0"). InnerVolumeSpecName "kube-api-access-cpj2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.858093 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjj9s\" (UniqueName: \"kubernetes.io/projected/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-kube-api-access-cjj9s\") pod \"99cc42c8-6836-4d0e-8d13-6b0a44b2583f\" (UID: \"99cc42c8-6836-4d0e-8d13-6b0a44b2583f\") " Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.858237 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-ovs-rundir\") pod \"99cc42c8-6836-4d0e-8d13-6b0a44b2583f\" (UID: \"99cc42c8-6836-4d0e-8d13-6b0a44b2583f\") " Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.858270 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-combined-ca-bundle\") pod \"99cc42c8-6836-4d0e-8d13-6b0a44b2583f\" (UID: \"99cc42c8-6836-4d0e-8d13-6b0a44b2583f\") " Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.858594 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-ovn-rundir\") pod \"99cc42c8-6836-4d0e-8d13-6b0a44b2583f\" (UID: \"99cc42c8-6836-4d0e-8d13-6b0a44b2583f\") " Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.858641 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-config\") pod \"99cc42c8-6836-4d0e-8d13-6b0a44b2583f\" (UID: \"99cc42c8-6836-4d0e-8d13-6b0a44b2583f\") " Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.858698 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-metrics-certs-tls-certs\") pod \"99cc42c8-6836-4d0e-8d13-6b0a44b2583f\" (UID: \"99cc42c8-6836-4d0e-8d13-6b0a44b2583f\") " Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.859312 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13af2f92-2b45-4bca-925f-91e0d4102a56-operator-scripts\") pod \"nova-cell1-6dfc-account-create-update-c6fh8\" (UID: \"13af2f92-2b45-4bca-925f-91e0d4102a56\") " pod="openstack/nova-cell1-6dfc-account-create-update-c6fh8" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.859543 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a4affa6-9b49-416a-9887-fdffab32916c-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.859560 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2c645ced-1599-4f62-ab9b-0e109a7e02c3-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.859575 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a4affa6-9b49-416a-9887-fdffab32916c-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.859584 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8a4affa6-9b49-416a-9887-fdffab32916c-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Feb 
19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.859594 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpj2s\" (UniqueName: \"kubernetes.io/projected/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-kube-api-access-cpj2s\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.859604 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c645ced-1599-4f62-ab9b-0e109a7e02c3-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.859644 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rb64k\" (UniqueName: \"kubernetes.io/projected/8a4affa6-9b49-416a-9887-fdffab32916c-kube-api-access-rb64k\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.859668 4861 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.859712 4861 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.865812 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "99cc42c8-6836-4d0e-8d13-6b0a44b2583f" (UID: "99cc42c8-6836-4d0e-8d13-6b0a44b2583f"). InnerVolumeSpecName "ovs-rundir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.866164 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "99cc42c8-6836-4d0e-8d13-6b0a44b2583f" (UID: "99cc42c8-6836-4d0e-8d13-6b0a44b2583f"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:34:04 crc kubenswrapper[4861]: E0219 13:34:04.866259 4861 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Feb 19 13:34:04 crc kubenswrapper[4861]: E0219 13:34:04.866310 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13af2f92-2b45-4bca-925f-91e0d4102a56-operator-scripts podName:13af2f92-2b45-4bca-925f-91e0d4102a56 nodeName:}" failed. No retries permitted until 2026-02-19 13:34:06.866285784 +0000 UTC m=+1461.527389012 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/13af2f92-2b45-4bca-925f-91e0d4102a56-operator-scripts") pod "nova-cell1-6dfc-account-create-update-c6fh8" (UID: "13af2f92-2b45-4bca-925f-91e0d4102a56") : configmap "openstack-cell1-scripts" not found Feb 19 13:34:04 crc kubenswrapper[4861]: E0219 13:34:04.867048 4861 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 19 13:34:04 crc kubenswrapper[4861]: E0219 13:34:04.867136 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b117524a-eaad-4666-9e0e-bda909b2ad30-config-data podName:b117524a-eaad-4666-9e0e-bda909b2ad30 nodeName:}" failed. No retries permitted until 2026-02-19 13:34:06.867113327 +0000 UTC m=+1461.528216555 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b117524a-eaad-4666-9e0e-bda909b2ad30-config-data") pod "rabbitmq-server-0" (UID: "b117524a-eaad-4666-9e0e-bda909b2ad30") : configmap "rabbitmq-config-data" not found Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.867177 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-hng7s"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.870397 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-config" (OuterVolumeSpecName: "config") pod "99cc42c8-6836-4d0e-8d13-6b0a44b2583f" (UID: "99cc42c8-6836-4d0e-8d13-6b0a44b2583f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.911284 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c645ced-1599-4f62-ab9b-0e109a7e02c3-kube-api-access-fvdpg" (OuterVolumeSpecName: "kube-api-access-fvdpg") pod "2c645ced-1599-4f62-ab9b-0e109a7e02c3" (UID: "2c645ced-1599-4f62-ab9b-0e109a7e02c3"). InnerVolumeSpecName "kube-api-access-fvdpg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.918633 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-4xvh5"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.918892 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-d6c467cc6-ng4wh"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.920554 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-d6c467cc6-ng4wh" podUID="074e719c-b46b-4f91-ae2d-e7f30368a8ae" containerName="barbican-api-log" containerID="cri-o://776ff5a17e90ebc21d49721478bc41e7146bdd38de85dd86200078fc345273f3" gracePeriod=30 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.921434 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-d6c467cc6-ng4wh" podUID="074e719c-b46b-4f91-ae2d-e7f30368a8ae" containerName="barbican-api" containerID="cri-o://851c26a783d4f2fb239877063c2d4732d081998faf87a9a0897c6af79d389cda" gracePeriod=30 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.923671 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-kube-api-access-cjj9s" (OuterVolumeSpecName: "kube-api-access-cjj9s") pod "99cc42c8-6836-4d0e-8d13-6b0a44b2583f" (UID: "99cc42c8-6836-4d0e-8d13-6b0a44b2583f"). InnerVolumeSpecName "kube-api-access-cjj9s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.930896 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-4xvh5"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.931502 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-679b4d4449-j6f75" podUID="c3559fea-5929-4904-9be2-136f10ea1023" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.174:8080/healthcheck\": dial tcp 10.217.0.174:8080: connect: connection refused" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.931580 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-679b4d4449-j6f75" podUID="c3559fea-5929-4904-9be2-136f10ea1023" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.174:8080/healthcheck\": dial tcp 10.217.0.174:8080: connect: connection refused" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.945751 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-hng7s"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.953025 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-hxvnj"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.959627 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.959896 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://531ccdafb0d0612a0c1e667d57b0657cf8216937bc4a152030d92ea096cbe11a" gracePeriod=30 Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.961085 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmhsh\" 
(UniqueName: \"kubernetes.io/projected/13af2f92-2b45-4bca-925f-91e0d4102a56-kube-api-access-bmhsh\") pod \"nova-cell1-6dfc-account-create-update-c6fh8\" (UID: \"13af2f92-2b45-4bca-925f-91e0d4102a56\") " pod="openstack/nova-cell1-6dfc-account-create-update-c6fh8" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.961292 4861 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-ovn-rundir\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.961312 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.961321 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjj9s\" (UniqueName: \"kubernetes.io/projected/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-kube-api-access-cjj9s\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.961331 4861 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-ovs-rundir\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.961340 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvdpg\" (UniqueName: \"kubernetes.io/projected/2c645ced-1599-4f62-ab9b-0e109a7e02c3-kube-api-access-fvdpg\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:04 crc kubenswrapper[4861]: I0219 13:34:04.968591 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 13:34:04 crc kubenswrapper[4861]: E0219 13:34:04.974863 4861 projected.go:194] Error preparing data for projected volume kube-api-access-bmhsh for pod openstack/nova-cell1-6dfc-account-create-update-c6fh8: failed to fetch token: 
serviceaccounts "galera-openstack-cell1" not found Feb 19 13:34:04 crc kubenswrapper[4861]: E0219 13:34:04.974923 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13af2f92-2b45-4bca-925f-91e0d4102a56-kube-api-access-bmhsh podName:13af2f92-2b45-4bca-925f-91e0d4102a56 nodeName:}" failed. No retries permitted until 2026-02-19 13:34:06.974906597 +0000 UTC m=+1461.636009825 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-bmhsh" (UniqueName: "kubernetes.io/projected/13af2f92-2b45-4bca-925f-91e0d4102a56-kube-api-access-bmhsh") pod "nova-cell1-6dfc-account-create-update-c6fh8" (UID: "13af2f92-2b45-4bca-925f-91e0d4102a56") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.016004 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-81eb-account-create-update-g7j6p"] Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.028188 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.028380 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9211a2d8-8917-464d-a790-efc469302556" containerName="nova-scheduler-scheduler" containerID="cri-o://51828dec5ed3bf469238caf516c39172b72277a44dd7431a9ff705c60186eff0" gracePeriod=30 Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.035951 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c645ced-1599-4f62-ab9b-0e109a7e02c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c645ced-1599-4f62-ab9b-0e109a7e02c3" (UID: "2c645ced-1599-4f62-ab9b-0e109a7e02c3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.057909 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-5b8e-account-create-update-5mwlh"] Feb 19 13:34:05 crc kubenswrapper[4861]: E0219 13:34:05.060219 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 13:34:05 crc kubenswrapper[4861]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 19 13:34:05 crc kubenswrapper[4861]: Feb 19 13:34:05 crc kubenswrapper[4861]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 19 13:34:05 crc kubenswrapper[4861]: Feb 19 13:34:05 crc kubenswrapper[4861]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 19 13:34:05 crc kubenswrapper[4861]: Feb 19 13:34:05 crc kubenswrapper[4861]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 19 13:34:05 crc kubenswrapper[4861]: Feb 19 13:34:05 crc kubenswrapper[4861]: if [ -n "nova_api" ]; then Feb 19 13:34:05 crc kubenswrapper[4861]: GRANT_DATABASE="nova_api" Feb 19 13:34:05 crc kubenswrapper[4861]: else Feb 19 13:34:05 crc kubenswrapper[4861]: GRANT_DATABASE="*" Feb 19 13:34:05 crc kubenswrapper[4861]: fi Feb 19 13:34:05 crc kubenswrapper[4861]: Feb 19 13:34:05 crc kubenswrapper[4861]: # going for maximum compatibility here: Feb 19 13:34:05 crc kubenswrapper[4861]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 19 13:34:05 crc kubenswrapper[4861]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 19 13:34:05 crc kubenswrapper[4861]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 19 13:34:05 crc kubenswrapper[4861]: # support updates Feb 19 13:34:05 crc kubenswrapper[4861]: Feb 19 13:34:05 crc kubenswrapper[4861]: $MYSQL_CMD < logger="UnhandledError" Feb 19 13:34:05 crc kubenswrapper[4861]: E0219 13:34:05.061308 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-ae68-account-create-update-xbl4c" podUID="1eb2f892-b8f2-423d-b7fa-bed98cf7683a" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.070822 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c645ced-1599-4f62-ab9b-0e109a7e02c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.083748 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="b117524a-eaad-4666-9e0e-bda909b2ad30" containerName="rabbitmq" containerID="cri-o://a34aea6a9dce7447619085b8bdcc194d614605d384336f31f474bb36345d67a2" gracePeriod=604800 Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.088951 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bsd6v"] Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.105081 4861 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.105374 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bsd6v"] Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.137802 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 13:34:05 crc 
kubenswrapper[4861]: I0219 13:34:05.140639 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245" containerName="nova-cell1-conductor-conductor" containerID="cri-o://571d42d778e6270495bc22f93e5351b8b9d48a25f59d4657b6f6c001aa52541f" gracePeriod=30 Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.175380 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0" (UID: "e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.175601 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-dns-swift-storage-0\") pod \"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0\" (UID: \"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0\") " Feb 19 13:34:05 crc kubenswrapper[4861]: W0219 13:34:05.176193 4861 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0/volumes/kubernetes.io~configmap/dns-swift-storage-0 Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.176210 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0" (UID: "e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.176811 4861 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.176888 4861 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.185208 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0" (UID: "e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.185837 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.186054 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="c26363be-cfa7-49f5-82a2-709c67b44622" containerName="nova-cell0-conductor-conductor" containerID="cri-o://b32d612f73bbe86a4cf54136585330c0e0f86c939e661fda48a99f88a3862277" gracePeriod=30 Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.197190 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4q89d"] Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.210686 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4q89d"] Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.211292 4861 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a4affa6-9b49-416a-9887-fdffab32916c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a4affa6-9b49-416a-9887-fdffab32916c" (UID: "8a4affa6-9b49-416a-9887-fdffab32916c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.214810 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="77e9ae58-534e-4312-8b56-9ec6708995ac" containerName="galera" containerID="cri-o://362173b173d50713e668b40267cc089e3476fe407aa787eab8629d823b7bab2c" gracePeriod=30 Feb 19 13:34:05 crc kubenswrapper[4861]: E0219 13:34:05.242230 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 13:34:05 crc kubenswrapper[4861]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 19 13:34:05 crc kubenswrapper[4861]: Feb 19 13:34:05 crc kubenswrapper[4861]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 19 13:34:05 crc kubenswrapper[4861]: Feb 19 13:34:05 crc kubenswrapper[4861]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 19 13:34:05 crc kubenswrapper[4861]: Feb 19 13:34:05 crc kubenswrapper[4861]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 19 13:34:05 crc kubenswrapper[4861]: Feb 19 13:34:05 crc kubenswrapper[4861]: if [ -n "nova_cell0" ]; then Feb 19 13:34:05 crc kubenswrapper[4861]: GRANT_DATABASE="nova_cell0" Feb 19 13:34:05 crc kubenswrapper[4861]: else Feb 19 13:34:05 crc kubenswrapper[4861]: GRANT_DATABASE="*" Feb 19 13:34:05 crc kubenswrapper[4861]: fi Feb 19 13:34:05 crc kubenswrapper[4861]: Feb 19 13:34:05 crc kubenswrapper[4861]: # going for maximum compatibility here: Feb 19 13:34:05 
crc kubenswrapper[4861]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 19 13:34:05 crc kubenswrapper[4861]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 19 13:34:05 crc kubenswrapper[4861]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 19 13:34:05 crc kubenswrapper[4861]: # support updates Feb 19 13:34:05 crc kubenswrapper[4861]: Feb 19 13:34:05 crc kubenswrapper[4861]: $MYSQL_CMD < logger="UnhandledError" Feb 19 13:34:05 crc kubenswrapper[4861]: E0219 13:34:05.243694 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-9594-account-create-update-bzndk" podUID="722c1574-0ce6-4d70-87ee-da04a01d79ad" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.250079 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-ae68-account-create-update-xbl4c"] Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.265794 4861 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.270691 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-9594-account-create-update-bzndk"] Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.278789 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.278813 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a4affa6-9b49-416a-9887-fdffab32916c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 
19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.278832 4861 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:05 crc kubenswrapper[4861]: E0219 13:34:05.278884 4861 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Feb 19 13:34:05 crc kubenswrapper[4861]: E0219 13:34:05.278942 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff11b43a-9b7c-42c8-afac-6f66908975dc-operator-scripts podName:ff11b43a-9b7c-42c8-afac-6f66908975dc nodeName:}" failed. No retries permitted until 2026-02-19 13:34:06.278925145 +0000 UTC m=+1460.940028373 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/ff11b43a-9b7c-42c8-afac-6f66908975dc-operator-scripts") pod "root-account-create-update-hxvnj" (UID: "ff11b43a-9b7c-42c8-afac-6f66908975dc") : configmap "openstack-cell1-scripts" not found Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.291330 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-config" (OuterVolumeSpecName: "config") pod "e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0" (UID: "e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.357130 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0" (UID: "e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.371909 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "99cc42c8-6836-4d0e-8d13-6b0a44b2583f" (UID: "99cc42c8-6836-4d0e-8d13-6b0a44b2583f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.380807 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.381003 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.381059 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.401319 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a4affa6-9b49-416a-9887-fdffab32916c-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "8a4affa6-9b49-416a-9887-fdffab32916c" (UID: "8a4affa6-9b49-416a-9887-fdffab32916c"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.407946 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "99cc42c8-6836-4d0e-8d13-6b0a44b2583f" (UID: "99cc42c8-6836-4d0e-8d13-6b0a44b2583f"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.409160 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c645ced-1599-4f62-ab9b-0e109a7e02c3-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "2c645ced-1599-4f62-ab9b-0e109a7e02c3" (UID: "2c645ced-1599-4f62-ab9b-0e109a7e02c3"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.412488 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c645ced-1599-4f62-ab9b-0e109a7e02c3-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "2c645ced-1599-4f62-ab9b-0e109a7e02c3" (UID: "2c645ced-1599-4f62-ab9b-0e109a7e02c3"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.415247 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0" (UID: "e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.481194 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a4affa6-9b49-416a-9887-fdffab32916c-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "8a4affa6-9b49-416a-9887-fdffab32916c" (UID: "8a4affa6-9b49-416a-9887-fdffab32916c"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.482702 4861 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/99cc42c8-6836-4d0e-8d13-6b0a44b2583f-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.482726 4861 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c645ced-1599-4f62-ab9b-0e109a7e02c3-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.482734 4861 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4affa6-9b49-416a-9887-fdffab32916c-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.482765 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4affa6-9b49-416a-9887-fdffab32916c-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.482774 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c645ced-1599-4f62-ab9b-0e109a7e02c3-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 
13:34:05.482783 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.485847 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.590867 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4524053f-0367-4216-8916-1b5315dbe8d8-openstack-config\") pod \"4524053f-0367-4216-8916-1b5315dbe8d8\" (UID: \"4524053f-0367-4216-8916-1b5315dbe8d8\") " Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.590948 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsnvs\" (UniqueName: \"kubernetes.io/projected/4524053f-0367-4216-8916-1b5315dbe8d8-kube-api-access-xsnvs\") pod \"4524053f-0367-4216-8916-1b5315dbe8d8\" (UID: \"4524053f-0367-4216-8916-1b5315dbe8d8\") " Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.591107 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4524053f-0367-4216-8916-1b5315dbe8d8-combined-ca-bundle\") pod \"4524053f-0367-4216-8916-1b5315dbe8d8\" (UID: \"4524053f-0367-4216-8916-1b5315dbe8d8\") " Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.591176 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4524053f-0367-4216-8916-1b5315dbe8d8-openstack-config-secret\") pod \"4524053f-0367-4216-8916-1b5315dbe8d8\" (UID: \"4524053f-0367-4216-8916-1b5315dbe8d8\") " Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.608151 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/4524053f-0367-4216-8916-1b5315dbe8d8-kube-api-access-xsnvs" (OuterVolumeSpecName: "kube-api-access-xsnvs") pod "4524053f-0367-4216-8916-1b5315dbe8d8" (UID: "4524053f-0367-4216-8916-1b5315dbe8d8"). InnerVolumeSpecName "kube-api-access-xsnvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.618072 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4524053f-0367-4216-8916-1b5315dbe8d8-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "4524053f-0367-4216-8916-1b5315dbe8d8" (UID: "4524053f-0367-4216-8916-1b5315dbe8d8"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.635523 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4524053f-0367-4216-8916-1b5315dbe8d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4524053f-0367-4216-8916-1b5315dbe8d8" (UID: "4524053f-0367-4216-8916-1b5315dbe8d8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.693622 4861 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4524053f-0367-4216-8916-1b5315dbe8d8-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.693658 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsnvs\" (UniqueName: \"kubernetes.io/projected/4524053f-0367-4216-8916-1b5315dbe8d8-kube-api-access-xsnvs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.693668 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4524053f-0367-4216-8916-1b5315dbe8d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.705868 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-679b4d4449-j6f75" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.709719 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4524053f-0367-4216-8916-1b5315dbe8d8-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "4524053f-0367-4216-8916-1b5315dbe8d8" (UID: "4524053f-0367-4216-8916-1b5315dbe8d8"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.756126 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58f6456c9f-znr64" event={"ID":"e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0","Type":"ContainerDied","Data":"788d4b4f82b0a608949e40ba653f9e5543d3c723af89e22de1431d976fe0ac11"} Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.756864 4861 scope.go:117] "RemoveContainer" containerID="c09621d9ce982c78679bd413fb7cb9573fc6a8affe3cae2b67c7d40d0b7c5cd2" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.756175 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58f6456c9f-znr64" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.764613 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9594-account-create-update-bzndk" event={"ID":"722c1574-0ce6-4d70-87ee-da04a01d79ad","Type":"ContainerStarted","Data":"b415b38f2eb82a4005ccecd42dd86d6fc9ed92633fb82e8c700502b66ab77a64"} Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.796547 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwxxx\" (UniqueName: \"kubernetes.io/projected/c3559fea-5929-4904-9be2-136f10ea1023-kube-api-access-vwxxx\") pod \"c3559fea-5929-4904-9be2-136f10ea1023\" (UID: \"c3559fea-5929-4904-9be2-136f10ea1023\") " Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.796607 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3559fea-5929-4904-9be2-136f10ea1023-combined-ca-bundle\") pod \"c3559fea-5929-4904-9be2-136f10ea1023\" (UID: \"c3559fea-5929-4904-9be2-136f10ea1023\") " Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.796738 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c3559fea-5929-4904-9be2-136f10ea1023-run-httpd\") pod \"c3559fea-5929-4904-9be2-136f10ea1023\" (UID: \"c3559fea-5929-4904-9be2-136f10ea1023\") " Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.796775 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3559fea-5929-4904-9be2-136f10ea1023-internal-tls-certs\") pod \"c3559fea-5929-4904-9be2-136f10ea1023\" (UID: \"c3559fea-5929-4904-9be2-136f10ea1023\") " Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.796840 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3559fea-5929-4904-9be2-136f10ea1023-etc-swift\") pod \"c3559fea-5929-4904-9be2-136f10ea1023\" (UID: \"c3559fea-5929-4904-9be2-136f10ea1023\") " Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.796867 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3559fea-5929-4904-9be2-136f10ea1023-log-httpd\") pod \"c3559fea-5929-4904-9be2-136f10ea1023\" (UID: \"c3559fea-5929-4904-9be2-136f10ea1023\") " Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.796926 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3559fea-5929-4904-9be2-136f10ea1023-config-data\") pod \"c3559fea-5929-4904-9be2-136f10ea1023\" (UID: \"c3559fea-5929-4904-9be2-136f10ea1023\") " Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.796948 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3559fea-5929-4904-9be2-136f10ea1023-public-tls-certs\") pod \"c3559fea-5929-4904-9be2-136f10ea1023\" (UID: \"c3559fea-5929-4904-9be2-136f10ea1023\") " Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.797399 4861 
reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4524053f-0367-4216-8916-1b5315dbe8d8-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:05 crc kubenswrapper[4861]: E0219 13:34:05.797490 4861 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 19 13:34:05 crc kubenswrapper[4861]: E0219 13:34:05.797546 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fe64a04b-1266-4b02-88e5-191f4a974422-config-data podName:fe64a04b-1266-4b02-88e5-191f4a974422 nodeName:}" failed. No retries permitted until 2026-02-19 13:34:09.797528186 +0000 UTC m=+1464.458631404 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/fe64a04b-1266-4b02-88e5-191f4a974422-config-data") pod "rabbitmq-cell1-server-0" (UID: "fe64a04b-1266-4b02-88e5-191f4a974422") : configmap "rabbitmq-cell1-config-data" not found Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.807517 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3559fea-5929-4904-9be2-136f10ea1023-kube-api-access-vwxxx" (OuterVolumeSpecName: "kube-api-access-vwxxx") pod "c3559fea-5929-4904-9be2-136f10ea1023" (UID: "c3559fea-5929-4904-9be2-136f10ea1023"). InnerVolumeSpecName "kube-api-access-vwxxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.808123 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3559fea-5929-4904-9be2-136f10ea1023-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c3559fea-5929-4904-9be2-136f10ea1023" (UID: "c3559fea-5929-4904-9be2-136f10ea1023"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.809365 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ae68-account-create-update-xbl4c" event={"ID":"1eb2f892-b8f2-423d-b7fa-bed98cf7683a","Type":"ContainerStarted","Data":"9450544c5af47e61c80ff25007ec72dff701ac458c7f36413cf2ead45f08a8e9"} Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.816938 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3559fea-5929-4904-9be2-136f10ea1023-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c3559fea-5929-4904-9be2-136f10ea1023" (UID: "c3559fea-5929-4904-9be2-136f10ea1023"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.817499 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3559fea-5929-4904-9be2-136f10ea1023-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c3559fea-5929-4904-9be2-136f10ea1023" (UID: "c3559fea-5929-4904-9be2-136f10ea1023"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.818866 4861 generic.go:334] "Generic (PLEG): container finished" podID="b863561a-440f-4e92-a8f3-4786a24d0a5f" containerID="08f5ede146101abfdbe72fa01b651ee0b64dd6fc80f2a9cb3fa76ff9918744f3" exitCode=143 Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.818958 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b863561a-440f-4e92-a8f3-4786a24d0a5f","Type":"ContainerDied","Data":"08f5ede146101abfdbe72fa01b651ee0b64dd6fc80f2a9cb3fa76ff9918744f3"} Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.832309 4861 generic.go:334] "Generic (PLEG): container finished" podID="a4307ff9-78bb-48ec-8096-6e06ff22e19b" containerID="53911b7e6ee036738f82d06e28457c9efb4b7e608b7a20ad34bd125adf651646" exitCode=143 Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.832384 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6ffbcbb99f-phcxs" event={"ID":"a4307ff9-78bb-48ec-8096-6e06ff22e19b","Type":"ContainerDied","Data":"53911b7e6ee036738f82d06e28457c9efb4b7e608b7a20ad34bd125adf651646"} Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.872669 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.890653 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-81eb-account-create-update-g7j6p" event={"ID":"afc3eb95-9d8a-449a-937d-7db3d3dd3d69","Type":"ContainerStarted","Data":"3ff6b769d3d07f03d352a6b7158708a709a0f425884599355ea85adb2648a1bc"} Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.897399 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3559fea-5929-4904-9be2-136f10ea1023-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c3559fea-5929-4904-9be2-136f10ea1023" (UID: "c3559fea-5929-4904-9be2-136f10ea1023"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.900917 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwxxx\" (UniqueName: \"kubernetes.io/projected/c3559fea-5929-4904-9be2-136f10ea1023-kube-api-access-vwxxx\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.900956 4861 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3559fea-5929-4904-9be2-136f10ea1023-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.900969 4861 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3559fea-5929-4904-9be2-136f10ea1023-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.900989 4861 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3559fea-5929-4904-9be2-136f10ea1023-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.900999 4861 reconciler_common.go:293] 
"Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3559fea-5929-4904-9be2-136f10ea1023-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.901697 4861 generic.go:334] "Generic (PLEG): container finished" podID="c45bf5fa-a71c-4221-89a9-9c4965821c63" containerID="a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358" exitCode=0 Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.901799 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-d4skq" event={"ID":"c45bf5fa-a71c-4221-89a9-9c4965821c63","Type":"ContainerDied","Data":"a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358"} Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.918126 4861 generic.go:334] "Generic (PLEG): container finished" podID="07533556-6a9f-4844-be7d-f9c9cf8c53a4" containerID="fc2859ece05740938f3f6ba76783e5c0e1cd3f33027c89196eab75df118d742b" exitCode=143 Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.918206 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"07533556-6a9f-4844-be7d-f9c9cf8c53a4","Type":"ContainerDied","Data":"fc2859ece05740938f3f6ba76783e5c0e1cd3f33027c89196eab75df118d742b"} Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.922091 4861 generic.go:334] "Generic (PLEG): container finished" podID="cdaa2d03-6ae0-405a-af42-499d99ec711d" containerID="4da13ead2ea9ec2a3cf985ce57e0d64f8641678421b9b3e6a3695e68e35cfeb4" exitCode=0 Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.922132 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f585bc76f-dg9rf" event={"ID":"cdaa2d03-6ae0-405a-af42-499d99ec711d","Type":"ContainerDied","Data":"4da13ead2ea9ec2a3cf985ce57e0d64f8641678421b9b3e6a3695e68e35cfeb4"} Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.924021 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/c3559fea-5929-4904-9be2-136f10ea1023-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c3559fea-5929-4904-9be2-136f10ea1023" (UID: "c3559fea-5929-4904-9be2-136f10ea1023"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.924315 4861 generic.go:334] "Generic (PLEG): container finished" podID="ea376614-9f7c-4d27-aa0b-a0dba5c99a6a" containerID="6726e1fe83695be57e870a22efef89e68bbd009b8791859e1a75a341ca4e9ea7" exitCode=0 Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.924356 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a","Type":"ContainerDied","Data":"6726e1fe83695be57e870a22efef89e68bbd009b8791859e1a75a341ca4e9ea7"} Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.925842 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5b8e-account-create-update-5mwlh" event={"ID":"bd101633-19ce-4277-8ab9-b19319febd08","Type":"ContainerStarted","Data":"090182daad27a162548d65b8749454c4afc1537264b0c85cb5217fee670e43c5"} Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.930072 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-8bt78_99cc42c8-6836-4d0e-8d13-6b0a44b2583f/openstack-network-exporter/0.log" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.930122 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-8bt78" event={"ID":"99cc42c8-6836-4d0e-8d13-6b0a44b2583f","Type":"ContainerDied","Data":"b141b0c9e6aa947ddd6e7a375e650503f18da8815fc88e2a8857a8ce200f2e1b"} Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.930173 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-8bt78" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.937006 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hxvnj" event={"ID":"ff11b43a-9b7c-42c8-afac-6f66908975dc","Type":"ContainerStarted","Data":"732b4cff15c44d4403f193f10fcf2bd5b6f0bb23a75a23786912b65d5492fa98"} Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.944131 4861 generic.go:334] "Generic (PLEG): container finished" podID="5d002f91-22f1-4ebd-8bc9-04e81e4a00ef" containerID="5e5b1d9b0913f678bfeffc302d49785832edca29f1e96866dc26ab6c9f4872d5" exitCode=143 Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.944189 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-545d79c874-vmrzt" event={"ID":"5d002f91-22f1-4ebd-8bc9-04e81e4a00ef","Type":"ContainerDied","Data":"5e5b1d9b0913f678bfeffc302d49785832edca29f1e96866dc26ab6c9f4872d5"} Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.945600 4861 generic.go:334] "Generic (PLEG): container finished" podID="9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532" containerID="531ccdafb0d0612a0c1e667d57b0657cf8216937bc4a152030d92ea096cbe11a" exitCode=0 Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.945634 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532","Type":"ContainerDied","Data":"531ccdafb0d0612a0c1e667d57b0657cf8216937bc4a152030d92ea096cbe11a"} Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.947775 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_8a4affa6-9b49-416a-9887-fdffab32916c/ovsdbserver-sb/0.log" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.947826 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"8a4affa6-9b49-416a-9887-fdffab32916c","Type":"ContainerDied","Data":"e039c4b138a6eaa12993206a39ae44ea78fb1fb90f122fca0898f1c84cfbbc18"} Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.947903 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.967508 4861 generic.go:334] "Generic (PLEG): container finished" podID="46d0ac5c-1d20-4b80-be1b-21ad2641b215" containerID="a6147d4413d0fab04021a29f6c8ca99f658d6f9b5f9f258fb48c889b282281d7" exitCode=143 Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.967605 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6799fd8d6-p6tpl" event={"ID":"46d0ac5c-1d20-4b80-be1b-21ad2641b215","Type":"ContainerDied","Data":"a6147d4413d0fab04021a29f6c8ca99f658d6f9b5f9f258fb48c889b282281d7"} Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.981883 4861 generic.go:334] "Generic (PLEG): container finished" podID="707836a9-478e-4110-b5f5-9ee7e6b46e21" containerID="ec45a853d859de7712ce8a8df163d92e6be4aa74bd810bb6e043ce0ce739485a" exitCode=143 Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.982006 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"707836a9-478e-4110-b5f5-9ee7e6b46e21","Type":"ContainerDied","Data":"ec45a853d859de7712ce8a8df163d92e6be4aa74bd810bb6e043ce0ce739485a"} Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.987773 4861 generic.go:334] "Generic (PLEG): container finished" podID="074e719c-b46b-4f91-ae2d-e7f30368a8ae" containerID="776ff5a17e90ebc21d49721478bc41e7146bdd38de85dd86200078fc345273f3" exitCode=143 Feb 19 13:34:05 crc kubenswrapper[4861]: I0219 13:34:05.993967 4861 generic.go:334] "Generic (PLEG): container finished" podID="c3559fea-5929-4904-9be2-136f10ea1023" containerID="a06e4f44d0e8390c8484158ea064a59374a08883feefb91477c32c4689fb3d14" exitCode=0 Feb 19 13:34:05 crc 
kubenswrapper[4861]: I0219 13:34:05.994001 4861 generic.go:334] "Generic (PLEG): container finished" podID="c3559fea-5929-4904-9be2-136f10ea1023" containerID="2b9f559ff1565336f7300a263865547c2e19ce23a755a63c72cc55c3888d6f1a" exitCode=0 Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.004979 4861 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3559fea-5929-4904-9be2-136f10ea1023-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.051700 4861 scope.go:117] "RemoveContainer" containerID="2d9219abd0b0183e6794131291b2740df409ae7257dd6d6efe7affed7f803e8b" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.054679 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-679b4d4449-j6f75" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.055556 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6dfc-account-create-update-c6fh8" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.055848 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 13:34:06 crc kubenswrapper[4861]: E0219 13:34:06.066721 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 13:34:06 crc kubenswrapper[4861]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 19 13:34:06 crc kubenswrapper[4861]: Feb 19 13:34:06 crc kubenswrapper[4861]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 19 13:34:06 crc kubenswrapper[4861]: Feb 19 13:34:06 crc kubenswrapper[4861]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 19 13:34:06 crc kubenswrapper[4861]: Feb 19 13:34:06 crc kubenswrapper[4861]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 19 13:34:06 crc kubenswrapper[4861]: Feb 19 13:34:06 crc kubenswrapper[4861]: if [ -n "neutron" ]; then Feb 19 13:34:06 crc kubenswrapper[4861]: GRANT_DATABASE="neutron" Feb 19 13:34:06 crc kubenswrapper[4861]: else Feb 19 13:34:06 crc kubenswrapper[4861]: GRANT_DATABASE="*" Feb 19 13:34:06 crc kubenswrapper[4861]: fi Feb 19 13:34:06 crc kubenswrapper[4861]: Feb 19 13:34:06 crc kubenswrapper[4861]: # going for maximum compatibility here: Feb 19 13:34:06 crc kubenswrapper[4861]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 19 13:34:06 crc kubenswrapper[4861]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 19 13:34:06 crc kubenswrapper[4861]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 19 13:34:06 crc kubenswrapper[4861]: # support updates Feb 19 13:34:06 crc kubenswrapper[4861]: Feb 19 13:34:06 crc kubenswrapper[4861]: $MYSQL_CMD < logger="UnhandledError" Feb 19 13:34:06 crc kubenswrapper[4861]: E0219 13:34:06.068578 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"neutron-db-secret\\\" not found\"" pod="openstack/neutron-cd89-account-create-update-bnqdw" podUID="279a265d-0cc8-45af-82ba-b8a485796fae" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.076175 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34b85ee6-f9f6-4f1e-8fc9-23072e437a14" path="/var/lib/kubelet/pods/34b85ee6-f9f6-4f1e-8fc9-23072e437a14/volumes" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.076790 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="411cd56f-4fb3-4f9b-9cfe-e287f22a4609" path="/var/lib/kubelet/pods/411cd56f-4fb3-4f9b-9cfe-e287f22a4609/volumes" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.080809 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6dfc-account-create-update-c6fh8" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.083833 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4524053f-0367-4216-8916-1b5315dbe8d8" path="/var/lib/kubelet/pods/4524053f-0367-4216-8916-1b5315dbe8d8/volumes" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.084657 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ddffdeb-5390-498e-bed8-e72fe5934034" path="/var/lib/kubelet/pods/5ddffdeb-5390-498e-bed8-e72fe5934034/volumes" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.085193 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67656319-0c92-41c6-a2f5-97ec64fc2ec6" path="/var/lib/kubelet/pods/67656319-0c92-41c6-a2f5-97ec64fc2ec6/volumes" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.086409 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a0a1119-88df-47db-9b04-87d908da605d" path="/var/lib/kubelet/pods/9a0a1119-88df-47db-9b04-87d908da605d/volumes" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.087045 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a49adc4e-23bf-4d7c-9c65-6ad99ee77551" path="/var/lib/kubelet/pods/a49adc4e-23bf-4d7c-9c65-6ad99ee77551/volumes" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.087640 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4d43591-2a09-426b-b8e3-9f21dd334f70" path="/var/lib/kubelet/pods/a4d43591-2a09-426b-b8e3-9f21dd334f70/volumes" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.088192 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aee09866-fa7a-4558-af5c-992b2ae7268c" path="/var/lib/kubelet/pods/aee09866-fa7a-4558-af5c-992b2ae7268c/volumes" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.089077 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/c3559fea-5929-4904-9be2-136f10ea1023-config-data" (OuterVolumeSpecName: "config-data") pod "c3559fea-5929-4904-9be2-136f10ea1023" (UID: "c3559fea-5929-4904-9be2-136f10ea1023"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.092196 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afe8c746-882a-4160-b840-a00f2a2f267c" path="/var/lib/kubelet/pods/afe8c746-882a-4160-b840-a00f2a2f267c/volumes" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.108605 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3559fea-5929-4904-9be2-136f10ea1023-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3559fea-5929-4904-9be2-136f10ea1023" (UID: "c3559fea-5929-4904-9be2-136f10ea1023"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.145530 4861 scope.go:117] "RemoveContainer" containerID="5ed4b552bcd4114fdac5d241534550b1d5ddde0aea8ceaa83bb843a83f153d3e" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.145718 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3559fea-5929-4904-9be2-136f10ea1023-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.146140 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df6f3661-24bc-49dd-88fe-3bcf93ea1039" path="/var/lib/kubelet/pods/df6f3661-24bc-49dd-88fe-3bcf93ea1039/volumes" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.146687 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e55896ea-935d-4340-a4d6-5429eb546e83" path="/var/lib/kubelet/pods/e55896ea-935d-4340-a4d6-5429eb546e83/volumes" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.147573 4861 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffcfa3dd-97f3-4975-bb69-25a4031896a7" path="/var/lib/kubelet/pods/ffcfa3dd-97f3-4975-bb69-25a4031896a7/volumes" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.163407 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d6c467cc6-ng4wh" event={"ID":"074e719c-b46b-4f91-ae2d-e7f30368a8ae","Type":"ContainerDied","Data":"776ff5a17e90ebc21d49721478bc41e7146bdd38de85dd86200078fc345273f3"} Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.163464 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-679b4d4449-j6f75" event={"ID":"c3559fea-5929-4904-9be2-136f10ea1023","Type":"ContainerDied","Data":"a06e4f44d0e8390c8484158ea064a59374a08883feefb91477c32c4689fb3d14"} Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.163479 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-679b4d4449-j6f75" event={"ID":"c3559fea-5929-4904-9be2-136f10ea1023","Type":"ContainerDied","Data":"2b9f559ff1565336f7300a263865547c2e19ce23a755a63c72cc55c3888d6f1a"} Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.163487 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-679b4d4449-j6f75" event={"ID":"c3559fea-5929-4904-9be2-136f10ea1023","Type":"ContainerDied","Data":"01610bb6a7e62a122f0414517568d090b81343b2906e902965fa53da6cb1417b"} Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.240593 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-8bt78"] Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.266904 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3559fea-5929-4904-9be2-136f10ea1023-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.279046 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/ovn-controller-metrics-8bt78"] Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.332331 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58f6456c9f-znr64"] Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.344395 4861 scope.go:117] "RemoveContainer" containerID="9ecb2a5a0a2e89a60f5cb3038c9b421573c3c77bb1ddc2f92c091a4a5d3709ac" Feb 19 13:34:06 crc kubenswrapper[4861]: E0219 13:34:06.371711 4861 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Feb 19 13:34:06 crc kubenswrapper[4861]: E0219 13:34:06.371778 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff11b43a-9b7c-42c8-afac-6f66908975dc-operator-scripts podName:ff11b43a-9b7c-42c8-afac-6f66908975dc nodeName:}" failed. No retries permitted until 2026-02-19 13:34:08.371761209 +0000 UTC m=+1463.032864437 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/ff11b43a-9b7c-42c8-afac-6f66908975dc-operator-scripts") pod "root-account-create-update-hxvnj" (UID: "ff11b43a-9b7c-42c8-afac-6f66908975dc") : configmap "openstack-cell1-scripts" not found Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.393295 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58f6456c9f-znr64"] Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.404539 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.461448 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.473087 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.492482 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.519160 4861 scope.go:117] "RemoveContainer" containerID="4ee4dd2ad3f07f52f88fc12ff7db447e44a9a47a1087d34bbe4cd277664e32c4" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.549098 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.556456 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-kfscq"] Feb 19 13:34:06 crc kubenswrapper[4861]: E0219 13:34:06.556871 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99cc42c8-6836-4d0e-8d13-6b0a44b2583f" containerName="openstack-network-exporter" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.556882 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="99cc42c8-6836-4d0e-8d13-6b0a44b2583f" containerName="openstack-network-exporter" Feb 19 13:34:06 crc kubenswrapper[4861]: E0219 13:34:06.556891 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c645ced-1599-4f62-ab9b-0e109a7e02c3" containerName="openstack-network-exporter" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.556897 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c645ced-1599-4f62-ab9b-0e109a7e02c3" containerName="openstack-network-exporter" Feb 19 13:34:06 crc kubenswrapper[4861]: E0219 13:34:06.556908 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0" containerName="dnsmasq-dns" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.556914 4861 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0" containerName="dnsmasq-dns" Feb 19 13:34:06 crc kubenswrapper[4861]: E0219 13:34:06.556927 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a4affa6-9b49-416a-9887-fdffab32916c" containerName="ovsdbserver-sb" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.556935 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a4affa6-9b49-416a-9887-fdffab32916c" containerName="ovsdbserver-sb" Feb 19 13:34:06 crc kubenswrapper[4861]: E0219 13:34:06.556945 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c645ced-1599-4f62-ab9b-0e109a7e02c3" containerName="ovsdbserver-nb" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.556950 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c645ced-1599-4f62-ab9b-0e109a7e02c3" containerName="ovsdbserver-nb" Feb 19 13:34:06 crc kubenswrapper[4861]: E0219 13:34:06.556963 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3559fea-5929-4904-9be2-136f10ea1023" containerName="proxy-server" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.556969 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3559fea-5929-4904-9be2-136f10ea1023" containerName="proxy-server" Feb 19 13:34:06 crc kubenswrapper[4861]: E0219 13:34:06.556978 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.556986 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 13:34:06 crc kubenswrapper[4861]: E0219 13:34:06.557003 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3559fea-5929-4904-9be2-136f10ea1023" containerName="proxy-httpd" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.557008 4861 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="c3559fea-5929-4904-9be2-136f10ea1023" containerName="proxy-httpd" Feb 19 13:34:06 crc kubenswrapper[4861]: E0219 13:34:06.557021 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0" containerName="init" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.557026 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0" containerName="init" Feb 19 13:34:06 crc kubenswrapper[4861]: E0219 13:34:06.557038 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a4affa6-9b49-416a-9887-fdffab32916c" containerName="openstack-network-exporter" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.557044 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a4affa6-9b49-416a-9887-fdffab32916c" containerName="openstack-network-exporter" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.557200 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c645ced-1599-4f62-ab9b-0e109a7e02c3" containerName="ovsdbserver-nb" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.557209 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="99cc42c8-6836-4d0e-8d13-6b0a44b2583f" containerName="openstack-network-exporter" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.557222 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0" containerName="dnsmasq-dns" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.557231 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3559fea-5929-4904-9be2-136f10ea1023" containerName="proxy-httpd" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.557238 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3559fea-5929-4904-9be2-136f10ea1023" containerName="proxy-server" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.557247 4861 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2c645ced-1599-4f62-ab9b-0e109a7e02c3" containerName="openstack-network-exporter" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.557255 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.557263 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a4affa6-9b49-416a-9887-fdffab32916c" containerName="ovsdbserver-sb" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.557275 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a4affa6-9b49-416a-9887-fdffab32916c" containerName="openstack-network-exporter" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.557816 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kfscq" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.564973 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.570046 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kfscq"] Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.574312 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slqvd\" (UniqueName: \"kubernetes.io/projected/9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532-kube-api-access-slqvd\") pod \"9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532\" (UID: \"9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532\") " Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.575595 4861 scope.go:117] "RemoveContainer" containerID="5c28b6840c393e68a345c69df7ef556c1f82a9902c2ad4d548998a391a2216a8" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.580221 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/swift-proxy-679b4d4449-j6f75"] Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.584088 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532-kube-api-access-slqvd" (OuterVolumeSpecName: "kube-api-access-slqvd") pod "9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532" (UID: "9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532"). InnerVolumeSpecName "kube-api-access-slqvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.598768 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-679b4d4449-j6f75"] Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.607250 4861 scope.go:117] "RemoveContainer" containerID="a06e4f44d0e8390c8484158ea064a59374a08883feefb91477c32c4689fb3d14" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.611241 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532-vencrypt-tls-certs\") pod \"9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532\" (UID: \"9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532\") " Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.611311 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532-nova-novncproxy-tls-certs\") pod \"9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532\" (UID: \"9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532\") " Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.618228 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-9594-account-create-update-bzndk" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.623730 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532-combined-ca-bundle\") pod \"9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532\" (UID: \"9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532\") " Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.623801 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532-config-data\") pod \"9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532\" (UID: \"9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532\") " Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.624228 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fvkr\" (UniqueName: \"kubernetes.io/projected/8245aba7-ec9b-4b09-a3ba-691c370db0cf-kube-api-access-8fvkr\") pod \"root-account-create-update-kfscq\" (UID: \"8245aba7-ec9b-4b09-a3ba-691c370db0cf\") " pod="openstack/root-account-create-update-kfscq" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.624529 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8245aba7-ec9b-4b09-a3ba-691c370db0cf-operator-scripts\") pod \"root-account-create-update-kfscq\" (UID: \"8245aba7-ec9b-4b09-a3ba-691c370db0cf\") " pod="openstack/root-account-create-update-kfscq" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.624598 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slqvd\" (UniqueName: \"kubernetes.io/projected/9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532-kube-api-access-slqvd\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.655538 4861 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532" (UID: "9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.661600 4861 scope.go:117] "RemoveContainer" containerID="2b9f559ff1565336f7300a263865547c2e19ce23a755a63c72cc55c3888d6f1a" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.674637 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532-config-data" (OuterVolumeSpecName: "config-data") pod "9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532" (UID: "9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.681189 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532" (UID: "9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.685118 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532" (UID: "9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532"). InnerVolumeSpecName "nova-novncproxy-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.709918 4861 scope.go:117] "RemoveContainer" containerID="a06e4f44d0e8390c8484158ea064a59374a08883feefb91477c32c4689fb3d14" Feb 19 13:34:06 crc kubenswrapper[4861]: E0219 13:34:06.710895 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a06e4f44d0e8390c8484158ea064a59374a08883feefb91477c32c4689fb3d14\": container with ID starting with a06e4f44d0e8390c8484158ea064a59374a08883feefb91477c32c4689fb3d14 not found: ID does not exist" containerID="a06e4f44d0e8390c8484158ea064a59374a08883feefb91477c32c4689fb3d14" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.710943 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a06e4f44d0e8390c8484158ea064a59374a08883feefb91477c32c4689fb3d14"} err="failed to get container status \"a06e4f44d0e8390c8484158ea064a59374a08883feefb91477c32c4689fb3d14\": rpc error: code = NotFound desc = could not find container \"a06e4f44d0e8390c8484158ea064a59374a08883feefb91477c32c4689fb3d14\": container with ID starting with a06e4f44d0e8390c8484158ea064a59374a08883feefb91477c32c4689fb3d14 not found: ID does not exist" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.710975 4861 scope.go:117] "RemoveContainer" containerID="2b9f559ff1565336f7300a263865547c2e19ce23a755a63c72cc55c3888d6f1a" Feb 19 13:34:06 crc kubenswrapper[4861]: E0219 13:34:06.711939 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b9f559ff1565336f7300a263865547c2e19ce23a755a63c72cc55c3888d6f1a\": container with ID starting with 2b9f559ff1565336f7300a263865547c2e19ce23a755a63c72cc55c3888d6f1a not found: ID does not exist" containerID="2b9f559ff1565336f7300a263865547c2e19ce23a755a63c72cc55c3888d6f1a" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.711974 
4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b9f559ff1565336f7300a263865547c2e19ce23a755a63c72cc55c3888d6f1a"} err="failed to get container status \"2b9f559ff1565336f7300a263865547c2e19ce23a755a63c72cc55c3888d6f1a\": rpc error: code = NotFound desc = could not find container \"2b9f559ff1565336f7300a263865547c2e19ce23a755a63c72cc55c3888d6f1a\": container with ID starting with 2b9f559ff1565336f7300a263865547c2e19ce23a755a63c72cc55c3888d6f1a not found: ID does not exist" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.711996 4861 scope.go:117] "RemoveContainer" containerID="a06e4f44d0e8390c8484158ea064a59374a08883feefb91477c32c4689fb3d14" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.713058 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a06e4f44d0e8390c8484158ea064a59374a08883feefb91477c32c4689fb3d14"} err="failed to get container status \"a06e4f44d0e8390c8484158ea064a59374a08883feefb91477c32c4689fb3d14\": rpc error: code = NotFound desc = could not find container \"a06e4f44d0e8390c8484158ea064a59374a08883feefb91477c32c4689fb3d14\": container with ID starting with a06e4f44d0e8390c8484158ea064a59374a08883feefb91477c32c4689fb3d14 not found: ID does not exist" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.713087 4861 scope.go:117] "RemoveContainer" containerID="2b9f559ff1565336f7300a263865547c2e19ce23a755a63c72cc55c3888d6f1a" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.713370 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b9f559ff1565336f7300a263865547c2e19ce23a755a63c72cc55c3888d6f1a"} err="failed to get container status \"2b9f559ff1565336f7300a263865547c2e19ce23a755a63c72cc55c3888d6f1a\": rpc error: code = NotFound desc = could not find container \"2b9f559ff1565336f7300a263865547c2e19ce23a755a63c72cc55c3888d6f1a\": container with ID starting with 
2b9f559ff1565336f7300a263865547c2e19ce23a755a63c72cc55c3888d6f1a not found: ID does not exist" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.725870 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkpn2\" (UniqueName: \"kubernetes.io/projected/722c1574-0ce6-4d70-87ee-da04a01d79ad-kube-api-access-lkpn2\") pod \"722c1574-0ce6-4d70-87ee-da04a01d79ad\" (UID: \"722c1574-0ce6-4d70-87ee-da04a01d79ad\") " Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.726046 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/722c1574-0ce6-4d70-87ee-da04a01d79ad-operator-scripts\") pod \"722c1574-0ce6-4d70-87ee-da04a01d79ad\" (UID: \"722c1574-0ce6-4d70-87ee-da04a01d79ad\") " Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.726406 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fvkr\" (UniqueName: \"kubernetes.io/projected/8245aba7-ec9b-4b09-a3ba-691c370db0cf-kube-api-access-8fvkr\") pod \"root-account-create-update-kfscq\" (UID: \"8245aba7-ec9b-4b09-a3ba-691c370db0cf\") " pod="openstack/root-account-create-update-kfscq" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.726598 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8245aba7-ec9b-4b09-a3ba-691c370db0cf-operator-scripts\") pod \"root-account-create-update-kfscq\" (UID: \"8245aba7-ec9b-4b09-a3ba-691c370db0cf\") " pod="openstack/root-account-create-update-kfscq" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.726689 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.726707 4861 reconciler_common.go:293] 
"Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.726722 4861 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.726734 4861 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.727678 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8245aba7-ec9b-4b09-a3ba-691c370db0cf-operator-scripts\") pod \"root-account-create-update-kfscq\" (UID: \"8245aba7-ec9b-4b09-a3ba-691c370db0cf\") " pod="openstack/root-account-create-update-kfscq" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.728453 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/722c1574-0ce6-4d70-87ee-da04a01d79ad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "722c1574-0ce6-4d70-87ee-da04a01d79ad" (UID: "722c1574-0ce6-4d70-87ee-da04a01d79ad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.732207 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/722c1574-0ce6-4d70-87ee-da04a01d79ad-kube-api-access-lkpn2" (OuterVolumeSpecName: "kube-api-access-lkpn2") pod "722c1574-0ce6-4d70-87ee-da04a01d79ad" (UID: "722c1574-0ce6-4d70-87ee-da04a01d79ad"). InnerVolumeSpecName "kube-api-access-lkpn2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.753093 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fvkr\" (UniqueName: \"kubernetes.io/projected/8245aba7-ec9b-4b09-a3ba-691c370db0cf-kube-api-access-8fvkr\") pod \"root-account-create-update-kfscq\" (UID: \"8245aba7-ec9b-4b09-a3ba-691c370db0cf\") " pod="openstack/root-account-create-update-kfscq" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.777685 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ae68-account-create-update-xbl4c" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.811331 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-81eb-account-create-update-g7j6p" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.830076 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89kqs\" (UniqueName: \"kubernetes.io/projected/afc3eb95-9d8a-449a-937d-7db3d3dd3d69-kube-api-access-89kqs\") pod \"afc3eb95-9d8a-449a-937d-7db3d3dd3d69\" (UID: \"afc3eb95-9d8a-449a-937d-7db3d3dd3d69\") " Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.830131 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1eb2f892-b8f2-423d-b7fa-bed98cf7683a-operator-scripts\") pod \"1eb2f892-b8f2-423d-b7fa-bed98cf7683a\" (UID: \"1eb2f892-b8f2-423d-b7fa-bed98cf7683a\") " Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.830215 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afc3eb95-9d8a-449a-937d-7db3d3dd3d69-operator-scripts\") pod \"afc3eb95-9d8a-449a-937d-7db3d3dd3d69\" (UID: \"afc3eb95-9d8a-449a-937d-7db3d3dd3d69\") " Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 
13:34:06.830270 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-594dr\" (UniqueName: \"kubernetes.io/projected/1eb2f892-b8f2-423d-b7fa-bed98cf7683a-kube-api-access-594dr\") pod \"1eb2f892-b8f2-423d-b7fa-bed98cf7683a\" (UID: \"1eb2f892-b8f2-423d-b7fa-bed98cf7683a\") " Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.830696 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkpn2\" (UniqueName: \"kubernetes.io/projected/722c1574-0ce6-4d70-87ee-da04a01d79ad-kube-api-access-lkpn2\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.830713 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/722c1574-0ce6-4d70-87ee-da04a01d79ad-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.830984 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1eb2f892-b8f2-423d-b7fa-bed98cf7683a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1eb2f892-b8f2-423d-b7fa-bed98cf7683a" (UID: "1eb2f892-b8f2-423d-b7fa-bed98cf7683a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.831013 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afc3eb95-9d8a-449a-937d-7db3d3dd3d69-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "afc3eb95-9d8a-449a-937d-7db3d3dd3d69" (UID: "afc3eb95-9d8a-449a-937d-7db3d3dd3d69"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.835984 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afc3eb95-9d8a-449a-937d-7db3d3dd3d69-kube-api-access-89kqs" (OuterVolumeSpecName: "kube-api-access-89kqs") pod "afc3eb95-9d8a-449a-937d-7db3d3dd3d69" (UID: "afc3eb95-9d8a-449a-937d-7db3d3dd3d69"). InnerVolumeSpecName "kube-api-access-89kqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.836541 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1eb2f892-b8f2-423d-b7fa-bed98cf7683a-kube-api-access-594dr" (OuterVolumeSpecName: "kube-api-access-594dr") pod "1eb2f892-b8f2-423d-b7fa-bed98cf7683a" (UID: "1eb2f892-b8f2-423d-b7fa-bed98cf7683a"). InnerVolumeSpecName "kube-api-access-594dr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.876188 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kfscq" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.933692 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13af2f92-2b45-4bca-925f-91e0d4102a56-operator-scripts\") pod \"nova-cell1-6dfc-account-create-update-c6fh8\" (UID: \"13af2f92-2b45-4bca-925f-91e0d4102a56\") " pod="openstack/nova-cell1-6dfc-account-create-update-c6fh8" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.933800 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-594dr\" (UniqueName: \"kubernetes.io/projected/1eb2f892-b8f2-423d-b7fa-bed98cf7683a-kube-api-access-594dr\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.933815 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89kqs\" (UniqueName: \"kubernetes.io/projected/afc3eb95-9d8a-449a-937d-7db3d3dd3d69-kube-api-access-89kqs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.933824 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1eb2f892-b8f2-423d-b7fa-bed98cf7683a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:06 crc kubenswrapper[4861]: I0219 13:34:06.933833 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afc3eb95-9d8a-449a-937d-7db3d3dd3d69-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:06 crc kubenswrapper[4861]: E0219 13:34:06.933799 4861 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 19 13:34:06 crc kubenswrapper[4861]: E0219 13:34:06.933901 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b117524a-eaad-4666-9e0e-bda909b2ad30-config-data 
podName:b117524a-eaad-4666-9e0e-bda909b2ad30 nodeName:}" failed. No retries permitted until 2026-02-19 13:34:10.933881906 +0000 UTC m=+1465.594985144 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b117524a-eaad-4666-9e0e-bda909b2ad30-config-data") pod "rabbitmq-server-0" (UID: "b117524a-eaad-4666-9e0e-bda909b2ad30") : configmap "rabbitmq-config-data" not found Feb 19 13:34:06 crc kubenswrapper[4861]: E0219 13:34:06.933904 4861 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Feb 19 13:34:06 crc kubenswrapper[4861]: E0219 13:34:06.933971 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13af2f92-2b45-4bca-925f-91e0d4102a56-operator-scripts podName:13af2f92-2b45-4bca-925f-91e0d4102a56 nodeName:}" failed. No retries permitted until 2026-02-19 13:34:10.933952218 +0000 UTC m=+1465.595055446 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/13af2f92-2b45-4bca-925f-91e0d4102a56-operator-scripts") pod "nova-cell1-6dfc-account-create-update-c6fh8" (UID: "13af2f92-2b45-4bca-925f-91e0d4102a56") : configmap "openstack-cell1-scripts" not found Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.025990 4861 generic.go:334] "Generic (PLEG): container finished" podID="77e9ae58-534e-4312-8b56-9ec6708995ac" containerID="362173b173d50713e668b40267cc089e3476fe407aa787eab8629d823b7bab2c" exitCode=0 Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.026039 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"77e9ae58-534e-4312-8b56-9ec6708995ac","Type":"ContainerDied","Data":"362173b173d50713e668b40267cc089e3476fe407aa787eab8629d823b7bab2c"} Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.027888 4861 generic.go:334] "Generic (PLEG): container finished" podID="ff11b43a-9b7c-42c8-afac-6f66908975dc" containerID="732b4cff15c44d4403f193f10fcf2bd5b6f0bb23a75a23786912b65d5492fa98" exitCode=1 Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.027924 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hxvnj" event={"ID":"ff11b43a-9b7c-42c8-afac-6f66908975dc","Type":"ContainerDied","Data":"732b4cff15c44d4403f193f10fcf2bd5b6f0bb23a75a23786912b65d5492fa98"} Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.027947 4861 scope.go:117] "RemoveContainer" containerID="664e37dab2fb0274c430251d7675644e7a1659b00349bf1be78d0e03e076d739" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.032928 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ae68-account-create-update-xbl4c" event={"ID":"1eb2f892-b8f2-423d-b7fa-bed98cf7683a","Type":"ContainerDied","Data":"9450544c5af47e61c80ff25007ec72dff701ac458c7f36413cf2ead45f08a8e9"} Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 
13:34:07.032988 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ae68-account-create-update-xbl4c" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.036760 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmhsh\" (UniqueName: \"kubernetes.io/projected/13af2f92-2b45-4bca-925f-91e0d4102a56-kube-api-access-bmhsh\") pod \"nova-cell1-6dfc-account-create-update-c6fh8\" (UID: \"13af2f92-2b45-4bca-925f-91e0d4102a56\") " pod="openstack/nova-cell1-6dfc-account-create-update-c6fh8" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.042116 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5b8e-account-create-update-5mwlh" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.042749 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532","Type":"ContainerDied","Data":"96991be9cc29d3f2467ec29101612fcc2720daf6748a4aa099802361c1822b8b"} Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.042807 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:34:07 crc kubenswrapper[4861]: E0219 13:34:07.042822 4861 projected.go:194] Error preparing data for projected volume kube-api-access-bmhsh for pod openstack/nova-cell1-6dfc-account-create-update-c6fh8: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Feb 19 13:34:07 crc kubenswrapper[4861]: E0219 13:34:07.042878 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13af2f92-2b45-4bca-925f-91e0d4102a56-kube-api-access-bmhsh podName:13af2f92-2b45-4bca-925f-91e0d4102a56 nodeName:}" failed. No retries permitted until 2026-02-19 13:34:11.042860507 +0000 UTC m=+1465.703963735 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bmhsh" (UniqueName: "kubernetes.io/projected/13af2f92-2b45-4bca-925f-91e0d4102a56-kube-api-access-bmhsh") pod "nova-cell1-6dfc-account-create-update-c6fh8" (UID: "13af2f92-2b45-4bca-925f-91e0d4102a56") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.051217 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-81eb-account-create-update-g7j6p" event={"ID":"afc3eb95-9d8a-449a-937d-7db3d3dd3d69","Type":"ContainerDied","Data":"3ff6b769d3d07f03d352a6b7158708a709a0f425884599355ea85adb2648a1bc"} Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.051292 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-81eb-account-create-update-g7j6p" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.061404 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9594-account-create-update-bzndk" event={"ID":"722c1574-0ce6-4d70-87ee-da04a01d79ad","Type":"ContainerDied","Data":"b415b38f2eb82a4005ccecd42dd86d6fc9ed92633fb82e8c700502b66ab77a64"} Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.061488 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9594-account-create-update-bzndk" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.070136 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-5b8e-account-create-update-5mwlh" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.070155 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5b8e-account-create-update-5mwlh" event={"ID":"bd101633-19ce-4277-8ab9-b19319febd08","Type":"ContainerDied","Data":"090182daad27a162548d65b8749454c4afc1537264b0c85cb5217fee670e43c5"} Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.072565 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6dfc-account-create-update-c6fh8" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.138205 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bcdd\" (UniqueName: \"kubernetes.io/projected/bd101633-19ce-4277-8ab9-b19319febd08-kube-api-access-5bcdd\") pod \"bd101633-19ce-4277-8ab9-b19319febd08\" (UID: \"bd101633-19ce-4277-8ab9-b19319febd08\") " Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.138610 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd101633-19ce-4277-8ab9-b19319febd08-operator-scripts\") pod \"bd101633-19ce-4277-8ab9-b19319febd08\" (UID: \"bd101633-19ce-4277-8ab9-b19319febd08\") " Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.139301 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd101633-19ce-4277-8ab9-b19319febd08-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bd101633-19ce-4277-8ab9-b19319febd08" (UID: "bd101633-19ce-4277-8ab9-b19319febd08"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.144689 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd101633-19ce-4277-8ab9-b19319febd08-kube-api-access-5bcdd" (OuterVolumeSpecName: "kube-api-access-5bcdd") pod "bd101633-19ce-4277-8ab9-b19319febd08" (UID: "bd101633-19ce-4277-8ab9-b19319febd08"). InnerVolumeSpecName "kube-api-access-5bcdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.147175 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b86e-account-create-update-mp5sk" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.156449 4861 scope.go:117] "RemoveContainer" containerID="531ccdafb0d0612a0c1e667d57b0657cf8216937bc4a152030d92ea096cbe11a" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.172178 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b129-account-create-update-dpq4s" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.193152 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.200074 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.218929 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-9594-account-create-update-bzndk"] Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.226780 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-9594-account-create-update-bzndk"] Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.232692 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hxvnj" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.244890 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30a1e51d-a60d-4f7f-8300-9ef99e3da2a6-operator-scripts\") pod \"30a1e51d-a60d-4f7f-8300-9ef99e3da2a6\" (UID: \"30a1e51d-a60d-4f7f-8300-9ef99e3da2a6\") " Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.244951 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r26hw\" (UniqueName: \"kubernetes.io/projected/30a1e51d-a60d-4f7f-8300-9ef99e3da2a6-kube-api-access-r26hw\") pod \"30a1e51d-a60d-4f7f-8300-9ef99e3da2a6\" (UID: \"30a1e51d-a60d-4f7f-8300-9ef99e3da2a6\") " Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.245092 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aca02ec-903d-4ddd-a7df-25d323ed6dc1-operator-scripts\") pod \"3aca02ec-903d-4ddd-a7df-25d323ed6dc1\" (UID: \"3aca02ec-903d-4ddd-a7df-25d323ed6dc1\") " Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.245120 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6mrv\" (UniqueName: \"kubernetes.io/projected/3aca02ec-903d-4ddd-a7df-25d323ed6dc1-kube-api-access-v6mrv\") pod \"3aca02ec-903d-4ddd-a7df-25d323ed6dc1\" (UID: \"3aca02ec-903d-4ddd-a7df-25d323ed6dc1\") " Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.245865 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bcdd\" (UniqueName: \"kubernetes.io/projected/bd101633-19ce-4277-8ab9-b19319febd08-kube-api-access-5bcdd\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.245882 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/bd101633-19ce-4277-8ab9-b19319febd08-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.248627 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="1da21583-02a3-4a99-a05c-976f017fb31c" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.179:9292/healthcheck\": read tcp 10.217.0.2:36002->10.217.0.179:9292: read: connection reset by peer" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.249000 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="1da21583-02a3-4a99-a05c-976f017fb31c" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.179:9292/healthcheck\": read tcp 10.217.0.2:35994->10.217.0.179:9292: read: connection reset by peer" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.255486 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30a1e51d-a60d-4f7f-8300-9ef99e3da2a6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "30a1e51d-a60d-4f7f-8300-9ef99e3da2a6" (UID: "30a1e51d-a60d-4f7f-8300-9ef99e3da2a6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.255828 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aca02ec-903d-4ddd-a7df-25d323ed6dc1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3aca02ec-903d-4ddd-a7df-25d323ed6dc1" (UID: "3aca02ec-903d-4ddd-a7df-25d323ed6dc1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.259717 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30a1e51d-a60d-4f7f-8300-9ef99e3da2a6-kube-api-access-r26hw" (OuterVolumeSpecName: "kube-api-access-r26hw") pod "30a1e51d-a60d-4f7f-8300-9ef99e3da2a6" (UID: "30a1e51d-a60d-4f7f-8300-9ef99e3da2a6"). InnerVolumeSpecName "kube-api-access-r26hw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.261548 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aca02ec-903d-4ddd-a7df-25d323ed6dc1-kube-api-access-v6mrv" (OuterVolumeSpecName: "kube-api-access-v6mrv") pod "3aca02ec-903d-4ddd-a7df-25d323ed6dc1" (UID: "3aca02ec-903d-4ddd-a7df-25d323ed6dc1"). InnerVolumeSpecName "kube-api-access-v6mrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.323549 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.323636 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-ae68-account-create-update-xbl4c"] Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.323941 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="11e264a8-32df-4980-a6b8-eb1964d644b9" containerName="ceilometer-central-agent" containerID="cri-o://2a3f1ab0fa078c08628336e0493b5bea70b63c6f50cb31d94e0bdcc4c18bb468" gracePeriod=30 Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.324112 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="11e264a8-32df-4980-a6b8-eb1964d644b9" containerName="proxy-httpd" containerID="cri-o://3cd0ac3239cff1282edbee81f3c0d497e15f737c6290e568ced9914379dbaec6" 
gracePeriod=30 Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.324157 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="11e264a8-32df-4980-a6b8-eb1964d644b9" containerName="sg-core" containerID="cri-o://96e096da5cc28914cdb4a90e4edba37087790b9cc227d970c910751b7cd84e84" gracePeriod=30 Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.324195 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="11e264a8-32df-4980-a6b8-eb1964d644b9" containerName="ceilometer-notification-agent" containerID="cri-o://9f6c4de0d600a26e221ad9f08bb51644c618ab4dae788de65e8a67c29ebd791c" gracePeriod=30 Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.370075 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-ae68-account-create-update-xbl4c"] Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.376540 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r26hw\" (UniqueName: \"kubernetes.io/projected/30a1e51d-a60d-4f7f-8300-9ef99e3da2a6-kube-api-access-r26hw\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.376583 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aca02ec-903d-4ddd-a7df-25d323ed6dc1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.376595 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6mrv\" (UniqueName: \"kubernetes.io/projected/3aca02ec-903d-4ddd-a7df-25d323ed6dc1-kube-api-access-v6mrv\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.376614 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30a1e51d-a60d-4f7f-8300-9ef99e3da2a6-operator-scripts\") on node \"crc\" DevicePath \"\"" 
Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.411957 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.412636 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="26816cde-a8b6-41a2-ab12-46f8aeebbb0d" containerName="kube-state-metrics" containerID="cri-o://2440c29ebc8715777e8f388a09da85afbe4b4ab4c17d69eb424c41362f7ff115" gracePeriod=30 Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.498296 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdhrq\" (UniqueName: \"kubernetes.io/projected/ff11b43a-9b7c-42c8-afac-6f66908975dc-kube-api-access-qdhrq\") pod \"ff11b43a-9b7c-42c8-afac-6f66908975dc\" (UID: \"ff11b43a-9b7c-42c8-afac-6f66908975dc\") " Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.498435 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff11b43a-9b7c-42c8-afac-6f66908975dc-operator-scripts\") pod \"ff11b43a-9b7c-42c8-afac-6f66908975dc\" (UID: \"ff11b43a-9b7c-42c8-afac-6f66908975dc\") " Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.514698 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff11b43a-9b7c-42c8-afac-6f66908975dc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff11b43a-9b7c-42c8-afac-6f66908975dc" (UID: "ff11b43a-9b7c-42c8-afac-6f66908975dc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.535996 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff11b43a-9b7c-42c8-afac-6f66908975dc-kube-api-access-qdhrq" (OuterVolumeSpecName: "kube-api-access-qdhrq") pod "ff11b43a-9b7c-42c8-afac-6f66908975dc" (UID: "ff11b43a-9b7c-42c8-afac-6f66908975dc"). InnerVolumeSpecName "kube-api-access-qdhrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.546022 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-81eb-account-create-update-g7j6p"] Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.605436 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff11b43a-9b7c-42c8-afac-6f66908975dc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.605472 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdhrq\" (UniqueName: \"kubernetes.io/projected/ff11b43a-9b7c-42c8-afac-6f66908975dc-kube-api-access-qdhrq\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.615878 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="b863561a-440f-4e92-a8f3-4786a24d0a5f" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.175:8776/healthcheck\": read tcp 10.217.0.2:45418->10.217.0.175:8776: read: connection reset by peer" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.652938 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-81eb-account-create-update-g7j6p"] Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.686175 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 
13:34:07.686923 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="f079df7d-6aa6-4eab-8a9a-3b4bc329f139" containerName="memcached" containerID="cri-o://ad8511fcc645d0be3764a7766fb81bdc9a4303a984191f45091f54cbd03e02b4" gracePeriod=30 Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.697435 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-39dc-account-create-update-vhqdr"] Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.706233 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-39dc-account-create-update-vhqdr"] Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.714109 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-39dc-account-create-update-wxznp"] Feb 19 13:34:07 crc kubenswrapper[4861]: E0219 13:34:07.714713 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff11b43a-9b7c-42c8-afac-6f66908975dc" containerName="mariadb-account-create-update" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.714798 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff11b43a-9b7c-42c8-afac-6f66908975dc" containerName="mariadb-account-create-update" Feb 19 13:34:07 crc kubenswrapper[4861]: E0219 13:34:07.714897 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff11b43a-9b7c-42c8-afac-6f66908975dc" containerName="mariadb-account-create-update" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.714996 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff11b43a-9b7c-42c8-afac-6f66908975dc" containerName="mariadb-account-create-update" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.715261 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff11b43a-9b7c-42c8-afac-6f66908975dc" containerName="mariadb-account-create-update" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.715346 4861 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="ff11b43a-9b7c-42c8-afac-6f66908975dc" containerName="mariadb-account-create-update" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.717386 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.717865 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-39dc-account-create-update-wxznp" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.720622 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.721881 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-wlwdw"] Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.727646 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-wlwdw"] Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.737234 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-6dfc-account-create-update-c6fh8"] Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.746651 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-xs25z"] Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.753221 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-6dfc-account-create-update-c6fh8"] Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.765282 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-xs25z"] Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.780866 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-39dc-account-create-update-wxznp"] Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.785679 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6c9494f487-fzm28"] Feb 19 
13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.786150 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-6c9494f487-fzm28" podUID="382166c8-355e-407b-9721-3eee34966095" containerName="keystone-api" containerID="cri-o://734b5999b62cb697fa384640a54b84e4030809dc81b17ff593f4ae3a0f78d52d" gracePeriod=30 Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.789545 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cd89-account-create-update-bnqdw" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.802866 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-39dc-account-create-update-wxznp"] Feb 19 13:34:07 crc kubenswrapper[4861]: E0219 13:34:07.805937 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-qfkb2 operator-scripts], unattached volumes=[], failed to process volumes=[kube-api-access-qfkb2 operator-scripts]: context canceled" pod="openstack/keystone-39dc-account-create-update-wxznp" podUID="33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.811614 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.819870 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-zhbfq"] Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.829297 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-zhbfq"] Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.850569 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-kfscq"] Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.883086 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-5b8e-account-create-update-5mwlh"] Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 
13:34:07.895728 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-5b8e-account-create-update-5mwlh"] Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.910606 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/77e9ae58-534e-4312-8b56-9ec6708995ac-config-data-default\") pod \"77e9ae58-534e-4312-8b56-9ec6708995ac\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") " Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.910718 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfhpm\" (UniqueName: \"kubernetes.io/projected/77e9ae58-534e-4312-8b56-9ec6708995ac-kube-api-access-wfhpm\") pod \"77e9ae58-534e-4312-8b56-9ec6708995ac\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") " Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.910776 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77e9ae58-534e-4312-8b56-9ec6708995ac-operator-scripts\") pod \"77e9ae58-534e-4312-8b56-9ec6708995ac\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") " Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.910857 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"77e9ae58-534e-4312-8b56-9ec6708995ac\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") " Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.910880 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/77e9ae58-534e-4312-8b56-9ec6708995ac-kolla-config\") pod \"77e9ae58-534e-4312-8b56-9ec6708995ac\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") " Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.910943 4861 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77e9ae58-534e-4312-8b56-9ec6708995ac-combined-ca-bundle\") pod \"77e9ae58-534e-4312-8b56-9ec6708995ac\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") " Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.910986 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf4bv\" (UniqueName: \"kubernetes.io/projected/279a265d-0cc8-45af-82ba-b8a485796fae-kube-api-access-bf4bv\") pod \"279a265d-0cc8-45af-82ba-b8a485796fae\" (UID: \"279a265d-0cc8-45af-82ba-b8a485796fae\") " Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.911023 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/77e9ae58-534e-4312-8b56-9ec6708995ac-galera-tls-certs\") pod \"77e9ae58-534e-4312-8b56-9ec6708995ac\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") " Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.911071 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/77e9ae58-534e-4312-8b56-9ec6708995ac-config-data-generated\") pod \"77e9ae58-534e-4312-8b56-9ec6708995ac\" (UID: \"77e9ae58-534e-4312-8b56-9ec6708995ac\") " Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.911097 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/279a265d-0cc8-45af-82ba-b8a485796fae-operator-scripts\") pod \"279a265d-0cc8-45af-82ba-b8a485796fae\" (UID: \"279a265d-0cc8-45af-82ba-b8a485796fae\") " Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.911378 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc-operator-scripts\") pod 
\"keystone-39dc-account-create-update-wxznp\" (UID: \"33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc\") " pod="openstack/keystone-39dc-account-create-update-wxznp" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.911499 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfkb2\" (UniqueName: \"kubernetes.io/projected/33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc-kube-api-access-qfkb2\") pod \"keystone-39dc-account-create-update-wxznp\" (UID: \"33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc\") " pod="openstack/keystone-39dc-account-create-update-wxznp" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.911654 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmhsh\" (UniqueName: \"kubernetes.io/projected/13af2f92-2b45-4bca-925f-91e0d4102a56-kube-api-access-bmhsh\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.911668 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13af2f92-2b45-4bca-925f-91e0d4102a56-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.912995 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77e9ae58-534e-4312-8b56-9ec6708995ac-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "77e9ae58-534e-4312-8b56-9ec6708995ac" (UID: "77e9ae58-534e-4312-8b56-9ec6708995ac"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.914315 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/279a265d-0cc8-45af-82ba-b8a485796fae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "279a265d-0cc8-45af-82ba-b8a485796fae" (UID: "279a265d-0cc8-45af-82ba-b8a485796fae"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.914368 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77e9ae58-534e-4312-8b56-9ec6708995ac-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "77e9ae58-534e-4312-8b56-9ec6708995ac" (UID: "77e9ae58-534e-4312-8b56-9ec6708995ac"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.914760 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77e9ae58-534e-4312-8b56-9ec6708995ac-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "77e9ae58-534e-4312-8b56-9ec6708995ac" (UID: "77e9ae58-534e-4312-8b56-9ec6708995ac"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.915632 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77e9ae58-534e-4312-8b56-9ec6708995ac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "77e9ae58-534e-4312-8b56-9ec6708995ac" (UID: "77e9ae58-534e-4312-8b56-9ec6708995ac"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.938530 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77e9ae58-534e-4312-8b56-9ec6708995ac-kube-api-access-wfhpm" (OuterVolumeSpecName: "kube-api-access-wfhpm") pod "77e9ae58-534e-4312-8b56-9ec6708995ac" (UID: "77e9ae58-534e-4312-8b56-9ec6708995ac"). InnerVolumeSpecName "kube-api-access-wfhpm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.959732 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/279a265d-0cc8-45af-82ba-b8a485796fae-kube-api-access-bf4bv" (OuterVolumeSpecName: "kube-api-access-bf4bv") pod "279a265d-0cc8-45af-82ba-b8a485796fae" (UID: "279a265d-0cc8-45af-82ba-b8a485796fae"). InnerVolumeSpecName "kube-api-access-bf4bv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.967588 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "mysql-db") pod "77e9ae58-534e-4312-8b56-9ec6708995ac" (UID: "77e9ae58-534e-4312-8b56-9ec6708995ac"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.985195 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77e9ae58-534e-4312-8b56-9ec6708995ac-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "77e9ae58-534e-4312-8b56-9ec6708995ac" (UID: "77e9ae58-534e-4312-8b56-9ec6708995ac"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.992639 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13af2f92-2b45-4bca-925f-91e0d4102a56" path="/var/lib/kubelet/pods/13af2f92-2b45-4bca-925f-91e0d4102a56/volumes" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.993606 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1eb2f892-b8f2-423d-b7fa-bed98cf7683a" path="/var/lib/kubelet/pods/1eb2f892-b8f2-423d-b7fa-bed98cf7683a/volumes" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.994119 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29995010-b4b2-4d35-95ed-8a7205e9228b" path="/var/lib/kubelet/pods/29995010-b4b2-4d35-95ed-8a7205e9228b/volumes" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.995237 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c645ced-1599-4f62-ab9b-0e109a7e02c3" path="/var/lib/kubelet/pods/2c645ced-1599-4f62-ab9b-0e109a7e02c3/volumes" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.996756 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57ba0af6-56e4-4dbd-850d-d019908adf08" path="/var/lib/kubelet/pods/57ba0af6-56e4-4dbd-850d-d019908adf08/volumes" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.997611 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="722c1574-0ce6-4d70-87ee-da04a01d79ad" path="/var/lib/kubelet/pods/722c1574-0ce6-4d70-87ee-da04a01d79ad/volumes" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.998160 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a4affa6-9b49-416a-9887-fdffab32916c" path="/var/lib/kubelet/pods/8a4affa6-9b49-416a-9887-fdffab32916c/volumes" Feb 19 13:34:07 crc kubenswrapper[4861]: I0219 13:34:07.999548 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99cc42c8-6836-4d0e-8d13-6b0a44b2583f" 
path="/var/lib/kubelet/pods/99cc42c8-6836-4d0e-8d13-6b0a44b2583f/volumes" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.001526 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532" path="/var/lib/kubelet/pods/9f0bdae2-ebf0-4f9d-a9af-d1b40f8d5532/volumes" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.002061 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afc3eb95-9d8a-449a-937d-7db3d3dd3d69" path="/var/lib/kubelet/pods/afc3eb95-9d8a-449a-937d-7db3d3dd3d69/volumes" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.002658 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd101633-19ce-4277-8ab9-b19319febd08" path="/var/lib/kubelet/pods/bd101633-19ce-4277-8ab9-b19319febd08/volumes" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.004643 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77e9ae58-534e-4312-8b56-9ec6708995ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77e9ae58-534e-4312-8b56-9ec6708995ac" (UID: "77e9ae58-534e-4312-8b56-9ec6708995ac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.006280 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3559fea-5929-4904-9be2-136f10ea1023" path="/var/lib/kubelet/pods/c3559fea-5929-4904-9be2-136f10ea1023/volumes" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.007205 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0" path="/var/lib/kubelet/pods/e8feb0ac-449f-4cd5-9cb9-f497fb09f8e0/volumes" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.007762 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f07235b0-c2aa-4225-93e7-a408f7317082" path="/var/lib/kubelet/pods/f07235b0-c2aa-4225-93e7-a408f7317082/volumes" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.010711 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5cf0370-783d-41f1-9b31-259d7725b892" path="/var/lib/kubelet/pods/f5cf0370-783d-41f1-9b31-259d7725b892/volumes" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.013194 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc-operator-scripts\") pod \"keystone-39dc-account-create-update-wxznp\" (UID: \"33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc\") " pod="openstack/keystone-39dc-account-create-update-wxznp" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.013292 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfkb2\" (UniqueName: \"kubernetes.io/projected/33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc-kube-api-access-qfkb2\") pod \"keystone-39dc-account-create-update-wxznp\" (UID: \"33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc\") " pod="openstack/keystone-39dc-account-create-update-wxznp" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.013392 4861 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-wfhpm\" (UniqueName: \"kubernetes.io/projected/77e9ae58-534e-4312-8b56-9ec6708995ac-kube-api-access-wfhpm\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.013405 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77e9ae58-534e-4312-8b56-9ec6708995ac-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.013439 4861 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.013450 4861 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/77e9ae58-534e-4312-8b56-9ec6708995ac-kolla-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.013459 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77e9ae58-534e-4312-8b56-9ec6708995ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.013468 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf4bv\" (UniqueName: \"kubernetes.io/projected/279a265d-0cc8-45af-82ba-b8a485796fae-kube-api-access-bf4bv\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.013476 4861 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/77e9ae58-534e-4312-8b56-9ec6708995ac-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.013485 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/77e9ae58-534e-4312-8b56-9ec6708995ac-config-data-generated\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.013497 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/279a265d-0cc8-45af-82ba-b8a485796fae-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.013506 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/77e9ae58-534e-4312-8b56-9ec6708995ac-config-data-default\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:08 crc kubenswrapper[4861]: E0219 13:34:08.014205 4861 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 19 13:34:08 crc kubenswrapper[4861]: E0219 13:34:08.014299 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc-operator-scripts podName:33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc nodeName:}" failed. No retries permitted until 2026-02-19 13:34:08.514275024 +0000 UTC m=+1463.175378252 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc-operator-scripts") pod "keystone-39dc-account-create-update-wxznp" (UID: "33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc") : configmap "openstack-scripts" not found Feb 19 13:34:08 crc kubenswrapper[4861]: E0219 13:34:08.019130 4861 projected.go:194] Error preparing data for projected volume kube-api-access-qfkb2 for pod openstack/keystone-39dc-account-create-update-wxznp: failed to fetch token: serviceaccounts "galera-openstack" not found Feb 19 13:34:08 crc kubenswrapper[4861]: E0219 13:34:08.019224 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc-kube-api-access-qfkb2 podName:33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc nodeName:}" failed. No retries permitted until 2026-02-19 13:34:08.519202187 +0000 UTC m=+1463.180305415 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qfkb2" (UniqueName: "kubernetes.io/projected/33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc-kube-api-access-qfkb2") pod "keystone-39dc-account-create-update-wxznp" (UID: "33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc") : failed to fetch token: serviceaccounts "galera-openstack" not found Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.026737 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="c881f3a1-3450-4ca9-8e8a-1c3d67e46770" containerName="galera" containerID="cri-o://98364a3619cc3e7bfa6596a941af0e9dc1f03e998f4534eef9b37681f5bf9324" gracePeriod=30 Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.052531 4861 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.108743 4861 generic.go:334] "Generic (PLEG): container finished" 
podID="bce14944-29de-44e7-9ad4-bb056cc6d656" containerID="87675e94528e8f6860c18ad3e351c725b983857e00216b5245b6f315c839cf6f" exitCode=0 Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.108991 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bce14944-29de-44e7-9ad4-bb056cc6d656","Type":"ContainerDied","Data":"87675e94528e8f6860c18ad3e351c725b983857e00216b5245b6f315c839cf6f"} Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.118192 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-kfscq"] Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.121409 4861 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.123707 4861 generic.go:334] "Generic (PLEG): container finished" podID="26816cde-a8b6-41a2-ab12-46f8aeebbb0d" containerID="2440c29ebc8715777e8f388a09da85afbe4b4ab4c17d69eb424c41362f7ff115" exitCode=2 Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.123762 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"26816cde-a8b6-41a2-ab12-46f8aeebbb0d","Type":"ContainerDied","Data":"2440c29ebc8715777e8f388a09da85afbe4b4ab4c17d69eb424c41362f7ff115"} Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.126966 4861 generic.go:334] "Generic (PLEG): container finished" podID="11e264a8-32df-4980-a6b8-eb1964d644b9" containerID="3cd0ac3239cff1282edbee81f3c0d497e15f737c6290e568ced9914379dbaec6" exitCode=0 Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.126984 4861 generic.go:334] "Generic (PLEG): container finished" podID="11e264a8-32df-4980-a6b8-eb1964d644b9" containerID="96e096da5cc28914cdb4a90e4edba37087790b9cc227d970c910751b7cd84e84" exitCode=2 Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.126993 4861 
generic.go:334] "Generic (PLEG): container finished" podID="11e264a8-32df-4980-a6b8-eb1964d644b9" containerID="2a3f1ab0fa078c08628336e0493b5bea70b63c6f50cb31d94e0bdcc4c18bb468" exitCode=0 Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.127034 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11e264a8-32df-4980-a6b8-eb1964d644b9","Type":"ContainerDied","Data":"3cd0ac3239cff1282edbee81f3c0d497e15f737c6290e568ced9914379dbaec6"} Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.127048 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11e264a8-32df-4980-a6b8-eb1964d644b9","Type":"ContainerDied","Data":"96e096da5cc28914cdb4a90e4edba37087790b9cc227d970c910751b7cd84e84"} Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.127058 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11e264a8-32df-4980-a6b8-eb1964d644b9","Type":"ContainerDied","Data":"2a3f1ab0fa078c08628336e0493b5bea70b63c6f50cb31d94e0bdcc4c18bb468"} Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.128402 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b129-account-create-update-dpq4s" event={"ID":"30a1e51d-a60d-4f7f-8300-9ef99e3da2a6","Type":"ContainerDied","Data":"a08803a521a4310cbea351afd5bf97a7a21ab8d212eacae3860716347a664ebc"} Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.128824 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b129-account-create-update-dpq4s" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.139238 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hxvnj" event={"ID":"ff11b43a-9b7c-42c8-afac-6f66908975dc","Type":"ContainerDied","Data":"5c82cea8953d58d2faeed28c754ecc5b9c2f27d71b990cffa6cb46748abd6d56"} Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.139373 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hxvnj" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.139384 4861 scope.go:117] "RemoveContainer" containerID="732b4cff15c44d4403f193f10fcf2bd5b6f0bb23a75a23786912b65d5492fa98" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.142101 4861 generic.go:334] "Generic (PLEG): container finished" podID="46d0ac5c-1d20-4b80-be1b-21ad2641b215" containerID="17fcee271c2a499b801142f1f8bd906a26d2c54f3ca073b5f9002a5871100c7a" exitCode=0 Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.142193 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6799fd8d6-p6tpl" event={"ID":"46d0ac5c-1d20-4b80-be1b-21ad2641b215","Type":"ContainerDied","Data":"17fcee271c2a499b801142f1f8bd906a26d2c54f3ca073b5f9002a5871100c7a"} Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.148158 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b86e-account-create-update-mp5sk" event={"ID":"3aca02ec-903d-4ddd-a7df-25d323ed6dc1","Type":"ContainerDied","Data":"b39eb0baa81572d0890054255fb8e31771e7a7a69f4994059a646e592bbedfc6"} Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.148317 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-b86e-account-create-update-mp5sk" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.170230 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cd89-account-create-update-bnqdw" event={"ID":"279a265d-0cc8-45af-82ba-b8a485796fae","Type":"ContainerDied","Data":"cd6db58d45893f044a771b75c49a1fff9b6d49879fb68fa4936a3c7d3c633863"} Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.170527 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cd89-account-create-update-bnqdw" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.199370 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b129-account-create-update-dpq4s"] Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.205652 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b129-account-create-update-dpq4s"] Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.213034 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-hxvnj"] Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.217287 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"77e9ae58-534e-4312-8b56-9ec6708995ac","Type":"ContainerDied","Data":"584a68e70a2756c2c1678bfc7cb30681d2302a97248f3904198716c0a85800fb"} Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.217400 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.221042 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="07533556-6a9f-4844-be7d-f9c9cf8c53a4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.210:8775/\": read tcp 10.217.0.2:34044->10.217.0.210:8775: read: connection reset by peer" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.221305 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="07533556-6a9f-4844-be7d-f9c9cf8c53a4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.210:8775/\": read tcp 10.217.0.2:34050->10.217.0.210:8775: read: connection reset by peer" Feb 19 13:34:08 crc kubenswrapper[4861]: E0219 13:34:08.223683 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 13:34:08 crc kubenswrapper[4861]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 19 13:34:08 crc kubenswrapper[4861]: Feb 19 13:34:08 crc kubenswrapper[4861]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 19 13:34:08 crc kubenswrapper[4861]: Feb 19 13:34:08 crc kubenswrapper[4861]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 19 13:34:08 crc kubenswrapper[4861]: Feb 19 13:34:08 crc kubenswrapper[4861]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 19 13:34:08 crc kubenswrapper[4861]: Feb 19 13:34:08 crc kubenswrapper[4861]: if [ -n "" ]; then Feb 19 13:34:08 crc kubenswrapper[4861]: GRANT_DATABASE="" Feb 19 13:34:08 crc kubenswrapper[4861]: else Feb 19 13:34:08 crc kubenswrapper[4861]: GRANT_DATABASE="*" Feb 19 13:34:08 crc kubenswrapper[4861]: fi Feb 19 13:34:08 crc 
kubenswrapper[4861]: Feb 19 13:34:08 crc kubenswrapper[4861]: # going for maximum compatibility here: Feb 19 13:34:08 crc kubenswrapper[4861]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 19 13:34:08 crc kubenswrapper[4861]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 19 13:34:08 crc kubenswrapper[4861]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 19 13:34:08 crc kubenswrapper[4861]: # support updates Feb 19 13:34:08 crc kubenswrapper[4861]: Feb 19 13:34:08 crc kubenswrapper[4861]: $MYSQL_CMD < logger="UnhandledError" Feb 19 13:34:08 crc kubenswrapper[4861]: E0219 13:34:08.224793 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-kfscq" podUID="8245aba7-ec9b-4b09-a3ba-691c370db0cf" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.249147 4861 scope.go:117] "RemoveContainer" containerID="362173b173d50713e668b40267cc089e3476fe407aa787eab8629d823b7bab2c" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.250616 4861 generic.go:334] "Generic (PLEG): container finished" podID="b863561a-440f-4e92-a8f3-4786a24d0a5f" containerID="d730aafec31ebf1d1d4d0bbbdd71e711bc2fd55423001647b8861204d4936465" exitCode=0 Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.250789 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b863561a-440f-4e92-a8f3-4786a24d0a5f","Type":"ContainerDied","Data":"d730aafec31ebf1d1d4d0bbbdd71e711bc2fd55423001647b8861204d4936465"} Feb 19 13:34:08 crc kubenswrapper[4861]: E0219 13:34:08.293532 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
571d42d778e6270495bc22f93e5351b8b9d48a25f59d4657b6f6c001aa52541f is running failed: container process not found" containerID="571d42d778e6270495bc22f93e5351b8b9d48a25f59d4657b6f6c001aa52541f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 19 13:34:08 crc kubenswrapper[4861]: E0219 13:34:08.295078 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 571d42d778e6270495bc22f93e5351b8b9d48a25f59d4657b6f6c001aa52541f is running failed: container process not found" containerID="571d42d778e6270495bc22f93e5351b8b9d48a25f59d4657b6f6c001aa52541f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.295575 4861 generic.go:334] "Generic (PLEG): container finished" podID="1da21583-02a3-4a99-a05c-976f017fb31c" containerID="cf188110f03d910f2a512942393ddfa01853575e4682c8b6c95037df3b2b616f" exitCode=0 Feb 19 13:34:08 crc kubenswrapper[4861]: E0219 13:34:08.297693 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 571d42d778e6270495bc22f93e5351b8b9d48a25f59d4657b6f6c001aa52541f is running failed: container process not found" containerID="571d42d778e6270495bc22f93e5351b8b9d48a25f59d4657b6f6c001aa52541f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 19 13:34:08 crc kubenswrapper[4861]: E0219 13:34:08.297772 4861 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 571d42d778e6270495bc22f93e5351b8b9d48a25f59d4657b6f6c001aa52541f is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245" containerName="nova-cell1-conductor-conductor" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.297869 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-39dc-account-create-update-wxznp" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.298091 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1da21583-02a3-4a99-a05c-976f017fb31c","Type":"ContainerDied","Data":"cf188110f03d910f2a512942393ddfa01853575e4682c8b6c95037df3b2b616f"} Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.348471 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-hxvnj"] Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.383807 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-b86e-account-create-update-mp5sk"] Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.389492 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-b86e-account-create-update-mp5sk"] Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.400094 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cd89-account-create-update-bnqdw"] Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.447373 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-cd89-account-create-update-bnqdw"] Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.549994 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc-operator-scripts\") pod \"keystone-39dc-account-create-update-wxznp\" (UID: \"33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc\") " pod="openstack/keystone-39dc-account-create-update-wxznp" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.550092 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfkb2\" (UniqueName: \"kubernetes.io/projected/33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc-kube-api-access-qfkb2\") pod 
\"keystone-39dc-account-create-update-wxznp\" (UID: \"33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc\") " pod="openstack/keystone-39dc-account-create-update-wxznp" Feb 19 13:34:08 crc kubenswrapper[4861]: E0219 13:34:08.550297 4861 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 19 13:34:08 crc kubenswrapper[4861]: E0219 13:34:08.550374 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc-operator-scripts podName:33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc nodeName:}" failed. No retries permitted until 2026-02-19 13:34:09.550347357 +0000 UTC m=+1464.211450585 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc-operator-scripts") pod "keystone-39dc-account-create-update-wxznp" (UID: "33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc") : configmap "openstack-scripts" not found Feb 19 13:34:08 crc kubenswrapper[4861]: E0219 13:34:08.553345 4861 projected.go:194] Error preparing data for projected volume kube-api-access-qfkb2 for pod openstack/keystone-39dc-account-create-update-wxznp: failed to fetch token: serviceaccounts "galera-openstack" not found Feb 19 13:34:08 crc kubenswrapper[4861]: E0219 13:34:08.553440 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc-kube-api-access-qfkb2 podName:33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc nodeName:}" failed. No retries permitted until 2026-02-19 13:34:09.553399559 +0000 UTC m=+1464.214502787 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qfkb2" (UniqueName: "kubernetes.io/projected/33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc-kube-api-access-qfkb2") pod "keystone-39dc-account-create-update-wxznp" (UID: "33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc") : failed to fetch token: serviceaccounts "galera-openstack" not found Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.808700 4861 scope.go:117] "RemoveContainer" containerID="c8f977cef3638a4d742a835ffdc3bb1d6e0f1071f24a76888be477940c78db5d" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.822698 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.845509 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.909470 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.956737 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"bce14944-29de-44e7-9ad4-bb056cc6d656\" (UID: \"bce14944-29de-44e7-9ad4-bb056cc6d656\") " Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.956858 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bce14944-29de-44e7-9ad4-bb056cc6d656-logs\") pod \"bce14944-29de-44e7-9ad4-bb056cc6d656\" (UID: \"bce14944-29de-44e7-9ad4-bb056cc6d656\") " Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.956910 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bce14944-29de-44e7-9ad4-bb056cc6d656-scripts\") pod \"bce14944-29de-44e7-9ad4-bb056cc6d656\" (UID: 
\"bce14944-29de-44e7-9ad4-bb056cc6d656\") " Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.956935 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bce14944-29de-44e7-9ad4-bb056cc6d656-config-data\") pod \"bce14944-29de-44e7-9ad4-bb056cc6d656\" (UID: \"bce14944-29de-44e7-9ad4-bb056cc6d656\") " Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.956969 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bce14944-29de-44e7-9ad4-bb056cc6d656-httpd-run\") pod \"bce14944-29de-44e7-9ad4-bb056cc6d656\" (UID: \"bce14944-29de-44e7-9ad4-bb056cc6d656\") " Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.957012 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bce14944-29de-44e7-9ad4-bb056cc6d656-combined-ca-bundle\") pod \"bce14944-29de-44e7-9ad4-bb056cc6d656\" (UID: \"bce14944-29de-44e7-9ad4-bb056cc6d656\") " Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.957033 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bce14944-29de-44e7-9ad4-bb056cc6d656-public-tls-certs\") pod \"bce14944-29de-44e7-9ad4-bb056cc6d656\" (UID: \"bce14944-29de-44e7-9ad4-bb056cc6d656\") " Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.957053 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gtqc\" (UniqueName: \"kubernetes.io/projected/bce14944-29de-44e7-9ad4-bb056cc6d656-kube-api-access-5gtqc\") pod \"bce14944-29de-44e7-9ad4-bb056cc6d656\" (UID: \"bce14944-29de-44e7-9ad4-bb056cc6d656\") " Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.961856 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/bce14944-29de-44e7-9ad4-bb056cc6d656-logs" (OuterVolumeSpecName: "logs") pod "bce14944-29de-44e7-9ad4-bb056cc6d656" (UID: "bce14944-29de-44e7-9ad4-bb056cc6d656"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.964599 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bce14944-29de-44e7-9ad4-bb056cc6d656-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bce14944-29de-44e7-9ad4-bb056cc6d656" (UID: "bce14944-29de-44e7-9ad4-bb056cc6d656"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.975711 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bce14944-29de-44e7-9ad4-bb056cc6d656-kube-api-access-5gtqc" (OuterVolumeSpecName: "kube-api-access-5gtqc") pod "bce14944-29de-44e7-9ad4-bb056cc6d656" (UID: "bce14944-29de-44e7-9ad4-bb056cc6d656"). InnerVolumeSpecName "kube-api-access-5gtqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.975830 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "bce14944-29de-44e7-9ad4-bb056cc6d656" (UID: "bce14944-29de-44e7-9ad4-bb056cc6d656"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 13:34:08 crc kubenswrapper[4861]: I0219 13:34:08.975908 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bce14944-29de-44e7-9ad4-bb056cc6d656-scripts" (OuterVolumeSpecName: "scripts") pod "bce14944-29de-44e7-9ad4-bb056cc6d656" (UID: "bce14944-29de-44e7-9ad4-bb056cc6d656"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: E0219 13:34:09.029628 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="51828dec5ed3bf469238caf516c39172b72277a44dd7431a9ff705c60186eff0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 13:34:09 crc kubenswrapper[4861]: E0219 13:34:09.031136 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="51828dec5ed3bf469238caf516c39172b72277a44dd7431a9ff705c60186eff0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 13:34:09 crc kubenswrapper[4861]: E0219 13:34:09.032614 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="51828dec5ed3bf469238caf516c39172b72277a44dd7431a9ff705c60186eff0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 13:34:09 crc kubenswrapper[4861]: E0219 13:34:09.032647 4861 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="9211a2d8-8917-464d-a790-efc469302556" containerName="nova-scheduler-scheduler" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.039643 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bce14944-29de-44e7-9ad4-bb056cc6d656-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bce14944-29de-44e7-9ad4-bb056cc6d656" (UID: "bce14944-29de-44e7-9ad4-bb056cc6d656"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.059102 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bce14944-29de-44e7-9ad4-bb056cc6d656-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.059135 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bce14944-29de-44e7-9ad4-bb056cc6d656-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.059143 4861 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bce14944-29de-44e7-9ad4-bb056cc6d656-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.059152 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bce14944-29de-44e7-9ad4-bb056cc6d656-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.059163 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gtqc\" (UniqueName: \"kubernetes.io/projected/bce14944-29de-44e7-9ad4-bb056cc6d656-kube-api-access-5gtqc\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.059192 4861 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.079315 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bce14944-29de-44e7-9ad4-bb056cc6d656-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bce14944-29de-44e7-9ad4-bb056cc6d656" (UID: 
"bce14944-29de-44e7-9ad4-bb056cc6d656"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.081365 4861 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.090225 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bce14944-29de-44e7-9ad4-bb056cc6d656-config-data" (OuterVolumeSpecName: "config-data") pod "bce14944-29de-44e7-9ad4-bb056cc6d656" (UID: "bce14944-29de-44e7-9ad4-bb056cc6d656"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.162219 4861 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.162261 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bce14944-29de-44e7-9ad4-bb056cc6d656-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.162276 4861 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bce14944-29de-44e7-9ad4-bb056cc6d656-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: E0219 13:34:09.292225 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 37b5efa091b40af32f79760c01361587fb0151c53e6b051790c4ce833de471f6 is running failed: container process not found" containerID="37b5efa091b40af32f79760c01361587fb0151c53e6b051790c4ce833de471f6" 
cmd=["/usr/local/bin/container-scripts/status_check.sh"] Feb 19 13:34:09 crc kubenswrapper[4861]: E0219 13:34:09.292870 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 37b5efa091b40af32f79760c01361587fb0151c53e6b051790c4ce833de471f6 is running failed: container process not found" containerID="37b5efa091b40af32f79760c01361587fb0151c53e6b051790c4ce833de471f6" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Feb 19 13:34:09 crc kubenswrapper[4861]: E0219 13:34:09.293197 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 37b5efa091b40af32f79760c01361587fb0151c53e6b051790c4ce833de471f6 is running failed: container process not found" containerID="37b5efa091b40af32f79760c01361587fb0151c53e6b051790c4ce833de471f6" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Feb 19 13:34:09 crc kubenswrapper[4861]: E0219 13:34:09.293252 4861 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 37b5efa091b40af32f79760c01361587fb0151c53e6b051790c4ce833de471f6 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="92ee6ab7-feb7-4dbd-881a-b8250652aef9" containerName="ovn-northd" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.314040 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6799fd8d6-p6tpl" event={"ID":"46d0ac5c-1d20-4b80-be1b-21ad2641b215","Type":"ContainerDied","Data":"12c6e860ab8a7e9b8288e87ffad8c5d530e948c3938e5ea126ae27dd896f46d7"} Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.314088 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12c6e860ab8a7e9b8288e87ffad8c5d530e948c3938e5ea126ae27dd896f46d7" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 
13:34:09.320822 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-545d79c874-vmrzt" event={"ID":"5d002f91-22f1-4ebd-8bc9-04e81e4a00ef","Type":"ContainerDied","Data":"1d619313cf3eb9116f4f061ab19e9d256b6f4c3706035768630ec087a8ab9bd7"} Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.320688 4861 generic.go:334] "Generic (PLEG): container finished" podID="5d002f91-22f1-4ebd-8bc9-04e81e4a00ef" containerID="1d619313cf3eb9116f4f061ab19e9d256b6f4c3706035768630ec087a8ab9bd7" exitCode=0 Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.323454 4861 generic.go:334] "Generic (PLEG): container finished" podID="707836a9-478e-4110-b5f5-9ee7e6b46e21" containerID="7955b99165b940ef4462fa1d533ec8daadebf57e1422d1ab3180cb3a66fc27cb" exitCode=0 Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.323569 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"707836a9-478e-4110-b5f5-9ee7e6b46e21","Type":"ContainerDied","Data":"7955b99165b940ef4462fa1d533ec8daadebf57e1422d1ab3180cb3a66fc27cb"} Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.325751 4861 generic.go:334] "Generic (PLEG): container finished" podID="074e719c-b46b-4f91-ae2d-e7f30368a8ae" containerID="851c26a783d4f2fb239877063c2d4732d081998faf87a9a0897c6af79d389cda" exitCode=0 Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.325819 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d6c467cc6-ng4wh" event={"ID":"074e719c-b46b-4f91-ae2d-e7f30368a8ae","Type":"ContainerDied","Data":"851c26a783d4f2fb239877063c2d4732d081998faf87a9a0897c6af79d389cda"} Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.325840 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d6c467cc6-ng4wh" event={"ID":"074e719c-b46b-4f91-ae2d-e7f30368a8ae","Type":"ContainerDied","Data":"90fc7f8f56b065d37a3d8b95fed12c8241808ee1000d5c6034774c368d36148a"} Feb 19 13:34:09 crc 
kubenswrapper[4861]: I0219 13:34:09.325871 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90fc7f8f56b065d37a3d8b95fed12c8241808ee1000d5c6034774c368d36148a" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.328174 4861 generic.go:334] "Generic (PLEG): container finished" podID="a4307ff9-78bb-48ec-8096-6e06ff22e19b" containerID="fc0693a9e1476f2b6d033af8f56ce772a2fac61eb55bf48764b0906664653ac4" exitCode=0 Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.328204 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6ffbcbb99f-phcxs" event={"ID":"a4307ff9-78bb-48ec-8096-6e06ff22e19b","Type":"ContainerDied","Data":"fc0693a9e1476f2b6d033af8f56ce772a2fac61eb55bf48764b0906664653ac4"} Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.336073 4861 generic.go:334] "Generic (PLEG): container finished" podID="07533556-6a9f-4844-be7d-f9c9cf8c53a4" containerID="d751c5da783be93739c9cde1c6a879f363a6be171c0437955267e7fbe56355ab" exitCode=0 Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.336156 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"07533556-6a9f-4844-be7d-f9c9cf8c53a4","Type":"ContainerDied","Data":"d751c5da783be93739c9cde1c6a879f363a6be171c0437955267e7fbe56355ab"} Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.336186 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"07533556-6a9f-4844-be7d-f9c9cf8c53a4","Type":"ContainerDied","Data":"69706adfca530a88e2e664cc0cda132903e4a72603bd642b34361ec4ade81785"} Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.336201 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69706adfca530a88e2e664cc0cda132903e4a72603bd642b34361ec4ade81785" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.363886 4861 generic.go:334] "Generic (PLEG): container finished" 
podID="f079df7d-6aa6-4eab-8a9a-3b4bc329f139" containerID="ad8511fcc645d0be3764a7766fb81bdc9a4303a984191f45091f54cbd03e02b4" exitCode=0 Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.363998 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f079df7d-6aa6-4eab-8a9a-3b4bc329f139","Type":"ContainerDied","Data":"ad8511fcc645d0be3764a7766fb81bdc9a4303a984191f45091f54cbd03e02b4"} Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.364084 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f079df7d-6aa6-4eab-8a9a-3b4bc329f139","Type":"ContainerDied","Data":"6bcb8cd776167ad0a33fa07775206ef7def4af61dd4bd9d345374e0532f7f2d3"} Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.364106 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bcb8cd776167ad0a33fa07775206ef7def4af61dd4bd9d345374e0532f7f2d3" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.369117 4861 generic.go:334] "Generic (PLEG): container finished" podID="c26363be-cfa7-49f5-82a2-709c67b44622" containerID="b32d612f73bbe86a4cf54136585330c0e0f86c939e661fda48a99f88a3862277" exitCode=0 Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.369223 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c26363be-cfa7-49f5-82a2-709c67b44622","Type":"ContainerDied","Data":"b32d612f73bbe86a4cf54136585330c0e0f86c939e661fda48a99f88a3862277"} Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.380140 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kfscq" event={"ID":"8245aba7-ec9b-4b09-a3ba-691c370db0cf","Type":"ContainerStarted","Data":"7b4a1d0b8abb14480babc36a762044e9e537e0dbc75360e7ce58df78618a7193"} Feb 19 13:34:09 crc kubenswrapper[4861]: E0219 13:34:09.382687 4861 kubelet_node_status.go:756] "Failed to set some node status fields" err="failed to validate nodeIP: route 
ip+net: no such network interface" node="crc" Feb 19 13:34:09 crc kubenswrapper[4861]: E0219 13:34:09.400088 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358 is running failed: container process not found" containerID="a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 13:34:09 crc kubenswrapper[4861]: E0219 13:34:09.403350 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2e0a3d8a266b39b1b104e8f5f77eb2f154403b01173a2de09f73d89c1d7c3a1a" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 13:34:09 crc kubenswrapper[4861]: E0219 13:34:09.403452 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358 is running failed: container process not found" containerID="a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.404660 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b863561a-440f-4e92-a8f3-4786a24d0a5f","Type":"ContainerDied","Data":"1764f4e8bd03c92dac341d90c365de2ad0abb7ef260faf7b324ad9a4fac67542"} Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.404809 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1764f4e8bd03c92dac341d90c365de2ad0abb7ef260faf7b324ad9a4fac67542" Feb 19 13:34:09 crc kubenswrapper[4861]: E0219 13:34:09.408503 4861 log.go:32] 
"ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2e0a3d8a266b39b1b104e8f5f77eb2f154403b01173a2de09f73d89c1d7c3a1a" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 13:34:09 crc kubenswrapper[4861]: E0219 13:34:09.408638 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358 is running failed: container process not found" containerID="a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 13:34:09 crc kubenswrapper[4861]: E0219 13:34:09.408661 4861 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-d4skq" podUID="c45bf5fa-a71c-4221-89a9-9c4965821c63" containerName="ovsdb-server" Feb 19 13:34:09 crc kubenswrapper[4861]: E0219 13:34:09.412707 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2e0a3d8a266b39b1b104e8f5f77eb2f154403b01173a2de09f73d89c1d7c3a1a" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 13:34:09 crc kubenswrapper[4861]: E0219 13:34:09.412782 4861 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-d4skq" podUID="c45bf5fa-a71c-4221-89a9-9c4965821c63" 
containerName="ovs-vswitchd" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.412899 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-39dc-account-create-update-wxznp" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.417459 4861 generic.go:334] "Generic (PLEG): container finished" podID="ea376614-9f7c-4d27-aa0b-a0dba5c99a6a" containerID="55e9e83bff1da6a4f3c4c60dbe202de73f4077183db64ccdd0ed5fa347035067" exitCode=0 Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.417610 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a","Type":"ContainerDied","Data":"55e9e83bff1da6a4f3c4c60dbe202de73f4077183db64ccdd0ed5fa347035067"} Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.418978 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6799fd8d6-p6tpl" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.420239 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_92ee6ab7-feb7-4dbd-881a-b8250652aef9/ovn-northd/0.log" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.420272 4861 generic.go:334] "Generic (PLEG): container finished" podID="92ee6ab7-feb7-4dbd-881a-b8250652aef9" containerID="37b5efa091b40af32f79760c01361587fb0151c53e6b051790c4ce833de471f6" exitCode=139 Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.420315 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"92ee6ab7-feb7-4dbd-881a-b8250652aef9","Type":"ContainerDied","Data":"37b5efa091b40af32f79760c01361587fb0151c53e6b051790c4ce833de471f6"} Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.434064 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"1da21583-02a3-4a99-a05c-976f017fb31c","Type":"ContainerDied","Data":"ee863a1e4655f1635d841df5adf694e972df1bca70fb37cec46e268037b5e258"} Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.434107 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee863a1e4655f1635d841df5adf694e972df1bca70fb37cec46e268037b5e258" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.461064 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.464872 4861 generic.go:334] "Generic (PLEG): container finished" podID="27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245" containerID="571d42d778e6270495bc22f93e5351b8b9d48a25f59d4657b6f6c001aa52541f" exitCode=0 Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.464932 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245","Type":"ContainerDied","Data":"571d42d778e6270495bc22f93e5351b8b9d48a25f59d4657b6f6c001aa52541f"} Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.464961 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245","Type":"ContainerDied","Data":"e0ccd9d20249ffa43e258c1eb24a12f2f9b4bbb300cd486e80e18f584e72646b"} Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.464972 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0ccd9d20249ffa43e258c1eb24a12f2f9b4bbb300cd486e80e18f584e72646b" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.472547 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.489209 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-d6c467cc6-ng4wh" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.489225 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.511751 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.512567 4861 generic.go:334] "Generic (PLEG): container finished" podID="9211a2d8-8917-464d-a790-efc469302556" containerID="51828dec5ed3bf469238caf516c39172b72277a44dd7431a9ff705c60186eff0" exitCode=0 Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.512618 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9211a2d8-8917-464d-a790-efc469302556","Type":"ContainerDied","Data":"51828dec5ed3bf469238caf516c39172b72277a44dd7431a9ff705c60186eff0"} Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.513142 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.520501 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.521153 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bce14944-29de-44e7-9ad4-bb056cc6d656","Type":"ContainerDied","Data":"7d214cfacb8e7e0527d5e12ab0c121c203d401295edb904324813230f0ea46b9"} Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.521195 4861 scope.go:117] "RemoveContainer" containerID="87675e94528e8f6860c18ad3e351c725b983857e00216b5245b6f315c839cf6f" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.521338 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.542786 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"26816cde-a8b6-41a2-ab12-46f8aeebbb0d","Type":"ContainerDied","Data":"7cd39315016fcfc501940883ccb5c45657e85891f3e447edd02bda2d924716d5"} Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.542912 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.569942 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46d0ac5c-1d20-4b80-be1b-21ad2641b215-internal-tls-certs\") pod \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\" (UID: \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.570327 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/26816cde-a8b6-41a2-ab12-46f8aeebbb0d-kube-state-metrics-tls-config\") pod \"26816cde-a8b6-41a2-ab12-46f8aeebbb0d\" (UID: \"26816cde-a8b6-41a2-ab12-46f8aeebbb0d\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.570359 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44q94\" (UniqueName: \"kubernetes.io/projected/26816cde-a8b6-41a2-ab12-46f8aeebbb0d-kube-api-access-44q94\") pod \"26816cde-a8b6-41a2-ab12-46f8aeebbb0d\" (UID: \"26816cde-a8b6-41a2-ab12-46f8aeebbb0d\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.570389 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-scripts\") pod \"b863561a-440f-4e92-a8f3-4786a24d0a5f\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") 
" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.570434 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26816cde-a8b6-41a2-ab12-46f8aeebbb0d-combined-ca-bundle\") pod \"26816cde-a8b6-41a2-ab12-46f8aeebbb0d\" (UID: \"26816cde-a8b6-41a2-ab12-46f8aeebbb0d\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.570466 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-public-tls-certs\") pod \"b863561a-440f-4e92-a8f3-4786a24d0a5f\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.570497 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46d0ac5c-1d20-4b80-be1b-21ad2641b215-combined-ca-bundle\") pod \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\" (UID: \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.570522 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46d0ac5c-1d20-4b80-be1b-21ad2641b215-public-tls-certs\") pod \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\" (UID: \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.570581 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-config-data\") pod \"b863561a-440f-4e92-a8f3-4786a24d0a5f\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.570645 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/26816cde-a8b6-41a2-ab12-46f8aeebbb0d-kube-state-metrics-tls-certs\") pod \"26816cde-a8b6-41a2-ab12-46f8aeebbb0d\" (UID: \"26816cde-a8b6-41a2-ab12-46f8aeebbb0d\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.570695 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46d0ac5c-1d20-4b80-be1b-21ad2641b215-logs\") pod \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\" (UID: \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.570751 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46d0ac5c-1d20-4b80-be1b-21ad2641b215-config-data\") pod \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\" (UID: \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.570786 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7flwz\" (UniqueName: \"kubernetes.io/projected/46d0ac5c-1d20-4b80-be1b-21ad2641b215-kube-api-access-7flwz\") pod \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\" (UID: \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.570809 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-combined-ca-bundle\") pod \"b863561a-440f-4e92-a8f3-4786a24d0a5f\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.570854 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-internal-tls-certs\") pod \"b863561a-440f-4e92-a8f3-4786a24d0a5f\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 
13:34:09.570877 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46d0ac5c-1d20-4b80-be1b-21ad2641b215-scripts\") pod \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\" (UID: \"46d0ac5c-1d20-4b80-be1b-21ad2641b215\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.570924 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b863561a-440f-4e92-a8f3-4786a24d0a5f-etc-machine-id\") pod \"b863561a-440f-4e92-a8f3-4786a24d0a5f\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.571002 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b863561a-440f-4e92-a8f3-4786a24d0a5f-logs\") pod \"b863561a-440f-4e92-a8f3-4786a24d0a5f\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.571054 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbsmr\" (UniqueName: \"kubernetes.io/projected/b863561a-440f-4e92-a8f3-4786a24d0a5f-kube-api-access-vbsmr\") pod \"b863561a-440f-4e92-a8f3-4786a24d0a5f\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.571078 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-config-data-custom\") pod \"b863561a-440f-4e92-a8f3-4786a24d0a5f\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.571488 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc-operator-scripts\") pod 
\"keystone-39dc-account-create-update-wxznp\" (UID: \"33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc\") " pod="openstack/keystone-39dc-account-create-update-wxznp" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.571581 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfkb2\" (UniqueName: \"kubernetes.io/projected/33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc-kube-api-access-qfkb2\") pod \"keystone-39dc-account-create-update-wxznp\" (UID: \"33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc\") " pod="openstack/keystone-39dc-account-create-update-wxznp" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.575770 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46d0ac5c-1d20-4b80-be1b-21ad2641b215-logs" (OuterVolumeSpecName: "logs") pod "46d0ac5c-1d20-4b80-be1b-21ad2641b215" (UID: "46d0ac5c-1d20-4b80-be1b-21ad2641b215"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.575847 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b863561a-440f-4e92-a8f3-4786a24d0a5f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b863561a-440f-4e92-a8f3-4786a24d0a5f" (UID: "b863561a-440f-4e92-a8f3-4786a24d0a5f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.577149 4861 scope.go:117] "RemoveContainer" containerID="a630b33298ce3a0c3f99e814f4e69c3048e04eafde31a67dbaedf03ba600019a" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.577347 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.577373 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b863561a-440f-4e92-a8f3-4786a24d0a5f-logs" (OuterVolumeSpecName: "logs") pod "b863561a-440f-4e92-a8f3-4786a24d0a5f" (UID: "b863561a-440f-4e92-a8f3-4786a24d0a5f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: E0219 13:34:09.577797 4861 projected.go:194] Error preparing data for projected volume kube-api-access-qfkb2 for pod openstack/keystone-39dc-account-create-update-wxznp: failed to fetch token: serviceaccounts "galera-openstack" not found Feb 19 13:34:09 crc kubenswrapper[4861]: E0219 13:34:09.577864 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc-kube-api-access-qfkb2 podName:33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc nodeName:}" failed. No retries permitted until 2026-02-19 13:34:11.577843567 +0000 UTC m=+1466.238946795 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qfkb2" (UniqueName: "kubernetes.io/projected/33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc-kube-api-access-qfkb2") pod "keystone-39dc-account-create-update-wxznp" (UID: "33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc") : failed to fetch token: serviceaccounts "galera-openstack" not found Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.579543 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 13:34:09 crc kubenswrapper[4861]: E0219 13:34:09.579818 4861 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 19 13:34:09 crc kubenswrapper[4861]: E0219 13:34:09.579889 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc-operator-scripts podName:33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc nodeName:}" failed. No retries permitted until 2026-02-19 13:34:11.579866111 +0000 UTC m=+1466.240969349 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc-operator-scripts") pod "keystone-39dc-account-create-update-wxznp" (UID: "33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc") : configmap "openstack-scripts" not found Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.586607 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46d0ac5c-1d20-4b80-be1b-21ad2641b215-scripts" (OuterVolumeSpecName: "scripts") pod "46d0ac5c-1d20-4b80-be1b-21ad2641b215" (UID: "46d0ac5c-1d20-4b80-be1b-21ad2641b215"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.586630 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26816cde-a8b6-41a2-ab12-46f8aeebbb0d-kube-api-access-44q94" (OuterVolumeSpecName: "kube-api-access-44q94") pod "26816cde-a8b6-41a2-ab12-46f8aeebbb0d" (UID: "26816cde-a8b6-41a2-ab12-46f8aeebbb0d"). InnerVolumeSpecName "kube-api-access-44q94". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.593373 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46d0ac5c-1d20-4b80-be1b-21ad2641b215-kube-api-access-7flwz" (OuterVolumeSpecName: "kube-api-access-7flwz") pod "46d0ac5c-1d20-4b80-be1b-21ad2641b215" (UID: "46d0ac5c-1d20-4b80-be1b-21ad2641b215"). InnerVolumeSpecName "kube-api-access-7flwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.596560 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b863561a-440f-4e92-a8f3-4786a24d0a5f" (UID: "b863561a-440f-4e92-a8f3-4786a24d0a5f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.598767 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_92ee6ab7-feb7-4dbd-881a-b8250652aef9/ovn-northd/0.log" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.598849 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.602830 4861 scope.go:117] "RemoveContainer" containerID="2440c29ebc8715777e8f388a09da85afbe4b4ab4c17d69eb424c41362f7ff115" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.613625 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-scripts" (OuterVolumeSpecName: "scripts") pod "b863561a-440f-4e92-a8f3-4786a24d0a5f" (UID: "b863561a-440f-4e92-a8f3-4786a24d0a5f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.617282 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b863561a-440f-4e92-a8f3-4786a24d0a5f-kube-api-access-vbsmr" (OuterVolumeSpecName: "kube-api-access-vbsmr") pod "b863561a-440f-4e92-a8f3-4786a24d0a5f" (UID: "b863561a-440f-4e92-a8f3-4786a24d0a5f"). InnerVolumeSpecName "kube-api-access-vbsmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.662186 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6ffbcbb99f-phcxs" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.672311 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/074e719c-b46b-4f91-ae2d-e7f30368a8ae-combined-ca-bundle\") pod \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\" (UID: \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.672367 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/074e719c-b46b-4f91-ae2d-e7f30368a8ae-internal-tls-certs\") pod \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\" (UID: \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.672457 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245-config-data\") pod \"27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245\" (UID: \"27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.672479 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-combined-ca-bundle\") pod \"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a\" (UID: \"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.673768 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nj6w\" (UniqueName: \"kubernetes.io/projected/07533556-6a9f-4844-be7d-f9c9cf8c53a4-kube-api-access-6nj6w\") pod \"07533556-6a9f-4844-be7d-f9c9cf8c53a4\" (UID: \"07533556-6a9f-4844-be7d-f9c9cf8c53a4\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.673806 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f079df7d-6aa6-4eab-8a9a-3b4bc329f139-kolla-config\") pod \"f079df7d-6aa6-4eab-8a9a-3b4bc329f139\" (UID: \"f079df7d-6aa6-4eab-8a9a-3b4bc329f139\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.673827 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-config-data\") pod \"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a\" (UID: \"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.673847 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-998lv\" (UniqueName: \"kubernetes.io/projected/27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245-kube-api-access-998lv\") pod \"27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245\" (UID: \"27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.673865 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da21583-02a3-4a99-a05c-976f017fb31c-internal-tls-certs\") pod \"1da21583-02a3-4a99-a05c-976f017fb31c\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") " Feb 19 13:34:09 crc 
kubenswrapper[4861]: I0219 13:34:09.673895 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs78t\" (UniqueName: \"kubernetes.io/projected/c26363be-cfa7-49f5-82a2-709c67b44622-kube-api-access-hs78t\") pod \"c26363be-cfa7-49f5-82a2-709c67b44622\" (UID: \"c26363be-cfa7-49f5-82a2-709c67b44622\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.673929 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/074e719c-b46b-4f91-ae2d-e7f30368a8ae-config-data-custom\") pod \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\" (UID: \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.673946 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f079df7d-6aa6-4eab-8a9a-3b4bc329f139-memcached-tls-certs\") pod \"f079df7d-6aa6-4eab-8a9a-3b4bc329f139\" (UID: \"f079df7d-6aa6-4eab-8a9a-3b4bc329f139\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.673987 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvjtx\" (UniqueName: \"kubernetes.io/projected/1da21583-02a3-4a99-a05c-976f017fb31c-kube-api-access-hvjtx\") pod \"1da21583-02a3-4a99-a05c-976f017fb31c\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.674014 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f079df7d-6aa6-4eab-8a9a-3b4bc329f139-config-data\") pod \"f079df7d-6aa6-4eab-8a9a-3b4bc329f139\" (UID: \"f079df7d-6aa6-4eab-8a9a-3b4bc329f139\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.674050 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/07533556-6a9f-4844-be7d-f9c9cf8c53a4-logs\") pod \"07533556-6a9f-4844-be7d-f9c9cf8c53a4\" (UID: \"07533556-6a9f-4844-be7d-f9c9cf8c53a4\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.674073 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07533556-6a9f-4844-be7d-f9c9cf8c53a4-combined-ca-bundle\") pod \"07533556-6a9f-4844-be7d-f9c9cf8c53a4\" (UID: \"07533556-6a9f-4844-be7d-f9c9cf8c53a4\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.674100 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mdlw\" (UniqueName: \"kubernetes.io/projected/f079df7d-6aa6-4eab-8a9a-3b4bc329f139-kube-api-access-9mdlw\") pod \"f079df7d-6aa6-4eab-8a9a-3b4bc329f139\" (UID: \"f079df7d-6aa6-4eab-8a9a-3b4bc329f139\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.674119 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26363be-cfa7-49f5-82a2-709c67b44622-combined-ca-bundle\") pod \"c26363be-cfa7-49f5-82a2-709c67b44622\" (UID: \"c26363be-cfa7-49f5-82a2-709c67b44622\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.674149 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpqp7\" (UniqueName: \"kubernetes.io/projected/074e719c-b46b-4f91-ae2d-e7f30368a8ae-kube-api-access-cpqp7\") pod \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\" (UID: \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.674174 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f079df7d-6aa6-4eab-8a9a-3b4bc329f139-combined-ca-bundle\") pod \"f079df7d-6aa6-4eab-8a9a-3b4bc329f139\" (UID: \"f079df7d-6aa6-4eab-8a9a-3b4bc329f139\") " Feb 19 13:34:09 crc 
kubenswrapper[4861]: I0219 13:34:09.674192 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1da21583-02a3-4a99-a05c-976f017fb31c-httpd-run\") pod \"1da21583-02a3-4a99-a05c-976f017fb31c\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.674210 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-scripts\") pod \"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a\" (UID: \"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.674239 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245-combined-ca-bundle\") pod \"27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245\" (UID: \"27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.674283 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da21583-02a3-4a99-a05c-976f017fb31c-combined-ca-bundle\") pod \"1da21583-02a3-4a99-a05c-976f017fb31c\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.674309 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-config-data-custom\") pod \"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a\" (UID: \"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.674335 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c26363be-cfa7-49f5-82a2-709c67b44622-config-data\") pod 
\"c26363be-cfa7-49f5-82a2-709c67b44622\" (UID: \"c26363be-cfa7-49f5-82a2-709c67b44622\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.674367 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/074e719c-b46b-4f91-ae2d-e7f30368a8ae-public-tls-certs\") pod \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\" (UID: \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.674386 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"1da21583-02a3-4a99-a05c-976f017fb31c\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.674404 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/074e719c-b46b-4f91-ae2d-e7f30368a8ae-config-data\") pod \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\" (UID: \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.674435 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07533556-6a9f-4844-be7d-f9c9cf8c53a4-config-data\") pod \"07533556-6a9f-4844-be7d-f9c9cf8c53a4\" (UID: \"07533556-6a9f-4844-be7d-f9c9cf8c53a4\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.674460 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1da21583-02a3-4a99-a05c-976f017fb31c-logs\") pod \"1da21583-02a3-4a99-a05c-976f017fb31c\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.674487 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1da21583-02a3-4a99-a05c-976f017fb31c-scripts\") pod \"1da21583-02a3-4a99-a05c-976f017fb31c\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.674506 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/07533556-6a9f-4844-be7d-f9c9cf8c53a4-nova-metadata-tls-certs\") pod \"07533556-6a9f-4844-be7d-f9c9cf8c53a4\" (UID: \"07533556-6a9f-4844-be7d-f9c9cf8c53a4\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.674522 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1da21583-02a3-4a99-a05c-976f017fb31c-config-data\") pod \"1da21583-02a3-4a99-a05c-976f017fb31c\" (UID: \"1da21583-02a3-4a99-a05c-976f017fb31c\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.674538 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/074e719c-b46b-4f91-ae2d-e7f30368a8ae-logs\") pod \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\" (UID: \"074e719c-b46b-4f91-ae2d-e7f30368a8ae\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.674555 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-etc-machine-id\") pod \"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a\" (UID: \"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.674570 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jxlz\" (UniqueName: \"kubernetes.io/projected/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-kube-api-access-6jxlz\") pod \"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a\" (UID: \"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.677790 
4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46d0ac5c-1d20-4b80-be1b-21ad2641b215-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.678070 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7flwz\" (UniqueName: \"kubernetes.io/projected/46d0ac5c-1d20-4b80-be1b-21ad2641b215-kube-api-access-7flwz\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.678190 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46d0ac5c-1d20-4b80-be1b-21ad2641b215-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.678265 4861 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b863561a-440f-4e92-a8f3-4786a24d0a5f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.678348 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b863561a-440f-4e92-a8f3-4786a24d0a5f-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.678807 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbsmr\" (UniqueName: \"kubernetes.io/projected/b863561a-440f-4e92-a8f3-4786a24d0a5f-kube-api-access-vbsmr\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.678899 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.678967 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44q94\" (UniqueName: 
\"kubernetes.io/projected/26816cde-a8b6-41a2-ab12-46f8aeebbb0d-kube-api-access-44q94\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.679037 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.679743 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f079df7d-6aa6-4eab-8a9a-3b4bc329f139-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "f079df7d-6aa6-4eab-8a9a-3b4bc329f139" (UID: "f079df7d-6aa6-4eab-8a9a-3b4bc329f139"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.695921 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f079df7d-6aa6-4eab-8a9a-3b4bc329f139-config-data" (OuterVolumeSpecName: "config-data") pod "f079df7d-6aa6-4eab-8a9a-3b4bc329f139" (UID: "f079df7d-6aa6-4eab-8a9a-3b4bc329f139"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.697605 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07533556-6a9f-4844-be7d-f9c9cf8c53a4-logs" (OuterVolumeSpecName: "logs") pod "07533556-6a9f-4844-be7d-f9c9cf8c53a4" (UID: "07533556-6a9f-4844-be7d-f9c9cf8c53a4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.700515 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/074e719c-b46b-4f91-ae2d-e7f30368a8ae-logs" (OuterVolumeSpecName: "logs") pod "074e719c-b46b-4f91-ae2d-e7f30368a8ae" (UID: "074e719c-b46b-4f91-ae2d-e7f30368a8ae"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.702817 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.708150 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ea376614-9f7c-4d27-aa0b-a0dba5c99a6a" (UID: "ea376614-9f7c-4d27-aa0b-a0dba5c99a6a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.709562 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1da21583-02a3-4a99-a05c-976f017fb31c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1da21583-02a3-4a99-a05c-976f017fb31c" (UID: "1da21583-02a3-4a99-a05c-976f017fb31c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.710402 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1da21583-02a3-4a99-a05c-976f017fb31c-logs" (OuterVolumeSpecName: "logs") pod "1da21583-02a3-4a99-a05c-976f017fb31c" (UID: "1da21583-02a3-4a99-a05c-976f017fb31c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.712981 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07533556-6a9f-4844-be7d-f9c9cf8c53a4-kube-api-access-6nj6w" (OuterVolumeSpecName: "kube-api-access-6nj6w") pod "07533556-6a9f-4844-be7d-f9c9cf8c53a4" (UID: "07533556-6a9f-4844-be7d-f9c9cf8c53a4"). InnerVolumeSpecName "kube-api-access-6nj6w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.713053 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-kube-api-access-6jxlz" (OuterVolumeSpecName: "kube-api-access-6jxlz") pod "ea376614-9f7c-4d27-aa0b-a0dba5c99a6a" (UID: "ea376614-9f7c-4d27-aa0b-a0dba5c99a6a"). InnerVolumeSpecName "kube-api-access-6jxlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.740896 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.742934 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-scripts" (OuterVolumeSpecName: "scripts") pod "ea376614-9f7c-4d27-aa0b-a0dba5c99a6a" (UID: "ea376614-9f7c-4d27-aa0b-a0dba5c99a6a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.742911 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46d0ac5c-1d20-4b80-be1b-21ad2641b215-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46d0ac5c-1d20-4b80-be1b-21ad2641b215" (UID: "46d0ac5c-1d20-4b80-be1b-21ad2641b215"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.743115 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/074e719c-b46b-4f91-ae2d-e7f30368a8ae-kube-api-access-cpqp7" (OuterVolumeSpecName: "kube-api-access-cpqp7") pod "074e719c-b46b-4f91-ae2d-e7f30368a8ae" (UID: "074e719c-b46b-4f91-ae2d-e7f30368a8ae"). InnerVolumeSpecName "kube-api-access-cpqp7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.743049 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26816cde-a8b6-41a2-ab12-46f8aeebbb0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26816cde-a8b6-41a2-ab12-46f8aeebbb0d" (UID: "26816cde-a8b6-41a2-ab12-46f8aeebbb0d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.751664 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f079df7d-6aa6-4eab-8a9a-3b4bc329f139-kube-api-access-9mdlw" (OuterVolumeSpecName: "kube-api-access-9mdlw") pod "f079df7d-6aa6-4eab-8a9a-3b4bc329f139" (UID: "f079df7d-6aa6-4eab-8a9a-3b4bc329f139"). InnerVolumeSpecName "kube-api-access-9mdlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.751827 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c26363be-cfa7-49f5-82a2-709c67b44622-kube-api-access-hs78t" (OuterVolumeSpecName: "kube-api-access-hs78t") pod "c26363be-cfa7-49f5-82a2-709c67b44622" (UID: "c26363be-cfa7-49f5-82a2-709c67b44622"). InnerVolumeSpecName "kube-api-access-hs78t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.751935 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/074e719c-b46b-4f91-ae2d-e7f30368a8ae-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "074e719c-b46b-4f91-ae2d-e7f30368a8ae" (UID: "074e719c-b46b-4f91-ae2d-e7f30368a8ae"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.752093 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26816cde-a8b6-41a2-ab12-46f8aeebbb0d-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "26816cde-a8b6-41a2-ab12-46f8aeebbb0d" (UID: "26816cde-a8b6-41a2-ab12-46f8aeebbb0d"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.752332 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1da21583-02a3-4a99-a05c-976f017fb31c-kube-api-access-hvjtx" (OuterVolumeSpecName: "kube-api-access-hvjtx") pod "1da21583-02a3-4a99-a05c-976f017fb31c" (UID: "1da21583-02a3-4a99-a05c-976f017fb31c"). InnerVolumeSpecName "kube-api-access-hvjtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.752436 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ea376614-9f7c-4d27-aa0b-a0dba5c99a6a" (UID: "ea376614-9f7c-4d27-aa0b-a0dba5c99a6a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.752524 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245-kube-api-access-998lv" (OuterVolumeSpecName: "kube-api-access-998lv") pod "27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245" (UID: "27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245"). InnerVolumeSpecName "kube-api-access-998lv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.757653 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "1da21583-02a3-4a99-a05c-976f017fb31c" (UID: "1da21583-02a3-4a99-a05c-976f017fb31c"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.762490 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1da21583-02a3-4a99-a05c-976f017fb31c-scripts" (OuterVolumeSpecName: "scripts") pod "1da21583-02a3-4a99-a05c-976f017fb31c" (UID: "1da21583-02a3-4a99-a05c-976f017fb31c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.785954 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4307ff9-78bb-48ec-8096-6e06ff22e19b-combined-ca-bundle\") pod \"a4307ff9-78bb-48ec-8096-6e06ff22e19b\" (UID: \"a4307ff9-78bb-48ec-8096-6e06ff22e19b\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.786028 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4307ff9-78bb-48ec-8096-6e06ff22e19b-logs\") pod \"a4307ff9-78bb-48ec-8096-6e06ff22e19b\" (UID: \"a4307ff9-78bb-48ec-8096-6e06ff22e19b\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.786389 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/92ee6ab7-feb7-4dbd-881a-b8250652aef9-metrics-certs-tls-certs\") pod \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\" (UID: \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 
13:34:09.786879 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4307ff9-78bb-48ec-8096-6e06ff22e19b-config-data\") pod \"a4307ff9-78bb-48ec-8096-6e06ff22e19b\" (UID: \"a4307ff9-78bb-48ec-8096-6e06ff22e19b\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.786951 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4307ff9-78bb-48ec-8096-6e06ff22e19b-config-data-custom\") pod \"a4307ff9-78bb-48ec-8096-6e06ff22e19b\" (UID: \"a4307ff9-78bb-48ec-8096-6e06ff22e19b\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.787076 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpbhx\" (UniqueName: \"kubernetes.io/projected/a4307ff9-78bb-48ec-8096-6e06ff22e19b-kube-api-access-bpbhx\") pod \"a4307ff9-78bb-48ec-8096-6e06ff22e19b\" (UID: \"a4307ff9-78bb-48ec-8096-6e06ff22e19b\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.787126 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92ee6ab7-feb7-4dbd-881a-b8250652aef9-config\") pod \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\" (UID: \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.787188 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwnxk\" (UniqueName: \"kubernetes.io/projected/92ee6ab7-feb7-4dbd-881a-b8250652aef9-kube-api-access-qwnxk\") pod \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\" (UID: \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.787452 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/92ee6ab7-feb7-4dbd-881a-b8250652aef9-ovn-northd-tls-certs\") 
pod \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\" (UID: \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.787530 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/92ee6ab7-feb7-4dbd-881a-b8250652aef9-ovn-rundir\") pod \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\" (UID: \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.787633 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92ee6ab7-feb7-4dbd-881a-b8250652aef9-combined-ca-bundle\") pod \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\" (UID: \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.787710 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92ee6ab7-feb7-4dbd-881a-b8250652aef9-scripts\") pod \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\" (UID: \"92ee6ab7-feb7-4dbd-881a-b8250652aef9\") " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.787975 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4307ff9-78bb-48ec-8096-6e06ff22e19b-logs" (OuterVolumeSpecName: "logs") pod "a4307ff9-78bb-48ec-8096-6e06ff22e19b" (UID: "a4307ff9-78bb-48ec-8096-6e06ff22e19b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.788394 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92ee6ab7-feb7-4dbd-881a-b8250652aef9-config" (OuterVolumeSpecName: "config") pod "92ee6ab7-feb7-4dbd-881a-b8250652aef9" (UID: "92ee6ab7-feb7-4dbd-881a-b8250652aef9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.790906 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92ee6ab7-feb7-4dbd-881a-b8250652aef9-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "92ee6ab7-feb7-4dbd-881a-b8250652aef9" (UID: "92ee6ab7-feb7-4dbd-881a-b8250652aef9"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.795348 4861 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/26816cde-a8b6-41a2-ab12-46f8aeebbb0d-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.795389 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26816cde-a8b6-41a2-ab12-46f8aeebbb0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.795403 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.795413 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46d0ac5c-1d20-4b80-be1b-21ad2641b215-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.795444 4861 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/92ee6ab7-feb7-4dbd-881a-b8250652aef9-ovn-rundir\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.795493 4861 reconciler_common.go:286] "operationExecutor.UnmountDevice started for 
volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.795507 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1da21583-02a3-4a99-a05c-976f017fb31c-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.795522 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1da21583-02a3-4a99-a05c-976f017fb31c-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.795532 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/074e719c-b46b-4f91-ae2d-e7f30368a8ae-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.795544 4861 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.795556 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jxlz\" (UniqueName: \"kubernetes.io/projected/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-kube-api-access-6jxlz\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.795579 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nj6w\" (UniqueName: \"kubernetes.io/projected/07533556-6a9f-4844-be7d-f9c9cf8c53a4-kube-api-access-6nj6w\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.795593 4861 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f079df7d-6aa6-4eab-8a9a-3b4bc329f139-kolla-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: 
I0219 13:34:09.795604 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-998lv\" (UniqueName: \"kubernetes.io/projected/27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245-kube-api-access-998lv\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.795615 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hs78t\" (UniqueName: \"kubernetes.io/projected/c26363be-cfa7-49f5-82a2-709c67b44622-kube-api-access-hs78t\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.795629 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4307ff9-78bb-48ec-8096-6e06ff22e19b-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.795639 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/074e719c-b46b-4f91-ae2d-e7f30368a8ae-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.795650 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvjtx\" (UniqueName: \"kubernetes.io/projected/1da21583-02a3-4a99-a05c-976f017fb31c-kube-api-access-hvjtx\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.795664 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f079df7d-6aa6-4eab-8a9a-3b4bc329f139-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.795674 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07533556-6a9f-4844-be7d-f9c9cf8c53a4-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.795684 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mdlw\" (UniqueName: 
\"kubernetes.io/projected/f079df7d-6aa6-4eab-8a9a-3b4bc329f139-kube-api-access-9mdlw\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.795697 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92ee6ab7-feb7-4dbd-881a-b8250652aef9-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.795710 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpqp7\" (UniqueName: \"kubernetes.io/projected/074e719c-b46b-4f91-ae2d-e7f30368a8ae-kube-api-access-cpqp7\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.795721 4861 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1da21583-02a3-4a99-a05c-976f017fb31c-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.795731 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.796520 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92ee6ab7-feb7-4dbd-881a-b8250652aef9-scripts" (OuterVolumeSpecName: "scripts") pod "92ee6ab7-feb7-4dbd-881a-b8250652aef9" (UID: "92ee6ab7-feb7-4dbd-881a-b8250652aef9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.851145 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b863561a-440f-4e92-a8f3-4786a24d0a5f" (UID: "b863561a-440f-4e92-a8f3-4786a24d0a5f"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.851253 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4307ff9-78bb-48ec-8096-6e06ff22e19b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a4307ff9-78bb-48ec-8096-6e06ff22e19b" (UID: "a4307ff9-78bb-48ec-8096-6e06ff22e19b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.851455 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92ee6ab7-feb7-4dbd-881a-b8250652aef9-kube-api-access-qwnxk" (OuterVolumeSpecName: "kube-api-access-qwnxk") pod "92ee6ab7-feb7-4dbd-881a-b8250652aef9" (UID: "92ee6ab7-feb7-4dbd-881a-b8250652aef9"). InnerVolumeSpecName "kube-api-access-qwnxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.854562 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-q996h" podUID="97eefa3e-8d45-46c5-bfa6-150d0255a15b" containerName="ovn-controller" probeResult="failure" output="command timed out" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.858651 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4307ff9-78bb-48ec-8096-6e06ff22e19b-kube-api-access-bpbhx" (OuterVolumeSpecName: "kube-api-access-bpbhx") pod "a4307ff9-78bb-48ec-8096-6e06ff22e19b" (UID: "a4307ff9-78bb-48ec-8096-6e06ff22e19b"). InnerVolumeSpecName "kube-api-access-bpbhx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.886222 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07533556-6a9f-4844-be7d-f9c9cf8c53a4-config-data" (OuterVolumeSpecName: "config-data") pod "07533556-6a9f-4844-be7d-f9c9cf8c53a4" (UID: "07533556-6a9f-4844-be7d-f9c9cf8c53a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.898759 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4307ff9-78bb-48ec-8096-6e06ff22e19b-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.898805 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpbhx\" (UniqueName: \"kubernetes.io/projected/a4307ff9-78bb-48ec-8096-6e06ff22e19b-kube-api-access-bpbhx\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.898836 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwnxk\" (UniqueName: \"kubernetes.io/projected/92ee6ab7-feb7-4dbd-881a-b8250652aef9-kube-api-access-qwnxk\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.898848 4861 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.898859 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07533556-6a9f-4844-be7d-f9c9cf8c53a4-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: I0219 13:34:09.898869 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/92ee6ab7-feb7-4dbd-881a-b8250652aef9-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:09 crc kubenswrapper[4861]: E0219 13:34:09.898975 4861 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 19 13:34:09 crc kubenswrapper[4861]: E0219 13:34:09.899074 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fe64a04b-1266-4b02-88e5-191f4a974422-config-data podName:fe64a04b-1266-4b02-88e5-191f4a974422 nodeName:}" failed. No retries permitted until 2026-02-19 13:34:17.899047269 +0000 UTC m=+1472.560150497 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/fe64a04b-1266-4b02-88e5-191f4a974422-config-data") pod "rabbitmq-cell1-server-0" (UID: "fe64a04b-1266-4b02-88e5-191f4a974422") : configmap "rabbitmq-cell1-config-data" not found Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.003847 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="279a265d-0cc8-45af-82ba-b8a485796fae" path="/var/lib/kubelet/pods/279a265d-0cc8-45af-82ba-b8a485796fae/volumes" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.009983 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30a1e51d-a60d-4f7f-8300-9ef99e3da2a6" path="/var/lib/kubelet/pods/30a1e51d-a60d-4f7f-8300-9ef99e3da2a6/volumes" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.012023 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aca02ec-903d-4ddd-a7df-25d323ed6dc1" path="/var/lib/kubelet/pods/3aca02ec-903d-4ddd-a7df-25d323ed6dc1/volumes" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.018150 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77e9ae58-534e-4312-8b56-9ec6708995ac" path="/var/lib/kubelet/pods/77e9ae58-534e-4312-8b56-9ec6708995ac/volumes" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 
13:34:10.022900 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bce14944-29de-44e7-9ad4-bb056cc6d656" path="/var/lib/kubelet/pods/bce14944-29de-44e7-9ad4-bb056cc6d656/volumes" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.024479 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff11b43a-9b7c-42c8-afac-6f66908975dc" path="/var/lib/kubelet/pods/ff11b43a-9b7c-42c8-afac-6f66908975dc/volumes" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.031319 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-q996h" podUID="97eefa3e-8d45-46c5-bfa6-150d0255a15b" containerName="ovn-controller" probeResult="failure" output=< Feb 19 13:34:10 crc kubenswrapper[4861]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Feb 19 13:34:10 crc kubenswrapper[4861]: > Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.054128 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245-config-data" (OuterVolumeSpecName: "config-data") pod "27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245" (UID: "27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.102707 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b863561a-440f-4e92-a8f3-4786a24d0a5f" (UID: "b863561a-440f-4e92-a8f3-4786a24d0a5f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.103284 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-combined-ca-bundle\") pod \"b863561a-440f-4e92-a8f3-4786a24d0a5f\" (UID: \"b863561a-440f-4e92-a8f3-4786a24d0a5f\") " Feb 19 13:34:10 crc kubenswrapper[4861]: W0219 13:34:10.103484 4861 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/b863561a-440f-4e92-a8f3-4786a24d0a5f/volumes/kubernetes.io~secret/combined-ca-bundle Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.103523 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b863561a-440f-4e92-a8f3-4786a24d0a5f" (UID: "b863561a-440f-4e92-a8f3-4786a24d0a5f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.103788 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.103806 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.124634 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07533556-6a9f-4844-be7d-f9c9cf8c53a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07533556-6a9f-4844-be7d-f9c9cf8c53a4" (UID: "07533556-6a9f-4844-be7d-f9c9cf8c53a4"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.148258 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46d0ac5c-1d20-4b80-be1b-21ad2641b215-config-data" (OuterVolumeSpecName: "config-data") pod "46d0ac5c-1d20-4b80-be1b-21ad2641b215" (UID: "46d0ac5c-1d20-4b80-be1b-21ad2641b215"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.148300 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b863561a-440f-4e92-a8f3-4786a24d0a5f" (UID: "b863561a-440f-4e92-a8f3-4786a24d0a5f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.158635 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c26363be-cfa7-49f5-82a2-709c67b44622-config-data" (OuterVolumeSpecName: "config-data") pod "c26363be-cfa7-49f5-82a2-709c67b44622" (UID: "c26363be-cfa7-49f5-82a2-709c67b44622"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.161932 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4307ff9-78bb-48ec-8096-6e06ff22e19b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4307ff9-78bb-48ec-8096-6e06ff22e19b" (UID: "a4307ff9-78bb-48ec-8096-6e06ff22e19b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.166091 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26816cde-a8b6-41a2-ab12-46f8aeebbb0d-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "26816cde-a8b6-41a2-ab12-46f8aeebbb0d" (UID: "26816cde-a8b6-41a2-ab12-46f8aeebbb0d"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.209921 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c26363be-cfa7-49f5-82a2-709c67b44622-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.209961 4861 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/26816cde-a8b6-41a2-ab12-46f8aeebbb0d-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.209973 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4307ff9-78bb-48ec-8096-6e06ff22e19b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.209982 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46d0ac5c-1d20-4b80-be1b-21ad2641b215-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.209992 4861 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.210001 4861 reconciler_common.go:293] "Volume detached 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07533556-6a9f-4844-be7d-f9c9cf8c53a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.233896 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/074e719c-b46b-4f91-ae2d-e7f30368a8ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "074e719c-b46b-4f91-ae2d-e7f30368a8ae" (UID: "074e719c-b46b-4f91-ae2d-e7f30368a8ae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.234363 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1da21583-02a3-4a99-a05c-976f017fb31c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1da21583-02a3-4a99-a05c-976f017fb31c" (UID: "1da21583-02a3-4a99-a05c-976f017fb31c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.240552 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f079df7d-6aa6-4eab-8a9a-3b4bc329f139-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f079df7d-6aa6-4eab-8a9a-3b4bc329f139" (UID: "f079df7d-6aa6-4eab-8a9a-3b4bc329f139"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.254569 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4307ff9-78bb-48ec-8096-6e06ff22e19b-config-data" (OuterVolumeSpecName: "config-data") pod "a4307ff9-78bb-48ec-8096-6e06ff22e19b" (UID: "a4307ff9-78bb-48ec-8096-6e06ff22e19b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.260575 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/074e719c-b46b-4f91-ae2d-e7f30368a8ae-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "074e719c-b46b-4f91-ae2d-e7f30368a8ae" (UID: "074e719c-b46b-4f91-ae2d-e7f30368a8ae"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.267911 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c26363be-cfa7-49f5-82a2-709c67b44622-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c26363be-cfa7-49f5-82a2-709c67b44622" (UID: "c26363be-cfa7-49f5-82a2-709c67b44622"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.273070 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-config-data" (OuterVolumeSpecName: "config-data") pod "b863561a-440f-4e92-a8f3-4786a24d0a5f" (UID: "b863561a-440f-4e92-a8f3-4786a24d0a5f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.312562 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da21583-02a3-4a99-a05c-976f017fb31c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.312827 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b863561a-440f-4e92-a8f3-4786a24d0a5f-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.312837 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/074e719c-b46b-4f91-ae2d-e7f30368a8ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.312847 4861 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/074e719c-b46b-4f91-ae2d-e7f30368a8ae-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.312854 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4307ff9-78bb-48ec-8096-6e06ff22e19b-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.312865 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26363be-cfa7-49f5-82a2-709c67b44622-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.312873 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f079df7d-6aa6-4eab-8a9a-3b4bc329f139-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.317715 
4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245" (UID: "27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.319902 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea376614-9f7c-4d27-aa0b-a0dba5c99a6a" (UID: "ea376614-9f7c-4d27-aa0b-a0dba5c99a6a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.328583 4861 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.329198 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1da21583-02a3-4a99-a05c-976f017fb31c-config-data" (OuterVolumeSpecName: "config-data") pod "1da21583-02a3-4a99-a05c-976f017fb31c" (UID: "1da21583-02a3-4a99-a05c-976f017fb31c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.337177 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/074e719c-b46b-4f91-ae2d-e7f30368a8ae-config-data" (OuterVolumeSpecName: "config-data") pod "074e719c-b46b-4f91-ae2d-e7f30368a8ae" (UID: "074e719c-b46b-4f91-ae2d-e7f30368a8ae"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.353475 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/074e719c-b46b-4f91-ae2d-e7f30368a8ae-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "074e719c-b46b-4f91-ae2d-e7f30368a8ae" (UID: "074e719c-b46b-4f91-ae2d-e7f30368a8ae"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.357185 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07533556-6a9f-4844-be7d-f9c9cf8c53a4-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "07533556-6a9f-4844-be7d-f9c9cf8c53a4" (UID: "07533556-6a9f-4844-be7d-f9c9cf8c53a4"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.361815 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92ee6ab7-feb7-4dbd-881a-b8250652aef9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92ee6ab7-feb7-4dbd-881a-b8250652aef9" (UID: "92ee6ab7-feb7-4dbd-881a-b8250652aef9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.388190 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46d0ac5c-1d20-4b80-be1b-21ad2641b215-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "46d0ac5c-1d20-4b80-be1b-21ad2641b215" (UID: "46d0ac5c-1d20-4b80-be1b-21ad2641b215"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.396188 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46d0ac5c-1d20-4b80-be1b-21ad2641b215-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "46d0ac5c-1d20-4b80-be1b-21ad2641b215" (UID: "46d0ac5c-1d20-4b80-be1b-21ad2641b215"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.408817 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1da21583-02a3-4a99-a05c-976f017fb31c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1da21583-02a3-4a99-a05c-976f017fb31c" (UID: "1da21583-02a3-4a99-a05c-976f017fb31c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.415704 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92ee6ab7-feb7-4dbd-881a-b8250652aef9-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "92ee6ab7-feb7-4dbd-881a-b8250652aef9" (UID: "92ee6ab7-feb7-4dbd-881a-b8250652aef9"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.416160 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f079df7d-6aa6-4eab-8a9a-3b4bc329f139-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "f079df7d-6aa6-4eab-8a9a-3b4bc329f139" (UID: "f079df7d-6aa6-4eab-8a9a-3b4bc329f139"). InnerVolumeSpecName "memcached-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.417403 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.417965 4861 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46d0ac5c-1d20-4b80-be1b-21ad2641b215-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.417988 4861 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/92ee6ab7-feb7-4dbd-881a-b8250652aef9-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.418003 4861 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46d0ac5c-1d20-4b80-be1b-21ad2641b215-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.418018 4861 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/074e719c-b46b-4f91-ae2d-e7f30368a8ae-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.418028 4861 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.418040 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/074e719c-b46b-4f91-ae2d-e7f30368a8ae-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.418051 4861 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92ee6ab7-feb7-4dbd-881a-b8250652aef9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.418062 4861 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/07533556-6a9f-4844-be7d-f9c9cf8c53a4-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.418073 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1da21583-02a3-4a99-a05c-976f017fb31c-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.418083 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.418094 4861 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da21583-02a3-4a99-a05c-976f017fb31c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.418184 4861 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f079df7d-6aa6-4eab-8a9a-3b4bc329f139-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.421841 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-config-data" (OuterVolumeSpecName: "config-data") pod "ea376614-9f7c-4d27-aa0b-a0dba5c99a6a" (UID: "ea376614-9f7c-4d27-aa0b-a0dba5c99a6a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.460284 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92ee6ab7-feb7-4dbd-881a-b8250652aef9-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "92ee6ab7-feb7-4dbd-881a-b8250652aef9" (UID: "92ee6ab7-feb7-4dbd-881a-b8250652aef9"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.528379 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.528408 4861 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/92ee6ab7-feb7-4dbd-881a-b8250652aef9-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.547044 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-545d79c874-vmrzt" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.553210 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c26363be-cfa7-49f5-82a2-709c67b44622","Type":"ContainerDied","Data":"eb2c8eb1382307580653b47152351783d1a3bc4dea4d1676e9ad1afc136cd805"} Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.553276 4861 scope.go:117] "RemoveContainer" containerID="b32d612f73bbe86a4cf54136585330c0e0f86c939e661fda48a99f88a3862277" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.553294 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.558072 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.558081 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ea376614-9f7c-4d27-aa0b-a0dba5c99a6a","Type":"ContainerDied","Data":"dc9e11a4270ec7fafbd8e72206f731cb4ae61a39e2a9eb320a3081a5d10f8c13"} Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.562299 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.575071 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_92ee6ab7-feb7-4dbd-881a-b8250652aef9/ovn-northd/0.log" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.575238 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.575605 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"92ee6ab7-feb7-4dbd-881a-b8250652aef9","Type":"ContainerDied","Data":"64069a6ca39e06e805ac89d3b0154e0cdab7ee86b9afa4ae9dc1d783fb0ff1bb"} Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.577953 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kfscq" event={"ID":"8245aba7-ec9b-4b09-a3ba-691c370db0cf","Type":"ContainerDied","Data":"7b4a1d0b8abb14480babc36a762044e9e537e0dbc75360e7ce58df78618a7193"} Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.580727 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b4a1d0b8abb14480babc36a762044e9e537e0dbc75360e7ce58df78618a7193" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.584016 4861 scope.go:117] "RemoveContainer" containerID="6726e1fe83695be57e870a22efef89e68bbd009b8791859e1a75a341ca4e9ea7" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.585227 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.594930 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"707836a9-478e-4110-b5f5-9ee7e6b46e21","Type":"ContainerDied","Data":"d62489fee003ad3856feb1155bf068125930e2a162ff64fde39bc1e2574255ea"} Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.595206 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.608841 4861 generic.go:334] "Generic (PLEG): container finished" podID="c881f3a1-3450-4ca9-8e8a-1c3d67e46770" containerID="98364a3619cc3e7bfa6596a941af0e9dc1f03e998f4534eef9b37681f5bf9324" exitCode=0 Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.608907 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c881f3a1-3450-4ca9-8e8a-1c3d67e46770","Type":"ContainerDied","Data":"98364a3619cc3e7bfa6596a941af0e9dc1f03e998f4534eef9b37681f5bf9324"} Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.608938 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c881f3a1-3450-4ca9-8e8a-1c3d67e46770","Type":"ContainerDied","Data":"14feff8e8b18cb35665e312e89f35a358158af874f735695bbf9133b4dc5ef5f"} Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.608950 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14feff8e8b18cb35665e312e89f35a358158af874f735695bbf9133b4dc5ef5f" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.612274 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6ffbcbb99f-phcxs" event={"ID":"a4307ff9-78bb-48ec-8096-6e06ff22e19b","Type":"ContainerDied","Data":"d8175cd89c4d0dc5bc6f2778856c4aced2ad9ae2cf793adc4fd6a8cb21dd7c71"} Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.612566 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6ffbcbb99f-phcxs" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.629362 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckddl\" (UniqueName: \"kubernetes.io/projected/5d002f91-22f1-4ebd-8bc9-04e81e4a00ef-kube-api-access-ckddl\") pod \"5d002f91-22f1-4ebd-8bc9-04e81e4a00ef\" (UID: \"5d002f91-22f1-4ebd-8bc9-04e81e4a00ef\") " Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.629478 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb6tp\" (UniqueName: \"kubernetes.io/projected/9211a2d8-8917-464d-a790-efc469302556-kube-api-access-rb6tp\") pod \"9211a2d8-8917-464d-a790-efc469302556\" (UID: \"9211a2d8-8917-464d-a790-efc469302556\") " Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.629537 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/707836a9-478e-4110-b5f5-9ee7e6b46e21-config-data\") pod \"707836a9-478e-4110-b5f5-9ee7e6b46e21\" (UID: \"707836a9-478e-4110-b5f5-9ee7e6b46e21\") " Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.629567 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/707836a9-478e-4110-b5f5-9ee7e6b46e21-internal-tls-certs\") pod \"707836a9-478e-4110-b5f5-9ee7e6b46e21\" (UID: \"707836a9-478e-4110-b5f5-9ee7e6b46e21\") " Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.629593 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/707836a9-478e-4110-b5f5-9ee7e6b46e21-logs\") pod \"707836a9-478e-4110-b5f5-9ee7e6b46e21\" (UID: \"707836a9-478e-4110-b5f5-9ee7e6b46e21\") " Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.629653 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-8892k\" (UniqueName: \"kubernetes.io/projected/707836a9-478e-4110-b5f5-9ee7e6b46e21-kube-api-access-8892k\") pod \"707836a9-478e-4110-b5f5-9ee7e6b46e21\" (UID: \"707836a9-478e-4110-b5f5-9ee7e6b46e21\") " Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.629701 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d002f91-22f1-4ebd-8bc9-04e81e4a00ef-combined-ca-bundle\") pod \"5d002f91-22f1-4ebd-8bc9-04e81e4a00ef\" (UID: \"5d002f91-22f1-4ebd-8bc9-04e81e4a00ef\") " Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.629744 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d002f91-22f1-4ebd-8bc9-04e81e4a00ef-logs\") pod \"5d002f91-22f1-4ebd-8bc9-04e81e4a00ef\" (UID: \"5d002f91-22f1-4ebd-8bc9-04e81e4a00ef\") " Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.630694 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d002f91-22f1-4ebd-8bc9-04e81e4a00ef-config-data-custom\") pod \"5d002f91-22f1-4ebd-8bc9-04e81e4a00ef\" (UID: \"5d002f91-22f1-4ebd-8bc9-04e81e4a00ef\") " Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.630749 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9211a2d8-8917-464d-a790-efc469302556-config-data\") pod \"9211a2d8-8917-464d-a790-efc469302556\" (UID: \"9211a2d8-8917-464d-a790-efc469302556\") " Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.630786 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/707836a9-478e-4110-b5f5-9ee7e6b46e21-public-tls-certs\") pod \"707836a9-478e-4110-b5f5-9ee7e6b46e21\" (UID: \"707836a9-478e-4110-b5f5-9ee7e6b46e21\") " Feb 19 13:34:10 crc 
kubenswrapper[4861]: I0219 13:34:10.630818 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d002f91-22f1-4ebd-8bc9-04e81e4a00ef-config-data\") pod \"5d002f91-22f1-4ebd-8bc9-04e81e4a00ef\" (UID: \"5d002f91-22f1-4ebd-8bc9-04e81e4a00ef\") " Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.630845 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9211a2d8-8917-464d-a790-efc469302556-combined-ca-bundle\") pod \"9211a2d8-8917-464d-a790-efc469302556\" (UID: \"9211a2d8-8917-464d-a790-efc469302556\") " Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.630853 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d002f91-22f1-4ebd-8bc9-04e81e4a00ef-logs" (OuterVolumeSpecName: "logs") pod "5d002f91-22f1-4ebd-8bc9-04e81e4a00ef" (UID: "5d002f91-22f1-4ebd-8bc9-04e81e4a00ef"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.630877 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/707836a9-478e-4110-b5f5-9ee7e6b46e21-combined-ca-bundle\") pod \"707836a9-478e-4110-b5f5-9ee7e6b46e21\" (UID: \"707836a9-478e-4110-b5f5-9ee7e6b46e21\") " Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.631243 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/707836a9-478e-4110-b5f5-9ee7e6b46e21-logs" (OuterVolumeSpecName: "logs") pod "707836a9-478e-4110-b5f5-9ee7e6b46e21" (UID: "707836a9-478e-4110-b5f5-9ee7e6b46e21"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.632441 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/707836a9-478e-4110-b5f5-9ee7e6b46e21-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.632470 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d002f91-22f1-4ebd-8bc9-04e81e4a00ef-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.641319 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/707836a9-478e-4110-b5f5-9ee7e6b46e21-kube-api-access-8892k" (OuterVolumeSpecName: "kube-api-access-8892k") pod "707836a9-478e-4110-b5f5-9ee7e6b46e21" (UID: "707836a9-478e-4110-b5f5-9ee7e6b46e21"). InnerVolumeSpecName "kube-api-access-8892k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.642273 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.642487 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9211a2d8-8917-464d-a790-efc469302556","Type":"ContainerDied","Data":"51f4b662f96e909619e3c94722083f0117a176c16a55ed299c080923c23416cf"} Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.643983 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d002f91-22f1-4ebd-8bc9-04e81e4a00ef-kube-api-access-ckddl" (OuterVolumeSpecName: "kube-api-access-ckddl") pod "5d002f91-22f1-4ebd-8bc9-04e81e4a00ef" (UID: "5d002f91-22f1-4ebd-8bc9-04e81e4a00ef"). InnerVolumeSpecName "kube-api-access-ckddl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.646520 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d002f91-22f1-4ebd-8bc9-04e81e4a00ef-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5d002f91-22f1-4ebd-8bc9-04e81e4a00ef" (UID: "5d002f91-22f1-4ebd-8bc9-04e81e4a00ef"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.648553 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.662720 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.664713 4861 scope.go:117] "RemoveContainer" containerID="55e9e83bff1da6a4f3c4c60dbe202de73f4077183db64ccdd0ed5fa347035067" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.666648 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9211a2d8-8917-464d-a790-efc469302556-kube-api-access-rb6tp" (OuterVolumeSpecName: "kube-api-access-rb6tp") pod "9211a2d8-8917-464d-a790-efc469302556" (UID: "9211a2d8-8917-464d-a790-efc469302556"). InnerVolumeSpecName "kube-api-access-rb6tp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.667066 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-545d79c874-vmrzt" event={"ID":"5d002f91-22f1-4ebd-8bc9-04e81e4a00ef","Type":"ContainerDied","Data":"6d7a71c53242fcf62e136045d6262d60179ab6d7c9f178bb1105c59863455c6d"} Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.667151 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-545d79c874-vmrzt" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.668983 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.669518 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d6c467cc6-ng4wh" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.669856 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.669857 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.669894 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.669916 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-39dc-account-create-update-wxznp" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.669964 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6799fd8d6-p6tpl" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.670045 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.685560 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/707836a9-478e-4110-b5f5-9ee7e6b46e21-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "707836a9-478e-4110-b5f5-9ee7e6b46e21" (UID: "707836a9-478e-4110-b5f5-9ee7e6b46e21"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.697657 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9211a2d8-8917-464d-a790-efc469302556-config-data" (OuterVolumeSpecName: "config-data") pod "9211a2d8-8917-464d-a790-efc469302556" (UID: "9211a2d8-8917-464d-a790-efc469302556"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.700133 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/707836a9-478e-4110-b5f5-9ee7e6b46e21-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "707836a9-478e-4110-b5f5-9ee7e6b46e21" (UID: "707836a9-478e-4110-b5f5-9ee7e6b46e21"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.708554 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/707836a9-478e-4110-b5f5-9ee7e6b46e21-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "707836a9-478e-4110-b5f5-9ee7e6b46e21" (UID: "707836a9-478e-4110-b5f5-9ee7e6b46e21"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.714520 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d002f91-22f1-4ebd-8bc9-04e81e4a00ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d002f91-22f1-4ebd-8bc9-04e81e4a00ef" (UID: "5d002f91-22f1-4ebd-8bc9-04e81e4a00ef"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.714657 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/707836a9-478e-4110-b5f5-9ee7e6b46e21-config-data" (OuterVolumeSpecName: "config-data") pod "707836a9-478e-4110-b5f5-9ee7e6b46e21" (UID: "707836a9-478e-4110-b5f5-9ee7e6b46e21"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.718589 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9211a2d8-8917-464d-a790-efc469302556-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9211a2d8-8917-464d-a790-efc469302556" (UID: "9211a2d8-8917-464d-a790-efc469302556"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.749759 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9211a2d8-8917-464d-a790-efc469302556-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.749798 4861 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/707836a9-478e-4110-b5f5-9ee7e6b46e21-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.749811 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9211a2d8-8917-464d-a790-efc469302556-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.749820 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/707836a9-478e-4110-b5f5-9ee7e6b46e21-combined-ca-bundle\") on node \"crc\" DevicePath 
\"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.749830 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckddl\" (UniqueName: \"kubernetes.io/projected/5d002f91-22f1-4ebd-8bc9-04e81e4a00ef-kube-api-access-ckddl\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.749848 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rb6tp\" (UniqueName: \"kubernetes.io/projected/9211a2d8-8917-464d-a790-efc469302556-kube-api-access-rb6tp\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.749857 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/707836a9-478e-4110-b5f5-9ee7e6b46e21-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.749866 4861 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/707836a9-478e-4110-b5f5-9ee7e6b46e21-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.749877 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8892k\" (UniqueName: \"kubernetes.io/projected/707836a9-478e-4110-b5f5-9ee7e6b46e21-kube-api-access-8892k\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.749886 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d002f91-22f1-4ebd-8bc9-04e81e4a00ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.749897 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d002f91-22f1-4ebd-8bc9-04e81e4a00ef-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.752930 4861 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kfscq" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.766945 4861 scope.go:117] "RemoveContainer" containerID="66481bc6acbb5d53d8e31bc7da07ae265932f327cf54cd5cb7411c629205684f" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.784468 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d002f91-22f1-4ebd-8bc9-04e81e4a00ef-config-data" (OuterVolumeSpecName: "config-data") pod "5d002f91-22f1-4ebd-8bc9-04e81e4a00ef" (UID: "5d002f91-22f1-4ebd-8bc9-04e81e4a00ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: E0219 13:34:10.806439 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc881f3a1_3450_4ca9_8e8a_1c3d67e46770.slice/crio-98364a3619cc3e7bfa6596a941af0e9dc1f03e998f4534eef9b37681f5bf9324.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc881f3a1_3450_4ca9_8e8a_1c3d67e46770.slice/crio-conmon-98364a3619cc3e7bfa6596a941af0e9dc1f03e998f4534eef9b37681f5bf9324.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26816cde_a8b6_41a2_ab12_46f8aeebbb0d.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod074e719c_b46b_4f91_ae2d_e7f30368a8ae.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33cdc553_cf61_4fb3_93a0_8d0d4da3cfcc.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4307ff9_78bb_48ec_8096_6e06ff22e19b.slice/crio-d8175cd89c4d0dc5bc6f2778856c4aced2ad9ae2cf793adc4fd6a8cb21dd7c71\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc26363be_cfa7_49f5_82a2_709c67b44622.slice/crio-eb2c8eb1382307580653b47152351783d1a3bc4dea4d1676e9ad1afc136cd805\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod074e719c_b46b_4f91_ae2d_e7f30368a8ae.slice/crio-90fc7f8f56b065d37a3d8b95fed12c8241808ee1000d5c6034774c368d36148a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26816cde_a8b6_41a2_ab12_46f8aeebbb0d.slice/crio-7cd39315016fcfc501940883ccb5c45657e85891f3e447edd02bda2d924716d5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27cfe279_5bf2_4ea6_9fb3_cf1fcb1f8245.slice/crio-e0ccd9d20249ffa43e258c1eb24a12f2f9b4bbb300cd486e80e18f584e72646b\": RecentStats: unable to find data in memory cache]" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.851472 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8245aba7-ec9b-4b09-a3ba-691c370db0cf-operator-scripts\") pod \"8245aba7-ec9b-4b09-a3ba-691c370db0cf\" (UID: \"8245aba7-ec9b-4b09-a3ba-691c370db0cf\") " Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.852138 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fvkr\" (UniqueName: \"kubernetes.io/projected/8245aba7-ec9b-4b09-a3ba-691c370db0cf-kube-api-access-8fvkr\") pod \"8245aba7-ec9b-4b09-a3ba-691c370db0cf\" (UID: \"8245aba7-ec9b-4b09-a3ba-691c370db0cf\") " Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.852523 4861 reconciler_common.go:293] 
"Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d002f91-22f1-4ebd-8bc9-04e81e4a00ef-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.857217 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8245aba7-ec9b-4b09-a3ba-691c370db0cf-kube-api-access-8fvkr" (OuterVolumeSpecName: "kube-api-access-8fvkr") pod "8245aba7-ec9b-4b09-a3ba-691c370db0cf" (UID: "8245aba7-ec9b-4b09-a3ba-691c370db0cf"). InnerVolumeSpecName "kube-api-access-8fvkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.857493 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8245aba7-ec9b-4b09-a3ba-691c370db0cf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8245aba7-ec9b-4b09-a3ba-691c370db0cf" (UID: "8245aba7-ec9b-4b09-a3ba-691c370db0cf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.945963 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.954602 4861 scope.go:117] "RemoveContainer" containerID="37b5efa091b40af32f79760c01361587fb0151c53e6b051790c4ce833de471f6" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.956358 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fvkr\" (UniqueName: \"kubernetes.io/projected/8245aba7-ec9b-4b09-a3ba-691c370db0cf-kube-api-access-8fvkr\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: I0219 13:34:10.956386 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8245aba7-ec9b-4b09-a3ba-691c370db0cf-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:10 crc kubenswrapper[4861]: E0219 13:34:10.956474 4861 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 19 13:34:10 crc kubenswrapper[4861]: E0219 13:34:10.956528 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b117524a-eaad-4666-9e0e-bda909b2ad30-config-data podName:b117524a-eaad-4666-9e0e-bda909b2ad30 nodeName:}" failed. No retries permitted until 2026-02-19 13:34:18.95650583 +0000 UTC m=+1473.617609048 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b117524a-eaad-4666-9e0e-bda909b2ad30-config-data") pod "rabbitmq-server-0" (UID: "b117524a-eaad-4666-9e0e-bda909b2ad30") : configmap "rabbitmq-config-data" not found Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:10.996484 4861 scope.go:117] "RemoveContainer" containerID="7955b99165b940ef4462fa1d533ec8daadebf57e1422d1ab3180cb3a66fc27cb" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.001623 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.018213 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.032738 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.042216 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.049476 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.057114 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.057301 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwk77\" (UniqueName: \"kubernetes.io/projected/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-kube-api-access-mwk77\") pod \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\" (UID: \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.057340 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-kolla-config\") pod 
\"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\" (UID: \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.057369 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-operator-scripts\") pod \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\" (UID: \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.057405 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-combined-ca-bundle\") pod \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\" (UID: \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.057464 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\" (UID: \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.057525 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-galera-tls-certs\") pod \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\" (UID: \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.057617 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-config-data-default\") pod \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\" (UID: \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.057644 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-config-data-generated\") pod \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\" (UID: \"c881f3a1-3450-4ca9-8e8a-1c3d67e46770\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.058608 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "c881f3a1-3450-4ca9-8e8a-1c3d67e46770" (UID: "c881f3a1-3450-4ca9-8e8a-1c3d67e46770"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.059272 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "c881f3a1-3450-4ca9-8e8a-1c3d67e46770" (UID: "c881f3a1-3450-4ca9-8e8a-1c3d67e46770"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.061438 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "c881f3a1-3450-4ca9-8e8a-1c3d67e46770" (UID: "c881f3a1-3450-4ca9-8e8a-1c3d67e46770"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.064961 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c881f3a1-3450-4ca9-8e8a-1c3d67e46770" (UID: "c881f3a1-3450-4ca9-8e8a-1c3d67e46770"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.065007 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-d6c467cc6-ng4wh"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.071890 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-kube-api-access-mwk77" (OuterVolumeSpecName: "kube-api-access-mwk77") pod "c881f3a1-3450-4ca9-8e8a-1c3d67e46770" (UID: "c881f3a1-3450-4ca9-8e8a-1c3d67e46770"). InnerVolumeSpecName "kube-api-access-mwk77". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.071941 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-d6c467cc6-ng4wh"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.081229 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "mysql-db") pod "c881f3a1-3450-4ca9-8e8a-1c3d67e46770" (UID: "c881f3a1-3450-4ca9-8e8a-1c3d67e46770"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.107042 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.126868 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c881f3a1-3450-4ca9-8e8a-1c3d67e46770" (UID: "c881f3a1-3450-4ca9-8e8a-1c3d67e46770"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.135562 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.136694 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "c881f3a1-3450-4ca9-8e8a-1c3d67e46770" (UID: "c881f3a1-3450-4ca9-8e8a-1c3d67e46770"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.146782 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.154024 4861 scope.go:117] "RemoveContainer" containerID="ec45a853d859de7712ce8a8df163d92e6be4aa74bd810bb6e043ce0ce739485a" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.154162 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.159399 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-config-data-default\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.159449 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-config-data-generated\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.159465 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwk77\" (UniqueName: \"kubernetes.io/projected/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-kube-api-access-mwk77\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc 
kubenswrapper[4861]: I0219 13:34:11.159477 4861 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-kolla-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.159489 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.159499 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.159525 4861 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.159536 4861 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c881f3a1-3450-4ca9-8e8a-1c3d67e46770-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.171739 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6799fd8d6-p6tpl"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.186282 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6799fd8d6-p6tpl"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.188438 4861 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.198286 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-worker-6ffbcbb99f-phcxs"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.212487 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-6ffbcbb99f-phcxs"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.224440 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.229878 4861 scope.go:117] "RemoveContainer" containerID="fc0693a9e1476f2b6d033af8f56ce772a2fac61eb55bf48764b0906664653ac4" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.237541 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.260315 4861 scope.go:117] "RemoveContainer" containerID="53911b7e6ee036738f82d06e28457c9efb4b7e608b7a20ad34bd125adf651646" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.260889 4861 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.269276 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-39dc-account-create-update-wxznp"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.276511 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.281310 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-39dc-account-create-update-wxznp"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.289287 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.294440 4861 scope.go:117] "RemoveContainer" containerID="51828dec5ed3bf469238caf516c39172b72277a44dd7431a9ff705c60186eff0" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.296211 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.304178 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.325363 4861 scope.go:117] "RemoveContainer" containerID="1d619313cf3eb9116f4f061ab19e9d256b6f4c3706035768630ec087a8ab9bd7" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.326017 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.326052 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.330933 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.337634 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-545d79c874-vmrzt"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.343184 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-545d79c874-vmrzt"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.347297 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-scheduler-0"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.351403 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.362098 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fe64a04b-1266-4b02-88e5-191f4a974422-rabbitmq-tls\") pod \"fe64a04b-1266-4b02-88e5-191f4a974422\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.362149 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fe64a04b-1266-4b02-88e5-191f4a974422-server-conf\") pod \"fe64a04b-1266-4b02-88e5-191f4a974422\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.362199 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fe64a04b-1266-4b02-88e5-191f4a974422-rabbitmq-erlang-cookie\") pod \"fe64a04b-1266-4b02-88e5-191f4a974422\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.362230 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fe64a04b-1266-4b02-88e5-191f4a974422-config-data\") pod \"fe64a04b-1266-4b02-88e5-191f4a974422\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.362828 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fe64a04b-1266-4b02-88e5-191f4a974422-rabbitmq-plugins\") pod \"fe64a04b-1266-4b02-88e5-191f4a974422\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " Feb 19 13:34:11 crc 
kubenswrapper[4861]: I0219 13:34:11.362845 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe64a04b-1266-4b02-88e5-191f4a974422-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "fe64a04b-1266-4b02-88e5-191f4a974422" (UID: "fe64a04b-1266-4b02-88e5-191f4a974422"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.362877 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fe64a04b-1266-4b02-88e5-191f4a974422-erlang-cookie-secret\") pod \"fe64a04b-1266-4b02-88e5-191f4a974422\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.362921 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fe64a04b-1266-4b02-88e5-191f4a974422-rabbitmq-confd\") pod \"fe64a04b-1266-4b02-88e5-191f4a974422\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.362944 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7vkt\" (UniqueName: \"kubernetes.io/projected/fe64a04b-1266-4b02-88e5-191f4a974422-kube-api-access-z7vkt\") pod \"fe64a04b-1266-4b02-88e5-191f4a974422\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.362972 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fe64a04b-1266-4b02-88e5-191f4a974422-pod-info\") pod \"fe64a04b-1266-4b02-88e5-191f4a974422\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.363003 4861 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"fe64a04b-1266-4b02-88e5-191f4a974422\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.363018 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fe64a04b-1266-4b02-88e5-191f4a974422-plugins-conf\") pod \"fe64a04b-1266-4b02-88e5-191f4a974422\" (UID: \"fe64a04b-1266-4b02-88e5-191f4a974422\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.363672 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfkb2\" (UniqueName: \"kubernetes.io/projected/33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc-kube-api-access-qfkb2\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.363692 4861 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fe64a04b-1266-4b02-88e5-191f4a974422-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.363702 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.364160 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe64a04b-1266-4b02-88e5-191f4a974422-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "fe64a04b-1266-4b02-88e5-191f4a974422" (UID: "fe64a04b-1266-4b02-88e5-191f4a974422"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.364409 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe64a04b-1266-4b02-88e5-191f4a974422-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "fe64a04b-1266-4b02-88e5-191f4a974422" (UID: "fe64a04b-1266-4b02-88e5-191f4a974422"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.367395 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/fe64a04b-1266-4b02-88e5-191f4a974422-pod-info" (OuterVolumeSpecName: "pod-info") pod "fe64a04b-1266-4b02-88e5-191f4a974422" (UID: "fe64a04b-1266-4b02-88e5-191f4a974422"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.382230 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe64a04b-1266-4b02-88e5-191f4a974422-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "fe64a04b-1266-4b02-88e5-191f4a974422" (UID: "fe64a04b-1266-4b02-88e5-191f4a974422"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.382590 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe64a04b-1266-4b02-88e5-191f4a974422-kube-api-access-z7vkt" (OuterVolumeSpecName: "kube-api-access-z7vkt") pod "fe64a04b-1266-4b02-88e5-191f4a974422" (UID: "fe64a04b-1266-4b02-88e5-191f4a974422"). InnerVolumeSpecName "kube-api-access-z7vkt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.388815 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "fe64a04b-1266-4b02-88e5-191f4a974422" (UID: "fe64a04b-1266-4b02-88e5-191f4a974422"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.389349 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe64a04b-1266-4b02-88e5-191f4a974422-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "fe64a04b-1266-4b02-88e5-191f4a974422" (UID: "fe64a04b-1266-4b02-88e5-191f4a974422"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.395355 4861 scope.go:117] "RemoveContainer" containerID="5e5b1d9b0913f678bfeffc302d49785832edca29f1e96866dc26ab6c9f4872d5" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.395579 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe64a04b-1266-4b02-88e5-191f4a974422-config-data" (OuterVolumeSpecName: "config-data") pod "fe64a04b-1266-4b02-88e5-191f4a974422" (UID: "fe64a04b-1266-4b02-88e5-191f4a974422"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.409609 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe64a04b-1266-4b02-88e5-191f4a974422-server-conf" (OuterVolumeSpecName: "server-conf") pod "fe64a04b-1266-4b02-88e5-191f4a974422" (UID: "fe64a04b-1266-4b02-88e5-191f4a974422"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.443146 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6c9494f487-fzm28" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.465586 4861 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fe64a04b-1266-4b02-88e5-191f4a974422-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.465631 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7vkt\" (UniqueName: \"kubernetes.io/projected/fe64a04b-1266-4b02-88e5-191f4a974422-kube-api-access-z7vkt\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.465643 4861 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fe64a04b-1266-4b02-88e5-191f4a974422-pod-info\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.465736 4861 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.465770 4861 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fe64a04b-1266-4b02-88e5-191f4a974422-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.465785 4861 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fe64a04b-1266-4b02-88e5-191f4a974422-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.465795 4861 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/fe64a04b-1266-4b02-88e5-191f4a974422-server-conf\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.465806 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fe64a04b-1266-4b02-88e5-191f4a974422-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.465816 4861 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fe64a04b-1266-4b02-88e5-191f4a974422-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.489104 4861 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.511360 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe64a04b-1266-4b02-88e5-191f4a974422-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "fe64a04b-1266-4b02-88e5-191f4a974422" (UID: "fe64a04b-1266-4b02-88e5-191f4a974422"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.566459 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-internal-tls-certs\") pod \"382166c8-355e-407b-9721-3eee34966095\" (UID: \"382166c8-355e-407b-9721-3eee34966095\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.566668 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-public-tls-certs\") pod \"382166c8-355e-407b-9721-3eee34966095\" (UID: \"382166c8-355e-407b-9721-3eee34966095\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.566771 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-config-data\") pod \"382166c8-355e-407b-9721-3eee34966095\" (UID: \"382166c8-355e-407b-9721-3eee34966095\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.566834 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-fernet-keys\") pod \"382166c8-355e-407b-9721-3eee34966095\" (UID: \"382166c8-355e-407b-9721-3eee34966095\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.566902 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khktc\" (UniqueName: \"kubernetes.io/projected/382166c8-355e-407b-9721-3eee34966095-kube-api-access-khktc\") pod \"382166c8-355e-407b-9721-3eee34966095\" (UID: \"382166c8-355e-407b-9721-3eee34966095\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.567044 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-credential-keys\") pod \"382166c8-355e-407b-9721-3eee34966095\" (UID: \"382166c8-355e-407b-9721-3eee34966095\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.567169 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-scripts\") pod \"382166c8-355e-407b-9721-3eee34966095\" (UID: \"382166c8-355e-407b-9721-3eee34966095\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.567257 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-combined-ca-bundle\") pod \"382166c8-355e-407b-9721-3eee34966095\" (UID: \"382166c8-355e-407b-9721-3eee34966095\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.567633 4861 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fe64a04b-1266-4b02-88e5-191f4a974422-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.567696 4861 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.570528 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "382166c8-355e-407b-9721-3eee34966095" (UID: "382166c8-355e-407b-9721-3eee34966095"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.572772 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "382166c8-355e-407b-9721-3eee34966095" (UID: "382166c8-355e-407b-9721-3eee34966095"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.580964 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-scripts" (OuterVolumeSpecName: "scripts") pod "382166c8-355e-407b-9721-3eee34966095" (UID: "382166c8-355e-407b-9721-3eee34966095"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.581593 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/382166c8-355e-407b-9721-3eee34966095-kube-api-access-khktc" (OuterVolumeSpecName: "kube-api-access-khktc") pod "382166c8-355e-407b-9721-3eee34966095" (UID: "382166c8-355e-407b-9721-3eee34966095"). InnerVolumeSpecName "kube-api-access-khktc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.591108 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "382166c8-355e-407b-9721-3eee34966095" (UID: "382166c8-355e-407b-9721-3eee34966095"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.591632 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-config-data" (OuterVolumeSpecName: "config-data") pod "382166c8-355e-407b-9721-3eee34966095" (UID: "382166c8-355e-407b-9721-3eee34966095"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.620007 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "382166c8-355e-407b-9721-3eee34966095" (UID: "382166c8-355e-407b-9721-3eee34966095"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.622575 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "382166c8-355e-407b-9721-3eee34966095" (UID: "382166c8-355e-407b-9721-3eee34966095"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.673373 4861 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.673434 4861 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.673448 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.673460 4861 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.673473 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khktc\" (UniqueName: \"kubernetes.io/projected/382166c8-355e-407b-9721-3eee34966095-kube-api-access-khktc\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.673487 4861 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.673498 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.673509 4861 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/382166c8-355e-407b-9721-3eee34966095-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.686174 4861 generic.go:334] "Generic (PLEG): container finished" podID="b117524a-eaad-4666-9e0e-bda909b2ad30" containerID="a34aea6a9dce7447619085b8bdcc194d614605d384336f31f474bb36345d67a2" exitCode=0 Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.686262 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b117524a-eaad-4666-9e0e-bda909b2ad30","Type":"ContainerDied","Data":"a34aea6a9dce7447619085b8bdcc194d614605d384336f31f474bb36345d67a2"} Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.690002 4861 generic.go:334] "Generic (PLEG): container finished" podID="fe64a04b-1266-4b02-88e5-191f4a974422" containerID="d3405400dc8ba912020a46ce0cfbf537ec4969bbed50585c5f4aa5006d304d81" exitCode=0 Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.690085 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fe64a04b-1266-4b02-88e5-191f4a974422","Type":"ContainerDied","Data":"d3405400dc8ba912020a46ce0cfbf537ec4969bbed50585c5f4aa5006d304d81"} Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.690107 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fe64a04b-1266-4b02-88e5-191f4a974422","Type":"ContainerDied","Data":"ed72ed1d3af93b4785b2d55b81c21bbe190549115677449dba63b55ba7a965f1"} Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.690129 4861 scope.go:117] "RemoveContainer" containerID="d3405400dc8ba912020a46ce0cfbf537ec4969bbed50585c5f4aa5006d304d81" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.690745 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.707765 4861 generic.go:334] "Generic (PLEG): container finished" podID="382166c8-355e-407b-9721-3eee34966095" containerID="734b5999b62cb697fa384640a54b84e4030809dc81b17ff593f4ae3a0f78d52d" exitCode=0 Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.707847 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6c9494f487-fzm28" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.707973 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6c9494f487-fzm28" event={"ID":"382166c8-355e-407b-9721-3eee34966095","Type":"ContainerDied","Data":"734b5999b62cb697fa384640a54b84e4030809dc81b17ff593f4ae3a0f78d52d"} Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.708075 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6c9494f487-fzm28" event={"ID":"382166c8-355e-407b-9721-3eee34966095","Type":"ContainerDied","Data":"8803783c7750bf2430911b3f45cc1b1884f612db8facb09f4253866c4a458a13"} Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.708950 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.712954 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kfscq" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.715628 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.729147 4861 scope.go:117] "RemoveContainer" containerID="3cbfb3eab539b4a4fa2f026f3ff4040b3cbd06fffc551908b986a67d49a2acba" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.774371 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b117524a-eaad-4666-9e0e-bda909b2ad30-pod-info\") pod \"b117524a-eaad-4666-9e0e-bda909b2ad30\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.774466 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b117524a-eaad-4666-9e0e-bda909b2ad30-plugins-conf\") pod \"b117524a-eaad-4666-9e0e-bda909b2ad30\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.774496 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b117524a-eaad-4666-9e0e-bda909b2ad30-rabbitmq-erlang-cookie\") pod \"b117524a-eaad-4666-9e0e-bda909b2ad30\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.774545 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b117524a-eaad-4666-9e0e-bda909b2ad30-server-conf\") pod \"b117524a-eaad-4666-9e0e-bda909b2ad30\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.774568 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b117524a-eaad-4666-9e0e-bda909b2ad30-rabbitmq-tls\") pod \"b117524a-eaad-4666-9e0e-bda909b2ad30\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") 
" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.774596 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b117524a-eaad-4666-9e0e-bda909b2ad30-rabbitmq-confd\") pod \"b117524a-eaad-4666-9e0e-bda909b2ad30\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.774649 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b117524a-eaad-4666-9e0e-bda909b2ad30-erlang-cookie-secret\") pod \"b117524a-eaad-4666-9e0e-bda909b2ad30\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.774674 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b117524a-eaad-4666-9e0e-bda909b2ad30-config-data\") pod \"b117524a-eaad-4666-9e0e-bda909b2ad30\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.774696 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkcj6\" (UniqueName: \"kubernetes.io/projected/b117524a-eaad-4666-9e0e-bda909b2ad30-kube-api-access-wkcj6\") pod \"b117524a-eaad-4666-9e0e-bda909b2ad30\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.774732 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b117524a-eaad-4666-9e0e-bda909b2ad30-rabbitmq-plugins\") pod \"b117524a-eaad-4666-9e0e-bda909b2ad30\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.774753 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"b117524a-eaad-4666-9e0e-bda909b2ad30\" (UID: \"b117524a-eaad-4666-9e0e-bda909b2ad30\") " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.783476 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b117524a-eaad-4666-9e0e-bda909b2ad30-pod-info" (OuterVolumeSpecName: "pod-info") pod "b117524a-eaad-4666-9e0e-bda909b2ad30" (UID: "b117524a-eaad-4666-9e0e-bda909b2ad30"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.783825 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b117524a-eaad-4666-9e0e-bda909b2ad30-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b117524a-eaad-4666-9e0e-bda909b2ad30" (UID: "b117524a-eaad-4666-9e0e-bda909b2ad30"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.784512 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b117524a-eaad-4666-9e0e-bda909b2ad30-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b117524a-eaad-4666-9e0e-bda909b2ad30" (UID: "b117524a-eaad-4666-9e0e-bda909b2ad30"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.790600 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "b117524a-eaad-4666-9e0e-bda909b2ad30" (UID: "b117524a-eaad-4666-9e0e-bda909b2ad30"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.790642 4861 scope.go:117] "RemoveContainer" containerID="d3405400dc8ba912020a46ce0cfbf537ec4969bbed50585c5f4aa5006d304d81" Feb 19 13:34:11 crc kubenswrapper[4861]: E0219 13:34:11.793606 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3405400dc8ba912020a46ce0cfbf537ec4969bbed50585c5f4aa5006d304d81\": container with ID starting with d3405400dc8ba912020a46ce0cfbf537ec4969bbed50585c5f4aa5006d304d81 not found: ID does not exist" containerID="d3405400dc8ba912020a46ce0cfbf537ec4969bbed50585c5f4aa5006d304d81" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.793667 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3405400dc8ba912020a46ce0cfbf537ec4969bbed50585c5f4aa5006d304d81"} err="failed to get container status \"d3405400dc8ba912020a46ce0cfbf537ec4969bbed50585c5f4aa5006d304d81\": rpc error: code = NotFound desc = could not find container \"d3405400dc8ba912020a46ce0cfbf537ec4969bbed50585c5f4aa5006d304d81\": container with ID starting with d3405400dc8ba912020a46ce0cfbf537ec4969bbed50585c5f4aa5006d304d81 not found: ID does not exist" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.793706 4861 scope.go:117] "RemoveContainer" containerID="3cbfb3eab539b4a4fa2f026f3ff4040b3cbd06fffc551908b986a67d49a2acba" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.794748 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b117524a-eaad-4666-9e0e-bda909b2ad30-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b117524a-eaad-4666-9e0e-bda909b2ad30" (UID: "b117524a-eaad-4666-9e0e-bda909b2ad30"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.794985 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b117524a-eaad-4666-9e0e-bda909b2ad30-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "b117524a-eaad-4666-9e0e-bda909b2ad30" (UID: "b117524a-eaad-4666-9e0e-bda909b2ad30"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: E0219 13:34:11.799681 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cbfb3eab539b4a4fa2f026f3ff4040b3cbd06fffc551908b986a67d49a2acba\": container with ID starting with 3cbfb3eab539b4a4fa2f026f3ff4040b3cbd06fffc551908b986a67d49a2acba not found: ID does not exist" containerID="3cbfb3eab539b4a4fa2f026f3ff4040b3cbd06fffc551908b986a67d49a2acba" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.799743 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cbfb3eab539b4a4fa2f026f3ff4040b3cbd06fffc551908b986a67d49a2acba"} err="failed to get container status \"3cbfb3eab539b4a4fa2f026f3ff4040b3cbd06fffc551908b986a67d49a2acba\": rpc error: code = NotFound desc = could not find container \"3cbfb3eab539b4a4fa2f026f3ff4040b3cbd06fffc551908b986a67d49a2acba\": container with ID starting with 3cbfb3eab539b4a4fa2f026f3ff4040b3cbd06fffc551908b986a67d49a2acba not found: ID does not exist" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.799780 4861 scope.go:117] "RemoveContainer" containerID="734b5999b62cb697fa384640a54b84e4030809dc81b17ff593f4ae3a0f78d52d" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.800276 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b117524a-eaad-4666-9e0e-bda909b2ad30-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod 
"b117524a-eaad-4666-9e0e-bda909b2ad30" (UID: "b117524a-eaad-4666-9e0e-bda909b2ad30"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.801622 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b117524a-eaad-4666-9e0e-bda909b2ad30-kube-api-access-wkcj6" (OuterVolumeSpecName: "kube-api-access-wkcj6") pod "b117524a-eaad-4666-9e0e-bda909b2ad30" (UID: "b117524a-eaad-4666-9e0e-bda909b2ad30"). InnerVolumeSpecName "kube-api-access-wkcj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.813797 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.825571 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b117524a-eaad-4666-9e0e-bda909b2ad30-config-data" (OuterVolumeSpecName: "config-data") pod "b117524a-eaad-4666-9e0e-bda909b2ad30" (UID: "b117524a-eaad-4666-9e0e-bda909b2ad30"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.830105 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.830450 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b117524a-eaad-4666-9e0e-bda909b2ad30-server-conf" (OuterVolumeSpecName: "server-conf") pod "b117524a-eaad-4666-9e0e-bda909b2ad30" (UID: "b117524a-eaad-4666-9e0e-bda909b2ad30"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.843885 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.864653 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.873024 4861 scope.go:117] "RemoveContainer" containerID="734b5999b62cb697fa384640a54b84e4030809dc81b17ff593f4ae3a0f78d52d" Feb 19 13:34:11 crc kubenswrapper[4861]: E0219 13:34:11.873694 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"734b5999b62cb697fa384640a54b84e4030809dc81b17ff593f4ae3a0f78d52d\": container with ID starting with 734b5999b62cb697fa384640a54b84e4030809dc81b17ff593f4ae3a0f78d52d not found: ID does not exist" containerID="734b5999b62cb697fa384640a54b84e4030809dc81b17ff593f4ae3a0f78d52d" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.873730 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"734b5999b62cb697fa384640a54b84e4030809dc81b17ff593f4ae3a0f78d52d"} err="failed to get container status \"734b5999b62cb697fa384640a54b84e4030809dc81b17ff593f4ae3a0f78d52d\": rpc error: code = NotFound desc = could not find container \"734b5999b62cb697fa384640a54b84e4030809dc81b17ff593f4ae3a0f78d52d\": container with ID starting with 734b5999b62cb697fa384640a54b84e4030809dc81b17ff593f4ae3a0f78d52d not found: ID does not exist" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.876821 4861 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b117524a-eaad-4666-9e0e-bda909b2ad30-server-conf\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.876850 4861 reconciler_common.go:293] "Volume detached for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b117524a-eaad-4666-9e0e-bda909b2ad30-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.876860 4861 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b117524a-eaad-4666-9e0e-bda909b2ad30-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.876871 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b117524a-eaad-4666-9e0e-bda909b2ad30-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.876880 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkcj6\" (UniqueName: \"kubernetes.io/projected/b117524a-eaad-4666-9e0e-bda909b2ad30-kube-api-access-wkcj6\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.876888 4861 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b117524a-eaad-4666-9e0e-bda909b2ad30-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.876908 4861 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.876916 4861 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b117524a-eaad-4666-9e0e-bda909b2ad30-pod-info\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.876925 4861 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b117524a-eaad-4666-9e0e-bda909b2ad30-plugins-conf\") on node \"crc\" 
DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.876936 4861 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b117524a-eaad-4666-9e0e-bda909b2ad30-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.882670 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b117524a-eaad-4666-9e0e-bda909b2ad30-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b117524a-eaad-4666-9e0e-bda909b2ad30" (UID: "b117524a-eaad-4666-9e0e-bda909b2ad30"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.887573 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-kfscq"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.892329 4861 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.895326 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-kfscq"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.901305 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6c9494f487-fzm28"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.906783 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6c9494f487-fzm28"] Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.977847 4861 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b117524a-eaad-4666-9e0e-bda909b2ad30-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.977900 4861 reconciler_common.go:293] 
"Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.989473 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="074e719c-b46b-4f91-ae2d-e7f30368a8ae" path="/var/lib/kubelet/pods/074e719c-b46b-4f91-ae2d-e7f30368a8ae/volumes" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.990082 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07533556-6a9f-4844-be7d-f9c9cf8c53a4" path="/var/lib/kubelet/pods/07533556-6a9f-4844-be7d-f9c9cf8c53a4/volumes" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.990680 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1da21583-02a3-4a99-a05c-976f017fb31c" path="/var/lib/kubelet/pods/1da21583-02a3-4a99-a05c-976f017fb31c/volumes" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.991834 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26816cde-a8b6-41a2-ab12-46f8aeebbb0d" path="/var/lib/kubelet/pods/26816cde-a8b6-41a2-ab12-46f8aeebbb0d/volumes" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.992271 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245" path="/var/lib/kubelet/pods/27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245/volumes" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.992626 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc" path="/var/lib/kubelet/pods/33cdc553-cf61-4fb3-93a0-8d0d4da3cfcc/volumes" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.992906 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="382166c8-355e-407b-9721-3eee34966095" path="/var/lib/kubelet/pods/382166c8-355e-407b-9721-3eee34966095/volumes" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.994045 4861 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="46d0ac5c-1d20-4b80-be1b-21ad2641b215" path="/var/lib/kubelet/pods/46d0ac5c-1d20-4b80-be1b-21ad2641b215/volumes" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.994838 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d002f91-22f1-4ebd-8bc9-04e81e4a00ef" path="/var/lib/kubelet/pods/5d002f91-22f1-4ebd-8bc9-04e81e4a00ef/volumes" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.996001 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="707836a9-478e-4110-b5f5-9ee7e6b46e21" path="/var/lib/kubelet/pods/707836a9-478e-4110-b5f5-9ee7e6b46e21/volumes" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.996670 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8245aba7-ec9b-4b09-a3ba-691c370db0cf" path="/var/lib/kubelet/pods/8245aba7-ec9b-4b09-a3ba-691c370db0cf/volumes" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.997066 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9211a2d8-8917-464d-a790-efc469302556" path="/var/lib/kubelet/pods/9211a2d8-8917-464d-a790-efc469302556/volumes" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.997756 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92ee6ab7-feb7-4dbd-881a-b8250652aef9" path="/var/lib/kubelet/pods/92ee6ab7-feb7-4dbd-881a-b8250652aef9/volumes" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.998793 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4307ff9-78bb-48ec-8096-6e06ff22e19b" path="/var/lib/kubelet/pods/a4307ff9-78bb-48ec-8096-6e06ff22e19b/volumes" Feb 19 13:34:11 crc kubenswrapper[4861]: I0219 13:34:11.999401 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b863561a-440f-4e92-a8f3-4786a24d0a5f" path="/var/lib/kubelet/pods/b863561a-440f-4e92-a8f3-4786a24d0a5f/volumes" Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.000435 4861 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="c26363be-cfa7-49f5-82a2-709c67b44622" path="/var/lib/kubelet/pods/c26363be-cfa7-49f5-82a2-709c67b44622/volumes" Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.001028 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c881f3a1-3450-4ca9-8e8a-1c3d67e46770" path="/var/lib/kubelet/pods/c881f3a1-3450-4ca9-8e8a-1c3d67e46770/volumes" Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.001555 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea376614-9f7c-4d27-aa0b-a0dba5c99a6a" path="/var/lib/kubelet/pods/ea376614-9f7c-4d27-aa0b-a0dba5c99a6a/volumes" Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.002546 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f079df7d-6aa6-4eab-8a9a-3b4bc329f139" path="/var/lib/kubelet/pods/f079df7d-6aa6-4eab-8a9a-3b4bc329f139/volumes" Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.003170 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe64a04b-1266-4b02-88e5-191f4a974422" path="/var/lib/kubelet/pods/fe64a04b-1266-4b02-88e5-191f4a974422/volumes" Feb 19 13:34:12 crc kubenswrapper[4861]: E0219 13:34:12.384398 4861 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Feb 19 13:34:12 crc kubenswrapper[4861]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-02-19T13:34:05Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Feb 19 13:34:12 crc kubenswrapper[4861]: /etc/init.d/functions: line 589: 400 Alarm clock "$@" Feb 19 13:34:12 crc kubenswrapper[4861]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-q996h" message=< Feb 19 13:34:12 crc kubenswrapper[4861]: Exiting ovn-controller (1) [FAILED] Feb 19 13:34:12 crc kubenswrapper[4861]: Killing ovn-controller (1) [ OK ] Feb 19 13:34:12 crc kubenswrapper[4861]: Killing ovn-controller (1) 
with SIGKILL [ OK ] Feb 19 13:34:12 crc kubenswrapper[4861]: 2026-02-19T13:34:05Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Feb 19 13:34:12 crc kubenswrapper[4861]: /etc/init.d/functions: line 589: 400 Alarm clock "$@" Feb 19 13:34:12 crc kubenswrapper[4861]: > Feb 19 13:34:12 crc kubenswrapper[4861]: E0219 13:34:12.384773 4861 kuberuntime_container.go:691] "PreStop hook failed" err=< Feb 19 13:34:12 crc kubenswrapper[4861]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-02-19T13:34:05Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Feb 19 13:34:12 crc kubenswrapper[4861]: /etc/init.d/functions: line 589: 400 Alarm clock "$@" Feb 19 13:34:12 crc kubenswrapper[4861]: > pod="openstack/ovn-controller-q996h" podUID="97eefa3e-8d45-46c5-bfa6-150d0255a15b" containerName="ovn-controller" containerID="cri-o://1fe2bd69f8790f32fe3ed2c80f24fe603fd6477d505ed84850d75392aec160f8" Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.384940 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-q996h" podUID="97eefa3e-8d45-46c5-bfa6-150d0255a15b" containerName="ovn-controller" containerID="cri-o://1fe2bd69f8790f32fe3ed2c80f24fe603fd6477d505ed84850d75392aec160f8" gracePeriod=22 Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.719956 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b117524a-eaad-4666-9e0e-bda909b2ad30","Type":"ContainerDied","Data":"4c331a487f3a55d9c2ebb39a7cc7eb25a9fa83b8847228fefe48276beb268f35"} Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.719990 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.720028 4861 scope.go:117] "RemoveContainer" containerID="a34aea6a9dce7447619085b8bdcc194d614605d384336f31f474bb36345d67a2" Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.722591 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-q996h_97eefa3e-8d45-46c5-bfa6-150d0255a15b/ovn-controller/0.log" Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.722627 4861 generic.go:334] "Generic (PLEG): container finished" podID="97eefa3e-8d45-46c5-bfa6-150d0255a15b" containerID="1fe2bd69f8790f32fe3ed2c80f24fe603fd6477d505ed84850d75392aec160f8" exitCode=137 Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.722679 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q996h" event={"ID":"97eefa3e-8d45-46c5-bfa6-150d0255a15b","Type":"ContainerDied","Data":"1fe2bd69f8790f32fe3ed2c80f24fe603fd6477d505ed84850d75392aec160f8"} Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.727685 4861 generic.go:334] "Generic (PLEG): container finished" podID="cdaa2d03-6ae0-405a-af42-499d99ec711d" containerID="e596ff917ea1fb5095cf558e3c5f097ddc50829b4c61ec3a615a77087e4cd4bb" exitCode=0 Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.727767 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f585bc76f-dg9rf" event={"ID":"cdaa2d03-6ae0-405a-af42-499d99ec711d","Type":"ContainerDied","Data":"e596ff917ea1fb5095cf558e3c5f097ddc50829b4c61ec3a615a77087e4cd4bb"} Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.894536 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-q996h_97eefa3e-8d45-46c5-bfa6-150d0255a15b/ovn-controller/0.log" Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.894596 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-q996h" Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.909371 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7f585bc76f-dg9rf" Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.913267 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.922303 4861 scope.go:117] "RemoveContainer" containerID="f11100b3d10e0ed10dbc1ccc95f8c840822253bd67ae1be0a2829b7c7e5404fc" Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.927156 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.997136 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97eefa3e-8d45-46c5-bfa6-150d0255a15b-scripts\") pod \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\" (UID: \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\") " Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.997186 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/97eefa3e-8d45-46c5-bfa6-150d0255a15b-var-log-ovn\") pod \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\" (UID: \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\") " Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.997265 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-combined-ca-bundle\") pod \"cdaa2d03-6ae0-405a-af42-499d99ec711d\" (UID: \"cdaa2d03-6ae0-405a-af42-499d99ec711d\") " Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.997298 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/97eefa3e-8d45-46c5-bfa6-150d0255a15b-var-run-ovn\") pod \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\" (UID: \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\") " Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.997377 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-public-tls-certs\") pod \"cdaa2d03-6ae0-405a-af42-499d99ec711d\" (UID: \"cdaa2d03-6ae0-405a-af42-499d99ec711d\") " Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.997408 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-httpd-config\") pod \"cdaa2d03-6ae0-405a-af42-499d99ec711d\" (UID: \"cdaa2d03-6ae0-405a-af42-499d99ec711d\") " Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.997486 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/97eefa3e-8d45-46c5-bfa6-150d0255a15b-var-run\") pod \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\" (UID: \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\") " Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.997521 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/97eefa3e-8d45-46c5-bfa6-150d0255a15b-ovn-controller-tls-certs\") pod \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\" (UID: \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\") " Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.997563 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-ovndb-tls-certs\") pod \"cdaa2d03-6ae0-405a-af42-499d99ec711d\" (UID: \"cdaa2d03-6ae0-405a-af42-499d99ec711d\") " Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 
13:34:12.997572 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97eefa3e-8d45-46c5-bfa6-150d0255a15b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "97eefa3e-8d45-46c5-bfa6-150d0255a15b" (UID: "97eefa3e-8d45-46c5-bfa6-150d0255a15b"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.997599 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8g5mw\" (UniqueName: \"kubernetes.io/projected/97eefa3e-8d45-46c5-bfa6-150d0255a15b-kube-api-access-8g5mw\") pod \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\" (UID: \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\") " Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.997691 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-internal-tls-certs\") pod \"cdaa2d03-6ae0-405a-af42-499d99ec711d\" (UID: \"cdaa2d03-6ae0-405a-af42-499d99ec711d\") " Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.997725 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfzrt\" (UniqueName: \"kubernetes.io/projected/cdaa2d03-6ae0-405a-af42-499d99ec711d-kube-api-access-dfzrt\") pod \"cdaa2d03-6ae0-405a-af42-499d99ec711d\" (UID: \"cdaa2d03-6ae0-405a-af42-499d99ec711d\") " Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.997796 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-config\") pod \"cdaa2d03-6ae0-405a-af42-499d99ec711d\" (UID: \"cdaa2d03-6ae0-405a-af42-499d99ec711d\") " Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.997845 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/97eefa3e-8d45-46c5-bfa6-150d0255a15b-combined-ca-bundle\") pod \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\" (UID: \"97eefa3e-8d45-46c5-bfa6-150d0255a15b\") " Feb 19 13:34:12 crc kubenswrapper[4861]: I0219 13:34:12.998519 4861 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/97eefa3e-8d45-46c5-bfa6-150d0255a15b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.014716 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97eefa3e-8d45-46c5-bfa6-150d0255a15b-var-run" (OuterVolumeSpecName: "var-run") pod "97eefa3e-8d45-46c5-bfa6-150d0255a15b" (UID: "97eefa3e-8d45-46c5-bfa6-150d0255a15b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.016746 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97eefa3e-8d45-46c5-bfa6-150d0255a15b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "97eefa3e-8d45-46c5-bfa6-150d0255a15b" (UID: "97eefa3e-8d45-46c5-bfa6-150d0255a15b"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.017639 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97eefa3e-8d45-46c5-bfa6-150d0255a15b-scripts" (OuterVolumeSpecName: "scripts") pod "97eefa3e-8d45-46c5-bfa6-150d0255a15b" (UID: "97eefa3e-8d45-46c5-bfa6-150d0255a15b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.018837 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdaa2d03-6ae0-405a-af42-499d99ec711d-kube-api-access-dfzrt" (OuterVolumeSpecName: "kube-api-access-dfzrt") pod "cdaa2d03-6ae0-405a-af42-499d99ec711d" (UID: "cdaa2d03-6ae0-405a-af42-499d99ec711d"). InnerVolumeSpecName "kube-api-access-dfzrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.018863 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "cdaa2d03-6ae0-405a-af42-499d99ec711d" (UID: "cdaa2d03-6ae0-405a-af42-499d99ec711d"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.031492 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97eefa3e-8d45-46c5-bfa6-150d0255a15b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97eefa3e-8d45-46c5-bfa6-150d0255a15b" (UID: "97eefa3e-8d45-46c5-bfa6-150d0255a15b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.031711 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97eefa3e-8d45-46c5-bfa6-150d0255a15b-kube-api-access-8g5mw" (OuterVolumeSpecName: "kube-api-access-8g5mw") pod "97eefa3e-8d45-46c5-bfa6-150d0255a15b" (UID: "97eefa3e-8d45-46c5-bfa6-150d0255a15b"). InnerVolumeSpecName "kube-api-access-8g5mw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.059854 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-config" (OuterVolumeSpecName: "config") pod "cdaa2d03-6ae0-405a-af42-499d99ec711d" (UID: "cdaa2d03-6ae0-405a-af42-499d99ec711d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.059854 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cdaa2d03-6ae0-405a-af42-499d99ec711d" (UID: "cdaa2d03-6ae0-405a-af42-499d99ec711d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.060884 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cdaa2d03-6ae0-405a-af42-499d99ec711d" (UID: "cdaa2d03-6ae0-405a-af42-499d99ec711d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.068201 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cdaa2d03-6ae0-405a-af42-499d99ec711d" (UID: "cdaa2d03-6ae0-405a-af42-499d99ec711d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.078443 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97eefa3e-8d45-46c5-bfa6-150d0255a15b-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "97eefa3e-8d45-46c5-bfa6-150d0255a15b" (UID: "97eefa3e-8d45-46c5-bfa6-150d0255a15b"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.081942 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "cdaa2d03-6ae0-405a-af42-499d99ec711d" (UID: "cdaa2d03-6ae0-405a-af42-499d99ec711d"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.100359 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.100386 4861 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.100396 4861 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.100404 4861 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/97eefa3e-8d45-46c5-bfa6-150d0255a15b-var-run\") on node \"crc\" 
DevicePath \"\"" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.100413 4861 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/97eefa3e-8d45-46c5-bfa6-150d0255a15b-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.100439 4861 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.100450 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8g5mw\" (UniqueName: \"kubernetes.io/projected/97eefa3e-8d45-46c5-bfa6-150d0255a15b-kube-api-access-8g5mw\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.100459 4861 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.100469 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfzrt\" (UniqueName: \"kubernetes.io/projected/cdaa2d03-6ae0-405a-af42-499d99ec711d-kube-api-access-dfzrt\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.100477 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/cdaa2d03-6ae0-405a-af42-499d99ec711d-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.100485 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97eefa3e-8d45-46c5-bfa6-150d0255a15b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.100493 4861 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97eefa3e-8d45-46c5-bfa6-150d0255a15b-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.100501 4861 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/97eefa3e-8d45-46c5-bfa6-150d0255a15b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.108788 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.201216 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/11e264a8-32df-4980-a6b8-eb1964d644b9-sg-core-conf-yaml\") pod \"11e264a8-32df-4980-a6b8-eb1964d644b9\" (UID: \"11e264a8-32df-4980-a6b8-eb1964d644b9\") " Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.201307 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt2vf\" (UniqueName: \"kubernetes.io/projected/11e264a8-32df-4980-a6b8-eb1964d644b9-kube-api-access-nt2vf\") pod \"11e264a8-32df-4980-a6b8-eb1964d644b9\" (UID: \"11e264a8-32df-4980-a6b8-eb1964d644b9\") " Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.201336 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/11e264a8-32df-4980-a6b8-eb1964d644b9-ceilometer-tls-certs\") pod \"11e264a8-32df-4980-a6b8-eb1964d644b9\" (UID: \"11e264a8-32df-4980-a6b8-eb1964d644b9\") " Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.201357 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11e264a8-32df-4980-a6b8-eb1964d644b9-scripts\") pod \"11e264a8-32df-4980-a6b8-eb1964d644b9\" 
(UID: \"11e264a8-32df-4980-a6b8-eb1964d644b9\") " Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.201372 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11e264a8-32df-4980-a6b8-eb1964d644b9-combined-ca-bundle\") pod \"11e264a8-32df-4980-a6b8-eb1964d644b9\" (UID: \"11e264a8-32df-4980-a6b8-eb1964d644b9\") " Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.201401 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11e264a8-32df-4980-a6b8-eb1964d644b9-config-data\") pod \"11e264a8-32df-4980-a6b8-eb1964d644b9\" (UID: \"11e264a8-32df-4980-a6b8-eb1964d644b9\") " Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.201451 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11e264a8-32df-4980-a6b8-eb1964d644b9-log-httpd\") pod \"11e264a8-32df-4980-a6b8-eb1964d644b9\" (UID: \"11e264a8-32df-4980-a6b8-eb1964d644b9\") " Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.201482 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11e264a8-32df-4980-a6b8-eb1964d644b9-run-httpd\") pod \"11e264a8-32df-4980-a6b8-eb1964d644b9\" (UID: \"11e264a8-32df-4980-a6b8-eb1964d644b9\") " Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.202053 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11e264a8-32df-4980-a6b8-eb1964d644b9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "11e264a8-32df-4980-a6b8-eb1964d644b9" (UID: "11e264a8-32df-4980-a6b8-eb1964d644b9"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.202139 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11e264a8-32df-4980-a6b8-eb1964d644b9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "11e264a8-32df-4980-a6b8-eb1964d644b9" (UID: "11e264a8-32df-4980-a6b8-eb1964d644b9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.204611 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11e264a8-32df-4980-a6b8-eb1964d644b9-kube-api-access-nt2vf" (OuterVolumeSpecName: "kube-api-access-nt2vf") pod "11e264a8-32df-4980-a6b8-eb1964d644b9" (UID: "11e264a8-32df-4980-a6b8-eb1964d644b9"). InnerVolumeSpecName "kube-api-access-nt2vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.205174 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11e264a8-32df-4980-a6b8-eb1964d644b9-scripts" (OuterVolumeSpecName: "scripts") pod "11e264a8-32df-4980-a6b8-eb1964d644b9" (UID: "11e264a8-32df-4980-a6b8-eb1964d644b9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.221027 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11e264a8-32df-4980-a6b8-eb1964d644b9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "11e264a8-32df-4980-a6b8-eb1964d644b9" (UID: "11e264a8-32df-4980-a6b8-eb1964d644b9"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.242092 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11e264a8-32df-4980-a6b8-eb1964d644b9-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "11e264a8-32df-4980-a6b8-eb1964d644b9" (UID: "11e264a8-32df-4980-a6b8-eb1964d644b9"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.258876 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11e264a8-32df-4980-a6b8-eb1964d644b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11e264a8-32df-4980-a6b8-eb1964d644b9" (UID: "11e264a8-32df-4980-a6b8-eb1964d644b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.294307 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11e264a8-32df-4980-a6b8-eb1964d644b9-config-data" (OuterVolumeSpecName: "config-data") pod "11e264a8-32df-4980-a6b8-eb1964d644b9" (UID: "11e264a8-32df-4980-a6b8-eb1964d644b9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.303177 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11e264a8-32df-4980-a6b8-eb1964d644b9-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.303211 4861 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11e264a8-32df-4980-a6b8-eb1964d644b9-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.303220 4861 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11e264a8-32df-4980-a6b8-eb1964d644b9-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.303228 4861 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/11e264a8-32df-4980-a6b8-eb1964d644b9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.303240 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nt2vf\" (UniqueName: \"kubernetes.io/projected/11e264a8-32df-4980-a6b8-eb1964d644b9-kube-api-access-nt2vf\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.303251 4861 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/11e264a8-32df-4980-a6b8-eb1964d644b9-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.303258 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11e264a8-32df-4980-a6b8-eb1964d644b9-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.303266 4861 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11e264a8-32df-4980-a6b8-eb1964d644b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.776536 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f585bc76f-dg9rf" event={"ID":"cdaa2d03-6ae0-405a-af42-499d99ec711d","Type":"ContainerDied","Data":"7a4d203093c25a8bd89493b0c04ac98a0bc61a46fc5bbeb35c529cd9d56bc9c7"} Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.776642 4861 scope.go:117] "RemoveContainer" containerID="4da13ead2ea9ec2a3cf985ce57e0d64f8641678421b9b3e6a3695e68e35cfeb4" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.776686 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7f585bc76f-dg9rf" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.782310 4861 generic.go:334] "Generic (PLEG): container finished" podID="11e264a8-32df-4980-a6b8-eb1964d644b9" containerID="9f6c4de0d600a26e221ad9f08bb51644c618ab4dae788de65e8a67c29ebd791c" exitCode=0 Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.782363 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11e264a8-32df-4980-a6b8-eb1964d644b9","Type":"ContainerDied","Data":"9f6c4de0d600a26e221ad9f08bb51644c618ab4dae788de65e8a67c29ebd791c"} Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.782385 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11e264a8-32df-4980-a6b8-eb1964d644b9","Type":"ContainerDied","Data":"ba74838fa8dd8f959bd0f528028aa4f632e59f8de7d472dc75e581aa76bfb1ab"} Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.782468 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.789412 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-q996h_97eefa3e-8d45-46c5-bfa6-150d0255a15b/ovn-controller/0.log" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.789477 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q996h" event={"ID":"97eefa3e-8d45-46c5-bfa6-150d0255a15b","Type":"ContainerDied","Data":"bfeedc322a9020738eee629141a95b7b04da979a31848d302d2fe0527c8906f5"} Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.789548 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q996h" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.808232 4861 scope.go:117] "RemoveContainer" containerID="e596ff917ea1fb5095cf558e3c5f097ddc50829b4c61ec3a615a77087e4cd4bb" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.849990 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.862199 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.870706 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7f585bc76f-dg9rf"] Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.871685 4861 scope.go:117] "RemoveContainer" containerID="3cd0ac3239cff1282edbee81f3c0d497e15f737c6290e568ced9914379dbaec6" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.875293 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7f585bc76f-dg9rf"] Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.879359 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-q996h"] Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.884193 4861 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/ovn-controller-q996h"] Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.905921 4861 scope.go:117] "RemoveContainer" containerID="96e096da5cc28914cdb4a90e4edba37087790b9cc227d970c910751b7cd84e84" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.922099 4861 scope.go:117] "RemoveContainer" containerID="9f6c4de0d600a26e221ad9f08bb51644c618ab4dae788de65e8a67c29ebd791c" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.937956 4861 scope.go:117] "RemoveContainer" containerID="2a3f1ab0fa078c08628336e0493b5bea70b63c6f50cb31d94e0bdcc4c18bb468" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.965984 4861 scope.go:117] "RemoveContainer" containerID="3cd0ac3239cff1282edbee81f3c0d497e15f737c6290e568ced9914379dbaec6" Feb 19 13:34:13 crc kubenswrapper[4861]: E0219 13:34:13.980554 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cd0ac3239cff1282edbee81f3c0d497e15f737c6290e568ced9914379dbaec6\": container with ID starting with 3cd0ac3239cff1282edbee81f3c0d497e15f737c6290e568ced9914379dbaec6 not found: ID does not exist" containerID="3cd0ac3239cff1282edbee81f3c0d497e15f737c6290e568ced9914379dbaec6" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.980607 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cd0ac3239cff1282edbee81f3c0d497e15f737c6290e568ced9914379dbaec6"} err="failed to get container status \"3cd0ac3239cff1282edbee81f3c0d497e15f737c6290e568ced9914379dbaec6\": rpc error: code = NotFound desc = could not find container \"3cd0ac3239cff1282edbee81f3c0d497e15f737c6290e568ced9914379dbaec6\": container with ID starting with 3cd0ac3239cff1282edbee81f3c0d497e15f737c6290e568ced9914379dbaec6 not found: ID does not exist" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.980640 4861 scope.go:117] "RemoveContainer" 
containerID="96e096da5cc28914cdb4a90e4edba37087790b9cc227d970c910751b7cd84e84" Feb 19 13:34:13 crc kubenswrapper[4861]: E0219 13:34:13.981750 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96e096da5cc28914cdb4a90e4edba37087790b9cc227d970c910751b7cd84e84\": container with ID starting with 96e096da5cc28914cdb4a90e4edba37087790b9cc227d970c910751b7cd84e84 not found: ID does not exist" containerID="96e096da5cc28914cdb4a90e4edba37087790b9cc227d970c910751b7cd84e84" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.981781 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96e096da5cc28914cdb4a90e4edba37087790b9cc227d970c910751b7cd84e84"} err="failed to get container status \"96e096da5cc28914cdb4a90e4edba37087790b9cc227d970c910751b7cd84e84\": rpc error: code = NotFound desc = could not find container \"96e096da5cc28914cdb4a90e4edba37087790b9cc227d970c910751b7cd84e84\": container with ID starting with 96e096da5cc28914cdb4a90e4edba37087790b9cc227d970c910751b7cd84e84 not found: ID does not exist" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.981800 4861 scope.go:117] "RemoveContainer" containerID="9f6c4de0d600a26e221ad9f08bb51644c618ab4dae788de65e8a67c29ebd791c" Feb 19 13:34:13 crc kubenswrapper[4861]: E0219 13:34:13.982844 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f6c4de0d600a26e221ad9f08bb51644c618ab4dae788de65e8a67c29ebd791c\": container with ID starting with 9f6c4de0d600a26e221ad9f08bb51644c618ab4dae788de65e8a67c29ebd791c not found: ID does not exist" containerID="9f6c4de0d600a26e221ad9f08bb51644c618ab4dae788de65e8a67c29ebd791c" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.982880 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9f6c4de0d600a26e221ad9f08bb51644c618ab4dae788de65e8a67c29ebd791c"} err="failed to get container status \"9f6c4de0d600a26e221ad9f08bb51644c618ab4dae788de65e8a67c29ebd791c\": rpc error: code = NotFound desc = could not find container \"9f6c4de0d600a26e221ad9f08bb51644c618ab4dae788de65e8a67c29ebd791c\": container with ID starting with 9f6c4de0d600a26e221ad9f08bb51644c618ab4dae788de65e8a67c29ebd791c not found: ID does not exist" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.982898 4861 scope.go:117] "RemoveContainer" containerID="2a3f1ab0fa078c08628336e0493b5bea70b63c6f50cb31d94e0bdcc4c18bb468" Feb 19 13:34:13 crc kubenswrapper[4861]: E0219 13:34:13.983154 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a3f1ab0fa078c08628336e0493b5bea70b63c6f50cb31d94e0bdcc4c18bb468\": container with ID starting with 2a3f1ab0fa078c08628336e0493b5bea70b63c6f50cb31d94e0bdcc4c18bb468 not found: ID does not exist" containerID="2a3f1ab0fa078c08628336e0493b5bea70b63c6f50cb31d94e0bdcc4c18bb468" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.983183 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a3f1ab0fa078c08628336e0493b5bea70b63c6f50cb31d94e0bdcc4c18bb468"} err="failed to get container status \"2a3f1ab0fa078c08628336e0493b5bea70b63c6f50cb31d94e0bdcc4c18bb468\": rpc error: code = NotFound desc = could not find container \"2a3f1ab0fa078c08628336e0493b5bea70b63c6f50cb31d94e0bdcc4c18bb468\": container with ID starting with 2a3f1ab0fa078c08628336e0493b5bea70b63c6f50cb31d94e0bdcc4c18bb468 not found: ID does not exist" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.983199 4861 scope.go:117] "RemoveContainer" containerID="1fe2bd69f8790f32fe3ed2c80f24fe603fd6477d505ed84850d75392aec160f8" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.991168 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="11e264a8-32df-4980-a6b8-eb1964d644b9" path="/var/lib/kubelet/pods/11e264a8-32df-4980-a6b8-eb1964d644b9/volumes" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.993217 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97eefa3e-8d45-46c5-bfa6-150d0255a15b" path="/var/lib/kubelet/pods/97eefa3e-8d45-46c5-bfa6-150d0255a15b/volumes" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.995743 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b117524a-eaad-4666-9e0e-bda909b2ad30" path="/var/lib/kubelet/pods/b117524a-eaad-4666-9e0e-bda909b2ad30/volumes" Feb 19 13:34:13 crc kubenswrapper[4861]: I0219 13:34:13.997296 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdaa2d03-6ae0-405a-af42-499d99ec711d" path="/var/lib/kubelet/pods/cdaa2d03-6ae0-405a-af42-499d99ec711d/volumes" Feb 19 13:34:14 crc kubenswrapper[4861]: I0219 13:34:14.117749 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d6c467cc6-ng4wh" podUID="074e719c-b46b-4f91-ae2d-e7f30368a8ae" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.165:9311/healthcheck\": dial tcp 10.217.0.165:9311: i/o timeout" Feb 19 13:34:14 crc kubenswrapper[4861]: I0219 13:34:14.117768 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d6c467cc6-ng4wh" podUID="074e719c-b46b-4f91-ae2d-e7f30368a8ae" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.165:9311/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 13:34:14 crc kubenswrapper[4861]: E0219 13:34:14.396558 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358 is running failed: container process not found" 
containerID="a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 13:34:14 crc kubenswrapper[4861]: E0219 13:34:14.398868 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358 is running failed: container process not found" containerID="a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 13:34:14 crc kubenswrapper[4861]: E0219 13:34:14.398991 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2e0a3d8a266b39b1b104e8f5f77eb2f154403b01173a2de09f73d89c1d7c3a1a" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 13:34:14 crc kubenswrapper[4861]: E0219 13:34:14.399523 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358 is running failed: container process not found" containerID="a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 13:34:14 crc kubenswrapper[4861]: E0219 13:34:14.399563 4861 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-d4skq" podUID="c45bf5fa-a71c-4221-89a9-9c4965821c63" containerName="ovsdb-server" Feb 19 13:34:14 crc kubenswrapper[4861]: E0219 
13:34:14.401210 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2e0a3d8a266b39b1b104e8f5f77eb2f154403b01173a2de09f73d89c1d7c3a1a" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 13:34:14 crc kubenswrapper[4861]: E0219 13:34:14.405977 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2e0a3d8a266b39b1b104e8f5f77eb2f154403b01173a2de09f73d89c1d7c3a1a" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 13:34:14 crc kubenswrapper[4861]: E0219 13:34:14.406070 4861 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-d4skq" podUID="c45bf5fa-a71c-4221-89a9-9c4965821c63" containerName="ovs-vswitchd" Feb 19 13:34:14 crc kubenswrapper[4861]: I0219 13:34:14.432066 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/memcached-0" podUID="f079df7d-6aa6-4eab-8a9a-3b4bc329f139" containerName="memcached" probeResult="failure" output="dial tcp 10.217.0.105:11211: i/o timeout" Feb 19 13:34:16 crc kubenswrapper[4861]: I0219 13:34:16.613415 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="b117524a-eaad-4666-9e0e-bda909b2ad30" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: i/o timeout" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.397050 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358 is running failed: container process not found" containerID="a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.398923 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2e0a3d8a266b39b1b104e8f5f77eb2f154403b01173a2de09f73d89c1d7c3a1a" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.402008 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358 is running failed: container process not found" containerID="a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.402569 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358 is running failed: container process not found" containerID="a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.402654 4861 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-d4skq" 
podUID="c45bf5fa-a71c-4221-89a9-9c4965821c63" containerName="ovsdb-server" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.404033 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2e0a3d8a266b39b1b104e8f5f77eb2f154403b01173a2de09f73d89c1d7c3a1a" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.411258 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2e0a3d8a266b39b1b104e8f5f77eb2f154403b01173a2de09f73d89c1d7c3a1a" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.411500 4861 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-d4skq" podUID="c45bf5fa-a71c-4221-89a9-9c4965821c63" containerName="ovs-vswitchd" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.563817 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hhmq7"] Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.564227 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c881f3a1-3450-4ca9-8e8a-1c3d67e46770" containerName="mysql-bootstrap" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.564249 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c881f3a1-3450-4ca9-8e8a-1c3d67e46770" containerName="mysql-bootstrap" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.564266 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4307ff9-78bb-48ec-8096-6e06ff22e19b" 
containerName="barbican-worker" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.564275 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4307ff9-78bb-48ec-8096-6e06ff22e19b" containerName="barbican-worker" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.564295 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea376614-9f7c-4d27-aa0b-a0dba5c99a6a" containerName="probe" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.564308 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea376614-9f7c-4d27-aa0b-a0dba5c99a6a" containerName="probe" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.564325 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="707836a9-478e-4110-b5f5-9ee7e6b46e21" containerName="nova-api-log" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.564335 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="707836a9-478e-4110-b5f5-9ee7e6b46e21" containerName="nova-api-log" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.564351 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4307ff9-78bb-48ec-8096-6e06ff22e19b" containerName="barbican-worker-log" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.564361 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4307ff9-78bb-48ec-8096-6e06ff22e19b" containerName="barbican-worker-log" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.564377 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d002f91-22f1-4ebd-8bc9-04e81e4a00ef" containerName="barbican-keystone-listener-log" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.564387 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d002f91-22f1-4ebd-8bc9-04e81e4a00ef" containerName="barbican-keystone-listener-log" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.564400 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe64a04b-1266-4b02-88e5-191f4a974422" 
containerName="rabbitmq" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.564410 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe64a04b-1266-4b02-88e5-191f4a974422" containerName="rabbitmq" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.564446 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="382166c8-355e-407b-9721-3eee34966095" containerName="keystone-api" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.564458 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="382166c8-355e-407b-9721-3eee34966095" containerName="keystone-api" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.564468 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bce14944-29de-44e7-9ad4-bb056cc6d656" containerName="glance-httpd" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.564490 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="bce14944-29de-44e7-9ad4-bb056cc6d656" containerName="glance-httpd" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.564507 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f079df7d-6aa6-4eab-8a9a-3b4bc329f139" containerName="memcached" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.564516 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f079df7d-6aa6-4eab-8a9a-3b4bc329f139" containerName="memcached" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.564534 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe64a04b-1266-4b02-88e5-191f4a974422" containerName="setup-container" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.564543 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe64a04b-1266-4b02-88e5-191f4a974422" containerName="setup-container" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.564565 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d002f91-22f1-4ebd-8bc9-04e81e4a00ef" containerName="barbican-keystone-listener" Feb 19 
13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.564575 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d002f91-22f1-4ebd-8bc9-04e81e4a00ef" containerName="barbican-keystone-listener" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.564589 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e264a8-32df-4980-a6b8-eb1964d644b9" containerName="ceilometer-notification-agent" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.564599 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e264a8-32df-4980-a6b8-eb1964d644b9" containerName="ceilometer-notification-agent" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.564609 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07533556-6a9f-4844-be7d-f9c9cf8c53a4" containerName="nova-metadata-metadata" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.564619 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="07533556-6a9f-4844-be7d-f9c9cf8c53a4" containerName="nova-metadata-metadata" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.564635 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46d0ac5c-1d20-4b80-be1b-21ad2641b215" containerName="placement-log" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.564645 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="46d0ac5c-1d20-4b80-be1b-21ad2641b215" containerName="placement-log" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.564657 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1da21583-02a3-4a99-a05c-976f017fb31c" containerName="glance-httpd" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.564667 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1da21583-02a3-4a99-a05c-976f017fb31c" containerName="glance-httpd" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.564678 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="707836a9-478e-4110-b5f5-9ee7e6b46e21" 
containerName="nova-api-api" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.564690 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="707836a9-478e-4110-b5f5-9ee7e6b46e21" containerName="nova-api-api" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.564702 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e264a8-32df-4980-a6b8-eb1964d644b9" containerName="proxy-httpd" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.564712 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e264a8-32df-4980-a6b8-eb1964d644b9" containerName="proxy-httpd" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.564731 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92ee6ab7-feb7-4dbd-881a-b8250652aef9" containerName="openstack-network-exporter" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.564741 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="92ee6ab7-feb7-4dbd-881a-b8250652aef9" containerName="openstack-network-exporter" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.564763 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9211a2d8-8917-464d-a790-efc469302556" containerName="nova-scheduler-scheduler" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.564773 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9211a2d8-8917-464d-a790-efc469302556" containerName="nova-scheduler-scheduler" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.564792 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92ee6ab7-feb7-4dbd-881a-b8250652aef9" containerName="ovn-northd" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.564801 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="92ee6ab7-feb7-4dbd-881a-b8250652aef9" containerName="ovn-northd" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.564812 4861 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b117524a-eaad-4666-9e0e-bda909b2ad30" containerName="setup-container" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.564821 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b117524a-eaad-4666-9e0e-bda909b2ad30" containerName="setup-container" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.564834 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b863561a-440f-4e92-a8f3-4786a24d0a5f" containerName="cinder-api" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.564844 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b863561a-440f-4e92-a8f3-4786a24d0a5f" containerName="cinder-api" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.564885 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07533556-6a9f-4844-be7d-f9c9cf8c53a4" containerName="nova-metadata-log" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.564896 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="07533556-6a9f-4844-be7d-f9c9cf8c53a4" containerName="nova-metadata-log" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.564910 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bce14944-29de-44e7-9ad4-bb056cc6d656" containerName="glance-log" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.564919 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="bce14944-29de-44e7-9ad4-bb056cc6d656" containerName="glance-log" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.564932 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77e9ae58-534e-4312-8b56-9ec6708995ac" containerName="mysql-bootstrap" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.564942 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="77e9ae58-534e-4312-8b56-9ec6708995ac" containerName="mysql-bootstrap" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.564958 4861 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b863561a-440f-4e92-a8f3-4786a24d0a5f" containerName="cinder-api-log" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.564968 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b863561a-440f-4e92-a8f3-4786a24d0a5f" containerName="cinder-api-log" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.564984 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdaa2d03-6ae0-405a-af42-499d99ec711d" containerName="neutron-api" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.564994 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdaa2d03-6ae0-405a-af42-499d99ec711d" containerName="neutron-api" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.565004 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="074e719c-b46b-4f91-ae2d-e7f30368a8ae" containerName="barbican-api-log" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565013 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="074e719c-b46b-4f91-ae2d-e7f30368a8ae" containerName="barbican-api-log" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.565030 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46d0ac5c-1d20-4b80-be1b-21ad2641b215" containerName="placement-api" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565039 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="46d0ac5c-1d20-4b80-be1b-21ad2641b215" containerName="placement-api" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.565050 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e264a8-32df-4980-a6b8-eb1964d644b9" containerName="ceilometer-central-agent" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565059 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e264a8-32df-4980-a6b8-eb1964d644b9" containerName="ceilometer-central-agent" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.565076 4861 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c881f3a1-3450-4ca9-8e8a-1c3d67e46770" containerName="galera" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565085 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c881f3a1-3450-4ca9-8e8a-1c3d67e46770" containerName="galera" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.565100 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdaa2d03-6ae0-405a-af42-499d99ec711d" containerName="neutron-httpd" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565111 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdaa2d03-6ae0-405a-af42-499d99ec711d" containerName="neutron-httpd" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.565128 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e264a8-32df-4980-a6b8-eb1964d644b9" containerName="sg-core" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565137 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e264a8-32df-4980-a6b8-eb1964d644b9" containerName="sg-core" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.565154 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea376614-9f7c-4d27-aa0b-a0dba5c99a6a" containerName="cinder-scheduler" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565162 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea376614-9f7c-4d27-aa0b-a0dba5c99a6a" containerName="cinder-scheduler" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.565175 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="074e719c-b46b-4f91-ae2d-e7f30368a8ae" containerName="barbican-api" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565184 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="074e719c-b46b-4f91-ae2d-e7f30368a8ae" containerName="barbican-api" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.565200 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1da21583-02a3-4a99-a05c-976f017fb31c" 
containerName="glance-log" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565210 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1da21583-02a3-4a99-a05c-976f017fb31c" containerName="glance-log" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.565224 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77e9ae58-534e-4312-8b56-9ec6708995ac" containerName="galera" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565233 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="77e9ae58-534e-4312-8b56-9ec6708995ac" containerName="galera" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.565253 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26816cde-a8b6-41a2-ab12-46f8aeebbb0d" containerName="kube-state-metrics" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565263 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="26816cde-a8b6-41a2-ab12-46f8aeebbb0d" containerName="kube-state-metrics" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.565275 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97eefa3e-8d45-46c5-bfa6-150d0255a15b" containerName="ovn-controller" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565285 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="97eefa3e-8d45-46c5-bfa6-150d0255a15b" containerName="ovn-controller" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.565300 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c26363be-cfa7-49f5-82a2-709c67b44622" containerName="nova-cell0-conductor-conductor" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565311 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c26363be-cfa7-49f5-82a2-709c67b44622" containerName="nova-cell0-conductor-conductor" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.565324 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b117524a-eaad-4666-9e0e-bda909b2ad30" 
containerName="rabbitmq" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565334 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b117524a-eaad-4666-9e0e-bda909b2ad30" containerName="rabbitmq" Feb 19 13:34:19 crc kubenswrapper[4861]: E0219 13:34:19.565352 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245" containerName="nova-cell1-conductor-conductor" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565363 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245" containerName="nova-cell1-conductor-conductor" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565578 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="c881f3a1-3450-4ca9-8e8a-1c3d67e46770" containerName="galera" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565597 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="92ee6ab7-feb7-4dbd-881a-b8250652aef9" containerName="ovn-northd" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565616 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="707836a9-478e-4110-b5f5-9ee7e6b46e21" containerName="nova-api-log" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565632 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4307ff9-78bb-48ec-8096-6e06ff22e19b" containerName="barbican-worker" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565644 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="11e264a8-32df-4980-a6b8-eb1964d644b9" containerName="ceilometer-central-agent" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565657 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="92ee6ab7-feb7-4dbd-881a-b8250652aef9" containerName="openstack-network-exporter" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565667 4861 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bce14944-29de-44e7-9ad4-bb056cc6d656" containerName="glance-httpd" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565684 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="27cfe279-5bf2-4ea6-9fb3-cf1fcb1f8245" containerName="nova-cell1-conductor-conductor" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565697 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdaa2d03-6ae0-405a-af42-499d99ec711d" containerName="neutron-api" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565715 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="07533556-6a9f-4844-be7d-f9c9cf8c53a4" containerName="nova-metadata-log" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565727 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="bce14944-29de-44e7-9ad4-bb056cc6d656" containerName="glance-log" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565744 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="9211a2d8-8917-464d-a790-efc469302556" containerName="nova-scheduler-scheduler" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565759 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="97eefa3e-8d45-46c5-bfa6-150d0255a15b" containerName="ovn-controller" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565770 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="c26363be-cfa7-49f5-82a2-709c67b44622" containerName="nova-cell0-conductor-conductor" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565784 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="77e9ae58-534e-4312-8b56-9ec6708995ac" containerName="galera" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565795 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdaa2d03-6ae0-405a-af42-499d99ec711d" containerName="neutron-httpd" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565807 4861 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ea376614-9f7c-4d27-aa0b-a0dba5c99a6a" containerName="cinder-scheduler" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565817 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="46d0ac5c-1d20-4b80-be1b-21ad2641b215" containerName="placement-api" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565833 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="382166c8-355e-407b-9721-3eee34966095" containerName="keystone-api" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565849 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="1da21583-02a3-4a99-a05c-976f017fb31c" containerName="glance-log" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565865 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4307ff9-78bb-48ec-8096-6e06ff22e19b" containerName="barbican-worker-log" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565883 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="707836a9-478e-4110-b5f5-9ee7e6b46e21" containerName="nova-api-api" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565895 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d002f91-22f1-4ebd-8bc9-04e81e4a00ef" containerName="barbican-keystone-listener" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565910 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b863561a-440f-4e92-a8f3-4786a24d0a5f" containerName="cinder-api-log" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565923 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="07533556-6a9f-4844-be7d-f9c9cf8c53a4" containerName="nova-metadata-metadata" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565940 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe64a04b-1266-4b02-88e5-191f4a974422" containerName="rabbitmq" Feb 19 13:34:19 crc 
kubenswrapper[4861]: I0219 13:34:19.565953 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="11e264a8-32df-4980-a6b8-eb1964d644b9" containerName="sg-core" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565967 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="46d0ac5c-1d20-4b80-be1b-21ad2641b215" containerName="placement-log" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565979 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b863561a-440f-4e92-a8f3-4786a24d0a5f" containerName="cinder-api" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.565990 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f079df7d-6aa6-4eab-8a9a-3b4bc329f139" containerName="memcached" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.566004 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="1da21583-02a3-4a99-a05c-976f017fb31c" containerName="glance-httpd" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.566018 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d002f91-22f1-4ebd-8bc9-04e81e4a00ef" containerName="barbican-keystone-listener-log" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.566031 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="11e264a8-32df-4980-a6b8-eb1964d644b9" containerName="ceilometer-notification-agent" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.566050 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="26816cde-a8b6-41a2-ab12-46f8aeebbb0d" containerName="kube-state-metrics" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.566066 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="11e264a8-32df-4980-a6b8-eb1964d644b9" containerName="proxy-httpd" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.566083 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="074e719c-b46b-4f91-ae2d-e7f30368a8ae" 
containerName="barbican-api-log" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.566097 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="074e719c-b46b-4f91-ae2d-e7f30368a8ae" containerName="barbican-api" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.566110 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea376614-9f7c-4d27-aa0b-a0dba5c99a6a" containerName="probe" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.566128 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b117524a-eaad-4666-9e0e-bda909b2ad30" containerName="rabbitmq" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.567720 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hhmq7" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.594909 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hhmq7"] Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.709824 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b3f3d0d-8365-4e5f-9a7f-9c6430646297-catalog-content\") pod \"redhat-marketplace-hhmq7\" (UID: \"3b3f3d0d-8365-4e5f-9a7f-9c6430646297\") " pod="openshift-marketplace/redhat-marketplace-hhmq7" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.710104 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b3f3d0d-8365-4e5f-9a7f-9c6430646297-utilities\") pod \"redhat-marketplace-hhmq7\" (UID: \"3b3f3d0d-8365-4e5f-9a7f-9c6430646297\") " pod="openshift-marketplace/redhat-marketplace-hhmq7" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.710263 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdnnn\" 
(UniqueName: \"kubernetes.io/projected/3b3f3d0d-8365-4e5f-9a7f-9c6430646297-kube-api-access-kdnnn\") pod \"redhat-marketplace-hhmq7\" (UID: \"3b3f3d0d-8365-4e5f-9a7f-9c6430646297\") " pod="openshift-marketplace/redhat-marketplace-hhmq7" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.811318 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b3f3d0d-8365-4e5f-9a7f-9c6430646297-utilities\") pod \"redhat-marketplace-hhmq7\" (UID: \"3b3f3d0d-8365-4e5f-9a7f-9c6430646297\") " pod="openshift-marketplace/redhat-marketplace-hhmq7" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.811399 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdnnn\" (UniqueName: \"kubernetes.io/projected/3b3f3d0d-8365-4e5f-9a7f-9c6430646297-kube-api-access-kdnnn\") pod \"redhat-marketplace-hhmq7\" (UID: \"3b3f3d0d-8365-4e5f-9a7f-9c6430646297\") " pod="openshift-marketplace/redhat-marketplace-hhmq7" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.811447 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b3f3d0d-8365-4e5f-9a7f-9c6430646297-catalog-content\") pod \"redhat-marketplace-hhmq7\" (UID: \"3b3f3d0d-8365-4e5f-9a7f-9c6430646297\") " pod="openshift-marketplace/redhat-marketplace-hhmq7" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.811979 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b3f3d0d-8365-4e5f-9a7f-9c6430646297-catalog-content\") pod \"redhat-marketplace-hhmq7\" (UID: \"3b3f3d0d-8365-4e5f-9a7f-9c6430646297\") " pod="openshift-marketplace/redhat-marketplace-hhmq7" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.812187 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3b3f3d0d-8365-4e5f-9a7f-9c6430646297-utilities\") pod \"redhat-marketplace-hhmq7\" (UID: \"3b3f3d0d-8365-4e5f-9a7f-9c6430646297\") " pod="openshift-marketplace/redhat-marketplace-hhmq7" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.840084 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdnnn\" (UniqueName: \"kubernetes.io/projected/3b3f3d0d-8365-4e5f-9a7f-9c6430646297-kube-api-access-kdnnn\") pod \"redhat-marketplace-hhmq7\" (UID: \"3b3f3d0d-8365-4e5f-9a7f-9c6430646297\") " pod="openshift-marketplace/redhat-marketplace-hhmq7" Feb 19 13:34:19 crc kubenswrapper[4861]: I0219 13:34:19.897922 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hhmq7" Feb 19 13:34:20 crc kubenswrapper[4861]: I0219 13:34:20.340577 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hhmq7"] Feb 19 13:34:20 crc kubenswrapper[4861]: W0219 13:34:20.348672 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b3f3d0d_8365_4e5f_9a7f_9c6430646297.slice/crio-559d268086629857bf8a0d3f3ac1de3eb965c0689c3f251e529ba2e6bfcb2b01 WatchSource:0}: Error finding container 559d268086629857bf8a0d3f3ac1de3eb965c0689c3f251e529ba2e6bfcb2b01: Status 404 returned error can't find the container with id 559d268086629857bf8a0d3f3ac1de3eb965c0689c3f251e529ba2e6bfcb2b01 Feb 19 13:34:20 crc kubenswrapper[4861]: I0219 13:34:20.902254 4861 generic.go:334] "Generic (PLEG): container finished" podID="3b3f3d0d-8365-4e5f-9a7f-9c6430646297" containerID="8faeff85920f34a7ad70d3602d2a629cc9d487cd44b5fbee9aa4c708caff3561" exitCode=0 Feb 19 13:34:20 crc kubenswrapper[4861]: I0219 13:34:20.902644 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhmq7" 
event={"ID":"3b3f3d0d-8365-4e5f-9a7f-9c6430646297","Type":"ContainerDied","Data":"8faeff85920f34a7ad70d3602d2a629cc9d487cd44b5fbee9aa4c708caff3561"} Feb 19 13:34:20 crc kubenswrapper[4861]: I0219 13:34:20.902688 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhmq7" event={"ID":"3b3f3d0d-8365-4e5f-9a7f-9c6430646297","Type":"ContainerStarted","Data":"559d268086629857bf8a0d3f3ac1de3eb965c0689c3f251e529ba2e6bfcb2b01"} Feb 19 13:34:21 crc kubenswrapper[4861]: I0219 13:34:21.919166 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhmq7" event={"ID":"3b3f3d0d-8365-4e5f-9a7f-9c6430646297","Type":"ContainerStarted","Data":"a8eec9ec5c2c2468edb560c11a01385523beda95b9d7f7caaa90ebe76d29c889"} Feb 19 13:34:22 crc kubenswrapper[4861]: I0219 13:34:22.934310 4861 generic.go:334] "Generic (PLEG): container finished" podID="3b3f3d0d-8365-4e5f-9a7f-9c6430646297" containerID="a8eec9ec5c2c2468edb560c11a01385523beda95b9d7f7caaa90ebe76d29c889" exitCode=0 Feb 19 13:34:22 crc kubenswrapper[4861]: I0219 13:34:22.934359 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhmq7" event={"ID":"3b3f3d0d-8365-4e5f-9a7f-9c6430646297","Type":"ContainerDied","Data":"a8eec9ec5c2c2468edb560c11a01385523beda95b9d7f7caaa90ebe76d29c889"} Feb 19 13:34:23 crc kubenswrapper[4861]: I0219 13:34:23.949290 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhmq7" event={"ID":"3b3f3d0d-8365-4e5f-9a7f-9c6430646297","Type":"ContainerStarted","Data":"ca1e925b6a711f4a494a23256cc147caa00128505e8caac06bbfc13cddcdc3d5"} Feb 19 13:34:23 crc kubenswrapper[4861]: I0219 13:34:23.981578 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hhmq7" podStartSLOduration=2.5313902820000003 podStartE2EDuration="4.981544071s" podCreationTimestamp="2026-02-19 13:34:19 
+0000 UTC" firstStartedPulling="2026-02-19 13:34:20.904309401 +0000 UTC m=+1475.565412639" lastFinishedPulling="2026-02-19 13:34:23.35446315 +0000 UTC m=+1478.015566428" observedRunningTime="2026-02-19 13:34:23.975438756 +0000 UTC m=+1478.636541994" watchObservedRunningTime="2026-02-19 13:34:23.981544071 +0000 UTC m=+1478.642647339" Feb 19 13:34:24 crc kubenswrapper[4861]: E0219 13:34:24.393882 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358 is running failed: container process not found" containerID="a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 13:34:24 crc kubenswrapper[4861]: E0219 13:34:24.394945 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358 is running failed: container process not found" containerID="a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 13:34:24 crc kubenswrapper[4861]: E0219 13:34:24.395596 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358 is running failed: container process not found" containerID="a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 13:34:24 crc kubenswrapper[4861]: E0219 13:34:24.395689 4861 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-d4skq" podUID="c45bf5fa-a71c-4221-89a9-9c4965821c63" containerName="ovsdb-server" Feb 19 13:34:24 crc kubenswrapper[4861]: E0219 13:34:24.396875 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2e0a3d8a266b39b1b104e8f5f77eb2f154403b01173a2de09f73d89c1d7c3a1a" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 13:34:24 crc kubenswrapper[4861]: E0219 13:34:24.398507 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2e0a3d8a266b39b1b104e8f5f77eb2f154403b01173a2de09f73d89c1d7c3a1a" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 13:34:24 crc kubenswrapper[4861]: E0219 13:34:24.400291 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2e0a3d8a266b39b1b104e8f5f77eb2f154403b01173a2de09f73d89c1d7c3a1a" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 13:34:24 crc kubenswrapper[4861]: E0219 13:34:24.400376 4861 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-d4skq" podUID="c45bf5fa-a71c-4221-89a9-9c4965821c63" containerName="ovs-vswitchd" Feb 19 13:34:29 crc kubenswrapper[4861]: E0219 13:34:29.394338 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = 
container is not created or running: checking if PID of a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358 is running failed: container process not found" containerID="a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 13:34:29 crc kubenswrapper[4861]: E0219 13:34:29.395024 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358 is running failed: container process not found" containerID="a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 13:34:29 crc kubenswrapper[4861]: E0219 13:34:29.395506 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358 is running failed: container process not found" containerID="a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 13:34:29 crc kubenswrapper[4861]: E0219 13:34:29.395545 4861 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-d4skq" podUID="c45bf5fa-a71c-4221-89a9-9c4965821c63" containerName="ovsdb-server" Feb 19 13:34:29 crc kubenswrapper[4861]: E0219 13:34:29.395671 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="2e0a3d8a266b39b1b104e8f5f77eb2f154403b01173a2de09f73d89c1d7c3a1a" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 13:34:29 crc kubenswrapper[4861]: E0219 13:34:29.397922 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2e0a3d8a266b39b1b104e8f5f77eb2f154403b01173a2de09f73d89c1d7c3a1a" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 13:34:29 crc kubenswrapper[4861]: E0219 13:34:29.399483 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2e0a3d8a266b39b1b104e8f5f77eb2f154403b01173a2de09f73d89c1d7c3a1a" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 13:34:29 crc kubenswrapper[4861]: E0219 13:34:29.399513 4861 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-d4skq" podUID="c45bf5fa-a71c-4221-89a9-9c4965821c63" containerName="ovs-vswitchd" Feb 19 13:34:29 crc kubenswrapper[4861]: I0219 13:34:29.898438 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hhmq7" Feb 19 13:34:29 crc kubenswrapper[4861]: I0219 13:34:29.898515 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hhmq7" Feb 19 13:34:30 crc kubenswrapper[4861]: I0219 13:34:30.001560 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hhmq7" Feb 19 13:34:30 crc kubenswrapper[4861]: I0219 13:34:30.105901 4861 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hhmq7" Feb 19 13:34:30 crc kubenswrapper[4861]: I0219 13:34:30.247226 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hhmq7"] Feb 19 13:34:32 crc kubenswrapper[4861]: I0219 13:34:32.071731 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hhmq7" podUID="3b3f3d0d-8365-4e5f-9a7f-9c6430646297" containerName="registry-server" containerID="cri-o://ca1e925b6a711f4a494a23256cc147caa00128505e8caac06bbfc13cddcdc3d5" gracePeriod=2 Feb 19 13:34:32 crc kubenswrapper[4861]: I0219 13:34:32.606082 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hhmq7" Feb 19 13:34:32 crc kubenswrapper[4861]: I0219 13:34:32.729979 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdnnn\" (UniqueName: \"kubernetes.io/projected/3b3f3d0d-8365-4e5f-9a7f-9c6430646297-kube-api-access-kdnnn\") pod \"3b3f3d0d-8365-4e5f-9a7f-9c6430646297\" (UID: \"3b3f3d0d-8365-4e5f-9a7f-9c6430646297\") " Feb 19 13:34:32 crc kubenswrapper[4861]: I0219 13:34:32.730043 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b3f3d0d-8365-4e5f-9a7f-9c6430646297-catalog-content\") pod \"3b3f3d0d-8365-4e5f-9a7f-9c6430646297\" (UID: \"3b3f3d0d-8365-4e5f-9a7f-9c6430646297\") " Feb 19 13:34:32 crc kubenswrapper[4861]: I0219 13:34:32.730168 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b3f3d0d-8365-4e5f-9a7f-9c6430646297-utilities\") pod \"3b3f3d0d-8365-4e5f-9a7f-9c6430646297\" (UID: \"3b3f3d0d-8365-4e5f-9a7f-9c6430646297\") " Feb 19 13:34:32 crc kubenswrapper[4861]: I0219 13:34:32.732066 4861 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b3f3d0d-8365-4e5f-9a7f-9c6430646297-utilities" (OuterVolumeSpecName: "utilities") pod "3b3f3d0d-8365-4e5f-9a7f-9c6430646297" (UID: "3b3f3d0d-8365-4e5f-9a7f-9c6430646297"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:34:32 crc kubenswrapper[4861]: I0219 13:34:32.737023 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b3f3d0d-8365-4e5f-9a7f-9c6430646297-kube-api-access-kdnnn" (OuterVolumeSpecName: "kube-api-access-kdnnn") pod "3b3f3d0d-8365-4e5f-9a7f-9c6430646297" (UID: "3b3f3d0d-8365-4e5f-9a7f-9c6430646297"). InnerVolumeSpecName "kube-api-access-kdnnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:32 crc kubenswrapper[4861]: I0219 13:34:32.772523 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b3f3d0d-8365-4e5f-9a7f-9c6430646297-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b3f3d0d-8365-4e5f-9a7f-9c6430646297" (UID: "3b3f3d0d-8365-4e5f-9a7f-9c6430646297"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:34:32 crc kubenswrapper[4861]: I0219 13:34:32.831895 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b3f3d0d-8365-4e5f-9a7f-9c6430646297-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:32 crc kubenswrapper[4861]: I0219 13:34:32.831936 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdnnn\" (UniqueName: \"kubernetes.io/projected/3b3f3d0d-8365-4e5f-9a7f-9c6430646297-kube-api-access-kdnnn\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:32 crc kubenswrapper[4861]: I0219 13:34:32.831949 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b3f3d0d-8365-4e5f-9a7f-9c6430646297-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:33 crc kubenswrapper[4861]: I0219 13:34:33.092606 4861 generic.go:334] "Generic (PLEG): container finished" podID="3b3f3d0d-8365-4e5f-9a7f-9c6430646297" containerID="ca1e925b6a711f4a494a23256cc147caa00128505e8caac06bbfc13cddcdc3d5" exitCode=0 Feb 19 13:34:33 crc kubenswrapper[4861]: I0219 13:34:33.092694 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hhmq7" Feb 19 13:34:33 crc kubenswrapper[4861]: I0219 13:34:33.092683 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhmq7" event={"ID":"3b3f3d0d-8365-4e5f-9a7f-9c6430646297","Type":"ContainerDied","Data":"ca1e925b6a711f4a494a23256cc147caa00128505e8caac06bbfc13cddcdc3d5"} Feb 19 13:34:33 crc kubenswrapper[4861]: I0219 13:34:33.092804 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhmq7" event={"ID":"3b3f3d0d-8365-4e5f-9a7f-9c6430646297","Type":"ContainerDied","Data":"559d268086629857bf8a0d3f3ac1de3eb965c0689c3f251e529ba2e6bfcb2b01"} Feb 19 13:34:33 crc kubenswrapper[4861]: I0219 13:34:33.092851 4861 scope.go:117] "RemoveContainer" containerID="ca1e925b6a711f4a494a23256cc147caa00128505e8caac06bbfc13cddcdc3d5" Feb 19 13:34:33 crc kubenswrapper[4861]: I0219 13:34:33.141894 4861 scope.go:117] "RemoveContainer" containerID="a8eec9ec5c2c2468edb560c11a01385523beda95b9d7f7caaa90ebe76d29c889" Feb 19 13:34:33 crc kubenswrapper[4861]: I0219 13:34:33.154632 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hhmq7"] Feb 19 13:34:33 crc kubenswrapper[4861]: I0219 13:34:33.161431 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hhmq7"] Feb 19 13:34:33 crc kubenswrapper[4861]: I0219 13:34:33.183801 4861 scope.go:117] "RemoveContainer" containerID="8faeff85920f34a7ad70d3602d2a629cc9d487cd44b5fbee9aa4c708caff3561" Feb 19 13:34:33 crc kubenswrapper[4861]: I0219 13:34:33.219637 4861 scope.go:117] "RemoveContainer" containerID="ca1e925b6a711f4a494a23256cc147caa00128505e8caac06bbfc13cddcdc3d5" Feb 19 13:34:33 crc kubenswrapper[4861]: E0219 13:34:33.220451 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ca1e925b6a711f4a494a23256cc147caa00128505e8caac06bbfc13cddcdc3d5\": container with ID starting with ca1e925b6a711f4a494a23256cc147caa00128505e8caac06bbfc13cddcdc3d5 not found: ID does not exist" containerID="ca1e925b6a711f4a494a23256cc147caa00128505e8caac06bbfc13cddcdc3d5" Feb 19 13:34:33 crc kubenswrapper[4861]: I0219 13:34:33.220507 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca1e925b6a711f4a494a23256cc147caa00128505e8caac06bbfc13cddcdc3d5"} err="failed to get container status \"ca1e925b6a711f4a494a23256cc147caa00128505e8caac06bbfc13cddcdc3d5\": rpc error: code = NotFound desc = could not find container \"ca1e925b6a711f4a494a23256cc147caa00128505e8caac06bbfc13cddcdc3d5\": container with ID starting with ca1e925b6a711f4a494a23256cc147caa00128505e8caac06bbfc13cddcdc3d5 not found: ID does not exist" Feb 19 13:34:33 crc kubenswrapper[4861]: I0219 13:34:33.220543 4861 scope.go:117] "RemoveContainer" containerID="a8eec9ec5c2c2468edb560c11a01385523beda95b9d7f7caaa90ebe76d29c889" Feb 19 13:34:33 crc kubenswrapper[4861]: E0219 13:34:33.221189 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8eec9ec5c2c2468edb560c11a01385523beda95b9d7f7caaa90ebe76d29c889\": container with ID starting with a8eec9ec5c2c2468edb560c11a01385523beda95b9d7f7caaa90ebe76d29c889 not found: ID does not exist" containerID="a8eec9ec5c2c2468edb560c11a01385523beda95b9d7f7caaa90ebe76d29c889" Feb 19 13:34:33 crc kubenswrapper[4861]: I0219 13:34:33.221258 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8eec9ec5c2c2468edb560c11a01385523beda95b9d7f7caaa90ebe76d29c889"} err="failed to get container status \"a8eec9ec5c2c2468edb560c11a01385523beda95b9d7f7caaa90ebe76d29c889\": rpc error: code = NotFound desc = could not find container \"a8eec9ec5c2c2468edb560c11a01385523beda95b9d7f7caaa90ebe76d29c889\": container with ID 
starting with a8eec9ec5c2c2468edb560c11a01385523beda95b9d7f7caaa90ebe76d29c889 not found: ID does not exist" Feb 19 13:34:33 crc kubenswrapper[4861]: I0219 13:34:33.221297 4861 scope.go:117] "RemoveContainer" containerID="8faeff85920f34a7ad70d3602d2a629cc9d487cd44b5fbee9aa4c708caff3561" Feb 19 13:34:33 crc kubenswrapper[4861]: E0219 13:34:33.221827 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8faeff85920f34a7ad70d3602d2a629cc9d487cd44b5fbee9aa4c708caff3561\": container with ID starting with 8faeff85920f34a7ad70d3602d2a629cc9d487cd44b5fbee9aa4c708caff3561 not found: ID does not exist" containerID="8faeff85920f34a7ad70d3602d2a629cc9d487cd44b5fbee9aa4c708caff3561" Feb 19 13:34:33 crc kubenswrapper[4861]: I0219 13:34:33.221892 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8faeff85920f34a7ad70d3602d2a629cc9d487cd44b5fbee9aa4c708caff3561"} err="failed to get container status \"8faeff85920f34a7ad70d3602d2a629cc9d487cd44b5fbee9aa4c708caff3561\": rpc error: code = NotFound desc = could not find container \"8faeff85920f34a7ad70d3602d2a629cc9d487cd44b5fbee9aa4c708caff3561\": container with ID starting with 8faeff85920f34a7ad70d3602d2a629cc9d487cd44b5fbee9aa4c708caff3561 not found: ID does not exist" Feb 19 13:34:33 crc kubenswrapper[4861]: I0219 13:34:33.835002 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:34:33 crc kubenswrapper[4861]: I0219 13:34:33.835555 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:34:33 crc kubenswrapper[4861]: I0219 13:34:33.835702 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 13:34:33 crc kubenswrapper[4861]: I0219 13:34:33.837091 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8a10bd53a42d4b75d132d094be46db575b69579a23570e97c0f7e4e90137176e"} pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 13:34:33 crc kubenswrapper[4861]: I0219 13:34:33.837242 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" containerID="cri-o://8a10bd53a42d4b75d132d094be46db575b69579a23570e97c0f7e4e90137176e" gracePeriod=600 Feb 19 13:34:33 crc kubenswrapper[4861]: I0219 13:34:33.987940 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b3f3d0d-8365-4e5f-9a7f-9c6430646297" path="/var/lib/kubelet/pods/3b3f3d0d-8365-4e5f-9a7f-9c6430646297/volumes" Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.109414 4861 generic.go:334] "Generic (PLEG): container finished" podID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerID="0ccff9712c241358663a5e8a3f82a05a5ae9907961c0054adefa2af45c1b18a1" exitCode=137 Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.109446 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f7c9197d-43d5-4c72-a7c3-c2e435368dd2","Type":"ContainerDied","Data":"0ccff9712c241358663a5e8a3f82a05a5ae9907961c0054adefa2af45c1b18a1"} Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 
13:34:34.111855 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-d4skq_c45bf5fa-a71c-4221-89a9-9c4965821c63/ovs-vswitchd/0.log" Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.112605 4861 generic.go:334] "Generic (PLEG): container finished" podID="c45bf5fa-a71c-4221-89a9-9c4965821c63" containerID="2e0a3d8a266b39b1b104e8f5f77eb2f154403b01173a2de09f73d89c1d7c3a1a" exitCode=137 Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.112675 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-d4skq" event={"ID":"c45bf5fa-a71c-4221-89a9-9c4965821c63","Type":"ContainerDied","Data":"2e0a3d8a266b39b1b104e8f5f77eb2f154403b01173a2de09f73d89c1d7c3a1a"} Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.115031 4861 generic.go:334] "Generic (PLEG): container finished" podID="478e6971-05ac-43f2-99a2-cd93644c6227" containerID="8a10bd53a42d4b75d132d094be46db575b69579a23570e97c0f7e4e90137176e" exitCode=0 Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.115062 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerDied","Data":"8a10bd53a42d4b75d132d094be46db575b69579a23570e97c0f7e4e90137176e"} Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.115086 4861 scope.go:117] "RemoveContainer" containerID="a8231b7d6cc8b5ea6124bfdb8ee2cfd7fd221648893a967e4427c88c18dc3ef9" Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.369744 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-d4skq_c45bf5fa-a71c-4221-89a9-9c4965821c63/ovs-vswitchd/0.log" Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.371103 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-d4skq" Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.470590 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c45bf5fa-a71c-4221-89a9-9c4965821c63-var-lib\") pod \"c45bf5fa-a71c-4221-89a9-9c4965821c63\" (UID: \"c45bf5fa-a71c-4221-89a9-9c4965821c63\") " Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.470650 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c45bf5fa-a71c-4221-89a9-9c4965821c63-var-run\") pod \"c45bf5fa-a71c-4221-89a9-9c4965821c63\" (UID: \"c45bf5fa-a71c-4221-89a9-9c4965821c63\") " Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.470686 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c45bf5fa-a71c-4221-89a9-9c4965821c63-etc-ovs\") pod \"c45bf5fa-a71c-4221-89a9-9c4965821c63\" (UID: \"c45bf5fa-a71c-4221-89a9-9c4965821c63\") " Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.470718 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c45bf5fa-a71c-4221-89a9-9c4965821c63-var-log\") pod \"c45bf5fa-a71c-4221-89a9-9c4965821c63\" (UID: \"c45bf5fa-a71c-4221-89a9-9c4965821c63\") " Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.470672 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c45bf5fa-a71c-4221-89a9-9c4965821c63-var-lib" (OuterVolumeSpecName: "var-lib") pod "c45bf5fa-a71c-4221-89a9-9c4965821c63" (UID: "c45bf5fa-a71c-4221-89a9-9c4965821c63"). InnerVolumeSpecName "var-lib". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.470709 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c45bf5fa-a71c-4221-89a9-9c4965821c63-var-run" (OuterVolumeSpecName: "var-run") pod "c45bf5fa-a71c-4221-89a9-9c4965821c63" (UID: "c45bf5fa-a71c-4221-89a9-9c4965821c63"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.470733 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c45bf5fa-a71c-4221-89a9-9c4965821c63-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "c45bf5fa-a71c-4221-89a9-9c4965821c63" (UID: "c45bf5fa-a71c-4221-89a9-9c4965821c63"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.470762 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flxpf\" (UniqueName: \"kubernetes.io/projected/c45bf5fa-a71c-4221-89a9-9c4965821c63-kube-api-access-flxpf\") pod \"c45bf5fa-a71c-4221-89a9-9c4965821c63\" (UID: \"c45bf5fa-a71c-4221-89a9-9c4965821c63\") " Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.470952 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c45bf5fa-a71c-4221-89a9-9c4965821c63-scripts\") pod \"c45bf5fa-a71c-4221-89a9-9c4965821c63\" (UID: \"c45bf5fa-a71c-4221-89a9-9c4965821c63\") " Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.471557 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c45bf5fa-a71c-4221-89a9-9c4965821c63-var-log" (OuterVolumeSpecName: "var-log") pod "c45bf5fa-a71c-4221-89a9-9c4965821c63" (UID: "c45bf5fa-a71c-4221-89a9-9c4965821c63"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.471922 4861 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c45bf5fa-a71c-4221-89a9-9c4965821c63-var-lib\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.471938 4861 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c45bf5fa-a71c-4221-89a9-9c4965821c63-var-run\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.471948 4861 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c45bf5fa-a71c-4221-89a9-9c4965821c63-etc-ovs\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.471956 4861 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c45bf5fa-a71c-4221-89a9-9c4965821c63-var-log\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.472770 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c45bf5fa-a71c-4221-89a9-9c4965821c63-scripts" (OuterVolumeSpecName: "scripts") pod "c45bf5fa-a71c-4221-89a9-9c4965821c63" (UID: "c45bf5fa-a71c-4221-89a9-9c4965821c63"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.478338 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c45bf5fa-a71c-4221-89a9-9c4965821c63-kube-api-access-flxpf" (OuterVolumeSpecName: "kube-api-access-flxpf") pod "c45bf5fa-a71c-4221-89a9-9c4965821c63" (UID: "c45bf5fa-a71c-4221-89a9-9c4965821c63"). InnerVolumeSpecName "kube-api-access-flxpf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.521449 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.573254 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flxpf\" (UniqueName: \"kubernetes.io/projected/c45bf5fa-a71c-4221-89a9-9c4965821c63-kube-api-access-flxpf\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.573300 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c45bf5fa-a71c-4221-89a9-9c4965821c63-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.673809 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-combined-ca-bundle\") pod \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\" (UID: \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\") " Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.673886 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-lock\") pod \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\" (UID: \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\") " Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.673920 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2lnf\" (UniqueName: \"kubernetes.io/projected/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-kube-api-access-f2lnf\") pod \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\" (UID: \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\") " Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.674033 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-cache\") pod \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\" (UID: \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\") " Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.674086 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-etc-swift\") pod \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\" (UID: \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\") " Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.674116 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\" (UID: \"f7c9197d-43d5-4c72-a7c3-c2e435368dd2\") " Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.674954 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-cache" (OuterVolumeSpecName: "cache") pod "f7c9197d-43d5-4c72-a7c3-c2e435368dd2" (UID: "f7c9197d-43d5-4c72-a7c3-c2e435368dd2"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.675132 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-lock" (OuterVolumeSpecName: "lock") pod "f7c9197d-43d5-4c72-a7c3-c2e435368dd2" (UID: "f7c9197d-43d5-4c72-a7c3-c2e435368dd2"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.679309 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "f7c9197d-43d5-4c72-a7c3-c2e435368dd2" (UID: "f7c9197d-43d5-4c72-a7c3-c2e435368dd2"). 
InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.679784 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "swift") pod "f7c9197d-43d5-4c72-a7c3-c2e435368dd2" (UID: "f7c9197d-43d5-4c72-a7c3-c2e435368dd2"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.683166 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-kube-api-access-f2lnf" (OuterVolumeSpecName: "kube-api-access-f2lnf") pod "f7c9197d-43d5-4c72-a7c3-c2e435368dd2" (UID: "f7c9197d-43d5-4c72-a7c3-c2e435368dd2"). InnerVolumeSpecName "kube-api-access-f2lnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.776020 4861 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-cache\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.776072 4861 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.776134 4861 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.776153 4861 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-lock\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:34 crc 
kubenswrapper[4861]: I0219 13:34:34.776172 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2lnf\" (UniqueName: \"kubernetes.io/projected/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-kube-api-access-f2lnf\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.803237 4861 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.878451 4861 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:34 crc kubenswrapper[4861]: I0219 13:34:34.989928 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7c9197d-43d5-4c72-a7c3-c2e435368dd2" (UID: "f7c9197d-43d5-4c72-a7c3-c2e435368dd2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:34:35 crc kubenswrapper[4861]: I0219 13:34:35.083036 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7c9197d-43d5-4c72-a7c3-c2e435368dd2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:35 crc kubenswrapper[4861]: I0219 13:34:35.131167 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-d4skq_c45bf5fa-a71c-4221-89a9-9c4965821c63/ovs-vswitchd/0.log" Feb 19 13:34:35 crc kubenswrapper[4861]: I0219 13:34:35.132413 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-d4skq" event={"ID":"c45bf5fa-a71c-4221-89a9-9c4965821c63","Type":"ContainerDied","Data":"9c13e4e820ac547e4e25564c211260e299ba982a1a0f89ed70226daf4a58ecdc"} Feb 19 13:34:35 crc kubenswrapper[4861]: I0219 13:34:35.132507 4861 scope.go:117] "RemoveContainer" containerID="2e0a3d8a266b39b1b104e8f5f77eb2f154403b01173a2de09f73d89c1d7c3a1a" Feb 19 13:34:35 crc kubenswrapper[4861]: I0219 13:34:35.132780 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-d4skq" Feb 19 13:34:35 crc kubenswrapper[4861]: I0219 13:34:35.140033 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"d1328e3bb9a7119a5d11f8614f4dbd03d8c4f1ab29a669ab71e4d9be92fd0213"} Feb 19 13:34:35 crc kubenswrapper[4861]: I0219 13:34:35.149731 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f7c9197d-43d5-4c72-a7c3-c2e435368dd2","Type":"ContainerDied","Data":"7c52d99b6dd419fa019b2df2b1af0034144f70d00885bb24457b4eb0e9a8ca04"} Feb 19 13:34:35 crc kubenswrapper[4861]: I0219 13:34:35.149877 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 19 13:34:35 crc kubenswrapper[4861]: I0219 13:34:35.166392 4861 scope.go:117] "RemoveContainer" containerID="a7ccbfc0111fbdb3d2fe739494d86edbd04ad66c571fd16c655c39c1165de358" Feb 19 13:34:35 crc kubenswrapper[4861]: I0219 13:34:35.211165 4861 scope.go:117] "RemoveContainer" containerID="0eefac32fae3e0435064673967ea6026f7e8fe88c872f6e36851a8fcecdf988a" Feb 19 13:34:35 crc kubenswrapper[4861]: I0219 13:34:35.215369 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-d4skq"] Feb 19 13:34:35 crc kubenswrapper[4861]: I0219 13:34:35.226923 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-d4skq"] Feb 19 13:34:35 crc kubenswrapper[4861]: I0219 13:34:35.235082 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Feb 19 13:34:35 crc kubenswrapper[4861]: I0219 13:34:35.238453 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Feb 19 13:34:35 crc kubenswrapper[4861]: I0219 13:34:35.246978 4861 scope.go:117] "RemoveContainer" containerID="0ccff9712c241358663a5e8a3f82a05a5ae9907961c0054adefa2af45c1b18a1" Feb 19 13:34:35 crc kubenswrapper[4861]: I0219 13:34:35.268518 4861 scope.go:117] "RemoveContainer" containerID="e8e2594239d50333d43b08ef764dd15a54e631448517f1f5b7fde345bc50b2f2" Feb 19 13:34:35 crc kubenswrapper[4861]: I0219 13:34:35.285024 4861 scope.go:117] "RemoveContainer" containerID="ef03fa4021e6c6150fbd214140c01f05e06bcc41b0e5602e90af2b70524e58cb" Feb 19 13:34:35 crc kubenswrapper[4861]: I0219 13:34:35.302774 4861 scope.go:117] "RemoveContainer" containerID="6c5b796933349019a3e6caaca60e24876d6caee6a8db308216a144cc2b4550b5" Feb 19 13:34:35 crc kubenswrapper[4861]: I0219 13:34:35.364577 4861 scope.go:117] "RemoveContainer" containerID="da5f6285aca4973ac5f8147c034649ed6d304113cb58cb64d2cca749a0aa466b" Feb 19 13:34:35 crc kubenswrapper[4861]: I0219 13:34:35.393826 4861 
scope.go:117] "RemoveContainer" containerID="bb2e6ac221defcb0c3b930773236fdbf8bc57fd77635bc49c98f98a881dddf14" Feb 19 13:34:35 crc kubenswrapper[4861]: I0219 13:34:35.421550 4861 scope.go:117] "RemoveContainer" containerID="5695ae030dccbdeb118599a52129cc9b7894cfbff564817156a7fdbf305aa0f0" Feb 19 13:34:35 crc kubenswrapper[4861]: I0219 13:34:35.446350 4861 scope.go:117] "RemoveContainer" containerID="471372c27c39aaaf08c9ce1b7cd61b51c8ece5bab05fb8d039a85d6af20abc96" Feb 19 13:34:35 crc kubenswrapper[4861]: I0219 13:34:35.474782 4861 scope.go:117] "RemoveContainer" containerID="b01b12c10d17a6ede4967ef04b12864fea0898dee251a363634450764aacdd72" Feb 19 13:34:35 crc kubenswrapper[4861]: I0219 13:34:35.505873 4861 scope.go:117] "RemoveContainer" containerID="f9f19bdef3fa838ce4b4f8189100aaf397a995f17cf865eefb4002eeec03180e" Feb 19 13:34:35 crc kubenswrapper[4861]: I0219 13:34:35.530155 4861 scope.go:117] "RemoveContainer" containerID="65e1cc2bfc85c23034b910f4d14189e38743412528a40dc9a26a2ca6c1041afb" Feb 19 13:34:35 crc kubenswrapper[4861]: I0219 13:34:35.548284 4861 scope.go:117] "RemoveContainer" containerID="70048cfeceaa1bd3b11260d15e755776cfbd5fd6d7ef0d9d90e3d8c6f4261932" Feb 19 13:34:35 crc kubenswrapper[4861]: I0219 13:34:35.574665 4861 scope.go:117] "RemoveContainer" containerID="39d17545ee3cebeec079609c07c099110ffc65d4ddb3e462d92e98a8b967e616" Feb 19 13:34:35 crc kubenswrapper[4861]: I0219 13:34:35.612385 4861 scope.go:117] "RemoveContainer" containerID="74434874608f684b61cedbd97fbdb2a90894f4bed2db7e66ba332e8b322c05b2" Feb 19 13:34:35 crc kubenswrapper[4861]: I0219 13:34:35.639451 4861 scope.go:117] "RemoveContainer" containerID="2923944d19b38687e829c0ad91d45d3fa58c574c1be4562c0a63aa47c65877b8" Feb 19 13:34:35 crc kubenswrapper[4861]: I0219 13:34:35.990636 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c45bf5fa-a71c-4221-89a9-9c4965821c63" path="/var/lib/kubelet/pods/c45bf5fa-a71c-4221-89a9-9c4965821c63/volumes" Feb 19 13:34:35 crc 
kubenswrapper[4861]: I0219 13:34:35.992102 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" path="/var/lib/kubelet/pods/f7c9197d-43d5-4c72-a7c3-c2e435368dd2/volumes" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.643884 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hvfbl"] Feb 19 13:34:37 crc kubenswrapper[4861]: E0219 13:34:37.644758 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="swift-recon-cron" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.644782 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="swift-recon-cron" Feb 19 13:34:37 crc kubenswrapper[4861]: E0219 13:34:37.644805 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c45bf5fa-a71c-4221-89a9-9c4965821c63" containerName="ovsdb-server" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.644816 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c45bf5fa-a71c-4221-89a9-9c4965821c63" containerName="ovsdb-server" Feb 19 13:34:37 crc kubenswrapper[4861]: E0219 13:34:37.644840 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b3f3d0d-8365-4e5f-9a7f-9c6430646297" containerName="extract-content" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.644851 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b3f3d0d-8365-4e5f-9a7f-9c6430646297" containerName="extract-content" Feb 19 13:34:37 crc kubenswrapper[4861]: E0219 13:34:37.644863 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="container-updater" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.644872 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="container-updater" Feb 19 
13:34:37 crc kubenswrapper[4861]: E0219 13:34:37.644884 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="account-server" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.644893 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="account-server" Feb 19 13:34:37 crc kubenswrapper[4861]: E0219 13:34:37.644917 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="object-server" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.644926 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="object-server" Feb 19 13:34:37 crc kubenswrapper[4861]: E0219 13:34:37.644939 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b3f3d0d-8365-4e5f-9a7f-9c6430646297" containerName="extract-utilities" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.644948 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b3f3d0d-8365-4e5f-9a7f-9c6430646297" containerName="extract-utilities" Feb 19 13:34:37 crc kubenswrapper[4861]: E0219 13:34:37.644960 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="rsync" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.644969 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="rsync" Feb 19 13:34:37 crc kubenswrapper[4861]: E0219 13:34:37.644987 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b3f3d0d-8365-4e5f-9a7f-9c6430646297" containerName="registry-server" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.644996 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b3f3d0d-8365-4e5f-9a7f-9c6430646297" containerName="registry-server" Feb 19 13:34:37 crc kubenswrapper[4861]: 
E0219 13:34:37.645007 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="container-replicator" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.645017 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="container-replicator" Feb 19 13:34:37 crc kubenswrapper[4861]: E0219 13:34:37.645034 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="object-replicator" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.645043 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="object-replicator" Feb 19 13:34:37 crc kubenswrapper[4861]: E0219 13:34:37.645059 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c45bf5fa-a71c-4221-89a9-9c4965821c63" containerName="ovs-vswitchd" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.645068 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c45bf5fa-a71c-4221-89a9-9c4965821c63" containerName="ovs-vswitchd" Feb 19 13:34:37 crc kubenswrapper[4861]: E0219 13:34:37.645081 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c45bf5fa-a71c-4221-89a9-9c4965821c63" containerName="ovsdb-server-init" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.645092 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c45bf5fa-a71c-4221-89a9-9c4965821c63" containerName="ovsdb-server-init" Feb 19 13:34:37 crc kubenswrapper[4861]: E0219 13:34:37.645110 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="container-server" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.645119 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="container-server" Feb 19 13:34:37 crc kubenswrapper[4861]: 
E0219 13:34:37.645138 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="account-auditor" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.645147 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="account-auditor" Feb 19 13:34:37 crc kubenswrapper[4861]: E0219 13:34:37.645165 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="account-replicator" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.645174 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="account-replicator" Feb 19 13:34:37 crc kubenswrapper[4861]: E0219 13:34:37.645188 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="account-reaper" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.645196 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="account-reaper" Feb 19 13:34:37 crc kubenswrapper[4861]: E0219 13:34:37.645218 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="container-auditor" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.645228 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="container-auditor" Feb 19 13:34:37 crc kubenswrapper[4861]: E0219 13:34:37.645240 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="object-auditor" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.645249 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="object-auditor" Feb 19 13:34:37 crc kubenswrapper[4861]: E0219 
13:34:37.645265 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="object-updater" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.645277 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="object-updater" Feb 19 13:34:37 crc kubenswrapper[4861]: E0219 13:34:37.645290 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="object-expirer" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.645299 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="object-expirer" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.645518 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="object-replicator" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.645535 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="object-auditor" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.645562 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="c45bf5fa-a71c-4221-89a9-9c4965821c63" containerName="ovs-vswitchd" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.645579 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="account-server" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.645589 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="c45bf5fa-a71c-4221-89a9-9c4965821c63" containerName="ovsdb-server" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.645604 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="swift-recon-cron" Feb 19 13:34:37 crc kubenswrapper[4861]: 
I0219 13:34:37.645620 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b3f3d0d-8365-4e5f-9a7f-9c6430646297" containerName="registry-server" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.645635 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="object-server" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.645647 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="container-replicator" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.645662 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="rsync" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.645682 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="account-replicator" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.645695 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="object-updater" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.645708 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="account-reaper" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.645724 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="container-auditor" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.645741 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="container-server" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.645757 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="container-updater" Feb 19 
13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.645773 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="account-auditor" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.645787 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c9197d-43d5-4c72-a7c3-c2e435368dd2" containerName="object-expirer" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.647618 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hvfbl" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.656655 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hvfbl"] Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.741777 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29752677-687c-4937-b098-65c2f5a16dff-catalog-content\") pod \"community-operators-hvfbl\" (UID: \"29752677-687c-4937-b098-65c2f5a16dff\") " pod="openshift-marketplace/community-operators-hvfbl" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.741858 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29752677-687c-4937-b098-65c2f5a16dff-utilities\") pod \"community-operators-hvfbl\" (UID: \"29752677-687c-4937-b098-65c2f5a16dff\") " pod="openshift-marketplace/community-operators-hvfbl" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.742093 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlrvm\" (UniqueName: \"kubernetes.io/projected/29752677-687c-4937-b098-65c2f5a16dff-kube-api-access-jlrvm\") pod \"community-operators-hvfbl\" (UID: \"29752677-687c-4937-b098-65c2f5a16dff\") " 
pod="openshift-marketplace/community-operators-hvfbl" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.843730 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29752677-687c-4937-b098-65c2f5a16dff-catalog-content\") pod \"community-operators-hvfbl\" (UID: \"29752677-687c-4937-b098-65c2f5a16dff\") " pod="openshift-marketplace/community-operators-hvfbl" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.843828 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29752677-687c-4937-b098-65c2f5a16dff-utilities\") pod \"community-operators-hvfbl\" (UID: \"29752677-687c-4937-b098-65c2f5a16dff\") " pod="openshift-marketplace/community-operators-hvfbl" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.843917 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlrvm\" (UniqueName: \"kubernetes.io/projected/29752677-687c-4937-b098-65c2f5a16dff-kube-api-access-jlrvm\") pod \"community-operators-hvfbl\" (UID: \"29752677-687c-4937-b098-65c2f5a16dff\") " pod="openshift-marketplace/community-operators-hvfbl" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.844275 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29752677-687c-4937-b098-65c2f5a16dff-catalog-content\") pod \"community-operators-hvfbl\" (UID: \"29752677-687c-4937-b098-65c2f5a16dff\") " pod="openshift-marketplace/community-operators-hvfbl" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.844569 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29752677-687c-4937-b098-65c2f5a16dff-utilities\") pod \"community-operators-hvfbl\" (UID: \"29752677-687c-4937-b098-65c2f5a16dff\") " 
pod="openshift-marketplace/community-operators-hvfbl" Feb 19 13:34:37 crc kubenswrapper[4861]: I0219 13:34:37.862163 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlrvm\" (UniqueName: \"kubernetes.io/projected/29752677-687c-4937-b098-65c2f5a16dff-kube-api-access-jlrvm\") pod \"community-operators-hvfbl\" (UID: \"29752677-687c-4937-b098-65c2f5a16dff\") " pod="openshift-marketplace/community-operators-hvfbl" Feb 19 13:34:38 crc kubenswrapper[4861]: I0219 13:34:38.028560 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hvfbl" Feb 19 13:34:38 crc kubenswrapper[4861]: I0219 13:34:38.510061 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hvfbl"] Feb 19 13:34:39 crc kubenswrapper[4861]: I0219 13:34:39.216564 4861 generic.go:334] "Generic (PLEG): container finished" podID="29752677-687c-4937-b098-65c2f5a16dff" containerID="86a9042d362246f5c2e5233984925b1fb3f06c6a0f3a67fe24d51a78afcd09e8" exitCode=0 Feb 19 13:34:39 crc kubenswrapper[4861]: I0219 13:34:39.216652 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvfbl" event={"ID":"29752677-687c-4937-b098-65c2f5a16dff","Type":"ContainerDied","Data":"86a9042d362246f5c2e5233984925b1fb3f06c6a0f3a67fe24d51a78afcd09e8"} Feb 19 13:34:39 crc kubenswrapper[4861]: I0219 13:34:39.216726 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvfbl" event={"ID":"29752677-687c-4937-b098-65c2f5a16dff","Type":"ContainerStarted","Data":"53e7d8bf077622a6a16cbc64e22240e500f89ce17b02f35624c1f19d9bc64517"} Feb 19 13:34:40 crc kubenswrapper[4861]: I0219 13:34:40.228921 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvfbl" 
event={"ID":"29752677-687c-4937-b098-65c2f5a16dff","Type":"ContainerStarted","Data":"576b64a85f26181696e435bb9c767c9ed2adcf06777a4c1cca0b919cb4fe6088"} Feb 19 13:34:41 crc kubenswrapper[4861]: I0219 13:34:41.244191 4861 generic.go:334] "Generic (PLEG): container finished" podID="29752677-687c-4937-b098-65c2f5a16dff" containerID="576b64a85f26181696e435bb9c767c9ed2adcf06777a4c1cca0b919cb4fe6088" exitCode=0 Feb 19 13:34:41 crc kubenswrapper[4861]: I0219 13:34:41.244251 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvfbl" event={"ID":"29752677-687c-4937-b098-65c2f5a16dff","Type":"ContainerDied","Data":"576b64a85f26181696e435bb9c767c9ed2adcf06777a4c1cca0b919cb4fe6088"} Feb 19 13:34:42 crc kubenswrapper[4861]: I0219 13:34:42.260350 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvfbl" event={"ID":"29752677-687c-4937-b098-65c2f5a16dff","Type":"ContainerStarted","Data":"90cec1c39fdc016fd62bc6d7dab0763532194ca74537cad469ad69808921351e"} Feb 19 13:34:42 crc kubenswrapper[4861]: I0219 13:34:42.293605 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hvfbl" podStartSLOduration=2.888755734 podStartE2EDuration="5.293570679s" podCreationTimestamp="2026-02-19 13:34:37 +0000 UTC" firstStartedPulling="2026-02-19 13:34:39.219491835 +0000 UTC m=+1493.880595093" lastFinishedPulling="2026-02-19 13:34:41.62430681 +0000 UTC m=+1496.285410038" observedRunningTime="2026-02-19 13:34:42.291640988 +0000 UTC m=+1496.952744246" watchObservedRunningTime="2026-02-19 13:34:42.293570679 +0000 UTC m=+1496.954673937" Feb 19 13:34:48 crc kubenswrapper[4861]: I0219 13:34:48.029245 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hvfbl" Feb 19 13:34:48 crc kubenswrapper[4861]: I0219 13:34:48.030729 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-hvfbl" Feb 19 13:34:48 crc kubenswrapper[4861]: I0219 13:34:48.120066 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hvfbl" Feb 19 13:34:48 crc kubenswrapper[4861]: I0219 13:34:48.396739 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hvfbl" Feb 19 13:34:48 crc kubenswrapper[4861]: I0219 13:34:48.454746 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hvfbl"] Feb 19 13:34:50 crc kubenswrapper[4861]: I0219 13:34:50.367821 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hvfbl" podUID="29752677-687c-4937-b098-65c2f5a16dff" containerName="registry-server" containerID="cri-o://90cec1c39fdc016fd62bc6d7dab0763532194ca74537cad469ad69808921351e" gracePeriod=2 Feb 19 13:34:50 crc kubenswrapper[4861]: I0219 13:34:50.890388 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hvfbl" Feb 19 13:34:51 crc kubenswrapper[4861]: I0219 13:34:51.076980 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlrvm\" (UniqueName: \"kubernetes.io/projected/29752677-687c-4937-b098-65c2f5a16dff-kube-api-access-jlrvm\") pod \"29752677-687c-4937-b098-65c2f5a16dff\" (UID: \"29752677-687c-4937-b098-65c2f5a16dff\") " Feb 19 13:34:51 crc kubenswrapper[4861]: I0219 13:34:51.077041 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29752677-687c-4937-b098-65c2f5a16dff-catalog-content\") pod \"29752677-687c-4937-b098-65c2f5a16dff\" (UID: \"29752677-687c-4937-b098-65c2f5a16dff\") " Feb 19 13:34:51 crc kubenswrapper[4861]: I0219 13:34:51.077076 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29752677-687c-4937-b098-65c2f5a16dff-utilities\") pod \"29752677-687c-4937-b098-65c2f5a16dff\" (UID: \"29752677-687c-4937-b098-65c2f5a16dff\") " Feb 19 13:34:51 crc kubenswrapper[4861]: I0219 13:34:51.078179 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29752677-687c-4937-b098-65c2f5a16dff-utilities" (OuterVolumeSpecName: "utilities") pod "29752677-687c-4937-b098-65c2f5a16dff" (UID: "29752677-687c-4937-b098-65c2f5a16dff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:34:51 crc kubenswrapper[4861]: I0219 13:34:51.086671 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29752677-687c-4937-b098-65c2f5a16dff-kube-api-access-jlrvm" (OuterVolumeSpecName: "kube-api-access-jlrvm") pod "29752677-687c-4937-b098-65c2f5a16dff" (UID: "29752677-687c-4937-b098-65c2f5a16dff"). InnerVolumeSpecName "kube-api-access-jlrvm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:34:51 crc kubenswrapper[4861]: I0219 13:34:51.138705 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29752677-687c-4937-b098-65c2f5a16dff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29752677-687c-4937-b098-65c2f5a16dff" (UID: "29752677-687c-4937-b098-65c2f5a16dff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:34:51 crc kubenswrapper[4861]: I0219 13:34:51.179077 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlrvm\" (UniqueName: \"kubernetes.io/projected/29752677-687c-4937-b098-65c2f5a16dff-kube-api-access-jlrvm\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:51 crc kubenswrapper[4861]: I0219 13:34:51.179139 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29752677-687c-4937-b098-65c2f5a16dff-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:51 crc kubenswrapper[4861]: I0219 13:34:51.179158 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29752677-687c-4937-b098-65c2f5a16dff-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:34:51 crc kubenswrapper[4861]: I0219 13:34:51.379073 4861 generic.go:334] "Generic (PLEG): container finished" podID="29752677-687c-4937-b098-65c2f5a16dff" containerID="90cec1c39fdc016fd62bc6d7dab0763532194ca74537cad469ad69808921351e" exitCode=0 Feb 19 13:34:51 crc kubenswrapper[4861]: I0219 13:34:51.379121 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvfbl" event={"ID":"29752677-687c-4937-b098-65c2f5a16dff","Type":"ContainerDied","Data":"90cec1c39fdc016fd62bc6d7dab0763532194ca74537cad469ad69808921351e"} Feb 19 13:34:51 crc kubenswrapper[4861]: I0219 13:34:51.379152 4861 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-hvfbl" event={"ID":"29752677-687c-4937-b098-65c2f5a16dff","Type":"ContainerDied","Data":"53e7d8bf077622a6a16cbc64e22240e500f89ce17b02f35624c1f19d9bc64517"} Feb 19 13:34:51 crc kubenswrapper[4861]: I0219 13:34:51.379172 4861 scope.go:117] "RemoveContainer" containerID="90cec1c39fdc016fd62bc6d7dab0763532194ca74537cad469ad69808921351e" Feb 19 13:34:51 crc kubenswrapper[4861]: I0219 13:34:51.379165 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hvfbl" Feb 19 13:34:51 crc kubenswrapper[4861]: I0219 13:34:51.418714 4861 scope.go:117] "RemoveContainer" containerID="576b64a85f26181696e435bb9c767c9ed2adcf06777a4c1cca0b919cb4fe6088" Feb 19 13:34:51 crc kubenswrapper[4861]: I0219 13:34:51.433333 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hvfbl"] Feb 19 13:34:51 crc kubenswrapper[4861]: I0219 13:34:51.446922 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hvfbl"] Feb 19 13:34:51 crc kubenswrapper[4861]: I0219 13:34:51.473908 4861 scope.go:117] "RemoveContainer" containerID="86a9042d362246f5c2e5233984925b1fb3f06c6a0f3a67fe24d51a78afcd09e8" Feb 19 13:34:51 crc kubenswrapper[4861]: I0219 13:34:51.503961 4861 scope.go:117] "RemoveContainer" containerID="90cec1c39fdc016fd62bc6d7dab0763532194ca74537cad469ad69808921351e" Feb 19 13:34:51 crc kubenswrapper[4861]: E0219 13:34:51.504873 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90cec1c39fdc016fd62bc6d7dab0763532194ca74537cad469ad69808921351e\": container with ID starting with 90cec1c39fdc016fd62bc6d7dab0763532194ca74537cad469ad69808921351e not found: ID does not exist" containerID="90cec1c39fdc016fd62bc6d7dab0763532194ca74537cad469ad69808921351e" Feb 19 13:34:51 crc kubenswrapper[4861]: I0219 
13:34:51.505025 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90cec1c39fdc016fd62bc6d7dab0763532194ca74537cad469ad69808921351e"} err="failed to get container status \"90cec1c39fdc016fd62bc6d7dab0763532194ca74537cad469ad69808921351e\": rpc error: code = NotFound desc = could not find container \"90cec1c39fdc016fd62bc6d7dab0763532194ca74537cad469ad69808921351e\": container with ID starting with 90cec1c39fdc016fd62bc6d7dab0763532194ca74537cad469ad69808921351e not found: ID does not exist" Feb 19 13:34:51 crc kubenswrapper[4861]: I0219 13:34:51.505097 4861 scope.go:117] "RemoveContainer" containerID="576b64a85f26181696e435bb9c767c9ed2adcf06777a4c1cca0b919cb4fe6088" Feb 19 13:34:51 crc kubenswrapper[4861]: E0219 13:34:51.506113 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"576b64a85f26181696e435bb9c767c9ed2adcf06777a4c1cca0b919cb4fe6088\": container with ID starting with 576b64a85f26181696e435bb9c767c9ed2adcf06777a4c1cca0b919cb4fe6088 not found: ID does not exist" containerID="576b64a85f26181696e435bb9c767c9ed2adcf06777a4c1cca0b919cb4fe6088" Feb 19 13:34:51 crc kubenswrapper[4861]: I0219 13:34:51.506218 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"576b64a85f26181696e435bb9c767c9ed2adcf06777a4c1cca0b919cb4fe6088"} err="failed to get container status \"576b64a85f26181696e435bb9c767c9ed2adcf06777a4c1cca0b919cb4fe6088\": rpc error: code = NotFound desc = could not find container \"576b64a85f26181696e435bb9c767c9ed2adcf06777a4c1cca0b919cb4fe6088\": container with ID starting with 576b64a85f26181696e435bb9c767c9ed2adcf06777a4c1cca0b919cb4fe6088 not found: ID does not exist" Feb 19 13:34:51 crc kubenswrapper[4861]: I0219 13:34:51.506298 4861 scope.go:117] "RemoveContainer" containerID="86a9042d362246f5c2e5233984925b1fb3f06c6a0f3a67fe24d51a78afcd09e8" Feb 19 13:34:51 crc 
kubenswrapper[4861]: E0219 13:34:51.507567 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86a9042d362246f5c2e5233984925b1fb3f06c6a0f3a67fe24d51a78afcd09e8\": container with ID starting with 86a9042d362246f5c2e5233984925b1fb3f06c6a0f3a67fe24d51a78afcd09e8 not found: ID does not exist" containerID="86a9042d362246f5c2e5233984925b1fb3f06c6a0f3a67fe24d51a78afcd09e8" Feb 19 13:34:51 crc kubenswrapper[4861]: I0219 13:34:51.507659 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86a9042d362246f5c2e5233984925b1fb3f06c6a0f3a67fe24d51a78afcd09e8"} err="failed to get container status \"86a9042d362246f5c2e5233984925b1fb3f06c6a0f3a67fe24d51a78afcd09e8\": rpc error: code = NotFound desc = could not find container \"86a9042d362246f5c2e5233984925b1fb3f06c6a0f3a67fe24d51a78afcd09e8\": container with ID starting with 86a9042d362246f5c2e5233984925b1fb3f06c6a0f3a67fe24d51a78afcd09e8 not found: ID does not exist" Feb 19 13:34:51 crc kubenswrapper[4861]: I0219 13:34:51.993857 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29752677-687c-4937-b098-65c2f5a16dff" path="/var/lib/kubelet/pods/29752677-687c-4937-b098-65c2f5a16dff/volumes" Feb 19 13:35:56 crc kubenswrapper[4861]: I0219 13:35:56.755047 4861 scope.go:117] "RemoveContainer" containerID="fe0ee9914a42fe0554e85923d4a0281bffd6e0ce646068958a11a4c9324dbe38" Feb 19 13:35:56 crc kubenswrapper[4861]: I0219 13:35:56.792164 4861 scope.go:117] "RemoveContainer" containerID="ad8511fcc645d0be3764a7766fb81bdc9a4303a984191f45091f54cbd03e02b4" Feb 19 13:35:56 crc kubenswrapper[4861]: I0219 13:35:56.813985 4861 scope.go:117] "RemoveContainer" containerID="3235b8e7854789d5e2993065dba25d19975febab696929922f11635bc158c664" Feb 19 13:35:56 crc kubenswrapper[4861]: I0219 13:35:56.844059 4861 scope.go:117] "RemoveContainer" 
containerID="0285cd049de2db17e1e6886951ab9077d608a79a2d4f313cbf4d2ecae7759cb4" Feb 19 13:35:56 crc kubenswrapper[4861]: I0219 13:35:56.887725 4861 scope.go:117] "RemoveContainer" containerID="d5cb0990a30fcd764ec9c5d555309b9cd846903ae77c9a686d62371226c13062" Feb 19 13:35:56 crc kubenswrapper[4861]: I0219 13:35:56.905371 4861 scope.go:117] "RemoveContainer" containerID="5f4ea76f6f5a358df34a0fc0c42414fb4768cb3d35df370d01f4bb513bb89a5d" Feb 19 13:35:56 crc kubenswrapper[4861]: I0219 13:35:56.933330 4861 scope.go:117] "RemoveContainer" containerID="93d6530faa073831764d3b6a6e2012de0e7a58e9406a03d16933dc2ce80273b1" Feb 19 13:35:56 crc kubenswrapper[4861]: I0219 13:35:56.956179 4861 scope.go:117] "RemoveContainer" containerID="e16fc260e093b17e520243b93e5fe14feac73536fef57e9588a174ac261332d7" Feb 19 13:35:56 crc kubenswrapper[4861]: I0219 13:35:56.985505 4861 scope.go:117] "RemoveContainer" containerID="830e4d061e872db1e926fe1d01b80647ede540da98625dd3926876c3c4352b59" Feb 19 13:35:57 crc kubenswrapper[4861]: I0219 13:35:57.017779 4861 scope.go:117] "RemoveContainer" containerID="790dd63b441340ac26f9e12250b75100a8f825b05c2e22a06e124c3660ffb802" Feb 19 13:35:57 crc kubenswrapper[4861]: I0219 13:35:57.041276 4861 scope.go:117] "RemoveContainer" containerID="4ec70b45048d50715b61c4fb177ea92e9d73fbffe8837d09a60dcadb648edd85" Feb 19 13:35:57 crc kubenswrapper[4861]: I0219 13:35:57.066736 4861 scope.go:117] "RemoveContainer" containerID="98364a3619cc3e7bfa6596a941af0e9dc1f03e998f4534eef9b37681f5bf9324" Feb 19 13:35:57 crc kubenswrapper[4861]: I0219 13:35:57.103777 4861 scope.go:117] "RemoveContainer" containerID="ccf27b55dfd42fdbc55ee84ec26baa141b951ad0cc757ea09a8845bbb6d89325" Feb 19 13:35:57 crc kubenswrapper[4861]: I0219 13:35:57.138044 4861 scope.go:117] "RemoveContainer" containerID="88d7111fe21c3687b4fb01aa926a610afbd0c54ea7680dec3a92d01c614abed4" Feb 19 13:35:57 crc kubenswrapper[4861]: I0219 13:35:57.159984 4861 scope.go:117] "RemoveContainer" 
containerID="9b2efdec34339881eb22ca3a3a2436a224dbef6c449beeb02a6654e5cd184326" Feb 19 13:35:57 crc kubenswrapper[4861]: I0219 13:35:57.184127 4861 scope.go:117] "RemoveContainer" containerID="295a17d0f366d47e60a15aeae0d2bc62d87b6fb0521772cb5ddf7947d3727a19" Feb 19 13:35:57 crc kubenswrapper[4861]: I0219 13:35:57.204399 4861 scope.go:117] "RemoveContainer" containerID="a4018bf193243f2b017e4cbd10a8187b15394d7ed7645d11eae387d69ed13523" Feb 19 13:35:57 crc kubenswrapper[4861]: I0219 13:35:57.233733 4861 scope.go:117] "RemoveContainer" containerID="cb68315335170486bebf59381c3d2b0420807df6d883092496cf318b7f08bed0" Feb 19 13:35:57 crc kubenswrapper[4861]: I0219 13:35:57.259941 4861 scope.go:117] "RemoveContainer" containerID="23cda0cad8af5ef4ce132ba46d2f07a162307eafe5c9cf3205b337ec990907ac" Feb 19 13:35:57 crc kubenswrapper[4861]: I0219 13:35:57.287243 4861 scope.go:117] "RemoveContainer" containerID="24ee240c37ffbbe11eb45f46ef225bda8ab9c0ff100538b0496518fae46d4be9" Feb 19 13:35:57 crc kubenswrapper[4861]: I0219 13:35:57.315597 4861 scope.go:117] "RemoveContainer" containerID="d9af03464962f01cce9fb93fd5fe4eb9ac5de0912a4678ebcade5e9fee209e26" Feb 19 13:35:57 crc kubenswrapper[4861]: I0219 13:35:57.344772 4861 scope.go:117] "RemoveContainer" containerID="83eeb62461cea6c158459185de1625c29a6f4905f906e0bfe7e66334571391f7" Feb 19 13:35:57 crc kubenswrapper[4861]: I0219 13:35:57.374089 4861 scope.go:117] "RemoveContainer" containerID="03c7958adbcf935e7fed370774cad0fe23d4461b437a22f750d78b8a60d51ddc" Feb 19 13:35:57 crc kubenswrapper[4861]: I0219 13:35:57.424450 4861 scope.go:117] "RemoveContainer" containerID="15062df0f332fc34ec11ca8c581d2e2d781dd074f3e93bfb91759c5f2c17cbb8" Feb 19 13:36:57 crc kubenswrapper[4861]: I0219 13:36:57.901839 4861 scope.go:117] "RemoveContainer" containerID="ab5edbd988a9350d5da5aa253135c1b177e87f0dfa94d8d0fc58f14a00fba7e0" Feb 19 13:36:57 crc kubenswrapper[4861]: I0219 13:36:57.964978 4861 scope.go:117] "RemoveContainer" 
containerID="17fcee271c2a499b801142f1f8bd906a26d2c54f3ca073b5f9002a5871100c7a" Feb 19 13:36:58 crc kubenswrapper[4861]: I0219 13:36:58.002556 4861 scope.go:117] "RemoveContainer" containerID="46e6d5cc6944a8d2d07aeb6ab0574a804c9017f54cd371274876c7a783498beb" Feb 19 13:36:58 crc kubenswrapper[4861]: I0219 13:36:58.032162 4861 scope.go:117] "RemoveContainer" containerID="a6147d4413d0fab04021a29f6c8ca99f658d6f9b5f9f258fb48c889b282281d7" Feb 19 13:36:58 crc kubenswrapper[4861]: I0219 13:36:58.057316 4861 scope.go:117] "RemoveContainer" containerID="70f31a21639362c5c9feca912543de51968f17736468113ba6cce7a5e966e4ae" Feb 19 13:36:58 crc kubenswrapper[4861]: I0219 13:36:58.116185 4861 scope.go:117] "RemoveContainer" containerID="544ce3ea2021ed17c32df42f201533600ccba00f725b626c9de59972338ccb90" Feb 19 13:36:58 crc kubenswrapper[4861]: I0219 13:36:58.160376 4861 scope.go:117] "RemoveContainer" containerID="776ff5a17e90ebc21d49721478bc41e7146bdd38de85dd86200078fc345273f3" Feb 19 13:36:58 crc kubenswrapper[4861]: I0219 13:36:58.184370 4861 scope.go:117] "RemoveContainer" containerID="2ebd568b5fee8d92b4bf41bd5662b65019c7fa35b6090be14d419e69dd312ad2" Feb 19 13:36:58 crc kubenswrapper[4861]: I0219 13:36:58.232647 4861 scope.go:117] "RemoveContainer" containerID="851c26a783d4f2fb239877063c2d4732d081998faf87a9a0897c6af79d389cda" Feb 19 13:36:58 crc kubenswrapper[4861]: I0219 13:36:58.258961 4861 scope.go:117] "RemoveContainer" containerID="8b91b15fff839ba5df5868715813fffaafecb640a519d3a5cb4a17ec2708a678" Feb 19 13:37:03 crc kubenswrapper[4861]: I0219 13:37:03.834814 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:37:03 crc kubenswrapper[4861]: I0219 13:37:03.835286 4861 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:37:33 crc kubenswrapper[4861]: I0219 13:37:33.834800 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:37:33 crc kubenswrapper[4861]: I0219 13:37:33.835867 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:37:58 crc kubenswrapper[4861]: I0219 13:37:58.486410 4861 scope.go:117] "RemoveContainer" containerID="a508d171f352d010ce973346dc81c47126d3ac2c349161f151df000ba6cd1e90" Feb 19 13:37:58 crc kubenswrapper[4861]: I0219 13:37:58.516508 4861 scope.go:117] "RemoveContainer" containerID="78fdee9081121e0ab0ba03328fa869438307e3d0a9114e86bc894de3b9889286" Feb 19 13:37:58 crc kubenswrapper[4861]: I0219 13:37:58.956522 4861 scope.go:117] "RemoveContainer" containerID="d730aafec31ebf1d1d4d0bbbdd71e711bc2fd55423001647b8861204d4936465" Feb 19 13:37:58 crc kubenswrapper[4861]: I0219 13:37:58.978615 4861 scope.go:117] "RemoveContainer" containerID="0c061181cb17f27f6ef8b1302359d02e2616c5bc5016fb409d477a8a73115613" Feb 19 13:37:59 crc kubenswrapper[4861]: I0219 13:37:59.004181 4861 scope.go:117] "RemoveContainer" containerID="82ceb2b562584835e768713baafa8271c5b94b45905d32c6aecbc0eec99445bc" Feb 19 13:37:59 crc kubenswrapper[4861]: I0219 13:37:59.111387 4861 scope.go:117] 
"RemoveContainer" containerID="8354b2098a238ae5e4107f91ec8969097ba6d956d629a96860766ef3f619f4ab" Feb 19 13:37:59 crc kubenswrapper[4861]: I0219 13:37:59.139277 4861 scope.go:117] "RemoveContainer" containerID="e7d391aacd8499dedae619c2280ad3ec2ac0938d27356e8042553e2777b7c08a" Feb 19 13:37:59 crc kubenswrapper[4861]: I0219 13:37:59.162282 4861 scope.go:117] "RemoveContainer" containerID="6379d90fb7d2e327bbfc0bbe51c70ca3c8fdf3884260c546325c64753da97180" Feb 19 13:37:59 crc kubenswrapper[4861]: I0219 13:37:59.185237 4861 scope.go:117] "RemoveContainer" containerID="cf188110f03d910f2a512942393ddfa01853575e4682c8b6c95037df3b2b616f" Feb 19 13:37:59 crc kubenswrapper[4861]: I0219 13:37:59.206822 4861 scope.go:117] "RemoveContainer" containerID="08f5ede146101abfdbe72fa01b651ee0b64dd6fc80f2a9cb3fa76ff9918744f3" Feb 19 13:37:59 crc kubenswrapper[4861]: I0219 13:37:59.227157 4861 scope.go:117] "RemoveContainer" containerID="1a80821a8e4670f6f32d88965fc76093208185ba4852a863d5ea299f7223e873" Feb 19 13:38:03 crc kubenswrapper[4861]: I0219 13:38:03.835084 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:38:03 crc kubenswrapper[4861]: I0219 13:38:03.835683 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:38:03 crc kubenswrapper[4861]: I0219 13:38:03.835751 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 13:38:03 crc kubenswrapper[4861]: I0219 
13:38:03.837229 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d1328e3bb9a7119a5d11f8614f4dbd03d8c4f1ab29a669ab71e4d9be92fd0213"} pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 13:38:03 crc kubenswrapper[4861]: I0219 13:38:03.837319 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" containerID="cri-o://d1328e3bb9a7119a5d11f8614f4dbd03d8c4f1ab29a669ab71e4d9be92fd0213" gracePeriod=600 Feb 19 13:38:03 crc kubenswrapper[4861]: E0219 13:38:03.991187 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:38:04 crc kubenswrapper[4861]: I0219 13:38:04.430451 4861 generic.go:334] "Generic (PLEG): container finished" podID="478e6971-05ac-43f2-99a2-cd93644c6227" containerID="d1328e3bb9a7119a5d11f8614f4dbd03d8c4f1ab29a669ab71e4d9be92fd0213" exitCode=0 Feb 19 13:38:04 crc kubenswrapper[4861]: I0219 13:38:04.430503 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerDied","Data":"d1328e3bb9a7119a5d11f8614f4dbd03d8c4f1ab29a669ab71e4d9be92fd0213"} Feb 19 13:38:04 crc kubenswrapper[4861]: I0219 13:38:04.430563 4861 scope.go:117] "RemoveContainer" 
containerID="8a10bd53a42d4b75d132d094be46db575b69579a23570e97c0f7e4e90137176e" Feb 19 13:38:04 crc kubenswrapper[4861]: I0219 13:38:04.431154 4861 scope.go:117] "RemoveContainer" containerID="d1328e3bb9a7119a5d11f8614f4dbd03d8c4f1ab29a669ab71e4d9be92fd0213" Feb 19 13:38:04 crc kubenswrapper[4861]: E0219 13:38:04.431456 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:38:15 crc kubenswrapper[4861]: I0219 13:38:15.983925 4861 scope.go:117] "RemoveContainer" containerID="d1328e3bb9a7119a5d11f8614f4dbd03d8c4f1ab29a669ab71e4d9be92fd0213" Feb 19 13:38:15 crc kubenswrapper[4861]: E0219 13:38:15.985071 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:38:30 crc kubenswrapper[4861]: I0219 13:38:30.978542 4861 scope.go:117] "RemoveContainer" containerID="d1328e3bb9a7119a5d11f8614f4dbd03d8c4f1ab29a669ab71e4d9be92fd0213" Feb 19 13:38:30 crc kubenswrapper[4861]: E0219 13:38:30.979355 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:38:42 crc kubenswrapper[4861]: I0219 13:38:42.977612 4861 scope.go:117] "RemoveContainer" containerID="d1328e3bb9a7119a5d11f8614f4dbd03d8c4f1ab29a669ab71e4d9be92fd0213" Feb 19 13:38:42 crc kubenswrapper[4861]: E0219 13:38:42.979943 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:38:54 crc kubenswrapper[4861]: I0219 13:38:54.977178 4861 scope.go:117] "RemoveContainer" containerID="d1328e3bb9a7119a5d11f8614f4dbd03d8c4f1ab29a669ab71e4d9be92fd0213" Feb 19 13:38:54 crc kubenswrapper[4861]: E0219 13:38:54.978256 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:38:59 crc kubenswrapper[4861]: I0219 13:38:59.426194 4861 scope.go:117] "RemoveContainer" containerID="51e69d4688e12a168b13767859d4e6da331e3a50228ed704c968b8fd8f39c623" Feb 19 13:38:59 crc kubenswrapper[4861]: I0219 13:38:59.678390 4861 scope.go:117] "RemoveContainer" containerID="bed581941d63c6a624659be5dcbcd47fff7f7421027cf26d8792299321714174" Feb 19 13:38:59 crc kubenswrapper[4861]: I0219 13:38:59.705263 4861 scope.go:117] "RemoveContainer" containerID="4dc3b378113db0a85067b6aacfc5cca8ae80446fc65f308cc2cd9c792fc8ef5d" Feb 
19 13:38:59 crc kubenswrapper[4861]: I0219 13:38:59.753061 4861 scope.go:117] "RemoveContainer" containerID="571d42d778e6270495bc22f93e5351b8b9d48a25f59d4657b6f6c001aa52541f" Feb 19 13:38:59 crc kubenswrapper[4861]: I0219 13:38:59.777103 4861 scope.go:117] "RemoveContainer" containerID="1962e3d41ea810aad3ab4a831253aac921b27407d79269bdb28287cd3a3df113" Feb 19 13:39:08 crc kubenswrapper[4861]: I0219 13:39:08.977037 4861 scope.go:117] "RemoveContainer" containerID="d1328e3bb9a7119a5d11f8614f4dbd03d8c4f1ab29a669ab71e4d9be92fd0213" Feb 19 13:39:08 crc kubenswrapper[4861]: E0219 13:39:08.978935 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:39:23 crc kubenswrapper[4861]: I0219 13:39:23.977491 4861 scope.go:117] "RemoveContainer" containerID="d1328e3bb9a7119a5d11f8614f4dbd03d8c4f1ab29a669ab71e4d9be92fd0213" Feb 19 13:39:23 crc kubenswrapper[4861]: E0219 13:39:23.979537 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:39:34 crc kubenswrapper[4861]: I0219 13:39:34.978518 4861 scope.go:117] "RemoveContainer" containerID="d1328e3bb9a7119a5d11f8614f4dbd03d8c4f1ab29a669ab71e4d9be92fd0213" Feb 19 13:39:34 crc kubenswrapper[4861]: E0219 13:39:34.979579 4861 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:39:46 crc kubenswrapper[4861]: I0219 13:39:46.976391 4861 scope.go:117] "RemoveContainer" containerID="d1328e3bb9a7119a5d11f8614f4dbd03d8c4f1ab29a669ab71e4d9be92fd0213" Feb 19 13:39:46 crc kubenswrapper[4861]: E0219 13:39:46.977169 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:39:59 crc kubenswrapper[4861]: I0219 13:39:59.866560 4861 scope.go:117] "RemoveContainer" containerID="54e7903255472247bd9622ba8ff4c16f8318f6b3c691e3d683ae6aab3638869a" Feb 19 13:39:59 crc kubenswrapper[4861]: I0219 13:39:59.921916 4861 scope.go:117] "RemoveContainer" containerID="fc2859ece05740938f3f6ba76783e5c0e1cd3f33027c89196eab75df118d742b" Feb 19 13:39:59 crc kubenswrapper[4861]: I0219 13:39:59.940945 4861 scope.go:117] "RemoveContainer" containerID="d751c5da783be93739c9cde1c6a879f363a6be171c0437955267e7fbe56355ab" Feb 19 13:39:59 crc kubenswrapper[4861]: I0219 13:39:59.978143 4861 scope.go:117] "RemoveContainer" containerID="d1328e3bb9a7119a5d11f8614f4dbd03d8c4f1ab29a669ab71e4d9be92fd0213" Feb 19 13:39:59 crc kubenswrapper[4861]: E0219 13:39:59.978535 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:40:13 crc kubenswrapper[4861]: I0219 13:40:13.977150 4861 scope.go:117] "RemoveContainer" containerID="d1328e3bb9a7119a5d11f8614f4dbd03d8c4f1ab29a669ab71e4d9be92fd0213" Feb 19 13:40:13 crc kubenswrapper[4861]: E0219 13:40:13.977877 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:40:25 crc kubenswrapper[4861]: I0219 13:40:25.985928 4861 scope.go:117] "RemoveContainer" containerID="d1328e3bb9a7119a5d11f8614f4dbd03d8c4f1ab29a669ab71e4d9be92fd0213" Feb 19 13:40:25 crc kubenswrapper[4861]: E0219 13:40:25.986856 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:40:39 crc kubenswrapper[4861]: I0219 13:40:39.978923 4861 scope.go:117] "RemoveContainer" containerID="d1328e3bb9a7119a5d11f8614f4dbd03d8c4f1ab29a669ab71e4d9be92fd0213" Feb 19 13:40:39 crc kubenswrapper[4861]: E0219 13:40:39.979880 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:40:54 crc kubenswrapper[4861]: I0219 13:40:54.976594 4861 scope.go:117] "RemoveContainer" containerID="d1328e3bb9a7119a5d11f8614f4dbd03d8c4f1ab29a669ab71e4d9be92fd0213" Feb 19 13:40:54 crc kubenswrapper[4861]: E0219 13:40:54.977242 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:41:06 crc kubenswrapper[4861]: I0219 13:41:06.977534 4861 scope.go:117] "RemoveContainer" containerID="d1328e3bb9a7119a5d11f8614f4dbd03d8c4f1ab29a669ab71e4d9be92fd0213" Feb 19 13:41:06 crc kubenswrapper[4861]: E0219 13:41:06.978347 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:41:17 crc kubenswrapper[4861]: I0219 13:41:17.977642 4861 scope.go:117] "RemoveContainer" containerID="d1328e3bb9a7119a5d11f8614f4dbd03d8c4f1ab29a669ab71e4d9be92fd0213" Feb 19 13:41:17 crc kubenswrapper[4861]: E0219 13:41:17.978640 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:41:31 crc kubenswrapper[4861]: I0219 13:41:31.976708 4861 scope.go:117] "RemoveContainer" containerID="d1328e3bb9a7119a5d11f8614f4dbd03d8c4f1ab29a669ab71e4d9be92fd0213" Feb 19 13:41:31 crc kubenswrapper[4861]: E0219 13:41:31.977414 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:41:43 crc kubenswrapper[4861]: I0219 13:41:43.977310 4861 scope.go:117] "RemoveContainer" containerID="d1328e3bb9a7119a5d11f8614f4dbd03d8c4f1ab29a669ab71e4d9be92fd0213" Feb 19 13:41:43 crc kubenswrapper[4861]: E0219 13:41:43.978510 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:41:56 crc kubenswrapper[4861]: I0219 13:41:56.977506 4861 scope.go:117] "RemoveContainer" containerID="d1328e3bb9a7119a5d11f8614f4dbd03d8c4f1ab29a669ab71e4d9be92fd0213" Feb 19 13:41:56 crc kubenswrapper[4861]: E0219 13:41:56.978233 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:42:01 crc kubenswrapper[4861]: I0219 13:42:01.758066 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sbf4c"] Feb 19 13:42:01 crc kubenswrapper[4861]: E0219 13:42:01.758623 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29752677-687c-4937-b098-65c2f5a16dff" containerName="extract-utilities" Feb 19 13:42:01 crc kubenswrapper[4861]: I0219 13:42:01.758637 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="29752677-687c-4937-b098-65c2f5a16dff" containerName="extract-utilities" Feb 19 13:42:01 crc kubenswrapper[4861]: E0219 13:42:01.758660 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29752677-687c-4937-b098-65c2f5a16dff" containerName="registry-server" Feb 19 13:42:01 crc kubenswrapper[4861]: I0219 13:42:01.758667 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="29752677-687c-4937-b098-65c2f5a16dff" containerName="registry-server" Feb 19 13:42:01 crc kubenswrapper[4861]: E0219 13:42:01.758676 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29752677-687c-4937-b098-65c2f5a16dff" containerName="extract-content" Feb 19 13:42:01 crc kubenswrapper[4861]: I0219 13:42:01.758682 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="29752677-687c-4937-b098-65c2f5a16dff" containerName="extract-content" Feb 19 13:42:01 crc kubenswrapper[4861]: I0219 13:42:01.758809 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="29752677-687c-4937-b098-65c2f5a16dff" containerName="registry-server" Feb 19 13:42:01 crc kubenswrapper[4861]: I0219 13:42:01.759826 4861 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sbf4c" Feb 19 13:42:01 crc kubenswrapper[4861]: I0219 13:42:01.791669 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sbf4c"] Feb 19 13:42:01 crc kubenswrapper[4861]: I0219 13:42:01.817173 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef8f699f-eca4-4057-b3fe-2831a6e46d38-utilities\") pod \"certified-operators-sbf4c\" (UID: \"ef8f699f-eca4-4057-b3fe-2831a6e46d38\") " pod="openshift-marketplace/certified-operators-sbf4c" Feb 19 13:42:01 crc kubenswrapper[4861]: I0219 13:42:01.817294 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef8f699f-eca4-4057-b3fe-2831a6e46d38-catalog-content\") pod \"certified-operators-sbf4c\" (UID: \"ef8f699f-eca4-4057-b3fe-2831a6e46d38\") " pod="openshift-marketplace/certified-operators-sbf4c" Feb 19 13:42:01 crc kubenswrapper[4861]: I0219 13:42:01.817540 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w89nr\" (UniqueName: \"kubernetes.io/projected/ef8f699f-eca4-4057-b3fe-2831a6e46d38-kube-api-access-w89nr\") pod \"certified-operators-sbf4c\" (UID: \"ef8f699f-eca4-4057-b3fe-2831a6e46d38\") " pod="openshift-marketplace/certified-operators-sbf4c" Feb 19 13:42:01 crc kubenswrapper[4861]: I0219 13:42:01.918842 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w89nr\" (UniqueName: \"kubernetes.io/projected/ef8f699f-eca4-4057-b3fe-2831a6e46d38-kube-api-access-w89nr\") pod \"certified-operators-sbf4c\" (UID: \"ef8f699f-eca4-4057-b3fe-2831a6e46d38\") " pod="openshift-marketplace/certified-operators-sbf4c" Feb 19 13:42:01 crc kubenswrapper[4861]: I0219 13:42:01.918954 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef8f699f-eca4-4057-b3fe-2831a6e46d38-utilities\") pod \"certified-operators-sbf4c\" (UID: \"ef8f699f-eca4-4057-b3fe-2831a6e46d38\") " pod="openshift-marketplace/certified-operators-sbf4c" Feb 19 13:42:01 crc kubenswrapper[4861]: I0219 13:42:01.919005 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef8f699f-eca4-4057-b3fe-2831a6e46d38-catalog-content\") pod \"certified-operators-sbf4c\" (UID: \"ef8f699f-eca4-4057-b3fe-2831a6e46d38\") " pod="openshift-marketplace/certified-operators-sbf4c" Feb 19 13:42:01 crc kubenswrapper[4861]: I0219 13:42:01.919669 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef8f699f-eca4-4057-b3fe-2831a6e46d38-utilities\") pod \"certified-operators-sbf4c\" (UID: \"ef8f699f-eca4-4057-b3fe-2831a6e46d38\") " pod="openshift-marketplace/certified-operators-sbf4c" Feb 19 13:42:01 crc kubenswrapper[4861]: I0219 13:42:01.919757 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef8f699f-eca4-4057-b3fe-2831a6e46d38-catalog-content\") pod \"certified-operators-sbf4c\" (UID: \"ef8f699f-eca4-4057-b3fe-2831a6e46d38\") " pod="openshift-marketplace/certified-operators-sbf4c" Feb 19 13:42:01 crc kubenswrapper[4861]: I0219 13:42:01.946704 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w89nr\" (UniqueName: \"kubernetes.io/projected/ef8f699f-eca4-4057-b3fe-2831a6e46d38-kube-api-access-w89nr\") pod \"certified-operators-sbf4c\" (UID: \"ef8f699f-eca4-4057-b3fe-2831a6e46d38\") " pod="openshift-marketplace/certified-operators-sbf4c" Feb 19 13:42:02 crc kubenswrapper[4861]: I0219 13:42:02.096022 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sbf4c" Feb 19 13:42:02 crc kubenswrapper[4861]: I0219 13:42:02.615746 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sbf4c"] Feb 19 13:42:03 crc kubenswrapper[4861]: I0219 13:42:03.172862 4861 generic.go:334] "Generic (PLEG): container finished" podID="ef8f699f-eca4-4057-b3fe-2831a6e46d38" containerID="b9cea52cd46c9b7051332571e0a2df0d5b61d9318a774aa4cbe2b4c514c8ad52" exitCode=0 Feb 19 13:42:03 crc kubenswrapper[4861]: I0219 13:42:03.172930 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbf4c" event={"ID":"ef8f699f-eca4-4057-b3fe-2831a6e46d38","Type":"ContainerDied","Data":"b9cea52cd46c9b7051332571e0a2df0d5b61d9318a774aa4cbe2b4c514c8ad52"} Feb 19 13:42:03 crc kubenswrapper[4861]: I0219 13:42:03.172968 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbf4c" event={"ID":"ef8f699f-eca4-4057-b3fe-2831a6e46d38","Type":"ContainerStarted","Data":"96085b447f47a5951cb2f28a776fb043fd86f8a08c332b47c8fa864dfff86ba2"} Feb 19 13:42:03 crc kubenswrapper[4861]: I0219 13:42:03.175250 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 13:42:05 crc kubenswrapper[4861]: I0219 13:42:05.190145 4861 generic.go:334] "Generic (PLEG): container finished" podID="ef8f699f-eca4-4057-b3fe-2831a6e46d38" containerID="b01068cc4352cd549481c2517050acb4634e4ed9568cfa4d054977dd2434bdf5" exitCode=0 Feb 19 13:42:05 crc kubenswrapper[4861]: I0219 13:42:05.190224 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbf4c" event={"ID":"ef8f699f-eca4-4057-b3fe-2831a6e46d38","Type":"ContainerDied","Data":"b01068cc4352cd549481c2517050acb4634e4ed9568cfa4d054977dd2434bdf5"} Feb 19 13:42:06 crc kubenswrapper[4861]: I0219 13:42:06.202365 4861 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-sbf4c" event={"ID":"ef8f699f-eca4-4057-b3fe-2831a6e46d38","Type":"ContainerStarted","Data":"fe4858bc15e940482126f90d3bf7f936a7cb8b3a5871f5a7e84b449df52c69c0"} Feb 19 13:42:06 crc kubenswrapper[4861]: I0219 13:42:06.224383 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sbf4c" podStartSLOduration=2.781283987 podStartE2EDuration="5.224361103s" podCreationTimestamp="2026-02-19 13:42:01 +0000 UTC" firstStartedPulling="2026-02-19 13:42:03.174998031 +0000 UTC m=+1937.836101269" lastFinishedPulling="2026-02-19 13:42:05.618075157 +0000 UTC m=+1940.279178385" observedRunningTime="2026-02-19 13:42:06.222168293 +0000 UTC m=+1940.883271521" watchObservedRunningTime="2026-02-19 13:42:06.224361103 +0000 UTC m=+1940.885464341" Feb 19 13:42:09 crc kubenswrapper[4861]: I0219 13:42:09.977462 4861 scope.go:117] "RemoveContainer" containerID="d1328e3bb9a7119a5d11f8614f4dbd03d8c4f1ab29a669ab71e4d9be92fd0213" Feb 19 13:42:09 crc kubenswrapper[4861]: E0219 13:42:09.978734 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:42:12 crc kubenswrapper[4861]: I0219 13:42:12.097266 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sbf4c" Feb 19 13:42:12 crc kubenswrapper[4861]: I0219 13:42:12.100087 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sbf4c" Feb 19 13:42:12 crc kubenswrapper[4861]: I0219 13:42:12.169338 4861 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sbf4c" Feb 19 13:42:12 crc kubenswrapper[4861]: I0219 13:42:12.336016 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sbf4c" Feb 19 13:42:12 crc kubenswrapper[4861]: I0219 13:42:12.440656 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sbf4c"] Feb 19 13:42:14 crc kubenswrapper[4861]: I0219 13:42:14.286527 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sbf4c" podUID="ef8f699f-eca4-4057-b3fe-2831a6e46d38" containerName="registry-server" containerID="cri-o://fe4858bc15e940482126f90d3bf7f936a7cb8b3a5871f5a7e84b449df52c69c0" gracePeriod=2 Feb 19 13:42:14 crc kubenswrapper[4861]: I0219 13:42:14.731339 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sbf4c" Feb 19 13:42:14 crc kubenswrapper[4861]: I0219 13:42:14.819228 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef8f699f-eca4-4057-b3fe-2831a6e46d38-catalog-content\") pod \"ef8f699f-eca4-4057-b3fe-2831a6e46d38\" (UID: \"ef8f699f-eca4-4057-b3fe-2831a6e46d38\") " Feb 19 13:42:14 crc kubenswrapper[4861]: I0219 13:42:14.819320 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w89nr\" (UniqueName: \"kubernetes.io/projected/ef8f699f-eca4-4057-b3fe-2831a6e46d38-kube-api-access-w89nr\") pod \"ef8f699f-eca4-4057-b3fe-2831a6e46d38\" (UID: \"ef8f699f-eca4-4057-b3fe-2831a6e46d38\") " Feb 19 13:42:14 crc kubenswrapper[4861]: I0219 13:42:14.819368 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ef8f699f-eca4-4057-b3fe-2831a6e46d38-utilities\") pod \"ef8f699f-eca4-4057-b3fe-2831a6e46d38\" (UID: \"ef8f699f-eca4-4057-b3fe-2831a6e46d38\") " Feb 19 13:42:14 crc kubenswrapper[4861]: I0219 13:42:14.821300 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef8f699f-eca4-4057-b3fe-2831a6e46d38-utilities" (OuterVolumeSpecName: "utilities") pod "ef8f699f-eca4-4057-b3fe-2831a6e46d38" (UID: "ef8f699f-eca4-4057-b3fe-2831a6e46d38"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:42:14 crc kubenswrapper[4861]: I0219 13:42:14.826114 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef8f699f-eca4-4057-b3fe-2831a6e46d38-kube-api-access-w89nr" (OuterVolumeSpecName: "kube-api-access-w89nr") pod "ef8f699f-eca4-4057-b3fe-2831a6e46d38" (UID: "ef8f699f-eca4-4057-b3fe-2831a6e46d38"). InnerVolumeSpecName "kube-api-access-w89nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:42:14 crc kubenswrapper[4861]: I0219 13:42:14.891347 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef8f699f-eca4-4057-b3fe-2831a6e46d38-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef8f699f-eca4-4057-b3fe-2831a6e46d38" (UID: "ef8f699f-eca4-4057-b3fe-2831a6e46d38"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:42:14 crc kubenswrapper[4861]: I0219 13:42:14.921224 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef8f699f-eca4-4057-b3fe-2831a6e46d38-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:42:14 crc kubenswrapper[4861]: I0219 13:42:14.921270 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w89nr\" (UniqueName: \"kubernetes.io/projected/ef8f699f-eca4-4057-b3fe-2831a6e46d38-kube-api-access-w89nr\") on node \"crc\" DevicePath \"\"" Feb 19 13:42:14 crc kubenswrapper[4861]: I0219 13:42:14.921285 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef8f699f-eca4-4057-b3fe-2831a6e46d38-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:42:15 crc kubenswrapper[4861]: I0219 13:42:15.300526 4861 generic.go:334] "Generic (PLEG): container finished" podID="ef8f699f-eca4-4057-b3fe-2831a6e46d38" containerID="fe4858bc15e940482126f90d3bf7f936a7cb8b3a5871f5a7e84b449df52c69c0" exitCode=0 Feb 19 13:42:15 crc kubenswrapper[4861]: I0219 13:42:15.300641 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sbf4c" Feb 19 13:42:15 crc kubenswrapper[4861]: I0219 13:42:15.300626 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbf4c" event={"ID":"ef8f699f-eca4-4057-b3fe-2831a6e46d38","Type":"ContainerDied","Data":"fe4858bc15e940482126f90d3bf7f936a7cb8b3a5871f5a7e84b449df52c69c0"} Feb 19 13:42:15 crc kubenswrapper[4861]: I0219 13:42:15.300970 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbf4c" event={"ID":"ef8f699f-eca4-4057-b3fe-2831a6e46d38","Type":"ContainerDied","Data":"96085b447f47a5951cb2f28a776fb043fd86f8a08c332b47c8fa864dfff86ba2"} Feb 19 13:42:15 crc kubenswrapper[4861]: I0219 13:42:15.301001 4861 scope.go:117] "RemoveContainer" containerID="fe4858bc15e940482126f90d3bf7f936a7cb8b3a5871f5a7e84b449df52c69c0" Feb 19 13:42:15 crc kubenswrapper[4861]: I0219 13:42:15.332185 4861 scope.go:117] "RemoveContainer" containerID="b01068cc4352cd549481c2517050acb4634e4ed9568cfa4d054977dd2434bdf5" Feb 19 13:42:15 crc kubenswrapper[4861]: I0219 13:42:15.370886 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sbf4c"] Feb 19 13:42:15 crc kubenswrapper[4861]: I0219 13:42:15.389629 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sbf4c"] Feb 19 13:42:15 crc kubenswrapper[4861]: I0219 13:42:15.393900 4861 scope.go:117] "RemoveContainer" containerID="b9cea52cd46c9b7051332571e0a2df0d5b61d9318a774aa4cbe2b4c514c8ad52" Feb 19 13:42:15 crc kubenswrapper[4861]: I0219 13:42:15.422961 4861 scope.go:117] "RemoveContainer" containerID="fe4858bc15e940482126f90d3bf7f936a7cb8b3a5871f5a7e84b449df52c69c0" Feb 19 13:42:15 crc kubenswrapper[4861]: E0219 13:42:15.423652 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"fe4858bc15e940482126f90d3bf7f936a7cb8b3a5871f5a7e84b449df52c69c0\": container with ID starting with fe4858bc15e940482126f90d3bf7f936a7cb8b3a5871f5a7e84b449df52c69c0 not found: ID does not exist" containerID="fe4858bc15e940482126f90d3bf7f936a7cb8b3a5871f5a7e84b449df52c69c0" Feb 19 13:42:15 crc kubenswrapper[4861]: I0219 13:42:15.423692 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe4858bc15e940482126f90d3bf7f936a7cb8b3a5871f5a7e84b449df52c69c0"} err="failed to get container status \"fe4858bc15e940482126f90d3bf7f936a7cb8b3a5871f5a7e84b449df52c69c0\": rpc error: code = NotFound desc = could not find container \"fe4858bc15e940482126f90d3bf7f936a7cb8b3a5871f5a7e84b449df52c69c0\": container with ID starting with fe4858bc15e940482126f90d3bf7f936a7cb8b3a5871f5a7e84b449df52c69c0 not found: ID does not exist" Feb 19 13:42:15 crc kubenswrapper[4861]: I0219 13:42:15.423713 4861 scope.go:117] "RemoveContainer" containerID="b01068cc4352cd549481c2517050acb4634e4ed9568cfa4d054977dd2434bdf5" Feb 19 13:42:15 crc kubenswrapper[4861]: E0219 13:42:15.424108 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b01068cc4352cd549481c2517050acb4634e4ed9568cfa4d054977dd2434bdf5\": container with ID starting with b01068cc4352cd549481c2517050acb4634e4ed9568cfa4d054977dd2434bdf5 not found: ID does not exist" containerID="b01068cc4352cd549481c2517050acb4634e4ed9568cfa4d054977dd2434bdf5" Feb 19 13:42:15 crc kubenswrapper[4861]: I0219 13:42:15.424131 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b01068cc4352cd549481c2517050acb4634e4ed9568cfa4d054977dd2434bdf5"} err="failed to get container status \"b01068cc4352cd549481c2517050acb4634e4ed9568cfa4d054977dd2434bdf5\": rpc error: code = NotFound desc = could not find container \"b01068cc4352cd549481c2517050acb4634e4ed9568cfa4d054977dd2434bdf5\": container with ID 
starting with b01068cc4352cd549481c2517050acb4634e4ed9568cfa4d054977dd2434bdf5 not found: ID does not exist" Feb 19 13:42:15 crc kubenswrapper[4861]: I0219 13:42:15.424148 4861 scope.go:117] "RemoveContainer" containerID="b9cea52cd46c9b7051332571e0a2df0d5b61d9318a774aa4cbe2b4c514c8ad52" Feb 19 13:42:15 crc kubenswrapper[4861]: E0219 13:42:15.424729 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9cea52cd46c9b7051332571e0a2df0d5b61d9318a774aa4cbe2b4c514c8ad52\": container with ID starting with b9cea52cd46c9b7051332571e0a2df0d5b61d9318a774aa4cbe2b4c514c8ad52 not found: ID does not exist" containerID="b9cea52cd46c9b7051332571e0a2df0d5b61d9318a774aa4cbe2b4c514c8ad52" Feb 19 13:42:15 crc kubenswrapper[4861]: I0219 13:42:15.424746 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9cea52cd46c9b7051332571e0a2df0d5b61d9318a774aa4cbe2b4c514c8ad52"} err="failed to get container status \"b9cea52cd46c9b7051332571e0a2df0d5b61d9318a774aa4cbe2b4c514c8ad52\": rpc error: code = NotFound desc = could not find container \"b9cea52cd46c9b7051332571e0a2df0d5b61d9318a774aa4cbe2b4c514c8ad52\": container with ID starting with b9cea52cd46c9b7051332571e0a2df0d5b61d9318a774aa4cbe2b4c514c8ad52 not found: ID does not exist" Feb 19 13:42:15 crc kubenswrapper[4861]: I0219 13:42:15.993593 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef8f699f-eca4-4057-b3fe-2831a6e46d38" path="/var/lib/kubelet/pods/ef8f699f-eca4-4057-b3fe-2831a6e46d38/volumes" Feb 19 13:42:20 crc kubenswrapper[4861]: I0219 13:42:20.977843 4861 scope.go:117] "RemoveContainer" containerID="d1328e3bb9a7119a5d11f8614f4dbd03d8c4f1ab29a669ab71e4d9be92fd0213" Feb 19 13:42:20 crc kubenswrapper[4861]: E0219 13:42:20.978997 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:42:32 crc kubenswrapper[4861]: I0219 13:42:32.977551 4861 scope.go:117] "RemoveContainer" containerID="d1328e3bb9a7119a5d11f8614f4dbd03d8c4f1ab29a669ab71e4d9be92fd0213" Feb 19 13:42:32 crc kubenswrapper[4861]: E0219 13:42:32.980047 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:42:45 crc kubenswrapper[4861]: I0219 13:42:45.983113 4861 scope.go:117] "RemoveContainer" containerID="d1328e3bb9a7119a5d11f8614f4dbd03d8c4f1ab29a669ab71e4d9be92fd0213" Feb 19 13:42:45 crc kubenswrapper[4861]: E0219 13:42:45.984126 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:42:56 crc kubenswrapper[4861]: I0219 13:42:56.977320 4861 scope.go:117] "RemoveContainer" containerID="d1328e3bb9a7119a5d11f8614f4dbd03d8c4f1ab29a669ab71e4d9be92fd0213" Feb 19 13:42:56 crc kubenswrapper[4861]: E0219 13:42:56.978394 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:43:09 crc kubenswrapper[4861]: I0219 13:43:09.977534 4861 scope.go:117] "RemoveContainer" containerID="d1328e3bb9a7119a5d11f8614f4dbd03d8c4f1ab29a669ab71e4d9be92fd0213" Feb 19 13:43:10 crc kubenswrapper[4861]: I0219 13:43:10.834739 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"cae0fcad89bda3a6e1d6e8d16c32771de59753e8593c9b7e3c33e31437ee3c81"} Feb 19 13:43:48 crc kubenswrapper[4861]: I0219 13:43:48.616265 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-79fgm"] Feb 19 13:43:48 crc kubenswrapper[4861]: E0219 13:43:48.617141 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef8f699f-eca4-4057-b3fe-2831a6e46d38" containerName="registry-server" Feb 19 13:43:48 crc kubenswrapper[4861]: I0219 13:43:48.617160 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef8f699f-eca4-4057-b3fe-2831a6e46d38" containerName="registry-server" Feb 19 13:43:48 crc kubenswrapper[4861]: E0219 13:43:48.617175 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef8f699f-eca4-4057-b3fe-2831a6e46d38" containerName="extract-content" Feb 19 13:43:48 crc kubenswrapper[4861]: I0219 13:43:48.617184 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef8f699f-eca4-4057-b3fe-2831a6e46d38" containerName="extract-content" Feb 19 13:43:48 crc kubenswrapper[4861]: E0219 13:43:48.617209 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef8f699f-eca4-4057-b3fe-2831a6e46d38" containerName="extract-utilities" Feb 19 13:43:48 crc 
kubenswrapper[4861]: I0219 13:43:48.617252 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef8f699f-eca4-4057-b3fe-2831a6e46d38" containerName="extract-utilities" Feb 19 13:43:48 crc kubenswrapper[4861]: I0219 13:43:48.617472 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef8f699f-eca4-4057-b3fe-2831a6e46d38" containerName="registry-server" Feb 19 13:43:48 crc kubenswrapper[4861]: I0219 13:43:48.618589 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-79fgm" Feb 19 13:43:48 crc kubenswrapper[4861]: I0219 13:43:48.649568 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-79fgm"] Feb 19 13:43:48 crc kubenswrapper[4861]: I0219 13:43:48.753997 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ef6b5fd-d7e1-4362-b183-e191fdf46e58-catalog-content\") pod \"redhat-operators-79fgm\" (UID: \"7ef6b5fd-d7e1-4362-b183-e191fdf46e58\") " pod="openshift-marketplace/redhat-operators-79fgm" Feb 19 13:43:48 crc kubenswrapper[4861]: I0219 13:43:48.754364 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gstsk\" (UniqueName: \"kubernetes.io/projected/7ef6b5fd-d7e1-4362-b183-e191fdf46e58-kube-api-access-gstsk\") pod \"redhat-operators-79fgm\" (UID: \"7ef6b5fd-d7e1-4362-b183-e191fdf46e58\") " pod="openshift-marketplace/redhat-operators-79fgm" Feb 19 13:43:48 crc kubenswrapper[4861]: I0219 13:43:48.754645 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ef6b5fd-d7e1-4362-b183-e191fdf46e58-utilities\") pod \"redhat-operators-79fgm\" (UID: \"7ef6b5fd-d7e1-4362-b183-e191fdf46e58\") " pod="openshift-marketplace/redhat-operators-79fgm" Feb 19 13:43:48 crc 
kubenswrapper[4861]: I0219 13:43:48.855879 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ef6b5fd-d7e1-4362-b183-e191fdf46e58-utilities\") pod \"redhat-operators-79fgm\" (UID: \"7ef6b5fd-d7e1-4362-b183-e191fdf46e58\") " pod="openshift-marketplace/redhat-operators-79fgm" Feb 19 13:43:48 crc kubenswrapper[4861]: I0219 13:43:48.855986 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ef6b5fd-d7e1-4362-b183-e191fdf46e58-catalog-content\") pod \"redhat-operators-79fgm\" (UID: \"7ef6b5fd-d7e1-4362-b183-e191fdf46e58\") " pod="openshift-marketplace/redhat-operators-79fgm" Feb 19 13:43:48 crc kubenswrapper[4861]: I0219 13:43:48.856012 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gstsk\" (UniqueName: \"kubernetes.io/projected/7ef6b5fd-d7e1-4362-b183-e191fdf46e58-kube-api-access-gstsk\") pod \"redhat-operators-79fgm\" (UID: \"7ef6b5fd-d7e1-4362-b183-e191fdf46e58\") " pod="openshift-marketplace/redhat-operators-79fgm" Feb 19 13:43:48 crc kubenswrapper[4861]: I0219 13:43:48.856495 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ef6b5fd-d7e1-4362-b183-e191fdf46e58-utilities\") pod \"redhat-operators-79fgm\" (UID: \"7ef6b5fd-d7e1-4362-b183-e191fdf46e58\") " pod="openshift-marketplace/redhat-operators-79fgm" Feb 19 13:43:48 crc kubenswrapper[4861]: I0219 13:43:48.856727 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ef6b5fd-d7e1-4362-b183-e191fdf46e58-catalog-content\") pod \"redhat-operators-79fgm\" (UID: \"7ef6b5fd-d7e1-4362-b183-e191fdf46e58\") " pod="openshift-marketplace/redhat-operators-79fgm" Feb 19 13:43:48 crc kubenswrapper[4861]: I0219 13:43:48.880547 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gstsk\" (UniqueName: \"kubernetes.io/projected/7ef6b5fd-d7e1-4362-b183-e191fdf46e58-kube-api-access-gstsk\") pod \"redhat-operators-79fgm\" (UID: \"7ef6b5fd-d7e1-4362-b183-e191fdf46e58\") " pod="openshift-marketplace/redhat-operators-79fgm" Feb 19 13:43:48 crc kubenswrapper[4861]: I0219 13:43:48.975898 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-79fgm" Feb 19 13:43:49 crc kubenswrapper[4861]: I0219 13:43:49.413760 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-79fgm"] Feb 19 13:43:50 crc kubenswrapper[4861]: I0219 13:43:50.182956 4861 generic.go:334] "Generic (PLEG): container finished" podID="7ef6b5fd-d7e1-4362-b183-e191fdf46e58" containerID="6944c1e2f3b2db1beece837126c41827b290a4cf349ec5ba8444e8e6b5aa6ab4" exitCode=0 Feb 19 13:43:50 crc kubenswrapper[4861]: I0219 13:43:50.183344 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79fgm" event={"ID":"7ef6b5fd-d7e1-4362-b183-e191fdf46e58","Type":"ContainerDied","Data":"6944c1e2f3b2db1beece837126c41827b290a4cf349ec5ba8444e8e6b5aa6ab4"} Feb 19 13:43:50 crc kubenswrapper[4861]: I0219 13:43:50.183480 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79fgm" event={"ID":"7ef6b5fd-d7e1-4362-b183-e191fdf46e58","Type":"ContainerStarted","Data":"9626a9d70e00b8296b598c2d3643398dcc2cf378e55b4e63c832cff54964aebf"} Feb 19 13:43:52 crc kubenswrapper[4861]: I0219 13:43:52.213118 4861 generic.go:334] "Generic (PLEG): container finished" podID="7ef6b5fd-d7e1-4362-b183-e191fdf46e58" containerID="066cc29b49e7ea50a38ed93e881136362a7067d7a4d8a26ffb124e6e88691e0b" exitCode=0 Feb 19 13:43:52 crc kubenswrapper[4861]: I0219 13:43:52.213209 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-79fgm" event={"ID":"7ef6b5fd-d7e1-4362-b183-e191fdf46e58","Type":"ContainerDied","Data":"066cc29b49e7ea50a38ed93e881136362a7067d7a4d8a26ffb124e6e88691e0b"} Feb 19 13:43:53 crc kubenswrapper[4861]: I0219 13:43:53.223562 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79fgm" event={"ID":"7ef6b5fd-d7e1-4362-b183-e191fdf46e58","Type":"ContainerStarted","Data":"d6ea47ac62c6c3b6df7f4b68ba7132111d6a5226c4c709eefbfb87ee863e7ca2"} Feb 19 13:43:53 crc kubenswrapper[4861]: I0219 13:43:53.248699 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-79fgm" podStartSLOduration=2.640290624 podStartE2EDuration="5.248683812s" podCreationTimestamp="2026-02-19 13:43:48 +0000 UTC" firstStartedPulling="2026-02-19 13:43:50.184993102 +0000 UTC m=+2044.846096340" lastFinishedPulling="2026-02-19 13:43:52.7933863 +0000 UTC m=+2047.454489528" observedRunningTime="2026-02-19 13:43:53.246897603 +0000 UTC m=+2047.908000861" watchObservedRunningTime="2026-02-19 13:43:53.248683812 +0000 UTC m=+2047.909787040" Feb 19 13:43:58 crc kubenswrapper[4861]: I0219 13:43:58.976963 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-79fgm" Feb 19 13:43:58 crc kubenswrapper[4861]: I0219 13:43:58.980608 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-79fgm" Feb 19 13:44:00 crc kubenswrapper[4861]: I0219 13:44:00.042898 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-79fgm" podUID="7ef6b5fd-d7e1-4362-b183-e191fdf46e58" containerName="registry-server" probeResult="failure" output=< Feb 19 13:44:00 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Feb 19 13:44:00 crc kubenswrapper[4861]: > Feb 19 13:44:09 crc kubenswrapper[4861]: I0219 
13:44:09.049630 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-79fgm" Feb 19 13:44:09 crc kubenswrapper[4861]: I0219 13:44:09.131322 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-79fgm" Feb 19 13:44:09 crc kubenswrapper[4861]: I0219 13:44:09.299104 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-79fgm"] Feb 19 13:44:10 crc kubenswrapper[4861]: I0219 13:44:10.371338 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-79fgm" podUID="7ef6b5fd-d7e1-4362-b183-e191fdf46e58" containerName="registry-server" containerID="cri-o://d6ea47ac62c6c3b6df7f4b68ba7132111d6a5226c4c709eefbfb87ee863e7ca2" gracePeriod=2 Feb 19 13:44:11 crc kubenswrapper[4861]: I0219 13:44:11.383106 4861 generic.go:334] "Generic (PLEG): container finished" podID="7ef6b5fd-d7e1-4362-b183-e191fdf46e58" containerID="d6ea47ac62c6c3b6df7f4b68ba7132111d6a5226c4c709eefbfb87ee863e7ca2" exitCode=0 Feb 19 13:44:11 crc kubenswrapper[4861]: I0219 13:44:11.383184 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79fgm" event={"ID":"7ef6b5fd-d7e1-4362-b183-e191fdf46e58","Type":"ContainerDied","Data":"d6ea47ac62c6c3b6df7f4b68ba7132111d6a5226c4c709eefbfb87ee863e7ca2"} Feb 19 13:44:11 crc kubenswrapper[4861]: I0219 13:44:11.383359 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79fgm" event={"ID":"7ef6b5fd-d7e1-4362-b183-e191fdf46e58","Type":"ContainerDied","Data":"9626a9d70e00b8296b598c2d3643398dcc2cf378e55b4e63c832cff54964aebf"} Feb 19 13:44:11 crc kubenswrapper[4861]: I0219 13:44:11.383375 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9626a9d70e00b8296b598c2d3643398dcc2cf378e55b4e63c832cff54964aebf" Feb 19 13:44:11 crc 
kubenswrapper[4861]: I0219 13:44:11.425712 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-79fgm" Feb 19 13:44:11 crc kubenswrapper[4861]: I0219 13:44:11.505569 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ef6b5fd-d7e1-4362-b183-e191fdf46e58-catalog-content\") pod \"7ef6b5fd-d7e1-4362-b183-e191fdf46e58\" (UID: \"7ef6b5fd-d7e1-4362-b183-e191fdf46e58\") " Feb 19 13:44:11 crc kubenswrapper[4861]: I0219 13:44:11.505656 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ef6b5fd-d7e1-4362-b183-e191fdf46e58-utilities\") pod \"7ef6b5fd-d7e1-4362-b183-e191fdf46e58\" (UID: \"7ef6b5fd-d7e1-4362-b183-e191fdf46e58\") " Feb 19 13:44:11 crc kubenswrapper[4861]: I0219 13:44:11.505750 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gstsk\" (UniqueName: \"kubernetes.io/projected/7ef6b5fd-d7e1-4362-b183-e191fdf46e58-kube-api-access-gstsk\") pod \"7ef6b5fd-d7e1-4362-b183-e191fdf46e58\" (UID: \"7ef6b5fd-d7e1-4362-b183-e191fdf46e58\") " Feb 19 13:44:11 crc kubenswrapper[4861]: I0219 13:44:11.506750 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ef6b5fd-d7e1-4362-b183-e191fdf46e58-utilities" (OuterVolumeSpecName: "utilities") pod "7ef6b5fd-d7e1-4362-b183-e191fdf46e58" (UID: "7ef6b5fd-d7e1-4362-b183-e191fdf46e58"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:44:11 crc kubenswrapper[4861]: I0219 13:44:11.511298 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ef6b5fd-d7e1-4362-b183-e191fdf46e58-kube-api-access-gstsk" (OuterVolumeSpecName: "kube-api-access-gstsk") pod "7ef6b5fd-d7e1-4362-b183-e191fdf46e58" (UID: "7ef6b5fd-d7e1-4362-b183-e191fdf46e58"). InnerVolumeSpecName "kube-api-access-gstsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:44:11 crc kubenswrapper[4861]: I0219 13:44:11.607640 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ef6b5fd-d7e1-4362-b183-e191fdf46e58-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:44:11 crc kubenswrapper[4861]: I0219 13:44:11.607695 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gstsk\" (UniqueName: \"kubernetes.io/projected/7ef6b5fd-d7e1-4362-b183-e191fdf46e58-kube-api-access-gstsk\") on node \"crc\" DevicePath \"\"" Feb 19 13:44:11 crc kubenswrapper[4861]: I0219 13:44:11.632908 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ef6b5fd-d7e1-4362-b183-e191fdf46e58-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ef6b5fd-d7e1-4362-b183-e191fdf46e58" (UID: "7ef6b5fd-d7e1-4362-b183-e191fdf46e58"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:44:11 crc kubenswrapper[4861]: I0219 13:44:11.708461 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ef6b5fd-d7e1-4362-b183-e191fdf46e58-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:44:12 crc kubenswrapper[4861]: I0219 13:44:12.394471 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-79fgm" Feb 19 13:44:12 crc kubenswrapper[4861]: I0219 13:44:12.441286 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-79fgm"] Feb 19 13:44:12 crc kubenswrapper[4861]: I0219 13:44:12.454198 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-79fgm"] Feb 19 13:44:13 crc kubenswrapper[4861]: I0219 13:44:13.992269 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ef6b5fd-d7e1-4362-b183-e191fdf46e58" path="/var/lib/kubelet/pods/7ef6b5fd-d7e1-4362-b183-e191fdf46e58/volumes" Feb 19 13:44:28 crc kubenswrapper[4861]: I0219 13:44:28.897089 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tc56k"] Feb 19 13:44:28 crc kubenswrapper[4861]: E0219 13:44:28.898364 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ef6b5fd-d7e1-4362-b183-e191fdf46e58" containerName="registry-server" Feb 19 13:44:28 crc kubenswrapper[4861]: I0219 13:44:28.898399 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ef6b5fd-d7e1-4362-b183-e191fdf46e58" containerName="registry-server" Feb 19 13:44:28 crc kubenswrapper[4861]: E0219 13:44:28.898470 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ef6b5fd-d7e1-4362-b183-e191fdf46e58" containerName="extract-content" Feb 19 13:44:28 crc kubenswrapper[4861]: I0219 13:44:28.898491 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ef6b5fd-d7e1-4362-b183-e191fdf46e58" containerName="extract-content" Feb 19 13:44:28 crc kubenswrapper[4861]: E0219 13:44:28.898630 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ef6b5fd-d7e1-4362-b183-e191fdf46e58" containerName="extract-utilities" Feb 19 13:44:28 crc kubenswrapper[4861]: I0219 13:44:28.898653 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ef6b5fd-d7e1-4362-b183-e191fdf46e58" 
containerName="extract-utilities" Feb 19 13:44:28 crc kubenswrapper[4861]: I0219 13:44:28.898995 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ef6b5fd-d7e1-4362-b183-e191fdf46e58" containerName="registry-server" Feb 19 13:44:28 crc kubenswrapper[4861]: I0219 13:44:28.901302 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tc56k" Feb 19 13:44:28 crc kubenswrapper[4861]: I0219 13:44:28.907116 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tc56k"] Feb 19 13:44:28 crc kubenswrapper[4861]: I0219 13:44:28.912882 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvhlc\" (UniqueName: \"kubernetes.io/projected/c34d2c2b-a966-4161-8d31-1b403d91c519-kube-api-access-pvhlc\") pod \"redhat-marketplace-tc56k\" (UID: \"c34d2c2b-a966-4161-8d31-1b403d91c519\") " pod="openshift-marketplace/redhat-marketplace-tc56k" Feb 19 13:44:28 crc kubenswrapper[4861]: I0219 13:44:28.912944 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c34d2c2b-a966-4161-8d31-1b403d91c519-catalog-content\") pod \"redhat-marketplace-tc56k\" (UID: \"c34d2c2b-a966-4161-8d31-1b403d91c519\") " pod="openshift-marketplace/redhat-marketplace-tc56k" Feb 19 13:44:28 crc kubenswrapper[4861]: I0219 13:44:28.913028 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c34d2c2b-a966-4161-8d31-1b403d91c519-utilities\") pod \"redhat-marketplace-tc56k\" (UID: \"c34d2c2b-a966-4161-8d31-1b403d91c519\") " pod="openshift-marketplace/redhat-marketplace-tc56k" Feb 19 13:44:29 crc kubenswrapper[4861]: I0219 13:44:29.014703 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c34d2c2b-a966-4161-8d31-1b403d91c519-utilities\") pod \"redhat-marketplace-tc56k\" (UID: \"c34d2c2b-a966-4161-8d31-1b403d91c519\") " pod="openshift-marketplace/redhat-marketplace-tc56k" Feb 19 13:44:29 crc kubenswrapper[4861]: I0219 13:44:29.015141 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvhlc\" (UniqueName: \"kubernetes.io/projected/c34d2c2b-a966-4161-8d31-1b403d91c519-kube-api-access-pvhlc\") pod \"redhat-marketplace-tc56k\" (UID: \"c34d2c2b-a966-4161-8d31-1b403d91c519\") " pod="openshift-marketplace/redhat-marketplace-tc56k" Feb 19 13:44:29 crc kubenswrapper[4861]: I0219 13:44:29.015203 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c34d2c2b-a966-4161-8d31-1b403d91c519-catalog-content\") pod \"redhat-marketplace-tc56k\" (UID: \"c34d2c2b-a966-4161-8d31-1b403d91c519\") " pod="openshift-marketplace/redhat-marketplace-tc56k" Feb 19 13:44:29 crc kubenswrapper[4861]: I0219 13:44:29.016021 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c34d2c2b-a966-4161-8d31-1b403d91c519-catalog-content\") pod \"redhat-marketplace-tc56k\" (UID: \"c34d2c2b-a966-4161-8d31-1b403d91c519\") " pod="openshift-marketplace/redhat-marketplace-tc56k" Feb 19 13:44:29 crc kubenswrapper[4861]: I0219 13:44:29.016992 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c34d2c2b-a966-4161-8d31-1b403d91c519-utilities\") pod \"redhat-marketplace-tc56k\" (UID: \"c34d2c2b-a966-4161-8d31-1b403d91c519\") " pod="openshift-marketplace/redhat-marketplace-tc56k" Feb 19 13:44:29 crc kubenswrapper[4861]: I0219 13:44:29.042192 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvhlc\" (UniqueName: 
\"kubernetes.io/projected/c34d2c2b-a966-4161-8d31-1b403d91c519-kube-api-access-pvhlc\") pod \"redhat-marketplace-tc56k\" (UID: \"c34d2c2b-a966-4161-8d31-1b403d91c519\") " pod="openshift-marketplace/redhat-marketplace-tc56k" Feb 19 13:44:29 crc kubenswrapper[4861]: I0219 13:44:29.236965 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tc56k" Feb 19 13:44:29 crc kubenswrapper[4861]: I0219 13:44:29.473885 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tc56k"] Feb 19 13:44:29 crc kubenswrapper[4861]: I0219 13:44:29.541428 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tc56k" event={"ID":"c34d2c2b-a966-4161-8d31-1b403d91c519","Type":"ContainerStarted","Data":"e0fa9dcb6db36d47a194a0837861958bf1b92d17bfb96d4b2aea51a7b30b16a6"} Feb 19 13:44:30 crc kubenswrapper[4861]: I0219 13:44:30.553099 4861 generic.go:334] "Generic (PLEG): container finished" podID="c34d2c2b-a966-4161-8d31-1b403d91c519" containerID="7aaba2565e70264cd062841bb9cc5511fe45475606e1cb52a52e7353a8532b34" exitCode=0 Feb 19 13:44:30 crc kubenswrapper[4861]: I0219 13:44:30.553206 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tc56k" event={"ID":"c34d2c2b-a966-4161-8d31-1b403d91c519","Type":"ContainerDied","Data":"7aaba2565e70264cd062841bb9cc5511fe45475606e1cb52a52e7353a8532b34"} Feb 19 13:44:32 crc kubenswrapper[4861]: I0219 13:44:32.580060 4861 generic.go:334] "Generic (PLEG): container finished" podID="c34d2c2b-a966-4161-8d31-1b403d91c519" containerID="55a54937f9f097f176d4fd7430487a82e34f883ef249a9c71ef8d67dbada8216" exitCode=0 Feb 19 13:44:32 crc kubenswrapper[4861]: I0219 13:44:32.580147 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tc56k" 
event={"ID":"c34d2c2b-a966-4161-8d31-1b403d91c519","Type":"ContainerDied","Data":"55a54937f9f097f176d4fd7430487a82e34f883ef249a9c71ef8d67dbada8216"} Feb 19 13:44:33 crc kubenswrapper[4861]: I0219 13:44:33.595651 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tc56k" event={"ID":"c34d2c2b-a966-4161-8d31-1b403d91c519","Type":"ContainerStarted","Data":"913fac86396f54c666e93564ccfc56d7f255fd5ca114623ec2fa6703a46ea0e1"} Feb 19 13:44:33 crc kubenswrapper[4861]: I0219 13:44:33.626062 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tc56k" podStartSLOduration=2.897397262 podStartE2EDuration="5.626045513s" podCreationTimestamp="2026-02-19 13:44:28 +0000 UTC" firstStartedPulling="2026-02-19 13:44:30.556046242 +0000 UTC m=+2085.217149510" lastFinishedPulling="2026-02-19 13:44:33.284694483 +0000 UTC m=+2087.945797761" observedRunningTime="2026-02-19 13:44:33.623929374 +0000 UTC m=+2088.285032692" watchObservedRunningTime="2026-02-19 13:44:33.626045513 +0000 UTC m=+2088.287148751" Feb 19 13:44:39 crc kubenswrapper[4861]: I0219 13:44:39.237169 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tc56k" Feb 19 13:44:39 crc kubenswrapper[4861]: I0219 13:44:39.237783 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tc56k" Feb 19 13:44:39 crc kubenswrapper[4861]: I0219 13:44:39.310511 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tc56k" Feb 19 13:44:39 crc kubenswrapper[4861]: I0219 13:44:39.724016 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tc56k" Feb 19 13:44:39 crc kubenswrapper[4861]: I0219 13:44:39.804330 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-tc56k"] Feb 19 13:44:41 crc kubenswrapper[4861]: I0219 13:44:41.664864 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tc56k" podUID="c34d2c2b-a966-4161-8d31-1b403d91c519" containerName="registry-server" containerID="cri-o://913fac86396f54c666e93564ccfc56d7f255fd5ca114623ec2fa6703a46ea0e1" gracePeriod=2 Feb 19 13:44:42 crc kubenswrapper[4861]: I0219 13:44:42.164134 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tc56k" Feb 19 13:44:42 crc kubenswrapper[4861]: I0219 13:44:42.182076 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvhlc\" (UniqueName: \"kubernetes.io/projected/c34d2c2b-a966-4161-8d31-1b403d91c519-kube-api-access-pvhlc\") pod \"c34d2c2b-a966-4161-8d31-1b403d91c519\" (UID: \"c34d2c2b-a966-4161-8d31-1b403d91c519\") " Feb 19 13:44:42 crc kubenswrapper[4861]: I0219 13:44:42.182128 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c34d2c2b-a966-4161-8d31-1b403d91c519-utilities\") pod \"c34d2c2b-a966-4161-8d31-1b403d91c519\" (UID: \"c34d2c2b-a966-4161-8d31-1b403d91c519\") " Feb 19 13:44:42 crc kubenswrapper[4861]: I0219 13:44:42.182176 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c34d2c2b-a966-4161-8d31-1b403d91c519-catalog-content\") pod \"c34d2c2b-a966-4161-8d31-1b403d91c519\" (UID: \"c34d2c2b-a966-4161-8d31-1b403d91c519\") " Feb 19 13:44:42 crc kubenswrapper[4861]: I0219 13:44:42.185378 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c34d2c2b-a966-4161-8d31-1b403d91c519-utilities" (OuterVolumeSpecName: "utilities") pod "c34d2c2b-a966-4161-8d31-1b403d91c519" (UID: 
"c34d2c2b-a966-4161-8d31-1b403d91c519"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:44:42 crc kubenswrapper[4861]: I0219 13:44:42.190155 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c34d2c2b-a966-4161-8d31-1b403d91c519-kube-api-access-pvhlc" (OuterVolumeSpecName: "kube-api-access-pvhlc") pod "c34d2c2b-a966-4161-8d31-1b403d91c519" (UID: "c34d2c2b-a966-4161-8d31-1b403d91c519"). InnerVolumeSpecName "kube-api-access-pvhlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:44:42 crc kubenswrapper[4861]: I0219 13:44:42.244578 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c34d2c2b-a966-4161-8d31-1b403d91c519-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c34d2c2b-a966-4161-8d31-1b403d91c519" (UID: "c34d2c2b-a966-4161-8d31-1b403d91c519"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:44:42 crc kubenswrapper[4861]: I0219 13:44:42.286311 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c34d2c2b-a966-4161-8d31-1b403d91c519-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:44:42 crc kubenswrapper[4861]: I0219 13:44:42.286357 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvhlc\" (UniqueName: \"kubernetes.io/projected/c34d2c2b-a966-4161-8d31-1b403d91c519-kube-api-access-pvhlc\") on node \"crc\" DevicePath \"\"" Feb 19 13:44:42 crc kubenswrapper[4861]: I0219 13:44:42.286376 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c34d2c2b-a966-4161-8d31-1b403d91c519-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:44:42 crc kubenswrapper[4861]: I0219 13:44:42.682611 4861 generic.go:334] "Generic (PLEG): container finished" 
podID="c34d2c2b-a966-4161-8d31-1b403d91c519" containerID="913fac86396f54c666e93564ccfc56d7f255fd5ca114623ec2fa6703a46ea0e1" exitCode=0 Feb 19 13:44:42 crc kubenswrapper[4861]: I0219 13:44:42.682660 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tc56k" event={"ID":"c34d2c2b-a966-4161-8d31-1b403d91c519","Type":"ContainerDied","Data":"913fac86396f54c666e93564ccfc56d7f255fd5ca114623ec2fa6703a46ea0e1"} Feb 19 13:44:42 crc kubenswrapper[4861]: I0219 13:44:42.682689 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tc56k" event={"ID":"c34d2c2b-a966-4161-8d31-1b403d91c519","Type":"ContainerDied","Data":"e0fa9dcb6db36d47a194a0837861958bf1b92d17bfb96d4b2aea51a7b30b16a6"} Feb 19 13:44:42 crc kubenswrapper[4861]: I0219 13:44:42.682709 4861 scope.go:117] "RemoveContainer" containerID="913fac86396f54c666e93564ccfc56d7f255fd5ca114623ec2fa6703a46ea0e1" Feb 19 13:44:42 crc kubenswrapper[4861]: I0219 13:44:42.682730 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tc56k" Feb 19 13:44:42 crc kubenswrapper[4861]: I0219 13:44:42.722670 4861 scope.go:117] "RemoveContainer" containerID="55a54937f9f097f176d4fd7430487a82e34f883ef249a9c71ef8d67dbada8216" Feb 19 13:44:42 crc kubenswrapper[4861]: I0219 13:44:42.725398 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tc56k"] Feb 19 13:44:42 crc kubenswrapper[4861]: I0219 13:44:42.732466 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tc56k"] Feb 19 13:44:42 crc kubenswrapper[4861]: I0219 13:44:42.743975 4861 scope.go:117] "RemoveContainer" containerID="7aaba2565e70264cd062841bb9cc5511fe45475606e1cb52a52e7353a8532b34" Feb 19 13:44:42 crc kubenswrapper[4861]: I0219 13:44:42.780304 4861 scope.go:117] "RemoveContainer" containerID="913fac86396f54c666e93564ccfc56d7f255fd5ca114623ec2fa6703a46ea0e1" Feb 19 13:44:42 crc kubenswrapper[4861]: E0219 13:44:42.780930 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"913fac86396f54c666e93564ccfc56d7f255fd5ca114623ec2fa6703a46ea0e1\": container with ID starting with 913fac86396f54c666e93564ccfc56d7f255fd5ca114623ec2fa6703a46ea0e1 not found: ID does not exist" containerID="913fac86396f54c666e93564ccfc56d7f255fd5ca114623ec2fa6703a46ea0e1" Feb 19 13:44:42 crc kubenswrapper[4861]: I0219 13:44:42.780983 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"913fac86396f54c666e93564ccfc56d7f255fd5ca114623ec2fa6703a46ea0e1"} err="failed to get container status \"913fac86396f54c666e93564ccfc56d7f255fd5ca114623ec2fa6703a46ea0e1\": rpc error: code = NotFound desc = could not find container \"913fac86396f54c666e93564ccfc56d7f255fd5ca114623ec2fa6703a46ea0e1\": container with ID starting with 913fac86396f54c666e93564ccfc56d7f255fd5ca114623ec2fa6703a46ea0e1 not found: 
ID does not exist" Feb 19 13:44:42 crc kubenswrapper[4861]: I0219 13:44:42.781015 4861 scope.go:117] "RemoveContainer" containerID="55a54937f9f097f176d4fd7430487a82e34f883ef249a9c71ef8d67dbada8216" Feb 19 13:44:42 crc kubenswrapper[4861]: E0219 13:44:42.781485 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55a54937f9f097f176d4fd7430487a82e34f883ef249a9c71ef8d67dbada8216\": container with ID starting with 55a54937f9f097f176d4fd7430487a82e34f883ef249a9c71ef8d67dbada8216 not found: ID does not exist" containerID="55a54937f9f097f176d4fd7430487a82e34f883ef249a9c71ef8d67dbada8216" Feb 19 13:44:42 crc kubenswrapper[4861]: I0219 13:44:42.781516 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55a54937f9f097f176d4fd7430487a82e34f883ef249a9c71ef8d67dbada8216"} err="failed to get container status \"55a54937f9f097f176d4fd7430487a82e34f883ef249a9c71ef8d67dbada8216\": rpc error: code = NotFound desc = could not find container \"55a54937f9f097f176d4fd7430487a82e34f883ef249a9c71ef8d67dbada8216\": container with ID starting with 55a54937f9f097f176d4fd7430487a82e34f883ef249a9c71ef8d67dbada8216 not found: ID does not exist" Feb 19 13:44:42 crc kubenswrapper[4861]: I0219 13:44:42.781532 4861 scope.go:117] "RemoveContainer" containerID="7aaba2565e70264cd062841bb9cc5511fe45475606e1cb52a52e7353a8532b34" Feb 19 13:44:42 crc kubenswrapper[4861]: E0219 13:44:42.781860 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7aaba2565e70264cd062841bb9cc5511fe45475606e1cb52a52e7353a8532b34\": container with ID starting with 7aaba2565e70264cd062841bb9cc5511fe45475606e1cb52a52e7353a8532b34 not found: ID does not exist" containerID="7aaba2565e70264cd062841bb9cc5511fe45475606e1cb52a52e7353a8532b34" Feb 19 13:44:42 crc kubenswrapper[4861]: I0219 13:44:42.781925 4861 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aaba2565e70264cd062841bb9cc5511fe45475606e1cb52a52e7353a8532b34"} err="failed to get container status \"7aaba2565e70264cd062841bb9cc5511fe45475606e1cb52a52e7353a8532b34\": rpc error: code = NotFound desc = could not find container \"7aaba2565e70264cd062841bb9cc5511fe45475606e1cb52a52e7353a8532b34\": container with ID starting with 7aaba2565e70264cd062841bb9cc5511fe45475606e1cb52a52e7353a8532b34 not found: ID does not exist" Feb 19 13:44:43 crc kubenswrapper[4861]: I0219 13:44:43.998668 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c34d2c2b-a966-4161-8d31-1b403d91c519" path="/var/lib/kubelet/pods/c34d2c2b-a966-4161-8d31-1b403d91c519/volumes" Feb 19 13:45:00 crc kubenswrapper[4861]: I0219 13:45:00.183185 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525145-f6xr5"] Feb 19 13:45:00 crc kubenswrapper[4861]: E0219 13:45:00.184828 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c34d2c2b-a966-4161-8d31-1b403d91c519" containerName="extract-content" Feb 19 13:45:00 crc kubenswrapper[4861]: I0219 13:45:00.184871 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c34d2c2b-a966-4161-8d31-1b403d91c519" containerName="extract-content" Feb 19 13:45:00 crc kubenswrapper[4861]: E0219 13:45:00.184912 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c34d2c2b-a966-4161-8d31-1b403d91c519" containerName="extract-utilities" Feb 19 13:45:00 crc kubenswrapper[4861]: I0219 13:45:00.184932 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c34d2c2b-a966-4161-8d31-1b403d91c519" containerName="extract-utilities" Feb 19 13:45:00 crc kubenswrapper[4861]: E0219 13:45:00.184988 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c34d2c2b-a966-4161-8d31-1b403d91c519" containerName="registry-server" Feb 19 13:45:00 crc kubenswrapper[4861]: I0219 
13:45:00.185002 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c34d2c2b-a966-4161-8d31-1b403d91c519" containerName="registry-server" Feb 19 13:45:00 crc kubenswrapper[4861]: I0219 13:45:00.185520 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="c34d2c2b-a966-4161-8d31-1b403d91c519" containerName="registry-server" Feb 19 13:45:00 crc kubenswrapper[4861]: I0219 13:45:00.186693 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-f6xr5" Feb 19 13:45:00 crc kubenswrapper[4861]: I0219 13:45:00.193224 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525145-f6xr5"] Feb 19 13:45:00 crc kubenswrapper[4861]: I0219 13:45:00.194375 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 13:45:00 crc kubenswrapper[4861]: I0219 13:45:00.194555 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 13:45:00 crc kubenswrapper[4861]: I0219 13:45:00.369991 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzmxz\" (UniqueName: \"kubernetes.io/projected/55db1743-1443-4183-ad09-bd3cc60e2ffe-kube-api-access-qzmxz\") pod \"collect-profiles-29525145-f6xr5\" (UID: \"55db1743-1443-4183-ad09-bd3cc60e2ffe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-f6xr5" Feb 19 13:45:00 crc kubenswrapper[4861]: I0219 13:45:00.370112 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/55db1743-1443-4183-ad09-bd3cc60e2ffe-secret-volume\") pod \"collect-profiles-29525145-f6xr5\" (UID: \"55db1743-1443-4183-ad09-bd3cc60e2ffe\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-f6xr5" Feb 19 13:45:00 crc kubenswrapper[4861]: I0219 13:45:00.370226 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55db1743-1443-4183-ad09-bd3cc60e2ffe-config-volume\") pod \"collect-profiles-29525145-f6xr5\" (UID: \"55db1743-1443-4183-ad09-bd3cc60e2ffe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-f6xr5" Feb 19 13:45:00 crc kubenswrapper[4861]: I0219 13:45:00.471588 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/55db1743-1443-4183-ad09-bd3cc60e2ffe-secret-volume\") pod \"collect-profiles-29525145-f6xr5\" (UID: \"55db1743-1443-4183-ad09-bd3cc60e2ffe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-f6xr5" Feb 19 13:45:00 crc kubenswrapper[4861]: I0219 13:45:00.471694 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55db1743-1443-4183-ad09-bd3cc60e2ffe-config-volume\") pod \"collect-profiles-29525145-f6xr5\" (UID: \"55db1743-1443-4183-ad09-bd3cc60e2ffe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-f6xr5" Feb 19 13:45:00 crc kubenswrapper[4861]: I0219 13:45:00.471730 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzmxz\" (UniqueName: \"kubernetes.io/projected/55db1743-1443-4183-ad09-bd3cc60e2ffe-kube-api-access-qzmxz\") pod \"collect-profiles-29525145-f6xr5\" (UID: \"55db1743-1443-4183-ad09-bd3cc60e2ffe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-f6xr5" Feb 19 13:45:00 crc kubenswrapper[4861]: I0219 13:45:00.474188 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/55db1743-1443-4183-ad09-bd3cc60e2ffe-config-volume\") pod \"collect-profiles-29525145-f6xr5\" (UID: \"55db1743-1443-4183-ad09-bd3cc60e2ffe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-f6xr5" Feb 19 13:45:00 crc kubenswrapper[4861]: I0219 13:45:00.478769 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/55db1743-1443-4183-ad09-bd3cc60e2ffe-secret-volume\") pod \"collect-profiles-29525145-f6xr5\" (UID: \"55db1743-1443-4183-ad09-bd3cc60e2ffe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-f6xr5" Feb 19 13:45:00 crc kubenswrapper[4861]: I0219 13:45:00.492242 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzmxz\" (UniqueName: \"kubernetes.io/projected/55db1743-1443-4183-ad09-bd3cc60e2ffe-kube-api-access-qzmxz\") pod \"collect-profiles-29525145-f6xr5\" (UID: \"55db1743-1443-4183-ad09-bd3cc60e2ffe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-f6xr5" Feb 19 13:45:00 crc kubenswrapper[4861]: I0219 13:45:00.527628 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-f6xr5" Feb 19 13:45:00 crc kubenswrapper[4861]: I0219 13:45:00.740157 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525145-f6xr5"] Feb 19 13:45:00 crc kubenswrapper[4861]: I0219 13:45:00.849863 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-f6xr5" event={"ID":"55db1743-1443-4183-ad09-bd3cc60e2ffe","Type":"ContainerStarted","Data":"3c2e60f8ed2e97086f66f81019c574f47750fc19efd9147584b57217e6deced2"} Feb 19 13:45:01 crc kubenswrapper[4861]: I0219 13:45:01.857188 4861 generic.go:334] "Generic (PLEG): container finished" podID="55db1743-1443-4183-ad09-bd3cc60e2ffe" containerID="86ae0124ea474284237b6f2eeb6d744c770af1e939b8e36576e5f17981525868" exitCode=0 Feb 19 13:45:01 crc kubenswrapper[4861]: I0219 13:45:01.857394 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-f6xr5" event={"ID":"55db1743-1443-4183-ad09-bd3cc60e2ffe","Type":"ContainerDied","Data":"86ae0124ea474284237b6f2eeb6d744c770af1e939b8e36576e5f17981525868"} Feb 19 13:45:03 crc kubenswrapper[4861]: I0219 13:45:03.185145 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-f6xr5" Feb 19 13:45:03 crc kubenswrapper[4861]: I0219 13:45:03.311637 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/55db1743-1443-4183-ad09-bd3cc60e2ffe-secret-volume\") pod \"55db1743-1443-4183-ad09-bd3cc60e2ffe\" (UID: \"55db1743-1443-4183-ad09-bd3cc60e2ffe\") " Feb 19 13:45:03 crc kubenswrapper[4861]: I0219 13:45:03.311798 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55db1743-1443-4183-ad09-bd3cc60e2ffe-config-volume\") pod \"55db1743-1443-4183-ad09-bd3cc60e2ffe\" (UID: \"55db1743-1443-4183-ad09-bd3cc60e2ffe\") " Feb 19 13:45:03 crc kubenswrapper[4861]: I0219 13:45:03.311846 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzmxz\" (UniqueName: \"kubernetes.io/projected/55db1743-1443-4183-ad09-bd3cc60e2ffe-kube-api-access-qzmxz\") pod \"55db1743-1443-4183-ad09-bd3cc60e2ffe\" (UID: \"55db1743-1443-4183-ad09-bd3cc60e2ffe\") " Feb 19 13:45:03 crc kubenswrapper[4861]: I0219 13:45:03.312607 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55db1743-1443-4183-ad09-bd3cc60e2ffe-config-volume" (OuterVolumeSpecName: "config-volume") pod "55db1743-1443-4183-ad09-bd3cc60e2ffe" (UID: "55db1743-1443-4183-ad09-bd3cc60e2ffe"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:45:03 crc kubenswrapper[4861]: I0219 13:45:03.320178 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55db1743-1443-4183-ad09-bd3cc60e2ffe-kube-api-access-qzmxz" (OuterVolumeSpecName: "kube-api-access-qzmxz") pod "55db1743-1443-4183-ad09-bd3cc60e2ffe" (UID: "55db1743-1443-4183-ad09-bd3cc60e2ffe"). 
InnerVolumeSpecName "kube-api-access-qzmxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:45:03 crc kubenswrapper[4861]: I0219 13:45:03.320756 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55db1743-1443-4183-ad09-bd3cc60e2ffe-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "55db1743-1443-4183-ad09-bd3cc60e2ffe" (UID: "55db1743-1443-4183-ad09-bd3cc60e2ffe"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:45:03 crc kubenswrapper[4861]: I0219 13:45:03.414393 4861 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55db1743-1443-4183-ad09-bd3cc60e2ffe-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 13:45:03 crc kubenswrapper[4861]: I0219 13:45:03.414466 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzmxz\" (UniqueName: \"kubernetes.io/projected/55db1743-1443-4183-ad09-bd3cc60e2ffe-kube-api-access-qzmxz\") on node \"crc\" DevicePath \"\"" Feb 19 13:45:03 crc kubenswrapper[4861]: I0219 13:45:03.414486 4861 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/55db1743-1443-4183-ad09-bd3cc60e2ffe-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 13:45:03 crc kubenswrapper[4861]: I0219 13:45:03.877146 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-f6xr5" event={"ID":"55db1743-1443-4183-ad09-bd3cc60e2ffe","Type":"ContainerDied","Data":"3c2e60f8ed2e97086f66f81019c574f47750fc19efd9147584b57217e6deced2"} Feb 19 13:45:03 crc kubenswrapper[4861]: I0219 13:45:03.877207 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c2e60f8ed2e97086f66f81019c574f47750fc19efd9147584b57217e6deced2" Feb 19 13:45:03 crc kubenswrapper[4861]: I0219 13:45:03.877258 4861 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-f6xr5" Feb 19 13:45:04 crc kubenswrapper[4861]: I0219 13:45:04.277340 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525100-bq4kn"] Feb 19 13:45:04 crc kubenswrapper[4861]: I0219 13:45:04.290055 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525100-bq4kn"] Feb 19 13:45:05 crc kubenswrapper[4861]: I0219 13:45:05.987748 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25018c53-6299-4fab-bf5e-819ba4f84596" path="/var/lib/kubelet/pods/25018c53-6299-4fab-bf5e-819ba4f84596/volumes" Feb 19 13:45:33 crc kubenswrapper[4861]: I0219 13:45:33.834824 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:45:33 crc kubenswrapper[4861]: I0219 13:45:33.835417 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:45:53 crc kubenswrapper[4861]: I0219 13:45:53.287512 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ltz8s"] Feb 19 13:45:53 crc kubenswrapper[4861]: E0219 13:45:53.288580 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55db1743-1443-4183-ad09-bd3cc60e2ffe" containerName="collect-profiles" Feb 19 13:45:53 crc kubenswrapper[4861]: I0219 13:45:53.288601 4861 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="55db1743-1443-4183-ad09-bd3cc60e2ffe" containerName="collect-profiles" Feb 19 13:45:53 crc kubenswrapper[4861]: I0219 13:45:53.288838 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="55db1743-1443-4183-ad09-bd3cc60e2ffe" containerName="collect-profiles" Feb 19 13:45:53 crc kubenswrapper[4861]: I0219 13:45:53.290690 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ltz8s" Feb 19 13:45:53 crc kubenswrapper[4861]: I0219 13:45:53.300455 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ltz8s"] Feb 19 13:45:53 crc kubenswrapper[4861]: I0219 13:45:53.439289 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50eba1fd-41c7-43ed-8489-6be52f99a8f3-catalog-content\") pod \"community-operators-ltz8s\" (UID: \"50eba1fd-41c7-43ed-8489-6be52f99a8f3\") " pod="openshift-marketplace/community-operators-ltz8s" Feb 19 13:45:53 crc kubenswrapper[4861]: I0219 13:45:53.439415 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50eba1fd-41c7-43ed-8489-6be52f99a8f3-utilities\") pod \"community-operators-ltz8s\" (UID: \"50eba1fd-41c7-43ed-8489-6be52f99a8f3\") " pod="openshift-marketplace/community-operators-ltz8s" Feb 19 13:45:53 crc kubenswrapper[4861]: I0219 13:45:53.439628 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99r7g\" (UniqueName: \"kubernetes.io/projected/50eba1fd-41c7-43ed-8489-6be52f99a8f3-kube-api-access-99r7g\") pod \"community-operators-ltz8s\" (UID: \"50eba1fd-41c7-43ed-8489-6be52f99a8f3\") " pod="openshift-marketplace/community-operators-ltz8s" Feb 19 13:45:53 crc kubenswrapper[4861]: I0219 13:45:53.540736 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50eba1fd-41c7-43ed-8489-6be52f99a8f3-catalog-content\") pod \"community-operators-ltz8s\" (UID: \"50eba1fd-41c7-43ed-8489-6be52f99a8f3\") " pod="openshift-marketplace/community-operators-ltz8s" Feb 19 13:45:53 crc kubenswrapper[4861]: I0219 13:45:53.540801 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50eba1fd-41c7-43ed-8489-6be52f99a8f3-utilities\") pod \"community-operators-ltz8s\" (UID: \"50eba1fd-41c7-43ed-8489-6be52f99a8f3\") " pod="openshift-marketplace/community-operators-ltz8s" Feb 19 13:45:53 crc kubenswrapper[4861]: I0219 13:45:53.540848 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99r7g\" (UniqueName: \"kubernetes.io/projected/50eba1fd-41c7-43ed-8489-6be52f99a8f3-kube-api-access-99r7g\") pod \"community-operators-ltz8s\" (UID: \"50eba1fd-41c7-43ed-8489-6be52f99a8f3\") " pod="openshift-marketplace/community-operators-ltz8s" Feb 19 13:45:53 crc kubenswrapper[4861]: I0219 13:45:53.541382 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50eba1fd-41c7-43ed-8489-6be52f99a8f3-catalog-content\") pod \"community-operators-ltz8s\" (UID: \"50eba1fd-41c7-43ed-8489-6be52f99a8f3\") " pod="openshift-marketplace/community-operators-ltz8s" Feb 19 13:45:53 crc kubenswrapper[4861]: I0219 13:45:53.541440 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50eba1fd-41c7-43ed-8489-6be52f99a8f3-utilities\") pod \"community-operators-ltz8s\" (UID: \"50eba1fd-41c7-43ed-8489-6be52f99a8f3\") " pod="openshift-marketplace/community-operators-ltz8s" Feb 19 13:45:53 crc kubenswrapper[4861]: I0219 13:45:53.564173 4861 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-99r7g\" (UniqueName: \"kubernetes.io/projected/50eba1fd-41c7-43ed-8489-6be52f99a8f3-kube-api-access-99r7g\") pod \"community-operators-ltz8s\" (UID: \"50eba1fd-41c7-43ed-8489-6be52f99a8f3\") " pod="openshift-marketplace/community-operators-ltz8s" Feb 19 13:45:53 crc kubenswrapper[4861]: I0219 13:45:53.611083 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ltz8s" Feb 19 13:45:54 crc kubenswrapper[4861]: I0219 13:45:54.142487 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ltz8s"] Feb 19 13:45:54 crc kubenswrapper[4861]: I0219 13:45:54.311912 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ltz8s" event={"ID":"50eba1fd-41c7-43ed-8489-6be52f99a8f3","Type":"ContainerStarted","Data":"f63659794cf9dce24b403b29c36a2431013a66299c68dc29889ca6a899f06080"} Feb 19 13:45:55 crc kubenswrapper[4861]: I0219 13:45:55.320594 4861 generic.go:334] "Generic (PLEG): container finished" podID="50eba1fd-41c7-43ed-8489-6be52f99a8f3" containerID="51b1c14e106c7c33a4881f705002925310676d4d70bb0ba071703aac1552a0b7" exitCode=0 Feb 19 13:45:55 crc kubenswrapper[4861]: I0219 13:45:55.320922 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ltz8s" event={"ID":"50eba1fd-41c7-43ed-8489-6be52f99a8f3","Type":"ContainerDied","Data":"51b1c14e106c7c33a4881f705002925310676d4d70bb0ba071703aac1552a0b7"} Feb 19 13:45:57 crc kubenswrapper[4861]: I0219 13:45:57.339108 4861 generic.go:334] "Generic (PLEG): container finished" podID="50eba1fd-41c7-43ed-8489-6be52f99a8f3" containerID="4ddb0f0627797e585cdc73fa87c70dc914b6e05cb2d485186f4a94f5281d59b2" exitCode=0 Feb 19 13:45:57 crc kubenswrapper[4861]: I0219 13:45:57.339217 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ltz8s" 
event={"ID":"50eba1fd-41c7-43ed-8489-6be52f99a8f3","Type":"ContainerDied","Data":"4ddb0f0627797e585cdc73fa87c70dc914b6e05cb2d485186f4a94f5281d59b2"} Feb 19 13:45:58 crc kubenswrapper[4861]: I0219 13:45:58.351680 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ltz8s" event={"ID":"50eba1fd-41c7-43ed-8489-6be52f99a8f3","Type":"ContainerStarted","Data":"da5c3b5fedb8ba71dde68e4a2fbc532dbac982c54244a0e41c3a45a18560e05a"} Feb 19 13:45:58 crc kubenswrapper[4861]: I0219 13:45:58.378854 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ltz8s" podStartSLOduration=2.927356577 podStartE2EDuration="5.378827777s" podCreationTimestamp="2026-02-19 13:45:53 +0000 UTC" firstStartedPulling="2026-02-19 13:45:55.322781809 +0000 UTC m=+2169.983885027" lastFinishedPulling="2026-02-19 13:45:57.774252959 +0000 UTC m=+2172.435356227" observedRunningTime="2026-02-19 13:45:58.375692521 +0000 UTC m=+2173.036795779" watchObservedRunningTime="2026-02-19 13:45:58.378827777 +0000 UTC m=+2173.039931015" Feb 19 13:46:00 crc kubenswrapper[4861]: I0219 13:46:00.121791 4861 scope.go:117] "RemoveContainer" containerID="4f7f19be40a8bd68df3887367c20fd6ec222fa1b6b9e9fd45a86865339755802" Feb 19 13:46:03 crc kubenswrapper[4861]: I0219 13:46:03.611280 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ltz8s" Feb 19 13:46:03 crc kubenswrapper[4861]: I0219 13:46:03.611726 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ltz8s" Feb 19 13:46:03 crc kubenswrapper[4861]: I0219 13:46:03.669225 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ltz8s" Feb 19 13:46:03 crc kubenswrapper[4861]: I0219 13:46:03.835056 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:46:03 crc kubenswrapper[4861]: I0219 13:46:03.835164 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:46:04 crc kubenswrapper[4861]: I0219 13:46:04.480631 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ltz8s" Feb 19 13:46:04 crc kubenswrapper[4861]: I0219 13:46:04.578335 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ltz8s"] Feb 19 13:46:06 crc kubenswrapper[4861]: I0219 13:46:06.421631 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ltz8s" podUID="50eba1fd-41c7-43ed-8489-6be52f99a8f3" containerName="registry-server" containerID="cri-o://da5c3b5fedb8ba71dde68e4a2fbc532dbac982c54244a0e41c3a45a18560e05a" gracePeriod=2 Feb 19 13:46:07 crc kubenswrapper[4861]: I0219 13:46:07.433162 4861 generic.go:334] "Generic (PLEG): container finished" podID="50eba1fd-41c7-43ed-8489-6be52f99a8f3" containerID="da5c3b5fedb8ba71dde68e4a2fbc532dbac982c54244a0e41c3a45a18560e05a" exitCode=0 Feb 19 13:46:07 crc kubenswrapper[4861]: I0219 13:46:07.433219 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ltz8s" event={"ID":"50eba1fd-41c7-43ed-8489-6be52f99a8f3","Type":"ContainerDied","Data":"da5c3b5fedb8ba71dde68e4a2fbc532dbac982c54244a0e41c3a45a18560e05a"} Feb 19 13:46:07 crc kubenswrapper[4861]: I0219 13:46:07.788698 4861 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ltz8s" Feb 19 13:46:07 crc kubenswrapper[4861]: I0219 13:46:07.890295 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99r7g\" (UniqueName: \"kubernetes.io/projected/50eba1fd-41c7-43ed-8489-6be52f99a8f3-kube-api-access-99r7g\") pod \"50eba1fd-41c7-43ed-8489-6be52f99a8f3\" (UID: \"50eba1fd-41c7-43ed-8489-6be52f99a8f3\") " Feb 19 13:46:07 crc kubenswrapper[4861]: I0219 13:46:07.890459 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50eba1fd-41c7-43ed-8489-6be52f99a8f3-catalog-content\") pod \"50eba1fd-41c7-43ed-8489-6be52f99a8f3\" (UID: \"50eba1fd-41c7-43ed-8489-6be52f99a8f3\") " Feb 19 13:46:07 crc kubenswrapper[4861]: I0219 13:46:07.890558 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50eba1fd-41c7-43ed-8489-6be52f99a8f3-utilities\") pod \"50eba1fd-41c7-43ed-8489-6be52f99a8f3\" (UID: \"50eba1fd-41c7-43ed-8489-6be52f99a8f3\") " Feb 19 13:46:07 crc kubenswrapper[4861]: I0219 13:46:07.892266 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50eba1fd-41c7-43ed-8489-6be52f99a8f3-utilities" (OuterVolumeSpecName: "utilities") pod "50eba1fd-41c7-43ed-8489-6be52f99a8f3" (UID: "50eba1fd-41c7-43ed-8489-6be52f99a8f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:46:07 crc kubenswrapper[4861]: I0219 13:46:07.900041 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50eba1fd-41c7-43ed-8489-6be52f99a8f3-kube-api-access-99r7g" (OuterVolumeSpecName: "kube-api-access-99r7g") pod "50eba1fd-41c7-43ed-8489-6be52f99a8f3" (UID: "50eba1fd-41c7-43ed-8489-6be52f99a8f3"). 
InnerVolumeSpecName "kube-api-access-99r7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:46:07 crc kubenswrapper[4861]: I0219 13:46:07.969718 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50eba1fd-41c7-43ed-8489-6be52f99a8f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "50eba1fd-41c7-43ed-8489-6be52f99a8f3" (UID: "50eba1fd-41c7-43ed-8489-6be52f99a8f3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:46:07 crc kubenswrapper[4861]: I0219 13:46:07.991895 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50eba1fd-41c7-43ed-8489-6be52f99a8f3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:46:07 crc kubenswrapper[4861]: I0219 13:46:07.991930 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50eba1fd-41c7-43ed-8489-6be52f99a8f3-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:46:07 crc kubenswrapper[4861]: I0219 13:46:07.991944 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99r7g\" (UniqueName: \"kubernetes.io/projected/50eba1fd-41c7-43ed-8489-6be52f99a8f3-kube-api-access-99r7g\") on node \"crc\" DevicePath \"\"" Feb 19 13:46:08 crc kubenswrapper[4861]: I0219 13:46:08.444923 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ltz8s" event={"ID":"50eba1fd-41c7-43ed-8489-6be52f99a8f3","Type":"ContainerDied","Data":"f63659794cf9dce24b403b29c36a2431013a66299c68dc29889ca6a899f06080"} Feb 19 13:46:08 crc kubenswrapper[4861]: I0219 13:46:08.445328 4861 scope.go:117] "RemoveContainer" containerID="da5c3b5fedb8ba71dde68e4a2fbc532dbac982c54244a0e41c3a45a18560e05a" Feb 19 13:46:08 crc kubenswrapper[4861]: I0219 13:46:08.445037 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ltz8s" Feb 19 13:46:08 crc kubenswrapper[4861]: I0219 13:46:08.466261 4861 scope.go:117] "RemoveContainer" containerID="4ddb0f0627797e585cdc73fa87c70dc914b6e05cb2d485186f4a94f5281d59b2" Feb 19 13:46:08 crc kubenswrapper[4861]: I0219 13:46:08.473636 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ltz8s"] Feb 19 13:46:08 crc kubenswrapper[4861]: I0219 13:46:08.484646 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ltz8s"] Feb 19 13:46:08 crc kubenswrapper[4861]: I0219 13:46:08.502606 4861 scope.go:117] "RemoveContainer" containerID="51b1c14e106c7c33a4881f705002925310676d4d70bb0ba071703aac1552a0b7" Feb 19 13:46:09 crc kubenswrapper[4861]: I0219 13:46:09.989096 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50eba1fd-41c7-43ed-8489-6be52f99a8f3" path="/var/lib/kubelet/pods/50eba1fd-41c7-43ed-8489-6be52f99a8f3/volumes" Feb 19 13:46:33 crc kubenswrapper[4861]: I0219 13:46:33.835362 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:46:33 crc kubenswrapper[4861]: I0219 13:46:33.836096 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:46:33 crc kubenswrapper[4861]: I0219 13:46:33.836196 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 
13:46:33 crc kubenswrapper[4861]: I0219 13:46:33.837238 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cae0fcad89bda3a6e1d6e8d16c32771de59753e8593c9b7e3c33e31437ee3c81"} pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 13:46:33 crc kubenswrapper[4861]: I0219 13:46:33.837364 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" containerID="cri-o://cae0fcad89bda3a6e1d6e8d16c32771de59753e8593c9b7e3c33e31437ee3c81" gracePeriod=600 Feb 19 13:46:34 crc kubenswrapper[4861]: I0219 13:46:34.686727 4861 generic.go:334] "Generic (PLEG): container finished" podID="478e6971-05ac-43f2-99a2-cd93644c6227" containerID="cae0fcad89bda3a6e1d6e8d16c32771de59753e8593c9b7e3c33e31437ee3c81" exitCode=0 Feb 19 13:46:34 crc kubenswrapper[4861]: I0219 13:46:34.686804 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerDied","Data":"cae0fcad89bda3a6e1d6e8d16c32771de59753e8593c9b7e3c33e31437ee3c81"} Feb 19 13:46:34 crc kubenswrapper[4861]: I0219 13:46:34.687758 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"590a99ae67b14b4574bf1fc98b13705ab453c0496ac858e125e529c6fa5c3421"} Feb 19 13:46:34 crc kubenswrapper[4861]: I0219 13:46:34.687782 4861 scope.go:117] "RemoveContainer" containerID="d1328e3bb9a7119a5d11f8614f4dbd03d8c4f1ab29a669ab71e4d9be92fd0213" Feb 19 13:49:03 crc kubenswrapper[4861]: I0219 13:49:03.835075 4861 
patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:49:03 crc kubenswrapper[4861]: I0219 13:49:03.836004 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:49:33 crc kubenswrapper[4861]: I0219 13:49:33.834857 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:49:33 crc kubenswrapper[4861]: I0219 13:49:33.835633 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:50:00 crc kubenswrapper[4861]: I0219 13:50:00.256669 4861 scope.go:117] "RemoveContainer" containerID="d6ea47ac62c6c3b6df7f4b68ba7132111d6a5226c4c709eefbfb87ee863e7ca2" Feb 19 13:50:00 crc kubenswrapper[4861]: I0219 13:50:00.285464 4861 scope.go:117] "RemoveContainer" containerID="066cc29b49e7ea50a38ed93e881136362a7067d7a4d8a26ffb124e6e88691e0b" Feb 19 13:50:00 crc kubenswrapper[4861]: I0219 13:50:00.326662 4861 scope.go:117] "RemoveContainer" containerID="6944c1e2f3b2db1beece837126c41827b290a4cf349ec5ba8444e8e6b5aa6ab4" Feb 19 13:50:03 crc 
kubenswrapper[4861]: I0219 13:50:03.835254 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:50:03 crc kubenswrapper[4861]: I0219 13:50:03.836307 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:50:03 crc kubenswrapper[4861]: I0219 13:50:03.836406 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 13:50:03 crc kubenswrapper[4861]: I0219 13:50:03.838148 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"590a99ae67b14b4574bf1fc98b13705ab453c0496ac858e125e529c6fa5c3421"} pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 13:50:03 crc kubenswrapper[4861]: I0219 13:50:03.838364 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" containerID="cri-o://590a99ae67b14b4574bf1fc98b13705ab453c0496ac858e125e529c6fa5c3421" gracePeriod=600 Feb 19 13:50:03 crc kubenswrapper[4861]: E0219 13:50:03.976026 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:50:04 crc kubenswrapper[4861]: I0219 13:50:04.682542 4861 generic.go:334] "Generic (PLEG): container finished" podID="478e6971-05ac-43f2-99a2-cd93644c6227" containerID="590a99ae67b14b4574bf1fc98b13705ab453c0496ac858e125e529c6fa5c3421" exitCode=0 Feb 19 13:50:04 crc kubenswrapper[4861]: I0219 13:50:04.682594 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerDied","Data":"590a99ae67b14b4574bf1fc98b13705ab453c0496ac858e125e529c6fa5c3421"} Feb 19 13:50:04 crc kubenswrapper[4861]: I0219 13:50:04.682657 4861 scope.go:117] "RemoveContainer" containerID="cae0fcad89bda3a6e1d6e8d16c32771de59753e8593c9b7e3c33e31437ee3c81" Feb 19 13:50:04 crc kubenswrapper[4861]: I0219 13:50:04.683118 4861 scope.go:117] "RemoveContainer" containerID="590a99ae67b14b4574bf1fc98b13705ab453c0496ac858e125e529c6fa5c3421" Feb 19 13:50:04 crc kubenswrapper[4861]: E0219 13:50:04.683359 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:50:15 crc kubenswrapper[4861]: I0219 13:50:15.989064 4861 scope.go:117] "RemoveContainer" containerID="590a99ae67b14b4574bf1fc98b13705ab453c0496ac858e125e529c6fa5c3421" Feb 19 13:50:15 crc kubenswrapper[4861]: E0219 13:50:15.990160 4861 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:50:29 crc kubenswrapper[4861]: I0219 13:50:29.977723 4861 scope.go:117] "RemoveContainer" containerID="590a99ae67b14b4574bf1fc98b13705ab453c0496ac858e125e529c6fa5c3421" Feb 19 13:50:29 crc kubenswrapper[4861]: E0219 13:50:29.978776 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:50:43 crc kubenswrapper[4861]: I0219 13:50:43.977746 4861 scope.go:117] "RemoveContainer" containerID="590a99ae67b14b4574bf1fc98b13705ab453c0496ac858e125e529c6fa5c3421" Feb 19 13:50:43 crc kubenswrapper[4861]: E0219 13:50:43.979629 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:50:57 crc kubenswrapper[4861]: I0219 13:50:57.977356 4861 scope.go:117] "RemoveContainer" containerID="590a99ae67b14b4574bf1fc98b13705ab453c0496ac858e125e529c6fa5c3421" Feb 19 13:50:57 crc kubenswrapper[4861]: E0219 13:50:57.980845 4861 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:51:10 crc kubenswrapper[4861]: I0219 13:51:10.977644 4861 scope.go:117] "RemoveContainer" containerID="590a99ae67b14b4574bf1fc98b13705ab453c0496ac858e125e529c6fa5c3421" Feb 19 13:51:10 crc kubenswrapper[4861]: E0219 13:51:10.978919 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:51:22 crc kubenswrapper[4861]: I0219 13:51:22.977731 4861 scope.go:117] "RemoveContainer" containerID="590a99ae67b14b4574bf1fc98b13705ab453c0496ac858e125e529c6fa5c3421" Feb 19 13:51:22 crc kubenswrapper[4861]: E0219 13:51:22.978943 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:51:37 crc kubenswrapper[4861]: I0219 13:51:37.978812 4861 scope.go:117] "RemoveContainer" containerID="590a99ae67b14b4574bf1fc98b13705ab453c0496ac858e125e529c6fa5c3421" Feb 19 13:51:37 crc kubenswrapper[4861]: E0219 13:51:37.979888 4861 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:51:52 crc kubenswrapper[4861]: I0219 13:51:52.978872 4861 scope.go:117] "RemoveContainer" containerID="590a99ae67b14b4574bf1fc98b13705ab453c0496ac858e125e529c6fa5c3421" Feb 19 13:51:52 crc kubenswrapper[4861]: E0219 13:51:52.980288 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:52:05 crc kubenswrapper[4861]: I0219 13:52:05.992809 4861 scope.go:117] "RemoveContainer" containerID="590a99ae67b14b4574bf1fc98b13705ab453c0496ac858e125e529c6fa5c3421" Feb 19 13:52:05 crc kubenswrapper[4861]: E0219 13:52:05.994081 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:52:13 crc kubenswrapper[4861]: I0219 13:52:13.902090 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d6dbl"] Feb 19 13:52:13 crc kubenswrapper[4861]: E0219 
13:52:13.902811 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50eba1fd-41c7-43ed-8489-6be52f99a8f3" containerName="extract-content" Feb 19 13:52:13 crc kubenswrapper[4861]: I0219 13:52:13.902827 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="50eba1fd-41c7-43ed-8489-6be52f99a8f3" containerName="extract-content" Feb 19 13:52:13 crc kubenswrapper[4861]: E0219 13:52:13.902849 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50eba1fd-41c7-43ed-8489-6be52f99a8f3" containerName="extract-utilities" Feb 19 13:52:13 crc kubenswrapper[4861]: I0219 13:52:13.902855 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="50eba1fd-41c7-43ed-8489-6be52f99a8f3" containerName="extract-utilities" Feb 19 13:52:13 crc kubenswrapper[4861]: E0219 13:52:13.902872 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50eba1fd-41c7-43ed-8489-6be52f99a8f3" containerName="registry-server" Feb 19 13:52:13 crc kubenswrapper[4861]: I0219 13:52:13.902878 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="50eba1fd-41c7-43ed-8489-6be52f99a8f3" containerName="registry-server" Feb 19 13:52:13 crc kubenswrapper[4861]: I0219 13:52:13.903025 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="50eba1fd-41c7-43ed-8489-6be52f99a8f3" containerName="registry-server" Feb 19 13:52:13 crc kubenswrapper[4861]: I0219 13:52:13.904041 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d6dbl" Feb 19 13:52:13 crc kubenswrapper[4861]: I0219 13:52:13.928183 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d6dbl"] Feb 19 13:52:14 crc kubenswrapper[4861]: I0219 13:52:14.009633 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9v6b\" (UniqueName: \"kubernetes.io/projected/618a5524-80ae-4e80-ba9a-3ad5011f2aeb-kube-api-access-m9v6b\") pod \"certified-operators-d6dbl\" (UID: \"618a5524-80ae-4e80-ba9a-3ad5011f2aeb\") " pod="openshift-marketplace/certified-operators-d6dbl" Feb 19 13:52:14 crc kubenswrapper[4861]: I0219 13:52:14.010089 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/618a5524-80ae-4e80-ba9a-3ad5011f2aeb-utilities\") pod \"certified-operators-d6dbl\" (UID: \"618a5524-80ae-4e80-ba9a-3ad5011f2aeb\") " pod="openshift-marketplace/certified-operators-d6dbl" Feb 19 13:52:14 crc kubenswrapper[4861]: I0219 13:52:14.010184 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/618a5524-80ae-4e80-ba9a-3ad5011f2aeb-catalog-content\") pod \"certified-operators-d6dbl\" (UID: \"618a5524-80ae-4e80-ba9a-3ad5011f2aeb\") " pod="openshift-marketplace/certified-operators-d6dbl" Feb 19 13:52:14 crc kubenswrapper[4861]: I0219 13:52:14.111703 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9v6b\" (UniqueName: \"kubernetes.io/projected/618a5524-80ae-4e80-ba9a-3ad5011f2aeb-kube-api-access-m9v6b\") pod \"certified-operators-d6dbl\" (UID: \"618a5524-80ae-4e80-ba9a-3ad5011f2aeb\") " pod="openshift-marketplace/certified-operators-d6dbl" Feb 19 13:52:14 crc kubenswrapper[4861]: I0219 13:52:14.111859 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/618a5524-80ae-4e80-ba9a-3ad5011f2aeb-utilities\") pod \"certified-operators-d6dbl\" (UID: \"618a5524-80ae-4e80-ba9a-3ad5011f2aeb\") " pod="openshift-marketplace/certified-operators-d6dbl" Feb 19 13:52:14 crc kubenswrapper[4861]: I0219 13:52:14.111945 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/618a5524-80ae-4e80-ba9a-3ad5011f2aeb-catalog-content\") pod \"certified-operators-d6dbl\" (UID: \"618a5524-80ae-4e80-ba9a-3ad5011f2aeb\") " pod="openshift-marketplace/certified-operators-d6dbl" Feb 19 13:52:14 crc kubenswrapper[4861]: I0219 13:52:14.112463 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/618a5524-80ae-4e80-ba9a-3ad5011f2aeb-utilities\") pod \"certified-operators-d6dbl\" (UID: \"618a5524-80ae-4e80-ba9a-3ad5011f2aeb\") " pod="openshift-marketplace/certified-operators-d6dbl" Feb 19 13:52:14 crc kubenswrapper[4861]: I0219 13:52:14.112738 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/618a5524-80ae-4e80-ba9a-3ad5011f2aeb-catalog-content\") pod \"certified-operators-d6dbl\" (UID: \"618a5524-80ae-4e80-ba9a-3ad5011f2aeb\") " pod="openshift-marketplace/certified-operators-d6dbl" Feb 19 13:52:14 crc kubenswrapper[4861]: I0219 13:52:14.147266 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9v6b\" (UniqueName: \"kubernetes.io/projected/618a5524-80ae-4e80-ba9a-3ad5011f2aeb-kube-api-access-m9v6b\") pod \"certified-operators-d6dbl\" (UID: \"618a5524-80ae-4e80-ba9a-3ad5011f2aeb\") " pod="openshift-marketplace/certified-operators-d6dbl" Feb 19 13:52:14 crc kubenswrapper[4861]: I0219 13:52:14.244183 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d6dbl" Feb 19 13:52:14 crc kubenswrapper[4861]: I0219 13:52:14.521847 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d6dbl"] Feb 19 13:52:15 crc kubenswrapper[4861]: I0219 13:52:15.128663 4861 generic.go:334] "Generic (PLEG): container finished" podID="618a5524-80ae-4e80-ba9a-3ad5011f2aeb" containerID="ca5348f4e6e6c207754e1c37fd0fed61e6d2f260755053fb85ec2de09770ef75" exitCode=0 Feb 19 13:52:15 crc kubenswrapper[4861]: I0219 13:52:15.128743 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d6dbl" event={"ID":"618a5524-80ae-4e80-ba9a-3ad5011f2aeb","Type":"ContainerDied","Data":"ca5348f4e6e6c207754e1c37fd0fed61e6d2f260755053fb85ec2de09770ef75"} Feb 19 13:52:15 crc kubenswrapper[4861]: I0219 13:52:15.128918 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d6dbl" event={"ID":"618a5524-80ae-4e80-ba9a-3ad5011f2aeb","Type":"ContainerStarted","Data":"3725126677fe207627dc7d723a09b068dd19798e6722a251b926eab63dccbba9"} Feb 19 13:52:15 crc kubenswrapper[4861]: I0219 13:52:15.130008 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 13:52:17 crc kubenswrapper[4861]: I0219 13:52:17.151771 4861 generic.go:334] "Generic (PLEG): container finished" podID="618a5524-80ae-4e80-ba9a-3ad5011f2aeb" containerID="1929105218b0e59def0df443e24f93282c9eede3871262fb66617ea7413129d8" exitCode=0 Feb 19 13:52:17 crc kubenswrapper[4861]: I0219 13:52:17.152009 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d6dbl" event={"ID":"618a5524-80ae-4e80-ba9a-3ad5011f2aeb","Type":"ContainerDied","Data":"1929105218b0e59def0df443e24f93282c9eede3871262fb66617ea7413129d8"} Feb 19 13:52:18 crc kubenswrapper[4861]: I0219 13:52:18.159371 4861 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-d6dbl" event={"ID":"618a5524-80ae-4e80-ba9a-3ad5011f2aeb","Type":"ContainerStarted","Data":"9257a788de93a4879d5317b0c91ab74f4093019dfed76c6efa74e19097b73d24"} Feb 19 13:52:18 crc kubenswrapper[4861]: I0219 13:52:18.184745 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d6dbl" podStartSLOduration=2.568967788 podStartE2EDuration="5.184727298s" podCreationTimestamp="2026-02-19 13:52:13 +0000 UTC" firstStartedPulling="2026-02-19 13:52:15.129814183 +0000 UTC m=+2549.790917411" lastFinishedPulling="2026-02-19 13:52:17.745573653 +0000 UTC m=+2552.406676921" observedRunningTime="2026-02-19 13:52:18.182775335 +0000 UTC m=+2552.843878563" watchObservedRunningTime="2026-02-19 13:52:18.184727298 +0000 UTC m=+2552.845830536" Feb 19 13:52:18 crc kubenswrapper[4861]: I0219 13:52:18.977959 4861 scope.go:117] "RemoveContainer" containerID="590a99ae67b14b4574bf1fc98b13705ab453c0496ac858e125e529c6fa5c3421" Feb 19 13:52:18 crc kubenswrapper[4861]: E0219 13:52:18.978186 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:52:24 crc kubenswrapper[4861]: I0219 13:52:24.244759 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d6dbl" Feb 19 13:52:24 crc kubenswrapper[4861]: I0219 13:52:24.245257 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d6dbl" Feb 19 13:52:24 crc kubenswrapper[4861]: I0219 13:52:24.292178 4861 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d6dbl" Feb 19 13:52:25 crc kubenswrapper[4861]: I0219 13:52:25.293033 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d6dbl" Feb 19 13:52:25 crc kubenswrapper[4861]: I0219 13:52:25.361678 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d6dbl"] Feb 19 13:52:27 crc kubenswrapper[4861]: I0219 13:52:27.232463 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d6dbl" podUID="618a5524-80ae-4e80-ba9a-3ad5011f2aeb" containerName="registry-server" containerID="cri-o://9257a788de93a4879d5317b0c91ab74f4093019dfed76c6efa74e19097b73d24" gracePeriod=2 Feb 19 13:52:27 crc kubenswrapper[4861]: I0219 13:52:27.798315 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d6dbl" Feb 19 13:52:27 crc kubenswrapper[4861]: I0219 13:52:27.931947 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/618a5524-80ae-4e80-ba9a-3ad5011f2aeb-catalog-content\") pod \"618a5524-80ae-4e80-ba9a-3ad5011f2aeb\" (UID: \"618a5524-80ae-4e80-ba9a-3ad5011f2aeb\") " Feb 19 13:52:27 crc kubenswrapper[4861]: I0219 13:52:27.932064 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9v6b\" (UniqueName: \"kubernetes.io/projected/618a5524-80ae-4e80-ba9a-3ad5011f2aeb-kube-api-access-m9v6b\") pod \"618a5524-80ae-4e80-ba9a-3ad5011f2aeb\" (UID: \"618a5524-80ae-4e80-ba9a-3ad5011f2aeb\") " Feb 19 13:52:27 crc kubenswrapper[4861]: I0219 13:52:27.932277 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/618a5524-80ae-4e80-ba9a-3ad5011f2aeb-utilities\") pod \"618a5524-80ae-4e80-ba9a-3ad5011f2aeb\" (UID: \"618a5524-80ae-4e80-ba9a-3ad5011f2aeb\") " Feb 19 13:52:27 crc kubenswrapper[4861]: I0219 13:52:27.934046 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/618a5524-80ae-4e80-ba9a-3ad5011f2aeb-utilities" (OuterVolumeSpecName: "utilities") pod "618a5524-80ae-4e80-ba9a-3ad5011f2aeb" (UID: "618a5524-80ae-4e80-ba9a-3ad5011f2aeb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:52:27 crc kubenswrapper[4861]: I0219 13:52:27.941015 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/618a5524-80ae-4e80-ba9a-3ad5011f2aeb-kube-api-access-m9v6b" (OuterVolumeSpecName: "kube-api-access-m9v6b") pod "618a5524-80ae-4e80-ba9a-3ad5011f2aeb" (UID: "618a5524-80ae-4e80-ba9a-3ad5011f2aeb"). InnerVolumeSpecName "kube-api-access-m9v6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:52:28 crc kubenswrapper[4861]: I0219 13:52:28.020523 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/618a5524-80ae-4e80-ba9a-3ad5011f2aeb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "618a5524-80ae-4e80-ba9a-3ad5011f2aeb" (UID: "618a5524-80ae-4e80-ba9a-3ad5011f2aeb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:52:28 crc kubenswrapper[4861]: I0219 13:52:28.034831 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/618a5524-80ae-4e80-ba9a-3ad5011f2aeb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:52:28 crc kubenswrapper[4861]: I0219 13:52:28.034853 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9v6b\" (UniqueName: \"kubernetes.io/projected/618a5524-80ae-4e80-ba9a-3ad5011f2aeb-kube-api-access-m9v6b\") on node \"crc\" DevicePath \"\"" Feb 19 13:52:28 crc kubenswrapper[4861]: I0219 13:52:28.034865 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/618a5524-80ae-4e80-ba9a-3ad5011f2aeb-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:52:28 crc kubenswrapper[4861]: I0219 13:52:28.245668 4861 generic.go:334] "Generic (PLEG): container finished" podID="618a5524-80ae-4e80-ba9a-3ad5011f2aeb" containerID="9257a788de93a4879d5317b0c91ab74f4093019dfed76c6efa74e19097b73d24" exitCode=0 Feb 19 13:52:28 crc kubenswrapper[4861]: I0219 13:52:28.245732 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d6dbl" event={"ID":"618a5524-80ae-4e80-ba9a-3ad5011f2aeb","Type":"ContainerDied","Data":"9257a788de93a4879d5317b0c91ab74f4093019dfed76c6efa74e19097b73d24"} Feb 19 13:52:28 crc kubenswrapper[4861]: I0219 13:52:28.245764 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d6dbl" Feb 19 13:52:28 crc kubenswrapper[4861]: I0219 13:52:28.245784 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d6dbl" event={"ID":"618a5524-80ae-4e80-ba9a-3ad5011f2aeb","Type":"ContainerDied","Data":"3725126677fe207627dc7d723a09b068dd19798e6722a251b926eab63dccbba9"} Feb 19 13:52:28 crc kubenswrapper[4861]: I0219 13:52:28.245814 4861 scope.go:117] "RemoveContainer" containerID="9257a788de93a4879d5317b0c91ab74f4093019dfed76c6efa74e19097b73d24" Feb 19 13:52:28 crc kubenswrapper[4861]: I0219 13:52:28.271854 4861 scope.go:117] "RemoveContainer" containerID="1929105218b0e59def0df443e24f93282c9eede3871262fb66617ea7413129d8" Feb 19 13:52:28 crc kubenswrapper[4861]: I0219 13:52:28.292688 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d6dbl"] Feb 19 13:52:28 crc kubenswrapper[4861]: I0219 13:52:28.299764 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d6dbl"] Feb 19 13:52:28 crc kubenswrapper[4861]: I0219 13:52:28.311051 4861 scope.go:117] "RemoveContainer" containerID="ca5348f4e6e6c207754e1c37fd0fed61e6d2f260755053fb85ec2de09770ef75" Feb 19 13:52:28 crc kubenswrapper[4861]: I0219 13:52:28.346449 4861 scope.go:117] "RemoveContainer" containerID="9257a788de93a4879d5317b0c91ab74f4093019dfed76c6efa74e19097b73d24" Feb 19 13:52:28 crc kubenswrapper[4861]: E0219 13:52:28.347071 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9257a788de93a4879d5317b0c91ab74f4093019dfed76c6efa74e19097b73d24\": container with ID starting with 9257a788de93a4879d5317b0c91ab74f4093019dfed76c6efa74e19097b73d24 not found: ID does not exist" containerID="9257a788de93a4879d5317b0c91ab74f4093019dfed76c6efa74e19097b73d24" Feb 19 13:52:28 crc kubenswrapper[4861]: I0219 13:52:28.347125 4861 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9257a788de93a4879d5317b0c91ab74f4093019dfed76c6efa74e19097b73d24"} err="failed to get container status \"9257a788de93a4879d5317b0c91ab74f4093019dfed76c6efa74e19097b73d24\": rpc error: code = NotFound desc = could not find container \"9257a788de93a4879d5317b0c91ab74f4093019dfed76c6efa74e19097b73d24\": container with ID starting with 9257a788de93a4879d5317b0c91ab74f4093019dfed76c6efa74e19097b73d24 not found: ID does not exist" Feb 19 13:52:28 crc kubenswrapper[4861]: I0219 13:52:28.347163 4861 scope.go:117] "RemoveContainer" containerID="1929105218b0e59def0df443e24f93282c9eede3871262fb66617ea7413129d8" Feb 19 13:52:28 crc kubenswrapper[4861]: E0219 13:52:28.348299 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1929105218b0e59def0df443e24f93282c9eede3871262fb66617ea7413129d8\": container with ID starting with 1929105218b0e59def0df443e24f93282c9eede3871262fb66617ea7413129d8 not found: ID does not exist" containerID="1929105218b0e59def0df443e24f93282c9eede3871262fb66617ea7413129d8" Feb 19 13:52:28 crc kubenswrapper[4861]: I0219 13:52:28.348355 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1929105218b0e59def0df443e24f93282c9eede3871262fb66617ea7413129d8"} err="failed to get container status \"1929105218b0e59def0df443e24f93282c9eede3871262fb66617ea7413129d8\": rpc error: code = NotFound desc = could not find container \"1929105218b0e59def0df443e24f93282c9eede3871262fb66617ea7413129d8\": container with ID starting with 1929105218b0e59def0df443e24f93282c9eede3871262fb66617ea7413129d8 not found: ID does not exist" Feb 19 13:52:28 crc kubenswrapper[4861]: I0219 13:52:28.348390 4861 scope.go:117] "RemoveContainer" containerID="ca5348f4e6e6c207754e1c37fd0fed61e6d2f260755053fb85ec2de09770ef75" Feb 19 13:52:28 crc kubenswrapper[4861]: E0219 
13:52:28.348896 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca5348f4e6e6c207754e1c37fd0fed61e6d2f260755053fb85ec2de09770ef75\": container with ID starting with ca5348f4e6e6c207754e1c37fd0fed61e6d2f260755053fb85ec2de09770ef75 not found: ID does not exist" containerID="ca5348f4e6e6c207754e1c37fd0fed61e6d2f260755053fb85ec2de09770ef75" Feb 19 13:52:28 crc kubenswrapper[4861]: I0219 13:52:28.348935 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca5348f4e6e6c207754e1c37fd0fed61e6d2f260755053fb85ec2de09770ef75"} err="failed to get container status \"ca5348f4e6e6c207754e1c37fd0fed61e6d2f260755053fb85ec2de09770ef75\": rpc error: code = NotFound desc = could not find container \"ca5348f4e6e6c207754e1c37fd0fed61e6d2f260755053fb85ec2de09770ef75\": container with ID starting with ca5348f4e6e6c207754e1c37fd0fed61e6d2f260755053fb85ec2de09770ef75 not found: ID does not exist" Feb 19 13:52:29 crc kubenswrapper[4861]: I0219 13:52:29.997211 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="618a5524-80ae-4e80-ba9a-3ad5011f2aeb" path="/var/lib/kubelet/pods/618a5524-80ae-4e80-ba9a-3ad5011f2aeb/volumes" Feb 19 13:52:31 crc kubenswrapper[4861]: I0219 13:52:31.977930 4861 scope.go:117] "RemoveContainer" containerID="590a99ae67b14b4574bf1fc98b13705ab453c0496ac858e125e529c6fa5c3421" Feb 19 13:52:31 crc kubenswrapper[4861]: E0219 13:52:31.978498 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:52:44 crc kubenswrapper[4861]: I0219 13:52:44.978164 
4861 scope.go:117] "RemoveContainer" containerID="590a99ae67b14b4574bf1fc98b13705ab453c0496ac858e125e529c6fa5c3421" Feb 19 13:52:44 crc kubenswrapper[4861]: E0219 13:52:44.979154 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:52:56 crc kubenswrapper[4861]: I0219 13:52:56.992686 4861 scope.go:117] "RemoveContainer" containerID="590a99ae67b14b4574bf1fc98b13705ab453c0496ac858e125e529c6fa5c3421" Feb 19 13:52:56 crc kubenswrapper[4861]: E0219 13:52:56.993340 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:53:08 crc kubenswrapper[4861]: I0219 13:53:08.977757 4861 scope.go:117] "RemoveContainer" containerID="590a99ae67b14b4574bf1fc98b13705ab453c0496ac858e125e529c6fa5c3421" Feb 19 13:53:08 crc kubenswrapper[4861]: E0219 13:53:08.978849 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:53:21 crc kubenswrapper[4861]: I0219 
13:53:21.977259 4861 scope.go:117] "RemoveContainer" containerID="590a99ae67b14b4574bf1fc98b13705ab453c0496ac858e125e529c6fa5c3421" Feb 19 13:53:21 crc kubenswrapper[4861]: E0219 13:53:21.979828 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:53:33 crc kubenswrapper[4861]: I0219 13:53:33.976915 4861 scope.go:117] "RemoveContainer" containerID="590a99ae67b14b4574bf1fc98b13705ab453c0496ac858e125e529c6fa5c3421" Feb 19 13:53:33 crc kubenswrapper[4861]: E0219 13:53:33.977791 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:53:48 crc kubenswrapper[4861]: I0219 13:53:48.977394 4861 scope.go:117] "RemoveContainer" containerID="590a99ae67b14b4574bf1fc98b13705ab453c0496ac858e125e529c6fa5c3421" Feb 19 13:53:48 crc kubenswrapper[4861]: E0219 13:53:48.978398 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:54:01 crc 
kubenswrapper[4861]: I0219 13:54:01.978061 4861 scope.go:117] "RemoveContainer" containerID="590a99ae67b14b4574bf1fc98b13705ab453c0496ac858e125e529c6fa5c3421" Feb 19 13:54:01 crc kubenswrapper[4861]: E0219 13:54:01.981192 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:54:16 crc kubenswrapper[4861]: I0219 13:54:16.977690 4861 scope.go:117] "RemoveContainer" containerID="590a99ae67b14b4574bf1fc98b13705ab453c0496ac858e125e529c6fa5c3421" Feb 19 13:54:16 crc kubenswrapper[4861]: E0219 13:54:16.978972 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:54:19 crc kubenswrapper[4861]: I0219 13:54:19.455379 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gwrd8"] Feb 19 13:54:19 crc kubenswrapper[4861]: E0219 13:54:19.455730 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="618a5524-80ae-4e80-ba9a-3ad5011f2aeb" containerName="extract-content" Feb 19 13:54:19 crc kubenswrapper[4861]: I0219 13:54:19.455746 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="618a5524-80ae-4e80-ba9a-3ad5011f2aeb" containerName="extract-content" Feb 19 13:54:19 crc kubenswrapper[4861]: E0219 13:54:19.455759 4861 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="618a5524-80ae-4e80-ba9a-3ad5011f2aeb" containerName="extract-utilities" Feb 19 13:54:19 crc kubenswrapper[4861]: I0219 13:54:19.455767 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="618a5524-80ae-4e80-ba9a-3ad5011f2aeb" containerName="extract-utilities" Feb 19 13:54:19 crc kubenswrapper[4861]: E0219 13:54:19.455785 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="618a5524-80ae-4e80-ba9a-3ad5011f2aeb" containerName="registry-server" Feb 19 13:54:19 crc kubenswrapper[4861]: I0219 13:54:19.455793 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="618a5524-80ae-4e80-ba9a-3ad5011f2aeb" containerName="registry-server" Feb 19 13:54:19 crc kubenswrapper[4861]: I0219 13:54:19.455974 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="618a5524-80ae-4e80-ba9a-3ad5011f2aeb" containerName="registry-server" Feb 19 13:54:19 crc kubenswrapper[4861]: I0219 13:54:19.457424 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gwrd8" Feb 19 13:54:19 crc kubenswrapper[4861]: I0219 13:54:19.490625 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gwrd8"] Feb 19 13:54:19 crc kubenswrapper[4861]: I0219 13:54:19.590706 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00d4710-a045-474d-96c6-30be90a34289-utilities\") pod \"redhat-operators-gwrd8\" (UID: \"e00d4710-a045-474d-96c6-30be90a34289\") " pod="openshift-marketplace/redhat-operators-gwrd8" Feb 19 13:54:19 crc kubenswrapper[4861]: I0219 13:54:19.590892 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00d4710-a045-474d-96c6-30be90a34289-catalog-content\") pod \"redhat-operators-gwrd8\" (UID: \"e00d4710-a045-474d-96c6-30be90a34289\") " pod="openshift-marketplace/redhat-operators-gwrd8" Feb 19 13:54:19 crc kubenswrapper[4861]: I0219 13:54:19.590977 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zksr\" (UniqueName: \"kubernetes.io/projected/e00d4710-a045-474d-96c6-30be90a34289-kube-api-access-4zksr\") pod \"redhat-operators-gwrd8\" (UID: \"e00d4710-a045-474d-96c6-30be90a34289\") " pod="openshift-marketplace/redhat-operators-gwrd8" Feb 19 13:54:19 crc kubenswrapper[4861]: I0219 13:54:19.691978 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zksr\" (UniqueName: \"kubernetes.io/projected/e00d4710-a045-474d-96c6-30be90a34289-kube-api-access-4zksr\") pod \"redhat-operators-gwrd8\" (UID: \"e00d4710-a045-474d-96c6-30be90a34289\") " pod="openshift-marketplace/redhat-operators-gwrd8" Feb 19 13:54:19 crc kubenswrapper[4861]: I0219 13:54:19.692086 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00d4710-a045-474d-96c6-30be90a34289-utilities\") pod \"redhat-operators-gwrd8\" (UID: \"e00d4710-a045-474d-96c6-30be90a34289\") " pod="openshift-marketplace/redhat-operators-gwrd8" Feb 19 13:54:19 crc kubenswrapper[4861]: I0219 13:54:19.692160 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00d4710-a045-474d-96c6-30be90a34289-catalog-content\") pod \"redhat-operators-gwrd8\" (UID: \"e00d4710-a045-474d-96c6-30be90a34289\") " pod="openshift-marketplace/redhat-operators-gwrd8" Feb 19 13:54:19 crc kubenswrapper[4861]: I0219 13:54:19.692689 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00d4710-a045-474d-96c6-30be90a34289-catalog-content\") pod \"redhat-operators-gwrd8\" (UID: \"e00d4710-a045-474d-96c6-30be90a34289\") " pod="openshift-marketplace/redhat-operators-gwrd8" Feb 19 13:54:19 crc kubenswrapper[4861]: I0219 13:54:19.693113 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00d4710-a045-474d-96c6-30be90a34289-utilities\") pod \"redhat-operators-gwrd8\" (UID: \"e00d4710-a045-474d-96c6-30be90a34289\") " pod="openshift-marketplace/redhat-operators-gwrd8" Feb 19 13:54:19 crc kubenswrapper[4861]: I0219 13:54:19.718810 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zksr\" (UniqueName: \"kubernetes.io/projected/e00d4710-a045-474d-96c6-30be90a34289-kube-api-access-4zksr\") pod \"redhat-operators-gwrd8\" (UID: \"e00d4710-a045-474d-96c6-30be90a34289\") " pod="openshift-marketplace/redhat-operators-gwrd8" Feb 19 13:54:19 crc kubenswrapper[4861]: I0219 13:54:19.777623 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gwrd8" Feb 19 13:54:20 crc kubenswrapper[4861]: I0219 13:54:20.309393 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gwrd8"] Feb 19 13:54:21 crc kubenswrapper[4861]: I0219 13:54:21.326491 4861 generic.go:334] "Generic (PLEG): container finished" podID="e00d4710-a045-474d-96c6-30be90a34289" containerID="4f77f84eeadba1e742d181186eae4f3911db4a44b689fd2d581293cd9ca446df" exitCode=0 Feb 19 13:54:21 crc kubenswrapper[4861]: I0219 13:54:21.326634 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gwrd8" event={"ID":"e00d4710-a045-474d-96c6-30be90a34289","Type":"ContainerDied","Data":"4f77f84eeadba1e742d181186eae4f3911db4a44b689fd2d581293cd9ca446df"} Feb 19 13:54:21 crc kubenswrapper[4861]: I0219 13:54:21.326931 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gwrd8" event={"ID":"e00d4710-a045-474d-96c6-30be90a34289","Type":"ContainerStarted","Data":"ab87fbf390eedfc3f5dfb2e1b7036894de1dd31958ac33a9709c14723a27f039"} Feb 19 13:54:22 crc kubenswrapper[4861]: I0219 13:54:22.339828 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gwrd8" event={"ID":"e00d4710-a045-474d-96c6-30be90a34289","Type":"ContainerStarted","Data":"106b4c4a87352a0125b2bfc200efa43cb7bd2ab2119a68533690faccd4fca5c6"} Feb 19 13:54:23 crc kubenswrapper[4861]: I0219 13:54:23.353008 4861 generic.go:334] "Generic (PLEG): container finished" podID="e00d4710-a045-474d-96c6-30be90a34289" containerID="106b4c4a87352a0125b2bfc200efa43cb7bd2ab2119a68533690faccd4fca5c6" exitCode=0 Feb 19 13:54:23 crc kubenswrapper[4861]: I0219 13:54:23.353082 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gwrd8" 
event={"ID":"e00d4710-a045-474d-96c6-30be90a34289","Type":"ContainerDied","Data":"106b4c4a87352a0125b2bfc200efa43cb7bd2ab2119a68533690faccd4fca5c6"} Feb 19 13:54:24 crc kubenswrapper[4861]: I0219 13:54:24.367576 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gwrd8" event={"ID":"e00d4710-a045-474d-96c6-30be90a34289","Type":"ContainerStarted","Data":"405dd48d0d0af2aa7edd2c7067582d3d1702123cdb828e6ad39eaabc4ba3bd81"} Feb 19 13:54:24 crc kubenswrapper[4861]: I0219 13:54:24.397759 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gwrd8" podStartSLOduration=2.959678296 podStartE2EDuration="5.39773908s" podCreationTimestamp="2026-02-19 13:54:19 +0000 UTC" firstStartedPulling="2026-02-19 13:54:21.329694701 +0000 UTC m=+2675.990797929" lastFinishedPulling="2026-02-19 13:54:23.767755475 +0000 UTC m=+2678.428858713" observedRunningTime="2026-02-19 13:54:24.394650296 +0000 UTC m=+2679.055753594" watchObservedRunningTime="2026-02-19 13:54:24.39773908 +0000 UTC m=+2679.058842328" Feb 19 13:54:27 crc kubenswrapper[4861]: I0219 13:54:27.977991 4861 scope.go:117] "RemoveContainer" containerID="590a99ae67b14b4574bf1fc98b13705ab453c0496ac858e125e529c6fa5c3421" Feb 19 13:54:27 crc kubenswrapper[4861]: E0219 13:54:27.978599 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:54:29 crc kubenswrapper[4861]: I0219 13:54:29.778607 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gwrd8" Feb 19 13:54:29 crc kubenswrapper[4861]: 
I0219 13:54:29.779102 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gwrd8" Feb 19 13:54:30 crc kubenswrapper[4861]: I0219 13:54:30.840810 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gwrd8" podUID="e00d4710-a045-474d-96c6-30be90a34289" containerName="registry-server" probeResult="failure" output=< Feb 19 13:54:30 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Feb 19 13:54:30 crc kubenswrapper[4861]: > Feb 19 13:54:39 crc kubenswrapper[4861]: I0219 13:54:39.854894 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gwrd8" Feb 19 13:54:39 crc kubenswrapper[4861]: I0219 13:54:39.939381 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gwrd8" Feb 19 13:54:40 crc kubenswrapper[4861]: I0219 13:54:40.129765 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gwrd8"] Feb 19 13:54:41 crc kubenswrapper[4861]: I0219 13:54:41.530036 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gwrd8" podUID="e00d4710-a045-474d-96c6-30be90a34289" containerName="registry-server" containerID="cri-o://405dd48d0d0af2aa7edd2c7067582d3d1702123cdb828e6ad39eaabc4ba3bd81" gracePeriod=2 Feb 19 13:54:41 crc kubenswrapper[4861]: I0219 13:54:41.977338 4861 scope.go:117] "RemoveContainer" containerID="590a99ae67b14b4574bf1fc98b13705ab453c0496ac858e125e529c6fa5c3421" Feb 19 13:54:41 crc kubenswrapper[4861]: E0219 13:54:41.978181 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:54:42 crc kubenswrapper[4861]: I0219 13:54:42.070745 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gwrd8" Feb 19 13:54:42 crc kubenswrapper[4861]: I0219 13:54:42.123620 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00d4710-a045-474d-96c6-30be90a34289-catalog-content\") pod \"e00d4710-a045-474d-96c6-30be90a34289\" (UID: \"e00d4710-a045-474d-96c6-30be90a34289\") " Feb 19 13:54:42 crc kubenswrapper[4861]: I0219 13:54:42.123789 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00d4710-a045-474d-96c6-30be90a34289-utilities\") pod \"e00d4710-a045-474d-96c6-30be90a34289\" (UID: \"e00d4710-a045-474d-96c6-30be90a34289\") " Feb 19 13:54:42 crc kubenswrapper[4861]: I0219 13:54:42.123844 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zksr\" (UniqueName: \"kubernetes.io/projected/e00d4710-a045-474d-96c6-30be90a34289-kube-api-access-4zksr\") pod \"e00d4710-a045-474d-96c6-30be90a34289\" (UID: \"e00d4710-a045-474d-96c6-30be90a34289\") " Feb 19 13:54:42 crc kubenswrapper[4861]: I0219 13:54:42.125474 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e00d4710-a045-474d-96c6-30be90a34289-utilities" (OuterVolumeSpecName: "utilities") pod "e00d4710-a045-474d-96c6-30be90a34289" (UID: "e00d4710-a045-474d-96c6-30be90a34289"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:54:42 crc kubenswrapper[4861]: I0219 13:54:42.138684 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e00d4710-a045-474d-96c6-30be90a34289-kube-api-access-4zksr" (OuterVolumeSpecName: "kube-api-access-4zksr") pod "e00d4710-a045-474d-96c6-30be90a34289" (UID: "e00d4710-a045-474d-96c6-30be90a34289"). InnerVolumeSpecName "kube-api-access-4zksr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:54:42 crc kubenswrapper[4861]: I0219 13:54:42.225954 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00d4710-a045-474d-96c6-30be90a34289-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:54:42 crc kubenswrapper[4861]: I0219 13:54:42.226329 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zksr\" (UniqueName: \"kubernetes.io/projected/e00d4710-a045-474d-96c6-30be90a34289-kube-api-access-4zksr\") on node \"crc\" DevicePath \"\"" Feb 19 13:54:42 crc kubenswrapper[4861]: I0219 13:54:42.320923 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e00d4710-a045-474d-96c6-30be90a34289-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e00d4710-a045-474d-96c6-30be90a34289" (UID: "e00d4710-a045-474d-96c6-30be90a34289"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:54:42 crc kubenswrapper[4861]: I0219 13:54:42.328224 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00d4710-a045-474d-96c6-30be90a34289-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:54:42 crc kubenswrapper[4861]: I0219 13:54:42.542869 4861 generic.go:334] "Generic (PLEG): container finished" podID="e00d4710-a045-474d-96c6-30be90a34289" containerID="405dd48d0d0af2aa7edd2c7067582d3d1702123cdb828e6ad39eaabc4ba3bd81" exitCode=0 Feb 19 13:54:42 crc kubenswrapper[4861]: I0219 13:54:42.542923 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gwrd8" event={"ID":"e00d4710-a045-474d-96c6-30be90a34289","Type":"ContainerDied","Data":"405dd48d0d0af2aa7edd2c7067582d3d1702123cdb828e6ad39eaabc4ba3bd81"} Feb 19 13:54:42 crc kubenswrapper[4861]: I0219 13:54:42.542980 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gwrd8" Feb 19 13:54:42 crc kubenswrapper[4861]: I0219 13:54:42.543005 4861 scope.go:117] "RemoveContainer" containerID="405dd48d0d0af2aa7edd2c7067582d3d1702123cdb828e6ad39eaabc4ba3bd81" Feb 19 13:54:42 crc kubenswrapper[4861]: I0219 13:54:42.542989 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gwrd8" event={"ID":"e00d4710-a045-474d-96c6-30be90a34289","Type":"ContainerDied","Data":"ab87fbf390eedfc3f5dfb2e1b7036894de1dd31958ac33a9709c14723a27f039"} Feb 19 13:54:42 crc kubenswrapper[4861]: I0219 13:54:42.573548 4861 scope.go:117] "RemoveContainer" containerID="106b4c4a87352a0125b2bfc200efa43cb7bd2ab2119a68533690faccd4fca5c6" Feb 19 13:54:42 crc kubenswrapper[4861]: I0219 13:54:42.598360 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gwrd8"] Feb 19 13:54:42 crc kubenswrapper[4861]: I0219 13:54:42.608850 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gwrd8"] Feb 19 13:54:42 crc kubenswrapper[4861]: I0219 13:54:42.624020 4861 scope.go:117] "RemoveContainer" containerID="4f77f84eeadba1e742d181186eae4f3911db4a44b689fd2d581293cd9ca446df" Feb 19 13:54:42 crc kubenswrapper[4861]: I0219 13:54:42.654915 4861 scope.go:117] "RemoveContainer" containerID="405dd48d0d0af2aa7edd2c7067582d3d1702123cdb828e6ad39eaabc4ba3bd81" Feb 19 13:54:42 crc kubenswrapper[4861]: E0219 13:54:42.655726 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"405dd48d0d0af2aa7edd2c7067582d3d1702123cdb828e6ad39eaabc4ba3bd81\": container with ID starting with 405dd48d0d0af2aa7edd2c7067582d3d1702123cdb828e6ad39eaabc4ba3bd81 not found: ID does not exist" containerID="405dd48d0d0af2aa7edd2c7067582d3d1702123cdb828e6ad39eaabc4ba3bd81" Feb 19 13:54:42 crc kubenswrapper[4861]: I0219 13:54:42.655779 4861 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"405dd48d0d0af2aa7edd2c7067582d3d1702123cdb828e6ad39eaabc4ba3bd81"} err="failed to get container status \"405dd48d0d0af2aa7edd2c7067582d3d1702123cdb828e6ad39eaabc4ba3bd81\": rpc error: code = NotFound desc = could not find container \"405dd48d0d0af2aa7edd2c7067582d3d1702123cdb828e6ad39eaabc4ba3bd81\": container with ID starting with 405dd48d0d0af2aa7edd2c7067582d3d1702123cdb828e6ad39eaabc4ba3bd81 not found: ID does not exist" Feb 19 13:54:42 crc kubenswrapper[4861]: I0219 13:54:42.655820 4861 scope.go:117] "RemoveContainer" containerID="106b4c4a87352a0125b2bfc200efa43cb7bd2ab2119a68533690faccd4fca5c6" Feb 19 13:54:42 crc kubenswrapper[4861]: E0219 13:54:42.656218 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"106b4c4a87352a0125b2bfc200efa43cb7bd2ab2119a68533690faccd4fca5c6\": container with ID starting with 106b4c4a87352a0125b2bfc200efa43cb7bd2ab2119a68533690faccd4fca5c6 not found: ID does not exist" containerID="106b4c4a87352a0125b2bfc200efa43cb7bd2ab2119a68533690faccd4fca5c6" Feb 19 13:54:42 crc kubenswrapper[4861]: I0219 13:54:42.656270 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"106b4c4a87352a0125b2bfc200efa43cb7bd2ab2119a68533690faccd4fca5c6"} err="failed to get container status \"106b4c4a87352a0125b2bfc200efa43cb7bd2ab2119a68533690faccd4fca5c6\": rpc error: code = NotFound desc = could not find container \"106b4c4a87352a0125b2bfc200efa43cb7bd2ab2119a68533690faccd4fca5c6\": container with ID starting with 106b4c4a87352a0125b2bfc200efa43cb7bd2ab2119a68533690faccd4fca5c6 not found: ID does not exist" Feb 19 13:54:42 crc kubenswrapper[4861]: I0219 13:54:42.656306 4861 scope.go:117] "RemoveContainer" containerID="4f77f84eeadba1e742d181186eae4f3911db4a44b689fd2d581293cd9ca446df" Feb 19 13:54:42 crc kubenswrapper[4861]: E0219 
13:54:42.656774 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f77f84eeadba1e742d181186eae4f3911db4a44b689fd2d581293cd9ca446df\": container with ID starting with 4f77f84eeadba1e742d181186eae4f3911db4a44b689fd2d581293cd9ca446df not found: ID does not exist" containerID="4f77f84eeadba1e742d181186eae4f3911db4a44b689fd2d581293cd9ca446df" Feb 19 13:54:42 crc kubenswrapper[4861]: I0219 13:54:42.656819 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f77f84eeadba1e742d181186eae4f3911db4a44b689fd2d581293cd9ca446df"} err="failed to get container status \"4f77f84eeadba1e742d181186eae4f3911db4a44b689fd2d581293cd9ca446df\": rpc error: code = NotFound desc = could not find container \"4f77f84eeadba1e742d181186eae4f3911db4a44b689fd2d581293cd9ca446df\": container with ID starting with 4f77f84eeadba1e742d181186eae4f3911db4a44b689fd2d581293cd9ca446df not found: ID does not exist" Feb 19 13:54:43 crc kubenswrapper[4861]: I0219 13:54:43.990490 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e00d4710-a045-474d-96c6-30be90a34289" path="/var/lib/kubelet/pods/e00d4710-a045-474d-96c6-30be90a34289/volumes" Feb 19 13:54:53 crc kubenswrapper[4861]: I0219 13:54:53.977628 4861 scope.go:117] "RemoveContainer" containerID="590a99ae67b14b4574bf1fc98b13705ab453c0496ac858e125e529c6fa5c3421" Feb 19 13:54:53 crc kubenswrapper[4861]: E0219 13:54:53.979004 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 13:55:06 crc kubenswrapper[4861]: I0219 13:55:06.977246 
4861 scope.go:117] "RemoveContainer" containerID="590a99ae67b14b4574bf1fc98b13705ab453c0496ac858e125e529c6fa5c3421" Feb 19 13:55:07 crc kubenswrapper[4861]: I0219 13:55:07.781139 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"2233531fbf95858dfbe6b3512d8259bd0d7056ae4f5c8c661dab9eada06dd093"} Feb 19 13:55:27 crc kubenswrapper[4861]: I0219 13:55:27.916938 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xgsmh"] Feb 19 13:55:27 crc kubenswrapper[4861]: E0219 13:55:27.919292 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e00d4710-a045-474d-96c6-30be90a34289" containerName="extract-content" Feb 19 13:55:27 crc kubenswrapper[4861]: I0219 13:55:27.919320 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="e00d4710-a045-474d-96c6-30be90a34289" containerName="extract-content" Feb 19 13:55:27 crc kubenswrapper[4861]: E0219 13:55:27.919349 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e00d4710-a045-474d-96c6-30be90a34289" containerName="extract-utilities" Feb 19 13:55:27 crc kubenswrapper[4861]: I0219 13:55:27.919363 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="e00d4710-a045-474d-96c6-30be90a34289" containerName="extract-utilities" Feb 19 13:55:27 crc kubenswrapper[4861]: E0219 13:55:27.919380 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e00d4710-a045-474d-96c6-30be90a34289" containerName="registry-server" Feb 19 13:55:27 crc kubenswrapper[4861]: I0219 13:55:27.919392 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="e00d4710-a045-474d-96c6-30be90a34289" containerName="registry-server" Feb 19 13:55:27 crc kubenswrapper[4861]: I0219 13:55:27.919698 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="e00d4710-a045-474d-96c6-30be90a34289" 
containerName="registry-server" Feb 19 13:55:27 crc kubenswrapper[4861]: I0219 13:55:27.921562 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xgsmh" Feb 19 13:55:27 crc kubenswrapper[4861]: I0219 13:55:27.939815 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xgsmh"] Feb 19 13:55:27 crc kubenswrapper[4861]: I0219 13:55:27.971202 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2psx\" (UniqueName: \"kubernetes.io/projected/b76517f3-fe6f-4270-892e-4ae6d48ee62a-kube-api-access-s2psx\") pod \"redhat-marketplace-xgsmh\" (UID: \"b76517f3-fe6f-4270-892e-4ae6d48ee62a\") " pod="openshift-marketplace/redhat-marketplace-xgsmh" Feb 19 13:55:27 crc kubenswrapper[4861]: I0219 13:55:27.971320 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b76517f3-fe6f-4270-892e-4ae6d48ee62a-utilities\") pod \"redhat-marketplace-xgsmh\" (UID: \"b76517f3-fe6f-4270-892e-4ae6d48ee62a\") " pod="openshift-marketplace/redhat-marketplace-xgsmh" Feb 19 13:55:27 crc kubenswrapper[4861]: I0219 13:55:27.971443 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b76517f3-fe6f-4270-892e-4ae6d48ee62a-catalog-content\") pod \"redhat-marketplace-xgsmh\" (UID: \"b76517f3-fe6f-4270-892e-4ae6d48ee62a\") " pod="openshift-marketplace/redhat-marketplace-xgsmh" Feb 19 13:55:28 crc kubenswrapper[4861]: I0219 13:55:28.072580 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b76517f3-fe6f-4270-892e-4ae6d48ee62a-catalog-content\") pod \"redhat-marketplace-xgsmh\" (UID: \"b76517f3-fe6f-4270-892e-4ae6d48ee62a\") " 
pod="openshift-marketplace/redhat-marketplace-xgsmh" Feb 19 13:55:28 crc kubenswrapper[4861]: I0219 13:55:28.072695 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2psx\" (UniqueName: \"kubernetes.io/projected/b76517f3-fe6f-4270-892e-4ae6d48ee62a-kube-api-access-s2psx\") pod \"redhat-marketplace-xgsmh\" (UID: \"b76517f3-fe6f-4270-892e-4ae6d48ee62a\") " pod="openshift-marketplace/redhat-marketplace-xgsmh" Feb 19 13:55:28 crc kubenswrapper[4861]: I0219 13:55:28.072756 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b76517f3-fe6f-4270-892e-4ae6d48ee62a-utilities\") pod \"redhat-marketplace-xgsmh\" (UID: \"b76517f3-fe6f-4270-892e-4ae6d48ee62a\") " pod="openshift-marketplace/redhat-marketplace-xgsmh" Feb 19 13:55:28 crc kubenswrapper[4861]: I0219 13:55:28.073334 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b76517f3-fe6f-4270-892e-4ae6d48ee62a-catalog-content\") pod \"redhat-marketplace-xgsmh\" (UID: \"b76517f3-fe6f-4270-892e-4ae6d48ee62a\") " pod="openshift-marketplace/redhat-marketplace-xgsmh" Feb 19 13:55:28 crc kubenswrapper[4861]: I0219 13:55:28.073661 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b76517f3-fe6f-4270-892e-4ae6d48ee62a-utilities\") pod \"redhat-marketplace-xgsmh\" (UID: \"b76517f3-fe6f-4270-892e-4ae6d48ee62a\") " pod="openshift-marketplace/redhat-marketplace-xgsmh" Feb 19 13:55:28 crc kubenswrapper[4861]: I0219 13:55:28.110957 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2psx\" (UniqueName: \"kubernetes.io/projected/b76517f3-fe6f-4270-892e-4ae6d48ee62a-kube-api-access-s2psx\") pod \"redhat-marketplace-xgsmh\" (UID: \"b76517f3-fe6f-4270-892e-4ae6d48ee62a\") " 
pod="openshift-marketplace/redhat-marketplace-xgsmh" Feb 19 13:55:28 crc kubenswrapper[4861]: I0219 13:55:28.269778 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xgsmh" Feb 19 13:55:28 crc kubenswrapper[4861]: I0219 13:55:28.745863 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xgsmh"] Feb 19 13:55:28 crc kubenswrapper[4861]: I0219 13:55:28.961883 4861 generic.go:334] "Generic (PLEG): container finished" podID="b76517f3-fe6f-4270-892e-4ae6d48ee62a" containerID="23f69673cc8a1bee711559e5096e7c42a977d766deb28597a7a178b41788f0fb" exitCode=0 Feb 19 13:55:28 crc kubenswrapper[4861]: I0219 13:55:28.961932 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xgsmh" event={"ID":"b76517f3-fe6f-4270-892e-4ae6d48ee62a","Type":"ContainerDied","Data":"23f69673cc8a1bee711559e5096e7c42a977d766deb28597a7a178b41788f0fb"} Feb 19 13:55:28 crc kubenswrapper[4861]: I0219 13:55:28.961966 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xgsmh" event={"ID":"b76517f3-fe6f-4270-892e-4ae6d48ee62a","Type":"ContainerStarted","Data":"19ef22314adf8281a2815dbc105b8f275ee1ac4a691ca3c358e697091a785dc8"} Feb 19 13:55:29 crc kubenswrapper[4861]: I0219 13:55:29.973701 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xgsmh" event={"ID":"b76517f3-fe6f-4270-892e-4ae6d48ee62a","Type":"ContainerStarted","Data":"90affa10c99f381efa7792e411d6b01e02570237086d4986752849b8a80d0258"} Feb 19 13:55:30 crc kubenswrapper[4861]: I0219 13:55:30.985783 4861 generic.go:334] "Generic (PLEG): container finished" podID="b76517f3-fe6f-4270-892e-4ae6d48ee62a" containerID="90affa10c99f381efa7792e411d6b01e02570237086d4986752849b8a80d0258" exitCode=0 Feb 19 13:55:30 crc kubenswrapper[4861]: I0219 13:55:30.985868 4861 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-xgsmh" event={"ID":"b76517f3-fe6f-4270-892e-4ae6d48ee62a","Type":"ContainerDied","Data":"90affa10c99f381efa7792e411d6b01e02570237086d4986752849b8a80d0258"} Feb 19 13:55:30 crc kubenswrapper[4861]: I0219 13:55:30.986045 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xgsmh" event={"ID":"b76517f3-fe6f-4270-892e-4ae6d48ee62a","Type":"ContainerStarted","Data":"8c04d2d41bf1721a2169de936888e196e0363b8b2faf9c56acc3b84930044b76"} Feb 19 13:55:31 crc kubenswrapper[4861]: I0219 13:55:31.010730 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xgsmh" podStartSLOduration=2.609338321 podStartE2EDuration="4.010707054s" podCreationTimestamp="2026-02-19 13:55:27 +0000 UTC" firstStartedPulling="2026-02-19 13:55:28.964142765 +0000 UTC m=+2743.625246003" lastFinishedPulling="2026-02-19 13:55:30.365511508 +0000 UTC m=+2745.026614736" observedRunningTime="2026-02-19 13:55:31.010269702 +0000 UTC m=+2745.671372940" watchObservedRunningTime="2026-02-19 13:55:31.010707054 +0000 UTC m=+2745.671810302" Feb 19 13:55:38 crc kubenswrapper[4861]: I0219 13:55:38.270127 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xgsmh" Feb 19 13:55:38 crc kubenswrapper[4861]: I0219 13:55:38.270948 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xgsmh" Feb 19 13:55:38 crc kubenswrapper[4861]: I0219 13:55:38.334702 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xgsmh" Feb 19 13:55:39 crc kubenswrapper[4861]: I0219 13:55:39.123694 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xgsmh" Feb 19 13:55:39 crc kubenswrapper[4861]: I0219 13:55:39.190716 4861 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xgsmh"] Feb 19 13:55:41 crc kubenswrapper[4861]: I0219 13:55:41.069164 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xgsmh" podUID="b76517f3-fe6f-4270-892e-4ae6d48ee62a" containerName="registry-server" containerID="cri-o://8c04d2d41bf1721a2169de936888e196e0363b8b2faf9c56acc3b84930044b76" gracePeriod=2 Feb 19 13:55:41 crc kubenswrapper[4861]: I0219 13:55:41.568783 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xgsmh" Feb 19 13:55:41 crc kubenswrapper[4861]: I0219 13:55:41.721501 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2psx\" (UniqueName: \"kubernetes.io/projected/b76517f3-fe6f-4270-892e-4ae6d48ee62a-kube-api-access-s2psx\") pod \"b76517f3-fe6f-4270-892e-4ae6d48ee62a\" (UID: \"b76517f3-fe6f-4270-892e-4ae6d48ee62a\") " Feb 19 13:55:41 crc kubenswrapper[4861]: I0219 13:55:41.721578 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b76517f3-fe6f-4270-892e-4ae6d48ee62a-catalog-content\") pod \"b76517f3-fe6f-4270-892e-4ae6d48ee62a\" (UID: \"b76517f3-fe6f-4270-892e-4ae6d48ee62a\") " Feb 19 13:55:41 crc kubenswrapper[4861]: I0219 13:55:41.721647 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b76517f3-fe6f-4270-892e-4ae6d48ee62a-utilities\") pod \"b76517f3-fe6f-4270-892e-4ae6d48ee62a\" (UID: \"b76517f3-fe6f-4270-892e-4ae6d48ee62a\") " Feb 19 13:55:41 crc kubenswrapper[4861]: I0219 13:55:41.722838 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b76517f3-fe6f-4270-892e-4ae6d48ee62a-utilities" (OuterVolumeSpecName: "utilities") pod 
"b76517f3-fe6f-4270-892e-4ae6d48ee62a" (UID: "b76517f3-fe6f-4270-892e-4ae6d48ee62a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:55:41 crc kubenswrapper[4861]: I0219 13:55:41.729939 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b76517f3-fe6f-4270-892e-4ae6d48ee62a-kube-api-access-s2psx" (OuterVolumeSpecName: "kube-api-access-s2psx") pod "b76517f3-fe6f-4270-892e-4ae6d48ee62a" (UID: "b76517f3-fe6f-4270-892e-4ae6d48ee62a"). InnerVolumeSpecName "kube-api-access-s2psx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:55:41 crc kubenswrapper[4861]: I0219 13:55:41.751083 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b76517f3-fe6f-4270-892e-4ae6d48ee62a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b76517f3-fe6f-4270-892e-4ae6d48ee62a" (UID: "b76517f3-fe6f-4270-892e-4ae6d48ee62a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:55:41 crc kubenswrapper[4861]: I0219 13:55:41.824018 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2psx\" (UniqueName: \"kubernetes.io/projected/b76517f3-fe6f-4270-892e-4ae6d48ee62a-kube-api-access-s2psx\") on node \"crc\" DevicePath \"\"" Feb 19 13:55:41 crc kubenswrapper[4861]: I0219 13:55:41.824074 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b76517f3-fe6f-4270-892e-4ae6d48ee62a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:55:41 crc kubenswrapper[4861]: I0219 13:55:41.824094 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b76517f3-fe6f-4270-892e-4ae6d48ee62a-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:55:42 crc kubenswrapper[4861]: I0219 13:55:42.076374 4861 generic.go:334] "Generic (PLEG): container finished" podID="b76517f3-fe6f-4270-892e-4ae6d48ee62a" containerID="8c04d2d41bf1721a2169de936888e196e0363b8b2faf9c56acc3b84930044b76" exitCode=0 Feb 19 13:55:42 crc kubenswrapper[4861]: I0219 13:55:42.076459 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xgsmh" event={"ID":"b76517f3-fe6f-4270-892e-4ae6d48ee62a","Type":"ContainerDied","Data":"8c04d2d41bf1721a2169de936888e196e0363b8b2faf9c56acc3b84930044b76"} Feb 19 13:55:42 crc kubenswrapper[4861]: I0219 13:55:42.076486 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xgsmh" Feb 19 13:55:42 crc kubenswrapper[4861]: I0219 13:55:42.076510 4861 scope.go:117] "RemoveContainer" containerID="8c04d2d41bf1721a2169de936888e196e0363b8b2faf9c56acc3b84930044b76" Feb 19 13:55:42 crc kubenswrapper[4861]: I0219 13:55:42.076497 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xgsmh" event={"ID":"b76517f3-fe6f-4270-892e-4ae6d48ee62a","Type":"ContainerDied","Data":"19ef22314adf8281a2815dbc105b8f275ee1ac4a691ca3c358e697091a785dc8"} Feb 19 13:55:42 crc kubenswrapper[4861]: I0219 13:55:42.095801 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xgsmh"] Feb 19 13:55:42 crc kubenswrapper[4861]: I0219 13:55:42.097814 4861 scope.go:117] "RemoveContainer" containerID="90affa10c99f381efa7792e411d6b01e02570237086d4986752849b8a80d0258" Feb 19 13:55:42 crc kubenswrapper[4861]: I0219 13:55:42.102262 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xgsmh"] Feb 19 13:55:42 crc kubenswrapper[4861]: I0219 13:55:42.118147 4861 scope.go:117] "RemoveContainer" containerID="23f69673cc8a1bee711559e5096e7c42a977d766deb28597a7a178b41788f0fb" Feb 19 13:55:42 crc kubenswrapper[4861]: I0219 13:55:42.142247 4861 scope.go:117] "RemoveContainer" containerID="8c04d2d41bf1721a2169de936888e196e0363b8b2faf9c56acc3b84930044b76" Feb 19 13:55:42 crc kubenswrapper[4861]: E0219 13:55:42.143174 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c04d2d41bf1721a2169de936888e196e0363b8b2faf9c56acc3b84930044b76\": container with ID starting with 8c04d2d41bf1721a2169de936888e196e0363b8b2faf9c56acc3b84930044b76 not found: ID does not exist" containerID="8c04d2d41bf1721a2169de936888e196e0363b8b2faf9c56acc3b84930044b76" Feb 19 13:55:42 crc kubenswrapper[4861]: I0219 13:55:42.143240 4861 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c04d2d41bf1721a2169de936888e196e0363b8b2faf9c56acc3b84930044b76"} err="failed to get container status \"8c04d2d41bf1721a2169de936888e196e0363b8b2faf9c56acc3b84930044b76\": rpc error: code = NotFound desc = could not find container \"8c04d2d41bf1721a2169de936888e196e0363b8b2faf9c56acc3b84930044b76\": container with ID starting with 8c04d2d41bf1721a2169de936888e196e0363b8b2faf9c56acc3b84930044b76 not found: ID does not exist" Feb 19 13:55:42 crc kubenswrapper[4861]: I0219 13:55:42.143300 4861 scope.go:117] "RemoveContainer" containerID="90affa10c99f381efa7792e411d6b01e02570237086d4986752849b8a80d0258" Feb 19 13:55:42 crc kubenswrapper[4861]: E0219 13:55:42.143787 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90affa10c99f381efa7792e411d6b01e02570237086d4986752849b8a80d0258\": container with ID starting with 90affa10c99f381efa7792e411d6b01e02570237086d4986752849b8a80d0258 not found: ID does not exist" containerID="90affa10c99f381efa7792e411d6b01e02570237086d4986752849b8a80d0258" Feb 19 13:55:42 crc kubenswrapper[4861]: I0219 13:55:42.143823 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90affa10c99f381efa7792e411d6b01e02570237086d4986752849b8a80d0258"} err="failed to get container status \"90affa10c99f381efa7792e411d6b01e02570237086d4986752849b8a80d0258\": rpc error: code = NotFound desc = could not find container \"90affa10c99f381efa7792e411d6b01e02570237086d4986752849b8a80d0258\": container with ID starting with 90affa10c99f381efa7792e411d6b01e02570237086d4986752849b8a80d0258 not found: ID does not exist" Feb 19 13:55:42 crc kubenswrapper[4861]: I0219 13:55:42.143838 4861 scope.go:117] "RemoveContainer" containerID="23f69673cc8a1bee711559e5096e7c42a977d766deb28597a7a178b41788f0fb" Feb 19 13:55:42 crc kubenswrapper[4861]: E0219 
13:55:42.144280 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23f69673cc8a1bee711559e5096e7c42a977d766deb28597a7a178b41788f0fb\": container with ID starting with 23f69673cc8a1bee711559e5096e7c42a977d766deb28597a7a178b41788f0fb not found: ID does not exist" containerID="23f69673cc8a1bee711559e5096e7c42a977d766deb28597a7a178b41788f0fb" Feb 19 13:55:42 crc kubenswrapper[4861]: I0219 13:55:42.144335 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23f69673cc8a1bee711559e5096e7c42a977d766deb28597a7a178b41788f0fb"} err="failed to get container status \"23f69673cc8a1bee711559e5096e7c42a977d766deb28597a7a178b41788f0fb\": rpc error: code = NotFound desc = could not find container \"23f69673cc8a1bee711559e5096e7c42a977d766deb28597a7a178b41788f0fb\": container with ID starting with 23f69673cc8a1bee711559e5096e7c42a977d766deb28597a7a178b41788f0fb not found: ID does not exist" Feb 19 13:55:43 crc kubenswrapper[4861]: I0219 13:55:43.994052 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b76517f3-fe6f-4270-892e-4ae6d48ee62a" path="/var/lib/kubelet/pods/b76517f3-fe6f-4270-892e-4ae6d48ee62a/volumes" Feb 19 13:57:33 crc kubenswrapper[4861]: I0219 13:57:33.834932 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:57:33 crc kubenswrapper[4861]: I0219 13:57:33.835935 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 19 13:58:03 crc kubenswrapper[4861]: I0219 13:58:03.834921 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:58:03 crc kubenswrapper[4861]: I0219 13:58:03.835663 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:58:14 crc kubenswrapper[4861]: I0219 13:58:14.562896 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t5vk7"] Feb 19 13:58:14 crc kubenswrapper[4861]: E0219 13:58:14.563808 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b76517f3-fe6f-4270-892e-4ae6d48ee62a" containerName="extract-content" Feb 19 13:58:14 crc kubenswrapper[4861]: I0219 13:58:14.563825 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b76517f3-fe6f-4270-892e-4ae6d48ee62a" containerName="extract-content" Feb 19 13:58:14 crc kubenswrapper[4861]: E0219 13:58:14.563852 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b76517f3-fe6f-4270-892e-4ae6d48ee62a" containerName="extract-utilities" Feb 19 13:58:14 crc kubenswrapper[4861]: I0219 13:58:14.563862 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b76517f3-fe6f-4270-892e-4ae6d48ee62a" containerName="extract-utilities" Feb 19 13:58:14 crc kubenswrapper[4861]: E0219 13:58:14.563882 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b76517f3-fe6f-4270-892e-4ae6d48ee62a" containerName="registry-server" Feb 19 13:58:14 crc kubenswrapper[4861]: I0219 13:58:14.563890 4861 
state_mem.go:107] "Deleted CPUSet assignment" podUID="b76517f3-fe6f-4270-892e-4ae6d48ee62a" containerName="registry-server" Feb 19 13:58:14 crc kubenswrapper[4861]: I0219 13:58:14.564095 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b76517f3-fe6f-4270-892e-4ae6d48ee62a" containerName="registry-server" Feb 19 13:58:14 crc kubenswrapper[4861]: I0219 13:58:14.565380 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t5vk7" Feb 19 13:58:14 crc kubenswrapper[4861]: I0219 13:58:14.580694 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t5vk7"] Feb 19 13:58:14 crc kubenswrapper[4861]: I0219 13:58:14.751648 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95135208-98b1-4025-b1ed-cf2f215f8ca6-catalog-content\") pod \"community-operators-t5vk7\" (UID: \"95135208-98b1-4025-b1ed-cf2f215f8ca6\") " pod="openshift-marketplace/community-operators-t5vk7" Feb 19 13:58:14 crc kubenswrapper[4861]: I0219 13:58:14.751769 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95135208-98b1-4025-b1ed-cf2f215f8ca6-utilities\") pod \"community-operators-t5vk7\" (UID: \"95135208-98b1-4025-b1ed-cf2f215f8ca6\") " pod="openshift-marketplace/community-operators-t5vk7" Feb 19 13:58:14 crc kubenswrapper[4861]: I0219 13:58:14.751801 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8qtn\" (UniqueName: \"kubernetes.io/projected/95135208-98b1-4025-b1ed-cf2f215f8ca6-kube-api-access-l8qtn\") pod \"community-operators-t5vk7\" (UID: \"95135208-98b1-4025-b1ed-cf2f215f8ca6\") " pod="openshift-marketplace/community-operators-t5vk7" Feb 19 13:58:14 crc kubenswrapper[4861]: I0219 
13:58:14.853166 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95135208-98b1-4025-b1ed-cf2f215f8ca6-utilities\") pod \"community-operators-t5vk7\" (UID: \"95135208-98b1-4025-b1ed-cf2f215f8ca6\") " pod="openshift-marketplace/community-operators-t5vk7" Feb 19 13:58:14 crc kubenswrapper[4861]: I0219 13:58:14.853259 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8qtn\" (UniqueName: \"kubernetes.io/projected/95135208-98b1-4025-b1ed-cf2f215f8ca6-kube-api-access-l8qtn\") pod \"community-operators-t5vk7\" (UID: \"95135208-98b1-4025-b1ed-cf2f215f8ca6\") " pod="openshift-marketplace/community-operators-t5vk7" Feb 19 13:58:14 crc kubenswrapper[4861]: I0219 13:58:14.853414 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95135208-98b1-4025-b1ed-cf2f215f8ca6-catalog-content\") pod \"community-operators-t5vk7\" (UID: \"95135208-98b1-4025-b1ed-cf2f215f8ca6\") " pod="openshift-marketplace/community-operators-t5vk7" Feb 19 13:58:14 crc kubenswrapper[4861]: I0219 13:58:14.853803 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95135208-98b1-4025-b1ed-cf2f215f8ca6-utilities\") pod \"community-operators-t5vk7\" (UID: \"95135208-98b1-4025-b1ed-cf2f215f8ca6\") " pod="openshift-marketplace/community-operators-t5vk7" Feb 19 13:58:14 crc kubenswrapper[4861]: I0219 13:58:14.854127 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95135208-98b1-4025-b1ed-cf2f215f8ca6-catalog-content\") pod \"community-operators-t5vk7\" (UID: \"95135208-98b1-4025-b1ed-cf2f215f8ca6\") " pod="openshift-marketplace/community-operators-t5vk7" Feb 19 13:58:14 crc kubenswrapper[4861]: I0219 13:58:14.880137 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8qtn\" (UniqueName: \"kubernetes.io/projected/95135208-98b1-4025-b1ed-cf2f215f8ca6-kube-api-access-l8qtn\") pod \"community-operators-t5vk7\" (UID: \"95135208-98b1-4025-b1ed-cf2f215f8ca6\") " pod="openshift-marketplace/community-operators-t5vk7" Feb 19 13:58:14 crc kubenswrapper[4861]: I0219 13:58:14.898555 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t5vk7" Feb 19 13:58:15 crc kubenswrapper[4861]: I0219 13:58:15.425656 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t5vk7"] Feb 19 13:58:15 crc kubenswrapper[4861]: W0219 13:58:15.431879 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95135208_98b1_4025_b1ed_cf2f215f8ca6.slice/crio-b492e2a3a410583c334ccfa059d9ec8ee3f614ede6a42154df78f0d9b8513f39 WatchSource:0}: Error finding container b492e2a3a410583c334ccfa059d9ec8ee3f614ede6a42154df78f0d9b8513f39: Status 404 returned error can't find the container with id b492e2a3a410583c334ccfa059d9ec8ee3f614ede6a42154df78f0d9b8513f39 Feb 19 13:58:16 crc kubenswrapper[4861]: I0219 13:58:16.405233 4861 generic.go:334] "Generic (PLEG): container finished" podID="95135208-98b1-4025-b1ed-cf2f215f8ca6" containerID="7995965798c65948dfb6a53b6810405ad66897ea18d1cd8093ac490271d8c327" exitCode=0 Feb 19 13:58:16 crc kubenswrapper[4861]: I0219 13:58:16.405519 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5vk7" event={"ID":"95135208-98b1-4025-b1ed-cf2f215f8ca6","Type":"ContainerDied","Data":"7995965798c65948dfb6a53b6810405ad66897ea18d1cd8093ac490271d8c327"} Feb 19 13:58:16 crc kubenswrapper[4861]: I0219 13:58:16.405620 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5vk7" 
event={"ID":"95135208-98b1-4025-b1ed-cf2f215f8ca6","Type":"ContainerStarted","Data":"b492e2a3a410583c334ccfa059d9ec8ee3f614ede6a42154df78f0d9b8513f39"} Feb 19 13:58:16 crc kubenswrapper[4861]: I0219 13:58:16.410013 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 13:58:17 crc kubenswrapper[4861]: I0219 13:58:17.416406 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5vk7" event={"ID":"95135208-98b1-4025-b1ed-cf2f215f8ca6","Type":"ContainerStarted","Data":"e1223b7bb0c2ac2e8d0b6c53d49098d4b2ed5149648c3dc94d224b9fe1974fe1"} Feb 19 13:58:18 crc kubenswrapper[4861]: I0219 13:58:18.427799 4861 generic.go:334] "Generic (PLEG): container finished" podID="95135208-98b1-4025-b1ed-cf2f215f8ca6" containerID="e1223b7bb0c2ac2e8d0b6c53d49098d4b2ed5149648c3dc94d224b9fe1974fe1" exitCode=0 Feb 19 13:58:18 crc kubenswrapper[4861]: I0219 13:58:18.427884 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5vk7" event={"ID":"95135208-98b1-4025-b1ed-cf2f215f8ca6","Type":"ContainerDied","Data":"e1223b7bb0c2ac2e8d0b6c53d49098d4b2ed5149648c3dc94d224b9fe1974fe1"} Feb 19 13:58:19 crc kubenswrapper[4861]: I0219 13:58:19.439237 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5vk7" event={"ID":"95135208-98b1-4025-b1ed-cf2f215f8ca6","Type":"ContainerStarted","Data":"3ee3e99c6ce06dda0c030aa049f7663ed81dbe8642eecd394f0ad597586aa7c9"} Feb 19 13:58:19 crc kubenswrapper[4861]: I0219 13:58:19.467203 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t5vk7" podStartSLOduration=2.947013998 podStartE2EDuration="5.467174673s" podCreationTimestamp="2026-02-19 13:58:14 +0000 UTC" firstStartedPulling="2026-02-19 13:58:16.409748635 +0000 UTC m=+2911.070851873" lastFinishedPulling="2026-02-19 13:58:18.92990931 +0000 UTC 
m=+2913.591012548" observedRunningTime="2026-02-19 13:58:19.463617667 +0000 UTC m=+2914.124720935" watchObservedRunningTime="2026-02-19 13:58:19.467174673 +0000 UTC m=+2914.128277931" Feb 19 13:58:24 crc kubenswrapper[4861]: I0219 13:58:24.899123 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t5vk7" Feb 19 13:58:24 crc kubenswrapper[4861]: I0219 13:58:24.899659 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t5vk7" Feb 19 13:58:24 crc kubenswrapper[4861]: I0219 13:58:24.977627 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t5vk7" Feb 19 13:58:25 crc kubenswrapper[4861]: I0219 13:58:25.549061 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t5vk7" Feb 19 13:58:25 crc kubenswrapper[4861]: I0219 13:58:25.615277 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t5vk7"] Feb 19 13:58:27 crc kubenswrapper[4861]: I0219 13:58:27.504146 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-t5vk7" podUID="95135208-98b1-4025-b1ed-cf2f215f8ca6" containerName="registry-server" containerID="cri-o://3ee3e99c6ce06dda0c030aa049f7663ed81dbe8642eecd394f0ad597586aa7c9" gracePeriod=2 Feb 19 13:58:28 crc kubenswrapper[4861]: I0219 13:58:28.029338 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t5vk7" Feb 19 13:58:28 crc kubenswrapper[4861]: I0219 13:58:28.161208 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95135208-98b1-4025-b1ed-cf2f215f8ca6-utilities\") pod \"95135208-98b1-4025-b1ed-cf2f215f8ca6\" (UID: \"95135208-98b1-4025-b1ed-cf2f215f8ca6\") " Feb 19 13:58:28 crc kubenswrapper[4861]: I0219 13:58:28.161489 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95135208-98b1-4025-b1ed-cf2f215f8ca6-catalog-content\") pod \"95135208-98b1-4025-b1ed-cf2f215f8ca6\" (UID: \"95135208-98b1-4025-b1ed-cf2f215f8ca6\") " Feb 19 13:58:28 crc kubenswrapper[4861]: I0219 13:58:28.161551 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8qtn\" (UniqueName: \"kubernetes.io/projected/95135208-98b1-4025-b1ed-cf2f215f8ca6-kube-api-access-l8qtn\") pod \"95135208-98b1-4025-b1ed-cf2f215f8ca6\" (UID: \"95135208-98b1-4025-b1ed-cf2f215f8ca6\") " Feb 19 13:58:28 crc kubenswrapper[4861]: I0219 13:58:28.161889 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95135208-98b1-4025-b1ed-cf2f215f8ca6-utilities" (OuterVolumeSpecName: "utilities") pod "95135208-98b1-4025-b1ed-cf2f215f8ca6" (UID: "95135208-98b1-4025-b1ed-cf2f215f8ca6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:58:28 crc kubenswrapper[4861]: I0219 13:58:28.162322 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95135208-98b1-4025-b1ed-cf2f215f8ca6-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:58:28 crc kubenswrapper[4861]: I0219 13:58:28.166715 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95135208-98b1-4025-b1ed-cf2f215f8ca6-kube-api-access-l8qtn" (OuterVolumeSpecName: "kube-api-access-l8qtn") pod "95135208-98b1-4025-b1ed-cf2f215f8ca6" (UID: "95135208-98b1-4025-b1ed-cf2f215f8ca6"). InnerVolumeSpecName "kube-api-access-l8qtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:58:28 crc kubenswrapper[4861]: I0219 13:58:28.263713 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8qtn\" (UniqueName: \"kubernetes.io/projected/95135208-98b1-4025-b1ed-cf2f215f8ca6-kube-api-access-l8qtn\") on node \"crc\" DevicePath \"\"" Feb 19 13:58:28 crc kubenswrapper[4861]: I0219 13:58:28.516491 4861 generic.go:334] "Generic (PLEG): container finished" podID="95135208-98b1-4025-b1ed-cf2f215f8ca6" containerID="3ee3e99c6ce06dda0c030aa049f7663ed81dbe8642eecd394f0ad597586aa7c9" exitCode=0 Feb 19 13:58:28 crc kubenswrapper[4861]: I0219 13:58:28.516553 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5vk7" event={"ID":"95135208-98b1-4025-b1ed-cf2f215f8ca6","Type":"ContainerDied","Data":"3ee3e99c6ce06dda0c030aa049f7663ed81dbe8642eecd394f0ad597586aa7c9"} Feb 19 13:58:28 crc kubenswrapper[4861]: I0219 13:58:28.516604 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5vk7" event={"ID":"95135208-98b1-4025-b1ed-cf2f215f8ca6","Type":"ContainerDied","Data":"b492e2a3a410583c334ccfa059d9ec8ee3f614ede6a42154df78f0d9b8513f39"} Feb 19 13:58:28 crc kubenswrapper[4861]: 
I0219 13:58:28.516633 4861 scope.go:117] "RemoveContainer" containerID="3ee3e99c6ce06dda0c030aa049f7663ed81dbe8642eecd394f0ad597586aa7c9" Feb 19 13:58:28 crc kubenswrapper[4861]: I0219 13:58:28.516642 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t5vk7" Feb 19 13:58:28 crc kubenswrapper[4861]: I0219 13:58:28.540002 4861 scope.go:117] "RemoveContainer" containerID="e1223b7bb0c2ac2e8d0b6c53d49098d4b2ed5149648c3dc94d224b9fe1974fe1" Feb 19 13:58:28 crc kubenswrapper[4861]: I0219 13:58:28.580960 4861 scope.go:117] "RemoveContainer" containerID="7995965798c65948dfb6a53b6810405ad66897ea18d1cd8093ac490271d8c327" Feb 19 13:58:28 crc kubenswrapper[4861]: I0219 13:58:28.585850 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95135208-98b1-4025-b1ed-cf2f215f8ca6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95135208-98b1-4025-b1ed-cf2f215f8ca6" (UID: "95135208-98b1-4025-b1ed-cf2f215f8ca6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:58:28 crc kubenswrapper[4861]: I0219 13:58:28.616154 4861 scope.go:117] "RemoveContainer" containerID="3ee3e99c6ce06dda0c030aa049f7663ed81dbe8642eecd394f0ad597586aa7c9" Feb 19 13:58:28 crc kubenswrapper[4861]: E0219 13:58:28.622628 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ee3e99c6ce06dda0c030aa049f7663ed81dbe8642eecd394f0ad597586aa7c9\": container with ID starting with 3ee3e99c6ce06dda0c030aa049f7663ed81dbe8642eecd394f0ad597586aa7c9 not found: ID does not exist" containerID="3ee3e99c6ce06dda0c030aa049f7663ed81dbe8642eecd394f0ad597586aa7c9" Feb 19 13:58:28 crc kubenswrapper[4861]: I0219 13:58:28.622697 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ee3e99c6ce06dda0c030aa049f7663ed81dbe8642eecd394f0ad597586aa7c9"} err="failed to get container status \"3ee3e99c6ce06dda0c030aa049f7663ed81dbe8642eecd394f0ad597586aa7c9\": rpc error: code = NotFound desc = could not find container \"3ee3e99c6ce06dda0c030aa049f7663ed81dbe8642eecd394f0ad597586aa7c9\": container with ID starting with 3ee3e99c6ce06dda0c030aa049f7663ed81dbe8642eecd394f0ad597586aa7c9 not found: ID does not exist" Feb 19 13:58:28 crc kubenswrapper[4861]: I0219 13:58:28.622741 4861 scope.go:117] "RemoveContainer" containerID="e1223b7bb0c2ac2e8d0b6c53d49098d4b2ed5149648c3dc94d224b9fe1974fe1" Feb 19 13:58:28 crc kubenswrapper[4861]: E0219 13:58:28.623210 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1223b7bb0c2ac2e8d0b6c53d49098d4b2ed5149648c3dc94d224b9fe1974fe1\": container with ID starting with e1223b7bb0c2ac2e8d0b6c53d49098d4b2ed5149648c3dc94d224b9fe1974fe1 not found: ID does not exist" containerID="e1223b7bb0c2ac2e8d0b6c53d49098d4b2ed5149648c3dc94d224b9fe1974fe1" Feb 19 13:58:28 crc kubenswrapper[4861]: I0219 13:58:28.623249 
4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1223b7bb0c2ac2e8d0b6c53d49098d4b2ed5149648c3dc94d224b9fe1974fe1"} err="failed to get container status \"e1223b7bb0c2ac2e8d0b6c53d49098d4b2ed5149648c3dc94d224b9fe1974fe1\": rpc error: code = NotFound desc = could not find container \"e1223b7bb0c2ac2e8d0b6c53d49098d4b2ed5149648c3dc94d224b9fe1974fe1\": container with ID starting with e1223b7bb0c2ac2e8d0b6c53d49098d4b2ed5149648c3dc94d224b9fe1974fe1 not found: ID does not exist" Feb 19 13:58:28 crc kubenswrapper[4861]: I0219 13:58:28.623275 4861 scope.go:117] "RemoveContainer" containerID="7995965798c65948dfb6a53b6810405ad66897ea18d1cd8093ac490271d8c327" Feb 19 13:58:28 crc kubenswrapper[4861]: E0219 13:58:28.623662 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7995965798c65948dfb6a53b6810405ad66897ea18d1cd8093ac490271d8c327\": container with ID starting with 7995965798c65948dfb6a53b6810405ad66897ea18d1cd8093ac490271d8c327 not found: ID does not exist" containerID="7995965798c65948dfb6a53b6810405ad66897ea18d1cd8093ac490271d8c327" Feb 19 13:58:28 crc kubenswrapper[4861]: I0219 13:58:28.623693 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7995965798c65948dfb6a53b6810405ad66897ea18d1cd8093ac490271d8c327"} err="failed to get container status \"7995965798c65948dfb6a53b6810405ad66897ea18d1cd8093ac490271d8c327\": rpc error: code = NotFound desc = could not find container \"7995965798c65948dfb6a53b6810405ad66897ea18d1cd8093ac490271d8c327\": container with ID starting with 7995965798c65948dfb6a53b6810405ad66897ea18d1cd8093ac490271d8c327 not found: ID does not exist" Feb 19 13:58:28 crc kubenswrapper[4861]: I0219 13:58:28.675406 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/95135208-98b1-4025-b1ed-cf2f215f8ca6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:58:28 crc kubenswrapper[4861]: I0219 13:58:28.856966 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t5vk7"] Feb 19 13:58:28 crc kubenswrapper[4861]: I0219 13:58:28.867539 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-t5vk7"] Feb 19 13:58:29 crc kubenswrapper[4861]: I0219 13:58:29.993881 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95135208-98b1-4025-b1ed-cf2f215f8ca6" path="/var/lib/kubelet/pods/95135208-98b1-4025-b1ed-cf2f215f8ca6/volumes" Feb 19 13:58:33 crc kubenswrapper[4861]: I0219 13:58:33.834452 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:58:33 crc kubenswrapper[4861]: I0219 13:58:33.835296 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:58:33 crc kubenswrapper[4861]: I0219 13:58:33.835376 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 13:58:33 crc kubenswrapper[4861]: I0219 13:58:33.836416 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2233531fbf95858dfbe6b3512d8259bd0d7056ae4f5c8c661dab9eada06dd093"} 
pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 13:58:33 crc kubenswrapper[4861]: I0219 13:58:33.836692 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" containerID="cri-o://2233531fbf95858dfbe6b3512d8259bd0d7056ae4f5c8c661dab9eada06dd093" gracePeriod=600 Feb 19 13:58:34 crc kubenswrapper[4861]: I0219 13:58:34.587924 4861 generic.go:334] "Generic (PLEG): container finished" podID="478e6971-05ac-43f2-99a2-cd93644c6227" containerID="2233531fbf95858dfbe6b3512d8259bd0d7056ae4f5c8c661dab9eada06dd093" exitCode=0 Feb 19 13:58:34 crc kubenswrapper[4861]: I0219 13:58:34.588002 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerDied","Data":"2233531fbf95858dfbe6b3512d8259bd0d7056ae4f5c8c661dab9eada06dd093"} Feb 19 13:58:34 crc kubenswrapper[4861]: I0219 13:58:34.588396 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"97b856abb408928b70bba31b7c25f45b803def7d251c0abdbe927a71082279fd"} Feb 19 13:58:34 crc kubenswrapper[4861]: I0219 13:58:34.588500 4861 scope.go:117] "RemoveContainer" containerID="590a99ae67b14b4574bf1fc98b13705ab453c0496ac858e125e529c6fa5c3421" Feb 19 14:00:00 crc kubenswrapper[4861]: I0219 14:00:00.168813 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525160-b8qcp"] Feb 19 14:00:00 crc kubenswrapper[4861]: E0219 14:00:00.170035 4861 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="95135208-98b1-4025-b1ed-cf2f215f8ca6" containerName="extract-content" Feb 19 14:00:00 crc kubenswrapper[4861]: I0219 14:00:00.170065 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="95135208-98b1-4025-b1ed-cf2f215f8ca6" containerName="extract-content" Feb 19 14:00:00 crc kubenswrapper[4861]: E0219 14:00:00.170153 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95135208-98b1-4025-b1ed-cf2f215f8ca6" containerName="extract-utilities" Feb 19 14:00:00 crc kubenswrapper[4861]: I0219 14:00:00.170169 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="95135208-98b1-4025-b1ed-cf2f215f8ca6" containerName="extract-utilities" Feb 19 14:00:00 crc kubenswrapper[4861]: E0219 14:00:00.170180 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95135208-98b1-4025-b1ed-cf2f215f8ca6" containerName="registry-server" Feb 19 14:00:00 crc kubenswrapper[4861]: I0219 14:00:00.170191 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="95135208-98b1-4025-b1ed-cf2f215f8ca6" containerName="registry-server" Feb 19 14:00:00 crc kubenswrapper[4861]: I0219 14:00:00.170416 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="95135208-98b1-4025-b1ed-cf2f215f8ca6" containerName="registry-server" Feb 19 14:00:00 crc kubenswrapper[4861]: I0219 14:00:00.171146 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525160-b8qcp" Feb 19 14:00:00 crc kubenswrapper[4861]: I0219 14:00:00.174358 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 14:00:00 crc kubenswrapper[4861]: I0219 14:00:00.175549 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 14:00:00 crc kubenswrapper[4861]: I0219 14:00:00.180824 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525160-b8qcp"] Feb 19 14:00:00 crc kubenswrapper[4861]: I0219 14:00:00.318546 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5dc07f64-3837-42e3-a7cf-1a99295110d6-secret-volume\") pod \"collect-profiles-29525160-b8qcp\" (UID: \"5dc07f64-3837-42e3-a7cf-1a99295110d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525160-b8qcp" Feb 19 14:00:00 crc kubenswrapper[4861]: I0219 14:00:00.318785 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5dc07f64-3837-42e3-a7cf-1a99295110d6-config-volume\") pod \"collect-profiles-29525160-b8qcp\" (UID: \"5dc07f64-3837-42e3-a7cf-1a99295110d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525160-b8qcp" Feb 19 14:00:00 crc kubenswrapper[4861]: I0219 14:00:00.318981 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlg75\" (UniqueName: \"kubernetes.io/projected/5dc07f64-3837-42e3-a7cf-1a99295110d6-kube-api-access-dlg75\") pod \"collect-profiles-29525160-b8qcp\" (UID: \"5dc07f64-3837-42e3-a7cf-1a99295110d6\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525160-b8qcp" Feb 19 14:00:00 crc kubenswrapper[4861]: I0219 14:00:00.420722 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5dc07f64-3837-42e3-a7cf-1a99295110d6-secret-volume\") pod \"collect-profiles-29525160-b8qcp\" (UID: \"5dc07f64-3837-42e3-a7cf-1a99295110d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525160-b8qcp" Feb 19 14:00:00 crc kubenswrapper[4861]: I0219 14:00:00.420841 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5dc07f64-3837-42e3-a7cf-1a99295110d6-config-volume\") pod \"collect-profiles-29525160-b8qcp\" (UID: \"5dc07f64-3837-42e3-a7cf-1a99295110d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525160-b8qcp" Feb 19 14:00:00 crc kubenswrapper[4861]: I0219 14:00:00.420910 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlg75\" (UniqueName: \"kubernetes.io/projected/5dc07f64-3837-42e3-a7cf-1a99295110d6-kube-api-access-dlg75\") pod \"collect-profiles-29525160-b8qcp\" (UID: \"5dc07f64-3837-42e3-a7cf-1a99295110d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525160-b8qcp" Feb 19 14:00:00 crc kubenswrapper[4861]: I0219 14:00:00.422900 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5dc07f64-3837-42e3-a7cf-1a99295110d6-config-volume\") pod \"collect-profiles-29525160-b8qcp\" (UID: \"5dc07f64-3837-42e3-a7cf-1a99295110d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525160-b8qcp" Feb 19 14:00:00 crc kubenswrapper[4861]: I0219 14:00:00.430731 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/5dc07f64-3837-42e3-a7cf-1a99295110d6-secret-volume\") pod \"collect-profiles-29525160-b8qcp\" (UID: \"5dc07f64-3837-42e3-a7cf-1a99295110d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525160-b8qcp" Feb 19 14:00:00 crc kubenswrapper[4861]: I0219 14:00:00.451678 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlg75\" (UniqueName: \"kubernetes.io/projected/5dc07f64-3837-42e3-a7cf-1a99295110d6-kube-api-access-dlg75\") pod \"collect-profiles-29525160-b8qcp\" (UID: \"5dc07f64-3837-42e3-a7cf-1a99295110d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525160-b8qcp" Feb 19 14:00:00 crc kubenswrapper[4861]: I0219 14:00:00.502504 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525160-b8qcp" Feb 19 14:00:00 crc kubenswrapper[4861]: I0219 14:00:00.989206 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525160-b8qcp"] Feb 19 14:00:00 crc kubenswrapper[4861]: W0219 14:00:00.993334 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dc07f64_3837_42e3_a7cf_1a99295110d6.slice/crio-fc0b41b7de539cf99a44a1250688b6fbd715a8167d69a3d15b560d14f81c056b WatchSource:0}: Error finding container fc0b41b7de539cf99a44a1250688b6fbd715a8167d69a3d15b560d14f81c056b: Status 404 returned error can't find the container with id fc0b41b7de539cf99a44a1250688b6fbd715a8167d69a3d15b560d14f81c056b Feb 19 14:00:01 crc kubenswrapper[4861]: I0219 14:00:01.400841 4861 generic.go:334] "Generic (PLEG): container finished" podID="5dc07f64-3837-42e3-a7cf-1a99295110d6" containerID="5d8f659b8cdb944567b3d3ad2fdb465a5d2a23eb4ec599744d1e124836cca6d0" exitCode=0 Feb 19 14:00:01 crc kubenswrapper[4861]: I0219 14:00:01.400917 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525160-b8qcp" event={"ID":"5dc07f64-3837-42e3-a7cf-1a99295110d6","Type":"ContainerDied","Data":"5d8f659b8cdb944567b3d3ad2fdb465a5d2a23eb4ec599744d1e124836cca6d0"} Feb 19 14:00:01 crc kubenswrapper[4861]: I0219 14:00:01.401116 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525160-b8qcp" event={"ID":"5dc07f64-3837-42e3-a7cf-1a99295110d6","Type":"ContainerStarted","Data":"fc0b41b7de539cf99a44a1250688b6fbd715a8167d69a3d15b560d14f81c056b"} Feb 19 14:00:02 crc kubenswrapper[4861]: I0219 14:00:02.758321 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525160-b8qcp" Feb 19 14:00:02 crc kubenswrapper[4861]: I0219 14:00:02.860469 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5dc07f64-3837-42e3-a7cf-1a99295110d6-secret-volume\") pod \"5dc07f64-3837-42e3-a7cf-1a99295110d6\" (UID: \"5dc07f64-3837-42e3-a7cf-1a99295110d6\") " Feb 19 14:00:02 crc kubenswrapper[4861]: I0219 14:00:02.860565 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlg75\" (UniqueName: \"kubernetes.io/projected/5dc07f64-3837-42e3-a7cf-1a99295110d6-kube-api-access-dlg75\") pod \"5dc07f64-3837-42e3-a7cf-1a99295110d6\" (UID: \"5dc07f64-3837-42e3-a7cf-1a99295110d6\") " Feb 19 14:00:02 crc kubenswrapper[4861]: I0219 14:00:02.860670 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5dc07f64-3837-42e3-a7cf-1a99295110d6-config-volume\") pod \"5dc07f64-3837-42e3-a7cf-1a99295110d6\" (UID: \"5dc07f64-3837-42e3-a7cf-1a99295110d6\") " Feb 19 14:00:02 crc kubenswrapper[4861]: I0219 14:00:02.861807 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/5dc07f64-3837-42e3-a7cf-1a99295110d6-config-volume" (OuterVolumeSpecName: "config-volume") pod "5dc07f64-3837-42e3-a7cf-1a99295110d6" (UID: "5dc07f64-3837-42e3-a7cf-1a99295110d6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:00:02 crc kubenswrapper[4861]: I0219 14:00:02.868659 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dc07f64-3837-42e3-a7cf-1a99295110d6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5dc07f64-3837-42e3-a7cf-1a99295110d6" (UID: "5dc07f64-3837-42e3-a7cf-1a99295110d6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:00:02 crc kubenswrapper[4861]: I0219 14:00:02.869541 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dc07f64-3837-42e3-a7cf-1a99295110d6-kube-api-access-dlg75" (OuterVolumeSpecName: "kube-api-access-dlg75") pod "5dc07f64-3837-42e3-a7cf-1a99295110d6" (UID: "5dc07f64-3837-42e3-a7cf-1a99295110d6"). InnerVolumeSpecName "kube-api-access-dlg75". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:00:02 crc kubenswrapper[4861]: I0219 14:00:02.962748 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlg75\" (UniqueName: \"kubernetes.io/projected/5dc07f64-3837-42e3-a7cf-1a99295110d6-kube-api-access-dlg75\") on node \"crc\" DevicePath \"\"" Feb 19 14:00:02 crc kubenswrapper[4861]: I0219 14:00:02.962802 4861 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5dc07f64-3837-42e3-a7cf-1a99295110d6-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 14:00:02 crc kubenswrapper[4861]: I0219 14:00:02.962822 4861 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5dc07f64-3837-42e3-a7cf-1a99295110d6-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 14:00:03 crc kubenswrapper[4861]: I0219 14:00:03.422653 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525160-b8qcp" event={"ID":"5dc07f64-3837-42e3-a7cf-1a99295110d6","Type":"ContainerDied","Data":"fc0b41b7de539cf99a44a1250688b6fbd715a8167d69a3d15b560d14f81c056b"} Feb 19 14:00:03 crc kubenswrapper[4861]: I0219 14:00:03.422715 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc0b41b7de539cf99a44a1250688b6fbd715a8167d69a3d15b560d14f81c056b" Feb 19 14:00:03 crc kubenswrapper[4861]: I0219 14:00:03.422798 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525160-b8qcp" Feb 19 14:00:03 crc kubenswrapper[4861]: I0219 14:00:03.860026 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525115-zjkrp"] Feb 19 14:00:03 crc kubenswrapper[4861]: I0219 14:00:03.870267 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525115-zjkrp"] Feb 19 14:00:03 crc kubenswrapper[4861]: I0219 14:00:03.994623 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5761329-3f84-46d6-a0de-ca8addac06ec" path="/var/lib/kubelet/pods/e5761329-3f84-46d6-a0de-ca8addac06ec/volumes" Feb 19 14:01:00 crc kubenswrapper[4861]: I0219 14:01:00.662747 4861 scope.go:117] "RemoveContainer" containerID="35a0405bec97e5b45fc6f77e5f05f5fcd2997c41954e61034249a3429e2b1db4" Feb 19 14:01:03 crc kubenswrapper[4861]: I0219 14:01:03.835414 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 14:01:03 crc kubenswrapper[4861]: I0219 14:01:03.836055 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 14:01:33 crc kubenswrapper[4861]: I0219 14:01:33.834013 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Feb 19 14:01:33 crc kubenswrapper[4861]: I0219 14:01:33.835416 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 14:02:03 crc kubenswrapper[4861]: I0219 14:02:03.834821 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 14:02:03 crc kubenswrapper[4861]: I0219 14:02:03.835607 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 14:02:03 crc kubenswrapper[4861]: I0219 14:02:03.835709 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 14:02:03 crc kubenswrapper[4861]: I0219 14:02:03.836663 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"97b856abb408928b70bba31b7c25f45b803def7d251c0abdbe927a71082279fd"} pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 14:02:03 crc kubenswrapper[4861]: I0219 14:02:03.836762 4861 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" containerID="cri-o://97b856abb408928b70bba31b7c25f45b803def7d251c0abdbe927a71082279fd" gracePeriod=600 Feb 19 14:02:03 crc kubenswrapper[4861]: E0219 14:02:03.974517 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:02:04 crc kubenswrapper[4861]: I0219 14:02:04.740582 4861 generic.go:334] "Generic (PLEG): container finished" podID="478e6971-05ac-43f2-99a2-cd93644c6227" containerID="97b856abb408928b70bba31b7c25f45b803def7d251c0abdbe927a71082279fd" exitCode=0 Feb 19 14:02:04 crc kubenswrapper[4861]: I0219 14:02:04.741355 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerDied","Data":"97b856abb408928b70bba31b7c25f45b803def7d251c0abdbe927a71082279fd"} Feb 19 14:02:04 crc kubenswrapper[4861]: I0219 14:02:04.741521 4861 scope.go:117] "RemoveContainer" containerID="2233531fbf95858dfbe6b3512d8259bd0d7056ae4f5c8c661dab9eada06dd093" Feb 19 14:02:04 crc kubenswrapper[4861]: I0219 14:02:04.742257 4861 scope.go:117] "RemoveContainer" containerID="97b856abb408928b70bba31b7c25f45b803def7d251c0abdbe927a71082279fd" Feb 19 14:02:04 crc kubenswrapper[4861]: E0219 14:02:04.742715 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:02:16 crc kubenswrapper[4861]: I0219 14:02:16.001635 4861 scope.go:117] "RemoveContainer" containerID="97b856abb408928b70bba31b7c25f45b803def7d251c0abdbe927a71082279fd" Feb 19 14:02:16 crc kubenswrapper[4861]: E0219 14:02:16.002986 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:02:20 crc kubenswrapper[4861]: I0219 14:02:20.292456 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6cpjg"] Feb 19 14:02:20 crc kubenswrapper[4861]: E0219 14:02:20.293194 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dc07f64-3837-42e3-a7cf-1a99295110d6" containerName="collect-profiles" Feb 19 14:02:20 crc kubenswrapper[4861]: I0219 14:02:20.293209 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dc07f64-3837-42e3-a7cf-1a99295110d6" containerName="collect-profiles" Feb 19 14:02:20 crc kubenswrapper[4861]: I0219 14:02:20.293377 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dc07f64-3837-42e3-a7cf-1a99295110d6" containerName="collect-profiles" Feb 19 14:02:20 crc kubenswrapper[4861]: I0219 14:02:20.294516 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6cpjg" Feb 19 14:02:20 crc kubenswrapper[4861]: I0219 14:02:20.316676 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6cpjg"] Feb 19 14:02:20 crc kubenswrapper[4861]: I0219 14:02:20.451499 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cbec96e-c0f1-4b69-8396-cc3941bfb3cd-utilities\") pod \"certified-operators-6cpjg\" (UID: \"2cbec96e-c0f1-4b69-8396-cc3941bfb3cd\") " pod="openshift-marketplace/certified-operators-6cpjg" Feb 19 14:02:20 crc kubenswrapper[4861]: I0219 14:02:20.451852 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cbec96e-c0f1-4b69-8396-cc3941bfb3cd-catalog-content\") pod \"certified-operators-6cpjg\" (UID: \"2cbec96e-c0f1-4b69-8396-cc3941bfb3cd\") " pod="openshift-marketplace/certified-operators-6cpjg" Feb 19 14:02:20 crc kubenswrapper[4861]: I0219 14:02:20.451975 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4whdp\" (UniqueName: \"kubernetes.io/projected/2cbec96e-c0f1-4b69-8396-cc3941bfb3cd-kube-api-access-4whdp\") pod \"certified-operators-6cpjg\" (UID: \"2cbec96e-c0f1-4b69-8396-cc3941bfb3cd\") " pod="openshift-marketplace/certified-operators-6cpjg" Feb 19 14:02:20 crc kubenswrapper[4861]: I0219 14:02:20.553587 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cbec96e-c0f1-4b69-8396-cc3941bfb3cd-utilities\") pod \"certified-operators-6cpjg\" (UID: \"2cbec96e-c0f1-4b69-8396-cc3941bfb3cd\") " pod="openshift-marketplace/certified-operators-6cpjg" Feb 19 14:02:20 crc kubenswrapper[4861]: I0219 14:02:20.553698 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cbec96e-c0f1-4b69-8396-cc3941bfb3cd-catalog-content\") pod \"certified-operators-6cpjg\" (UID: \"2cbec96e-c0f1-4b69-8396-cc3941bfb3cd\") " pod="openshift-marketplace/certified-operators-6cpjg" Feb 19 14:02:20 crc kubenswrapper[4861]: I0219 14:02:20.553755 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4whdp\" (UniqueName: \"kubernetes.io/projected/2cbec96e-c0f1-4b69-8396-cc3941bfb3cd-kube-api-access-4whdp\") pod \"certified-operators-6cpjg\" (UID: \"2cbec96e-c0f1-4b69-8396-cc3941bfb3cd\") " pod="openshift-marketplace/certified-operators-6cpjg" Feb 19 14:02:20 crc kubenswrapper[4861]: I0219 14:02:20.554810 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cbec96e-c0f1-4b69-8396-cc3941bfb3cd-catalog-content\") pod \"certified-operators-6cpjg\" (UID: \"2cbec96e-c0f1-4b69-8396-cc3941bfb3cd\") " pod="openshift-marketplace/certified-operators-6cpjg" Feb 19 14:02:20 crc kubenswrapper[4861]: I0219 14:02:20.555158 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cbec96e-c0f1-4b69-8396-cc3941bfb3cd-utilities\") pod \"certified-operators-6cpjg\" (UID: \"2cbec96e-c0f1-4b69-8396-cc3941bfb3cd\") " pod="openshift-marketplace/certified-operators-6cpjg" Feb 19 14:02:20 crc kubenswrapper[4861]: I0219 14:02:20.597737 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4whdp\" (UniqueName: \"kubernetes.io/projected/2cbec96e-c0f1-4b69-8396-cc3941bfb3cd-kube-api-access-4whdp\") pod \"certified-operators-6cpjg\" (UID: \"2cbec96e-c0f1-4b69-8396-cc3941bfb3cd\") " pod="openshift-marketplace/certified-operators-6cpjg" Feb 19 14:02:20 crc kubenswrapper[4861]: I0219 14:02:20.632868 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6cpjg" Feb 19 14:02:21 crc kubenswrapper[4861]: I0219 14:02:21.235939 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6cpjg"] Feb 19 14:02:21 crc kubenswrapper[4861]: I0219 14:02:21.904509 4861 generic.go:334] "Generic (PLEG): container finished" podID="2cbec96e-c0f1-4b69-8396-cc3941bfb3cd" containerID="f01042e19123afbac38b7ced7361ddd6f0633d8d9409b95063414af39bfc6d85" exitCode=0 Feb 19 14:02:21 crc kubenswrapper[4861]: I0219 14:02:21.904773 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6cpjg" event={"ID":"2cbec96e-c0f1-4b69-8396-cc3941bfb3cd","Type":"ContainerDied","Data":"f01042e19123afbac38b7ced7361ddd6f0633d8d9409b95063414af39bfc6d85"} Feb 19 14:02:21 crc kubenswrapper[4861]: I0219 14:02:21.904929 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6cpjg" event={"ID":"2cbec96e-c0f1-4b69-8396-cc3941bfb3cd","Type":"ContainerStarted","Data":"1aa70c78ad74ef739ea97fb9bcd28104627a8b51cd709e990260ba8b166f9650"} Feb 19 14:02:23 crc kubenswrapper[4861]: I0219 14:02:23.926283 4861 generic.go:334] "Generic (PLEG): container finished" podID="2cbec96e-c0f1-4b69-8396-cc3941bfb3cd" containerID="4b5cd2a9560f20740005dd394a01679ef67fd5d36b897729830c33c66ca47528" exitCode=0 Feb 19 14:02:23 crc kubenswrapper[4861]: I0219 14:02:23.926388 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6cpjg" event={"ID":"2cbec96e-c0f1-4b69-8396-cc3941bfb3cd","Type":"ContainerDied","Data":"4b5cd2a9560f20740005dd394a01679ef67fd5d36b897729830c33c66ca47528"} Feb 19 14:02:24 crc kubenswrapper[4861]: I0219 14:02:24.942469 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6cpjg" 
event={"ID":"2cbec96e-c0f1-4b69-8396-cc3941bfb3cd","Type":"ContainerStarted","Data":"0665e9082663fdbb391f0973a1ecffc9b108dfd32154c4d29b090281de503590"} Feb 19 14:02:24 crc kubenswrapper[4861]: I0219 14:02:24.966627 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6cpjg" podStartSLOduration=2.535992525 podStartE2EDuration="4.966602558s" podCreationTimestamp="2026-02-19 14:02:20 +0000 UTC" firstStartedPulling="2026-02-19 14:02:21.906541693 +0000 UTC m=+3156.567644931" lastFinishedPulling="2026-02-19 14:02:24.337151696 +0000 UTC m=+3158.998254964" observedRunningTime="2026-02-19 14:02:24.963725861 +0000 UTC m=+3159.624829099" watchObservedRunningTime="2026-02-19 14:02:24.966602558 +0000 UTC m=+3159.627705806" Feb 19 14:02:28 crc kubenswrapper[4861]: I0219 14:02:28.977173 4861 scope.go:117] "RemoveContainer" containerID="97b856abb408928b70bba31b7c25f45b803def7d251c0abdbe927a71082279fd" Feb 19 14:02:28 crc kubenswrapper[4861]: E0219 14:02:28.977975 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:02:30 crc kubenswrapper[4861]: I0219 14:02:30.633779 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6cpjg" Feb 19 14:02:30 crc kubenswrapper[4861]: I0219 14:02:30.633868 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6cpjg" Feb 19 14:02:30 crc kubenswrapper[4861]: I0219 14:02:30.715023 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-6cpjg"
Feb 19 14:02:31 crc kubenswrapper[4861]: I0219 14:02:31.069402 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6cpjg"
Feb 19 14:02:31 crc kubenswrapper[4861]: I0219 14:02:31.130141 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6cpjg"]
Feb 19 14:02:33 crc kubenswrapper[4861]: I0219 14:02:33.008938 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6cpjg" podUID="2cbec96e-c0f1-4b69-8396-cc3941bfb3cd" containerName="registry-server" containerID="cri-o://0665e9082663fdbb391f0973a1ecffc9b108dfd32154c4d29b090281de503590" gracePeriod=2
Feb 19 14:02:33 crc kubenswrapper[4861]: I0219 14:02:33.476811 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6cpjg"
Feb 19 14:02:33 crc kubenswrapper[4861]: I0219 14:02:33.554665 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cbec96e-c0f1-4b69-8396-cc3941bfb3cd-catalog-content\") pod \"2cbec96e-c0f1-4b69-8396-cc3941bfb3cd\" (UID: \"2cbec96e-c0f1-4b69-8396-cc3941bfb3cd\") "
Feb 19 14:02:33 crc kubenswrapper[4861]: I0219 14:02:33.555081 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4whdp\" (UniqueName: \"kubernetes.io/projected/2cbec96e-c0f1-4b69-8396-cc3941bfb3cd-kube-api-access-4whdp\") pod \"2cbec96e-c0f1-4b69-8396-cc3941bfb3cd\" (UID: \"2cbec96e-c0f1-4b69-8396-cc3941bfb3cd\") "
Feb 19 14:02:33 crc kubenswrapper[4861]: I0219 14:02:33.555132 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cbec96e-c0f1-4b69-8396-cc3941bfb3cd-utilities\") pod \"2cbec96e-c0f1-4b69-8396-cc3941bfb3cd\" (UID: \"2cbec96e-c0f1-4b69-8396-cc3941bfb3cd\") "
Feb 19 14:02:33 crc kubenswrapper[4861]: I0219 14:02:33.556022 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cbec96e-c0f1-4b69-8396-cc3941bfb3cd-utilities" (OuterVolumeSpecName: "utilities") pod "2cbec96e-c0f1-4b69-8396-cc3941bfb3cd" (UID: "2cbec96e-c0f1-4b69-8396-cc3941bfb3cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 14:02:33 crc kubenswrapper[4861]: I0219 14:02:33.559445 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cbec96e-c0f1-4b69-8396-cc3941bfb3cd-kube-api-access-4whdp" (OuterVolumeSpecName: "kube-api-access-4whdp") pod "2cbec96e-c0f1-4b69-8396-cc3941bfb3cd" (UID: "2cbec96e-c0f1-4b69-8396-cc3941bfb3cd"). InnerVolumeSpecName "kube-api-access-4whdp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 14:02:33 crc kubenswrapper[4861]: I0219 14:02:33.626678 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cbec96e-c0f1-4b69-8396-cc3941bfb3cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2cbec96e-c0f1-4b69-8396-cc3941bfb3cd" (UID: "2cbec96e-c0f1-4b69-8396-cc3941bfb3cd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 14:02:33 crc kubenswrapper[4861]: I0219 14:02:33.657135 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4whdp\" (UniqueName: \"kubernetes.io/projected/2cbec96e-c0f1-4b69-8396-cc3941bfb3cd-kube-api-access-4whdp\") on node \"crc\" DevicePath \"\""
Feb 19 14:02:33 crc kubenswrapper[4861]: I0219 14:02:33.657176 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cbec96e-c0f1-4b69-8396-cc3941bfb3cd-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 14:02:33 crc kubenswrapper[4861]: I0219 14:02:33.657188 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cbec96e-c0f1-4b69-8396-cc3941bfb3cd-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 14:02:34 crc kubenswrapper[4861]: I0219 14:02:34.017473 4861 generic.go:334] "Generic (PLEG): container finished" podID="2cbec96e-c0f1-4b69-8396-cc3941bfb3cd" containerID="0665e9082663fdbb391f0973a1ecffc9b108dfd32154c4d29b090281de503590" exitCode=0
Feb 19 14:02:34 crc kubenswrapper[4861]: I0219 14:02:34.017536 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6cpjg" event={"ID":"2cbec96e-c0f1-4b69-8396-cc3941bfb3cd","Type":"ContainerDied","Data":"0665e9082663fdbb391f0973a1ecffc9b108dfd32154c4d29b090281de503590"}
Feb 19 14:02:34 crc kubenswrapper[4861]: I0219 14:02:34.017584 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6cpjg" event={"ID":"2cbec96e-c0f1-4b69-8396-cc3941bfb3cd","Type":"ContainerDied","Data":"1aa70c78ad74ef739ea97fb9bcd28104627a8b51cd709e990260ba8b166f9650"}
Feb 19 14:02:34 crc kubenswrapper[4861]: I0219 14:02:34.017609 4861 scope.go:117] "RemoveContainer" containerID="0665e9082663fdbb391f0973a1ecffc9b108dfd32154c4d29b090281de503590"
Feb 19 14:02:34 crc kubenswrapper[4861]: I0219 14:02:34.017538 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6cpjg"
Feb 19 14:02:34 crc kubenswrapper[4861]: I0219 14:02:34.062601 4861 scope.go:117] "RemoveContainer" containerID="4b5cd2a9560f20740005dd394a01679ef67fd5d36b897729830c33c66ca47528"
Feb 19 14:02:34 crc kubenswrapper[4861]: I0219 14:02:34.063976 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6cpjg"]
Feb 19 14:02:34 crc kubenswrapper[4861]: I0219 14:02:34.068980 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6cpjg"]
Feb 19 14:02:34 crc kubenswrapper[4861]: I0219 14:02:34.136015 4861 scope.go:117] "RemoveContainer" containerID="f01042e19123afbac38b7ced7361ddd6f0633d8d9409b95063414af39bfc6d85"
Feb 19 14:02:34 crc kubenswrapper[4861]: I0219 14:02:34.155846 4861 scope.go:117] "RemoveContainer" containerID="0665e9082663fdbb391f0973a1ecffc9b108dfd32154c4d29b090281de503590"
Feb 19 14:02:34 crc kubenswrapper[4861]: E0219 14:02:34.156309 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0665e9082663fdbb391f0973a1ecffc9b108dfd32154c4d29b090281de503590\": container with ID starting with 0665e9082663fdbb391f0973a1ecffc9b108dfd32154c4d29b090281de503590 not found: ID does not exist" containerID="0665e9082663fdbb391f0973a1ecffc9b108dfd32154c4d29b090281de503590"
Feb 19 14:02:34 crc kubenswrapper[4861]: I0219 14:02:34.156376 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0665e9082663fdbb391f0973a1ecffc9b108dfd32154c4d29b090281de503590"} err="failed to get container status \"0665e9082663fdbb391f0973a1ecffc9b108dfd32154c4d29b090281de503590\": rpc error: code = NotFound desc = could not find container \"0665e9082663fdbb391f0973a1ecffc9b108dfd32154c4d29b090281de503590\": container with ID starting with 0665e9082663fdbb391f0973a1ecffc9b108dfd32154c4d29b090281de503590 not found: ID does not exist"
Feb 19 14:02:34 crc kubenswrapper[4861]: I0219 14:02:34.156431 4861 scope.go:117] "RemoveContainer" containerID="4b5cd2a9560f20740005dd394a01679ef67fd5d36b897729830c33c66ca47528"
Feb 19 14:02:34 crc kubenswrapper[4861]: E0219 14:02:34.156878 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b5cd2a9560f20740005dd394a01679ef67fd5d36b897729830c33c66ca47528\": container with ID starting with 4b5cd2a9560f20740005dd394a01679ef67fd5d36b897729830c33c66ca47528 not found: ID does not exist" containerID="4b5cd2a9560f20740005dd394a01679ef67fd5d36b897729830c33c66ca47528"
Feb 19 14:02:34 crc kubenswrapper[4861]: I0219 14:02:34.156917 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b5cd2a9560f20740005dd394a01679ef67fd5d36b897729830c33c66ca47528"} err="failed to get container status \"4b5cd2a9560f20740005dd394a01679ef67fd5d36b897729830c33c66ca47528\": rpc error: code = NotFound desc = could not find container \"4b5cd2a9560f20740005dd394a01679ef67fd5d36b897729830c33c66ca47528\": container with ID starting with 4b5cd2a9560f20740005dd394a01679ef67fd5d36b897729830c33c66ca47528 not found: ID does not exist"
Feb 19 14:02:34 crc kubenswrapper[4861]: I0219 14:02:34.156946 4861 scope.go:117] "RemoveContainer" containerID="f01042e19123afbac38b7ced7361ddd6f0633d8d9409b95063414af39bfc6d85"
Feb 19 14:02:34 crc kubenswrapper[4861]: E0219 14:02:34.157322 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f01042e19123afbac38b7ced7361ddd6f0633d8d9409b95063414af39bfc6d85\": container with ID starting with f01042e19123afbac38b7ced7361ddd6f0633d8d9409b95063414af39bfc6d85 not found: ID does not exist" containerID="f01042e19123afbac38b7ced7361ddd6f0633d8d9409b95063414af39bfc6d85"
Feb 19 14:02:34 crc kubenswrapper[4861]: I0219 14:02:34.157353 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f01042e19123afbac38b7ced7361ddd6f0633d8d9409b95063414af39bfc6d85"} err="failed to get container status \"f01042e19123afbac38b7ced7361ddd6f0633d8d9409b95063414af39bfc6d85\": rpc error: code = NotFound desc = could not find container \"f01042e19123afbac38b7ced7361ddd6f0633d8d9409b95063414af39bfc6d85\": container with ID starting with f01042e19123afbac38b7ced7361ddd6f0633d8d9409b95063414af39bfc6d85 not found: ID does not exist"
Feb 19 14:02:35 crc kubenswrapper[4861]: I0219 14:02:35.988775 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cbec96e-c0f1-4b69-8396-cc3941bfb3cd" path="/var/lib/kubelet/pods/2cbec96e-c0f1-4b69-8396-cc3941bfb3cd/volumes"
Feb 19 14:02:42 crc kubenswrapper[4861]: I0219 14:02:42.977094 4861 scope.go:117] "RemoveContainer" containerID="97b856abb408928b70bba31b7c25f45b803def7d251c0abdbe927a71082279fd"
Feb 19 14:02:42 crc kubenswrapper[4861]: E0219 14:02:42.978755 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227"
Feb 19 14:02:56 crc kubenswrapper[4861]: I0219 14:02:56.976591 4861 scope.go:117] "RemoveContainer" containerID="97b856abb408928b70bba31b7c25f45b803def7d251c0abdbe927a71082279fd"
Feb 19 14:02:56 crc kubenswrapper[4861]: E0219 14:02:56.977567 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227"
Feb 19 14:03:10 crc kubenswrapper[4861]: I0219 14:03:10.977157 4861 scope.go:117] "RemoveContainer" containerID="97b856abb408928b70bba31b7c25f45b803def7d251c0abdbe927a71082279fd"
Feb 19 14:03:10 crc kubenswrapper[4861]: E0219 14:03:10.978077 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227"
Feb 19 14:03:23 crc kubenswrapper[4861]: I0219 14:03:23.977332 4861 scope.go:117] "RemoveContainer" containerID="97b856abb408928b70bba31b7c25f45b803def7d251c0abdbe927a71082279fd"
Feb 19 14:03:23 crc kubenswrapper[4861]: E0219 14:03:23.978144 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227"
Feb 19 14:03:34 crc kubenswrapper[4861]: I0219 14:03:34.977342 4861 scope.go:117] "RemoveContainer" containerID="97b856abb408928b70bba31b7c25f45b803def7d251c0abdbe927a71082279fd"
Feb 19 14:03:34 crc kubenswrapper[4861]: E0219 14:03:34.978267 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227"
Feb 19 14:03:49 crc kubenswrapper[4861]: I0219 14:03:49.977414 4861 scope.go:117] "RemoveContainer" containerID="97b856abb408928b70bba31b7c25f45b803def7d251c0abdbe927a71082279fd"
Feb 19 14:03:49 crc kubenswrapper[4861]: E0219 14:03:49.978448 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227"
Feb 19 14:04:03 crc kubenswrapper[4861]: I0219 14:04:03.977337 4861 scope.go:117] "RemoveContainer" containerID="97b856abb408928b70bba31b7c25f45b803def7d251c0abdbe927a71082279fd"
Feb 19 14:04:03 crc kubenswrapper[4861]: E0219 14:04:03.978188 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227"
Feb 19 14:04:16 crc kubenswrapper[4861]: I0219 14:04:16.977733 4861 scope.go:117] "RemoveContainer" containerID="97b856abb408928b70bba31b7c25f45b803def7d251c0abdbe927a71082279fd"
Feb 19 14:04:16 crc kubenswrapper[4861]: E0219 14:04:16.980025 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227"
Feb 19 14:04:31 crc kubenswrapper[4861]: I0219 14:04:31.977305 4861 scope.go:117] "RemoveContainer" containerID="97b856abb408928b70bba31b7c25f45b803def7d251c0abdbe927a71082279fd"
Feb 19 14:04:31 crc kubenswrapper[4861]: E0219 14:04:31.978484 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227"
Feb 19 14:04:42 crc kubenswrapper[4861]: I0219 14:04:42.979982 4861 scope.go:117] "RemoveContainer" containerID="97b856abb408928b70bba31b7c25f45b803def7d251c0abdbe927a71082279fd"
Feb 19 14:04:42 crc kubenswrapper[4861]: E0219 14:04:42.981122 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227"
Feb 19 14:04:53 crc kubenswrapper[4861]: I0219 14:04:53.597218 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4czdd"]
Feb 19 14:04:53 crc kubenswrapper[4861]: E0219 14:04:53.598251 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cbec96e-c0f1-4b69-8396-cc3941bfb3cd" containerName="registry-server"
Feb 19 14:04:53 crc kubenswrapper[4861]: I0219 14:04:53.598284 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cbec96e-c0f1-4b69-8396-cc3941bfb3cd" containerName="registry-server"
Feb 19 14:04:53 crc kubenswrapper[4861]: E0219 14:04:53.598316 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cbec96e-c0f1-4b69-8396-cc3941bfb3cd" containerName="extract-content"
Feb 19 14:04:53 crc kubenswrapper[4861]: I0219 14:04:53.598333 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cbec96e-c0f1-4b69-8396-cc3941bfb3cd" containerName="extract-content"
Feb 19 14:04:53 crc kubenswrapper[4861]: E0219 14:04:53.598383 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cbec96e-c0f1-4b69-8396-cc3941bfb3cd" containerName="extract-utilities"
Feb 19 14:04:53 crc kubenswrapper[4861]: I0219 14:04:53.598402 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cbec96e-c0f1-4b69-8396-cc3941bfb3cd" containerName="extract-utilities"
Feb 19 14:04:53 crc kubenswrapper[4861]: I0219 14:04:53.598750 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cbec96e-c0f1-4b69-8396-cc3941bfb3cd" containerName="registry-server"
Feb 19 14:04:53 crc kubenswrapper[4861]: I0219 14:04:53.601086 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4czdd"
Feb 19 14:04:53 crc kubenswrapper[4861]: I0219 14:04:53.614316 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4czdd"]
Feb 19 14:04:53 crc kubenswrapper[4861]: I0219 14:04:53.699368 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829fc1c5-76fb-49d1-a9a2-c413e33adc8c-utilities\") pod \"redhat-operators-4czdd\" (UID: \"829fc1c5-76fb-49d1-a9a2-c413e33adc8c\") " pod="openshift-marketplace/redhat-operators-4czdd"
Feb 19 14:04:53 crc kubenswrapper[4861]: I0219 14:04:53.699440 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwtb9\" (UniqueName: \"kubernetes.io/projected/829fc1c5-76fb-49d1-a9a2-c413e33adc8c-kube-api-access-gwtb9\") pod \"redhat-operators-4czdd\" (UID: \"829fc1c5-76fb-49d1-a9a2-c413e33adc8c\") " pod="openshift-marketplace/redhat-operators-4czdd"
Feb 19 14:04:53 crc kubenswrapper[4861]: I0219 14:04:53.699474 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/829fc1c5-76fb-49d1-a9a2-c413e33adc8c-catalog-content\") pod \"redhat-operators-4czdd\" (UID: \"829fc1c5-76fb-49d1-a9a2-c413e33adc8c\") " pod="openshift-marketplace/redhat-operators-4czdd"
Feb 19 14:04:53 crc kubenswrapper[4861]: I0219 14:04:53.800403 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829fc1c5-76fb-49d1-a9a2-c413e33adc8c-utilities\") pod \"redhat-operators-4czdd\" (UID: \"829fc1c5-76fb-49d1-a9a2-c413e33adc8c\") " pod="openshift-marketplace/redhat-operators-4czdd"
Feb 19 14:04:53 crc kubenswrapper[4861]: I0219 14:04:53.800484 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwtb9\" (UniqueName: \"kubernetes.io/projected/829fc1c5-76fb-49d1-a9a2-c413e33adc8c-kube-api-access-gwtb9\") pod \"redhat-operators-4czdd\" (UID: \"829fc1c5-76fb-49d1-a9a2-c413e33adc8c\") " pod="openshift-marketplace/redhat-operators-4czdd"
Feb 19 14:04:53 crc kubenswrapper[4861]: I0219 14:04:53.800524 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/829fc1c5-76fb-49d1-a9a2-c413e33adc8c-catalog-content\") pod \"redhat-operators-4czdd\" (UID: \"829fc1c5-76fb-49d1-a9a2-c413e33adc8c\") " pod="openshift-marketplace/redhat-operators-4czdd"
Feb 19 14:04:53 crc kubenswrapper[4861]: I0219 14:04:53.801031 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/829fc1c5-76fb-49d1-a9a2-c413e33adc8c-catalog-content\") pod \"redhat-operators-4czdd\" (UID: \"829fc1c5-76fb-49d1-a9a2-c413e33adc8c\") " pod="openshift-marketplace/redhat-operators-4czdd"
Feb 19 14:04:53 crc kubenswrapper[4861]: I0219 14:04:53.801346 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829fc1c5-76fb-49d1-a9a2-c413e33adc8c-utilities\") pod \"redhat-operators-4czdd\" (UID: \"829fc1c5-76fb-49d1-a9a2-c413e33adc8c\") " pod="openshift-marketplace/redhat-operators-4czdd"
Feb 19 14:04:53 crc kubenswrapper[4861]: I0219 14:04:53.823556 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwtb9\" (UniqueName: \"kubernetes.io/projected/829fc1c5-76fb-49d1-a9a2-c413e33adc8c-kube-api-access-gwtb9\") pod \"redhat-operators-4czdd\" (UID: \"829fc1c5-76fb-49d1-a9a2-c413e33adc8c\") " pod="openshift-marketplace/redhat-operators-4czdd"
Feb 19 14:04:53 crc kubenswrapper[4861]: I0219 14:04:53.937598 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4czdd"
Feb 19 14:04:54 crc kubenswrapper[4861]: I0219 14:04:54.378993 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4czdd"]
Feb 19 14:04:55 crc kubenswrapper[4861]: I0219 14:04:55.322511 4861 generic.go:334] "Generic (PLEG): container finished" podID="829fc1c5-76fb-49d1-a9a2-c413e33adc8c" containerID="e8ca77bf6fbaf1236689db019f41ba1063479374000bdbe4edab9be03dfbda27" exitCode=0
Feb 19 14:04:55 crc kubenswrapper[4861]: I0219 14:04:55.322781 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4czdd" event={"ID":"829fc1c5-76fb-49d1-a9a2-c413e33adc8c","Type":"ContainerDied","Data":"e8ca77bf6fbaf1236689db019f41ba1063479374000bdbe4edab9be03dfbda27"}
Feb 19 14:04:55 crc kubenswrapper[4861]: I0219 14:04:55.322814 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4czdd" event={"ID":"829fc1c5-76fb-49d1-a9a2-c413e33adc8c","Type":"ContainerStarted","Data":"a50847439de34484673a77498558fe4550b4928b22151eeb140fbe41c35b7f5a"}
Feb 19 14:04:55 crc kubenswrapper[4861]: I0219 14:04:55.326500 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 19 14:04:56 crc kubenswrapper[4861]: I0219 14:04:56.977238 4861 scope.go:117] "RemoveContainer" containerID="97b856abb408928b70bba31b7c25f45b803def7d251c0abdbe927a71082279fd"
Feb 19 14:04:56 crc kubenswrapper[4861]: E0219 14:04:56.977783 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227"
Feb 19 14:04:57 crc kubenswrapper[4861]: I0219 14:04:57.342285 4861 generic.go:334] "Generic (PLEG): container finished" podID="829fc1c5-76fb-49d1-a9a2-c413e33adc8c" containerID="dd14ec14a58aa7e80fbfc80ccaef5a441502fe5742fdf0776038400b2dc35137" exitCode=0
Feb 19 14:04:57 crc kubenswrapper[4861]: I0219 14:04:57.342325 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4czdd" event={"ID":"829fc1c5-76fb-49d1-a9a2-c413e33adc8c","Type":"ContainerDied","Data":"dd14ec14a58aa7e80fbfc80ccaef5a441502fe5742fdf0776038400b2dc35137"}
Feb 19 14:04:58 crc kubenswrapper[4861]: I0219 14:04:58.356514 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4czdd" event={"ID":"829fc1c5-76fb-49d1-a9a2-c413e33adc8c","Type":"ContainerStarted","Data":"0989a5848adb683cfd37ff624dbbc8a88f4898f421b7c9c2996a254942587927"}
Feb 19 14:04:58 crc kubenswrapper[4861]: I0219 14:04:58.389442 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4czdd" podStartSLOduration=2.970012053 podStartE2EDuration="5.389390501s" podCreationTimestamp="2026-02-19 14:04:53 +0000 UTC" firstStartedPulling="2026-02-19 14:04:55.326207013 +0000 UTC m=+3309.987310251" lastFinishedPulling="2026-02-19 14:04:57.745585441 +0000 UTC m=+3312.406688699" observedRunningTime="2026-02-19 14:04:58.38411516 +0000 UTC m=+3313.045218428" watchObservedRunningTime="2026-02-19 14:04:58.389390501 +0000 UTC m=+3313.050493739"
Feb 19 14:05:03 crc kubenswrapper[4861]: I0219 14:05:03.938382 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4czdd"
Feb 19 14:05:03 crc kubenswrapper[4861]: I0219 14:05:03.939002 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4czdd"
Feb 19 14:05:04 crc kubenswrapper[4861]: I0219 14:05:04.983845 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4czdd" podUID="829fc1c5-76fb-49d1-a9a2-c413e33adc8c" containerName="registry-server" probeResult="failure" output=<
Feb 19 14:05:04 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s
Feb 19 14:05:04 crc kubenswrapper[4861]: >
Feb 19 14:05:09 crc kubenswrapper[4861]: I0219 14:05:09.977609 4861 scope.go:117] "RemoveContainer" containerID="97b856abb408928b70bba31b7c25f45b803def7d251c0abdbe927a71082279fd"
Feb 19 14:05:09 crc kubenswrapper[4861]: E0219 14:05:09.978316 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227"
Feb 19 14:05:14 crc kubenswrapper[4861]: I0219 14:05:13.999555 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4czdd"
Feb 19 14:05:14 crc kubenswrapper[4861]: I0219 14:05:14.049256 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4czdd"
Feb 19 14:05:14 crc kubenswrapper[4861]: I0219 14:05:14.239963 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4czdd"]
Feb 19 14:05:15 crc kubenswrapper[4861]: I0219 14:05:15.504151 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4czdd" podUID="829fc1c5-76fb-49d1-a9a2-c413e33adc8c" containerName="registry-server" containerID="cri-o://0989a5848adb683cfd37ff624dbbc8a88f4898f421b7c9c2996a254942587927" gracePeriod=2
Feb 19 14:05:15 crc kubenswrapper[4861]: I0219 14:05:15.965619 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4czdd"
Feb 19 14:05:16 crc kubenswrapper[4861]: I0219 14:05:16.014637 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwtb9\" (UniqueName: \"kubernetes.io/projected/829fc1c5-76fb-49d1-a9a2-c413e33adc8c-kube-api-access-gwtb9\") pod \"829fc1c5-76fb-49d1-a9a2-c413e33adc8c\" (UID: \"829fc1c5-76fb-49d1-a9a2-c413e33adc8c\") "
Feb 19 14:05:16 crc kubenswrapper[4861]: I0219 14:05:16.014805 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829fc1c5-76fb-49d1-a9a2-c413e33adc8c-utilities\") pod \"829fc1c5-76fb-49d1-a9a2-c413e33adc8c\" (UID: \"829fc1c5-76fb-49d1-a9a2-c413e33adc8c\") "
Feb 19 14:05:16 crc kubenswrapper[4861]: I0219 14:05:16.014852 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/829fc1c5-76fb-49d1-a9a2-c413e33adc8c-catalog-content\") pod \"829fc1c5-76fb-49d1-a9a2-c413e33adc8c\" (UID: \"829fc1c5-76fb-49d1-a9a2-c413e33adc8c\") "
Feb 19 14:05:16 crc kubenswrapper[4861]: I0219 14:05:16.015722 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/829fc1c5-76fb-49d1-a9a2-c413e33adc8c-utilities" (OuterVolumeSpecName: "utilities") pod "829fc1c5-76fb-49d1-a9a2-c413e33adc8c" (UID: "829fc1c5-76fb-49d1-a9a2-c413e33adc8c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 14:05:16 crc kubenswrapper[4861]: I0219 14:05:16.020970 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/829fc1c5-76fb-49d1-a9a2-c413e33adc8c-kube-api-access-gwtb9" (OuterVolumeSpecName: "kube-api-access-gwtb9") pod "829fc1c5-76fb-49d1-a9a2-c413e33adc8c" (UID: "829fc1c5-76fb-49d1-a9a2-c413e33adc8c"). InnerVolumeSpecName "kube-api-access-gwtb9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 14:05:16 crc kubenswrapper[4861]: I0219 14:05:16.116771 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwtb9\" (UniqueName: \"kubernetes.io/projected/829fc1c5-76fb-49d1-a9a2-c413e33adc8c-kube-api-access-gwtb9\") on node \"crc\" DevicePath \"\""
Feb 19 14:05:16 crc kubenswrapper[4861]: I0219 14:05:16.116821 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829fc1c5-76fb-49d1-a9a2-c413e33adc8c-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 14:05:16 crc kubenswrapper[4861]: I0219 14:05:16.139525 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/829fc1c5-76fb-49d1-a9a2-c413e33adc8c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "829fc1c5-76fb-49d1-a9a2-c413e33adc8c" (UID: "829fc1c5-76fb-49d1-a9a2-c413e33adc8c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 14:05:16 crc kubenswrapper[4861]: I0219 14:05:16.217942 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/829fc1c5-76fb-49d1-a9a2-c413e33adc8c-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 14:05:16 crc kubenswrapper[4861]: I0219 14:05:16.518495 4861 generic.go:334] "Generic (PLEG): container finished" podID="829fc1c5-76fb-49d1-a9a2-c413e33adc8c" containerID="0989a5848adb683cfd37ff624dbbc8a88f4898f421b7c9c2996a254942587927" exitCode=0
Feb 19 14:05:16 crc kubenswrapper[4861]: I0219 14:05:16.518555 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4czdd" event={"ID":"829fc1c5-76fb-49d1-a9a2-c413e33adc8c","Type":"ContainerDied","Data":"0989a5848adb683cfd37ff624dbbc8a88f4898f421b7c9c2996a254942587927"}
Feb 19 14:05:16 crc kubenswrapper[4861]: I0219 14:05:16.519009 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4czdd" event={"ID":"829fc1c5-76fb-49d1-a9a2-c413e33adc8c","Type":"ContainerDied","Data":"a50847439de34484673a77498558fe4550b4928b22151eeb140fbe41c35b7f5a"}
Feb 19 14:05:16 crc kubenswrapper[4861]: I0219 14:05:16.519052 4861 scope.go:117] "RemoveContainer" containerID="0989a5848adb683cfd37ff624dbbc8a88f4898f421b7c9c2996a254942587927"
Feb 19 14:05:16 crc kubenswrapper[4861]: I0219 14:05:16.518662 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4czdd"
Feb 19 14:05:16 crc kubenswrapper[4861]: I0219 14:05:16.549977 4861 scope.go:117] "RemoveContainer" containerID="dd14ec14a58aa7e80fbfc80ccaef5a441502fe5742fdf0776038400b2dc35137"
Feb 19 14:05:16 crc kubenswrapper[4861]: I0219 14:05:16.579911 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4czdd"]
Feb 19 14:05:16 crc kubenswrapper[4861]: I0219 14:05:16.603752 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4czdd"]
Feb 19 14:05:16 crc kubenswrapper[4861]: I0219 14:05:16.605700 4861 scope.go:117] "RemoveContainer" containerID="e8ca77bf6fbaf1236689db019f41ba1063479374000bdbe4edab9be03dfbda27"
Feb 19 14:05:16 crc kubenswrapper[4861]: I0219 14:05:16.646790 4861 scope.go:117] "RemoveContainer" containerID="0989a5848adb683cfd37ff624dbbc8a88f4898f421b7c9c2996a254942587927"
Feb 19 14:05:16 crc kubenswrapper[4861]: E0219 14:05:16.652614 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0989a5848adb683cfd37ff624dbbc8a88f4898f421b7c9c2996a254942587927\": container with ID starting with 0989a5848adb683cfd37ff624dbbc8a88f4898f421b7c9c2996a254942587927 not found: ID does not exist" containerID="0989a5848adb683cfd37ff624dbbc8a88f4898f421b7c9c2996a254942587927"
Feb 19 14:05:16 crc kubenswrapper[4861]: I0219 14:05:16.652687 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0989a5848adb683cfd37ff624dbbc8a88f4898f421b7c9c2996a254942587927"} err="failed to get container status \"0989a5848adb683cfd37ff624dbbc8a88f4898f421b7c9c2996a254942587927\": rpc error: code = NotFound desc = could not find container \"0989a5848adb683cfd37ff624dbbc8a88f4898f421b7c9c2996a254942587927\": container with ID starting with 0989a5848adb683cfd37ff624dbbc8a88f4898f421b7c9c2996a254942587927 not found: ID does not exist"
Feb 19 14:05:16 crc kubenswrapper[4861]: I0219 14:05:16.652730 4861 scope.go:117] "RemoveContainer" containerID="dd14ec14a58aa7e80fbfc80ccaef5a441502fe5742fdf0776038400b2dc35137"
Feb 19 14:05:16 crc kubenswrapper[4861]: E0219 14:05:16.656998 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd14ec14a58aa7e80fbfc80ccaef5a441502fe5742fdf0776038400b2dc35137\": container with ID starting with dd14ec14a58aa7e80fbfc80ccaef5a441502fe5742fdf0776038400b2dc35137 not found: ID does not exist" containerID="dd14ec14a58aa7e80fbfc80ccaef5a441502fe5742fdf0776038400b2dc35137"
Feb 19 14:05:16 crc kubenswrapper[4861]: I0219 14:05:16.657038 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd14ec14a58aa7e80fbfc80ccaef5a441502fe5742fdf0776038400b2dc35137"} err="failed to get container status \"dd14ec14a58aa7e80fbfc80ccaef5a441502fe5742fdf0776038400b2dc35137\": rpc error: code = NotFound desc = could not find container \"dd14ec14a58aa7e80fbfc80ccaef5a441502fe5742fdf0776038400b2dc35137\": container with ID starting with dd14ec14a58aa7e80fbfc80ccaef5a441502fe5742fdf0776038400b2dc35137 not found: ID does not exist"
Feb 19 14:05:16 crc kubenswrapper[4861]: I0219 14:05:16.657063 4861 scope.go:117] "RemoveContainer" containerID="e8ca77bf6fbaf1236689db019f41ba1063479374000bdbe4edab9be03dfbda27"
Feb 19 14:05:16 crc kubenswrapper[4861]: E0219 14:05:16.657358 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8ca77bf6fbaf1236689db019f41ba1063479374000bdbe4edab9be03dfbda27\": container with ID starting with e8ca77bf6fbaf1236689db019f41ba1063479374000bdbe4edab9be03dfbda27 not found: ID does not exist" containerID="e8ca77bf6fbaf1236689db019f41ba1063479374000bdbe4edab9be03dfbda27"
Feb 19 14:05:16 crc kubenswrapper[4861]: I0219 14:05:16.657391 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8ca77bf6fbaf1236689db019f41ba1063479374000bdbe4edab9be03dfbda27"} err="failed to get container status \"e8ca77bf6fbaf1236689db019f41ba1063479374000bdbe4edab9be03dfbda27\": rpc error: code = NotFound desc = could not find container \"e8ca77bf6fbaf1236689db019f41ba1063479374000bdbe4edab9be03dfbda27\": container with ID starting with e8ca77bf6fbaf1236689db019f41ba1063479374000bdbe4edab9be03dfbda27 not found: ID does not exist"
Feb 19 14:05:17 crc kubenswrapper[4861]: I0219 14:05:17.988257 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="829fc1c5-76fb-49d1-a9a2-c413e33adc8c" path="/var/lib/kubelet/pods/829fc1c5-76fb-49d1-a9a2-c413e33adc8c/volumes"
Feb 19 14:05:22 crc kubenswrapper[4861]: I0219 14:05:22.977821 4861 scope.go:117] "RemoveContainer" containerID="97b856abb408928b70bba31b7c25f45b803def7d251c0abdbe927a71082279fd"
Feb 19 14:05:22 crc kubenswrapper[4861]: E0219 14:05:22.979022 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227"
Feb 19 14:05:36 crc kubenswrapper[4861]: I0219 14:05:36.978223 4861 scope.go:117] "RemoveContainer" containerID="97b856abb408928b70bba31b7c25f45b803def7d251c0abdbe927a71082279fd"
Feb 19 14:05:36 crc kubenswrapper[4861]: E0219 14:05:36.979536 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227"
Feb 19 14:05:49 crc kubenswrapper[4861]: I0219 14:05:49.977122 4861 scope.go:117] "RemoveContainer" containerID="97b856abb408928b70bba31b7c25f45b803def7d251c0abdbe927a71082279fd"
Feb 19 14:05:49 crc kubenswrapper[4861]: E0219 14:05:49.978105 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227"
Feb 19 14:06:02 crc kubenswrapper[4861]: I0219 14:06:02.977643 4861 scope.go:117] "RemoveContainer" containerID="97b856abb408928b70bba31b7c25f45b803def7d251c0abdbe927a71082279fd"
Feb 19 14:06:02 crc kubenswrapper[4861]: E0219 14:06:02.978562 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227"
Feb 19 14:06:16 crc kubenswrapper[4861]: I0219
14:06:16.977241 4861 scope.go:117] "RemoveContainer" containerID="97b856abb408928b70bba31b7c25f45b803def7d251c0abdbe927a71082279fd" Feb 19 14:06:16 crc kubenswrapper[4861]: E0219 14:06:16.978293 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:06:28 crc kubenswrapper[4861]: I0219 14:06:28.977190 4861 scope.go:117] "RemoveContainer" containerID="97b856abb408928b70bba31b7c25f45b803def7d251c0abdbe927a71082279fd" Feb 19 14:06:28 crc kubenswrapper[4861]: E0219 14:06:28.978141 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:06:41 crc kubenswrapper[4861]: I0219 14:06:41.977536 4861 scope.go:117] "RemoveContainer" containerID="97b856abb408928b70bba31b7c25f45b803def7d251c0abdbe927a71082279fd" Feb 19 14:06:41 crc kubenswrapper[4861]: E0219 14:06:41.978583 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:06:56 crc 
kubenswrapper[4861]: I0219 14:06:56.977866 4861 scope.go:117] "RemoveContainer" containerID="97b856abb408928b70bba31b7c25f45b803def7d251c0abdbe927a71082279fd" Feb 19 14:06:56 crc kubenswrapper[4861]: E0219 14:06:56.979013 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:07:07 crc kubenswrapper[4861]: I0219 14:07:07.977898 4861 scope.go:117] "RemoveContainer" containerID="97b856abb408928b70bba31b7c25f45b803def7d251c0abdbe927a71082279fd" Feb 19 14:07:08 crc kubenswrapper[4861]: I0219 14:07:08.562371 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"b618bc46fa12701f1809fdf82a23eedbd2deae191f3a45e0501eba1fbb1d21f8"} Feb 19 14:08:54 crc kubenswrapper[4861]: I0219 14:08:54.296048 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7dgkn"] Feb 19 14:08:54 crc kubenswrapper[4861]: E0219 14:08:54.297073 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829fc1c5-76fb-49d1-a9a2-c413e33adc8c" containerName="extract-content" Feb 19 14:08:54 crc kubenswrapper[4861]: I0219 14:08:54.297095 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="829fc1c5-76fb-49d1-a9a2-c413e33adc8c" containerName="extract-content" Feb 19 14:08:54 crc kubenswrapper[4861]: E0219 14:08:54.297113 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829fc1c5-76fb-49d1-a9a2-c413e33adc8c" containerName="extract-utilities" Feb 19 14:08:54 crc kubenswrapper[4861]: I0219 
14:08:54.297121 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="829fc1c5-76fb-49d1-a9a2-c413e33adc8c" containerName="extract-utilities" Feb 19 14:08:54 crc kubenswrapper[4861]: E0219 14:08:54.297140 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829fc1c5-76fb-49d1-a9a2-c413e33adc8c" containerName="registry-server" Feb 19 14:08:54 crc kubenswrapper[4861]: I0219 14:08:54.297150 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="829fc1c5-76fb-49d1-a9a2-c413e33adc8c" containerName="registry-server" Feb 19 14:08:54 crc kubenswrapper[4861]: I0219 14:08:54.297374 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="829fc1c5-76fb-49d1-a9a2-c413e33adc8c" containerName="registry-server" Feb 19 14:08:54 crc kubenswrapper[4861]: I0219 14:08:54.303599 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7dgkn" Feb 19 14:08:54 crc kubenswrapper[4861]: I0219 14:08:54.321931 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7dgkn"] Feb 19 14:08:54 crc kubenswrapper[4861]: I0219 14:08:54.392265 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd239ff4-4555-406a-99ea-9cfd4720d836-utilities\") pod \"community-operators-7dgkn\" (UID: \"fd239ff4-4555-406a-99ea-9cfd4720d836\") " pod="openshift-marketplace/community-operators-7dgkn" Feb 19 14:08:54 crc kubenswrapper[4861]: I0219 14:08:54.392317 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4kcl\" (UniqueName: \"kubernetes.io/projected/fd239ff4-4555-406a-99ea-9cfd4720d836-kube-api-access-g4kcl\") pod \"community-operators-7dgkn\" (UID: \"fd239ff4-4555-406a-99ea-9cfd4720d836\") " pod="openshift-marketplace/community-operators-7dgkn" Feb 19 14:08:54 crc kubenswrapper[4861]: I0219 
14:08:54.392361 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd239ff4-4555-406a-99ea-9cfd4720d836-catalog-content\") pod \"community-operators-7dgkn\" (UID: \"fd239ff4-4555-406a-99ea-9cfd4720d836\") " pod="openshift-marketplace/community-operators-7dgkn" Feb 19 14:08:54 crc kubenswrapper[4861]: I0219 14:08:54.475207 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6qcn4"] Feb 19 14:08:54 crc kubenswrapper[4861]: I0219 14:08:54.477341 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6qcn4" Feb 19 14:08:54 crc kubenswrapper[4861]: I0219 14:08:54.491652 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qcn4"] Feb 19 14:08:54 crc kubenswrapper[4861]: I0219 14:08:54.495208 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f14218c-279f-49dc-8759-9b538e679679-catalog-content\") pod \"redhat-marketplace-6qcn4\" (UID: \"1f14218c-279f-49dc-8759-9b538e679679\") " pod="openshift-marketplace/redhat-marketplace-6qcn4" Feb 19 14:08:54 crc kubenswrapper[4861]: I0219 14:08:54.495261 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4kcl\" (UniqueName: \"kubernetes.io/projected/fd239ff4-4555-406a-99ea-9cfd4720d836-kube-api-access-g4kcl\") pod \"community-operators-7dgkn\" (UID: \"fd239ff4-4555-406a-99ea-9cfd4720d836\") " pod="openshift-marketplace/community-operators-7dgkn" Feb 19 14:08:54 crc kubenswrapper[4861]: I0219 14:08:54.495289 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-787h5\" (UniqueName: 
\"kubernetes.io/projected/1f14218c-279f-49dc-8759-9b538e679679-kube-api-access-787h5\") pod \"redhat-marketplace-6qcn4\" (UID: \"1f14218c-279f-49dc-8759-9b538e679679\") " pod="openshift-marketplace/redhat-marketplace-6qcn4" Feb 19 14:08:54 crc kubenswrapper[4861]: I0219 14:08:54.495338 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f14218c-279f-49dc-8759-9b538e679679-utilities\") pod \"redhat-marketplace-6qcn4\" (UID: \"1f14218c-279f-49dc-8759-9b538e679679\") " pod="openshift-marketplace/redhat-marketplace-6qcn4" Feb 19 14:08:54 crc kubenswrapper[4861]: I0219 14:08:54.495519 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd239ff4-4555-406a-99ea-9cfd4720d836-catalog-content\") pod \"community-operators-7dgkn\" (UID: \"fd239ff4-4555-406a-99ea-9cfd4720d836\") " pod="openshift-marketplace/community-operators-7dgkn" Feb 19 14:08:54 crc kubenswrapper[4861]: I0219 14:08:54.495671 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd239ff4-4555-406a-99ea-9cfd4720d836-utilities\") pod \"community-operators-7dgkn\" (UID: \"fd239ff4-4555-406a-99ea-9cfd4720d836\") " pod="openshift-marketplace/community-operators-7dgkn" Feb 19 14:08:54 crc kubenswrapper[4861]: I0219 14:08:54.496346 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd239ff4-4555-406a-99ea-9cfd4720d836-utilities\") pod \"community-operators-7dgkn\" (UID: \"fd239ff4-4555-406a-99ea-9cfd4720d836\") " pod="openshift-marketplace/community-operators-7dgkn" Feb 19 14:08:54 crc kubenswrapper[4861]: I0219 14:08:54.497719 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/fd239ff4-4555-406a-99ea-9cfd4720d836-catalog-content\") pod \"community-operators-7dgkn\" (UID: \"fd239ff4-4555-406a-99ea-9cfd4720d836\") " pod="openshift-marketplace/community-operators-7dgkn" Feb 19 14:08:54 crc kubenswrapper[4861]: I0219 14:08:54.536413 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4kcl\" (UniqueName: \"kubernetes.io/projected/fd239ff4-4555-406a-99ea-9cfd4720d836-kube-api-access-g4kcl\") pod \"community-operators-7dgkn\" (UID: \"fd239ff4-4555-406a-99ea-9cfd4720d836\") " pod="openshift-marketplace/community-operators-7dgkn" Feb 19 14:08:54 crc kubenswrapper[4861]: I0219 14:08:54.597407 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f14218c-279f-49dc-8759-9b538e679679-catalog-content\") pod \"redhat-marketplace-6qcn4\" (UID: \"1f14218c-279f-49dc-8759-9b538e679679\") " pod="openshift-marketplace/redhat-marketplace-6qcn4" Feb 19 14:08:54 crc kubenswrapper[4861]: I0219 14:08:54.597487 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-787h5\" (UniqueName: \"kubernetes.io/projected/1f14218c-279f-49dc-8759-9b538e679679-kube-api-access-787h5\") pod \"redhat-marketplace-6qcn4\" (UID: \"1f14218c-279f-49dc-8759-9b538e679679\") " pod="openshift-marketplace/redhat-marketplace-6qcn4" Feb 19 14:08:54 crc kubenswrapper[4861]: I0219 14:08:54.597527 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f14218c-279f-49dc-8759-9b538e679679-utilities\") pod \"redhat-marketplace-6qcn4\" (UID: \"1f14218c-279f-49dc-8759-9b538e679679\") " pod="openshift-marketplace/redhat-marketplace-6qcn4" Feb 19 14:08:54 crc kubenswrapper[4861]: I0219 14:08:54.598077 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1f14218c-279f-49dc-8759-9b538e679679-catalog-content\") pod \"redhat-marketplace-6qcn4\" (UID: \"1f14218c-279f-49dc-8759-9b538e679679\") " pod="openshift-marketplace/redhat-marketplace-6qcn4" Feb 19 14:08:54 crc kubenswrapper[4861]: I0219 14:08:54.598097 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f14218c-279f-49dc-8759-9b538e679679-utilities\") pod \"redhat-marketplace-6qcn4\" (UID: \"1f14218c-279f-49dc-8759-9b538e679679\") " pod="openshift-marketplace/redhat-marketplace-6qcn4" Feb 19 14:08:54 crc kubenswrapper[4861]: I0219 14:08:54.621806 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-787h5\" (UniqueName: \"kubernetes.io/projected/1f14218c-279f-49dc-8759-9b538e679679-kube-api-access-787h5\") pod \"redhat-marketplace-6qcn4\" (UID: \"1f14218c-279f-49dc-8759-9b538e679679\") " pod="openshift-marketplace/redhat-marketplace-6qcn4" Feb 19 14:08:54 crc kubenswrapper[4861]: I0219 14:08:54.636508 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7dgkn" Feb 19 14:08:54 crc kubenswrapper[4861]: I0219 14:08:54.847989 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6qcn4" Feb 19 14:08:55 crc kubenswrapper[4861]: I0219 14:08:55.210218 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7dgkn"] Feb 19 14:08:55 crc kubenswrapper[4861]: I0219 14:08:55.302663 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qcn4"] Feb 19 14:08:55 crc kubenswrapper[4861]: W0219 14:08:55.307442 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f14218c_279f_49dc_8759_9b538e679679.slice/crio-6460a14e098be91bd97f4406e44c8556b94218f51ae940086cbd44ab96723e0e WatchSource:0}: Error finding container 6460a14e098be91bd97f4406e44c8556b94218f51ae940086cbd44ab96723e0e: Status 404 returned error can't find the container with id 6460a14e098be91bd97f4406e44c8556b94218f51ae940086cbd44ab96723e0e Feb 19 14:08:55 crc kubenswrapper[4861]: I0219 14:08:55.513543 4861 generic.go:334] "Generic (PLEG): container finished" podID="fd239ff4-4555-406a-99ea-9cfd4720d836" containerID="7cb45334575c429441d77dd0c326dc522e640433c95b0a5082b20bae15eb3d99" exitCode=0 Feb 19 14:08:55 crc kubenswrapper[4861]: I0219 14:08:55.513641 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dgkn" event={"ID":"fd239ff4-4555-406a-99ea-9cfd4720d836","Type":"ContainerDied","Data":"7cb45334575c429441d77dd0c326dc522e640433c95b0a5082b20bae15eb3d99"} Feb 19 14:08:55 crc kubenswrapper[4861]: I0219 14:08:55.513675 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dgkn" event={"ID":"fd239ff4-4555-406a-99ea-9cfd4720d836","Type":"ContainerStarted","Data":"3bb90f205d454df9b61c04529bf92dae109509325f1d335d21a8ebe314954a6f"} Feb 19 14:08:55 crc kubenswrapper[4861]: I0219 14:08:55.515273 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-6qcn4" event={"ID":"1f14218c-279f-49dc-8759-9b538e679679","Type":"ContainerStarted","Data":"6460a14e098be91bd97f4406e44c8556b94218f51ae940086cbd44ab96723e0e"} Feb 19 14:08:56 crc kubenswrapper[4861]: I0219 14:08:56.525661 4861 generic.go:334] "Generic (PLEG): container finished" podID="1f14218c-279f-49dc-8759-9b538e679679" containerID="2c51e1d94ffcaff53ce49628d2dc86d76781f7027c9d6209808d59f0b808041e" exitCode=0 Feb 19 14:08:56 crc kubenswrapper[4861]: I0219 14:08:56.525788 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qcn4" event={"ID":"1f14218c-279f-49dc-8759-9b538e679679","Type":"ContainerDied","Data":"2c51e1d94ffcaff53ce49628d2dc86d76781f7027c9d6209808d59f0b808041e"} Feb 19 14:08:56 crc kubenswrapper[4861]: I0219 14:08:56.529263 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dgkn" event={"ID":"fd239ff4-4555-406a-99ea-9cfd4720d836","Type":"ContainerStarted","Data":"4cfde532b6be6c00a97784769a960f804f5f676bf20d0992a7583565d0b77a76"} Feb 19 14:08:57 crc kubenswrapper[4861]: I0219 14:08:57.539927 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qcn4" event={"ID":"1f14218c-279f-49dc-8759-9b538e679679","Type":"ContainerStarted","Data":"8120bd4aa33797593c8ff77ecf17fa45b3e7eeabcadf8d01755a1938f46331de"} Feb 19 14:08:57 crc kubenswrapper[4861]: I0219 14:08:57.542999 4861 generic.go:334] "Generic (PLEG): container finished" podID="fd239ff4-4555-406a-99ea-9cfd4720d836" containerID="4cfde532b6be6c00a97784769a960f804f5f676bf20d0992a7583565d0b77a76" exitCode=0 Feb 19 14:08:57 crc kubenswrapper[4861]: I0219 14:08:57.543046 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dgkn" 
event={"ID":"fd239ff4-4555-406a-99ea-9cfd4720d836","Type":"ContainerDied","Data":"4cfde532b6be6c00a97784769a960f804f5f676bf20d0992a7583565d0b77a76"} Feb 19 14:08:58 crc kubenswrapper[4861]: I0219 14:08:58.569847 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dgkn" event={"ID":"fd239ff4-4555-406a-99ea-9cfd4720d836","Type":"ContainerStarted","Data":"2e056ff57dad171c5777ba83d57db53d41ac60da636c27a75046dddf540ce7b7"} Feb 19 14:08:58 crc kubenswrapper[4861]: I0219 14:08:58.576496 4861 generic.go:334] "Generic (PLEG): container finished" podID="1f14218c-279f-49dc-8759-9b538e679679" containerID="8120bd4aa33797593c8ff77ecf17fa45b3e7eeabcadf8d01755a1938f46331de" exitCode=0 Feb 19 14:08:58 crc kubenswrapper[4861]: I0219 14:08:58.576558 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qcn4" event={"ID":"1f14218c-279f-49dc-8759-9b538e679679","Type":"ContainerDied","Data":"8120bd4aa33797593c8ff77ecf17fa45b3e7eeabcadf8d01755a1938f46331de"} Feb 19 14:08:58 crc kubenswrapper[4861]: I0219 14:08:58.606672 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7dgkn" podStartSLOduration=2.210102189 podStartE2EDuration="4.606643482s" podCreationTimestamp="2026-02-19 14:08:54 +0000 UTC" firstStartedPulling="2026-02-19 14:08:55.51593772 +0000 UTC m=+3550.177040948" lastFinishedPulling="2026-02-19 14:08:57.912478973 +0000 UTC m=+3552.573582241" observedRunningTime="2026-02-19 14:08:58.598152353 +0000 UTC m=+3553.259255651" watchObservedRunningTime="2026-02-19 14:08:58.606643482 +0000 UTC m=+3553.267746750" Feb 19 14:08:59 crc kubenswrapper[4861]: I0219 14:08:59.585302 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qcn4" 
event={"ID":"1f14218c-279f-49dc-8759-9b538e679679","Type":"ContainerStarted","Data":"cba4b328c9312dc60a370adbf1345f5e062b332f7fb8a796a8d6b97699dbabdf"} Feb 19 14:08:59 crc kubenswrapper[4861]: I0219 14:08:59.606001 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6qcn4" podStartSLOduration=3.130682095 podStartE2EDuration="5.605985896s" podCreationTimestamp="2026-02-19 14:08:54 +0000 UTC" firstStartedPulling="2026-02-19 14:08:56.527265406 +0000 UTC m=+3551.188368674" lastFinishedPulling="2026-02-19 14:08:59.002569247 +0000 UTC m=+3553.663672475" observedRunningTime="2026-02-19 14:08:59.601985028 +0000 UTC m=+3554.263088256" watchObservedRunningTime="2026-02-19 14:08:59.605985896 +0000 UTC m=+3554.267089124" Feb 19 14:09:04 crc kubenswrapper[4861]: I0219 14:09:04.637005 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7dgkn" Feb 19 14:09:04 crc kubenswrapper[4861]: I0219 14:09:04.637549 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7dgkn" Feb 19 14:09:04 crc kubenswrapper[4861]: I0219 14:09:04.715941 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7dgkn" Feb 19 14:09:04 crc kubenswrapper[4861]: I0219 14:09:04.849876 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6qcn4" Feb 19 14:09:04 crc kubenswrapper[4861]: I0219 14:09:04.849957 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6qcn4" Feb 19 14:09:04 crc kubenswrapper[4861]: I0219 14:09:04.926413 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6qcn4" Feb 19 14:09:05 crc kubenswrapper[4861]: I0219 14:09:05.681802 4861 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6qcn4" Feb 19 14:09:05 crc kubenswrapper[4861]: I0219 14:09:05.715773 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7dgkn" Feb 19 14:09:06 crc kubenswrapper[4861]: I0219 14:09:06.162648 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qcn4"] Feb 19 14:09:07 crc kubenswrapper[4861]: I0219 14:09:07.651367 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6qcn4" podUID="1f14218c-279f-49dc-8759-9b538e679679" containerName="registry-server" containerID="cri-o://cba4b328c9312dc60a370adbf1345f5e062b332f7fb8a796a8d6b97699dbabdf" gracePeriod=2 Feb 19 14:09:07 crc kubenswrapper[4861]: I0219 14:09:07.964558 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7dgkn"] Feb 19 14:09:07 crc kubenswrapper[4861]: I0219 14:09:07.965116 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7dgkn" podUID="fd239ff4-4555-406a-99ea-9cfd4720d836" containerName="registry-server" containerID="cri-o://2e056ff57dad171c5777ba83d57db53d41ac60da636c27a75046dddf540ce7b7" gracePeriod=2 Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.156208 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6qcn4" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.272096 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f14218c-279f-49dc-8759-9b538e679679-catalog-content\") pod \"1f14218c-279f-49dc-8759-9b538e679679\" (UID: \"1f14218c-279f-49dc-8759-9b538e679679\") " Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.272392 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-787h5\" (UniqueName: \"kubernetes.io/projected/1f14218c-279f-49dc-8759-9b538e679679-kube-api-access-787h5\") pod \"1f14218c-279f-49dc-8759-9b538e679679\" (UID: \"1f14218c-279f-49dc-8759-9b538e679679\") " Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.272473 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f14218c-279f-49dc-8759-9b538e679679-utilities\") pod \"1f14218c-279f-49dc-8759-9b538e679679\" (UID: \"1f14218c-279f-49dc-8759-9b538e679679\") " Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.273410 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f14218c-279f-49dc-8759-9b538e679679-utilities" (OuterVolumeSpecName: "utilities") pod "1f14218c-279f-49dc-8759-9b538e679679" (UID: "1f14218c-279f-49dc-8759-9b538e679679"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.289265 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f14218c-279f-49dc-8759-9b538e679679-kube-api-access-787h5" (OuterVolumeSpecName: "kube-api-access-787h5") pod "1f14218c-279f-49dc-8759-9b538e679679" (UID: "1f14218c-279f-49dc-8759-9b538e679679"). InnerVolumeSpecName "kube-api-access-787h5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.309313 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f14218c-279f-49dc-8759-9b538e679679-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f14218c-279f-49dc-8759-9b538e679679" (UID: "1f14218c-279f-49dc-8759-9b538e679679"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.321965 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7dgkn" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.374041 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f14218c-279f-49dc-8759-9b538e679679-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.374091 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-787h5\" (UniqueName: \"kubernetes.io/projected/1f14218c-279f-49dc-8759-9b538e679679-kube-api-access-787h5\") on node \"crc\" DevicePath \"\"" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.374108 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f14218c-279f-49dc-8759-9b538e679679-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.475245 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd239ff4-4555-406a-99ea-9cfd4720d836-utilities\") pod \"fd239ff4-4555-406a-99ea-9cfd4720d836\" (UID: \"fd239ff4-4555-406a-99ea-9cfd4720d836\") " Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.475377 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd239ff4-4555-406a-99ea-9cfd4720d836-catalog-content\") pod \"fd239ff4-4555-406a-99ea-9cfd4720d836\" (UID: \"fd239ff4-4555-406a-99ea-9cfd4720d836\") " Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.475445 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4kcl\" (UniqueName: \"kubernetes.io/projected/fd239ff4-4555-406a-99ea-9cfd4720d836-kube-api-access-g4kcl\") pod \"fd239ff4-4555-406a-99ea-9cfd4720d836\" (UID: \"fd239ff4-4555-406a-99ea-9cfd4720d836\") " Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.477874 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd239ff4-4555-406a-99ea-9cfd4720d836-utilities" (OuterVolumeSpecName: "utilities") pod "fd239ff4-4555-406a-99ea-9cfd4720d836" (UID: "fd239ff4-4555-406a-99ea-9cfd4720d836"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.481073 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd239ff4-4555-406a-99ea-9cfd4720d836-kube-api-access-g4kcl" (OuterVolumeSpecName: "kube-api-access-g4kcl") pod "fd239ff4-4555-406a-99ea-9cfd4720d836" (UID: "fd239ff4-4555-406a-99ea-9cfd4720d836"). InnerVolumeSpecName "kube-api-access-g4kcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.559888 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd239ff4-4555-406a-99ea-9cfd4720d836-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd239ff4-4555-406a-99ea-9cfd4720d836" (UID: "fd239ff4-4555-406a-99ea-9cfd4720d836"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.576616 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd239ff4-4555-406a-99ea-9cfd4720d836-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.576660 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd239ff4-4555-406a-99ea-9cfd4720d836-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.576679 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4kcl\" (UniqueName: \"kubernetes.io/projected/fd239ff4-4555-406a-99ea-9cfd4720d836-kube-api-access-g4kcl\") on node \"crc\" DevicePath \"\"" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.662538 4861 generic.go:334] "Generic (PLEG): container finished" podID="fd239ff4-4555-406a-99ea-9cfd4720d836" containerID="2e056ff57dad171c5777ba83d57db53d41ac60da636c27a75046dddf540ce7b7" exitCode=0 Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.662601 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dgkn" event={"ID":"fd239ff4-4555-406a-99ea-9cfd4720d836","Type":"ContainerDied","Data":"2e056ff57dad171c5777ba83d57db53d41ac60da636c27a75046dddf540ce7b7"} Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.662653 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7dgkn" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.662690 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dgkn" event={"ID":"fd239ff4-4555-406a-99ea-9cfd4720d836","Type":"ContainerDied","Data":"3bb90f205d454df9b61c04529bf92dae109509325f1d335d21a8ebe314954a6f"} Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.662723 4861 scope.go:117] "RemoveContainer" containerID="2e056ff57dad171c5777ba83d57db53d41ac60da636c27a75046dddf540ce7b7" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.666874 4861 generic.go:334] "Generic (PLEG): container finished" podID="1f14218c-279f-49dc-8759-9b538e679679" containerID="cba4b328c9312dc60a370adbf1345f5e062b332f7fb8a796a8d6b97699dbabdf" exitCode=0 Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.666926 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qcn4" event={"ID":"1f14218c-279f-49dc-8759-9b538e679679","Type":"ContainerDied","Data":"cba4b328c9312dc60a370adbf1345f5e062b332f7fb8a796a8d6b97699dbabdf"} Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.666966 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qcn4" event={"ID":"1f14218c-279f-49dc-8759-9b538e679679","Type":"ContainerDied","Data":"6460a14e098be91bd97f4406e44c8556b94218f51ae940086cbd44ab96723e0e"} Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.667037 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6qcn4" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.689902 4861 scope.go:117] "RemoveContainer" containerID="4cfde532b6be6c00a97784769a960f804f5f676bf20d0992a7583565d0b77a76" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.738302 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7dgkn"] Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.740370 4861 scope.go:117] "RemoveContainer" containerID="7cb45334575c429441d77dd0c326dc522e640433c95b0a5082b20bae15eb3d99" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.745919 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7dgkn"] Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.754052 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qcn4"] Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.761827 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qcn4"] Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.763629 4861 scope.go:117] "RemoveContainer" containerID="2e056ff57dad171c5777ba83d57db53d41ac60da636c27a75046dddf540ce7b7" Feb 19 14:09:08 crc kubenswrapper[4861]: E0219 14:09:08.764155 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e056ff57dad171c5777ba83d57db53d41ac60da636c27a75046dddf540ce7b7\": container with ID starting with 2e056ff57dad171c5777ba83d57db53d41ac60da636c27a75046dddf540ce7b7 not found: ID does not exist" containerID="2e056ff57dad171c5777ba83d57db53d41ac60da636c27a75046dddf540ce7b7" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.764200 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2e056ff57dad171c5777ba83d57db53d41ac60da636c27a75046dddf540ce7b7"} err="failed to get container status \"2e056ff57dad171c5777ba83d57db53d41ac60da636c27a75046dddf540ce7b7\": rpc error: code = NotFound desc = could not find container \"2e056ff57dad171c5777ba83d57db53d41ac60da636c27a75046dddf540ce7b7\": container with ID starting with 2e056ff57dad171c5777ba83d57db53d41ac60da636c27a75046dddf540ce7b7 not found: ID does not exist" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.764230 4861 scope.go:117] "RemoveContainer" containerID="4cfde532b6be6c00a97784769a960f804f5f676bf20d0992a7583565d0b77a76" Feb 19 14:09:08 crc kubenswrapper[4861]: E0219 14:09:08.764575 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cfde532b6be6c00a97784769a960f804f5f676bf20d0992a7583565d0b77a76\": container with ID starting with 4cfde532b6be6c00a97784769a960f804f5f676bf20d0992a7583565d0b77a76 not found: ID does not exist" containerID="4cfde532b6be6c00a97784769a960f804f5f676bf20d0992a7583565d0b77a76" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.764618 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cfde532b6be6c00a97784769a960f804f5f676bf20d0992a7583565d0b77a76"} err="failed to get container status \"4cfde532b6be6c00a97784769a960f804f5f676bf20d0992a7583565d0b77a76\": rpc error: code = NotFound desc = could not find container \"4cfde532b6be6c00a97784769a960f804f5f676bf20d0992a7583565d0b77a76\": container with ID starting with 4cfde532b6be6c00a97784769a960f804f5f676bf20d0992a7583565d0b77a76 not found: ID does not exist" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.764659 4861 scope.go:117] "RemoveContainer" containerID="7cb45334575c429441d77dd0c326dc522e640433c95b0a5082b20bae15eb3d99" Feb 19 14:09:08 crc kubenswrapper[4861]: E0219 14:09:08.764962 4861 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7cb45334575c429441d77dd0c326dc522e640433c95b0a5082b20bae15eb3d99\": container with ID starting with 7cb45334575c429441d77dd0c326dc522e640433c95b0a5082b20bae15eb3d99 not found: ID does not exist" containerID="7cb45334575c429441d77dd0c326dc522e640433c95b0a5082b20bae15eb3d99" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.764995 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cb45334575c429441d77dd0c326dc522e640433c95b0a5082b20bae15eb3d99"} err="failed to get container status \"7cb45334575c429441d77dd0c326dc522e640433c95b0a5082b20bae15eb3d99\": rpc error: code = NotFound desc = could not find container \"7cb45334575c429441d77dd0c326dc522e640433c95b0a5082b20bae15eb3d99\": container with ID starting with 7cb45334575c429441d77dd0c326dc522e640433c95b0a5082b20bae15eb3d99 not found: ID does not exist" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.765015 4861 scope.go:117] "RemoveContainer" containerID="cba4b328c9312dc60a370adbf1345f5e062b332f7fb8a796a8d6b97699dbabdf" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.820434 4861 scope.go:117] "RemoveContainer" containerID="8120bd4aa33797593c8ff77ecf17fa45b3e7eeabcadf8d01755a1938f46331de" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.843498 4861 scope.go:117] "RemoveContainer" containerID="2c51e1d94ffcaff53ce49628d2dc86d76781f7027c9d6209808d59f0b808041e" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.874359 4861 scope.go:117] "RemoveContainer" containerID="cba4b328c9312dc60a370adbf1345f5e062b332f7fb8a796a8d6b97699dbabdf" Feb 19 14:09:08 crc kubenswrapper[4861]: E0219 14:09:08.875096 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cba4b328c9312dc60a370adbf1345f5e062b332f7fb8a796a8d6b97699dbabdf\": container with ID starting with 
cba4b328c9312dc60a370adbf1345f5e062b332f7fb8a796a8d6b97699dbabdf not found: ID does not exist" containerID="cba4b328c9312dc60a370adbf1345f5e062b332f7fb8a796a8d6b97699dbabdf" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.875157 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cba4b328c9312dc60a370adbf1345f5e062b332f7fb8a796a8d6b97699dbabdf"} err="failed to get container status \"cba4b328c9312dc60a370adbf1345f5e062b332f7fb8a796a8d6b97699dbabdf\": rpc error: code = NotFound desc = could not find container \"cba4b328c9312dc60a370adbf1345f5e062b332f7fb8a796a8d6b97699dbabdf\": container with ID starting with cba4b328c9312dc60a370adbf1345f5e062b332f7fb8a796a8d6b97699dbabdf not found: ID does not exist" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.875196 4861 scope.go:117] "RemoveContainer" containerID="8120bd4aa33797593c8ff77ecf17fa45b3e7eeabcadf8d01755a1938f46331de" Feb 19 14:09:08 crc kubenswrapper[4861]: E0219 14:09:08.875685 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8120bd4aa33797593c8ff77ecf17fa45b3e7eeabcadf8d01755a1938f46331de\": container with ID starting with 8120bd4aa33797593c8ff77ecf17fa45b3e7eeabcadf8d01755a1938f46331de not found: ID does not exist" containerID="8120bd4aa33797593c8ff77ecf17fa45b3e7eeabcadf8d01755a1938f46331de" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.875709 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8120bd4aa33797593c8ff77ecf17fa45b3e7eeabcadf8d01755a1938f46331de"} err="failed to get container status \"8120bd4aa33797593c8ff77ecf17fa45b3e7eeabcadf8d01755a1938f46331de\": rpc error: code = NotFound desc = could not find container \"8120bd4aa33797593c8ff77ecf17fa45b3e7eeabcadf8d01755a1938f46331de\": container with ID starting with 8120bd4aa33797593c8ff77ecf17fa45b3e7eeabcadf8d01755a1938f46331de not found: ID does not 
exist" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.875726 4861 scope.go:117] "RemoveContainer" containerID="2c51e1d94ffcaff53ce49628d2dc86d76781f7027c9d6209808d59f0b808041e" Feb 19 14:09:08 crc kubenswrapper[4861]: E0219 14:09:08.877532 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c51e1d94ffcaff53ce49628d2dc86d76781f7027c9d6209808d59f0b808041e\": container with ID starting with 2c51e1d94ffcaff53ce49628d2dc86d76781f7027c9d6209808d59f0b808041e not found: ID does not exist" containerID="2c51e1d94ffcaff53ce49628d2dc86d76781f7027c9d6209808d59f0b808041e" Feb 19 14:09:08 crc kubenswrapper[4861]: I0219 14:09:08.877558 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c51e1d94ffcaff53ce49628d2dc86d76781f7027c9d6209808d59f0b808041e"} err="failed to get container status \"2c51e1d94ffcaff53ce49628d2dc86d76781f7027c9d6209808d59f0b808041e\": rpc error: code = NotFound desc = could not find container \"2c51e1d94ffcaff53ce49628d2dc86d76781f7027c9d6209808d59f0b808041e\": container with ID starting with 2c51e1d94ffcaff53ce49628d2dc86d76781f7027c9d6209808d59f0b808041e not found: ID does not exist" Feb 19 14:09:09 crc kubenswrapper[4861]: I0219 14:09:09.994515 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f14218c-279f-49dc-8759-9b538e679679" path="/var/lib/kubelet/pods/1f14218c-279f-49dc-8759-9b538e679679/volumes" Feb 19 14:09:09 crc kubenswrapper[4861]: I0219 14:09:09.995960 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd239ff4-4555-406a-99ea-9cfd4720d836" path="/var/lib/kubelet/pods/fd239ff4-4555-406a-99ea-9cfd4720d836/volumes" Feb 19 14:09:33 crc kubenswrapper[4861]: I0219 14:09:33.835698 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 14:09:33 crc kubenswrapper[4861]: I0219 14:09:33.836592 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 14:10:03 crc kubenswrapper[4861]: I0219 14:10:03.834740 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 14:10:03 crc kubenswrapper[4861]: I0219 14:10:03.836832 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 14:10:33 crc kubenswrapper[4861]: I0219 14:10:33.834648 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 14:10:33 crc kubenswrapper[4861]: I0219 14:10:33.835269 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 
14:10:33 crc kubenswrapper[4861]: I0219 14:10:33.835353 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 14:10:33 crc kubenswrapper[4861]: I0219 14:10:33.836372 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b618bc46fa12701f1809fdf82a23eedbd2deae191f3a45e0501eba1fbb1d21f8"} pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 14:10:33 crc kubenswrapper[4861]: I0219 14:10:33.836512 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" containerID="cri-o://b618bc46fa12701f1809fdf82a23eedbd2deae191f3a45e0501eba1fbb1d21f8" gracePeriod=600 Feb 19 14:10:34 crc kubenswrapper[4861]: I0219 14:10:34.497505 4861 generic.go:334] "Generic (PLEG): container finished" podID="478e6971-05ac-43f2-99a2-cd93644c6227" containerID="b618bc46fa12701f1809fdf82a23eedbd2deae191f3a45e0501eba1fbb1d21f8" exitCode=0 Feb 19 14:10:34 crc kubenswrapper[4861]: I0219 14:10:34.497708 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerDied","Data":"b618bc46fa12701f1809fdf82a23eedbd2deae191f3a45e0501eba1fbb1d21f8"} Feb 19 14:10:34 crc kubenswrapper[4861]: I0219 14:10:34.497915 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"5e60c96bce15e6cbba95a6ae6d61098e78959c344dddfef7ece4398885f4cb67"} Feb 19 14:10:34 crc kubenswrapper[4861]: 
I0219 14:10:34.497941 4861 scope.go:117] "RemoveContainer" containerID="97b856abb408928b70bba31b7c25f45b803def7d251c0abdbe927a71082279fd" Feb 19 14:12:36 crc kubenswrapper[4861]: I0219 14:12:36.094218 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6xnxn"] Feb 19 14:12:36 crc kubenswrapper[4861]: E0219 14:12:36.095289 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd239ff4-4555-406a-99ea-9cfd4720d836" containerName="extract-utilities" Feb 19 14:12:36 crc kubenswrapper[4861]: I0219 14:12:36.095310 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd239ff4-4555-406a-99ea-9cfd4720d836" containerName="extract-utilities" Feb 19 14:12:36 crc kubenswrapper[4861]: E0219 14:12:36.095329 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd239ff4-4555-406a-99ea-9cfd4720d836" containerName="registry-server" Feb 19 14:12:36 crc kubenswrapper[4861]: I0219 14:12:36.095344 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd239ff4-4555-406a-99ea-9cfd4720d836" containerName="registry-server" Feb 19 14:12:36 crc kubenswrapper[4861]: E0219 14:12:36.095361 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f14218c-279f-49dc-8759-9b538e679679" containerName="extract-utilities" Feb 19 14:12:36 crc kubenswrapper[4861]: I0219 14:12:36.095373 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f14218c-279f-49dc-8759-9b538e679679" containerName="extract-utilities" Feb 19 14:12:36 crc kubenswrapper[4861]: E0219 14:12:36.095396 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f14218c-279f-49dc-8759-9b538e679679" containerName="extract-content" Feb 19 14:12:36 crc kubenswrapper[4861]: I0219 14:12:36.095407 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f14218c-279f-49dc-8759-9b538e679679" containerName="extract-content" Feb 19 14:12:36 crc kubenswrapper[4861]: E0219 14:12:36.095463 4861 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1f14218c-279f-49dc-8759-9b538e679679" containerName="registry-server" Feb 19 14:12:36 crc kubenswrapper[4861]: I0219 14:12:36.095476 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f14218c-279f-49dc-8759-9b538e679679" containerName="registry-server" Feb 19 14:12:36 crc kubenswrapper[4861]: E0219 14:12:36.095495 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd239ff4-4555-406a-99ea-9cfd4720d836" containerName="extract-content" Feb 19 14:12:36 crc kubenswrapper[4861]: I0219 14:12:36.095506 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd239ff4-4555-406a-99ea-9cfd4720d836" containerName="extract-content" Feb 19 14:12:36 crc kubenswrapper[4861]: I0219 14:12:36.095747 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd239ff4-4555-406a-99ea-9cfd4720d836" containerName="registry-server" Feb 19 14:12:36 crc kubenswrapper[4861]: I0219 14:12:36.095772 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f14218c-279f-49dc-8759-9b538e679679" containerName="registry-server" Feb 19 14:12:36 crc kubenswrapper[4861]: I0219 14:12:36.097410 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6xnxn" Feb 19 14:12:36 crc kubenswrapper[4861]: I0219 14:12:36.111512 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6xnxn"] Feb 19 14:12:36 crc kubenswrapper[4861]: I0219 14:12:36.244807 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73daf0e3-448f-4d17-a249-fb300e0a163e-utilities\") pod \"certified-operators-6xnxn\" (UID: \"73daf0e3-448f-4d17-a249-fb300e0a163e\") " pod="openshift-marketplace/certified-operators-6xnxn" Feb 19 14:12:36 crc kubenswrapper[4861]: I0219 14:12:36.244876 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2xvp\" (UniqueName: \"kubernetes.io/projected/73daf0e3-448f-4d17-a249-fb300e0a163e-kube-api-access-b2xvp\") pod \"certified-operators-6xnxn\" (UID: \"73daf0e3-448f-4d17-a249-fb300e0a163e\") " pod="openshift-marketplace/certified-operators-6xnxn" Feb 19 14:12:36 crc kubenswrapper[4861]: I0219 14:12:36.244937 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73daf0e3-448f-4d17-a249-fb300e0a163e-catalog-content\") pod \"certified-operators-6xnxn\" (UID: \"73daf0e3-448f-4d17-a249-fb300e0a163e\") " pod="openshift-marketplace/certified-operators-6xnxn" Feb 19 14:12:36 crc kubenswrapper[4861]: I0219 14:12:36.346080 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73daf0e3-448f-4d17-a249-fb300e0a163e-catalog-content\") pod \"certified-operators-6xnxn\" (UID: \"73daf0e3-448f-4d17-a249-fb300e0a163e\") " pod="openshift-marketplace/certified-operators-6xnxn" Feb 19 14:12:36 crc kubenswrapper[4861]: I0219 14:12:36.346522 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73daf0e3-448f-4d17-a249-fb300e0a163e-utilities\") pod \"certified-operators-6xnxn\" (UID: \"73daf0e3-448f-4d17-a249-fb300e0a163e\") " pod="openshift-marketplace/certified-operators-6xnxn" Feb 19 14:12:36 crc kubenswrapper[4861]: I0219 14:12:36.346675 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2xvp\" (UniqueName: \"kubernetes.io/projected/73daf0e3-448f-4d17-a249-fb300e0a163e-kube-api-access-b2xvp\") pod \"certified-operators-6xnxn\" (UID: \"73daf0e3-448f-4d17-a249-fb300e0a163e\") " pod="openshift-marketplace/certified-operators-6xnxn" Feb 19 14:12:36 crc kubenswrapper[4861]: I0219 14:12:36.347154 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73daf0e3-448f-4d17-a249-fb300e0a163e-catalog-content\") pod \"certified-operators-6xnxn\" (UID: \"73daf0e3-448f-4d17-a249-fb300e0a163e\") " pod="openshift-marketplace/certified-operators-6xnxn" Feb 19 14:12:36 crc kubenswrapper[4861]: I0219 14:12:36.347186 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73daf0e3-448f-4d17-a249-fb300e0a163e-utilities\") pod \"certified-operators-6xnxn\" (UID: \"73daf0e3-448f-4d17-a249-fb300e0a163e\") " pod="openshift-marketplace/certified-operators-6xnxn" Feb 19 14:12:36 crc kubenswrapper[4861]: I0219 14:12:36.378725 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2xvp\" (UniqueName: \"kubernetes.io/projected/73daf0e3-448f-4d17-a249-fb300e0a163e-kube-api-access-b2xvp\") pod \"certified-operators-6xnxn\" (UID: \"73daf0e3-448f-4d17-a249-fb300e0a163e\") " pod="openshift-marketplace/certified-operators-6xnxn" Feb 19 14:12:36 crc kubenswrapper[4861]: I0219 14:12:36.430782 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6xnxn" Feb 19 14:12:36 crc kubenswrapper[4861]: I0219 14:12:36.989714 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6xnxn"] Feb 19 14:12:37 crc kubenswrapper[4861]: I0219 14:12:37.952337 4861 generic.go:334] "Generic (PLEG): container finished" podID="73daf0e3-448f-4d17-a249-fb300e0a163e" containerID="92a11c7f78d089d1c68bf629bd2c72de4c577dbb095185580e35aa5be7c983b8" exitCode=0 Feb 19 14:12:37 crc kubenswrapper[4861]: I0219 14:12:37.952495 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xnxn" event={"ID":"73daf0e3-448f-4d17-a249-fb300e0a163e","Type":"ContainerDied","Data":"92a11c7f78d089d1c68bf629bd2c72de4c577dbb095185580e35aa5be7c983b8"} Feb 19 14:12:37 crc kubenswrapper[4861]: I0219 14:12:37.952766 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xnxn" event={"ID":"73daf0e3-448f-4d17-a249-fb300e0a163e","Type":"ContainerStarted","Data":"7418ccdc161bff586173693d2d7d733eb50de8ced55dcd91b69c53c6a047c2d8"} Feb 19 14:12:37 crc kubenswrapper[4861]: I0219 14:12:37.955300 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 14:12:39 crc kubenswrapper[4861]: I0219 14:12:39.971648 4861 generic.go:334] "Generic (PLEG): container finished" podID="73daf0e3-448f-4d17-a249-fb300e0a163e" containerID="c8ad24d4f4e395b4d690bd0756e6136eae6d2726da03172e0a043d025bf2e656" exitCode=0 Feb 19 14:12:39 crc kubenswrapper[4861]: I0219 14:12:39.971733 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xnxn" event={"ID":"73daf0e3-448f-4d17-a249-fb300e0a163e","Type":"ContainerDied","Data":"c8ad24d4f4e395b4d690bd0756e6136eae6d2726da03172e0a043d025bf2e656"} Feb 19 14:12:40 crc kubenswrapper[4861]: I0219 14:12:40.988130 4861 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-6xnxn" event={"ID":"73daf0e3-448f-4d17-a249-fb300e0a163e","Type":"ContainerStarted","Data":"4a9be13bce56763ea42a5d0e4300187d7199e7d3d3df70385ab621a438a6f79c"} Feb 19 14:12:41 crc kubenswrapper[4861]: I0219 14:12:41.016706 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6xnxn" podStartSLOduration=2.610267518 podStartE2EDuration="5.016684453s" podCreationTimestamp="2026-02-19 14:12:36 +0000 UTC" firstStartedPulling="2026-02-19 14:12:37.954892567 +0000 UTC m=+3772.615995835" lastFinishedPulling="2026-02-19 14:12:40.361309512 +0000 UTC m=+3775.022412770" observedRunningTime="2026-02-19 14:12:41.012343126 +0000 UTC m=+3775.673446394" watchObservedRunningTime="2026-02-19 14:12:41.016684453 +0000 UTC m=+3775.677787701" Feb 19 14:12:46 crc kubenswrapper[4861]: I0219 14:12:46.431231 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6xnxn" Feb 19 14:12:46 crc kubenswrapper[4861]: I0219 14:12:46.431869 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6xnxn" Feb 19 14:12:46 crc kubenswrapper[4861]: I0219 14:12:46.514832 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6xnxn" Feb 19 14:12:47 crc kubenswrapper[4861]: I0219 14:12:47.124645 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6xnxn" Feb 19 14:12:47 crc kubenswrapper[4861]: I0219 14:12:47.183645 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6xnxn"] Feb 19 14:12:49 crc kubenswrapper[4861]: I0219 14:12:49.056700 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6xnxn" 
podUID="73daf0e3-448f-4d17-a249-fb300e0a163e" containerName="registry-server" containerID="cri-o://4a9be13bce56763ea42a5d0e4300187d7199e7d3d3df70385ab621a438a6f79c" gracePeriod=2 Feb 19 14:12:49 crc kubenswrapper[4861]: I0219 14:12:49.538541 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6xnxn" Feb 19 14:12:49 crc kubenswrapper[4861]: I0219 14:12:49.681447 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73daf0e3-448f-4d17-a249-fb300e0a163e-utilities\") pod \"73daf0e3-448f-4d17-a249-fb300e0a163e\" (UID: \"73daf0e3-448f-4d17-a249-fb300e0a163e\") " Feb 19 14:12:49 crc kubenswrapper[4861]: I0219 14:12:49.681653 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73daf0e3-448f-4d17-a249-fb300e0a163e-catalog-content\") pod \"73daf0e3-448f-4d17-a249-fb300e0a163e\" (UID: \"73daf0e3-448f-4d17-a249-fb300e0a163e\") " Feb 19 14:12:49 crc kubenswrapper[4861]: I0219 14:12:49.681721 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2xvp\" (UniqueName: \"kubernetes.io/projected/73daf0e3-448f-4d17-a249-fb300e0a163e-kube-api-access-b2xvp\") pod \"73daf0e3-448f-4d17-a249-fb300e0a163e\" (UID: \"73daf0e3-448f-4d17-a249-fb300e0a163e\") " Feb 19 14:12:49 crc kubenswrapper[4861]: I0219 14:12:49.683088 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73daf0e3-448f-4d17-a249-fb300e0a163e-utilities" (OuterVolumeSpecName: "utilities") pod "73daf0e3-448f-4d17-a249-fb300e0a163e" (UID: "73daf0e3-448f-4d17-a249-fb300e0a163e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:12:49 crc kubenswrapper[4861]: I0219 14:12:49.689056 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73daf0e3-448f-4d17-a249-fb300e0a163e-kube-api-access-b2xvp" (OuterVolumeSpecName: "kube-api-access-b2xvp") pod "73daf0e3-448f-4d17-a249-fb300e0a163e" (UID: "73daf0e3-448f-4d17-a249-fb300e0a163e"). InnerVolumeSpecName "kube-api-access-b2xvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:12:49 crc kubenswrapper[4861]: I0219 14:12:49.783587 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2xvp\" (UniqueName: \"kubernetes.io/projected/73daf0e3-448f-4d17-a249-fb300e0a163e-kube-api-access-b2xvp\") on node \"crc\" DevicePath \"\"" Feb 19 14:12:49 crc kubenswrapper[4861]: I0219 14:12:49.783694 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73daf0e3-448f-4d17-a249-fb300e0a163e-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 14:12:49 crc kubenswrapper[4861]: I0219 14:12:49.976770 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73daf0e3-448f-4d17-a249-fb300e0a163e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73daf0e3-448f-4d17-a249-fb300e0a163e" (UID: "73daf0e3-448f-4d17-a249-fb300e0a163e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:12:49 crc kubenswrapper[4861]: I0219 14:12:49.987010 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73daf0e3-448f-4d17-a249-fb300e0a163e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 14:12:50 crc kubenswrapper[4861]: I0219 14:12:50.070664 4861 generic.go:334] "Generic (PLEG): container finished" podID="73daf0e3-448f-4d17-a249-fb300e0a163e" containerID="4a9be13bce56763ea42a5d0e4300187d7199e7d3d3df70385ab621a438a6f79c" exitCode=0 Feb 19 14:12:50 crc kubenswrapper[4861]: I0219 14:12:50.070727 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xnxn" event={"ID":"73daf0e3-448f-4d17-a249-fb300e0a163e","Type":"ContainerDied","Data":"4a9be13bce56763ea42a5d0e4300187d7199e7d3d3df70385ab621a438a6f79c"} Feb 19 14:12:50 crc kubenswrapper[4861]: I0219 14:12:50.070776 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xnxn" event={"ID":"73daf0e3-448f-4d17-a249-fb300e0a163e","Type":"ContainerDied","Data":"7418ccdc161bff586173693d2d7d733eb50de8ced55dcd91b69c53c6a047c2d8"} Feb 19 14:12:50 crc kubenswrapper[4861]: I0219 14:12:50.070806 4861 scope.go:117] "RemoveContainer" containerID="4a9be13bce56763ea42a5d0e4300187d7199e7d3d3df70385ab621a438a6f79c" Feb 19 14:12:50 crc kubenswrapper[4861]: I0219 14:12:50.072625 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6xnxn" Feb 19 14:12:50 crc kubenswrapper[4861]: I0219 14:12:50.105583 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6xnxn"] Feb 19 14:12:50 crc kubenswrapper[4861]: I0219 14:12:50.113530 4861 scope.go:117] "RemoveContainer" containerID="c8ad24d4f4e395b4d690bd0756e6136eae6d2726da03172e0a043d025bf2e656" Feb 19 14:12:50 crc kubenswrapper[4861]: I0219 14:12:50.133909 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6xnxn"] Feb 19 14:12:50 crc kubenswrapper[4861]: I0219 14:12:50.140022 4861 scope.go:117] "RemoveContainer" containerID="92a11c7f78d089d1c68bf629bd2c72de4c577dbb095185580e35aa5be7c983b8" Feb 19 14:12:50 crc kubenswrapper[4861]: I0219 14:12:50.176782 4861 scope.go:117] "RemoveContainer" containerID="4a9be13bce56763ea42a5d0e4300187d7199e7d3d3df70385ab621a438a6f79c" Feb 19 14:12:50 crc kubenswrapper[4861]: E0219 14:12:50.177417 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a9be13bce56763ea42a5d0e4300187d7199e7d3d3df70385ab621a438a6f79c\": container with ID starting with 4a9be13bce56763ea42a5d0e4300187d7199e7d3d3df70385ab621a438a6f79c not found: ID does not exist" containerID="4a9be13bce56763ea42a5d0e4300187d7199e7d3d3df70385ab621a438a6f79c" Feb 19 14:12:50 crc kubenswrapper[4861]: I0219 14:12:50.177479 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a9be13bce56763ea42a5d0e4300187d7199e7d3d3df70385ab621a438a6f79c"} err="failed to get container status \"4a9be13bce56763ea42a5d0e4300187d7199e7d3d3df70385ab621a438a6f79c\": rpc error: code = NotFound desc = could not find container \"4a9be13bce56763ea42a5d0e4300187d7199e7d3d3df70385ab621a438a6f79c\": container with ID starting with 4a9be13bce56763ea42a5d0e4300187d7199e7d3d3df70385ab621a438a6f79c not 
found: ID does not exist" Feb 19 14:12:50 crc kubenswrapper[4861]: I0219 14:12:50.177515 4861 scope.go:117] "RemoveContainer" containerID="c8ad24d4f4e395b4d690bd0756e6136eae6d2726da03172e0a043d025bf2e656" Feb 19 14:12:50 crc kubenswrapper[4861]: E0219 14:12:50.177869 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8ad24d4f4e395b4d690bd0756e6136eae6d2726da03172e0a043d025bf2e656\": container with ID starting with c8ad24d4f4e395b4d690bd0756e6136eae6d2726da03172e0a043d025bf2e656 not found: ID does not exist" containerID="c8ad24d4f4e395b4d690bd0756e6136eae6d2726da03172e0a043d025bf2e656" Feb 19 14:12:50 crc kubenswrapper[4861]: I0219 14:12:50.177889 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8ad24d4f4e395b4d690bd0756e6136eae6d2726da03172e0a043d025bf2e656"} err="failed to get container status \"c8ad24d4f4e395b4d690bd0756e6136eae6d2726da03172e0a043d025bf2e656\": rpc error: code = NotFound desc = could not find container \"c8ad24d4f4e395b4d690bd0756e6136eae6d2726da03172e0a043d025bf2e656\": container with ID starting with c8ad24d4f4e395b4d690bd0756e6136eae6d2726da03172e0a043d025bf2e656 not found: ID does not exist" Feb 19 14:12:50 crc kubenswrapper[4861]: I0219 14:12:50.177905 4861 scope.go:117] "RemoveContainer" containerID="92a11c7f78d089d1c68bf629bd2c72de4c577dbb095185580e35aa5be7c983b8" Feb 19 14:12:50 crc kubenswrapper[4861]: E0219 14:12:50.178223 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92a11c7f78d089d1c68bf629bd2c72de4c577dbb095185580e35aa5be7c983b8\": container with ID starting with 92a11c7f78d089d1c68bf629bd2c72de4c577dbb095185580e35aa5be7c983b8 not found: ID does not exist" containerID="92a11c7f78d089d1c68bf629bd2c72de4c577dbb095185580e35aa5be7c983b8" Feb 19 14:12:50 crc kubenswrapper[4861]: I0219 14:12:50.178248 4861 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92a11c7f78d089d1c68bf629bd2c72de4c577dbb095185580e35aa5be7c983b8"} err="failed to get container status \"92a11c7f78d089d1c68bf629bd2c72de4c577dbb095185580e35aa5be7c983b8\": rpc error: code = NotFound desc = could not find container \"92a11c7f78d089d1c68bf629bd2c72de4c577dbb095185580e35aa5be7c983b8\": container with ID starting with 92a11c7f78d089d1c68bf629bd2c72de4c577dbb095185580e35aa5be7c983b8 not found: ID does not exist" Feb 19 14:12:51 crc kubenswrapper[4861]: I0219 14:12:51.989561 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73daf0e3-448f-4d17-a249-fb300e0a163e" path="/var/lib/kubelet/pods/73daf0e3-448f-4d17-a249-fb300e0a163e/volumes" Feb 19 14:13:03 crc kubenswrapper[4861]: I0219 14:13:03.834225 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 14:13:03 crc kubenswrapper[4861]: I0219 14:13:03.834798 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 14:13:33 crc kubenswrapper[4861]: I0219 14:13:33.835037 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 14:13:33 crc kubenswrapper[4861]: I0219 14:13:33.835653 4861 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 14:14:03 crc kubenswrapper[4861]: I0219 14:14:03.835214 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 14:14:03 crc kubenswrapper[4861]: I0219 14:14:03.836268 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 14:14:03 crc kubenswrapper[4861]: I0219 14:14:03.836359 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 14:14:03 crc kubenswrapper[4861]: I0219 14:14:03.837401 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5e60c96bce15e6cbba95a6ae6d61098e78959c344dddfef7ece4398885f4cb67"} pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 14:14:03 crc kubenswrapper[4861]: I0219 14:14:03.837567 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" 
containerID="cri-o://5e60c96bce15e6cbba95a6ae6d61098e78959c344dddfef7ece4398885f4cb67" gracePeriod=600 Feb 19 14:14:03 crc kubenswrapper[4861]: E0219 14:14:03.965987 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:14:04 crc kubenswrapper[4861]: I0219 14:14:04.738078 4861 generic.go:334] "Generic (PLEG): container finished" podID="478e6971-05ac-43f2-99a2-cd93644c6227" containerID="5e60c96bce15e6cbba95a6ae6d61098e78959c344dddfef7ece4398885f4cb67" exitCode=0 Feb 19 14:14:04 crc kubenswrapper[4861]: I0219 14:14:04.738125 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerDied","Data":"5e60c96bce15e6cbba95a6ae6d61098e78959c344dddfef7ece4398885f4cb67"} Feb 19 14:14:04 crc kubenswrapper[4861]: I0219 14:14:04.738155 4861 scope.go:117] "RemoveContainer" containerID="b618bc46fa12701f1809fdf82a23eedbd2deae191f3a45e0501eba1fbb1d21f8" Feb 19 14:14:04 crc kubenswrapper[4861]: I0219 14:14:04.738608 4861 scope.go:117] "RemoveContainer" containerID="5e60c96bce15e6cbba95a6ae6d61098e78959c344dddfef7ece4398885f4cb67" Feb 19 14:14:04 crc kubenswrapper[4861]: E0219 14:14:04.738853 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" 
podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:14:17 crc kubenswrapper[4861]: I0219 14:14:17.978160 4861 scope.go:117] "RemoveContainer" containerID="5e60c96bce15e6cbba95a6ae6d61098e78959c344dddfef7ece4398885f4cb67" Feb 19 14:14:17 crc kubenswrapper[4861]: E0219 14:14:17.979372 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:14:30 crc kubenswrapper[4861]: I0219 14:14:30.977132 4861 scope.go:117] "RemoveContainer" containerID="5e60c96bce15e6cbba95a6ae6d61098e78959c344dddfef7ece4398885f4cb67" Feb 19 14:14:30 crc kubenswrapper[4861]: E0219 14:14:30.978066 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:14:43 crc kubenswrapper[4861]: I0219 14:14:43.978076 4861 scope.go:117] "RemoveContainer" containerID="5e60c96bce15e6cbba95a6ae6d61098e78959c344dddfef7ece4398885f4cb67" Feb 19 14:14:43 crc kubenswrapper[4861]: E0219 14:14:43.979203 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:14:58 crc kubenswrapper[4861]: I0219 14:14:58.977577 4861 scope.go:117] "RemoveContainer" containerID="5e60c96bce15e6cbba95a6ae6d61098e78959c344dddfef7ece4398885f4cb67" Feb 19 14:14:58 crc kubenswrapper[4861]: E0219 14:14:58.978662 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:15:00 crc kubenswrapper[4861]: I0219 14:15:00.214543 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525175-bg4xn"] Feb 19 14:15:00 crc kubenswrapper[4861]: E0219 14:15:00.215046 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73daf0e3-448f-4d17-a249-fb300e0a163e" containerName="extract-content" Feb 19 14:15:00 crc kubenswrapper[4861]: I0219 14:15:00.215071 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="73daf0e3-448f-4d17-a249-fb300e0a163e" containerName="extract-content" Feb 19 14:15:00 crc kubenswrapper[4861]: E0219 14:15:00.215106 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73daf0e3-448f-4d17-a249-fb300e0a163e" containerName="extract-utilities" Feb 19 14:15:00 crc kubenswrapper[4861]: I0219 14:15:00.215119 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="73daf0e3-448f-4d17-a249-fb300e0a163e" containerName="extract-utilities" Feb 19 14:15:00 crc kubenswrapper[4861]: E0219 14:15:00.215135 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73daf0e3-448f-4d17-a249-fb300e0a163e" containerName="registry-server" Feb 19 14:15:00 crc 
kubenswrapper[4861]: I0219 14:15:00.215167 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="73daf0e3-448f-4d17-a249-fb300e0a163e" containerName="registry-server" Feb 19 14:15:00 crc kubenswrapper[4861]: I0219 14:15:00.215473 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="73daf0e3-448f-4d17-a249-fb300e0a163e" containerName="registry-server" Feb 19 14:15:00 crc kubenswrapper[4861]: I0219 14:15:00.216160 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525175-bg4xn" Feb 19 14:15:00 crc kubenswrapper[4861]: I0219 14:15:00.218885 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 14:15:00 crc kubenswrapper[4861]: I0219 14:15:00.219294 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 14:15:00 crc kubenswrapper[4861]: I0219 14:15:00.228405 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525175-bg4xn"] Feb 19 14:15:00 crc kubenswrapper[4861]: I0219 14:15:00.307685 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bnfn\" (UniqueName: \"kubernetes.io/projected/8bcf75d3-3362-4134-9bc0-89fc873610a5-kube-api-access-8bnfn\") pod \"collect-profiles-29525175-bg4xn\" (UID: \"8bcf75d3-3362-4134-9bc0-89fc873610a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525175-bg4xn" Feb 19 14:15:00 crc kubenswrapper[4861]: I0219 14:15:00.307849 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8bcf75d3-3362-4134-9bc0-89fc873610a5-secret-volume\") pod \"collect-profiles-29525175-bg4xn\" (UID: \"8bcf75d3-3362-4134-9bc0-89fc873610a5\") 
" pod="openshift-operator-lifecycle-manager/collect-profiles-29525175-bg4xn" Feb 19 14:15:00 crc kubenswrapper[4861]: I0219 14:15:00.307926 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8bcf75d3-3362-4134-9bc0-89fc873610a5-config-volume\") pod \"collect-profiles-29525175-bg4xn\" (UID: \"8bcf75d3-3362-4134-9bc0-89fc873610a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525175-bg4xn" Feb 19 14:15:00 crc kubenswrapper[4861]: I0219 14:15:00.409509 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8bcf75d3-3362-4134-9bc0-89fc873610a5-config-volume\") pod \"collect-profiles-29525175-bg4xn\" (UID: \"8bcf75d3-3362-4134-9bc0-89fc873610a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525175-bg4xn" Feb 19 14:15:00 crc kubenswrapper[4861]: I0219 14:15:00.409615 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bnfn\" (UniqueName: \"kubernetes.io/projected/8bcf75d3-3362-4134-9bc0-89fc873610a5-kube-api-access-8bnfn\") pod \"collect-profiles-29525175-bg4xn\" (UID: \"8bcf75d3-3362-4134-9bc0-89fc873610a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525175-bg4xn" Feb 19 14:15:00 crc kubenswrapper[4861]: I0219 14:15:00.409692 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8bcf75d3-3362-4134-9bc0-89fc873610a5-secret-volume\") pod \"collect-profiles-29525175-bg4xn\" (UID: \"8bcf75d3-3362-4134-9bc0-89fc873610a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525175-bg4xn" Feb 19 14:15:00 crc kubenswrapper[4861]: I0219 14:15:00.411241 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/8bcf75d3-3362-4134-9bc0-89fc873610a5-config-volume\") pod \"collect-profiles-29525175-bg4xn\" (UID: \"8bcf75d3-3362-4134-9bc0-89fc873610a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525175-bg4xn" Feb 19 14:15:00 crc kubenswrapper[4861]: I0219 14:15:00.420672 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8bcf75d3-3362-4134-9bc0-89fc873610a5-secret-volume\") pod \"collect-profiles-29525175-bg4xn\" (UID: \"8bcf75d3-3362-4134-9bc0-89fc873610a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525175-bg4xn" Feb 19 14:15:00 crc kubenswrapper[4861]: I0219 14:15:00.430744 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bnfn\" (UniqueName: \"kubernetes.io/projected/8bcf75d3-3362-4134-9bc0-89fc873610a5-kube-api-access-8bnfn\") pod \"collect-profiles-29525175-bg4xn\" (UID: \"8bcf75d3-3362-4134-9bc0-89fc873610a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525175-bg4xn" Feb 19 14:15:00 crc kubenswrapper[4861]: I0219 14:15:00.554892 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525175-bg4xn" Feb 19 14:15:01 crc kubenswrapper[4861]: I0219 14:15:01.015779 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525175-bg4xn"] Feb 19 14:15:01 crc kubenswrapper[4861]: W0219 14:15:01.022657 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bcf75d3_3362_4134_9bc0_89fc873610a5.slice/crio-b1e83c6709f8de8ff7f11b6fdb9df4085c3239f599c83a4ce5184d5d75cad9a4 WatchSource:0}: Error finding container b1e83c6709f8de8ff7f11b6fdb9df4085c3239f599c83a4ce5184d5d75cad9a4: Status 404 returned error can't find the container with id b1e83c6709f8de8ff7f11b6fdb9df4085c3239f599c83a4ce5184d5d75cad9a4 Feb 19 14:15:01 crc kubenswrapper[4861]: I0219 14:15:01.937074 4861 generic.go:334] "Generic (PLEG): container finished" podID="8bcf75d3-3362-4134-9bc0-89fc873610a5" containerID="c421d19a9e489a719dd9f5df4197d90121a2f4d8d1fd3526599c0fcecef15254" exitCode=0 Feb 19 14:15:01 crc kubenswrapper[4861]: I0219 14:15:01.937145 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525175-bg4xn" event={"ID":"8bcf75d3-3362-4134-9bc0-89fc873610a5","Type":"ContainerDied","Data":"c421d19a9e489a719dd9f5df4197d90121a2f4d8d1fd3526599c0fcecef15254"} Feb 19 14:15:01 crc kubenswrapper[4861]: I0219 14:15:01.938602 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525175-bg4xn" event={"ID":"8bcf75d3-3362-4134-9bc0-89fc873610a5","Type":"ContainerStarted","Data":"b1e83c6709f8de8ff7f11b6fdb9df4085c3239f599c83a4ce5184d5d75cad9a4"} Feb 19 14:15:03 crc kubenswrapper[4861]: I0219 14:15:03.230601 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525175-bg4xn" Feb 19 14:15:03 crc kubenswrapper[4861]: I0219 14:15:03.353533 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8bcf75d3-3362-4134-9bc0-89fc873610a5-config-volume\") pod \"8bcf75d3-3362-4134-9bc0-89fc873610a5\" (UID: \"8bcf75d3-3362-4134-9bc0-89fc873610a5\") " Feb 19 14:15:03 crc kubenswrapper[4861]: I0219 14:15:03.353606 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8bcf75d3-3362-4134-9bc0-89fc873610a5-secret-volume\") pod \"8bcf75d3-3362-4134-9bc0-89fc873610a5\" (UID: \"8bcf75d3-3362-4134-9bc0-89fc873610a5\") " Feb 19 14:15:03 crc kubenswrapper[4861]: I0219 14:15:03.353728 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bnfn\" (UniqueName: \"kubernetes.io/projected/8bcf75d3-3362-4134-9bc0-89fc873610a5-kube-api-access-8bnfn\") pod \"8bcf75d3-3362-4134-9bc0-89fc873610a5\" (UID: \"8bcf75d3-3362-4134-9bc0-89fc873610a5\") " Feb 19 14:15:03 crc kubenswrapper[4861]: I0219 14:15:03.355137 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bcf75d3-3362-4134-9bc0-89fc873610a5-config-volume" (OuterVolumeSpecName: "config-volume") pod "8bcf75d3-3362-4134-9bc0-89fc873610a5" (UID: "8bcf75d3-3362-4134-9bc0-89fc873610a5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:15:03 crc kubenswrapper[4861]: I0219 14:15:03.360328 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bcf75d3-3362-4134-9bc0-89fc873610a5-kube-api-access-8bnfn" (OuterVolumeSpecName: "kube-api-access-8bnfn") pod "8bcf75d3-3362-4134-9bc0-89fc873610a5" (UID: "8bcf75d3-3362-4134-9bc0-89fc873610a5"). 
InnerVolumeSpecName "kube-api-access-8bnfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:15:03 crc kubenswrapper[4861]: I0219 14:15:03.361496 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bcf75d3-3362-4134-9bc0-89fc873610a5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8bcf75d3-3362-4134-9bc0-89fc873610a5" (UID: "8bcf75d3-3362-4134-9bc0-89fc873610a5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:15:03 crc kubenswrapper[4861]: I0219 14:15:03.455362 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bnfn\" (UniqueName: \"kubernetes.io/projected/8bcf75d3-3362-4134-9bc0-89fc873610a5-kube-api-access-8bnfn\") on node \"crc\" DevicePath \"\"" Feb 19 14:15:03 crc kubenswrapper[4861]: I0219 14:15:03.455400 4861 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8bcf75d3-3362-4134-9bc0-89fc873610a5-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 14:15:03 crc kubenswrapper[4861]: I0219 14:15:03.455410 4861 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8bcf75d3-3362-4134-9bc0-89fc873610a5-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 14:15:03 crc kubenswrapper[4861]: I0219 14:15:03.958489 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525175-bg4xn" event={"ID":"8bcf75d3-3362-4134-9bc0-89fc873610a5","Type":"ContainerDied","Data":"b1e83c6709f8de8ff7f11b6fdb9df4085c3239f599c83a4ce5184d5d75cad9a4"} Feb 19 14:15:03 crc kubenswrapper[4861]: I0219 14:15:03.958926 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1e83c6709f8de8ff7f11b6fdb9df4085c3239f599c83a4ce5184d5d75cad9a4" Feb 19 14:15:03 crc kubenswrapper[4861]: I0219 14:15:03.958597 4861 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525175-bg4xn" Feb 19 14:15:04 crc kubenswrapper[4861]: I0219 14:15:04.306920 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525130-2k7g2"] Feb 19 14:15:04 crc kubenswrapper[4861]: I0219 14:15:04.317081 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525130-2k7g2"] Feb 19 14:15:05 crc kubenswrapper[4861]: I0219 14:15:05.989581 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27a66507-2e1b-457e-bdd2-64fa5283fea9" path="/var/lib/kubelet/pods/27a66507-2e1b-457e-bdd2-64fa5283fea9/volumes" Feb 19 14:15:10 crc kubenswrapper[4861]: I0219 14:15:10.250592 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s2zl4"] Feb 19 14:15:10 crc kubenswrapper[4861]: E0219 14:15:10.251732 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bcf75d3-3362-4134-9bc0-89fc873610a5" containerName="collect-profiles" Feb 19 14:15:10 crc kubenswrapper[4861]: I0219 14:15:10.251757 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bcf75d3-3362-4134-9bc0-89fc873610a5" containerName="collect-profiles" Feb 19 14:15:10 crc kubenswrapper[4861]: I0219 14:15:10.252030 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bcf75d3-3362-4134-9bc0-89fc873610a5" containerName="collect-profiles" Feb 19 14:15:10 crc kubenswrapper[4861]: I0219 14:15:10.253768 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s2zl4" Feb 19 14:15:10 crc kubenswrapper[4861]: I0219 14:15:10.275302 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s2zl4"] Feb 19 14:15:10 crc kubenswrapper[4861]: I0219 14:15:10.364411 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vznsl\" (UniqueName: \"kubernetes.io/projected/4c43105a-60c6-4a71-8632-edff2f2a71de-kube-api-access-vznsl\") pod \"redhat-operators-s2zl4\" (UID: \"4c43105a-60c6-4a71-8632-edff2f2a71de\") " pod="openshift-marketplace/redhat-operators-s2zl4" Feb 19 14:15:10 crc kubenswrapper[4861]: I0219 14:15:10.364522 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c43105a-60c6-4a71-8632-edff2f2a71de-utilities\") pod \"redhat-operators-s2zl4\" (UID: \"4c43105a-60c6-4a71-8632-edff2f2a71de\") " pod="openshift-marketplace/redhat-operators-s2zl4" Feb 19 14:15:10 crc kubenswrapper[4861]: I0219 14:15:10.364568 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c43105a-60c6-4a71-8632-edff2f2a71de-catalog-content\") pod \"redhat-operators-s2zl4\" (UID: \"4c43105a-60c6-4a71-8632-edff2f2a71de\") " pod="openshift-marketplace/redhat-operators-s2zl4" Feb 19 14:15:10 crc kubenswrapper[4861]: I0219 14:15:10.466204 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c43105a-60c6-4a71-8632-edff2f2a71de-utilities\") pod \"redhat-operators-s2zl4\" (UID: \"4c43105a-60c6-4a71-8632-edff2f2a71de\") " pod="openshift-marketplace/redhat-operators-s2zl4" Feb 19 14:15:10 crc kubenswrapper[4861]: I0219 14:15:10.466290 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c43105a-60c6-4a71-8632-edff2f2a71de-catalog-content\") pod \"redhat-operators-s2zl4\" (UID: \"4c43105a-60c6-4a71-8632-edff2f2a71de\") " pod="openshift-marketplace/redhat-operators-s2zl4" Feb 19 14:15:10 crc kubenswrapper[4861]: I0219 14:15:10.466725 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c43105a-60c6-4a71-8632-edff2f2a71de-utilities\") pod \"redhat-operators-s2zl4\" (UID: \"4c43105a-60c6-4a71-8632-edff2f2a71de\") " pod="openshift-marketplace/redhat-operators-s2zl4" Feb 19 14:15:10 crc kubenswrapper[4861]: I0219 14:15:10.466799 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c43105a-60c6-4a71-8632-edff2f2a71de-catalog-content\") pod \"redhat-operators-s2zl4\" (UID: \"4c43105a-60c6-4a71-8632-edff2f2a71de\") " pod="openshift-marketplace/redhat-operators-s2zl4" Feb 19 14:15:10 crc kubenswrapper[4861]: I0219 14:15:10.466951 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vznsl\" (UniqueName: \"kubernetes.io/projected/4c43105a-60c6-4a71-8632-edff2f2a71de-kube-api-access-vznsl\") pod \"redhat-operators-s2zl4\" (UID: \"4c43105a-60c6-4a71-8632-edff2f2a71de\") " pod="openshift-marketplace/redhat-operators-s2zl4" Feb 19 14:15:10 crc kubenswrapper[4861]: I0219 14:15:10.490129 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vznsl\" (UniqueName: \"kubernetes.io/projected/4c43105a-60c6-4a71-8632-edff2f2a71de-kube-api-access-vznsl\") pod \"redhat-operators-s2zl4\" (UID: \"4c43105a-60c6-4a71-8632-edff2f2a71de\") " pod="openshift-marketplace/redhat-operators-s2zl4" Feb 19 14:15:10 crc kubenswrapper[4861]: I0219 14:15:10.625411 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s2zl4" Feb 19 14:15:10 crc kubenswrapper[4861]: I0219 14:15:10.977228 4861 scope.go:117] "RemoveContainer" containerID="5e60c96bce15e6cbba95a6ae6d61098e78959c344dddfef7ece4398885f4cb67" Feb 19 14:15:10 crc kubenswrapper[4861]: E0219 14:15:10.977620 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:15:11 crc kubenswrapper[4861]: I0219 14:15:11.088594 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s2zl4"] Feb 19 14:15:12 crc kubenswrapper[4861]: I0219 14:15:12.027322 4861 generic.go:334] "Generic (PLEG): container finished" podID="4c43105a-60c6-4a71-8632-edff2f2a71de" containerID="eff0cd659c9d6815b4f37b7fb3f0af4e1393ac6125a2e60d72528cdb49a6016a" exitCode=0 Feb 19 14:15:12 crc kubenswrapper[4861]: I0219 14:15:12.027614 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2zl4" event={"ID":"4c43105a-60c6-4a71-8632-edff2f2a71de","Type":"ContainerDied","Data":"eff0cd659c9d6815b4f37b7fb3f0af4e1393ac6125a2e60d72528cdb49a6016a"} Feb 19 14:15:12 crc kubenswrapper[4861]: I0219 14:15:12.027638 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2zl4" event={"ID":"4c43105a-60c6-4a71-8632-edff2f2a71de","Type":"ContainerStarted","Data":"f3fe68a1a093fa7b35b5076d02d48c370fa617681d259de04bc562e992b258a4"} Feb 19 14:15:14 crc kubenswrapper[4861]: I0219 14:15:14.051859 4861 generic.go:334] "Generic (PLEG): container finished" podID="4c43105a-60c6-4a71-8632-edff2f2a71de" 
containerID="70b91391afbd3b1e0c415614bb358e2cb535ed160ceb7bc21db2d2c4def776c9" exitCode=0 Feb 19 14:15:14 crc kubenswrapper[4861]: I0219 14:15:14.052061 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2zl4" event={"ID":"4c43105a-60c6-4a71-8632-edff2f2a71de","Type":"ContainerDied","Data":"70b91391afbd3b1e0c415614bb358e2cb535ed160ceb7bc21db2d2c4def776c9"} Feb 19 14:15:15 crc kubenswrapper[4861]: I0219 14:15:15.062040 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2zl4" event={"ID":"4c43105a-60c6-4a71-8632-edff2f2a71de","Type":"ContainerStarted","Data":"4aa08c13fa3b1ef45d3fa7127050ca460c4941a58e5a67588e48c13ede4dd5e7"} Feb 19 14:15:15 crc kubenswrapper[4861]: I0219 14:15:15.093292 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s2zl4" podStartSLOduration=2.66063288 podStartE2EDuration="5.093270841s" podCreationTimestamp="2026-02-19 14:15:10 +0000 UTC" firstStartedPulling="2026-02-19 14:15:12.029612946 +0000 UTC m=+3926.690716174" lastFinishedPulling="2026-02-19 14:15:14.462250877 +0000 UTC m=+3929.123354135" observedRunningTime="2026-02-19 14:15:15.08508308 +0000 UTC m=+3929.746186308" watchObservedRunningTime="2026-02-19 14:15:15.093270841 +0000 UTC m=+3929.754374089" Feb 19 14:15:20 crc kubenswrapper[4861]: I0219 14:15:20.626140 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s2zl4" Feb 19 14:15:20 crc kubenswrapper[4861]: I0219 14:15:20.626624 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s2zl4" Feb 19 14:15:21 crc kubenswrapper[4861]: I0219 14:15:21.667460 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s2zl4" podUID="4c43105a-60c6-4a71-8632-edff2f2a71de" containerName="registry-server" 
probeResult="failure" output=< Feb 19 14:15:21 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Feb 19 14:15:21 crc kubenswrapper[4861]: > Feb 19 14:15:22 crc kubenswrapper[4861]: I0219 14:15:22.977536 4861 scope.go:117] "RemoveContainer" containerID="5e60c96bce15e6cbba95a6ae6d61098e78959c344dddfef7ece4398885f4cb67" Feb 19 14:15:22 crc kubenswrapper[4861]: E0219 14:15:22.978047 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:15:30 crc kubenswrapper[4861]: I0219 14:15:30.691779 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s2zl4" Feb 19 14:15:30 crc kubenswrapper[4861]: I0219 14:15:30.762679 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s2zl4" Feb 19 14:15:30 crc kubenswrapper[4861]: I0219 14:15:30.939505 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s2zl4"] Feb 19 14:15:32 crc kubenswrapper[4861]: I0219 14:15:32.215524 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s2zl4" podUID="4c43105a-60c6-4a71-8632-edff2f2a71de" containerName="registry-server" containerID="cri-o://4aa08c13fa3b1ef45d3fa7127050ca460c4941a58e5a67588e48c13ede4dd5e7" gracePeriod=2 Feb 19 14:15:32 crc kubenswrapper[4861]: I0219 14:15:32.627123 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s2zl4" Feb 19 14:15:32 crc kubenswrapper[4861]: I0219 14:15:32.637037 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c43105a-60c6-4a71-8632-edff2f2a71de-catalog-content\") pod \"4c43105a-60c6-4a71-8632-edff2f2a71de\" (UID: \"4c43105a-60c6-4a71-8632-edff2f2a71de\") " Feb 19 14:15:32 crc kubenswrapper[4861]: I0219 14:15:32.637123 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vznsl\" (UniqueName: \"kubernetes.io/projected/4c43105a-60c6-4a71-8632-edff2f2a71de-kube-api-access-vznsl\") pod \"4c43105a-60c6-4a71-8632-edff2f2a71de\" (UID: \"4c43105a-60c6-4a71-8632-edff2f2a71de\") " Feb 19 14:15:32 crc kubenswrapper[4861]: I0219 14:15:32.637271 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c43105a-60c6-4a71-8632-edff2f2a71de-utilities\") pod \"4c43105a-60c6-4a71-8632-edff2f2a71de\" (UID: \"4c43105a-60c6-4a71-8632-edff2f2a71de\") " Feb 19 14:15:32 crc kubenswrapper[4861]: I0219 14:15:32.638173 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c43105a-60c6-4a71-8632-edff2f2a71de-utilities" (OuterVolumeSpecName: "utilities") pod "4c43105a-60c6-4a71-8632-edff2f2a71de" (UID: "4c43105a-60c6-4a71-8632-edff2f2a71de"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:15:32 crc kubenswrapper[4861]: I0219 14:15:32.643666 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c43105a-60c6-4a71-8632-edff2f2a71de-kube-api-access-vznsl" (OuterVolumeSpecName: "kube-api-access-vznsl") pod "4c43105a-60c6-4a71-8632-edff2f2a71de" (UID: "4c43105a-60c6-4a71-8632-edff2f2a71de"). InnerVolumeSpecName "kube-api-access-vznsl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:15:32 crc kubenswrapper[4861]: I0219 14:15:32.738977 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vznsl\" (UniqueName: \"kubernetes.io/projected/4c43105a-60c6-4a71-8632-edff2f2a71de-kube-api-access-vznsl\") on node \"crc\" DevicePath \"\"" Feb 19 14:15:32 crc kubenswrapper[4861]: I0219 14:15:32.739013 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c43105a-60c6-4a71-8632-edff2f2a71de-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 14:15:32 crc kubenswrapper[4861]: I0219 14:15:32.771441 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c43105a-60c6-4a71-8632-edff2f2a71de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c43105a-60c6-4a71-8632-edff2f2a71de" (UID: "4c43105a-60c6-4a71-8632-edff2f2a71de"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:15:32 crc kubenswrapper[4861]: I0219 14:15:32.840536 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c43105a-60c6-4a71-8632-edff2f2a71de-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 14:15:33 crc kubenswrapper[4861]: I0219 14:15:33.228818 4861 generic.go:334] "Generic (PLEG): container finished" podID="4c43105a-60c6-4a71-8632-edff2f2a71de" containerID="4aa08c13fa3b1ef45d3fa7127050ca460c4941a58e5a67588e48c13ede4dd5e7" exitCode=0 Feb 19 14:15:33 crc kubenswrapper[4861]: I0219 14:15:33.228879 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2zl4" event={"ID":"4c43105a-60c6-4a71-8632-edff2f2a71de","Type":"ContainerDied","Data":"4aa08c13fa3b1ef45d3fa7127050ca460c4941a58e5a67588e48c13ede4dd5e7"} Feb 19 14:15:33 crc kubenswrapper[4861]: I0219 14:15:33.228922 4861 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-s2zl4" event={"ID":"4c43105a-60c6-4a71-8632-edff2f2a71de","Type":"ContainerDied","Data":"f3fe68a1a093fa7b35b5076d02d48c370fa617681d259de04bc562e992b258a4"} Feb 19 14:15:33 crc kubenswrapper[4861]: I0219 14:15:33.228953 4861 scope.go:117] "RemoveContainer" containerID="4aa08c13fa3b1ef45d3fa7127050ca460c4941a58e5a67588e48c13ede4dd5e7" Feb 19 14:15:33 crc kubenswrapper[4861]: I0219 14:15:33.229127 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s2zl4" Feb 19 14:15:33 crc kubenswrapper[4861]: I0219 14:15:33.263672 4861 scope.go:117] "RemoveContainer" containerID="70b91391afbd3b1e0c415614bb358e2cb535ed160ceb7bc21db2d2c4def776c9" Feb 19 14:15:33 crc kubenswrapper[4861]: I0219 14:15:33.292918 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s2zl4"] Feb 19 14:15:33 crc kubenswrapper[4861]: I0219 14:15:33.308080 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s2zl4"] Feb 19 14:15:33 crc kubenswrapper[4861]: I0219 14:15:33.311735 4861 scope.go:117] "RemoveContainer" containerID="eff0cd659c9d6815b4f37b7fb3f0af4e1393ac6125a2e60d72528cdb49a6016a" Feb 19 14:15:33 crc kubenswrapper[4861]: I0219 14:15:33.331849 4861 scope.go:117] "RemoveContainer" containerID="4aa08c13fa3b1ef45d3fa7127050ca460c4941a58e5a67588e48c13ede4dd5e7" Feb 19 14:15:33 crc kubenswrapper[4861]: E0219 14:15:33.332541 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4aa08c13fa3b1ef45d3fa7127050ca460c4941a58e5a67588e48c13ede4dd5e7\": container with ID starting with 4aa08c13fa3b1ef45d3fa7127050ca460c4941a58e5a67588e48c13ede4dd5e7 not found: ID does not exist" containerID="4aa08c13fa3b1ef45d3fa7127050ca460c4941a58e5a67588e48c13ede4dd5e7" Feb 19 14:15:33 crc kubenswrapper[4861]: I0219 14:15:33.332595 4861 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aa08c13fa3b1ef45d3fa7127050ca460c4941a58e5a67588e48c13ede4dd5e7"} err="failed to get container status \"4aa08c13fa3b1ef45d3fa7127050ca460c4941a58e5a67588e48c13ede4dd5e7\": rpc error: code = NotFound desc = could not find container \"4aa08c13fa3b1ef45d3fa7127050ca460c4941a58e5a67588e48c13ede4dd5e7\": container with ID starting with 4aa08c13fa3b1ef45d3fa7127050ca460c4941a58e5a67588e48c13ede4dd5e7 not found: ID does not exist" Feb 19 14:15:33 crc kubenswrapper[4861]: I0219 14:15:33.332625 4861 scope.go:117] "RemoveContainer" containerID="70b91391afbd3b1e0c415614bb358e2cb535ed160ceb7bc21db2d2c4def776c9" Feb 19 14:15:33 crc kubenswrapper[4861]: E0219 14:15:33.332991 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70b91391afbd3b1e0c415614bb358e2cb535ed160ceb7bc21db2d2c4def776c9\": container with ID starting with 70b91391afbd3b1e0c415614bb358e2cb535ed160ceb7bc21db2d2c4def776c9 not found: ID does not exist" containerID="70b91391afbd3b1e0c415614bb358e2cb535ed160ceb7bc21db2d2c4def776c9" Feb 19 14:15:33 crc kubenswrapper[4861]: I0219 14:15:33.333050 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70b91391afbd3b1e0c415614bb358e2cb535ed160ceb7bc21db2d2c4def776c9"} err="failed to get container status \"70b91391afbd3b1e0c415614bb358e2cb535ed160ceb7bc21db2d2c4def776c9\": rpc error: code = NotFound desc = could not find container \"70b91391afbd3b1e0c415614bb358e2cb535ed160ceb7bc21db2d2c4def776c9\": container with ID starting with 70b91391afbd3b1e0c415614bb358e2cb535ed160ceb7bc21db2d2c4def776c9 not found: ID does not exist" Feb 19 14:15:33 crc kubenswrapper[4861]: I0219 14:15:33.333098 4861 scope.go:117] "RemoveContainer" containerID="eff0cd659c9d6815b4f37b7fb3f0af4e1393ac6125a2e60d72528cdb49a6016a" Feb 19 14:15:33 crc kubenswrapper[4861]: E0219 
14:15:33.333400 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eff0cd659c9d6815b4f37b7fb3f0af4e1393ac6125a2e60d72528cdb49a6016a\": container with ID starting with eff0cd659c9d6815b4f37b7fb3f0af4e1393ac6125a2e60d72528cdb49a6016a not found: ID does not exist" containerID="eff0cd659c9d6815b4f37b7fb3f0af4e1393ac6125a2e60d72528cdb49a6016a" Feb 19 14:15:33 crc kubenswrapper[4861]: I0219 14:15:33.333469 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eff0cd659c9d6815b4f37b7fb3f0af4e1393ac6125a2e60d72528cdb49a6016a"} err="failed to get container status \"eff0cd659c9d6815b4f37b7fb3f0af4e1393ac6125a2e60d72528cdb49a6016a\": rpc error: code = NotFound desc = could not find container \"eff0cd659c9d6815b4f37b7fb3f0af4e1393ac6125a2e60d72528cdb49a6016a\": container with ID starting with eff0cd659c9d6815b4f37b7fb3f0af4e1393ac6125a2e60d72528cdb49a6016a not found: ID does not exist" Feb 19 14:15:33 crc kubenswrapper[4861]: I0219 14:15:33.990554 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c43105a-60c6-4a71-8632-edff2f2a71de" path="/var/lib/kubelet/pods/4c43105a-60c6-4a71-8632-edff2f2a71de/volumes" Feb 19 14:15:36 crc kubenswrapper[4861]: I0219 14:15:36.977291 4861 scope.go:117] "RemoveContainer" containerID="5e60c96bce15e6cbba95a6ae6d61098e78959c344dddfef7ece4398885f4cb67" Feb 19 14:15:36 crc kubenswrapper[4861]: E0219 14:15:36.978369 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:15:48 crc kubenswrapper[4861]: I0219 14:15:48.976537 
4861 scope.go:117] "RemoveContainer" containerID="5e60c96bce15e6cbba95a6ae6d61098e78959c344dddfef7ece4398885f4cb67" Feb 19 14:15:48 crc kubenswrapper[4861]: E0219 14:15:48.977279 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:16:01 crc kubenswrapper[4861]: I0219 14:16:01.510983 4861 scope.go:117] "RemoveContainer" containerID="e97b0209ee18d6842151927b843f53ff2ef0e5d973fad92a6dee7be3cc1d3157" Feb 19 14:16:02 crc kubenswrapper[4861]: I0219 14:16:02.977158 4861 scope.go:117] "RemoveContainer" containerID="5e60c96bce15e6cbba95a6ae6d61098e78959c344dddfef7ece4398885f4cb67" Feb 19 14:16:02 crc kubenswrapper[4861]: E0219 14:16:02.978126 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:16:14 crc kubenswrapper[4861]: I0219 14:16:14.977259 4861 scope.go:117] "RemoveContainer" containerID="5e60c96bce15e6cbba95a6ae6d61098e78959c344dddfef7ece4398885f4cb67" Feb 19 14:16:14 crc kubenswrapper[4861]: E0219 14:16:14.978642 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:16:26 crc kubenswrapper[4861]: I0219 14:16:26.977833 4861 scope.go:117] "RemoveContainer" containerID="5e60c96bce15e6cbba95a6ae6d61098e78959c344dddfef7ece4398885f4cb67" Feb 19 14:16:26 crc kubenswrapper[4861]: E0219 14:16:26.978761 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:16:37 crc kubenswrapper[4861]: I0219 14:16:37.977715 4861 scope.go:117] "RemoveContainer" containerID="5e60c96bce15e6cbba95a6ae6d61098e78959c344dddfef7ece4398885f4cb67" Feb 19 14:16:37 crc kubenswrapper[4861]: E0219 14:16:37.978969 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:16:50 crc kubenswrapper[4861]: I0219 14:16:50.976854 4861 scope.go:117] "RemoveContainer" containerID="5e60c96bce15e6cbba95a6ae6d61098e78959c344dddfef7ece4398885f4cb67" Feb 19 14:16:50 crc kubenswrapper[4861]: E0219 14:16:50.977983 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:17:02 crc kubenswrapper[4861]: I0219 14:17:02.978181 4861 scope.go:117] "RemoveContainer" containerID="5e60c96bce15e6cbba95a6ae6d61098e78959c344dddfef7ece4398885f4cb67" Feb 19 14:17:02 crc kubenswrapper[4861]: E0219 14:17:02.979374 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:17:15 crc kubenswrapper[4861]: I0219 14:17:15.988294 4861 scope.go:117] "RemoveContainer" containerID="5e60c96bce15e6cbba95a6ae6d61098e78959c344dddfef7ece4398885f4cb67" Feb 19 14:17:15 crc kubenswrapper[4861]: E0219 14:17:15.989313 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:17:30 crc kubenswrapper[4861]: I0219 14:17:30.977526 4861 scope.go:117] "RemoveContainer" containerID="5e60c96bce15e6cbba95a6ae6d61098e78959c344dddfef7ece4398885f4cb67" Feb 19 14:17:30 crc kubenswrapper[4861]: E0219 14:17:30.978455 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:17:43 crc kubenswrapper[4861]: I0219 14:17:43.977398 4861 scope.go:117] "RemoveContainer" containerID="5e60c96bce15e6cbba95a6ae6d61098e78959c344dddfef7ece4398885f4cb67" Feb 19 14:17:43 crc kubenswrapper[4861]: E0219 14:17:43.978159 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:17:56 crc kubenswrapper[4861]: I0219 14:17:56.978102 4861 scope.go:117] "RemoveContainer" containerID="5e60c96bce15e6cbba95a6ae6d61098e78959c344dddfef7ece4398885f4cb67" Feb 19 14:17:56 crc kubenswrapper[4861]: E0219 14:17:56.979807 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:18:09 crc kubenswrapper[4861]: I0219 14:18:09.977553 4861 scope.go:117] "RemoveContainer" containerID="5e60c96bce15e6cbba95a6ae6d61098e78959c344dddfef7ece4398885f4cb67" Feb 19 14:18:09 crc kubenswrapper[4861]: E0219 14:18:09.978490 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:18:20 crc kubenswrapper[4861]: I0219 14:18:20.977731 4861 scope.go:117] "RemoveContainer" containerID="5e60c96bce15e6cbba95a6ae6d61098e78959c344dddfef7ece4398885f4cb67" Feb 19 14:18:20 crc kubenswrapper[4861]: E0219 14:18:20.978873 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:18:35 crc kubenswrapper[4861]: I0219 14:18:35.983779 4861 scope.go:117] "RemoveContainer" containerID="5e60c96bce15e6cbba95a6ae6d61098e78959c344dddfef7ece4398885f4cb67" Feb 19 14:18:35 crc kubenswrapper[4861]: E0219 14:18:35.984990 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:18:47 crc kubenswrapper[4861]: I0219 14:18:47.978052 4861 scope.go:117] "RemoveContainer" containerID="5e60c96bce15e6cbba95a6ae6d61098e78959c344dddfef7ece4398885f4cb67" Feb 19 14:18:47 crc kubenswrapper[4861]: E0219 14:18:47.979337 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:18:58 crc kubenswrapper[4861]: I0219 14:18:58.977635 4861 scope.go:117] "RemoveContainer" containerID="5e60c96bce15e6cbba95a6ae6d61098e78959c344dddfef7ece4398885f4cb67" Feb 19 14:18:58 crc kubenswrapper[4861]: E0219 14:18:58.978657 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:19:12 crc kubenswrapper[4861]: I0219 14:19:12.977862 4861 scope.go:117] "RemoveContainer" containerID="5e60c96bce15e6cbba95a6ae6d61098e78959c344dddfef7ece4398885f4cb67" Feb 19 14:19:13 crc kubenswrapper[4861]: I0219 14:19:13.205717 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"b3192bd7c3c9bca9390d78d1b46500dbfdd985426c314e19d884c49319fcd666"} Feb 19 14:19:36 crc kubenswrapper[4861]: I0219 14:19:36.973777 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7spnj"] Feb 19 14:19:36 crc kubenswrapper[4861]: E0219 14:19:36.974892 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c43105a-60c6-4a71-8632-edff2f2a71de" containerName="extract-content" Feb 19 14:19:36 crc kubenswrapper[4861]: I0219 14:19:36.974915 4861 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4c43105a-60c6-4a71-8632-edff2f2a71de" containerName="extract-content" Feb 19 14:19:36 crc kubenswrapper[4861]: E0219 14:19:36.974936 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c43105a-60c6-4a71-8632-edff2f2a71de" containerName="extract-utilities" Feb 19 14:19:36 crc kubenswrapper[4861]: I0219 14:19:36.974954 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c43105a-60c6-4a71-8632-edff2f2a71de" containerName="extract-utilities" Feb 19 14:19:36 crc kubenswrapper[4861]: E0219 14:19:36.974977 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c43105a-60c6-4a71-8632-edff2f2a71de" containerName="registry-server" Feb 19 14:19:36 crc kubenswrapper[4861]: I0219 14:19:36.974991 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c43105a-60c6-4a71-8632-edff2f2a71de" containerName="registry-server" Feb 19 14:19:36 crc kubenswrapper[4861]: I0219 14:19:36.975356 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c43105a-60c6-4a71-8632-edff2f2a71de" containerName="registry-server" Feb 19 14:19:36 crc kubenswrapper[4861]: I0219 14:19:36.977076 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7spnj" Feb 19 14:19:36 crc kubenswrapper[4861]: I0219 14:19:36.993925 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7spnj"] Feb 19 14:19:37 crc kubenswrapper[4861]: I0219 14:19:37.129681 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcc9dc2f-7d88-440d-9b2f-1ad38690cddc-utilities\") pod \"redhat-marketplace-7spnj\" (UID: \"dcc9dc2f-7d88-440d-9b2f-1ad38690cddc\") " pod="openshift-marketplace/redhat-marketplace-7spnj" Feb 19 14:19:37 crc kubenswrapper[4861]: I0219 14:19:37.129744 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcc9dc2f-7d88-440d-9b2f-1ad38690cddc-catalog-content\") pod \"redhat-marketplace-7spnj\" (UID: \"dcc9dc2f-7d88-440d-9b2f-1ad38690cddc\") " pod="openshift-marketplace/redhat-marketplace-7spnj" Feb 19 14:19:37 crc kubenswrapper[4861]: I0219 14:19:37.130570 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z55kh\" (UniqueName: \"kubernetes.io/projected/dcc9dc2f-7d88-440d-9b2f-1ad38690cddc-kube-api-access-z55kh\") pod \"redhat-marketplace-7spnj\" (UID: \"dcc9dc2f-7d88-440d-9b2f-1ad38690cddc\") " pod="openshift-marketplace/redhat-marketplace-7spnj" Feb 19 14:19:37 crc kubenswrapper[4861]: I0219 14:19:37.231663 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z55kh\" (UniqueName: \"kubernetes.io/projected/dcc9dc2f-7d88-440d-9b2f-1ad38690cddc-kube-api-access-z55kh\") pod \"redhat-marketplace-7spnj\" (UID: \"dcc9dc2f-7d88-440d-9b2f-1ad38690cddc\") " pod="openshift-marketplace/redhat-marketplace-7spnj" Feb 19 14:19:37 crc kubenswrapper[4861]: I0219 14:19:37.231778 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcc9dc2f-7d88-440d-9b2f-1ad38690cddc-utilities\") pod \"redhat-marketplace-7spnj\" (UID: \"dcc9dc2f-7d88-440d-9b2f-1ad38690cddc\") " pod="openshift-marketplace/redhat-marketplace-7spnj" Feb 19 14:19:37 crc kubenswrapper[4861]: I0219 14:19:37.231829 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcc9dc2f-7d88-440d-9b2f-1ad38690cddc-catalog-content\") pod \"redhat-marketplace-7spnj\" (UID: \"dcc9dc2f-7d88-440d-9b2f-1ad38690cddc\") " pod="openshift-marketplace/redhat-marketplace-7spnj" Feb 19 14:19:37 crc kubenswrapper[4861]: I0219 14:19:37.232352 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcc9dc2f-7d88-440d-9b2f-1ad38690cddc-utilities\") pod \"redhat-marketplace-7spnj\" (UID: \"dcc9dc2f-7d88-440d-9b2f-1ad38690cddc\") " pod="openshift-marketplace/redhat-marketplace-7spnj" Feb 19 14:19:37 crc kubenswrapper[4861]: I0219 14:19:37.232654 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcc9dc2f-7d88-440d-9b2f-1ad38690cddc-catalog-content\") pod \"redhat-marketplace-7spnj\" (UID: \"dcc9dc2f-7d88-440d-9b2f-1ad38690cddc\") " pod="openshift-marketplace/redhat-marketplace-7spnj" Feb 19 14:19:37 crc kubenswrapper[4861]: I0219 14:19:37.278179 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z55kh\" (UniqueName: \"kubernetes.io/projected/dcc9dc2f-7d88-440d-9b2f-1ad38690cddc-kube-api-access-z55kh\") pod \"redhat-marketplace-7spnj\" (UID: \"dcc9dc2f-7d88-440d-9b2f-1ad38690cddc\") " pod="openshift-marketplace/redhat-marketplace-7spnj" Feb 19 14:19:37 crc kubenswrapper[4861]: I0219 14:19:37.311490 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7spnj" Feb 19 14:19:37 crc kubenswrapper[4861]: I0219 14:19:37.867767 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7spnj"] Feb 19 14:19:38 crc kubenswrapper[4861]: I0219 14:19:38.414873 4861 generic.go:334] "Generic (PLEG): container finished" podID="dcc9dc2f-7d88-440d-9b2f-1ad38690cddc" containerID="d3d37213f7929f0a86cb210c98461c1216812d4e5a135580d34fdb2120350b18" exitCode=0 Feb 19 14:19:38 crc kubenswrapper[4861]: I0219 14:19:38.415224 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7spnj" event={"ID":"dcc9dc2f-7d88-440d-9b2f-1ad38690cddc","Type":"ContainerDied","Data":"d3d37213f7929f0a86cb210c98461c1216812d4e5a135580d34fdb2120350b18"} Feb 19 14:19:38 crc kubenswrapper[4861]: I0219 14:19:38.415263 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7spnj" event={"ID":"dcc9dc2f-7d88-440d-9b2f-1ad38690cddc","Type":"ContainerStarted","Data":"774a12ec91e022bb69d4c0e8ccb58b6a55fcb929b46850c8ef62c7b91e540a62"} Feb 19 14:19:38 crc kubenswrapper[4861]: I0219 14:19:38.417710 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 14:19:40 crc kubenswrapper[4861]: I0219 14:19:40.434673 4861 generic.go:334] "Generic (PLEG): container finished" podID="dcc9dc2f-7d88-440d-9b2f-1ad38690cddc" containerID="277078d9bdf788a08d56314e60fa3a54739720a5d22672db7403e4678a582df5" exitCode=0 Feb 19 14:19:40 crc kubenswrapper[4861]: I0219 14:19:40.434802 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7spnj" event={"ID":"dcc9dc2f-7d88-440d-9b2f-1ad38690cddc","Type":"ContainerDied","Data":"277078d9bdf788a08d56314e60fa3a54739720a5d22672db7403e4678a582df5"} Feb 19 14:19:41 crc kubenswrapper[4861]: I0219 14:19:41.447892 4861 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-7spnj" event={"ID":"dcc9dc2f-7d88-440d-9b2f-1ad38690cddc","Type":"ContainerStarted","Data":"945403a07b1dacf82db9809b8de8c337def00eb97505b16546a142e4a08db75c"} Feb 19 14:19:41 crc kubenswrapper[4861]: I0219 14:19:41.481255 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7spnj" podStartSLOduration=3.05105361 podStartE2EDuration="5.481234547s" podCreationTimestamp="2026-02-19 14:19:36 +0000 UTC" firstStartedPulling="2026-02-19 14:19:38.417305612 +0000 UTC m=+4193.078408880" lastFinishedPulling="2026-02-19 14:19:40.847486579 +0000 UTC m=+4195.508589817" observedRunningTime="2026-02-19 14:19:41.478729179 +0000 UTC m=+4196.139832437" watchObservedRunningTime="2026-02-19 14:19:41.481234547 +0000 UTC m=+4196.142337805" Feb 19 14:19:47 crc kubenswrapper[4861]: I0219 14:19:47.311950 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7spnj" Feb 19 14:19:47 crc kubenswrapper[4861]: I0219 14:19:47.312321 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7spnj" Feb 19 14:19:47 crc kubenswrapper[4861]: I0219 14:19:47.388546 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7spnj" Feb 19 14:19:47 crc kubenswrapper[4861]: I0219 14:19:47.554343 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7spnj" Feb 19 14:19:47 crc kubenswrapper[4861]: I0219 14:19:47.635190 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7spnj"] Feb 19 14:19:49 crc kubenswrapper[4861]: I0219 14:19:49.530349 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7spnj" 
podUID="dcc9dc2f-7d88-440d-9b2f-1ad38690cddc" containerName="registry-server" containerID="cri-o://945403a07b1dacf82db9809b8de8c337def00eb97505b16546a142e4a08db75c" gracePeriod=2 Feb 19 14:19:50 crc kubenswrapper[4861]: I0219 14:19:50.046102 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7spnj" Feb 19 14:19:50 crc kubenswrapper[4861]: I0219 14:19:50.238944 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcc9dc2f-7d88-440d-9b2f-1ad38690cddc-utilities\") pod \"dcc9dc2f-7d88-440d-9b2f-1ad38690cddc\" (UID: \"dcc9dc2f-7d88-440d-9b2f-1ad38690cddc\") " Feb 19 14:19:50 crc kubenswrapper[4861]: I0219 14:19:50.239103 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcc9dc2f-7d88-440d-9b2f-1ad38690cddc-catalog-content\") pod \"dcc9dc2f-7d88-440d-9b2f-1ad38690cddc\" (UID: \"dcc9dc2f-7d88-440d-9b2f-1ad38690cddc\") " Feb 19 14:19:50 crc kubenswrapper[4861]: I0219 14:19:50.239161 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z55kh\" (UniqueName: \"kubernetes.io/projected/dcc9dc2f-7d88-440d-9b2f-1ad38690cddc-kube-api-access-z55kh\") pod \"dcc9dc2f-7d88-440d-9b2f-1ad38690cddc\" (UID: \"dcc9dc2f-7d88-440d-9b2f-1ad38690cddc\") " Feb 19 14:19:50 crc kubenswrapper[4861]: I0219 14:19:50.240583 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcc9dc2f-7d88-440d-9b2f-1ad38690cddc-utilities" (OuterVolumeSpecName: "utilities") pod "dcc9dc2f-7d88-440d-9b2f-1ad38690cddc" (UID: "dcc9dc2f-7d88-440d-9b2f-1ad38690cddc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:19:50 crc kubenswrapper[4861]: I0219 14:19:50.247314 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcc9dc2f-7d88-440d-9b2f-1ad38690cddc-kube-api-access-z55kh" (OuterVolumeSpecName: "kube-api-access-z55kh") pod "dcc9dc2f-7d88-440d-9b2f-1ad38690cddc" (UID: "dcc9dc2f-7d88-440d-9b2f-1ad38690cddc"). InnerVolumeSpecName "kube-api-access-z55kh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:19:50 crc kubenswrapper[4861]: I0219 14:19:50.291231 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcc9dc2f-7d88-440d-9b2f-1ad38690cddc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dcc9dc2f-7d88-440d-9b2f-1ad38690cddc" (UID: "dcc9dc2f-7d88-440d-9b2f-1ad38690cddc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:19:50 crc kubenswrapper[4861]: I0219 14:19:50.340908 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z55kh\" (UniqueName: \"kubernetes.io/projected/dcc9dc2f-7d88-440d-9b2f-1ad38690cddc-kube-api-access-z55kh\") on node \"crc\" DevicePath \"\"" Feb 19 14:19:50 crc kubenswrapper[4861]: I0219 14:19:50.340960 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcc9dc2f-7d88-440d-9b2f-1ad38690cddc-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 14:19:50 crc kubenswrapper[4861]: I0219 14:19:50.340982 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcc9dc2f-7d88-440d-9b2f-1ad38690cddc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 14:19:50 crc kubenswrapper[4861]: I0219 14:19:50.544956 4861 generic.go:334] "Generic (PLEG): container finished" podID="dcc9dc2f-7d88-440d-9b2f-1ad38690cddc" 
containerID="945403a07b1dacf82db9809b8de8c337def00eb97505b16546a142e4a08db75c" exitCode=0 Feb 19 14:19:50 crc kubenswrapper[4861]: I0219 14:19:50.545013 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7spnj" event={"ID":"dcc9dc2f-7d88-440d-9b2f-1ad38690cddc","Type":"ContainerDied","Data":"945403a07b1dacf82db9809b8de8c337def00eb97505b16546a142e4a08db75c"} Feb 19 14:19:50 crc kubenswrapper[4861]: I0219 14:19:50.545048 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7spnj" Feb 19 14:19:50 crc kubenswrapper[4861]: I0219 14:19:50.545081 4861 scope.go:117] "RemoveContainer" containerID="945403a07b1dacf82db9809b8de8c337def00eb97505b16546a142e4a08db75c" Feb 19 14:19:50 crc kubenswrapper[4861]: I0219 14:19:50.545063 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7spnj" event={"ID":"dcc9dc2f-7d88-440d-9b2f-1ad38690cddc","Type":"ContainerDied","Data":"774a12ec91e022bb69d4c0e8ccb58b6a55fcb929b46850c8ef62c7b91e540a62"} Feb 19 14:19:50 crc kubenswrapper[4861]: I0219 14:19:50.598723 4861 scope.go:117] "RemoveContainer" containerID="277078d9bdf788a08d56314e60fa3a54739720a5d22672db7403e4678a582df5" Feb 19 14:19:50 crc kubenswrapper[4861]: I0219 14:19:50.613639 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7spnj"] Feb 19 14:19:50 crc kubenswrapper[4861]: I0219 14:19:50.623532 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7spnj"] Feb 19 14:19:51 crc kubenswrapper[4861]: I0219 14:19:51.015720 4861 scope.go:117] "RemoveContainer" containerID="d3d37213f7929f0a86cb210c98461c1216812d4e5a135580d34fdb2120350b18" Feb 19 14:19:51 crc kubenswrapper[4861]: I0219 14:19:51.089732 4861 scope.go:117] "RemoveContainer" containerID="945403a07b1dacf82db9809b8de8c337def00eb97505b16546a142e4a08db75c" Feb 19 
14:19:51 crc kubenswrapper[4861]: E0219 14:19:51.090603 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"945403a07b1dacf82db9809b8de8c337def00eb97505b16546a142e4a08db75c\": container with ID starting with 945403a07b1dacf82db9809b8de8c337def00eb97505b16546a142e4a08db75c not found: ID does not exist" containerID="945403a07b1dacf82db9809b8de8c337def00eb97505b16546a142e4a08db75c" Feb 19 14:19:51 crc kubenswrapper[4861]: I0219 14:19:51.090678 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"945403a07b1dacf82db9809b8de8c337def00eb97505b16546a142e4a08db75c"} err="failed to get container status \"945403a07b1dacf82db9809b8de8c337def00eb97505b16546a142e4a08db75c\": rpc error: code = NotFound desc = could not find container \"945403a07b1dacf82db9809b8de8c337def00eb97505b16546a142e4a08db75c\": container with ID starting with 945403a07b1dacf82db9809b8de8c337def00eb97505b16546a142e4a08db75c not found: ID does not exist" Feb 19 14:19:51 crc kubenswrapper[4861]: I0219 14:19:51.090724 4861 scope.go:117] "RemoveContainer" containerID="277078d9bdf788a08d56314e60fa3a54739720a5d22672db7403e4678a582df5" Feb 19 14:19:51 crc kubenswrapper[4861]: E0219 14:19:51.091240 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"277078d9bdf788a08d56314e60fa3a54739720a5d22672db7403e4678a582df5\": container with ID starting with 277078d9bdf788a08d56314e60fa3a54739720a5d22672db7403e4678a582df5 not found: ID does not exist" containerID="277078d9bdf788a08d56314e60fa3a54739720a5d22672db7403e4678a582df5" Feb 19 14:19:51 crc kubenswrapper[4861]: I0219 14:19:51.091307 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"277078d9bdf788a08d56314e60fa3a54739720a5d22672db7403e4678a582df5"} err="failed to get container status 
\"277078d9bdf788a08d56314e60fa3a54739720a5d22672db7403e4678a582df5\": rpc error: code = NotFound desc = could not find container \"277078d9bdf788a08d56314e60fa3a54739720a5d22672db7403e4678a582df5\": container with ID starting with 277078d9bdf788a08d56314e60fa3a54739720a5d22672db7403e4678a582df5 not found: ID does not exist" Feb 19 14:19:51 crc kubenswrapper[4861]: I0219 14:19:51.091353 4861 scope.go:117] "RemoveContainer" containerID="d3d37213f7929f0a86cb210c98461c1216812d4e5a135580d34fdb2120350b18" Feb 19 14:19:51 crc kubenswrapper[4861]: E0219 14:19:51.091876 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3d37213f7929f0a86cb210c98461c1216812d4e5a135580d34fdb2120350b18\": container with ID starting with d3d37213f7929f0a86cb210c98461c1216812d4e5a135580d34fdb2120350b18 not found: ID does not exist" containerID="d3d37213f7929f0a86cb210c98461c1216812d4e5a135580d34fdb2120350b18" Feb 19 14:19:51 crc kubenswrapper[4861]: I0219 14:19:51.091915 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3d37213f7929f0a86cb210c98461c1216812d4e5a135580d34fdb2120350b18"} err="failed to get container status \"d3d37213f7929f0a86cb210c98461c1216812d4e5a135580d34fdb2120350b18\": rpc error: code = NotFound desc = could not find container \"d3d37213f7929f0a86cb210c98461c1216812d4e5a135580d34fdb2120350b18\": container with ID starting with d3d37213f7929f0a86cb210c98461c1216812d4e5a135580d34fdb2120350b18 not found: ID does not exist" Feb 19 14:19:51 crc kubenswrapper[4861]: I0219 14:19:51.989605 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcc9dc2f-7d88-440d-9b2f-1ad38690cddc" path="/var/lib/kubelet/pods/dcc9dc2f-7d88-440d-9b2f-1ad38690cddc/volumes" Feb 19 14:21:33 crc kubenswrapper[4861]: I0219 14:21:33.839008 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 14:21:33 crc kubenswrapper[4861]: I0219 14:21:33.840061 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 14:22:03 crc kubenswrapper[4861]: I0219 14:22:03.834493 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 14:22:03 crc kubenswrapper[4861]: I0219 14:22:03.835226 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 14:22:20 crc kubenswrapper[4861]: I0219 14:22:20.650757 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8nfvl"] Feb 19 14:22:20 crc kubenswrapper[4861]: E0219 14:22:20.651818 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcc9dc2f-7d88-440d-9b2f-1ad38690cddc" containerName="extract-utilities" Feb 19 14:22:20 crc kubenswrapper[4861]: I0219 14:22:20.651843 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcc9dc2f-7d88-440d-9b2f-1ad38690cddc" containerName="extract-utilities" Feb 19 14:22:20 crc kubenswrapper[4861]: E0219 14:22:20.651866 4861 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="dcc9dc2f-7d88-440d-9b2f-1ad38690cddc" containerName="registry-server" Feb 19 14:22:20 crc kubenswrapper[4861]: I0219 14:22:20.651879 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcc9dc2f-7d88-440d-9b2f-1ad38690cddc" containerName="registry-server" Feb 19 14:22:20 crc kubenswrapper[4861]: E0219 14:22:20.651903 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcc9dc2f-7d88-440d-9b2f-1ad38690cddc" containerName="extract-content" Feb 19 14:22:20 crc kubenswrapper[4861]: I0219 14:22:20.651915 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcc9dc2f-7d88-440d-9b2f-1ad38690cddc" containerName="extract-content" Feb 19 14:22:20 crc kubenswrapper[4861]: I0219 14:22:20.652157 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcc9dc2f-7d88-440d-9b2f-1ad38690cddc" containerName="registry-server" Feb 19 14:22:20 crc kubenswrapper[4861]: I0219 14:22:20.653791 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8nfvl" Feb 19 14:22:20 crc kubenswrapper[4861]: I0219 14:22:20.663344 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8nfvl"] Feb 19 14:22:20 crc kubenswrapper[4861]: I0219 14:22:20.708805 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc5jw\" (UniqueName: \"kubernetes.io/projected/6b154685-46b1-4748-a1b1-a49fbabaa59e-kube-api-access-cc5jw\") pod \"community-operators-8nfvl\" (UID: \"6b154685-46b1-4748-a1b1-a49fbabaa59e\") " pod="openshift-marketplace/community-operators-8nfvl" Feb 19 14:22:20 crc kubenswrapper[4861]: I0219 14:22:20.709026 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b154685-46b1-4748-a1b1-a49fbabaa59e-catalog-content\") pod \"community-operators-8nfvl\" (UID: \"6b154685-46b1-4748-a1b1-a49fbabaa59e\") " pod="openshift-marketplace/community-operators-8nfvl" Feb 19 14:22:20 crc kubenswrapper[4861]: I0219 14:22:20.709165 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b154685-46b1-4748-a1b1-a49fbabaa59e-utilities\") pod \"community-operators-8nfvl\" (UID: \"6b154685-46b1-4748-a1b1-a49fbabaa59e\") " pod="openshift-marketplace/community-operators-8nfvl" Feb 19 14:22:20 crc kubenswrapper[4861]: I0219 14:22:20.810708 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b154685-46b1-4748-a1b1-a49fbabaa59e-utilities\") pod \"community-operators-8nfvl\" (UID: \"6b154685-46b1-4748-a1b1-a49fbabaa59e\") " pod="openshift-marketplace/community-operators-8nfvl" Feb 19 14:22:20 crc kubenswrapper[4861]: I0219 14:22:20.810804 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cc5jw\" (UniqueName: \"kubernetes.io/projected/6b154685-46b1-4748-a1b1-a49fbabaa59e-kube-api-access-cc5jw\") pod \"community-operators-8nfvl\" (UID: \"6b154685-46b1-4748-a1b1-a49fbabaa59e\") " pod="openshift-marketplace/community-operators-8nfvl" Feb 19 14:22:20 crc kubenswrapper[4861]: I0219 14:22:20.810844 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b154685-46b1-4748-a1b1-a49fbabaa59e-catalog-content\") pod \"community-operators-8nfvl\" (UID: \"6b154685-46b1-4748-a1b1-a49fbabaa59e\") " pod="openshift-marketplace/community-operators-8nfvl" Feb 19 14:22:20 crc kubenswrapper[4861]: I0219 14:22:20.811488 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b154685-46b1-4748-a1b1-a49fbabaa59e-catalog-content\") pod \"community-operators-8nfvl\" (UID: \"6b154685-46b1-4748-a1b1-a49fbabaa59e\") " pod="openshift-marketplace/community-operators-8nfvl" Feb 19 14:22:20 crc kubenswrapper[4861]: I0219 14:22:20.811689 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b154685-46b1-4748-a1b1-a49fbabaa59e-utilities\") pod \"community-operators-8nfvl\" (UID: \"6b154685-46b1-4748-a1b1-a49fbabaa59e\") " pod="openshift-marketplace/community-operators-8nfvl" Feb 19 14:22:20 crc kubenswrapper[4861]: I0219 14:22:20.838779 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc5jw\" (UniqueName: \"kubernetes.io/projected/6b154685-46b1-4748-a1b1-a49fbabaa59e-kube-api-access-cc5jw\") pod \"community-operators-8nfvl\" (UID: \"6b154685-46b1-4748-a1b1-a49fbabaa59e\") " pod="openshift-marketplace/community-operators-8nfvl" Feb 19 14:22:20 crc kubenswrapper[4861]: I0219 14:22:20.991947 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8nfvl" Feb 19 14:22:21 crc kubenswrapper[4861]: I0219 14:22:21.476062 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8nfvl"] Feb 19 14:22:21 crc kubenswrapper[4861]: W0219 14:22:21.485295 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b154685_46b1_4748_a1b1_a49fbabaa59e.slice/crio-06e3b69eb41414b882150d25216aa3d8e9ad58ed5c557f65db9dcc142289d4e4 WatchSource:0}: Error finding container 06e3b69eb41414b882150d25216aa3d8e9ad58ed5c557f65db9dcc142289d4e4: Status 404 returned error can't find the container with id 06e3b69eb41414b882150d25216aa3d8e9ad58ed5c557f65db9dcc142289d4e4 Feb 19 14:22:21 crc kubenswrapper[4861]: I0219 14:22:21.932345 4861 generic.go:334] "Generic (PLEG): container finished" podID="6b154685-46b1-4748-a1b1-a49fbabaa59e" containerID="650e57cc243a172b2b152083a9912737cadef269676b693ab8d0192f0c50613f" exitCode=0 Feb 19 14:22:21 crc kubenswrapper[4861]: I0219 14:22:21.932454 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8nfvl" event={"ID":"6b154685-46b1-4748-a1b1-a49fbabaa59e","Type":"ContainerDied","Data":"650e57cc243a172b2b152083a9912737cadef269676b693ab8d0192f0c50613f"} Feb 19 14:22:21 crc kubenswrapper[4861]: I0219 14:22:21.932894 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8nfvl" event={"ID":"6b154685-46b1-4748-a1b1-a49fbabaa59e","Type":"ContainerStarted","Data":"06e3b69eb41414b882150d25216aa3d8e9ad58ed5c557f65db9dcc142289d4e4"} Feb 19 14:22:22 crc kubenswrapper[4861]: I0219 14:22:22.945903 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8nfvl" 
event={"ID":"6b154685-46b1-4748-a1b1-a49fbabaa59e","Type":"ContainerStarted","Data":"562a3ae350d4543d4f6d09f4bb10d3e2287ddb7417bc74b36bf855d3dd63854f"} Feb 19 14:22:23 crc kubenswrapper[4861]: I0219 14:22:23.959939 4861 generic.go:334] "Generic (PLEG): container finished" podID="6b154685-46b1-4748-a1b1-a49fbabaa59e" containerID="562a3ae350d4543d4f6d09f4bb10d3e2287ddb7417bc74b36bf855d3dd63854f" exitCode=0 Feb 19 14:22:23 crc kubenswrapper[4861]: I0219 14:22:23.960005 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8nfvl" event={"ID":"6b154685-46b1-4748-a1b1-a49fbabaa59e","Type":"ContainerDied","Data":"562a3ae350d4543d4f6d09f4bb10d3e2287ddb7417bc74b36bf855d3dd63854f"} Feb 19 14:22:24 crc kubenswrapper[4861]: I0219 14:22:24.971938 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8nfvl" event={"ID":"6b154685-46b1-4748-a1b1-a49fbabaa59e","Type":"ContainerStarted","Data":"b9b3bac270ea17c6f3057b4bbe05c2bde37ba5db5ab40a03317db379801a2460"} Feb 19 14:22:25 crc kubenswrapper[4861]: I0219 14:22:25.000583 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8nfvl" podStartSLOduration=2.546183322 podStartE2EDuration="5.000561094s" podCreationTimestamp="2026-02-19 14:22:20 +0000 UTC" firstStartedPulling="2026-02-19 14:22:21.934252344 +0000 UTC m=+4356.595355612" lastFinishedPulling="2026-02-19 14:22:24.388630116 +0000 UTC m=+4359.049733384" observedRunningTime="2026-02-19 14:22:24.999178587 +0000 UTC m=+4359.660281845" watchObservedRunningTime="2026-02-19 14:22:25.000561094 +0000 UTC m=+4359.661664332" Feb 19 14:22:30 crc kubenswrapper[4861]: I0219 14:22:30.992554 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8nfvl" Feb 19 14:22:30 crc kubenswrapper[4861]: I0219 14:22:30.993263 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-8nfvl" Feb 19 14:22:31 crc kubenswrapper[4861]: I0219 14:22:31.095852 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8nfvl" Feb 19 14:22:31 crc kubenswrapper[4861]: I0219 14:22:31.151764 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8nfvl" Feb 19 14:22:31 crc kubenswrapper[4861]: I0219 14:22:31.342146 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8nfvl"] Feb 19 14:22:33 crc kubenswrapper[4861]: I0219 14:22:33.102691 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8nfvl" podUID="6b154685-46b1-4748-a1b1-a49fbabaa59e" containerName="registry-server" containerID="cri-o://b9b3bac270ea17c6f3057b4bbe05c2bde37ba5db5ab40a03317db379801a2460" gracePeriod=2 Feb 19 14:22:33 crc kubenswrapper[4861]: I0219 14:22:33.517988 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8nfvl" Feb 19 14:22:33 crc kubenswrapper[4861]: I0219 14:22:33.630702 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b154685-46b1-4748-a1b1-a49fbabaa59e-catalog-content\") pod \"6b154685-46b1-4748-a1b1-a49fbabaa59e\" (UID: \"6b154685-46b1-4748-a1b1-a49fbabaa59e\") " Feb 19 14:22:33 crc kubenswrapper[4861]: I0219 14:22:33.630971 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc5jw\" (UniqueName: \"kubernetes.io/projected/6b154685-46b1-4748-a1b1-a49fbabaa59e-kube-api-access-cc5jw\") pod \"6b154685-46b1-4748-a1b1-a49fbabaa59e\" (UID: \"6b154685-46b1-4748-a1b1-a49fbabaa59e\") " Feb 19 14:22:33 crc kubenswrapper[4861]: I0219 14:22:33.631076 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b154685-46b1-4748-a1b1-a49fbabaa59e-utilities\") pod \"6b154685-46b1-4748-a1b1-a49fbabaa59e\" (UID: \"6b154685-46b1-4748-a1b1-a49fbabaa59e\") " Feb 19 14:22:33 crc kubenswrapper[4861]: I0219 14:22:33.632588 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b154685-46b1-4748-a1b1-a49fbabaa59e-utilities" (OuterVolumeSpecName: "utilities") pod "6b154685-46b1-4748-a1b1-a49fbabaa59e" (UID: "6b154685-46b1-4748-a1b1-a49fbabaa59e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:22:33 crc kubenswrapper[4861]: I0219 14:22:33.639169 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b154685-46b1-4748-a1b1-a49fbabaa59e-kube-api-access-cc5jw" (OuterVolumeSpecName: "kube-api-access-cc5jw") pod "6b154685-46b1-4748-a1b1-a49fbabaa59e" (UID: "6b154685-46b1-4748-a1b1-a49fbabaa59e"). InnerVolumeSpecName "kube-api-access-cc5jw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:22:33 crc kubenswrapper[4861]: I0219 14:22:33.708870 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b154685-46b1-4748-a1b1-a49fbabaa59e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b154685-46b1-4748-a1b1-a49fbabaa59e" (UID: "6b154685-46b1-4748-a1b1-a49fbabaa59e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:22:33 crc kubenswrapper[4861]: I0219 14:22:33.732344 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b154685-46b1-4748-a1b1-a49fbabaa59e-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 14:22:33 crc kubenswrapper[4861]: I0219 14:22:33.732372 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b154685-46b1-4748-a1b1-a49fbabaa59e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 14:22:33 crc kubenswrapper[4861]: I0219 14:22:33.732382 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc5jw\" (UniqueName: \"kubernetes.io/projected/6b154685-46b1-4748-a1b1-a49fbabaa59e-kube-api-access-cc5jw\") on node \"crc\" DevicePath \"\"" Feb 19 14:22:33 crc kubenswrapper[4861]: I0219 14:22:33.834473 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 14:22:33 crc kubenswrapper[4861]: I0219 14:22:33.834551 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 14:22:33 crc kubenswrapper[4861]: I0219 14:22:33.834610 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 14:22:33 crc kubenswrapper[4861]: I0219 14:22:33.835302 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b3192bd7c3c9bca9390d78d1b46500dbfdd985426c314e19d884c49319fcd666"} pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 14:22:33 crc kubenswrapper[4861]: I0219 14:22:33.835367 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" containerID="cri-o://b3192bd7c3c9bca9390d78d1b46500dbfdd985426c314e19d884c49319fcd666" gracePeriod=600 Feb 19 14:22:34 crc kubenswrapper[4861]: I0219 14:22:34.115316 4861 generic.go:334] "Generic (PLEG): container finished" podID="6b154685-46b1-4748-a1b1-a49fbabaa59e" containerID="b9b3bac270ea17c6f3057b4bbe05c2bde37ba5db5ab40a03317db379801a2460" exitCode=0 Feb 19 14:22:34 crc kubenswrapper[4861]: I0219 14:22:34.115400 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8nfvl" event={"ID":"6b154685-46b1-4748-a1b1-a49fbabaa59e","Type":"ContainerDied","Data":"b9b3bac270ea17c6f3057b4bbe05c2bde37ba5db5ab40a03317db379801a2460"} Feb 19 14:22:34 crc kubenswrapper[4861]: I0219 14:22:34.115891 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8nfvl" 
event={"ID":"6b154685-46b1-4748-a1b1-a49fbabaa59e","Type":"ContainerDied","Data":"06e3b69eb41414b882150d25216aa3d8e9ad58ed5c557f65db9dcc142289d4e4"}
Feb 19 14:22:34 crc kubenswrapper[4861]: I0219 14:22:34.115493 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8nfvl"
Feb 19 14:22:34 crc kubenswrapper[4861]: I0219 14:22:34.115943 4861 scope.go:117] "RemoveContainer" containerID="b9b3bac270ea17c6f3057b4bbe05c2bde37ba5db5ab40a03317db379801a2460"
Feb 19 14:22:34 crc kubenswrapper[4861]: I0219 14:22:34.120232 4861 generic.go:334] "Generic (PLEG): container finished" podID="478e6971-05ac-43f2-99a2-cd93644c6227" containerID="b3192bd7c3c9bca9390d78d1b46500dbfdd985426c314e19d884c49319fcd666" exitCode=0
Feb 19 14:22:34 crc kubenswrapper[4861]: I0219 14:22:34.120291 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerDied","Data":"b3192bd7c3c9bca9390d78d1b46500dbfdd985426c314e19d884c49319fcd666"}
Feb 19 14:22:34 crc kubenswrapper[4861]: I0219 14:22:34.149179 4861 scope.go:117] "RemoveContainer" containerID="562a3ae350d4543d4f6d09f4bb10d3e2287ddb7417bc74b36bf855d3dd63854f"
Feb 19 14:22:34 crc kubenswrapper[4861]: I0219 14:22:34.154225 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8nfvl"]
Feb 19 14:22:34 crc kubenswrapper[4861]: I0219 14:22:34.166056 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8nfvl"]
Feb 19 14:22:34 crc kubenswrapper[4861]: I0219 14:22:34.175438 4861 scope.go:117] "RemoveContainer" containerID="650e57cc243a172b2b152083a9912737cadef269676b693ab8d0192f0c50613f"
Feb 19 14:22:34 crc kubenswrapper[4861]: I0219 14:22:34.191816 4861 scope.go:117] "RemoveContainer" containerID="b9b3bac270ea17c6f3057b4bbe05c2bde37ba5db5ab40a03317db379801a2460"
Feb 19 14:22:34 crc kubenswrapper[4861]: E0219 14:22:34.192258 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9b3bac270ea17c6f3057b4bbe05c2bde37ba5db5ab40a03317db379801a2460\": container with ID starting with b9b3bac270ea17c6f3057b4bbe05c2bde37ba5db5ab40a03317db379801a2460 not found: ID does not exist" containerID="b9b3bac270ea17c6f3057b4bbe05c2bde37ba5db5ab40a03317db379801a2460"
Feb 19 14:22:34 crc kubenswrapper[4861]: I0219 14:22:34.192314 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9b3bac270ea17c6f3057b4bbe05c2bde37ba5db5ab40a03317db379801a2460"} err="failed to get container status \"b9b3bac270ea17c6f3057b4bbe05c2bde37ba5db5ab40a03317db379801a2460\": rpc error: code = NotFound desc = could not find container \"b9b3bac270ea17c6f3057b4bbe05c2bde37ba5db5ab40a03317db379801a2460\": container with ID starting with b9b3bac270ea17c6f3057b4bbe05c2bde37ba5db5ab40a03317db379801a2460 not found: ID does not exist"
Feb 19 14:22:34 crc kubenswrapper[4861]: I0219 14:22:34.192349 4861 scope.go:117] "RemoveContainer" containerID="562a3ae350d4543d4f6d09f4bb10d3e2287ddb7417bc74b36bf855d3dd63854f"
Feb 19 14:22:34 crc kubenswrapper[4861]: E0219 14:22:34.192676 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"562a3ae350d4543d4f6d09f4bb10d3e2287ddb7417bc74b36bf855d3dd63854f\": container with ID starting with 562a3ae350d4543d4f6d09f4bb10d3e2287ddb7417bc74b36bf855d3dd63854f not found: ID does not exist" containerID="562a3ae350d4543d4f6d09f4bb10d3e2287ddb7417bc74b36bf855d3dd63854f"
Feb 19 14:22:34 crc kubenswrapper[4861]: I0219 14:22:34.192718 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"562a3ae350d4543d4f6d09f4bb10d3e2287ddb7417bc74b36bf855d3dd63854f"} err="failed to get container status \"562a3ae350d4543d4f6d09f4bb10d3e2287ddb7417bc74b36bf855d3dd63854f\": rpc error: code = NotFound desc = could not find container \"562a3ae350d4543d4f6d09f4bb10d3e2287ddb7417bc74b36bf855d3dd63854f\": container with ID starting with 562a3ae350d4543d4f6d09f4bb10d3e2287ddb7417bc74b36bf855d3dd63854f not found: ID does not exist"
Feb 19 14:22:34 crc kubenswrapper[4861]: I0219 14:22:34.192748 4861 scope.go:117] "RemoveContainer" containerID="650e57cc243a172b2b152083a9912737cadef269676b693ab8d0192f0c50613f"
Feb 19 14:22:34 crc kubenswrapper[4861]: E0219 14:22:34.193137 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"650e57cc243a172b2b152083a9912737cadef269676b693ab8d0192f0c50613f\": container with ID starting with 650e57cc243a172b2b152083a9912737cadef269676b693ab8d0192f0c50613f not found: ID does not exist" containerID="650e57cc243a172b2b152083a9912737cadef269676b693ab8d0192f0c50613f"
Feb 19 14:22:34 crc kubenswrapper[4861]: I0219 14:22:34.193196 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"650e57cc243a172b2b152083a9912737cadef269676b693ab8d0192f0c50613f"} err="failed to get container status \"650e57cc243a172b2b152083a9912737cadef269676b693ab8d0192f0c50613f\": rpc error: code = NotFound desc = could not find container \"650e57cc243a172b2b152083a9912737cadef269676b693ab8d0192f0c50613f\": container with ID starting with 650e57cc243a172b2b152083a9912737cadef269676b693ab8d0192f0c50613f not found: ID does not exist"
Feb 19 14:22:34 crc kubenswrapper[4861]: I0219 14:22:34.193215 4861 scope.go:117] "RemoveContainer" containerID="5e60c96bce15e6cbba95a6ae6d61098e78959c344dddfef7ece4398885f4cb67"
Feb 19 14:22:35 crc kubenswrapper[4861]: I0219 14:22:35.136608 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"ea1fb9dda6615e7bd0de449175c0c11a8d4523b9efb63c9fbf83b974def3474f"}
Feb 19 14:22:35 crc kubenswrapper[4861]: I0219 14:22:35.998852 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b154685-46b1-4748-a1b1-a49fbabaa59e" path="/var/lib/kubelet/pods/6b154685-46b1-4748-a1b1-a49fbabaa59e/volumes"
Feb 19 14:22:58 crc kubenswrapper[4861]: I0219 14:22:58.936231 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ds8km"]
Feb 19 14:22:58 crc kubenswrapper[4861]: E0219 14:22:58.937273 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b154685-46b1-4748-a1b1-a49fbabaa59e" containerName="registry-server"
Feb 19 14:22:58 crc kubenswrapper[4861]: I0219 14:22:58.937289 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b154685-46b1-4748-a1b1-a49fbabaa59e" containerName="registry-server"
Feb 19 14:22:58 crc kubenswrapper[4861]: E0219 14:22:58.937322 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b154685-46b1-4748-a1b1-a49fbabaa59e" containerName="extract-content"
Feb 19 14:22:58 crc kubenswrapper[4861]: I0219 14:22:58.937332 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b154685-46b1-4748-a1b1-a49fbabaa59e" containerName="extract-content"
Feb 19 14:22:58 crc kubenswrapper[4861]: E0219 14:22:58.937359 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b154685-46b1-4748-a1b1-a49fbabaa59e" containerName="extract-utilities"
Feb 19 14:22:58 crc kubenswrapper[4861]: I0219 14:22:58.937368 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b154685-46b1-4748-a1b1-a49fbabaa59e" containerName="extract-utilities"
Feb 19 14:22:58 crc kubenswrapper[4861]: I0219 14:22:58.937808 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b154685-46b1-4748-a1b1-a49fbabaa59e" containerName="registry-server"
Feb 19 14:22:58 crc kubenswrapper[4861]: I0219 14:22:58.940453 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ds8km"
Feb 19 14:22:58 crc kubenswrapper[4861]: I0219 14:22:58.961949 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ds8km"]
Feb 19 14:22:59 crc kubenswrapper[4861]: I0219 14:22:59.085973 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b-catalog-content\") pod \"certified-operators-ds8km\" (UID: \"ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b\") " pod="openshift-marketplace/certified-operators-ds8km"
Feb 19 14:22:59 crc kubenswrapper[4861]: I0219 14:22:59.086315 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b-utilities\") pod \"certified-operators-ds8km\" (UID: \"ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b\") " pod="openshift-marketplace/certified-operators-ds8km"
Feb 19 14:22:59 crc kubenswrapper[4861]: I0219 14:22:59.086721 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57q48\" (UniqueName: \"kubernetes.io/projected/ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b-kube-api-access-57q48\") pod \"certified-operators-ds8km\" (UID: \"ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b\") " pod="openshift-marketplace/certified-operators-ds8km"
Feb 19 14:22:59 crc kubenswrapper[4861]: I0219 14:22:59.189337 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b-utilities\") pod \"certified-operators-ds8km\" (UID: \"ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b\") " pod="openshift-marketplace/certified-operators-ds8km"
Feb 19 14:22:59 crc kubenswrapper[4861]: I0219 14:22:59.189484 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b-utilities\") pod \"certified-operators-ds8km\" (UID: \"ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b\") " pod="openshift-marketplace/certified-operators-ds8km"
Feb 19 14:22:59 crc kubenswrapper[4861]: I0219 14:22:59.189681 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57q48\" (UniqueName: \"kubernetes.io/projected/ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b-kube-api-access-57q48\") pod \"certified-operators-ds8km\" (UID: \"ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b\") " pod="openshift-marketplace/certified-operators-ds8km"
Feb 19 14:22:59 crc kubenswrapper[4861]: I0219 14:22:59.190386 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b-catalog-content\") pod \"certified-operators-ds8km\" (UID: \"ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b\") " pod="openshift-marketplace/certified-operators-ds8km"
Feb 19 14:22:59 crc kubenswrapper[4861]: I0219 14:22:59.190992 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b-catalog-content\") pod \"certified-operators-ds8km\" (UID: \"ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b\") " pod="openshift-marketplace/certified-operators-ds8km"
Feb 19 14:22:59 crc kubenswrapper[4861]: I0219 14:22:59.217770 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57q48\" (UniqueName: \"kubernetes.io/projected/ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b-kube-api-access-57q48\") pod \"certified-operators-ds8km\" (UID: \"ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b\") " pod="openshift-marketplace/certified-operators-ds8km"
Feb 19 14:22:59 crc kubenswrapper[4861]: I0219 14:22:59.294829 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ds8km"
Feb 19 14:22:59 crc kubenswrapper[4861]: I0219 14:22:59.796881 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ds8km"]
Feb 19 14:23:00 crc kubenswrapper[4861]: I0219 14:23:00.381215 4861 generic.go:334] "Generic (PLEG): container finished" podID="ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b" containerID="5823f662a634d5a811e976252db1787d6feac9fb3eb172dc5dbaf469ebd0771c" exitCode=0
Feb 19 14:23:00 crc kubenswrapper[4861]: I0219 14:23:00.381299 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ds8km" event={"ID":"ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b","Type":"ContainerDied","Data":"5823f662a634d5a811e976252db1787d6feac9fb3eb172dc5dbaf469ebd0771c"}
Feb 19 14:23:00 crc kubenswrapper[4861]: I0219 14:23:00.381347 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ds8km" event={"ID":"ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b","Type":"ContainerStarted","Data":"8d6bb394fb6da582a6424b89274219b35759f8e594d735d909f739228c19763a"}
Feb 19 14:23:02 crc kubenswrapper[4861]: I0219 14:23:02.399666 4861 generic.go:334] "Generic (PLEG): container finished" podID="ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b" containerID="84086a27bbb3375de97893a5bc3d58160e4341a04d259caa7246688f7f3cf97f" exitCode=0
Feb 19 14:23:02 crc kubenswrapper[4861]: I0219 14:23:02.399778 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ds8km" event={"ID":"ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b","Type":"ContainerDied","Data":"84086a27bbb3375de97893a5bc3d58160e4341a04d259caa7246688f7f3cf97f"}
Feb 19 14:23:03 crc kubenswrapper[4861]: I0219 14:23:03.413956 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ds8km" event={"ID":"ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b","Type":"ContainerStarted","Data":"2facd1da23a530ebe8df9981e2c9281332159ebcf6673be7ebdec7db22ee9743"}
Feb 19 14:23:03 crc kubenswrapper[4861]: I0219 14:23:03.439662 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ds8km" podStartSLOduration=3.038429727 podStartE2EDuration="5.439644373s" podCreationTimestamp="2026-02-19 14:22:58 +0000 UTC" firstStartedPulling="2026-02-19 14:23:00.383393264 +0000 UTC m=+4395.044496522" lastFinishedPulling="2026-02-19 14:23:02.78460793 +0000 UTC m=+4397.445711168" observedRunningTime="2026-02-19 14:23:03.435025299 +0000 UTC m=+4398.096128587" watchObservedRunningTime="2026-02-19 14:23:03.439644373 +0000 UTC m=+4398.100747611"
Feb 19 14:23:09 crc kubenswrapper[4861]: I0219 14:23:09.295817 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ds8km"
Feb 19 14:23:09 crc kubenswrapper[4861]: I0219 14:23:09.296511 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ds8km"
Feb 19 14:23:09 crc kubenswrapper[4861]: I0219 14:23:09.367846 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ds8km"
Feb 19 14:23:09 crc kubenswrapper[4861]: I0219 14:23:09.531367 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ds8km"
Feb 19 14:23:12 crc kubenswrapper[4861]: I0219 14:23:12.900139 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ds8km"]
Feb 19 14:23:12 crc kubenswrapper[4861]: I0219 14:23:12.900845 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ds8km" podUID="ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b" containerName="registry-server" containerID="cri-o://2facd1da23a530ebe8df9981e2c9281332159ebcf6673be7ebdec7db22ee9743" gracePeriod=2
Feb 19 14:23:13 crc kubenswrapper[4861]: I0219 14:23:13.400936 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ds8km"
Feb 19 14:23:13 crc kubenswrapper[4861]: I0219 14:23:13.504321 4861 generic.go:334] "Generic (PLEG): container finished" podID="ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b" containerID="2facd1da23a530ebe8df9981e2c9281332159ebcf6673be7ebdec7db22ee9743" exitCode=0
Feb 19 14:23:13 crc kubenswrapper[4861]: I0219 14:23:13.504364 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ds8km" event={"ID":"ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b","Type":"ContainerDied","Data":"2facd1da23a530ebe8df9981e2c9281332159ebcf6673be7ebdec7db22ee9743"}
Feb 19 14:23:13 crc kubenswrapper[4861]: I0219 14:23:13.504398 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ds8km" event={"ID":"ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b","Type":"ContainerDied","Data":"8d6bb394fb6da582a6424b89274219b35759f8e594d735d909f739228c19763a"}
Feb 19 14:23:13 crc kubenswrapper[4861]: I0219 14:23:13.504431 4861 scope.go:117] "RemoveContainer" containerID="2facd1da23a530ebe8df9981e2c9281332159ebcf6673be7ebdec7db22ee9743"
Feb 19 14:23:13 crc kubenswrapper[4861]: I0219 14:23:13.504413 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ds8km"
Feb 19 14:23:13 crc kubenswrapper[4861]: I0219 14:23:13.514265 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57q48\" (UniqueName: \"kubernetes.io/projected/ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b-kube-api-access-57q48\") pod \"ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b\" (UID: \"ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b\") "
Feb 19 14:23:13 crc kubenswrapper[4861]: I0219 14:23:13.514445 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b-catalog-content\") pod \"ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b\" (UID: \"ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b\") "
Feb 19 14:23:13 crc kubenswrapper[4861]: I0219 14:23:13.514536 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b-utilities\") pod \"ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b\" (UID: \"ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b\") "
Feb 19 14:23:13 crc kubenswrapper[4861]: I0219 14:23:13.515148 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b-utilities" (OuterVolumeSpecName: "utilities") pod "ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b" (UID: "ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 14:23:13 crc kubenswrapper[4861]: I0219 14:23:13.520811 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b-kube-api-access-57q48" (OuterVolumeSpecName: "kube-api-access-57q48") pod "ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b" (UID: "ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b"). InnerVolumeSpecName "kube-api-access-57q48". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 14:23:13 crc kubenswrapper[4861]: I0219 14:23:13.525157 4861 scope.go:117] "RemoveContainer" containerID="84086a27bbb3375de97893a5bc3d58160e4341a04d259caa7246688f7f3cf97f"
Feb 19 14:23:13 crc kubenswrapper[4861]: I0219 14:23:13.575501 4861 scope.go:117] "RemoveContainer" containerID="5823f662a634d5a811e976252db1787d6feac9fb3eb172dc5dbaf469ebd0771c"
Feb 19 14:23:13 crc kubenswrapper[4861]: I0219 14:23:13.576832 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b" (UID: "ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 14:23:13 crc kubenswrapper[4861]: I0219 14:23:13.596555 4861 scope.go:117] "RemoveContainer" containerID="2facd1da23a530ebe8df9981e2c9281332159ebcf6673be7ebdec7db22ee9743"
Feb 19 14:23:13 crc kubenswrapper[4861]: E0219 14:23:13.596973 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2facd1da23a530ebe8df9981e2c9281332159ebcf6673be7ebdec7db22ee9743\": container with ID starting with 2facd1da23a530ebe8df9981e2c9281332159ebcf6673be7ebdec7db22ee9743 not found: ID does not exist" containerID="2facd1da23a530ebe8df9981e2c9281332159ebcf6673be7ebdec7db22ee9743"
Feb 19 14:23:13 crc kubenswrapper[4861]: I0219 14:23:13.597006 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2facd1da23a530ebe8df9981e2c9281332159ebcf6673be7ebdec7db22ee9743"} err="failed to get container status \"2facd1da23a530ebe8df9981e2c9281332159ebcf6673be7ebdec7db22ee9743\": rpc error: code = NotFound desc = could not find container \"2facd1da23a530ebe8df9981e2c9281332159ebcf6673be7ebdec7db22ee9743\": container with ID starting with 2facd1da23a530ebe8df9981e2c9281332159ebcf6673be7ebdec7db22ee9743 not found: ID does not exist"
Feb 19 14:23:13 crc kubenswrapper[4861]: I0219 14:23:13.597028 4861 scope.go:117] "RemoveContainer" containerID="84086a27bbb3375de97893a5bc3d58160e4341a04d259caa7246688f7f3cf97f"
Feb 19 14:23:13 crc kubenswrapper[4861]: E0219 14:23:13.597319 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84086a27bbb3375de97893a5bc3d58160e4341a04d259caa7246688f7f3cf97f\": container with ID starting with 84086a27bbb3375de97893a5bc3d58160e4341a04d259caa7246688f7f3cf97f not found: ID does not exist" containerID="84086a27bbb3375de97893a5bc3d58160e4341a04d259caa7246688f7f3cf97f"
Feb 19 14:23:13 crc kubenswrapper[4861]: I0219 14:23:13.597338 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84086a27bbb3375de97893a5bc3d58160e4341a04d259caa7246688f7f3cf97f"} err="failed to get container status \"84086a27bbb3375de97893a5bc3d58160e4341a04d259caa7246688f7f3cf97f\": rpc error: code = NotFound desc = could not find container \"84086a27bbb3375de97893a5bc3d58160e4341a04d259caa7246688f7f3cf97f\": container with ID starting with 84086a27bbb3375de97893a5bc3d58160e4341a04d259caa7246688f7f3cf97f not found: ID does not exist"
Feb 19 14:23:13 crc kubenswrapper[4861]: I0219 14:23:13.597351 4861 scope.go:117] "RemoveContainer" containerID="5823f662a634d5a811e976252db1787d6feac9fb3eb172dc5dbaf469ebd0771c"
Feb 19 14:23:13 crc kubenswrapper[4861]: E0219 14:23:13.597709 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5823f662a634d5a811e976252db1787d6feac9fb3eb172dc5dbaf469ebd0771c\": container with ID starting with 5823f662a634d5a811e976252db1787d6feac9fb3eb172dc5dbaf469ebd0771c not found: ID does not exist" containerID="5823f662a634d5a811e976252db1787d6feac9fb3eb172dc5dbaf469ebd0771c"
Feb 19 14:23:13 crc kubenswrapper[4861]: I0219 14:23:13.597729 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5823f662a634d5a811e976252db1787d6feac9fb3eb172dc5dbaf469ebd0771c"} err="failed to get container status \"5823f662a634d5a811e976252db1787d6feac9fb3eb172dc5dbaf469ebd0771c\": rpc error: code = NotFound desc = could not find container \"5823f662a634d5a811e976252db1787d6feac9fb3eb172dc5dbaf469ebd0771c\": container with ID starting with 5823f662a634d5a811e976252db1787d6feac9fb3eb172dc5dbaf469ebd0771c not found: ID does not exist"
Feb 19 14:23:13 crc kubenswrapper[4861]: I0219 14:23:13.615518 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 14:23:13 crc kubenswrapper[4861]: I0219 14:23:13.615542 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57q48\" (UniqueName: \"kubernetes.io/projected/ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b-kube-api-access-57q48\") on node \"crc\" DevicePath \"\""
Feb 19 14:23:13 crc kubenswrapper[4861]: I0219 14:23:13.615551 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 14:23:13 crc kubenswrapper[4861]: I0219 14:23:13.855275 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ds8km"]
Feb 19 14:23:13 crc kubenswrapper[4861]: I0219 14:23:13.874515 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ds8km"]
Feb 19 14:23:13 crc kubenswrapper[4861]: I0219 14:23:13.992363 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b" path="/var/lib/kubelet/pods/ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b/volumes"
Feb 19 14:25:03 crc kubenswrapper[4861]: I0219 14:25:03.834559 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 14:25:03 crc kubenswrapper[4861]: I0219 14:25:03.835161 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 14:25:33 crc kubenswrapper[4861]: I0219 14:25:33.834639 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 14:25:33 crc kubenswrapper[4861]: I0219 14:25:33.835394 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 14:26:03 crc kubenswrapper[4861]: I0219 14:26:03.835232 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 14:26:03 crc kubenswrapper[4861]: I0219 14:26:03.836282 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 14:26:03 crc kubenswrapper[4861]: I0219 14:26:03.836355 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq"
Feb 19 14:26:03 crc kubenswrapper[4861]: I0219 14:26:03.837372 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ea1fb9dda6615e7bd0de449175c0c11a8d4523b9efb63c9fbf83b974def3474f"} pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 14:26:03 crc kubenswrapper[4861]: I0219 14:26:03.837497 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" containerID="cri-o://ea1fb9dda6615e7bd0de449175c0c11a8d4523b9efb63c9fbf83b974def3474f" gracePeriod=600
Feb 19 14:26:03 crc kubenswrapper[4861]: E0219 14:26:03.970462 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227"
Feb 19 14:26:04 crc kubenswrapper[4861]: I0219 14:26:04.061859 4861 generic.go:334] "Generic (PLEG): container finished" podID="478e6971-05ac-43f2-99a2-cd93644c6227" containerID="ea1fb9dda6615e7bd0de449175c0c11a8d4523b9efb63c9fbf83b974def3474f" exitCode=0
Feb 19 14:26:04 crc kubenswrapper[4861]: I0219 14:26:04.061911 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerDied","Data":"ea1fb9dda6615e7bd0de449175c0c11a8d4523b9efb63c9fbf83b974def3474f"}
Feb 19 14:26:04 crc kubenswrapper[4861]: I0219 14:26:04.061956 4861 scope.go:117] "RemoveContainer" containerID="b3192bd7c3c9bca9390d78d1b46500dbfdd985426c314e19d884c49319fcd666"
Feb 19 14:26:04 crc kubenswrapper[4861]: I0219 14:26:04.062808 4861 scope.go:117] "RemoveContainer" containerID="ea1fb9dda6615e7bd0de449175c0c11a8d4523b9efb63c9fbf83b974def3474f"
Feb 19 14:26:04 crc kubenswrapper[4861]: E0219 14:26:04.063239 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227"
Feb 19 14:26:18 crc kubenswrapper[4861]: I0219 14:26:18.977627 4861 scope.go:117] "RemoveContainer" containerID="ea1fb9dda6615e7bd0de449175c0c11a8d4523b9efb63c9fbf83b974def3474f"
Feb 19 14:26:18 crc kubenswrapper[4861]: E0219 14:26:18.978658 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227"
Feb 19 14:26:31 crc kubenswrapper[4861]: I0219 14:26:31.978877 4861 scope.go:117] "RemoveContainer" containerID="ea1fb9dda6615e7bd0de449175c0c11a8d4523b9efb63c9fbf83b974def3474f"
Feb 19 14:26:31 crc kubenswrapper[4861]: E0219 14:26:31.980021 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227"
Feb 19 14:26:42 crc kubenswrapper[4861]: I0219 14:26:42.978142 4861 scope.go:117] "RemoveContainer" containerID="ea1fb9dda6615e7bd0de449175c0c11a8d4523b9efb63c9fbf83b974def3474f"
Feb 19 14:26:42 crc kubenswrapper[4861]: E0219 14:26:42.979464 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227"
Feb 19 14:26:56 crc kubenswrapper[4861]: I0219 14:26:56.977079 4861 scope.go:117] "RemoveContainer" containerID="ea1fb9dda6615e7bd0de449175c0c11a8d4523b9efb63c9fbf83b974def3474f"
Feb 19 14:26:56 crc kubenswrapper[4861]: E0219 14:26:56.977945 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227"
Feb 19 14:27:01 crc kubenswrapper[4861]: I0219 14:27:01.657259 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t9k4r"]
Feb 19 14:27:01 crc kubenswrapper[4861]: E0219 14:27:01.658186 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b" containerName="extract-utilities"
Feb 19 14:27:01 crc kubenswrapper[4861]: I0219 14:27:01.658208 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b" containerName="extract-utilities"
Feb 19 14:27:01 crc kubenswrapper[4861]: E0219 14:27:01.658240 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b" containerName="registry-server"
Feb 19 14:27:01 crc kubenswrapper[4861]: I0219 14:27:01.658254 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b" containerName="registry-server"
Feb 19 14:27:01 crc kubenswrapper[4861]: E0219 14:27:01.658270 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b" containerName="extract-content"
Feb 19 14:27:01 crc kubenswrapper[4861]: I0219 14:27:01.658284 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b" containerName="extract-content"
Feb 19 14:27:01 crc kubenswrapper[4861]: I0219 14:27:01.658644 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad263c5c-0cc8-4e6b-a9e3-d35ba4538c8b" containerName="registry-server"
Feb 19 14:27:01 crc kubenswrapper[4861]: I0219 14:27:01.660644 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t9k4r"
Feb 19 14:27:01 crc kubenswrapper[4861]: I0219 14:27:01.664129 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t9k4r"]
Feb 19 14:27:01 crc kubenswrapper[4861]: I0219 14:27:01.755011 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5pt2\" (UniqueName: \"kubernetes.io/projected/c6c697ef-dbeb-4842-9f81-95525a2c15ab-kube-api-access-k5pt2\") pod \"redhat-operators-t9k4r\" (UID: \"c6c697ef-dbeb-4842-9f81-95525a2c15ab\") " pod="openshift-marketplace/redhat-operators-t9k4r"
Feb 19 14:27:01 crc kubenswrapper[4861]: I0219 14:27:01.755065 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6c697ef-dbeb-4842-9f81-95525a2c15ab-utilities\") pod \"redhat-operators-t9k4r\" (UID: \"c6c697ef-dbeb-4842-9f81-95525a2c15ab\") " pod="openshift-marketplace/redhat-operators-t9k4r"
Feb 19 14:27:01 crc kubenswrapper[4861]: I0219 14:27:01.755176 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6c697ef-dbeb-4842-9f81-95525a2c15ab-catalog-content\") pod \"redhat-operators-t9k4r\" (UID: \"c6c697ef-dbeb-4842-9f81-95525a2c15ab\") " pod="openshift-marketplace/redhat-operators-t9k4r"
Feb 19 14:27:01 crc kubenswrapper[4861]: I0219 14:27:01.856492 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5pt2\" (UniqueName: \"kubernetes.io/projected/c6c697ef-dbeb-4842-9f81-95525a2c15ab-kube-api-access-k5pt2\") pod \"redhat-operators-t9k4r\" (UID: \"c6c697ef-dbeb-4842-9f81-95525a2c15ab\") " pod="openshift-marketplace/redhat-operators-t9k4r"
Feb 19 14:27:01 crc kubenswrapper[4861]: I0219 14:27:01.856552 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6c697ef-dbeb-4842-9f81-95525a2c15ab-utilities\") pod \"redhat-operators-t9k4r\" (UID: \"c6c697ef-dbeb-4842-9f81-95525a2c15ab\") " pod="openshift-marketplace/redhat-operators-t9k4r"
Feb 19 14:27:01 crc kubenswrapper[4861]: I0219 14:27:01.856631 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6c697ef-dbeb-4842-9f81-95525a2c15ab-catalog-content\") pod \"redhat-operators-t9k4r\" (UID: \"c6c697ef-dbeb-4842-9f81-95525a2c15ab\") " pod="openshift-marketplace/redhat-operators-t9k4r"
Feb 19 14:27:01 crc kubenswrapper[4861]: I0219 14:27:01.857634 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6c697ef-dbeb-4842-9f81-95525a2c15ab-catalog-content\") pod \"redhat-operators-t9k4r\" (UID: \"c6c697ef-dbeb-4842-9f81-95525a2c15ab\") " pod="openshift-marketplace/redhat-operators-t9k4r"
Feb 19 14:27:01 crc kubenswrapper[4861]: I0219 14:27:01.857655 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6c697ef-dbeb-4842-9f81-95525a2c15ab-utilities\") pod \"redhat-operators-t9k4r\" (UID: \"c6c697ef-dbeb-4842-9f81-95525a2c15ab\") " pod="openshift-marketplace/redhat-operators-t9k4r"
Feb 19 14:27:01 crc kubenswrapper[4861]: I0219 14:27:01.896687 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5pt2\" (UniqueName: \"kubernetes.io/projected/c6c697ef-dbeb-4842-9f81-95525a2c15ab-kube-api-access-k5pt2\") pod \"redhat-operators-t9k4r\" (UID: \"c6c697ef-dbeb-4842-9f81-95525a2c15ab\") " pod="openshift-marketplace/redhat-operators-t9k4r"
Feb 19 14:27:02 crc kubenswrapper[4861]: I0219 14:27:02.003070 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t9k4r"
Feb 19 14:27:02 crc kubenswrapper[4861]: I0219 14:27:02.469860 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t9k4r"]
Feb 19 14:27:02 crc kubenswrapper[4861]: I0219 14:27:02.614658 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9k4r" event={"ID":"c6c697ef-dbeb-4842-9f81-95525a2c15ab","Type":"ContainerStarted","Data":"6169faddc3d0dea781b422575a8db56508e80ed93c3b4d06d6db03b467ced2be"}
Feb 19 14:27:03 crc kubenswrapper[4861]: I0219 14:27:03.621524 4861 generic.go:334] "Generic (PLEG): container finished" podID="c6c697ef-dbeb-4842-9f81-95525a2c15ab" containerID="6d0fddba746e620c649915bd391723af845a113b64555b63ee094a64c33aa1e2" exitCode=0
Feb 19 14:27:03 crc kubenswrapper[4861]: I0219 14:27:03.621594 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9k4r" event={"ID":"c6c697ef-dbeb-4842-9f81-95525a2c15ab","Type":"ContainerDied","Data":"6d0fddba746e620c649915bd391723af845a113b64555b63ee094a64c33aa1e2"}
Feb 19 14:27:03 crc kubenswrapper[4861]: I0219 14:27:03.623695 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 19 14:27:05 crc kubenswrapper[4861]: I0219 14:27:05.641136 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9k4r" event={"ID":"c6c697ef-dbeb-4842-9f81-95525a2c15ab","Type":"ContainerStarted","Data":"8b2d1bd04a531464556065cdc2a906539714d6767ba4bbdbd21c1f72457abdbf"}
Feb 19 14:27:06 crc kubenswrapper[4861]: I0219 14:27:06.655594 4861 generic.go:334] "Generic (PLEG): container finished" podID="c6c697ef-dbeb-4842-9f81-95525a2c15ab" containerID="8b2d1bd04a531464556065cdc2a906539714d6767ba4bbdbd21c1f72457abdbf" exitCode=0
Feb 19 14:27:06 crc kubenswrapper[4861]: I0219 14:27:06.655658 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-marketplace/redhat-operators-t9k4r" event={"ID":"c6c697ef-dbeb-4842-9f81-95525a2c15ab","Type":"ContainerDied","Data":"8b2d1bd04a531464556065cdc2a906539714d6767ba4bbdbd21c1f72457abdbf"} Feb 19 14:27:08 crc kubenswrapper[4861]: I0219 14:27:08.686212 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9k4r" event={"ID":"c6c697ef-dbeb-4842-9f81-95525a2c15ab","Type":"ContainerStarted","Data":"ca28878cb1ae8632dd1af0f093e321be73f1097af79c45af94fa1936b5a6e5b8"} Feb 19 14:27:08 crc kubenswrapper[4861]: I0219 14:27:08.717995 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t9k4r" podStartSLOduration=3.820689999 podStartE2EDuration="7.717977994s" podCreationTimestamp="2026-02-19 14:27:01 +0000 UTC" firstStartedPulling="2026-02-19 14:27:03.623448562 +0000 UTC m=+4638.284551790" lastFinishedPulling="2026-02-19 14:27:07.520736517 +0000 UTC m=+4642.181839785" observedRunningTime="2026-02-19 14:27:08.711734625 +0000 UTC m=+4643.372837873" watchObservedRunningTime="2026-02-19 14:27:08.717977994 +0000 UTC m=+4643.379081222" Feb 19 14:27:10 crc kubenswrapper[4861]: I0219 14:27:10.977211 4861 scope.go:117] "RemoveContainer" containerID="ea1fb9dda6615e7bd0de449175c0c11a8d4523b9efb63c9fbf83b974def3474f" Feb 19 14:27:10 crc kubenswrapper[4861]: E0219 14:27:10.977919 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:27:12 crc kubenswrapper[4861]: I0219 14:27:12.003746 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-t9k4r" Feb 19 14:27:12 crc kubenswrapper[4861]: I0219 14:27:12.003802 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t9k4r" Feb 19 14:27:13 crc kubenswrapper[4861]: I0219 14:27:13.065919 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t9k4r" podUID="c6c697ef-dbeb-4842-9f81-95525a2c15ab" containerName="registry-server" probeResult="failure" output=< Feb 19 14:27:13 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Feb 19 14:27:13 crc kubenswrapper[4861]: > Feb 19 14:27:22 crc kubenswrapper[4861]: I0219 14:27:22.049486 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t9k4r" Feb 19 14:27:22 crc kubenswrapper[4861]: I0219 14:27:22.098496 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t9k4r" Feb 19 14:27:22 crc kubenswrapper[4861]: I0219 14:27:22.284394 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t9k4r"] Feb 19 14:27:23 crc kubenswrapper[4861]: I0219 14:27:23.806047 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t9k4r" podUID="c6c697ef-dbeb-4842-9f81-95525a2c15ab" containerName="registry-server" containerID="cri-o://ca28878cb1ae8632dd1af0f093e321be73f1097af79c45af94fa1936b5a6e5b8" gracePeriod=2 Feb 19 14:27:24 crc kubenswrapper[4861]: I0219 14:27:24.215721 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t9k4r" Feb 19 14:27:24 crc kubenswrapper[4861]: I0219 14:27:24.311333 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6c697ef-dbeb-4842-9f81-95525a2c15ab-catalog-content\") pod \"c6c697ef-dbeb-4842-9f81-95525a2c15ab\" (UID: \"c6c697ef-dbeb-4842-9f81-95525a2c15ab\") " Feb 19 14:27:24 crc kubenswrapper[4861]: I0219 14:27:24.311462 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5pt2\" (UniqueName: \"kubernetes.io/projected/c6c697ef-dbeb-4842-9f81-95525a2c15ab-kube-api-access-k5pt2\") pod \"c6c697ef-dbeb-4842-9f81-95525a2c15ab\" (UID: \"c6c697ef-dbeb-4842-9f81-95525a2c15ab\") " Feb 19 14:27:24 crc kubenswrapper[4861]: I0219 14:27:24.311531 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6c697ef-dbeb-4842-9f81-95525a2c15ab-utilities\") pod \"c6c697ef-dbeb-4842-9f81-95525a2c15ab\" (UID: \"c6c697ef-dbeb-4842-9f81-95525a2c15ab\") " Feb 19 14:27:24 crc kubenswrapper[4861]: I0219 14:27:24.312738 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6c697ef-dbeb-4842-9f81-95525a2c15ab-utilities" (OuterVolumeSpecName: "utilities") pod "c6c697ef-dbeb-4842-9f81-95525a2c15ab" (UID: "c6c697ef-dbeb-4842-9f81-95525a2c15ab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:27:24 crc kubenswrapper[4861]: I0219 14:27:24.318451 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6c697ef-dbeb-4842-9f81-95525a2c15ab-kube-api-access-k5pt2" (OuterVolumeSpecName: "kube-api-access-k5pt2") pod "c6c697ef-dbeb-4842-9f81-95525a2c15ab" (UID: "c6c697ef-dbeb-4842-9f81-95525a2c15ab"). InnerVolumeSpecName "kube-api-access-k5pt2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:27:24 crc kubenswrapper[4861]: I0219 14:27:24.413160 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5pt2\" (UniqueName: \"kubernetes.io/projected/c6c697ef-dbeb-4842-9f81-95525a2c15ab-kube-api-access-k5pt2\") on node \"crc\" DevicePath \"\"" Feb 19 14:27:24 crc kubenswrapper[4861]: I0219 14:27:24.413200 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6c697ef-dbeb-4842-9f81-95525a2c15ab-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 14:27:24 crc kubenswrapper[4861]: I0219 14:27:24.495877 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6c697ef-dbeb-4842-9f81-95525a2c15ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6c697ef-dbeb-4842-9f81-95525a2c15ab" (UID: "c6c697ef-dbeb-4842-9f81-95525a2c15ab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:27:24 crc kubenswrapper[4861]: I0219 14:27:24.514648 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6c697ef-dbeb-4842-9f81-95525a2c15ab-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 14:27:24 crc kubenswrapper[4861]: I0219 14:27:24.822410 4861 generic.go:334] "Generic (PLEG): container finished" podID="c6c697ef-dbeb-4842-9f81-95525a2c15ab" containerID="ca28878cb1ae8632dd1af0f093e321be73f1097af79c45af94fa1936b5a6e5b8" exitCode=0 Feb 19 14:27:24 crc kubenswrapper[4861]: I0219 14:27:24.822492 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9k4r" event={"ID":"c6c697ef-dbeb-4842-9f81-95525a2c15ab","Type":"ContainerDied","Data":"ca28878cb1ae8632dd1af0f093e321be73f1097af79c45af94fa1936b5a6e5b8"} Feb 19 14:27:24 crc kubenswrapper[4861]: I0219 14:27:24.822586 4861 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-t9k4r" event={"ID":"c6c697ef-dbeb-4842-9f81-95525a2c15ab","Type":"ContainerDied","Data":"6169faddc3d0dea781b422575a8db56508e80ed93c3b4d06d6db03b467ced2be"} Feb 19 14:27:24 crc kubenswrapper[4861]: I0219 14:27:24.822628 4861 scope.go:117] "RemoveContainer" containerID="ca28878cb1ae8632dd1af0f093e321be73f1097af79c45af94fa1936b5a6e5b8" Feb 19 14:27:24 crc kubenswrapper[4861]: I0219 14:27:24.822532 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t9k4r" Feb 19 14:27:24 crc kubenswrapper[4861]: I0219 14:27:24.851762 4861 scope.go:117] "RemoveContainer" containerID="8b2d1bd04a531464556065cdc2a906539714d6767ba4bbdbd21c1f72457abdbf" Feb 19 14:27:24 crc kubenswrapper[4861]: I0219 14:27:24.880753 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t9k4r"] Feb 19 14:27:24 crc kubenswrapper[4861]: I0219 14:27:24.890969 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t9k4r"] Feb 19 14:27:24 crc kubenswrapper[4861]: I0219 14:27:24.900644 4861 scope.go:117] "RemoveContainer" containerID="6d0fddba746e620c649915bd391723af845a113b64555b63ee094a64c33aa1e2" Feb 19 14:27:24 crc kubenswrapper[4861]: I0219 14:27:24.939290 4861 scope.go:117] "RemoveContainer" containerID="ca28878cb1ae8632dd1af0f093e321be73f1097af79c45af94fa1936b5a6e5b8" Feb 19 14:27:24 crc kubenswrapper[4861]: E0219 14:27:24.940279 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca28878cb1ae8632dd1af0f093e321be73f1097af79c45af94fa1936b5a6e5b8\": container with ID starting with ca28878cb1ae8632dd1af0f093e321be73f1097af79c45af94fa1936b5a6e5b8 not found: ID does not exist" containerID="ca28878cb1ae8632dd1af0f093e321be73f1097af79c45af94fa1936b5a6e5b8" Feb 19 14:27:24 crc kubenswrapper[4861]: I0219 14:27:24.940329 4861 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca28878cb1ae8632dd1af0f093e321be73f1097af79c45af94fa1936b5a6e5b8"} err="failed to get container status \"ca28878cb1ae8632dd1af0f093e321be73f1097af79c45af94fa1936b5a6e5b8\": rpc error: code = NotFound desc = could not find container \"ca28878cb1ae8632dd1af0f093e321be73f1097af79c45af94fa1936b5a6e5b8\": container with ID starting with ca28878cb1ae8632dd1af0f093e321be73f1097af79c45af94fa1936b5a6e5b8 not found: ID does not exist" Feb 19 14:27:24 crc kubenswrapper[4861]: I0219 14:27:24.940363 4861 scope.go:117] "RemoveContainer" containerID="8b2d1bd04a531464556065cdc2a906539714d6767ba4bbdbd21c1f72457abdbf" Feb 19 14:27:24 crc kubenswrapper[4861]: E0219 14:27:24.940885 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b2d1bd04a531464556065cdc2a906539714d6767ba4bbdbd21c1f72457abdbf\": container with ID starting with 8b2d1bd04a531464556065cdc2a906539714d6767ba4bbdbd21c1f72457abdbf not found: ID does not exist" containerID="8b2d1bd04a531464556065cdc2a906539714d6767ba4bbdbd21c1f72457abdbf" Feb 19 14:27:24 crc kubenswrapper[4861]: I0219 14:27:24.940927 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b2d1bd04a531464556065cdc2a906539714d6767ba4bbdbd21c1f72457abdbf"} err="failed to get container status \"8b2d1bd04a531464556065cdc2a906539714d6767ba4bbdbd21c1f72457abdbf\": rpc error: code = NotFound desc = could not find container \"8b2d1bd04a531464556065cdc2a906539714d6767ba4bbdbd21c1f72457abdbf\": container with ID starting with 8b2d1bd04a531464556065cdc2a906539714d6767ba4bbdbd21c1f72457abdbf not found: ID does not exist" Feb 19 14:27:24 crc kubenswrapper[4861]: I0219 14:27:24.941000 4861 scope.go:117] "RemoveContainer" containerID="6d0fddba746e620c649915bd391723af845a113b64555b63ee094a64c33aa1e2" Feb 19 14:27:24 crc kubenswrapper[4861]: E0219 
14:27:24.941528 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d0fddba746e620c649915bd391723af845a113b64555b63ee094a64c33aa1e2\": container with ID starting with 6d0fddba746e620c649915bd391723af845a113b64555b63ee094a64c33aa1e2 not found: ID does not exist" containerID="6d0fddba746e620c649915bd391723af845a113b64555b63ee094a64c33aa1e2" Feb 19 14:27:24 crc kubenswrapper[4861]: I0219 14:27:24.941574 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d0fddba746e620c649915bd391723af845a113b64555b63ee094a64c33aa1e2"} err="failed to get container status \"6d0fddba746e620c649915bd391723af845a113b64555b63ee094a64c33aa1e2\": rpc error: code = NotFound desc = could not find container \"6d0fddba746e620c649915bd391723af845a113b64555b63ee094a64c33aa1e2\": container with ID starting with 6d0fddba746e620c649915bd391723af845a113b64555b63ee094a64c33aa1e2 not found: ID does not exist" Feb 19 14:27:24 crc kubenswrapper[4861]: I0219 14:27:24.976777 4861 scope.go:117] "RemoveContainer" containerID="ea1fb9dda6615e7bd0de449175c0c11a8d4523b9efb63c9fbf83b974def3474f" Feb 19 14:27:24 crc kubenswrapper[4861]: E0219 14:27:24.977018 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:27:25 crc kubenswrapper[4861]: I0219 14:27:25.985689 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6c697ef-dbeb-4842-9f81-95525a2c15ab" path="/var/lib/kubelet/pods/c6c697ef-dbeb-4842-9f81-95525a2c15ab/volumes" Feb 19 14:27:37 crc kubenswrapper[4861]: I0219 14:27:37.977260 
4861 scope.go:117] "RemoveContainer" containerID="ea1fb9dda6615e7bd0de449175c0c11a8d4523b9efb63c9fbf83b974def3474f" Feb 19 14:27:37 crc kubenswrapper[4861]: E0219 14:27:37.978576 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:27:51 crc kubenswrapper[4861]: I0219 14:27:51.977382 4861 scope.go:117] "RemoveContainer" containerID="ea1fb9dda6615e7bd0de449175c0c11a8d4523b9efb63c9fbf83b974def3474f" Feb 19 14:27:51 crc kubenswrapper[4861]: E0219 14:27:51.978378 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:28:04 crc kubenswrapper[4861]: I0219 14:28:04.977724 4861 scope.go:117] "RemoveContainer" containerID="ea1fb9dda6615e7bd0de449175c0c11a8d4523b9efb63c9fbf83b974def3474f" Feb 19 14:28:04 crc kubenswrapper[4861]: E0219 14:28:04.978807 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:28:18 crc kubenswrapper[4861]: I0219 
14:28:18.977237 4861 scope.go:117] "RemoveContainer" containerID="ea1fb9dda6615e7bd0de449175c0c11a8d4523b9efb63c9fbf83b974def3474f" Feb 19 14:28:18 crc kubenswrapper[4861]: E0219 14:28:18.978113 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:28:31 crc kubenswrapper[4861]: I0219 14:28:31.978331 4861 scope.go:117] "RemoveContainer" containerID="ea1fb9dda6615e7bd0de449175c0c11a8d4523b9efb63c9fbf83b974def3474f" Feb 19 14:28:31 crc kubenswrapper[4861]: E0219 14:28:31.979201 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:28:43 crc kubenswrapper[4861]: I0219 14:28:43.976995 4861 scope.go:117] "RemoveContainer" containerID="ea1fb9dda6615e7bd0de449175c0c11a8d4523b9efb63c9fbf83b974def3474f" Feb 19 14:28:43 crc kubenswrapper[4861]: E0219 14:28:43.979313 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:28:54 crc 
kubenswrapper[4861]: I0219 14:28:54.977986 4861 scope.go:117] "RemoveContainer" containerID="ea1fb9dda6615e7bd0de449175c0c11a8d4523b9efb63c9fbf83b974def3474f" Feb 19 14:28:54 crc kubenswrapper[4861]: E0219 14:28:54.979022 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:29:05 crc kubenswrapper[4861]: I0219 14:29:05.980030 4861 scope.go:117] "RemoveContainer" containerID="ea1fb9dda6615e7bd0de449175c0c11a8d4523b9efb63c9fbf83b974def3474f" Feb 19 14:29:05 crc kubenswrapper[4861]: E0219 14:29:05.980688 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:29:17 crc kubenswrapper[4861]: I0219 14:29:17.976952 4861 scope.go:117] "RemoveContainer" containerID="ea1fb9dda6615e7bd0de449175c0c11a8d4523b9efb63c9fbf83b974def3474f" Feb 19 14:29:17 crc kubenswrapper[4861]: E0219 14:29:17.977681 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 
19 14:29:30 crc kubenswrapper[4861]: I0219 14:29:30.977040 4861 scope.go:117] "RemoveContainer" containerID="ea1fb9dda6615e7bd0de449175c0c11a8d4523b9efb63c9fbf83b974def3474f" Feb 19 14:29:30 crc kubenswrapper[4861]: E0219 14:29:30.978284 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:29:45 crc kubenswrapper[4861]: I0219 14:29:45.984530 4861 scope.go:117] "RemoveContainer" containerID="ea1fb9dda6615e7bd0de449175c0c11a8d4523b9efb63c9fbf83b974def3474f" Feb 19 14:29:45 crc kubenswrapper[4861]: E0219 14:29:45.985565 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:29:58 crc kubenswrapper[4861]: I0219 14:29:58.977828 4861 scope.go:117] "RemoveContainer" containerID="ea1fb9dda6615e7bd0de449175c0c11a8d4523b9efb63c9fbf83b974def3474f" Feb 19 14:29:58 crc kubenswrapper[4861]: E0219 14:29:58.987291 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" 
podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:30:00 crc kubenswrapper[4861]: I0219 14:30:00.598366 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525190-sf94r"] Feb 19 14:30:00 crc kubenswrapper[4861]: E0219 14:30:00.598856 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c697ef-dbeb-4842-9f81-95525a2c15ab" containerName="extract-utilities" Feb 19 14:30:00 crc kubenswrapper[4861]: I0219 14:30:00.598868 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c697ef-dbeb-4842-9f81-95525a2c15ab" containerName="extract-utilities" Feb 19 14:30:00 crc kubenswrapper[4861]: E0219 14:30:00.598890 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c697ef-dbeb-4842-9f81-95525a2c15ab" containerName="registry-server" Feb 19 14:30:00 crc kubenswrapper[4861]: I0219 14:30:00.598896 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c697ef-dbeb-4842-9f81-95525a2c15ab" containerName="registry-server" Feb 19 14:30:00 crc kubenswrapper[4861]: E0219 14:30:00.598907 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c697ef-dbeb-4842-9f81-95525a2c15ab" containerName="extract-content" Feb 19 14:30:00 crc kubenswrapper[4861]: I0219 14:30:00.598912 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c697ef-dbeb-4842-9f81-95525a2c15ab" containerName="extract-content" Feb 19 14:30:00 crc kubenswrapper[4861]: I0219 14:30:00.599065 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c697ef-dbeb-4842-9f81-95525a2c15ab" containerName="registry-server" Feb 19 14:30:00 crc kubenswrapper[4861]: I0219 14:30:00.599474 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525190-sf94r" Feb 19 14:30:00 crc kubenswrapper[4861]: I0219 14:30:00.601741 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 14:30:00 crc kubenswrapper[4861]: I0219 14:30:00.602050 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 14:30:00 crc kubenswrapper[4861]: I0219 14:30:00.646664 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525190-sf94r"] Feb 19 14:30:00 crc kubenswrapper[4861]: I0219 14:30:00.719397 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63c72b22-e089-4c38-99de-64bdd1553445-config-volume\") pod \"collect-profiles-29525190-sf94r\" (UID: \"63c72b22-e089-4c38-99de-64bdd1553445\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525190-sf94r" Feb 19 14:30:00 crc kubenswrapper[4861]: I0219 14:30:00.719489 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63c72b22-e089-4c38-99de-64bdd1553445-secret-volume\") pod \"collect-profiles-29525190-sf94r\" (UID: \"63c72b22-e089-4c38-99de-64bdd1553445\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525190-sf94r" Feb 19 14:30:00 crc kubenswrapper[4861]: I0219 14:30:00.719524 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbv8m\" (UniqueName: \"kubernetes.io/projected/63c72b22-e089-4c38-99de-64bdd1553445-kube-api-access-vbv8m\") pod \"collect-profiles-29525190-sf94r\" (UID: \"63c72b22-e089-4c38-99de-64bdd1553445\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525190-sf94r" Feb 19 14:30:00 crc kubenswrapper[4861]: I0219 14:30:00.821211 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63c72b22-e089-4c38-99de-64bdd1553445-config-volume\") pod \"collect-profiles-29525190-sf94r\" (UID: \"63c72b22-e089-4c38-99de-64bdd1553445\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525190-sf94r" Feb 19 14:30:00 crc kubenswrapper[4861]: I0219 14:30:00.821266 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63c72b22-e089-4c38-99de-64bdd1553445-secret-volume\") pod \"collect-profiles-29525190-sf94r\" (UID: \"63c72b22-e089-4c38-99de-64bdd1553445\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525190-sf94r" Feb 19 14:30:00 crc kubenswrapper[4861]: I0219 14:30:00.821290 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbv8m\" (UniqueName: \"kubernetes.io/projected/63c72b22-e089-4c38-99de-64bdd1553445-kube-api-access-vbv8m\") pod \"collect-profiles-29525190-sf94r\" (UID: \"63c72b22-e089-4c38-99de-64bdd1553445\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525190-sf94r" Feb 19 14:30:00 crc kubenswrapper[4861]: I0219 14:30:00.823034 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63c72b22-e089-4c38-99de-64bdd1553445-config-volume\") pod \"collect-profiles-29525190-sf94r\" (UID: \"63c72b22-e089-4c38-99de-64bdd1553445\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525190-sf94r" Feb 19 14:30:00 crc kubenswrapper[4861]: I0219 14:30:00.827825 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/63c72b22-e089-4c38-99de-64bdd1553445-secret-volume\") pod \"collect-profiles-29525190-sf94r\" (UID: \"63c72b22-e089-4c38-99de-64bdd1553445\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525190-sf94r" Feb 19 14:30:00 crc kubenswrapper[4861]: I0219 14:30:00.854277 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbv8m\" (UniqueName: \"kubernetes.io/projected/63c72b22-e089-4c38-99de-64bdd1553445-kube-api-access-vbv8m\") pod \"collect-profiles-29525190-sf94r\" (UID: \"63c72b22-e089-4c38-99de-64bdd1553445\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525190-sf94r" Feb 19 14:30:00 crc kubenswrapper[4861]: I0219 14:30:00.922234 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525190-sf94r" Feb 19 14:30:01 crc kubenswrapper[4861]: I0219 14:30:01.147955 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525190-sf94r"] Feb 19 14:30:01 crc kubenswrapper[4861]: I0219 14:30:01.254282 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525190-sf94r" event={"ID":"63c72b22-e089-4c38-99de-64bdd1553445","Type":"ContainerStarted","Data":"5b3c051a1eec53bb5bda0b5cafc27f0bcc2a31e79b72b7fe4b983bf1c0fec54f"} Feb 19 14:30:02 crc kubenswrapper[4861]: I0219 14:30:02.264077 4861 generic.go:334] "Generic (PLEG): container finished" podID="63c72b22-e089-4c38-99de-64bdd1553445" containerID="56d9dc8b411b04bb2f3cbdc60c208748bd7be020d08b21530a2123f2101602ba" exitCode=0 Feb 19 14:30:02 crc kubenswrapper[4861]: I0219 14:30:02.264149 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525190-sf94r" 
event={"ID":"63c72b22-e089-4c38-99de-64bdd1553445","Type":"ContainerDied","Data":"56d9dc8b411b04bb2f3cbdc60c208748bd7be020d08b21530a2123f2101602ba"} Feb 19 14:30:03 crc kubenswrapper[4861]: I0219 14:30:03.651210 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525190-sf94r" Feb 19 14:30:03 crc kubenswrapper[4861]: I0219 14:30:03.770804 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63c72b22-e089-4c38-99de-64bdd1553445-secret-volume\") pod \"63c72b22-e089-4c38-99de-64bdd1553445\" (UID: \"63c72b22-e089-4c38-99de-64bdd1553445\") " Feb 19 14:30:03 crc kubenswrapper[4861]: I0219 14:30:03.770907 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbv8m\" (UniqueName: \"kubernetes.io/projected/63c72b22-e089-4c38-99de-64bdd1553445-kube-api-access-vbv8m\") pod \"63c72b22-e089-4c38-99de-64bdd1553445\" (UID: \"63c72b22-e089-4c38-99de-64bdd1553445\") " Feb 19 14:30:03 crc kubenswrapper[4861]: I0219 14:30:03.771135 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63c72b22-e089-4c38-99de-64bdd1553445-config-volume\") pod \"63c72b22-e089-4c38-99de-64bdd1553445\" (UID: \"63c72b22-e089-4c38-99de-64bdd1553445\") " Feb 19 14:30:03 crc kubenswrapper[4861]: I0219 14:30:03.772536 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63c72b22-e089-4c38-99de-64bdd1553445-config-volume" (OuterVolumeSpecName: "config-volume") pod "63c72b22-e089-4c38-99de-64bdd1553445" (UID: "63c72b22-e089-4c38-99de-64bdd1553445"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:30:03 crc kubenswrapper[4861]: I0219 14:30:03.776599 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63c72b22-e089-4c38-99de-64bdd1553445-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "63c72b22-e089-4c38-99de-64bdd1553445" (UID: "63c72b22-e089-4c38-99de-64bdd1553445"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:30:03 crc kubenswrapper[4861]: I0219 14:30:03.777387 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63c72b22-e089-4c38-99de-64bdd1553445-kube-api-access-vbv8m" (OuterVolumeSpecName: "kube-api-access-vbv8m") pod "63c72b22-e089-4c38-99de-64bdd1553445" (UID: "63c72b22-e089-4c38-99de-64bdd1553445"). InnerVolumeSpecName "kube-api-access-vbv8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:30:03 crc kubenswrapper[4861]: I0219 14:30:03.872924 4861 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63c72b22-e089-4c38-99de-64bdd1553445-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 14:30:03 crc kubenswrapper[4861]: I0219 14:30:03.872961 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbv8m\" (UniqueName: \"kubernetes.io/projected/63c72b22-e089-4c38-99de-64bdd1553445-kube-api-access-vbv8m\") on node \"crc\" DevicePath \"\"" Feb 19 14:30:03 crc kubenswrapper[4861]: I0219 14:30:03.872971 4861 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63c72b22-e089-4c38-99de-64bdd1553445-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 14:30:04 crc kubenswrapper[4861]: I0219 14:30:04.284018 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525190-sf94r" 
event={"ID":"63c72b22-e089-4c38-99de-64bdd1553445","Type":"ContainerDied","Data":"5b3c051a1eec53bb5bda0b5cafc27f0bcc2a31e79b72b7fe4b983bf1c0fec54f"} Feb 19 14:30:04 crc kubenswrapper[4861]: I0219 14:30:04.284098 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b3c051a1eec53bb5bda0b5cafc27f0bcc2a31e79b72b7fe4b983bf1c0fec54f" Feb 19 14:30:04 crc kubenswrapper[4861]: I0219 14:30:04.284213 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525190-sf94r" Feb 19 14:30:04 crc kubenswrapper[4861]: I0219 14:30:04.748236 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525145-f6xr5"] Feb 19 14:30:04 crc kubenswrapper[4861]: I0219 14:30:04.758141 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525145-f6xr5"] Feb 19 14:30:05 crc kubenswrapper[4861]: I0219 14:30:05.987265 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55db1743-1443-4183-ad09-bd3cc60e2ffe" path="/var/lib/kubelet/pods/55db1743-1443-4183-ad09-bd3cc60e2ffe/volumes" Feb 19 14:30:10 crc kubenswrapper[4861]: I0219 14:30:10.338060 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-kxkfl"] Feb 19 14:30:10 crc kubenswrapper[4861]: I0219 14:30:10.348324 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-kxkfl"] Feb 19 14:30:10 crc kubenswrapper[4861]: I0219 14:30:10.477533 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-sm2l2"] Feb 19 14:30:10 crc kubenswrapper[4861]: E0219 14:30:10.478648 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63c72b22-e089-4c38-99de-64bdd1553445" containerName="collect-profiles" Feb 19 14:30:10 crc kubenswrapper[4861]: I0219 14:30:10.479060 4861 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="63c72b22-e089-4c38-99de-64bdd1553445" containerName="collect-profiles" Feb 19 14:30:10 crc kubenswrapper[4861]: I0219 14:30:10.479585 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="63c72b22-e089-4c38-99de-64bdd1553445" containerName="collect-profiles" Feb 19 14:30:10 crc kubenswrapper[4861]: I0219 14:30:10.480859 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-sm2l2" Feb 19 14:30:10 crc kubenswrapper[4861]: I0219 14:30:10.486379 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Feb 19 14:30:10 crc kubenswrapper[4861]: I0219 14:30:10.486687 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Feb 19 14:30:10 crc kubenswrapper[4861]: I0219 14:30:10.486931 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Feb 19 14:30:10 crc kubenswrapper[4861]: I0219 14:30:10.487908 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-sm2l2"] Feb 19 14:30:10 crc kubenswrapper[4861]: I0219 14:30:10.488852 4861 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-j7gqd" Feb 19 14:30:10 crc kubenswrapper[4861]: I0219 14:30:10.582034 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b9af830b-0998-4458-a108-767c4d5e7f51-crc-storage\") pod \"crc-storage-crc-sm2l2\" (UID: \"b9af830b-0998-4458-a108-767c4d5e7f51\") " pod="crc-storage/crc-storage-crc-sm2l2" Feb 19 14:30:10 crc kubenswrapper[4861]: I0219 14:30:10.582485 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b9af830b-0998-4458-a108-767c4d5e7f51-node-mnt\") pod \"crc-storage-crc-sm2l2\" 
(UID: \"b9af830b-0998-4458-a108-767c4d5e7f51\") " pod="crc-storage/crc-storage-crc-sm2l2" Feb 19 14:30:10 crc kubenswrapper[4861]: I0219 14:30:10.582711 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wcs2\" (UniqueName: \"kubernetes.io/projected/b9af830b-0998-4458-a108-767c4d5e7f51-kube-api-access-5wcs2\") pod \"crc-storage-crc-sm2l2\" (UID: \"b9af830b-0998-4458-a108-767c4d5e7f51\") " pod="crc-storage/crc-storage-crc-sm2l2" Feb 19 14:30:10 crc kubenswrapper[4861]: I0219 14:30:10.684341 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b9af830b-0998-4458-a108-767c4d5e7f51-crc-storage\") pod \"crc-storage-crc-sm2l2\" (UID: \"b9af830b-0998-4458-a108-767c4d5e7f51\") " pod="crc-storage/crc-storage-crc-sm2l2" Feb 19 14:30:10 crc kubenswrapper[4861]: I0219 14:30:10.684453 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b9af830b-0998-4458-a108-767c4d5e7f51-node-mnt\") pod \"crc-storage-crc-sm2l2\" (UID: \"b9af830b-0998-4458-a108-767c4d5e7f51\") " pod="crc-storage/crc-storage-crc-sm2l2" Feb 19 14:30:10 crc kubenswrapper[4861]: I0219 14:30:10.684524 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wcs2\" (UniqueName: \"kubernetes.io/projected/b9af830b-0998-4458-a108-767c4d5e7f51-kube-api-access-5wcs2\") pod \"crc-storage-crc-sm2l2\" (UID: \"b9af830b-0998-4458-a108-767c4d5e7f51\") " pod="crc-storage/crc-storage-crc-sm2l2" Feb 19 14:30:10 crc kubenswrapper[4861]: I0219 14:30:10.684936 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b9af830b-0998-4458-a108-767c4d5e7f51-node-mnt\") pod \"crc-storage-crc-sm2l2\" (UID: \"b9af830b-0998-4458-a108-767c4d5e7f51\") " pod="crc-storage/crc-storage-crc-sm2l2" Feb 19 14:30:10 
crc kubenswrapper[4861]: I0219 14:30:10.686392 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b9af830b-0998-4458-a108-767c4d5e7f51-crc-storage\") pod \"crc-storage-crc-sm2l2\" (UID: \"b9af830b-0998-4458-a108-767c4d5e7f51\") " pod="crc-storage/crc-storage-crc-sm2l2" Feb 19 14:30:10 crc kubenswrapper[4861]: I0219 14:30:10.722385 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wcs2\" (UniqueName: \"kubernetes.io/projected/b9af830b-0998-4458-a108-767c4d5e7f51-kube-api-access-5wcs2\") pod \"crc-storage-crc-sm2l2\" (UID: \"b9af830b-0998-4458-a108-767c4d5e7f51\") " pod="crc-storage/crc-storage-crc-sm2l2" Feb 19 14:30:10 crc kubenswrapper[4861]: I0219 14:30:10.852162 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-sm2l2" Feb 19 14:30:11 crc kubenswrapper[4861]: I0219 14:30:11.373531 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-sm2l2"] Feb 19 14:30:12 crc kubenswrapper[4861]: I0219 14:30:11.998325 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4897ab39-1e1b-4631-be09-9b89a965f415" path="/var/lib/kubelet/pods/4897ab39-1e1b-4631-be09-9b89a965f415/volumes" Feb 19 14:30:12 crc kubenswrapper[4861]: I0219 14:30:12.364972 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-sm2l2" event={"ID":"b9af830b-0998-4458-a108-767c4d5e7f51","Type":"ContainerStarted","Data":"5024d127ccfb4712118b054f67084a25c3b047106a5c3bcb3039f88de9d26f03"} Feb 19 14:30:12 crc kubenswrapper[4861]: I0219 14:30:12.365387 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-sm2l2" event={"ID":"b9af830b-0998-4458-a108-767c4d5e7f51","Type":"ContainerStarted","Data":"9257e7462f936e4098601d519ff15250c5f7a887f4e2f9cb5f70bf80cfbbc40a"} Feb 19 14:30:12 crc kubenswrapper[4861]: I0219 
14:30:12.393179 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="crc-storage/crc-storage-crc-sm2l2" podStartSLOduration=1.819210482 podStartE2EDuration="2.393148781s" podCreationTimestamp="2026-02-19 14:30:10 +0000 UTC" firstStartedPulling="2026-02-19 14:30:11.383017945 +0000 UTC m=+4826.044121213" lastFinishedPulling="2026-02-19 14:30:11.956956254 +0000 UTC m=+4826.618059512" observedRunningTime="2026-02-19 14:30:12.386574523 +0000 UTC m=+4827.047677791" watchObservedRunningTime="2026-02-19 14:30:12.393148781 +0000 UTC m=+4827.054252039" Feb 19 14:30:12 crc kubenswrapper[4861]: E0219 14:30:12.613648 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9af830b_0998_4458_a108_767c4d5e7f51.slice/crio-conmon-5024d127ccfb4712118b054f67084a25c3b047106a5c3bcb3039f88de9d26f03.scope\": RecentStats: unable to find data in memory cache]" Feb 19 14:30:13 crc kubenswrapper[4861]: I0219 14:30:13.378837 4861 generic.go:334] "Generic (PLEG): container finished" podID="b9af830b-0998-4458-a108-767c4d5e7f51" containerID="5024d127ccfb4712118b054f67084a25c3b047106a5c3bcb3039f88de9d26f03" exitCode=0 Feb 19 14:30:13 crc kubenswrapper[4861]: I0219 14:30:13.378895 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-sm2l2" event={"ID":"b9af830b-0998-4458-a108-767c4d5e7f51","Type":"ContainerDied","Data":"5024d127ccfb4712118b054f67084a25c3b047106a5c3bcb3039f88de9d26f03"} Feb 19 14:30:13 crc kubenswrapper[4861]: I0219 14:30:13.977358 4861 scope.go:117] "RemoveContainer" containerID="ea1fb9dda6615e7bd0de449175c0c11a8d4523b9efb63c9fbf83b974def3474f" Feb 19 14:30:13 crc kubenswrapper[4861]: E0219 14:30:13.978115 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:30:14 crc kubenswrapper[4861]: I0219 14:30:14.788922 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-sm2l2" Feb 19 14:30:14 crc kubenswrapper[4861]: I0219 14:30:14.856548 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b9af830b-0998-4458-a108-767c4d5e7f51-node-mnt\") pod \"b9af830b-0998-4458-a108-767c4d5e7f51\" (UID: \"b9af830b-0998-4458-a108-767c4d5e7f51\") " Feb 19 14:30:14 crc kubenswrapper[4861]: I0219 14:30:14.856906 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wcs2\" (UniqueName: \"kubernetes.io/projected/b9af830b-0998-4458-a108-767c4d5e7f51-kube-api-access-5wcs2\") pod \"b9af830b-0998-4458-a108-767c4d5e7f51\" (UID: \"b9af830b-0998-4458-a108-767c4d5e7f51\") " Feb 19 14:30:14 crc kubenswrapper[4861]: I0219 14:30:14.856772 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9af830b-0998-4458-a108-767c4d5e7f51-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "b9af830b-0998-4458-a108-767c4d5e7f51" (UID: "b9af830b-0998-4458-a108-767c4d5e7f51"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 14:30:14 crc kubenswrapper[4861]: I0219 14:30:14.857019 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b9af830b-0998-4458-a108-767c4d5e7f51-crc-storage\") pod \"b9af830b-0998-4458-a108-767c4d5e7f51\" (UID: \"b9af830b-0998-4458-a108-767c4d5e7f51\") " Feb 19 14:30:14 crc kubenswrapper[4861]: I0219 14:30:14.857441 4861 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b9af830b-0998-4458-a108-767c4d5e7f51-node-mnt\") on node \"crc\" DevicePath \"\"" Feb 19 14:30:14 crc kubenswrapper[4861]: I0219 14:30:14.864889 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9af830b-0998-4458-a108-767c4d5e7f51-kube-api-access-5wcs2" (OuterVolumeSpecName: "kube-api-access-5wcs2") pod "b9af830b-0998-4458-a108-767c4d5e7f51" (UID: "b9af830b-0998-4458-a108-767c4d5e7f51"). InnerVolumeSpecName "kube-api-access-5wcs2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:30:14 crc kubenswrapper[4861]: I0219 14:30:14.888913 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9af830b-0998-4458-a108-767c4d5e7f51-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "b9af830b-0998-4458-a108-767c4d5e7f51" (UID: "b9af830b-0998-4458-a108-767c4d5e7f51"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:30:14 crc kubenswrapper[4861]: I0219 14:30:14.958744 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wcs2\" (UniqueName: \"kubernetes.io/projected/b9af830b-0998-4458-a108-767c4d5e7f51-kube-api-access-5wcs2\") on node \"crc\" DevicePath \"\"" Feb 19 14:30:14 crc kubenswrapper[4861]: I0219 14:30:14.958801 4861 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b9af830b-0998-4458-a108-767c4d5e7f51-crc-storage\") on node \"crc\" DevicePath \"\"" Feb 19 14:30:15 crc kubenswrapper[4861]: I0219 14:30:15.401958 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-sm2l2" event={"ID":"b9af830b-0998-4458-a108-767c4d5e7f51","Type":"ContainerDied","Data":"9257e7462f936e4098601d519ff15250c5f7a887f4e2f9cb5f70bf80cfbbc40a"} Feb 19 14:30:15 crc kubenswrapper[4861]: I0219 14:30:15.402022 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9257e7462f936e4098601d519ff15250c5f7a887f4e2f9cb5f70bf80cfbbc40a" Feb 19 14:30:15 crc kubenswrapper[4861]: I0219 14:30:15.402084 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-sm2l2" Feb 19 14:30:16 crc kubenswrapper[4861]: I0219 14:30:16.789253 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-sm2l2"] Feb 19 14:30:16 crc kubenswrapper[4861]: I0219 14:30:16.801288 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-sm2l2"] Feb 19 14:30:16 crc kubenswrapper[4861]: I0219 14:30:16.949104 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-nz2v6"] Feb 19 14:30:16 crc kubenswrapper[4861]: E0219 14:30:16.949834 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9af830b-0998-4458-a108-767c4d5e7f51" containerName="storage" Feb 19 14:30:16 crc kubenswrapper[4861]: I0219 14:30:16.949874 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9af830b-0998-4458-a108-767c4d5e7f51" containerName="storage" Feb 19 14:30:16 crc kubenswrapper[4861]: I0219 14:30:16.950244 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9af830b-0998-4458-a108-767c4d5e7f51" containerName="storage" Feb 19 14:30:16 crc kubenswrapper[4861]: I0219 14:30:16.951379 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-nz2v6" Feb 19 14:30:16 crc kubenswrapper[4861]: I0219 14:30:16.957301 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Feb 19 14:30:16 crc kubenswrapper[4861]: I0219 14:30:16.957404 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Feb 19 14:30:16 crc kubenswrapper[4861]: I0219 14:30:16.958348 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Feb 19 14:30:16 crc kubenswrapper[4861]: I0219 14:30:16.958857 4861 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-j7gqd" Feb 19 14:30:16 crc kubenswrapper[4861]: I0219 14:30:16.964243 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-nz2v6"] Feb 19 14:30:17 crc kubenswrapper[4861]: I0219 14:30:17.103209 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/ad984368-f40d-49ee-b1d3-01e8132b592a-crc-storage\") pod \"crc-storage-crc-nz2v6\" (UID: \"ad984368-f40d-49ee-b1d3-01e8132b592a\") " pod="crc-storage/crc-storage-crc-nz2v6" Feb 19 14:30:17 crc kubenswrapper[4861]: I0219 14:30:17.103400 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsdf5\" (UniqueName: \"kubernetes.io/projected/ad984368-f40d-49ee-b1d3-01e8132b592a-kube-api-access-bsdf5\") pod \"crc-storage-crc-nz2v6\" (UID: \"ad984368-f40d-49ee-b1d3-01e8132b592a\") " pod="crc-storage/crc-storage-crc-nz2v6" Feb 19 14:30:17 crc kubenswrapper[4861]: I0219 14:30:17.103531 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/ad984368-f40d-49ee-b1d3-01e8132b592a-node-mnt\") pod \"crc-storage-crc-nz2v6\" (UID: 
\"ad984368-f40d-49ee-b1d3-01e8132b592a\") " pod="crc-storage/crc-storage-crc-nz2v6" Feb 19 14:30:17 crc kubenswrapper[4861]: I0219 14:30:17.205137 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/ad984368-f40d-49ee-b1d3-01e8132b592a-node-mnt\") pod \"crc-storage-crc-nz2v6\" (UID: \"ad984368-f40d-49ee-b1d3-01e8132b592a\") " pod="crc-storage/crc-storage-crc-nz2v6" Feb 19 14:30:17 crc kubenswrapper[4861]: I0219 14:30:17.205348 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/ad984368-f40d-49ee-b1d3-01e8132b592a-crc-storage\") pod \"crc-storage-crc-nz2v6\" (UID: \"ad984368-f40d-49ee-b1d3-01e8132b592a\") " pod="crc-storage/crc-storage-crc-nz2v6" Feb 19 14:30:17 crc kubenswrapper[4861]: I0219 14:30:17.205463 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsdf5\" (UniqueName: \"kubernetes.io/projected/ad984368-f40d-49ee-b1d3-01e8132b592a-kube-api-access-bsdf5\") pod \"crc-storage-crc-nz2v6\" (UID: \"ad984368-f40d-49ee-b1d3-01e8132b592a\") " pod="crc-storage/crc-storage-crc-nz2v6" Feb 19 14:30:17 crc kubenswrapper[4861]: I0219 14:30:17.205769 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/ad984368-f40d-49ee-b1d3-01e8132b592a-node-mnt\") pod \"crc-storage-crc-nz2v6\" (UID: \"ad984368-f40d-49ee-b1d3-01e8132b592a\") " pod="crc-storage/crc-storage-crc-nz2v6" Feb 19 14:30:17 crc kubenswrapper[4861]: I0219 14:30:17.207007 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/ad984368-f40d-49ee-b1d3-01e8132b592a-crc-storage\") pod \"crc-storage-crc-nz2v6\" (UID: \"ad984368-f40d-49ee-b1d3-01e8132b592a\") " pod="crc-storage/crc-storage-crc-nz2v6" Feb 19 14:30:17 crc kubenswrapper[4861]: I0219 14:30:17.238412 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsdf5\" (UniqueName: \"kubernetes.io/projected/ad984368-f40d-49ee-b1d3-01e8132b592a-kube-api-access-bsdf5\") pod \"crc-storage-crc-nz2v6\" (UID: \"ad984368-f40d-49ee-b1d3-01e8132b592a\") " pod="crc-storage/crc-storage-crc-nz2v6" Feb 19 14:30:17 crc kubenswrapper[4861]: I0219 14:30:17.277941 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-nz2v6" Feb 19 14:30:17 crc kubenswrapper[4861]: I0219 14:30:17.801581 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-nz2v6"] Feb 19 14:30:17 crc kubenswrapper[4861]: I0219 14:30:17.991753 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9af830b-0998-4458-a108-767c4d5e7f51" path="/var/lib/kubelet/pods/b9af830b-0998-4458-a108-767c4d5e7f51/volumes" Feb 19 14:30:18 crc kubenswrapper[4861]: I0219 14:30:18.434693 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-nz2v6" event={"ID":"ad984368-f40d-49ee-b1d3-01e8132b592a","Type":"ContainerStarted","Data":"49bae0b379f52aedcc75811c02eb018f6fe21a2de1d16cd67365fdb76fd4bca9"} Feb 19 14:30:18 crc kubenswrapper[4861]: I0219 14:30:18.435082 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-nz2v6" event={"ID":"ad984368-f40d-49ee-b1d3-01e8132b592a","Type":"ContainerStarted","Data":"79a16a6f1edd135bce4feac4397b7242281a547aa7559fdd9e7d587aaf57b0b8"} Feb 19 14:30:18 crc kubenswrapper[4861]: I0219 14:30:18.457572 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="crc-storage/crc-storage-crc-nz2v6" podStartSLOduration=2.03420635 podStartE2EDuration="2.457547371s" podCreationTimestamp="2026-02-19 14:30:16 +0000 UTC" firstStartedPulling="2026-02-19 14:30:17.799415383 +0000 UTC m=+4832.460518651" lastFinishedPulling="2026-02-19 14:30:18.222756434 +0000 UTC m=+4832.883859672" 
observedRunningTime="2026-02-19 14:30:18.453111912 +0000 UTC m=+4833.114215220" watchObservedRunningTime="2026-02-19 14:30:18.457547371 +0000 UTC m=+4833.118650629" Feb 19 14:30:19 crc kubenswrapper[4861]: I0219 14:30:19.446804 4861 generic.go:334] "Generic (PLEG): container finished" podID="ad984368-f40d-49ee-b1d3-01e8132b592a" containerID="49bae0b379f52aedcc75811c02eb018f6fe21a2de1d16cd67365fdb76fd4bca9" exitCode=0 Feb 19 14:30:19 crc kubenswrapper[4861]: I0219 14:30:19.446912 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-nz2v6" event={"ID":"ad984368-f40d-49ee-b1d3-01e8132b592a","Type":"ContainerDied","Data":"49bae0b379f52aedcc75811c02eb018f6fe21a2de1d16cd67365fdb76fd4bca9"} Feb 19 14:30:20 crc kubenswrapper[4861]: I0219 14:30:20.817474 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-nz2v6" Feb 19 14:30:20 crc kubenswrapper[4861]: I0219 14:30:20.976999 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/ad984368-f40d-49ee-b1d3-01e8132b592a-crc-storage\") pod \"ad984368-f40d-49ee-b1d3-01e8132b592a\" (UID: \"ad984368-f40d-49ee-b1d3-01e8132b592a\") " Feb 19 14:30:20 crc kubenswrapper[4861]: I0219 14:30:20.977157 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/ad984368-f40d-49ee-b1d3-01e8132b592a-node-mnt\") pod \"ad984368-f40d-49ee-b1d3-01e8132b592a\" (UID: \"ad984368-f40d-49ee-b1d3-01e8132b592a\") " Feb 19 14:30:20 crc kubenswrapper[4861]: I0219 14:30:20.977230 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsdf5\" (UniqueName: \"kubernetes.io/projected/ad984368-f40d-49ee-b1d3-01e8132b592a-kube-api-access-bsdf5\") pod \"ad984368-f40d-49ee-b1d3-01e8132b592a\" (UID: \"ad984368-f40d-49ee-b1d3-01e8132b592a\") " Feb 19 14:30:20 crc 
kubenswrapper[4861]: I0219 14:30:20.977379 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad984368-f40d-49ee-b1d3-01e8132b592a-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "ad984368-f40d-49ee-b1d3-01e8132b592a" (UID: "ad984368-f40d-49ee-b1d3-01e8132b592a"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 14:30:20 crc kubenswrapper[4861]: I0219 14:30:20.977916 4861 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/ad984368-f40d-49ee-b1d3-01e8132b592a-node-mnt\") on node \"crc\" DevicePath \"\"" Feb 19 14:30:20 crc kubenswrapper[4861]: I0219 14:30:20.995396 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad984368-f40d-49ee-b1d3-01e8132b592a-kube-api-access-bsdf5" (OuterVolumeSpecName: "kube-api-access-bsdf5") pod "ad984368-f40d-49ee-b1d3-01e8132b592a" (UID: "ad984368-f40d-49ee-b1d3-01e8132b592a"). InnerVolumeSpecName "kube-api-access-bsdf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:30:21 crc kubenswrapper[4861]: I0219 14:30:21.001826 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad984368-f40d-49ee-b1d3-01e8132b592a-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "ad984368-f40d-49ee-b1d3-01e8132b592a" (UID: "ad984368-f40d-49ee-b1d3-01e8132b592a"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:30:21 crc kubenswrapper[4861]: I0219 14:30:21.079212 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsdf5\" (UniqueName: \"kubernetes.io/projected/ad984368-f40d-49ee-b1d3-01e8132b592a-kube-api-access-bsdf5\") on node \"crc\" DevicePath \"\"" Feb 19 14:30:21 crc kubenswrapper[4861]: I0219 14:30:21.079590 4861 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/ad984368-f40d-49ee-b1d3-01e8132b592a-crc-storage\") on node \"crc\" DevicePath \"\"" Feb 19 14:30:21 crc kubenswrapper[4861]: I0219 14:30:21.472615 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-nz2v6" event={"ID":"ad984368-f40d-49ee-b1d3-01e8132b592a","Type":"ContainerDied","Data":"79a16a6f1edd135bce4feac4397b7242281a547aa7559fdd9e7d587aaf57b0b8"} Feb 19 14:30:21 crc kubenswrapper[4861]: I0219 14:30:21.472672 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79a16a6f1edd135bce4feac4397b7242281a547aa7559fdd9e7d587aaf57b0b8" Feb 19 14:30:21 crc kubenswrapper[4861]: I0219 14:30:21.472727 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-nz2v6" Feb 19 14:30:26 crc kubenswrapper[4861]: I0219 14:30:26.977639 4861 scope.go:117] "RemoveContainer" containerID="ea1fb9dda6615e7bd0de449175c0c11a8d4523b9efb63c9fbf83b974def3474f" Feb 19 14:30:26 crc kubenswrapper[4861]: E0219 14:30:26.978716 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:30:40 crc kubenswrapper[4861]: I0219 14:30:40.977888 4861 scope.go:117] "RemoveContainer" containerID="ea1fb9dda6615e7bd0de449175c0c11a8d4523b9efb63c9fbf83b974def3474f" Feb 19 14:30:40 crc kubenswrapper[4861]: E0219 14:30:40.979022 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:30:52 crc kubenswrapper[4861]: I0219 14:30:52.977904 4861 scope.go:117] "RemoveContainer" containerID="ea1fb9dda6615e7bd0de449175c0c11a8d4523b9efb63c9fbf83b974def3474f" Feb 19 14:30:52 crc kubenswrapper[4861]: E0219 14:30:52.978996 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:31:02 crc kubenswrapper[4861]: I0219 14:31:02.014893 4861 scope.go:117] "RemoveContainer" containerID="86ae0124ea474284237b6f2eeb6d744c770af1e939b8e36576e5f17981525868" Feb 19 14:31:02 crc kubenswrapper[4861]: I0219 14:31:02.056891 4861 scope.go:117] "RemoveContainer" containerID="2ffc7204b00d9bbf31a828b02efa1f14297b7b40801674c5cd5316926e17ce36" Feb 19 14:31:02 crc kubenswrapper[4861]: I0219 14:31:02.409196 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hnb84"] Feb 19 14:31:02 crc kubenswrapper[4861]: E0219 14:31:02.409646 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad984368-f40d-49ee-b1d3-01e8132b592a" containerName="storage" Feb 19 14:31:02 crc kubenswrapper[4861]: I0219 14:31:02.409677 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad984368-f40d-49ee-b1d3-01e8132b592a" containerName="storage" Feb 19 14:31:02 crc kubenswrapper[4861]: I0219 14:31:02.409967 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad984368-f40d-49ee-b1d3-01e8132b592a" containerName="storage" Feb 19 14:31:02 crc kubenswrapper[4861]: I0219 14:31:02.411595 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hnb84" Feb 19 14:31:02 crc kubenswrapper[4861]: I0219 14:31:02.430466 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnb84"] Feb 19 14:31:02 crc kubenswrapper[4861]: I0219 14:31:02.513763 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bfa9b04-d3e8-431d-9ebc-325f50b94fea-utilities\") pod \"redhat-marketplace-hnb84\" (UID: \"3bfa9b04-d3e8-431d-9ebc-325f50b94fea\") " pod="openshift-marketplace/redhat-marketplace-hnb84" Feb 19 14:31:02 crc kubenswrapper[4861]: I0219 14:31:02.513833 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bfa9b04-d3e8-431d-9ebc-325f50b94fea-catalog-content\") pod \"redhat-marketplace-hnb84\" (UID: \"3bfa9b04-d3e8-431d-9ebc-325f50b94fea\") " pod="openshift-marketplace/redhat-marketplace-hnb84" Feb 19 14:31:02 crc kubenswrapper[4861]: I0219 14:31:02.513877 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7vtg\" (UniqueName: \"kubernetes.io/projected/3bfa9b04-d3e8-431d-9ebc-325f50b94fea-kube-api-access-f7vtg\") pod \"redhat-marketplace-hnb84\" (UID: \"3bfa9b04-d3e8-431d-9ebc-325f50b94fea\") " pod="openshift-marketplace/redhat-marketplace-hnb84" Feb 19 14:31:02 crc kubenswrapper[4861]: I0219 14:31:02.614903 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bfa9b04-d3e8-431d-9ebc-325f50b94fea-utilities\") pod \"redhat-marketplace-hnb84\" (UID: \"3bfa9b04-d3e8-431d-9ebc-325f50b94fea\") " pod="openshift-marketplace/redhat-marketplace-hnb84" Feb 19 14:31:02 crc kubenswrapper[4861]: I0219 14:31:02.615232 4861 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bfa9b04-d3e8-431d-9ebc-325f50b94fea-catalog-content\") pod \"redhat-marketplace-hnb84\" (UID: \"3bfa9b04-d3e8-431d-9ebc-325f50b94fea\") " pod="openshift-marketplace/redhat-marketplace-hnb84" Feb 19 14:31:02 crc kubenswrapper[4861]: I0219 14:31:02.615374 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7vtg\" (UniqueName: \"kubernetes.io/projected/3bfa9b04-d3e8-431d-9ebc-325f50b94fea-kube-api-access-f7vtg\") pod \"redhat-marketplace-hnb84\" (UID: \"3bfa9b04-d3e8-431d-9ebc-325f50b94fea\") " pod="openshift-marketplace/redhat-marketplace-hnb84" Feb 19 14:31:02 crc kubenswrapper[4861]: I0219 14:31:02.615519 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bfa9b04-d3e8-431d-9ebc-325f50b94fea-utilities\") pod \"redhat-marketplace-hnb84\" (UID: \"3bfa9b04-d3e8-431d-9ebc-325f50b94fea\") " pod="openshift-marketplace/redhat-marketplace-hnb84" Feb 19 14:31:02 crc kubenswrapper[4861]: I0219 14:31:02.615823 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bfa9b04-d3e8-431d-9ebc-325f50b94fea-catalog-content\") pod \"redhat-marketplace-hnb84\" (UID: \"3bfa9b04-d3e8-431d-9ebc-325f50b94fea\") " pod="openshift-marketplace/redhat-marketplace-hnb84" Feb 19 14:31:02 crc kubenswrapper[4861]: I0219 14:31:02.640844 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7vtg\" (UniqueName: \"kubernetes.io/projected/3bfa9b04-d3e8-431d-9ebc-325f50b94fea-kube-api-access-f7vtg\") pod \"redhat-marketplace-hnb84\" (UID: \"3bfa9b04-d3e8-431d-9ebc-325f50b94fea\") " pod="openshift-marketplace/redhat-marketplace-hnb84" Feb 19 14:31:02 crc kubenswrapper[4861]: I0219 14:31:02.747650 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hnb84" Feb 19 14:31:02 crc kubenswrapper[4861]: I0219 14:31:02.963841 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnb84"] Feb 19 14:31:03 crc kubenswrapper[4861]: I0219 14:31:03.879707 4861 generic.go:334] "Generic (PLEG): container finished" podID="3bfa9b04-d3e8-431d-9ebc-325f50b94fea" containerID="03ccfd5de16a1341d7b448fe2097d283b0a91de320a6c9032221cfa7e6f2f711" exitCode=0 Feb 19 14:31:03 crc kubenswrapper[4861]: I0219 14:31:03.879835 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnb84" event={"ID":"3bfa9b04-d3e8-431d-9ebc-325f50b94fea","Type":"ContainerDied","Data":"03ccfd5de16a1341d7b448fe2097d283b0a91de320a6c9032221cfa7e6f2f711"} Feb 19 14:31:03 crc kubenswrapper[4861]: I0219 14:31:03.880142 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnb84" event={"ID":"3bfa9b04-d3e8-431d-9ebc-325f50b94fea","Type":"ContainerStarted","Data":"b42f71252496b0c220d78379a511e45a4836a7ff466675632fb1fc401ff3be75"} Feb 19 14:31:04 crc kubenswrapper[4861]: I0219 14:31:04.891567 4861 generic.go:334] "Generic (PLEG): container finished" podID="3bfa9b04-d3e8-431d-9ebc-325f50b94fea" containerID="800a0d11263fc9d3317d2c02939116fa51f4a121d1dab127ac2d2b7af57ce6c7" exitCode=0 Feb 19 14:31:04 crc kubenswrapper[4861]: I0219 14:31:04.891693 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnb84" event={"ID":"3bfa9b04-d3e8-431d-9ebc-325f50b94fea","Type":"ContainerDied","Data":"800a0d11263fc9d3317d2c02939116fa51f4a121d1dab127ac2d2b7af57ce6c7"} Feb 19 14:31:04 crc kubenswrapper[4861]: I0219 14:31:04.976439 4861 scope.go:117] "RemoveContainer" containerID="ea1fb9dda6615e7bd0de449175c0c11a8d4523b9efb63c9fbf83b974def3474f" Feb 19 14:31:05 crc kubenswrapper[4861]: I0219 14:31:05.903014 4861 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnb84" event={"ID":"3bfa9b04-d3e8-431d-9ebc-325f50b94fea","Type":"ContainerStarted","Data":"74c3dca838686fefd3c2971f86cf9c46c63d7fb77b0a3a53fbd017b44b9492df"} Feb 19 14:31:05 crc kubenswrapper[4861]: I0219 14:31:05.905210 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"56bbaab0b2802fda73b95514c34af1462b3e65c35d413a549965bd83c5d35b4f"} Feb 19 14:31:05 crc kubenswrapper[4861]: I0219 14:31:05.930840 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hnb84" podStartSLOduration=2.3076120319999998 podStartE2EDuration="3.930811841s" podCreationTimestamp="2026-02-19 14:31:02 +0000 UTC" firstStartedPulling="2026-02-19 14:31:03.88198738 +0000 UTC m=+4878.543090618" lastFinishedPulling="2026-02-19 14:31:05.505187199 +0000 UTC m=+4880.166290427" observedRunningTime="2026-02-19 14:31:05.921899361 +0000 UTC m=+4880.583002599" watchObservedRunningTime="2026-02-19 14:31:05.930811841 +0000 UTC m=+4880.591915079" Feb 19 14:31:12 crc kubenswrapper[4861]: I0219 14:31:12.747802 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hnb84" Feb 19 14:31:12 crc kubenswrapper[4861]: I0219 14:31:12.749867 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hnb84" Feb 19 14:31:12 crc kubenswrapper[4861]: I0219 14:31:12.824982 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hnb84" Feb 19 14:31:13 crc kubenswrapper[4861]: I0219 14:31:13.045811 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hnb84" Feb 19 14:31:13 crc 
kubenswrapper[4861]: I0219 14:31:13.120560 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnb84"] Feb 19 14:31:15 crc kubenswrapper[4861]: I0219 14:31:15.001531 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hnb84" podUID="3bfa9b04-d3e8-431d-9ebc-325f50b94fea" containerName="registry-server" containerID="cri-o://74c3dca838686fefd3c2971f86cf9c46c63d7fb77b0a3a53fbd017b44b9492df" gracePeriod=2 Feb 19 14:31:15 crc kubenswrapper[4861]: I0219 14:31:15.495751 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hnb84" Feb 19 14:31:15 crc kubenswrapper[4861]: I0219 14:31:15.647570 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bfa9b04-d3e8-431d-9ebc-325f50b94fea-utilities\") pod \"3bfa9b04-d3e8-431d-9ebc-325f50b94fea\" (UID: \"3bfa9b04-d3e8-431d-9ebc-325f50b94fea\") " Feb 19 14:31:15 crc kubenswrapper[4861]: I0219 14:31:15.648122 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bfa9b04-d3e8-431d-9ebc-325f50b94fea-catalog-content\") pod \"3bfa9b04-d3e8-431d-9ebc-325f50b94fea\" (UID: \"3bfa9b04-d3e8-431d-9ebc-325f50b94fea\") " Feb 19 14:31:15 crc kubenswrapper[4861]: I0219 14:31:15.648405 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7vtg\" (UniqueName: \"kubernetes.io/projected/3bfa9b04-d3e8-431d-9ebc-325f50b94fea-kube-api-access-f7vtg\") pod \"3bfa9b04-d3e8-431d-9ebc-325f50b94fea\" (UID: \"3bfa9b04-d3e8-431d-9ebc-325f50b94fea\") " Feb 19 14:31:15 crc kubenswrapper[4861]: I0219 14:31:15.648781 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/3bfa9b04-d3e8-431d-9ebc-325f50b94fea-utilities" (OuterVolumeSpecName: "utilities") pod "3bfa9b04-d3e8-431d-9ebc-325f50b94fea" (UID: "3bfa9b04-d3e8-431d-9ebc-325f50b94fea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:31:15 crc kubenswrapper[4861]: I0219 14:31:15.649577 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bfa9b04-d3e8-431d-9ebc-325f50b94fea-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 14:31:15 crc kubenswrapper[4861]: I0219 14:31:15.660923 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bfa9b04-d3e8-431d-9ebc-325f50b94fea-kube-api-access-f7vtg" (OuterVolumeSpecName: "kube-api-access-f7vtg") pod "3bfa9b04-d3e8-431d-9ebc-325f50b94fea" (UID: "3bfa9b04-d3e8-431d-9ebc-325f50b94fea"). InnerVolumeSpecName "kube-api-access-f7vtg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:31:15 crc kubenswrapper[4861]: I0219 14:31:15.698760 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bfa9b04-d3e8-431d-9ebc-325f50b94fea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3bfa9b04-d3e8-431d-9ebc-325f50b94fea" (UID: "3bfa9b04-d3e8-431d-9ebc-325f50b94fea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:31:15 crc kubenswrapper[4861]: I0219 14:31:15.750826 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bfa9b04-d3e8-431d-9ebc-325f50b94fea-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 14:31:15 crc kubenswrapper[4861]: I0219 14:31:15.750885 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7vtg\" (UniqueName: \"kubernetes.io/projected/3bfa9b04-d3e8-431d-9ebc-325f50b94fea-kube-api-access-f7vtg\") on node \"crc\" DevicePath \"\"" Feb 19 14:31:16 crc kubenswrapper[4861]: I0219 14:31:16.013895 4861 generic.go:334] "Generic (PLEG): container finished" podID="3bfa9b04-d3e8-431d-9ebc-325f50b94fea" containerID="74c3dca838686fefd3c2971f86cf9c46c63d7fb77b0a3a53fbd017b44b9492df" exitCode=0 Feb 19 14:31:16 crc kubenswrapper[4861]: I0219 14:31:16.013961 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnb84" event={"ID":"3bfa9b04-d3e8-431d-9ebc-325f50b94fea","Type":"ContainerDied","Data":"74c3dca838686fefd3c2971f86cf9c46c63d7fb77b0a3a53fbd017b44b9492df"} Feb 19 14:31:16 crc kubenswrapper[4861]: I0219 14:31:16.013995 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hnb84" Feb 19 14:31:16 crc kubenswrapper[4861]: I0219 14:31:16.014027 4861 scope.go:117] "RemoveContainer" containerID="74c3dca838686fefd3c2971f86cf9c46c63d7fb77b0a3a53fbd017b44b9492df" Feb 19 14:31:16 crc kubenswrapper[4861]: I0219 14:31:16.014003 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnb84" event={"ID":"3bfa9b04-d3e8-431d-9ebc-325f50b94fea","Type":"ContainerDied","Data":"b42f71252496b0c220d78379a511e45a4836a7ff466675632fb1fc401ff3be75"} Feb 19 14:31:16 crc kubenswrapper[4861]: I0219 14:31:16.042893 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnb84"] Feb 19 14:31:16 crc kubenswrapper[4861]: I0219 14:31:16.050435 4861 scope.go:117] "RemoveContainer" containerID="800a0d11263fc9d3317d2c02939116fa51f4a121d1dab127ac2d2b7af57ce6c7" Feb 19 14:31:16 crc kubenswrapper[4861]: I0219 14:31:16.050481 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnb84"] Feb 19 14:31:16 crc kubenswrapper[4861]: I0219 14:31:16.078159 4861 scope.go:117] "RemoveContainer" containerID="03ccfd5de16a1341d7b448fe2097d283b0a91de320a6c9032221cfa7e6f2f711" Feb 19 14:31:16 crc kubenswrapper[4861]: I0219 14:31:16.113156 4861 scope.go:117] "RemoveContainer" containerID="74c3dca838686fefd3c2971f86cf9c46c63d7fb77b0a3a53fbd017b44b9492df" Feb 19 14:31:16 crc kubenswrapper[4861]: E0219 14:31:16.123483 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74c3dca838686fefd3c2971f86cf9c46c63d7fb77b0a3a53fbd017b44b9492df\": container with ID starting with 74c3dca838686fefd3c2971f86cf9c46c63d7fb77b0a3a53fbd017b44b9492df not found: ID does not exist" containerID="74c3dca838686fefd3c2971f86cf9c46c63d7fb77b0a3a53fbd017b44b9492df" Feb 19 14:31:16 crc kubenswrapper[4861]: I0219 14:31:16.123558 4861 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74c3dca838686fefd3c2971f86cf9c46c63d7fb77b0a3a53fbd017b44b9492df"} err="failed to get container status \"74c3dca838686fefd3c2971f86cf9c46c63d7fb77b0a3a53fbd017b44b9492df\": rpc error: code = NotFound desc = could not find container \"74c3dca838686fefd3c2971f86cf9c46c63d7fb77b0a3a53fbd017b44b9492df\": container with ID starting with 74c3dca838686fefd3c2971f86cf9c46c63d7fb77b0a3a53fbd017b44b9492df not found: ID does not exist" Feb 19 14:31:16 crc kubenswrapper[4861]: I0219 14:31:16.123614 4861 scope.go:117] "RemoveContainer" containerID="800a0d11263fc9d3317d2c02939116fa51f4a121d1dab127ac2d2b7af57ce6c7" Feb 19 14:31:16 crc kubenswrapper[4861]: E0219 14:31:16.124232 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"800a0d11263fc9d3317d2c02939116fa51f4a121d1dab127ac2d2b7af57ce6c7\": container with ID starting with 800a0d11263fc9d3317d2c02939116fa51f4a121d1dab127ac2d2b7af57ce6c7 not found: ID does not exist" containerID="800a0d11263fc9d3317d2c02939116fa51f4a121d1dab127ac2d2b7af57ce6c7" Feb 19 14:31:16 crc kubenswrapper[4861]: I0219 14:31:16.124302 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"800a0d11263fc9d3317d2c02939116fa51f4a121d1dab127ac2d2b7af57ce6c7"} err="failed to get container status \"800a0d11263fc9d3317d2c02939116fa51f4a121d1dab127ac2d2b7af57ce6c7\": rpc error: code = NotFound desc = could not find container \"800a0d11263fc9d3317d2c02939116fa51f4a121d1dab127ac2d2b7af57ce6c7\": container with ID starting with 800a0d11263fc9d3317d2c02939116fa51f4a121d1dab127ac2d2b7af57ce6c7 not found: ID does not exist" Feb 19 14:31:16 crc kubenswrapper[4861]: I0219 14:31:16.124347 4861 scope.go:117] "RemoveContainer" containerID="03ccfd5de16a1341d7b448fe2097d283b0a91de320a6c9032221cfa7e6f2f711" Feb 19 14:31:16 crc kubenswrapper[4861]: E0219 
14:31:16.125059 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03ccfd5de16a1341d7b448fe2097d283b0a91de320a6c9032221cfa7e6f2f711\": container with ID starting with 03ccfd5de16a1341d7b448fe2097d283b0a91de320a6c9032221cfa7e6f2f711 not found: ID does not exist" containerID="03ccfd5de16a1341d7b448fe2097d283b0a91de320a6c9032221cfa7e6f2f711" Feb 19 14:31:16 crc kubenswrapper[4861]: I0219 14:31:16.125132 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03ccfd5de16a1341d7b448fe2097d283b0a91de320a6c9032221cfa7e6f2f711"} err="failed to get container status \"03ccfd5de16a1341d7b448fe2097d283b0a91de320a6c9032221cfa7e6f2f711\": rpc error: code = NotFound desc = could not find container \"03ccfd5de16a1341d7b448fe2097d283b0a91de320a6c9032221cfa7e6f2f711\": container with ID starting with 03ccfd5de16a1341d7b448fe2097d283b0a91de320a6c9032221cfa7e6f2f711 not found: ID does not exist" Feb 19 14:31:17 crc kubenswrapper[4861]: I0219 14:31:17.993782 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bfa9b04-d3e8-431d-9ebc-325f50b94fea" path="/var/lib/kubelet/pods/3bfa9b04-d3e8-431d-9ebc-325f50b94fea/volumes" Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.683022 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f98b88745-dw5j8"] Feb 19 14:32:23 crc kubenswrapper[4861]: E0219 14:32:23.683797 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bfa9b04-d3e8-431d-9ebc-325f50b94fea" containerName="registry-server" Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.683808 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bfa9b04-d3e8-431d-9ebc-325f50b94fea" containerName="registry-server" Feb 19 14:32:23 crc kubenswrapper[4861]: E0219 14:32:23.683822 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bfa9b04-d3e8-431d-9ebc-325f50b94fea" 
containerName="extract-content" Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.683827 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bfa9b04-d3e8-431d-9ebc-325f50b94fea" containerName="extract-content" Feb 19 14:32:23 crc kubenswrapper[4861]: E0219 14:32:23.683851 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bfa9b04-d3e8-431d-9ebc-325f50b94fea" containerName="extract-utilities" Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.683857 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bfa9b04-d3e8-431d-9ebc-325f50b94fea" containerName="extract-utilities" Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.683980 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bfa9b04-d3e8-431d-9ebc-325f50b94fea" containerName="registry-server" Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.684635 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f98b88745-dw5j8" Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.689226 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.689475 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.689483 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.694361 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9d69655f7-fphtq"] Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.696016 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9d69655f7-fphtq" Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.698840 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-v5mf5" Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.699015 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f98b88745-dw5j8"] Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.699021 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.714741 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9d69655f7-fphtq"] Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.716633 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/030ee527-67d6-459d-8d66-e22229d02d08-config\") pod \"dnsmasq-dns-9d69655f7-fphtq\" (UID: \"030ee527-67d6-459d-8d66-e22229d02d08\") " pod="openstack/dnsmasq-dns-9d69655f7-fphtq" Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.716672 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lfw8\" (UniqueName: \"kubernetes.io/projected/030ee527-67d6-459d-8d66-e22229d02d08-kube-api-access-7lfw8\") pod \"dnsmasq-dns-9d69655f7-fphtq\" (UID: \"030ee527-67d6-459d-8d66-e22229d02d08\") " pod="openstack/dnsmasq-dns-9d69655f7-fphtq" Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.716848 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/030ee527-67d6-459d-8d66-e22229d02d08-dns-svc\") pod \"dnsmasq-dns-9d69655f7-fphtq\" (UID: \"030ee527-67d6-459d-8d66-e22229d02d08\") " pod="openstack/dnsmasq-dns-9d69655f7-fphtq" Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.716937 4861 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61883837-79f6-4b54-95a6-8c6833ad34bf-config\") pod \"dnsmasq-dns-6f98b88745-dw5j8\" (UID: \"61883837-79f6-4b54-95a6-8c6833ad34bf\") " pod="openstack/dnsmasq-dns-6f98b88745-dw5j8" Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.717057 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkq46\" (UniqueName: \"kubernetes.io/projected/61883837-79f6-4b54-95a6-8c6833ad34bf-kube-api-access-zkq46\") pod \"dnsmasq-dns-6f98b88745-dw5j8\" (UID: \"61883837-79f6-4b54-95a6-8c6833ad34bf\") " pod="openstack/dnsmasq-dns-6f98b88745-dw5j8" Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.818339 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/030ee527-67d6-459d-8d66-e22229d02d08-config\") pod \"dnsmasq-dns-9d69655f7-fphtq\" (UID: \"030ee527-67d6-459d-8d66-e22229d02d08\") " pod="openstack/dnsmasq-dns-9d69655f7-fphtq" Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.818672 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lfw8\" (UniqueName: \"kubernetes.io/projected/030ee527-67d6-459d-8d66-e22229d02d08-kube-api-access-7lfw8\") pod \"dnsmasq-dns-9d69655f7-fphtq\" (UID: \"030ee527-67d6-459d-8d66-e22229d02d08\") " pod="openstack/dnsmasq-dns-9d69655f7-fphtq" Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.818877 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/030ee527-67d6-459d-8d66-e22229d02d08-dns-svc\") pod \"dnsmasq-dns-9d69655f7-fphtq\" (UID: \"030ee527-67d6-459d-8d66-e22229d02d08\") " pod="openstack/dnsmasq-dns-9d69655f7-fphtq" Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.819016 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61883837-79f6-4b54-95a6-8c6833ad34bf-config\") pod \"dnsmasq-dns-6f98b88745-dw5j8\" (UID: \"61883837-79f6-4b54-95a6-8c6833ad34bf\") " pod="openstack/dnsmasq-dns-6f98b88745-dw5j8" Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.821551 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkq46\" (UniqueName: \"kubernetes.io/projected/61883837-79f6-4b54-95a6-8c6833ad34bf-kube-api-access-zkq46\") pod \"dnsmasq-dns-6f98b88745-dw5j8\" (UID: \"61883837-79f6-4b54-95a6-8c6833ad34bf\") " pod="openstack/dnsmasq-dns-6f98b88745-dw5j8" Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.820270 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61883837-79f6-4b54-95a6-8c6833ad34bf-config\") pod \"dnsmasq-dns-6f98b88745-dw5j8\" (UID: \"61883837-79f6-4b54-95a6-8c6833ad34bf\") " pod="openstack/dnsmasq-dns-6f98b88745-dw5j8" Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.819459 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/030ee527-67d6-459d-8d66-e22229d02d08-config\") pod \"dnsmasq-dns-9d69655f7-fphtq\" (UID: \"030ee527-67d6-459d-8d66-e22229d02d08\") " pod="openstack/dnsmasq-dns-9d69655f7-fphtq" Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.824113 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/030ee527-67d6-459d-8d66-e22229d02d08-dns-svc\") pod \"dnsmasq-dns-9d69655f7-fphtq\" (UID: \"030ee527-67d6-459d-8d66-e22229d02d08\") " pod="openstack/dnsmasq-dns-9d69655f7-fphtq" Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.895947 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f98b88745-dw5j8"] Feb 19 14:32:23 crc kubenswrapper[4861]: E0219 
14:32:23.896628 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-zkq46], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-6f98b88745-dw5j8" podUID="61883837-79f6-4b54-95a6-8c6833ad34bf" Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.925453 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f98b88745-dw5j8" Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.932737 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5699fdb769-s2hg7"] Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.933069 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f98b88745-dw5j8" Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.933913 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5699fdb769-s2hg7" Feb 19 14:32:23 crc kubenswrapper[4861]: I0219 14:32:23.953615 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5699fdb769-s2hg7"] Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.024982 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61883837-79f6-4b54-95a6-8c6833ad34bf-config\") pod \"61883837-79f6-4b54-95a6-8c6833ad34bf\" (UID: \"61883837-79f6-4b54-95a6-8c6833ad34bf\") " Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.025293 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45fa7b51-1099-4dd3-ba56-3615b336c694-dns-svc\") pod \"dnsmasq-dns-5699fdb769-s2hg7\" (UID: \"45fa7b51-1099-4dd3-ba56-3615b336c694\") " pod="openstack/dnsmasq-dns-5699fdb769-s2hg7" Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.025347 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45fa7b51-1099-4dd3-ba56-3615b336c694-config\") pod \"dnsmasq-dns-5699fdb769-s2hg7\" (UID: \"45fa7b51-1099-4dd3-ba56-3615b336c694\") " pod="openstack/dnsmasq-dns-5699fdb769-s2hg7" Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.025389 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61883837-79f6-4b54-95a6-8c6833ad34bf-config" (OuterVolumeSpecName: "config") pod "61883837-79f6-4b54-95a6-8c6833ad34bf" (UID: "61883837-79f6-4b54-95a6-8c6833ad34bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.025452 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzcnj\" (UniqueName: \"kubernetes.io/projected/45fa7b51-1099-4dd3-ba56-3615b336c694-kube-api-access-gzcnj\") pod \"dnsmasq-dns-5699fdb769-s2hg7\" (UID: \"45fa7b51-1099-4dd3-ba56-3615b336c694\") " pod="openstack/dnsmasq-dns-5699fdb769-s2hg7" Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.026007 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61883837-79f6-4b54-95a6-8c6833ad34bf-config\") on node \"crc\" DevicePath \"\"" Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.127327 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45fa7b51-1099-4dd3-ba56-3615b336c694-config\") pod \"dnsmasq-dns-5699fdb769-s2hg7\" (UID: \"45fa7b51-1099-4dd3-ba56-3615b336c694\") " pod="openstack/dnsmasq-dns-5699fdb769-s2hg7" Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.127405 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzcnj\" (UniqueName: 
\"kubernetes.io/projected/45fa7b51-1099-4dd3-ba56-3615b336c694-kube-api-access-gzcnj\") pod \"dnsmasq-dns-5699fdb769-s2hg7\" (UID: \"45fa7b51-1099-4dd3-ba56-3615b336c694\") " pod="openstack/dnsmasq-dns-5699fdb769-s2hg7" Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.127482 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45fa7b51-1099-4dd3-ba56-3615b336c694-dns-svc\") pod \"dnsmasq-dns-5699fdb769-s2hg7\" (UID: \"45fa7b51-1099-4dd3-ba56-3615b336c694\") " pod="openstack/dnsmasq-dns-5699fdb769-s2hg7" Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.128272 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45fa7b51-1099-4dd3-ba56-3615b336c694-dns-svc\") pod \"dnsmasq-dns-5699fdb769-s2hg7\" (UID: \"45fa7b51-1099-4dd3-ba56-3615b336c694\") " pod="openstack/dnsmasq-dns-5699fdb769-s2hg7" Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.128584 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45fa7b51-1099-4dd3-ba56-3615b336c694-config\") pod \"dnsmasq-dns-5699fdb769-s2hg7\" (UID: \"45fa7b51-1099-4dd3-ba56-3615b336c694\") " pod="openstack/dnsmasq-dns-5699fdb769-s2hg7" Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.296526 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lfw8\" (UniqueName: \"kubernetes.io/projected/030ee527-67d6-459d-8d66-e22229d02d08-kube-api-access-7lfw8\") pod \"dnsmasq-dns-9d69655f7-fphtq\" (UID: \"030ee527-67d6-459d-8d66-e22229d02d08\") " pod="openstack/dnsmasq-dns-9d69655f7-fphtq" Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.296538 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkq46\" (UniqueName: \"kubernetes.io/projected/61883837-79f6-4b54-95a6-8c6833ad34bf-kube-api-access-zkq46\") pod 
\"dnsmasq-dns-6f98b88745-dw5j8\" (UID: \"61883837-79f6-4b54-95a6-8c6833ad34bf\") " pod="openstack/dnsmasq-dns-6f98b88745-dw5j8" Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.297699 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzcnj\" (UniqueName: \"kubernetes.io/projected/45fa7b51-1099-4dd3-ba56-3615b336c694-kube-api-access-gzcnj\") pod \"dnsmasq-dns-5699fdb769-s2hg7\" (UID: \"45fa7b51-1099-4dd3-ba56-3615b336c694\") " pod="openstack/dnsmasq-dns-5699fdb769-s2hg7" Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.309711 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9d69655f7-fphtq" Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.331236 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkq46\" (UniqueName: \"kubernetes.io/projected/61883837-79f6-4b54-95a6-8c6833ad34bf-kube-api-access-zkq46\") pod \"61883837-79f6-4b54-95a6-8c6833ad34bf\" (UID: \"61883837-79f6-4b54-95a6-8c6833ad34bf\") " Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.334350 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61883837-79f6-4b54-95a6-8c6833ad34bf-kube-api-access-zkq46" (OuterVolumeSpecName: "kube-api-access-zkq46") pod "61883837-79f6-4b54-95a6-8c6833ad34bf" (UID: "61883837-79f6-4b54-95a6-8c6833ad34bf"). InnerVolumeSpecName "kube-api-access-zkq46". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.441044 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkq46\" (UniqueName: \"kubernetes.io/projected/61883837-79f6-4b54-95a6-8c6833ad34bf-kube-api-access-zkq46\") on node \"crc\" DevicePath \"\"" Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.512519 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5699fdb769-s2hg7"] Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.513049 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5699fdb769-s2hg7" Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.538451 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-589cf688cc-jhlrf"] Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.540198 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-589cf688cc-jhlrf" Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.555624 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-589cf688cc-jhlrf"] Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.645360 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45909a85-8b28-4c9e-bc14-4b5f0256792f-config\") pod \"dnsmasq-dns-589cf688cc-jhlrf\" (UID: \"45909a85-8b28-4c9e-bc14-4b5f0256792f\") " pod="openstack/dnsmasq-dns-589cf688cc-jhlrf" Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.645400 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45909a85-8b28-4c9e-bc14-4b5f0256792f-dns-svc\") pod \"dnsmasq-dns-589cf688cc-jhlrf\" (UID: \"45909a85-8b28-4c9e-bc14-4b5f0256792f\") " pod="openstack/dnsmasq-dns-589cf688cc-jhlrf" Feb 19 14:32:24 
crc kubenswrapper[4861]: I0219 14:32:24.645524 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74n6k\" (UniqueName: \"kubernetes.io/projected/45909a85-8b28-4c9e-bc14-4b5f0256792f-kube-api-access-74n6k\") pod \"dnsmasq-dns-589cf688cc-jhlrf\" (UID: \"45909a85-8b28-4c9e-bc14-4b5f0256792f\") " pod="openstack/dnsmasq-dns-589cf688cc-jhlrf" Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.668461 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9d69655f7-fphtq"] Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.747335 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45909a85-8b28-4c9e-bc14-4b5f0256792f-config\") pod \"dnsmasq-dns-589cf688cc-jhlrf\" (UID: \"45909a85-8b28-4c9e-bc14-4b5f0256792f\") " pod="openstack/dnsmasq-dns-589cf688cc-jhlrf" Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.747619 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45909a85-8b28-4c9e-bc14-4b5f0256792f-dns-svc\") pod \"dnsmasq-dns-589cf688cc-jhlrf\" (UID: \"45909a85-8b28-4c9e-bc14-4b5f0256792f\") " pod="openstack/dnsmasq-dns-589cf688cc-jhlrf" Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.747693 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74n6k\" (UniqueName: \"kubernetes.io/projected/45909a85-8b28-4c9e-bc14-4b5f0256792f-kube-api-access-74n6k\") pod \"dnsmasq-dns-589cf688cc-jhlrf\" (UID: \"45909a85-8b28-4c9e-bc14-4b5f0256792f\") " pod="openstack/dnsmasq-dns-589cf688cc-jhlrf" Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.748723 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45909a85-8b28-4c9e-bc14-4b5f0256792f-config\") pod \"dnsmasq-dns-589cf688cc-jhlrf\" (UID: 
\"45909a85-8b28-4c9e-bc14-4b5f0256792f\") " pod="openstack/dnsmasq-dns-589cf688cc-jhlrf" Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.749568 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45909a85-8b28-4c9e-bc14-4b5f0256792f-dns-svc\") pod \"dnsmasq-dns-589cf688cc-jhlrf\" (UID: \"45909a85-8b28-4c9e-bc14-4b5f0256792f\") " pod="openstack/dnsmasq-dns-589cf688cc-jhlrf" Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.769345 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74n6k\" (UniqueName: \"kubernetes.io/projected/45909a85-8b28-4c9e-bc14-4b5f0256792f-kube-api-access-74n6k\") pod \"dnsmasq-dns-589cf688cc-jhlrf\" (UID: \"45909a85-8b28-4c9e-bc14-4b5f0256792f\") " pod="openstack/dnsmasq-dns-589cf688cc-jhlrf" Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.902347 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-589cf688cc-jhlrf" Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.950663 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f98b88745-dw5j8" Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.953342 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9d69655f7-fphtq" event={"ID":"030ee527-67d6-459d-8d66-e22229d02d08","Type":"ContainerStarted","Data":"7ef439ca3ed6242585cfc76d498cafe938cbd492c7961b4e10391a8f7e7476cb"} Feb 19 14:32:24 crc kubenswrapper[4861]: I0219 14:32:24.953379 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9d69655f7-fphtq" event={"ID":"030ee527-67d6-459d-8d66-e22229d02d08","Type":"ContainerStarted","Data":"795f8686982e2260d7aeb8fb3610f0b873f63b96cb753b716330fd08117c2b0d"} Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.016467 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5699fdb769-s2hg7"] Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.026793 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f98b88745-dw5j8"] Feb 19 14:32:25 crc kubenswrapper[4861]: W0219 14:32:25.030567 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45fa7b51_1099_4dd3_ba56_3615b336c694.slice/crio-fb9bb84b5bd6eb969fffc9d6ddb6caf01747cbde2b25ff443d1e53e9b7ac0560 WatchSource:0}: Error finding container fb9bb84b5bd6eb969fffc9d6ddb6caf01747cbde2b25ff443d1e53e9b7ac0560: Status 404 returned error can't find the container with id fb9bb84b5bd6eb969fffc9d6ddb6caf01747cbde2b25ff443d1e53e9b7ac0560 Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.059716 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f98b88745-dw5j8"] Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.085840 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.088462 4861 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.095564 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.095565 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.096726 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.096675 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.101382 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.101610 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.101625 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-7bhvp" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.105302 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.157541 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f6277b46-234e-41fc-a3aa-12a8c020111f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.157583 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f6277b46-234e-41fc-a3aa-12a8c020111f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.157605 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f6277b46-234e-41fc-a3aa-12a8c020111f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.157623 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f6277b46-234e-41fc-a3aa-12a8c020111f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.157645 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f6277b46-234e-41fc-a3aa-12a8c020111f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.157683 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f6277b46-234e-41fc-a3aa-12a8c020111f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.157701 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f6277b46-234e-41fc-a3aa-12a8c020111f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.157719 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f6277b46-234e-41fc-a3aa-12a8c020111f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.157741 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f6277b46-234e-41fc-a3aa-12a8c020111f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.157758 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psgcd\" (UniqueName: \"kubernetes.io/projected/f6277b46-234e-41fc-a3aa-12a8c020111f-kube-api-access-psgcd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.157802 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-003d72e1-03bb-4405-9c14-caab9d3ef521\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-003d72e1-03bb-4405-9c14-caab9d3ef521\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.259629 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f6277b46-234e-41fc-a3aa-12a8c020111f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.259685 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f6277b46-234e-41fc-a3aa-12a8c020111f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.259715 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f6277b46-234e-41fc-a3aa-12a8c020111f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.259749 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f6277b46-234e-41fc-a3aa-12a8c020111f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.259778 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psgcd\" (UniqueName: \"kubernetes.io/projected/f6277b46-234e-41fc-a3aa-12a8c020111f-kube-api-access-psgcd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.259843 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"pvc-003d72e1-03bb-4405-9c14-caab9d3ef521\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-003d72e1-03bb-4405-9c14-caab9d3ef521\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.259881 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f6277b46-234e-41fc-a3aa-12a8c020111f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.259904 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f6277b46-234e-41fc-a3aa-12a8c020111f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.259930 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f6277b46-234e-41fc-a3aa-12a8c020111f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.259954 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f6277b46-234e-41fc-a3aa-12a8c020111f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.259983 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/f6277b46-234e-41fc-a3aa-12a8c020111f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.260853 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f6277b46-234e-41fc-a3aa-12a8c020111f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.261225 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f6277b46-234e-41fc-a3aa-12a8c020111f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.261854 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f6277b46-234e-41fc-a3aa-12a8c020111f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.262092 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f6277b46-234e-41fc-a3aa-12a8c020111f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.262463 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f6277b46-234e-41fc-a3aa-12a8c020111f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.264837 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f6277b46-234e-41fc-a3aa-12a8c020111f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.264869 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f6277b46-234e-41fc-a3aa-12a8c020111f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.265631 4861 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.265691 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-003d72e1-03bb-4405-9c14-caab9d3ef521\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-003d72e1-03bb-4405-9c14-caab9d3ef521\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4cc59c36db607d97947b675ec04df78c135f680d7e2e88615f93dc59e7ee1dac/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.266542 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f6277b46-234e-41fc-a3aa-12a8c020111f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.269153 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f6277b46-234e-41fc-a3aa-12a8c020111f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.275611 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psgcd\" (UniqueName: \"kubernetes.io/projected/f6277b46-234e-41fc-a3aa-12a8c020111f-kube-api-access-psgcd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.294194 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-003d72e1-03bb-4405-9c14-caab9d3ef521\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-003d72e1-03bb-4405-9c14-caab9d3ef521\") pod \"rabbitmq-cell1-server-0\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.418630 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.432318 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-589cf688cc-jhlrf"] Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.703752 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.704792 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.706351 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.706899 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.706919 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.707096 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.707099 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.707190 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-mcptc" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.707227 4861 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.738611 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.867564 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f34ab3e0-1b78-4407-9a06-a61c65390b14-config-data\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.867976 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f34ab3e0-1b78-4407-9a06-a61c65390b14-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.868010 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f34ab3e0-1b78-4407-9a06-a61c65390b14-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.868058 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f34ab3e0-1b78-4407-9a06-a61c65390b14-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.868094 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/f34ab3e0-1b78-4407-9a06-a61c65390b14-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.868125 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp79k\" (UniqueName: \"kubernetes.io/projected/f34ab3e0-1b78-4407-9a06-a61c65390b14-kube-api-access-vp79k\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.868157 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2bdad6c3-0e83-436c-9061-ca554343e44b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bdad6c3-0e83-436c-9061-ca554343e44b\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.868203 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f34ab3e0-1b78-4407-9a06-a61c65390b14-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.868223 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f34ab3e0-1b78-4407-9a06-a61c65390b14-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.868266 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/f34ab3e0-1b78-4407-9a06-a61c65390b14-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.868305 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f34ab3e0-1b78-4407-9a06-a61c65390b14-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.963958 4861 generic.go:334] "Generic (PLEG): container finished" podID="45fa7b51-1099-4dd3-ba56-3615b336c694" containerID="6c7d69f9b639d09ccb13cbcfd636e1b74402b46f849685d58c0a942dda8cec22" exitCode=0 Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.964076 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5699fdb769-s2hg7" event={"ID":"45fa7b51-1099-4dd3-ba56-3615b336c694","Type":"ContainerDied","Data":"6c7d69f9b639d09ccb13cbcfd636e1b74402b46f849685d58c0a942dda8cec22"} Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.964129 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5699fdb769-s2hg7" event={"ID":"45fa7b51-1099-4dd3-ba56-3615b336c694","Type":"ContainerStarted","Data":"fb9bb84b5bd6eb969fffc9d6ddb6caf01747cbde2b25ff443d1e53e9b7ac0560"} Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.966870 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589cf688cc-jhlrf" event={"ID":"45909a85-8b28-4c9e-bc14-4b5f0256792f","Type":"ContainerStarted","Data":"0986ef1f08ebad6551e555cefe59c5e1e31bc12794c1d16aaca19d38ef8b9c63"} Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.966915 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589cf688cc-jhlrf" 
event={"ID":"45909a85-8b28-4c9e-bc14-4b5f0256792f","Type":"ContainerStarted","Data":"1d26e356fbf1a28f4468638fee84949aaf5926c155163dc7374cdacc6c3e658a"} Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.969037 4861 generic.go:334] "Generic (PLEG): container finished" podID="030ee527-67d6-459d-8d66-e22229d02d08" containerID="7ef439ca3ed6242585cfc76d498cafe938cbd492c7961b4e10391a8f7e7476cb" exitCode=0 Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.969081 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9d69655f7-fphtq" event={"ID":"030ee527-67d6-459d-8d66-e22229d02d08","Type":"ContainerDied","Data":"7ef439ca3ed6242585cfc76d498cafe938cbd492c7961b4e10391a8f7e7476cb"} Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.969152 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f34ab3e0-1b78-4407-9a06-a61c65390b14-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.969214 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f34ab3e0-1b78-4407-9a06-a61c65390b14-config-data\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.969237 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f34ab3e0-1b78-4407-9a06-a61c65390b14-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.969263 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/f34ab3e0-1b78-4407-9a06-a61c65390b14-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.969305 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f34ab3e0-1b78-4407-9a06-a61c65390b14-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.969335 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f34ab3e0-1b78-4407-9a06-a61c65390b14-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.969363 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp79k\" (UniqueName: \"kubernetes.io/projected/f34ab3e0-1b78-4407-9a06-a61c65390b14-kube-api-access-vp79k\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.969395 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2bdad6c3-0e83-436c-9061-ca554343e44b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bdad6c3-0e83-436c-9061-ca554343e44b\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.969458 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f34ab3e0-1b78-4407-9a06-a61c65390b14-rabbitmq-confd\") pod 
\"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.969486 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f34ab3e0-1b78-4407-9a06-a61c65390b14-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.969515 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f34ab3e0-1b78-4407-9a06-a61c65390b14-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.970050 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f34ab3e0-1b78-4407-9a06-a61c65390b14-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.970441 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f34ab3e0-1b78-4407-9a06-a61c65390b14-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.971064 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f34ab3e0-1b78-4407-9a06-a61c65390b14-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 
14:32:25.972835 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f34ab3e0-1b78-4407-9a06-a61c65390b14-config-data\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.973261 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f34ab3e0-1b78-4407-9a06-a61c65390b14-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.975694 4861 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.975716 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2bdad6c3-0e83-436c-9061-ca554343e44b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bdad6c3-0e83-436c-9061-ca554343e44b\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/963be3595212dd7ba9de40643baf551b75ffee4393dbed6f0bf088e5f6bd3faf/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.976958 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f34ab3e0-1b78-4407-9a06-a61c65390b14-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:25 crc kubenswrapper[4861]: I0219 14:32:25.979202 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/f34ab3e0-1b78-4407-9a06-a61c65390b14-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:25.995307 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f34ab3e0-1b78-4407-9a06-a61c65390b14-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.000332 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61883837-79f6-4b54-95a6-8c6833ad34bf" path="/var/lib/kubelet/pods/61883837-79f6-4b54-95a6-8c6833ad34bf/volumes" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.028904 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f34ab3e0-1b78-4407-9a06-a61c65390b14-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.030086 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp79k\" (UniqueName: \"kubernetes.io/projected/f34ab3e0-1b78-4407-9a06-a61c65390b14-kube-api-access-vp79k\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.038099 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2bdad6c3-0e83-436c-9061-ca554343e44b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bdad6c3-0e83-436c-9061-ca554343e44b\") pod \"rabbitmq-server-0\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " pod="openstack/rabbitmq-server-0" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.062896 4861 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.089385 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.247141 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5699fdb769-s2hg7" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.326885 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 14:32:26 crc kubenswrapper[4861]: W0219 14:32:26.334574 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf34ab3e0_1b78_4407_9a06_a61c65390b14.slice/crio-fc475de29d52812c65d81ce3820fd058d9753e9b8f98807600d5e45cb0d906a4 WatchSource:0}: Error finding container fc475de29d52812c65d81ce3820fd058d9753e9b8f98807600d5e45cb0d906a4: Status 404 returned error can't find the container with id fc475de29d52812c65d81ce3820fd058d9753e9b8f98807600d5e45cb0d906a4 Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.374758 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45fa7b51-1099-4dd3-ba56-3615b336c694-config\") pod \"45fa7b51-1099-4dd3-ba56-3615b336c694\" (UID: \"45fa7b51-1099-4dd3-ba56-3615b336c694\") " Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.374859 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzcnj\" (UniqueName: \"kubernetes.io/projected/45fa7b51-1099-4dd3-ba56-3615b336c694-kube-api-access-gzcnj\") pod \"45fa7b51-1099-4dd3-ba56-3615b336c694\" (UID: \"45fa7b51-1099-4dd3-ba56-3615b336c694\") " Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.374893 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45fa7b51-1099-4dd3-ba56-3615b336c694-dns-svc\") pod \"45fa7b51-1099-4dd3-ba56-3615b336c694\" (UID: \"45fa7b51-1099-4dd3-ba56-3615b336c694\") " Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.381235 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45fa7b51-1099-4dd3-ba56-3615b336c694-kube-api-access-gzcnj" (OuterVolumeSpecName: "kube-api-access-gzcnj") pod "45fa7b51-1099-4dd3-ba56-3615b336c694" (UID: "45fa7b51-1099-4dd3-ba56-3615b336c694"). InnerVolumeSpecName "kube-api-access-gzcnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.392024 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45fa7b51-1099-4dd3-ba56-3615b336c694-config" (OuterVolumeSpecName: "config") pod "45fa7b51-1099-4dd3-ba56-3615b336c694" (UID: "45fa7b51-1099-4dd3-ba56-3615b336c694"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.395264 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45fa7b51-1099-4dd3-ba56-3615b336c694-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "45fa7b51-1099-4dd3-ba56-3615b336c694" (UID: "45fa7b51-1099-4dd3-ba56-3615b336c694"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.476110 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzcnj\" (UniqueName: \"kubernetes.io/projected/45fa7b51-1099-4dd3-ba56-3615b336c694-kube-api-access-gzcnj\") on node \"crc\" DevicePath \"\"" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.476272 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45fa7b51-1099-4dd3-ba56-3615b336c694-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.476362 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45fa7b51-1099-4dd3-ba56-3615b336c694-config\") on node \"crc\" DevicePath \"\"" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.500323 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 19 14:32:26 crc kubenswrapper[4861]: E0219 14:32:26.500606 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45fa7b51-1099-4dd3-ba56-3615b336c694" containerName="init" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.500622 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="45fa7b51-1099-4dd3-ba56-3615b336c694" containerName="init" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.500768 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="45fa7b51-1099-4dd3-ba56-3615b336c694" containerName="init" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.501441 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.503401 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.503557 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.505739 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-scn7t" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.506157 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.523328 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.536538 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.678991 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8766b935-3472-4f4b-b597-76af26470a29\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8766b935-3472-4f4b-b597-76af26470a29\") pod \"openstack-galera-0\" (UID: \"389dd318-a4b4-4dfc-b307-d6790470f8a1\") " pod="openstack/openstack-galera-0" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.679042 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/389dd318-a4b4-4dfc-b307-d6790470f8a1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"389dd318-a4b4-4dfc-b307-d6790470f8a1\") " pod="openstack/openstack-galera-0" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.679078 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/389dd318-a4b4-4dfc-b307-d6790470f8a1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"389dd318-a4b4-4dfc-b307-d6790470f8a1\") " pod="openstack/openstack-galera-0" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.679103 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-479qg\" (UniqueName: \"kubernetes.io/projected/389dd318-a4b4-4dfc-b307-d6790470f8a1-kube-api-access-479qg\") pod \"openstack-galera-0\" (UID: \"389dd318-a4b4-4dfc-b307-d6790470f8a1\") " pod="openstack/openstack-galera-0" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.679123 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/389dd318-a4b4-4dfc-b307-d6790470f8a1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"389dd318-a4b4-4dfc-b307-d6790470f8a1\") " pod="openstack/openstack-galera-0" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.679297 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/389dd318-a4b4-4dfc-b307-d6790470f8a1-config-data-default\") pod \"openstack-galera-0\" (UID: \"389dd318-a4b4-4dfc-b307-d6790470f8a1\") " pod="openstack/openstack-galera-0" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.679634 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/389dd318-a4b4-4dfc-b307-d6790470f8a1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"389dd318-a4b4-4dfc-b307-d6790470f8a1\") " pod="openstack/openstack-galera-0" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.679683 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/389dd318-a4b4-4dfc-b307-d6790470f8a1-kolla-config\") pod \"openstack-galera-0\" (UID: \"389dd318-a4b4-4dfc-b307-d6790470f8a1\") " pod="openstack/openstack-galera-0" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.781497 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/389dd318-a4b4-4dfc-b307-d6790470f8a1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"389dd318-a4b4-4dfc-b307-d6790470f8a1\") " pod="openstack/openstack-galera-0" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.781555 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/389dd318-a4b4-4dfc-b307-d6790470f8a1-kolla-config\") pod \"openstack-galera-0\" (UID: \"389dd318-a4b4-4dfc-b307-d6790470f8a1\") " pod="openstack/openstack-galera-0" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.781609 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8766b935-3472-4f4b-b597-76af26470a29\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8766b935-3472-4f4b-b597-76af26470a29\") pod \"openstack-galera-0\" (UID: \"389dd318-a4b4-4dfc-b307-d6790470f8a1\") " pod="openstack/openstack-galera-0" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.781640 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/389dd318-a4b4-4dfc-b307-d6790470f8a1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"389dd318-a4b4-4dfc-b307-d6790470f8a1\") " pod="openstack/openstack-galera-0" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.781680 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/389dd318-a4b4-4dfc-b307-d6790470f8a1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"389dd318-a4b4-4dfc-b307-d6790470f8a1\") " pod="openstack/openstack-galera-0" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.781712 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-479qg\" (UniqueName: \"kubernetes.io/projected/389dd318-a4b4-4dfc-b307-d6790470f8a1-kube-api-access-479qg\") pod \"openstack-galera-0\" (UID: \"389dd318-a4b4-4dfc-b307-d6790470f8a1\") " pod="openstack/openstack-galera-0" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.781734 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/389dd318-a4b4-4dfc-b307-d6790470f8a1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"389dd318-a4b4-4dfc-b307-d6790470f8a1\") " pod="openstack/openstack-galera-0" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.781759 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/389dd318-a4b4-4dfc-b307-d6790470f8a1-config-data-default\") pod \"openstack-galera-0\" (UID: \"389dd318-a4b4-4dfc-b307-d6790470f8a1\") " pod="openstack/openstack-galera-0" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.782160 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/389dd318-a4b4-4dfc-b307-d6790470f8a1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"389dd318-a4b4-4dfc-b307-d6790470f8a1\") " pod="openstack/openstack-galera-0" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.782821 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/389dd318-a4b4-4dfc-b307-d6790470f8a1-kolla-config\") pod \"openstack-galera-0\" (UID: 
\"389dd318-a4b4-4dfc-b307-d6790470f8a1\") " pod="openstack/openstack-galera-0" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.783057 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/389dd318-a4b4-4dfc-b307-d6790470f8a1-config-data-default\") pod \"openstack-galera-0\" (UID: \"389dd318-a4b4-4dfc-b307-d6790470f8a1\") " pod="openstack/openstack-galera-0" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.783538 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/389dd318-a4b4-4dfc-b307-d6790470f8a1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"389dd318-a4b4-4dfc-b307-d6790470f8a1\") " pod="openstack/openstack-galera-0" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.785673 4861 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.785703 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8766b935-3472-4f4b-b597-76af26470a29\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8766b935-3472-4f4b-b597-76af26470a29\") pod \"openstack-galera-0\" (UID: \"389dd318-a4b4-4dfc-b307-d6790470f8a1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5517fcf4e5d8c46875452c411396cd93546e9ef673d6a085a4276b587296df9c/globalmount\"" pod="openstack/openstack-galera-0" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.787115 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/389dd318-a4b4-4dfc-b307-d6790470f8a1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"389dd318-a4b4-4dfc-b307-d6790470f8a1\") " pod="openstack/openstack-galera-0" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.797272 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/389dd318-a4b4-4dfc-b307-d6790470f8a1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"389dd318-a4b4-4dfc-b307-d6790470f8a1\") " pod="openstack/openstack-galera-0" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.805697 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-479qg\" (UniqueName: \"kubernetes.io/projected/389dd318-a4b4-4dfc-b307-d6790470f8a1-kube-api-access-479qg\") pod \"openstack-galera-0\" (UID: \"389dd318-a4b4-4dfc-b307-d6790470f8a1\") " pod="openstack/openstack-galera-0" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.983249 4861 generic.go:334] "Generic (PLEG): container finished" podID="45909a85-8b28-4c9e-bc14-4b5f0256792f" containerID="0986ef1f08ebad6551e555cefe59c5e1e31bc12794c1d16aaca19d38ef8b9c63" exitCode=0 Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 
14:32:26.983328 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589cf688cc-jhlrf" event={"ID":"45909a85-8b28-4c9e-bc14-4b5f0256792f","Type":"ContainerDied","Data":"0986ef1f08ebad6551e555cefe59c5e1e31bc12794c1d16aaca19d38ef8b9c63"} Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.992912 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9d69655f7-fphtq" event={"ID":"030ee527-67d6-459d-8d66-e22229d02d08","Type":"ContainerStarted","Data":"defaf7977f00c255286daa754819bb04877f4ce7e443f5038ed64d1efd5d3d62"} Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.993845 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9d69655f7-fphtq" Feb 19 14:32:26 crc kubenswrapper[4861]: I0219 14:32:26.996383 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f34ab3e0-1b78-4407-9a06-a61c65390b14","Type":"ContainerStarted","Data":"fc475de29d52812c65d81ce3820fd058d9753e9b8f98807600d5e45cb0d906a4"} Feb 19 14:32:27 crc kubenswrapper[4861]: I0219 14:32:26.998253 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5699fdb769-s2hg7" event={"ID":"45fa7b51-1099-4dd3-ba56-3615b336c694","Type":"ContainerDied","Data":"fb9bb84b5bd6eb969fffc9d6ddb6caf01747cbde2b25ff443d1e53e9b7ac0560"} Feb 19 14:32:27 crc kubenswrapper[4861]: I0219 14:32:26.998319 4861 scope.go:117] "RemoveContainer" containerID="6c7d69f9b639d09ccb13cbcfd636e1b74402b46f849685d58c0a942dda8cec22" Feb 19 14:32:27 crc kubenswrapper[4861]: I0219 14:32:26.998503 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5699fdb769-s2hg7" Feb 19 14:32:27 crc kubenswrapper[4861]: I0219 14:32:27.003792 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f6277b46-234e-41fc-a3aa-12a8c020111f","Type":"ContainerStarted","Data":"4da0bf27f0fbd482c905dd4ebf1e10f5f283822a785fdebfe8593a90d885a8bc"} Feb 19 14:32:27 crc kubenswrapper[4861]: I0219 14:32:27.060007 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9d69655f7-fphtq" podStartSLOduration=4.059974118 podStartE2EDuration="4.059974118s" podCreationTimestamp="2026-02-19 14:32:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:32:27.032875167 +0000 UTC m=+4961.693978395" watchObservedRunningTime="2026-02-19 14:32:27.059974118 +0000 UTC m=+4961.721077386" Feb 19 14:32:27 crc kubenswrapper[4861]: I0219 14:32:27.080042 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5699fdb769-s2hg7"] Feb 19 14:32:27 crc kubenswrapper[4861]: I0219 14:32:27.088312 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5699fdb769-s2hg7"] Feb 19 14:32:27 crc kubenswrapper[4861]: I0219 14:32:27.121347 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8766b935-3472-4f4b-b597-76af26470a29\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8766b935-3472-4f4b-b597-76af26470a29\") pod \"openstack-galera-0\" (UID: \"389dd318-a4b4-4dfc-b307-d6790470f8a1\") " pod="openstack/openstack-galera-0" Feb 19 14:32:27 crc kubenswrapper[4861]: I0219 14:32:27.423393 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 19 14:32:27 crc kubenswrapper[4861]: I0219 14:32:27.988579 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45fa7b51-1099-4dd3-ba56-3615b336c694" path="/var/lib/kubelet/pods/45fa7b51-1099-4dd3-ba56-3615b336c694/volumes" Feb 19 14:32:27 crc kubenswrapper[4861]: I0219 14:32:27.998653 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.019416 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f6277b46-234e-41fc-a3aa-12a8c020111f","Type":"ContainerStarted","Data":"727e343c0d53746bd08e9f5e1ac9a94f5e0aef7c43edc91aa9064806a6684d9b"} Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.026106 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589cf688cc-jhlrf" event={"ID":"45909a85-8b28-4c9e-bc14-4b5f0256792f","Type":"ContainerStarted","Data":"1824b0a482586c59201474fd2f539a2c3cf1d060b6145e73e011320f1b1e8d1f"} Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.026866 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-589cf688cc-jhlrf" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.030550 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f34ab3e0-1b78-4407-9a06-a61c65390b14","Type":"ContainerStarted","Data":"97fd45c46f613bd34f1e8dedb094888097899058df1093ed1b8a421f0370434c"} Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.085864 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-589cf688cc-jhlrf" podStartSLOduration=4.085848227 podStartE2EDuration="4.085848227s" podCreationTimestamp="2026-02-19 14:32:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 14:32:28.085087597 +0000 UTC m=+4962.746190835" watchObservedRunningTime="2026-02-19 14:32:28.085848227 +0000 UTC m=+4962.746951455" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.149748 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.153142 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.156186 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-9p9dk" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.156607 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.156749 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.156874 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.159610 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.314268 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/083aff08-66c5-4e71-8f1e-6fabd56dab6c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"083aff08-66c5-4e71-8f1e-6fabd56dab6c\") " pod="openstack/openstack-cell1-galera-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.314362 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/083aff08-66c5-4e71-8f1e-6fabd56dab6c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"083aff08-66c5-4e71-8f1e-6fabd56dab6c\") " pod="openstack/openstack-cell1-galera-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.314400 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/083aff08-66c5-4e71-8f1e-6fabd56dab6c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"083aff08-66c5-4e71-8f1e-6fabd56dab6c\") " pod="openstack/openstack-cell1-galera-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.314550 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/083aff08-66c5-4e71-8f1e-6fabd56dab6c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"083aff08-66c5-4e71-8f1e-6fabd56dab6c\") " pod="openstack/openstack-cell1-galera-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.314606 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/083aff08-66c5-4e71-8f1e-6fabd56dab6c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"083aff08-66c5-4e71-8f1e-6fabd56dab6c\") " pod="openstack/openstack-cell1-galera-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.314635 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/083aff08-66c5-4e71-8f1e-6fabd56dab6c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"083aff08-66c5-4e71-8f1e-6fabd56dab6c\") " pod="openstack/openstack-cell1-galera-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.314686 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-lhnjj\" (UniqueName: \"kubernetes.io/projected/083aff08-66c5-4e71-8f1e-6fabd56dab6c-kube-api-access-lhnjj\") pod \"openstack-cell1-galera-0\" (UID: \"083aff08-66c5-4e71-8f1e-6fabd56dab6c\") " pod="openstack/openstack-cell1-galera-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.314754 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9cbf96fb-94d3-429b-8c4e-b84be6d062fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9cbf96fb-94d3-429b-8c4e-b84be6d062fd\") pod \"openstack-cell1-galera-0\" (UID: \"083aff08-66c5-4e71-8f1e-6fabd56dab6c\") " pod="openstack/openstack-cell1-galera-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.382603 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.383495 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.385712 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.386185 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-h5467" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.386495 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.402930 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.415599 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/083aff08-66c5-4e71-8f1e-6fabd56dab6c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: 
\"083aff08-66c5-4e71-8f1e-6fabd56dab6c\") " pod="openstack/openstack-cell1-galera-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.415647 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/083aff08-66c5-4e71-8f1e-6fabd56dab6c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"083aff08-66c5-4e71-8f1e-6fabd56dab6c\") " pod="openstack/openstack-cell1-galera-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.415669 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/083aff08-66c5-4e71-8f1e-6fabd56dab6c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"083aff08-66c5-4e71-8f1e-6fabd56dab6c\") " pod="openstack/openstack-cell1-galera-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.415693 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhnjj\" (UniqueName: \"kubernetes.io/projected/083aff08-66c5-4e71-8f1e-6fabd56dab6c-kube-api-access-lhnjj\") pod \"openstack-cell1-galera-0\" (UID: \"083aff08-66c5-4e71-8f1e-6fabd56dab6c\") " pod="openstack/openstack-cell1-galera-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.415737 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9cbf96fb-94d3-429b-8c4e-b84be6d062fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9cbf96fb-94d3-429b-8c4e-b84be6d062fd\") pod \"openstack-cell1-galera-0\" (UID: \"083aff08-66c5-4e71-8f1e-6fabd56dab6c\") " pod="openstack/openstack-cell1-galera-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.415781 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/083aff08-66c5-4e71-8f1e-6fabd56dab6c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: 
\"083aff08-66c5-4e71-8f1e-6fabd56dab6c\") " pod="openstack/openstack-cell1-galera-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.415800 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/083aff08-66c5-4e71-8f1e-6fabd56dab6c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"083aff08-66c5-4e71-8f1e-6fabd56dab6c\") " pod="openstack/openstack-cell1-galera-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.415832 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/083aff08-66c5-4e71-8f1e-6fabd56dab6c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"083aff08-66c5-4e71-8f1e-6fabd56dab6c\") " pod="openstack/openstack-cell1-galera-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.416778 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/083aff08-66c5-4e71-8f1e-6fabd56dab6c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"083aff08-66c5-4e71-8f1e-6fabd56dab6c\") " pod="openstack/openstack-cell1-galera-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.416790 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/083aff08-66c5-4e71-8f1e-6fabd56dab6c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"083aff08-66c5-4e71-8f1e-6fabd56dab6c\") " pod="openstack/openstack-cell1-galera-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.417035 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/083aff08-66c5-4e71-8f1e-6fabd56dab6c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"083aff08-66c5-4e71-8f1e-6fabd56dab6c\") " 
pod="openstack/openstack-cell1-galera-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.418150 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/083aff08-66c5-4e71-8f1e-6fabd56dab6c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"083aff08-66c5-4e71-8f1e-6fabd56dab6c\") " pod="openstack/openstack-cell1-galera-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.426805 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/083aff08-66c5-4e71-8f1e-6fabd56dab6c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"083aff08-66c5-4e71-8f1e-6fabd56dab6c\") " pod="openstack/openstack-cell1-galera-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.431066 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/083aff08-66c5-4e71-8f1e-6fabd56dab6c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"083aff08-66c5-4e71-8f1e-6fabd56dab6c\") " pod="openstack/openstack-cell1-galera-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.431683 4861 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.431712 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9cbf96fb-94d3-429b-8c4e-b84be6d062fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9cbf96fb-94d3-429b-8c4e-b84be6d062fd\") pod \"openstack-cell1-galera-0\" (UID: \"083aff08-66c5-4e71-8f1e-6fabd56dab6c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f6c7b4767d7f209eb50b78544c7d96a93a75fcea130bf614d55821aba250452e/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.438349 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhnjj\" (UniqueName: \"kubernetes.io/projected/083aff08-66c5-4e71-8f1e-6fabd56dab6c-kube-api-access-lhnjj\") pod \"openstack-cell1-galera-0\" (UID: \"083aff08-66c5-4e71-8f1e-6fabd56dab6c\") " pod="openstack/openstack-cell1-galera-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.466881 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9cbf96fb-94d3-429b-8c4e-b84be6d062fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9cbf96fb-94d3-429b-8c4e-b84be6d062fd\") pod \"openstack-cell1-galera-0\" (UID: \"083aff08-66c5-4e71-8f1e-6fabd56dab6c\") " pod="openstack/openstack-cell1-galera-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.489948 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.516547 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4\") " pod="openstack/memcached-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.517044 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4\") " pod="openstack/memcached-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.517199 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fzcq\" (UniqueName: \"kubernetes.io/projected/9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4-kube-api-access-6fzcq\") pod \"memcached-0\" (UID: \"9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4\") " pod="openstack/memcached-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.517303 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4-config-data\") pod \"memcached-0\" (UID: \"9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4\") " pod="openstack/memcached-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.517490 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4-kolla-config\") pod \"memcached-0\" (UID: \"9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4\") " pod="openstack/memcached-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 
14:32:28.619200 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4-config-data\") pod \"memcached-0\" (UID: \"9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4\") " pod="openstack/memcached-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.619281 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4-kolla-config\") pod \"memcached-0\" (UID: \"9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4\") " pod="openstack/memcached-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.619320 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4\") " pod="openstack/memcached-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.619360 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4\") " pod="openstack/memcached-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.619385 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fzcq\" (UniqueName: \"kubernetes.io/projected/9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4-kube-api-access-6fzcq\") pod \"memcached-0\" (UID: \"9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4\") " pod="openstack/memcached-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.620124 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4-kolla-config\") pod 
\"memcached-0\" (UID: \"9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4\") " pod="openstack/memcached-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.620650 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4-config-data\") pod \"memcached-0\" (UID: \"9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4\") " pod="openstack/memcached-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.623504 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4\") " pod="openstack/memcached-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.626057 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4\") " pod="openstack/memcached-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.653603 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fzcq\" (UniqueName: \"kubernetes.io/projected/9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4-kube-api-access-6fzcq\") pod \"memcached-0\" (UID: \"9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4\") " pod="openstack/memcached-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.700518 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 19 14:32:28 crc kubenswrapper[4861]: I0219 14:32:28.930133 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 14:32:28 crc kubenswrapper[4861]: W0219 14:32:28.930731 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod083aff08_66c5_4e71_8f1e_6fabd56dab6c.slice/crio-aa5afffe586ac045a4c9cb77c8aad57b4088d7d19488a8ec5bf48b4d89a8d179 WatchSource:0}: Error finding container aa5afffe586ac045a4c9cb77c8aad57b4088d7d19488a8ec5bf48b4d89a8d179: Status 404 returned error can't find the container with id aa5afffe586ac045a4c9cb77c8aad57b4088d7d19488a8ec5bf48b4d89a8d179 Feb 19 14:32:29 crc kubenswrapper[4861]: I0219 14:32:29.040501 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"389dd318-a4b4-4dfc-b307-d6790470f8a1","Type":"ContainerStarted","Data":"ff310aceb7e30159ce835a20470dc20dd5991ab62302d30dfca34fe449cbdb77"} Feb 19 14:32:29 crc kubenswrapper[4861]: I0219 14:32:29.040555 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"389dd318-a4b4-4dfc-b307-d6790470f8a1","Type":"ContainerStarted","Data":"2d3103413723e33b13f53199fcd0132b67a894b2a3c110c806d2e56176ee2e11"} Feb 19 14:32:29 crc kubenswrapper[4861]: I0219 14:32:29.045241 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"083aff08-66c5-4e71-8f1e-6fabd56dab6c","Type":"ContainerStarted","Data":"aa5afffe586ac045a4c9cb77c8aad57b4088d7d19488a8ec5bf48b4d89a8d179"} Feb 19 14:32:29 crc kubenswrapper[4861]: I0219 14:32:29.170083 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 19 14:32:30 crc kubenswrapper[4861]: I0219 14:32:30.056382 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
event={"ID":"9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4","Type":"ContainerStarted","Data":"b9a4a7202a3a1e8809d63a5d7c48a29136559acd164fdf73b5af41578b8d381d"} Feb 19 14:32:30 crc kubenswrapper[4861]: I0219 14:32:30.056871 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 19 14:32:30 crc kubenswrapper[4861]: I0219 14:32:30.056933 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4","Type":"ContainerStarted","Data":"c609236693170e61c1bf061f6e6d06fb4748528c3eac7237b6a582661f1b1a60"} Feb 19 14:32:30 crc kubenswrapper[4861]: I0219 14:32:30.058252 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"083aff08-66c5-4e71-8f1e-6fabd56dab6c","Type":"ContainerStarted","Data":"c637bdca2abcb2cf7e9d793ca74212992ae6b1f5d1c43cd79da3a41b61e047b0"} Feb 19 14:32:30 crc kubenswrapper[4861]: I0219 14:32:30.084628 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.084601429 podStartE2EDuration="2.084601429s" podCreationTimestamp="2026-02-19 14:32:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:32:30.078556556 +0000 UTC m=+4964.739659824" watchObservedRunningTime="2026-02-19 14:32:30.084601429 +0000 UTC m=+4964.745704677" Feb 19 14:32:33 crc kubenswrapper[4861]: I0219 14:32:33.081363 4861 generic.go:334] "Generic (PLEG): container finished" podID="389dd318-a4b4-4dfc-b307-d6790470f8a1" containerID="ff310aceb7e30159ce835a20470dc20dd5991ab62302d30dfca34fe449cbdb77" exitCode=0 Feb 19 14:32:33 crc kubenswrapper[4861]: I0219 14:32:33.081996 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"389dd318-a4b4-4dfc-b307-d6790470f8a1","Type":"ContainerDied","Data":"ff310aceb7e30159ce835a20470dc20dd5991ab62302d30dfca34fe449cbdb77"} Feb 19 14:32:34 crc kubenswrapper[4861]: I0219 14:32:34.095084 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"389dd318-a4b4-4dfc-b307-d6790470f8a1","Type":"ContainerStarted","Data":"d49072905735d78a75a59e562c445ae8c45fd7924cf1f6f542867907c0076cd7"} Feb 19 14:32:34 crc kubenswrapper[4861]: I0219 14:32:34.136897 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=9.136872818 podStartE2EDuration="9.136872818s" podCreationTimestamp="2026-02-19 14:32:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:32:34.128870812 +0000 UTC m=+4968.789974080" watchObservedRunningTime="2026-02-19 14:32:34.136872818 +0000 UTC m=+4968.797976056" Feb 19 14:32:34 crc kubenswrapper[4861]: I0219 14:32:34.310648 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9d69655f7-fphtq" Feb 19 14:32:34 crc kubenswrapper[4861]: I0219 14:32:34.904646 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-589cf688cc-jhlrf" Feb 19 14:32:34 crc kubenswrapper[4861]: I0219 14:32:34.967992 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9d69655f7-fphtq"] Feb 19 14:32:35 crc kubenswrapper[4861]: I0219 14:32:35.118598 4861 generic.go:334] "Generic (PLEG): container finished" podID="083aff08-66c5-4e71-8f1e-6fabd56dab6c" containerID="c637bdca2abcb2cf7e9d793ca74212992ae6b1f5d1c43cd79da3a41b61e047b0" exitCode=0 Feb 19 14:32:35 crc kubenswrapper[4861]: I0219 14:32:35.118702 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"083aff08-66c5-4e71-8f1e-6fabd56dab6c","Type":"ContainerDied","Data":"c637bdca2abcb2cf7e9d793ca74212992ae6b1f5d1c43cd79da3a41b61e047b0"} Feb 19 14:32:35 crc kubenswrapper[4861]: I0219 14:32:35.119300 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9d69655f7-fphtq" podUID="030ee527-67d6-459d-8d66-e22229d02d08" containerName="dnsmasq-dns" containerID="cri-o://defaf7977f00c255286daa754819bb04877f4ce7e443f5038ed64d1efd5d3d62" gracePeriod=10 Feb 19 14:32:35 crc kubenswrapper[4861]: I0219 14:32:35.535343 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9d69655f7-fphtq" Feb 19 14:32:35 crc kubenswrapper[4861]: I0219 14:32:35.627269 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/030ee527-67d6-459d-8d66-e22229d02d08-config\") pod \"030ee527-67d6-459d-8d66-e22229d02d08\" (UID: \"030ee527-67d6-459d-8d66-e22229d02d08\") " Feb 19 14:32:35 crc kubenswrapper[4861]: I0219 14:32:35.627483 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/030ee527-67d6-459d-8d66-e22229d02d08-dns-svc\") pod \"030ee527-67d6-459d-8d66-e22229d02d08\" (UID: \"030ee527-67d6-459d-8d66-e22229d02d08\") " Feb 19 14:32:35 crc kubenswrapper[4861]: I0219 14:32:35.627505 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lfw8\" (UniqueName: \"kubernetes.io/projected/030ee527-67d6-459d-8d66-e22229d02d08-kube-api-access-7lfw8\") pod \"030ee527-67d6-459d-8d66-e22229d02d08\" (UID: \"030ee527-67d6-459d-8d66-e22229d02d08\") " Feb 19 14:32:35 crc kubenswrapper[4861]: I0219 14:32:35.896064 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/030ee527-67d6-459d-8d66-e22229d02d08-kube-api-access-7lfw8" (OuterVolumeSpecName: 
"kube-api-access-7lfw8") pod "030ee527-67d6-459d-8d66-e22229d02d08" (UID: "030ee527-67d6-459d-8d66-e22229d02d08"). InnerVolumeSpecName "kube-api-access-7lfw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:32:35 crc kubenswrapper[4861]: I0219 14:32:35.932220 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lfw8\" (UniqueName: \"kubernetes.io/projected/030ee527-67d6-459d-8d66-e22229d02d08-kube-api-access-7lfw8\") on node \"crc\" DevicePath \"\"" Feb 19 14:32:36 crc kubenswrapper[4861]: I0219 14:32:36.044772 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/030ee527-67d6-459d-8d66-e22229d02d08-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "030ee527-67d6-459d-8d66-e22229d02d08" (UID: "030ee527-67d6-459d-8d66-e22229d02d08"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:32:36 crc kubenswrapper[4861]: I0219 14:32:36.050741 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/030ee527-67d6-459d-8d66-e22229d02d08-config" (OuterVolumeSpecName: "config") pod "030ee527-67d6-459d-8d66-e22229d02d08" (UID: "030ee527-67d6-459d-8d66-e22229d02d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:32:36 crc kubenswrapper[4861]: I0219 14:32:36.133919 4861 generic.go:334] "Generic (PLEG): container finished" podID="030ee527-67d6-459d-8d66-e22229d02d08" containerID="defaf7977f00c255286daa754819bb04877f4ce7e443f5038ed64d1efd5d3d62" exitCode=0 Feb 19 14:32:36 crc kubenswrapper[4861]: I0219 14:32:36.134003 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9d69655f7-fphtq" Feb 19 14:32:36 crc kubenswrapper[4861]: I0219 14:32:36.134049 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9d69655f7-fphtq" event={"ID":"030ee527-67d6-459d-8d66-e22229d02d08","Type":"ContainerDied","Data":"defaf7977f00c255286daa754819bb04877f4ce7e443f5038ed64d1efd5d3d62"} Feb 19 14:32:36 crc kubenswrapper[4861]: I0219 14:32:36.134087 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9d69655f7-fphtq" event={"ID":"030ee527-67d6-459d-8d66-e22229d02d08","Type":"ContainerDied","Data":"795f8686982e2260d7aeb8fb3610f0b873f63b96cb753b716330fd08117c2b0d"} Feb 19 14:32:36 crc kubenswrapper[4861]: I0219 14:32:36.134107 4861 scope.go:117] "RemoveContainer" containerID="defaf7977f00c255286daa754819bb04877f4ce7e443f5038ed64d1efd5d3d62" Feb 19 14:32:36 crc kubenswrapper[4861]: I0219 14:32:36.136104 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"083aff08-66c5-4e71-8f1e-6fabd56dab6c","Type":"ContainerStarted","Data":"5ff802600b81f9c783ba52ca9ee0908d46eefcad5a7f1efe058a4e7d41964070"} Feb 19 14:32:36 crc kubenswrapper[4861]: I0219 14:32:36.138500 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/030ee527-67d6-459d-8d66-e22229d02d08-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 14:32:36 crc kubenswrapper[4861]: I0219 14:32:36.138666 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/030ee527-67d6-459d-8d66-e22229d02d08-config\") on node \"crc\" DevicePath \"\"" Feb 19 14:32:36 crc kubenswrapper[4861]: I0219 14:32:36.170261 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=9.170238302 podStartE2EDuration="9.170238302s" podCreationTimestamp="2026-02-19 14:32:27 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:32:36.165821833 +0000 UTC m=+4970.826925071" watchObservedRunningTime="2026-02-19 14:32:36.170238302 +0000 UTC m=+4970.831341530" Feb 19 14:32:36 crc kubenswrapper[4861]: I0219 14:32:36.181501 4861 scope.go:117] "RemoveContainer" containerID="7ef439ca3ed6242585cfc76d498cafe938cbd492c7961b4e10391a8f7e7476cb" Feb 19 14:32:36 crc kubenswrapper[4861]: I0219 14:32:36.188547 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9d69655f7-fphtq"] Feb 19 14:32:36 crc kubenswrapper[4861]: I0219 14:32:36.200113 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9d69655f7-fphtq"] Feb 19 14:32:36 crc kubenswrapper[4861]: I0219 14:32:36.206062 4861 scope.go:117] "RemoveContainer" containerID="defaf7977f00c255286daa754819bb04877f4ce7e443f5038ed64d1efd5d3d62" Feb 19 14:32:36 crc kubenswrapper[4861]: E0219 14:32:36.206565 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"defaf7977f00c255286daa754819bb04877f4ce7e443f5038ed64d1efd5d3d62\": container with ID starting with defaf7977f00c255286daa754819bb04877f4ce7e443f5038ed64d1efd5d3d62 not found: ID does not exist" containerID="defaf7977f00c255286daa754819bb04877f4ce7e443f5038ed64d1efd5d3d62" Feb 19 14:32:36 crc kubenswrapper[4861]: I0219 14:32:36.206619 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"defaf7977f00c255286daa754819bb04877f4ce7e443f5038ed64d1efd5d3d62"} err="failed to get container status \"defaf7977f00c255286daa754819bb04877f4ce7e443f5038ed64d1efd5d3d62\": rpc error: code = NotFound desc = could not find container \"defaf7977f00c255286daa754819bb04877f4ce7e443f5038ed64d1efd5d3d62\": container with ID starting with defaf7977f00c255286daa754819bb04877f4ce7e443f5038ed64d1efd5d3d62 not found: ID does 
not exist" Feb 19 14:32:36 crc kubenswrapper[4861]: I0219 14:32:36.206642 4861 scope.go:117] "RemoveContainer" containerID="7ef439ca3ed6242585cfc76d498cafe938cbd492c7961b4e10391a8f7e7476cb" Feb 19 14:32:36 crc kubenswrapper[4861]: E0219 14:32:36.207038 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ef439ca3ed6242585cfc76d498cafe938cbd492c7961b4e10391a8f7e7476cb\": container with ID starting with 7ef439ca3ed6242585cfc76d498cafe938cbd492c7961b4e10391a8f7e7476cb not found: ID does not exist" containerID="7ef439ca3ed6242585cfc76d498cafe938cbd492c7961b4e10391a8f7e7476cb" Feb 19 14:32:36 crc kubenswrapper[4861]: I0219 14:32:36.207172 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ef439ca3ed6242585cfc76d498cafe938cbd492c7961b4e10391a8f7e7476cb"} err="failed to get container status \"7ef439ca3ed6242585cfc76d498cafe938cbd492c7961b4e10391a8f7e7476cb\": rpc error: code = NotFound desc = could not find container \"7ef439ca3ed6242585cfc76d498cafe938cbd492c7961b4e10391a8f7e7476cb\": container with ID starting with 7ef439ca3ed6242585cfc76d498cafe938cbd492c7961b4e10391a8f7e7476cb not found: ID does not exist" Feb 19 14:32:37 crc kubenswrapper[4861]: I0219 14:32:37.424367 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 19 14:32:37 crc kubenswrapper[4861]: I0219 14:32:37.424759 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 19 14:32:37 crc kubenswrapper[4861]: I0219 14:32:37.828740 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 19 14:32:37 crc kubenswrapper[4861]: I0219 14:32:37.996725 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="030ee527-67d6-459d-8d66-e22229d02d08" 
path="/var/lib/kubelet/pods/030ee527-67d6-459d-8d66-e22229d02d08/volumes" Feb 19 14:32:38 crc kubenswrapper[4861]: E0219 14:32:38.237012 4861 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.177:60634->38.102.83.177:44601: read tcp 38.102.83.177:60634->38.102.83.177:44601: read: connection reset by peer Feb 19 14:32:38 crc kubenswrapper[4861]: I0219 14:32:38.291182 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 19 14:32:38 crc kubenswrapper[4861]: I0219 14:32:38.491496 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 19 14:32:38 crc kubenswrapper[4861]: I0219 14:32:38.491667 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 19 14:32:38 crc kubenswrapper[4861]: I0219 14:32:38.701700 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 19 14:32:40 crc kubenswrapper[4861]: I0219 14:32:40.895243 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 19 14:32:41 crc kubenswrapper[4861]: I0219 14:32:41.025586 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 19 14:32:45 crc kubenswrapper[4861]: I0219 14:32:45.491191 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-t97jb"] Feb 19 14:32:45 crc kubenswrapper[4861]: E0219 14:32:45.491573 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="030ee527-67d6-459d-8d66-e22229d02d08" containerName="dnsmasq-dns" Feb 19 14:32:45 crc kubenswrapper[4861]: I0219 14:32:45.491587 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="030ee527-67d6-459d-8d66-e22229d02d08" containerName="dnsmasq-dns" Feb 19 14:32:45 crc kubenswrapper[4861]: E0219 
14:32:45.491615 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="030ee527-67d6-459d-8d66-e22229d02d08" containerName="init" Feb 19 14:32:45 crc kubenswrapper[4861]: I0219 14:32:45.491622 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="030ee527-67d6-459d-8d66-e22229d02d08" containerName="init" Feb 19 14:32:45 crc kubenswrapper[4861]: I0219 14:32:45.491794 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="030ee527-67d6-459d-8d66-e22229d02d08" containerName="dnsmasq-dns" Feb 19 14:32:45 crc kubenswrapper[4861]: I0219 14:32:45.492364 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-t97jb" Feb 19 14:32:45 crc kubenswrapper[4861]: I0219 14:32:45.495181 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 19 14:32:45 crc kubenswrapper[4861]: I0219 14:32:45.509858 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-t97jb"] Feb 19 14:32:45 crc kubenswrapper[4861]: I0219 14:32:45.605197 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24f5de12-a15f-4230-829a-cd69a8793adb-operator-scripts\") pod \"root-account-create-update-t97jb\" (UID: \"24f5de12-a15f-4230-829a-cd69a8793adb\") " pod="openstack/root-account-create-update-t97jb" Feb 19 14:32:45 crc kubenswrapper[4861]: I0219 14:32:45.605269 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l9h6\" (UniqueName: \"kubernetes.io/projected/24f5de12-a15f-4230-829a-cd69a8793adb-kube-api-access-6l9h6\") pod \"root-account-create-update-t97jb\" (UID: \"24f5de12-a15f-4230-829a-cd69a8793adb\") " pod="openstack/root-account-create-update-t97jb" Feb 19 14:32:45 crc kubenswrapper[4861]: I0219 14:32:45.705996 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l9h6\" (UniqueName: \"kubernetes.io/projected/24f5de12-a15f-4230-829a-cd69a8793adb-kube-api-access-6l9h6\") pod \"root-account-create-update-t97jb\" (UID: \"24f5de12-a15f-4230-829a-cd69a8793adb\") " pod="openstack/root-account-create-update-t97jb" Feb 19 14:32:45 crc kubenswrapper[4861]: I0219 14:32:45.706338 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24f5de12-a15f-4230-829a-cd69a8793adb-operator-scripts\") pod \"root-account-create-update-t97jb\" (UID: \"24f5de12-a15f-4230-829a-cd69a8793adb\") " pod="openstack/root-account-create-update-t97jb" Feb 19 14:32:45 crc kubenswrapper[4861]: I0219 14:32:45.706997 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24f5de12-a15f-4230-829a-cd69a8793adb-operator-scripts\") pod \"root-account-create-update-t97jb\" (UID: \"24f5de12-a15f-4230-829a-cd69a8793adb\") " pod="openstack/root-account-create-update-t97jb" Feb 19 14:32:45 crc kubenswrapper[4861]: I0219 14:32:45.724654 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l9h6\" (UniqueName: \"kubernetes.io/projected/24f5de12-a15f-4230-829a-cd69a8793adb-kube-api-access-6l9h6\") pod \"root-account-create-update-t97jb\" (UID: \"24f5de12-a15f-4230-829a-cd69a8793adb\") " pod="openstack/root-account-create-update-t97jb" Feb 19 14:32:45 crc kubenswrapper[4861]: I0219 14:32:45.811311 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-t97jb" Feb 19 14:32:46 crc kubenswrapper[4861]: I0219 14:32:46.272150 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-t97jb"] Feb 19 14:32:46 crc kubenswrapper[4861]: W0219 14:32:46.603736 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24f5de12_a15f_4230_829a_cd69a8793adb.slice/crio-3da416fd16afa5523d86c366efea1d2a07b9c9920739af5e4bacd50d8e60b15f WatchSource:0}: Error finding container 3da416fd16afa5523d86c366efea1d2a07b9c9920739af5e4bacd50d8e60b15f: Status 404 returned error can't find the container with id 3da416fd16afa5523d86c366efea1d2a07b9c9920739af5e4bacd50d8e60b15f Feb 19 14:32:46 crc kubenswrapper[4861]: I0219 14:32:46.610928 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 19 14:32:47 crc kubenswrapper[4861]: I0219 14:32:47.250948 4861 generic.go:334] "Generic (PLEG): container finished" podID="24f5de12-a15f-4230-829a-cd69a8793adb" containerID="c45290ada5c3afcf171e042cbb0899403cb495f6e88106a9d8a87de6882797a7" exitCode=0 Feb 19 14:32:47 crc kubenswrapper[4861]: I0219 14:32:47.251100 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-t97jb" event={"ID":"24f5de12-a15f-4230-829a-cd69a8793adb","Type":"ContainerDied","Data":"c45290ada5c3afcf171e042cbb0899403cb495f6e88106a9d8a87de6882797a7"} Feb 19 14:32:47 crc kubenswrapper[4861]: I0219 14:32:47.251515 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-t97jb" event={"ID":"24f5de12-a15f-4230-829a-cd69a8793adb","Type":"ContainerStarted","Data":"3da416fd16afa5523d86c366efea1d2a07b9c9920739af5e4bacd50d8e60b15f"} Feb 19 14:32:48 crc kubenswrapper[4861]: I0219 14:32:48.964261 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-t97jb" Feb 19 14:32:49 crc kubenswrapper[4861]: I0219 14:32:49.068619 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24f5de12-a15f-4230-829a-cd69a8793adb-operator-scripts\") pod \"24f5de12-a15f-4230-829a-cd69a8793adb\" (UID: \"24f5de12-a15f-4230-829a-cd69a8793adb\") " Feb 19 14:32:49 crc kubenswrapper[4861]: I0219 14:32:49.068935 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6l9h6\" (UniqueName: \"kubernetes.io/projected/24f5de12-a15f-4230-829a-cd69a8793adb-kube-api-access-6l9h6\") pod \"24f5de12-a15f-4230-829a-cd69a8793adb\" (UID: \"24f5de12-a15f-4230-829a-cd69a8793adb\") " Feb 19 14:32:49 crc kubenswrapper[4861]: I0219 14:32:49.069801 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24f5de12-a15f-4230-829a-cd69a8793adb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "24f5de12-a15f-4230-829a-cd69a8793adb" (UID: "24f5de12-a15f-4230-829a-cd69a8793adb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:32:49 crc kubenswrapper[4861]: I0219 14:32:49.081777 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24f5de12-a15f-4230-829a-cd69a8793adb-kube-api-access-6l9h6" (OuterVolumeSpecName: "kube-api-access-6l9h6") pod "24f5de12-a15f-4230-829a-cd69a8793adb" (UID: "24f5de12-a15f-4230-829a-cd69a8793adb"). InnerVolumeSpecName "kube-api-access-6l9h6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:32:49 crc kubenswrapper[4861]: I0219 14:32:49.171292 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24f5de12-a15f-4230-829a-cd69a8793adb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:32:49 crc kubenswrapper[4861]: I0219 14:32:49.171344 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6l9h6\" (UniqueName: \"kubernetes.io/projected/24f5de12-a15f-4230-829a-cd69a8793adb-kube-api-access-6l9h6\") on node \"crc\" DevicePath \"\"" Feb 19 14:32:49 crc kubenswrapper[4861]: I0219 14:32:49.273411 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-t97jb" event={"ID":"24f5de12-a15f-4230-829a-cd69a8793adb","Type":"ContainerDied","Data":"3da416fd16afa5523d86c366efea1d2a07b9c9920739af5e4bacd50d8e60b15f"} Feb 19 14:32:49 crc kubenswrapper[4861]: I0219 14:32:49.273490 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3da416fd16afa5523d86c366efea1d2a07b9c9920739af5e4bacd50d8e60b15f" Feb 19 14:32:49 crc kubenswrapper[4861]: I0219 14:32:49.273491 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-t97jb" Feb 19 14:32:52 crc kubenswrapper[4861]: I0219 14:32:52.117253 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-t97jb"] Feb 19 14:32:52 crc kubenswrapper[4861]: I0219 14:32:52.135076 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-t97jb"] Feb 19 14:32:53 crc kubenswrapper[4861]: I0219 14:32:53.994268 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24f5de12-a15f-4230-829a-cd69a8793adb" path="/var/lib/kubelet/pods/24f5de12-a15f-4230-829a-cd69a8793adb/volumes" Feb 19 14:32:57 crc kubenswrapper[4861]: I0219 14:32:57.123457 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-lqkrc"] Feb 19 14:32:57 crc kubenswrapper[4861]: E0219 14:32:57.126269 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24f5de12-a15f-4230-829a-cd69a8793adb" containerName="mariadb-account-create-update" Feb 19 14:32:57 crc kubenswrapper[4861]: I0219 14:32:57.126555 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="24f5de12-a15f-4230-829a-cd69a8793adb" containerName="mariadb-account-create-update" Feb 19 14:32:57 crc kubenswrapper[4861]: I0219 14:32:57.127159 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="24f5de12-a15f-4230-829a-cd69a8793adb" containerName="mariadb-account-create-update" Feb 19 14:32:57 crc kubenswrapper[4861]: I0219 14:32:57.128405 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lqkrc" Feb 19 14:32:57 crc kubenswrapper[4861]: I0219 14:32:57.131522 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 19 14:32:57 crc kubenswrapper[4861]: I0219 14:32:57.136100 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lqkrc"] Feb 19 14:32:57 crc kubenswrapper[4861]: I0219 14:32:57.229391 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1994ba6-a5ca-410a-a998-e82972a07ecd-operator-scripts\") pod \"root-account-create-update-lqkrc\" (UID: \"c1994ba6-a5ca-410a-a998-e82972a07ecd\") " pod="openstack/root-account-create-update-lqkrc" Feb 19 14:32:57 crc kubenswrapper[4861]: I0219 14:32:57.229726 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmz5f\" (UniqueName: \"kubernetes.io/projected/c1994ba6-a5ca-410a-a998-e82972a07ecd-kube-api-access-lmz5f\") pod \"root-account-create-update-lqkrc\" (UID: \"c1994ba6-a5ca-410a-a998-e82972a07ecd\") " pod="openstack/root-account-create-update-lqkrc" Feb 19 14:32:57 crc kubenswrapper[4861]: I0219 14:32:57.331573 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1994ba6-a5ca-410a-a998-e82972a07ecd-operator-scripts\") pod \"root-account-create-update-lqkrc\" (UID: \"c1994ba6-a5ca-410a-a998-e82972a07ecd\") " pod="openstack/root-account-create-update-lqkrc" Feb 19 14:32:57 crc kubenswrapper[4861]: I0219 14:32:57.331655 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmz5f\" (UniqueName: \"kubernetes.io/projected/c1994ba6-a5ca-410a-a998-e82972a07ecd-kube-api-access-lmz5f\") pod \"root-account-create-update-lqkrc\" (UID: 
\"c1994ba6-a5ca-410a-a998-e82972a07ecd\") " pod="openstack/root-account-create-update-lqkrc" Feb 19 14:32:57 crc kubenswrapper[4861]: I0219 14:32:57.332522 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1994ba6-a5ca-410a-a998-e82972a07ecd-operator-scripts\") pod \"root-account-create-update-lqkrc\" (UID: \"c1994ba6-a5ca-410a-a998-e82972a07ecd\") " pod="openstack/root-account-create-update-lqkrc" Feb 19 14:32:57 crc kubenswrapper[4861]: I0219 14:32:57.371919 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmz5f\" (UniqueName: \"kubernetes.io/projected/c1994ba6-a5ca-410a-a998-e82972a07ecd-kube-api-access-lmz5f\") pod \"root-account-create-update-lqkrc\" (UID: \"c1994ba6-a5ca-410a-a998-e82972a07ecd\") " pod="openstack/root-account-create-update-lqkrc" Feb 19 14:32:57 crc kubenswrapper[4861]: I0219 14:32:57.453393 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lqkrc" Feb 19 14:32:57 crc kubenswrapper[4861]: I0219 14:32:57.997594 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lqkrc"] Feb 19 14:32:58 crc kubenswrapper[4861]: I0219 14:32:58.366405 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lqkrc" event={"ID":"c1994ba6-a5ca-410a-a998-e82972a07ecd","Type":"ContainerStarted","Data":"7758324d50f17937a95fa5dd794d47d161d0b3c5bba9c0ca7a12e73f10c62c1c"} Feb 19 14:32:58 crc kubenswrapper[4861]: I0219 14:32:58.366945 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lqkrc" event={"ID":"c1994ba6-a5ca-410a-a998-e82972a07ecd","Type":"ContainerStarted","Data":"48c40bc90178395c3203d1d4b8d9f5c3708b9f46fc7691f6e187be5fda7bb6cd"} Feb 19 14:32:58 crc kubenswrapper[4861]: I0219 14:32:58.397244 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-lqkrc" podStartSLOduration=1.397219997 podStartE2EDuration="1.397219997s" podCreationTimestamp="2026-02-19 14:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:32:58.388726338 +0000 UTC m=+4993.049829606" watchObservedRunningTime="2026-02-19 14:32:58.397219997 +0000 UTC m=+4993.058323255" Feb 19 14:32:59 crc kubenswrapper[4861]: I0219 14:32:59.379332 4861 generic.go:334] "Generic (PLEG): container finished" podID="f6277b46-234e-41fc-a3aa-12a8c020111f" containerID="727e343c0d53746bd08e9f5e1ac9a94f5e0aef7c43edc91aa9064806a6684d9b" exitCode=0 Feb 19 14:32:59 crc kubenswrapper[4861]: I0219 14:32:59.379384 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"f6277b46-234e-41fc-a3aa-12a8c020111f","Type":"ContainerDied","Data":"727e343c0d53746bd08e9f5e1ac9a94f5e0aef7c43edc91aa9064806a6684d9b"} Feb 19 14:32:59 crc kubenswrapper[4861]: I0219 14:32:59.382356 4861 generic.go:334] "Generic (PLEG): container finished" podID="c1994ba6-a5ca-410a-a998-e82972a07ecd" containerID="7758324d50f17937a95fa5dd794d47d161d0b3c5bba9c0ca7a12e73f10c62c1c" exitCode=0 Feb 19 14:32:59 crc kubenswrapper[4861]: I0219 14:32:59.382401 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lqkrc" event={"ID":"c1994ba6-a5ca-410a-a998-e82972a07ecd","Type":"ContainerDied","Data":"7758324d50f17937a95fa5dd794d47d161d0b3c5bba9c0ca7a12e73f10c62c1c"} Feb 19 14:33:00 crc kubenswrapper[4861]: I0219 14:33:00.392914 4861 generic.go:334] "Generic (PLEG): container finished" podID="f34ab3e0-1b78-4407-9a06-a61c65390b14" containerID="97fd45c46f613bd34f1e8dedb094888097899058df1093ed1b8a421f0370434c" exitCode=0 Feb 19 14:33:00 crc kubenswrapper[4861]: I0219 14:33:00.393066 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f34ab3e0-1b78-4407-9a06-a61c65390b14","Type":"ContainerDied","Data":"97fd45c46f613bd34f1e8dedb094888097899058df1093ed1b8a421f0370434c"} Feb 19 14:33:00 crc kubenswrapper[4861]: I0219 14:33:00.397626 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f6277b46-234e-41fc-a3aa-12a8c020111f","Type":"ContainerStarted","Data":"24b85095894d2efd9fd70cb000a7671765df5908ba42abc18ceab90dd41c81c7"} Feb 19 14:33:00 crc kubenswrapper[4861]: I0219 14:33:00.398107 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:00 crc kubenswrapper[4861]: I0219 14:33:00.467501 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.467480886 
podStartE2EDuration="36.467480886s" podCreationTimestamp="2026-02-19 14:32:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:33:00.465947515 +0000 UTC m=+4995.127050743" watchObservedRunningTime="2026-02-19 14:33:00.467480886 +0000 UTC m=+4995.128584114" Feb 19 14:33:01 crc kubenswrapper[4861]: I0219 14:33:01.411724 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f34ab3e0-1b78-4407-9a06-a61c65390b14","Type":"ContainerStarted","Data":"b87265a5b453bc07c2d5ff0d669dfbab48f34876d0b306c7cb02ef2ff949eb08"} Feb 19 14:33:01 crc kubenswrapper[4861]: I0219 14:33:01.412407 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 19 14:33:01 crc kubenswrapper[4861]: I0219 14:33:01.413647 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lqkrc" event={"ID":"c1994ba6-a5ca-410a-a998-e82972a07ecd","Type":"ContainerDied","Data":"48c40bc90178395c3203d1d4b8d9f5c3708b9f46fc7691f6e187be5fda7bb6cd"} Feb 19 14:33:01 crc kubenswrapper[4861]: I0219 14:33:01.413724 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48c40bc90178395c3203d1d4b8d9f5c3708b9f46fc7691f6e187be5fda7bb6cd" Feb 19 14:33:01 crc kubenswrapper[4861]: I0219 14:33:01.450046 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.449888054 podStartE2EDuration="37.449888054s" podCreationTimestamp="2026-02-19 14:32:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:33:01.438794895 +0000 UTC m=+4996.099898153" watchObservedRunningTime="2026-02-19 14:33:01.449888054 +0000 UTC m=+4996.110991342" Feb 19 14:33:01 crc kubenswrapper[4861]: I0219 14:33:01.555192 4861 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lqkrc" Feb 19 14:33:01 crc kubenswrapper[4861]: I0219 14:33:01.706507 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1994ba6-a5ca-410a-a998-e82972a07ecd-operator-scripts\") pod \"c1994ba6-a5ca-410a-a998-e82972a07ecd\" (UID: \"c1994ba6-a5ca-410a-a998-e82972a07ecd\") " Feb 19 14:33:01 crc kubenswrapper[4861]: I0219 14:33:01.706737 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmz5f\" (UniqueName: \"kubernetes.io/projected/c1994ba6-a5ca-410a-a998-e82972a07ecd-kube-api-access-lmz5f\") pod \"c1994ba6-a5ca-410a-a998-e82972a07ecd\" (UID: \"c1994ba6-a5ca-410a-a998-e82972a07ecd\") " Feb 19 14:33:01 crc kubenswrapper[4861]: I0219 14:33:01.707151 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1994ba6-a5ca-410a-a998-e82972a07ecd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c1994ba6-a5ca-410a-a998-e82972a07ecd" (UID: "c1994ba6-a5ca-410a-a998-e82972a07ecd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:33:01 crc kubenswrapper[4861]: I0219 14:33:01.713862 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1994ba6-a5ca-410a-a998-e82972a07ecd-kube-api-access-lmz5f" (OuterVolumeSpecName: "kube-api-access-lmz5f") pod "c1994ba6-a5ca-410a-a998-e82972a07ecd" (UID: "c1994ba6-a5ca-410a-a998-e82972a07ecd"). InnerVolumeSpecName "kube-api-access-lmz5f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:33:01 crc kubenswrapper[4861]: I0219 14:33:01.808709 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmz5f\" (UniqueName: \"kubernetes.io/projected/c1994ba6-a5ca-410a-a998-e82972a07ecd-kube-api-access-lmz5f\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:01 crc kubenswrapper[4861]: I0219 14:33:01.808749 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1994ba6-a5ca-410a-a998-e82972a07ecd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:02 crc kubenswrapper[4861]: I0219 14:33:02.421694 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lqkrc" Feb 19 14:33:15 crc kubenswrapper[4861]: I0219 14:33:15.425656 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:16 crc kubenswrapper[4861]: I0219 14:33:16.093941 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 19 14:33:20 crc kubenswrapper[4861]: I0219 14:33:20.127201 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54dc9c94cc-kpx92"] Feb 19 14:33:20 crc kubenswrapper[4861]: E0219 14:33:20.128024 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1994ba6-a5ca-410a-a998-e82972a07ecd" containerName="mariadb-account-create-update" Feb 19 14:33:20 crc kubenswrapper[4861]: I0219 14:33:20.128046 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1994ba6-a5ca-410a-a998-e82972a07ecd" containerName="mariadb-account-create-update" Feb 19 14:33:20 crc kubenswrapper[4861]: I0219 14:33:20.128294 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1994ba6-a5ca-410a-a998-e82972a07ecd" containerName="mariadb-account-create-update" Feb 19 14:33:20 crc kubenswrapper[4861]: 
I0219 14:33:20.129518 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54dc9c94cc-kpx92" Feb 19 14:33:20 crc kubenswrapper[4861]: I0219 14:33:20.151981 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54dc9c94cc-kpx92"] Feb 19 14:33:20 crc kubenswrapper[4861]: I0219 14:33:20.235033 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c-dns-svc\") pod \"dnsmasq-dns-54dc9c94cc-kpx92\" (UID: \"bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c\") " pod="openstack/dnsmasq-dns-54dc9c94cc-kpx92" Feb 19 14:33:20 crc kubenswrapper[4861]: I0219 14:33:20.235112 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg9r4\" (UniqueName: \"kubernetes.io/projected/bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c-kube-api-access-fg9r4\") pod \"dnsmasq-dns-54dc9c94cc-kpx92\" (UID: \"bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c\") " pod="openstack/dnsmasq-dns-54dc9c94cc-kpx92" Feb 19 14:33:20 crc kubenswrapper[4861]: I0219 14:33:20.235244 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c-config\") pod \"dnsmasq-dns-54dc9c94cc-kpx92\" (UID: \"bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c\") " pod="openstack/dnsmasq-dns-54dc9c94cc-kpx92" Feb 19 14:33:20 crc kubenswrapper[4861]: I0219 14:33:20.336873 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c-dns-svc\") pod \"dnsmasq-dns-54dc9c94cc-kpx92\" (UID: \"bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c\") " pod="openstack/dnsmasq-dns-54dc9c94cc-kpx92" Feb 19 14:33:20 crc kubenswrapper[4861]: I0219 14:33:20.337289 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fg9r4\" (UniqueName: \"kubernetes.io/projected/bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c-kube-api-access-fg9r4\") pod \"dnsmasq-dns-54dc9c94cc-kpx92\" (UID: \"bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c\") " pod="openstack/dnsmasq-dns-54dc9c94cc-kpx92" Feb 19 14:33:20 crc kubenswrapper[4861]: I0219 14:33:20.337326 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c-config\") pod \"dnsmasq-dns-54dc9c94cc-kpx92\" (UID: \"bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c\") " pod="openstack/dnsmasq-dns-54dc9c94cc-kpx92" Feb 19 14:33:20 crc kubenswrapper[4861]: I0219 14:33:20.338185 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c-dns-svc\") pod \"dnsmasq-dns-54dc9c94cc-kpx92\" (UID: \"bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c\") " pod="openstack/dnsmasq-dns-54dc9c94cc-kpx92" Feb 19 14:33:20 crc kubenswrapper[4861]: I0219 14:33:20.338252 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c-config\") pod \"dnsmasq-dns-54dc9c94cc-kpx92\" (UID: \"bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c\") " pod="openstack/dnsmasq-dns-54dc9c94cc-kpx92" Feb 19 14:33:20 crc kubenswrapper[4861]: I0219 14:33:20.372435 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg9r4\" (UniqueName: \"kubernetes.io/projected/bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c-kube-api-access-fg9r4\") pod \"dnsmasq-dns-54dc9c94cc-kpx92\" (UID: \"bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c\") " pod="openstack/dnsmasq-dns-54dc9c94cc-kpx92" Feb 19 14:33:20 crc kubenswrapper[4861]: I0219 14:33:20.508462 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54dc9c94cc-kpx92" Feb 19 14:33:20 crc kubenswrapper[4861]: I0219 14:33:20.888893 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 14:33:21 crc kubenswrapper[4861]: I0219 14:33:21.031571 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54dc9c94cc-kpx92"] Feb 19 14:33:21 crc kubenswrapper[4861]: I0219 14:33:21.532700 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 14:33:21 crc kubenswrapper[4861]: I0219 14:33:21.616726 4861 generic.go:334] "Generic (PLEG): container finished" podID="bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c" containerID="1683b3b0813a65a16b6d3e3b0923bdd441cff0fde33ab158846432ca2d0f7f8d" exitCode=0 Feb 19 14:33:21 crc kubenswrapper[4861]: I0219 14:33:21.616775 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dc9c94cc-kpx92" event={"ID":"bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c","Type":"ContainerDied","Data":"1683b3b0813a65a16b6d3e3b0923bdd441cff0fde33ab158846432ca2d0f7f8d"} Feb 19 14:33:21 crc kubenswrapper[4861]: I0219 14:33:21.616804 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dc9c94cc-kpx92" event={"ID":"bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c","Type":"ContainerStarted","Data":"f662e7ecc12fb4bc9869e77212ab18d3232c62934da0b20d99c0b6ee28bc8fd4"} Feb 19 14:33:22 crc kubenswrapper[4861]: I0219 14:33:22.625626 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dc9c94cc-kpx92" event={"ID":"bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c","Type":"ContainerStarted","Data":"6737d4f979f22de964de4ff8043abf584cc250047f2ffb9e8da565bfc46f9561"} Feb 19 14:33:22 crc kubenswrapper[4861]: I0219 14:33:22.626234 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54dc9c94cc-kpx92" Feb 19 14:33:22 crc kubenswrapper[4861]: I0219 14:33:22.661819 4861 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54dc9c94cc-kpx92" podStartSLOduration=2.66179406 podStartE2EDuration="2.66179406s" podCreationTimestamp="2026-02-19 14:33:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:33:22.658762728 +0000 UTC m=+5017.319865956" watchObservedRunningTime="2026-02-19 14:33:22.66179406 +0000 UTC m=+5017.322897318" Feb 19 14:33:22 crc kubenswrapper[4861]: I0219 14:33:22.688004 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-66wxr"] Feb 19 14:33:22 crc kubenswrapper[4861]: I0219 14:33:22.689806 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-66wxr" Feb 19 14:33:22 crc kubenswrapper[4861]: I0219 14:33:22.701278 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-66wxr"] Feb 19 14:33:22 crc kubenswrapper[4861]: I0219 14:33:22.781463 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ncd5\" (UniqueName: \"kubernetes.io/projected/12fd7701-b67d-481e-952e-dc3acef69fc2-kube-api-access-8ncd5\") pod \"community-operators-66wxr\" (UID: \"12fd7701-b67d-481e-952e-dc3acef69fc2\") " pod="openshift-marketplace/community-operators-66wxr" Feb 19 14:33:22 crc kubenswrapper[4861]: I0219 14:33:22.781557 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12fd7701-b67d-481e-952e-dc3acef69fc2-utilities\") pod \"community-operators-66wxr\" (UID: \"12fd7701-b67d-481e-952e-dc3acef69fc2\") " pod="openshift-marketplace/community-operators-66wxr" Feb 19 14:33:22 crc kubenswrapper[4861]: I0219 14:33:22.781636 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12fd7701-b67d-481e-952e-dc3acef69fc2-catalog-content\") pod \"community-operators-66wxr\" (UID: \"12fd7701-b67d-481e-952e-dc3acef69fc2\") " pod="openshift-marketplace/community-operators-66wxr" Feb 19 14:33:22 crc kubenswrapper[4861]: I0219 14:33:22.883092 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12fd7701-b67d-481e-952e-dc3acef69fc2-catalog-content\") pod \"community-operators-66wxr\" (UID: \"12fd7701-b67d-481e-952e-dc3acef69fc2\") " pod="openshift-marketplace/community-operators-66wxr" Feb 19 14:33:22 crc kubenswrapper[4861]: I0219 14:33:22.883265 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ncd5\" (UniqueName: \"kubernetes.io/projected/12fd7701-b67d-481e-952e-dc3acef69fc2-kube-api-access-8ncd5\") pod \"community-operators-66wxr\" (UID: \"12fd7701-b67d-481e-952e-dc3acef69fc2\") " pod="openshift-marketplace/community-operators-66wxr" Feb 19 14:33:22 crc kubenswrapper[4861]: I0219 14:33:22.883329 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12fd7701-b67d-481e-952e-dc3acef69fc2-utilities\") pod \"community-operators-66wxr\" (UID: \"12fd7701-b67d-481e-952e-dc3acef69fc2\") " pod="openshift-marketplace/community-operators-66wxr" Feb 19 14:33:22 crc kubenswrapper[4861]: I0219 14:33:22.883669 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12fd7701-b67d-481e-952e-dc3acef69fc2-catalog-content\") pod \"community-operators-66wxr\" (UID: \"12fd7701-b67d-481e-952e-dc3acef69fc2\") " pod="openshift-marketplace/community-operators-66wxr" Feb 19 14:33:22 crc kubenswrapper[4861]: I0219 14:33:22.884020 4861 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12fd7701-b67d-481e-952e-dc3acef69fc2-utilities\") pod \"community-operators-66wxr\" (UID: \"12fd7701-b67d-481e-952e-dc3acef69fc2\") " pod="openshift-marketplace/community-operators-66wxr" Feb 19 14:33:22 crc kubenswrapper[4861]: I0219 14:33:22.905640 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ncd5\" (UniqueName: \"kubernetes.io/projected/12fd7701-b67d-481e-952e-dc3acef69fc2-kube-api-access-8ncd5\") pod \"community-operators-66wxr\" (UID: \"12fd7701-b67d-481e-952e-dc3acef69fc2\") " pod="openshift-marketplace/community-operators-66wxr" Feb 19 14:33:23 crc kubenswrapper[4861]: I0219 14:33:23.006384 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-66wxr" Feb 19 14:33:23 crc kubenswrapper[4861]: I0219 14:33:23.553438 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-66wxr"] Feb 19 14:33:23 crc kubenswrapper[4861]: I0219 14:33:23.632144 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-66wxr" event={"ID":"12fd7701-b67d-481e-952e-dc3acef69fc2","Type":"ContainerStarted","Data":"7818370d6f6c69707cedf60f54a5b2098574507b400de607644cfff15d4572f4"} Feb 19 14:33:24 crc kubenswrapper[4861]: I0219 14:33:24.643638 4861 generic.go:334] "Generic (PLEG): container finished" podID="12fd7701-b67d-481e-952e-dc3acef69fc2" containerID="81588ebb2f915dd4932216ae9911edeb37a27656083e87d20c2b3b979d9b997f" exitCode=0 Feb 19 14:33:24 crc kubenswrapper[4861]: I0219 14:33:24.643750 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-66wxr" event={"ID":"12fd7701-b67d-481e-952e-dc3acef69fc2","Type":"ContainerDied","Data":"81588ebb2f915dd4932216ae9911edeb37a27656083e87d20c2b3b979d9b997f"} Feb 19 14:33:24 crc kubenswrapper[4861]: I0219 
14:33:24.645837 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 14:33:25 crc kubenswrapper[4861]: I0219 14:33:25.050783 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="f34ab3e0-1b78-4407-9a06-a61c65390b14" containerName="rabbitmq" containerID="cri-o://b87265a5b453bc07c2d5ff0d669dfbab48f34876d0b306c7cb02ef2ff949eb08" gracePeriod=604796 Feb 19 14:33:25 crc kubenswrapper[4861]: I0219 14:33:25.611304 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="f6277b46-234e-41fc-a3aa-12a8c020111f" containerName="rabbitmq" containerID="cri-o://24b85095894d2efd9fd70cb000a7671765df5908ba42abc18ceab90dd41c81c7" gracePeriod=604796 Feb 19 14:33:25 crc kubenswrapper[4861]: I0219 14:33:25.654261 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-66wxr" event={"ID":"12fd7701-b67d-481e-952e-dc3acef69fc2","Type":"ContainerStarted","Data":"618a7f826c07423b518d34a656cd753b705502404791bc70efc7e9853b943ccf"} Feb 19 14:33:26 crc kubenswrapper[4861]: I0219 14:33:26.090951 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="f34ab3e0-1b78-4407-9a06-a61c65390b14" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.255:5671: connect: connection refused" Feb 19 14:33:26 crc kubenswrapper[4861]: I0219 14:33:26.665924 4861 generic.go:334] "Generic (PLEG): container finished" podID="12fd7701-b67d-481e-952e-dc3acef69fc2" containerID="618a7f826c07423b518d34a656cd753b705502404791bc70efc7e9853b943ccf" exitCode=0 Feb 19 14:33:26 crc kubenswrapper[4861]: I0219 14:33:26.666032 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-66wxr" 
event={"ID":"12fd7701-b67d-481e-952e-dc3acef69fc2","Type":"ContainerDied","Data":"618a7f826c07423b518d34a656cd753b705502404791bc70efc7e9853b943ccf"} Feb 19 14:33:27 crc kubenswrapper[4861]: I0219 14:33:27.674607 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-66wxr" event={"ID":"12fd7701-b67d-481e-952e-dc3acef69fc2","Type":"ContainerStarted","Data":"c56ecebf538916a12e217c09d920322365ecd45f63315c19bd274d710b1351e0"} Feb 19 14:33:30 crc kubenswrapper[4861]: I0219 14:33:30.510681 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-54dc9c94cc-kpx92" Feb 19 14:33:30 crc kubenswrapper[4861]: I0219 14:33:30.541312 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-66wxr" podStartSLOduration=6.058739841 podStartE2EDuration="8.541278322s" podCreationTimestamp="2026-02-19 14:33:22 +0000 UTC" firstStartedPulling="2026-02-19 14:33:24.645504576 +0000 UTC m=+5019.306607804" lastFinishedPulling="2026-02-19 14:33:27.128043027 +0000 UTC m=+5021.789146285" observedRunningTime="2026-02-19 14:33:27.701927814 +0000 UTC m=+5022.363031042" watchObservedRunningTime="2026-02-19 14:33:30.541278322 +0000 UTC m=+5025.202381600" Feb 19 14:33:30 crc kubenswrapper[4861]: I0219 14:33:30.573467 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-589cf688cc-jhlrf"] Feb 19 14:33:30 crc kubenswrapper[4861]: I0219 14:33:30.573820 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-589cf688cc-jhlrf" podUID="45909a85-8b28-4c9e-bc14-4b5f0256792f" containerName="dnsmasq-dns" containerID="cri-o://1824b0a482586c59201474fd2f539a2c3cf1d060b6145e73e011320f1b1e8d1f" gracePeriod=10 Feb 19 14:33:30 crc kubenswrapper[4861]: I0219 14:33:30.705232 4861 generic.go:334] "Generic (PLEG): container finished" podID="45909a85-8b28-4c9e-bc14-4b5f0256792f" 
containerID="1824b0a482586c59201474fd2f539a2c3cf1d060b6145e73e011320f1b1e8d1f" exitCode=0 Feb 19 14:33:30 crc kubenswrapper[4861]: I0219 14:33:30.705298 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589cf688cc-jhlrf" event={"ID":"45909a85-8b28-4c9e-bc14-4b5f0256792f","Type":"ContainerDied","Data":"1824b0a482586c59201474fd2f539a2c3cf1d060b6145e73e011320f1b1e8d1f"} Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.395672 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-589cf688cc-jhlrf" Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.527294 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45909a85-8b28-4c9e-bc14-4b5f0256792f-dns-svc\") pod \"45909a85-8b28-4c9e-bc14-4b5f0256792f\" (UID: \"45909a85-8b28-4c9e-bc14-4b5f0256792f\") " Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.527449 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45909a85-8b28-4c9e-bc14-4b5f0256792f-config\") pod \"45909a85-8b28-4c9e-bc14-4b5f0256792f\" (UID: \"45909a85-8b28-4c9e-bc14-4b5f0256792f\") " Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.527487 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74n6k\" (UniqueName: \"kubernetes.io/projected/45909a85-8b28-4c9e-bc14-4b5f0256792f-kube-api-access-74n6k\") pod \"45909a85-8b28-4c9e-bc14-4b5f0256792f\" (UID: \"45909a85-8b28-4c9e-bc14-4b5f0256792f\") " Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.559101 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45909a85-8b28-4c9e-bc14-4b5f0256792f-kube-api-access-74n6k" (OuterVolumeSpecName: "kube-api-access-74n6k") pod "45909a85-8b28-4c9e-bc14-4b5f0256792f" (UID: "45909a85-8b28-4c9e-bc14-4b5f0256792f"). 
InnerVolumeSpecName "kube-api-access-74n6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.580523 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45909a85-8b28-4c9e-bc14-4b5f0256792f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "45909a85-8b28-4c9e-bc14-4b5f0256792f" (UID: "45909a85-8b28-4c9e-bc14-4b5f0256792f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.594550 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45909a85-8b28-4c9e-bc14-4b5f0256792f-config" (OuterVolumeSpecName: "config") pod "45909a85-8b28-4c9e-bc14-4b5f0256792f" (UID: "45909a85-8b28-4c9e-bc14-4b5f0256792f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.629576 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45909a85-8b28-4c9e-bc14-4b5f0256792f-config\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.629620 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74n6k\" (UniqueName: \"kubernetes.io/projected/45909a85-8b28-4c9e-bc14-4b5f0256792f-kube-api-access-74n6k\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.629634 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45909a85-8b28-4c9e-bc14-4b5f0256792f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.745902 4861 generic.go:334] "Generic (PLEG): container finished" podID="f34ab3e0-1b78-4407-9a06-a61c65390b14" containerID="b87265a5b453bc07c2d5ff0d669dfbab48f34876d0b306c7cb02ef2ff949eb08" exitCode=0 Feb 19 14:33:31 
crc kubenswrapper[4861]: I0219 14:33:31.745991 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f34ab3e0-1b78-4407-9a06-a61c65390b14","Type":"ContainerDied","Data":"b87265a5b453bc07c2d5ff0d669dfbab48f34876d0b306c7cb02ef2ff949eb08"} Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.754877 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589cf688cc-jhlrf" event={"ID":"45909a85-8b28-4c9e-bc14-4b5f0256792f","Type":"ContainerDied","Data":"1d26e356fbf1a28f4468638fee84949aaf5926c155163dc7374cdacc6c3e658a"} Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.754951 4861 scope.go:117] "RemoveContainer" containerID="1824b0a482586c59201474fd2f539a2c3cf1d060b6145e73e011320f1b1e8d1f" Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.755122 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-589cf688cc-jhlrf" Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.822594 4861 scope.go:117] "RemoveContainer" containerID="0986ef1f08ebad6551e555cefe59c5e1e31bc12794c1d16aaca19d38ef8b9c63" Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.916026 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.927216 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-589cf688cc-jhlrf"] Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.932812 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-589cf688cc-jhlrf"] Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.935521 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f34ab3e0-1b78-4407-9a06-a61c65390b14-rabbitmq-plugins\") pod \"f34ab3e0-1b78-4407-9a06-a61c65390b14\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.935683 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bdad6c3-0e83-436c-9061-ca554343e44b\") pod \"f34ab3e0-1b78-4407-9a06-a61c65390b14\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.935728 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f34ab3e0-1b78-4407-9a06-a61c65390b14-rabbitmq-tls\") pod \"f34ab3e0-1b78-4407-9a06-a61c65390b14\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.935785 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f34ab3e0-1b78-4407-9a06-a61c65390b14-erlang-cookie-secret\") pod \"f34ab3e0-1b78-4407-9a06-a61c65390b14\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.935819 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/f34ab3e0-1b78-4407-9a06-a61c65390b14-pod-info\") pod \"f34ab3e0-1b78-4407-9a06-a61c65390b14\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.935900 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f34ab3e0-1b78-4407-9a06-a61c65390b14-rabbitmq-confd\") pod \"f34ab3e0-1b78-4407-9a06-a61c65390b14\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.935923 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f34ab3e0-1b78-4407-9a06-a61c65390b14-rabbitmq-erlang-cookie\") pod \"f34ab3e0-1b78-4407-9a06-a61c65390b14\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.935945 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f34ab3e0-1b78-4407-9a06-a61c65390b14-plugins-conf\") pod \"f34ab3e0-1b78-4407-9a06-a61c65390b14\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.936004 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f34ab3e0-1b78-4407-9a06-a61c65390b14-config-data\") pod \"f34ab3e0-1b78-4407-9a06-a61c65390b14\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.936062 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f34ab3e0-1b78-4407-9a06-a61c65390b14-server-conf\") pod \"f34ab3e0-1b78-4407-9a06-a61c65390b14\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 
14:33:31.936087 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp79k\" (UniqueName: \"kubernetes.io/projected/f34ab3e0-1b78-4407-9a06-a61c65390b14-kube-api-access-vp79k\") pod \"f34ab3e0-1b78-4407-9a06-a61c65390b14\" (UID: \"f34ab3e0-1b78-4407-9a06-a61c65390b14\") " Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.936823 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f34ab3e0-1b78-4407-9a06-a61c65390b14-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f34ab3e0-1b78-4407-9a06-a61c65390b14" (UID: "f34ab3e0-1b78-4407-9a06-a61c65390b14"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.937236 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f34ab3e0-1b78-4407-9a06-a61c65390b14-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f34ab3e0-1b78-4407-9a06-a61c65390b14" (UID: "f34ab3e0-1b78-4407-9a06-a61c65390b14"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.937856 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f34ab3e0-1b78-4407-9a06-a61c65390b14-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f34ab3e0-1b78-4407-9a06-a61c65390b14" (UID: "f34ab3e0-1b78-4407-9a06-a61c65390b14"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.939507 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f34ab3e0-1b78-4407-9a06-a61c65390b14-pod-info" (OuterVolumeSpecName: "pod-info") pod "f34ab3e0-1b78-4407-9a06-a61c65390b14" (UID: "f34ab3e0-1b78-4407-9a06-a61c65390b14"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.940633 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f34ab3e0-1b78-4407-9a06-a61c65390b14-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f34ab3e0-1b78-4407-9a06-a61c65390b14" (UID: "f34ab3e0-1b78-4407-9a06-a61c65390b14"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.941397 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f34ab3e0-1b78-4407-9a06-a61c65390b14-kube-api-access-vp79k" (OuterVolumeSpecName: "kube-api-access-vp79k") pod "f34ab3e0-1b78-4407-9a06-a61c65390b14" (UID: "f34ab3e0-1b78-4407-9a06-a61c65390b14"). InnerVolumeSpecName "kube-api-access-vp79k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.941514 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f34ab3e0-1b78-4407-9a06-a61c65390b14-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f34ab3e0-1b78-4407-9a06-a61c65390b14" (UID: "f34ab3e0-1b78-4407-9a06-a61c65390b14"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.954562 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bdad6c3-0e83-436c-9061-ca554343e44b" (OuterVolumeSpecName: "persistence") pod "f34ab3e0-1b78-4407-9a06-a61c65390b14" (UID: "f34ab3e0-1b78-4407-9a06-a61c65390b14"). InnerVolumeSpecName "pvc-2bdad6c3-0e83-436c-9061-ca554343e44b". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.965084 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f34ab3e0-1b78-4407-9a06-a61c65390b14-config-data" (OuterVolumeSpecName: "config-data") pod "f34ab3e0-1b78-4407-9a06-a61c65390b14" (UID: "f34ab3e0-1b78-4407-9a06-a61c65390b14"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:33:31 crc kubenswrapper[4861]: I0219 14:33:31.982924 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f34ab3e0-1b78-4407-9a06-a61c65390b14-server-conf" (OuterVolumeSpecName: "server-conf") pod "f34ab3e0-1b78-4407-9a06-a61c65390b14" (UID: "f34ab3e0-1b78-4407-9a06-a61c65390b14"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.006687 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45909a85-8b28-4c9e-bc14-4b5f0256792f" path="/var/lib/kubelet/pods/45909a85-8b28-4c9e-bc14-4b5f0256792f/volumes" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.037321 4861 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f34ab3e0-1b78-4407-9a06-a61c65390b14-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.037353 4861 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f34ab3e0-1b78-4407-9a06-a61c65390b14-pod-info\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.037362 4861 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f34ab3e0-1b78-4407-9a06-a61c65390b14-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.037372 4861 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f34ab3e0-1b78-4407-9a06-a61c65390b14-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.037379 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f34ab3e0-1b78-4407-9a06-a61c65390b14-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.037387 4861 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f34ab3e0-1b78-4407-9a06-a61c65390b14-server-conf\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.037395 4861 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp79k\" (UniqueName: \"kubernetes.io/projected/f34ab3e0-1b78-4407-9a06-a61c65390b14-kube-api-access-vp79k\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.037404 4861 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f34ab3e0-1b78-4407-9a06-a61c65390b14-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.037444 4861 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2bdad6c3-0e83-436c-9061-ca554343e44b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bdad6c3-0e83-436c-9061-ca554343e44b\") on node \"crc\" " Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.037454 4861 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f34ab3e0-1b78-4407-9a06-a61c65390b14-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.045077 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f34ab3e0-1b78-4407-9a06-a61c65390b14-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f34ab3e0-1b78-4407-9a06-a61c65390b14" (UID: "f34ab3e0-1b78-4407-9a06-a61c65390b14"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.052772 4861 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.052886 4861 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2bdad6c3-0e83-436c-9061-ca554343e44b" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bdad6c3-0e83-436c-9061-ca554343e44b") on node "crc" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.138762 4861 reconciler_common.go:293] "Volume detached for volume \"pvc-2bdad6c3-0e83-436c-9061-ca554343e44b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bdad6c3-0e83-436c-9061-ca554343e44b\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.138800 4861 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f34ab3e0-1b78-4407-9a06-a61c65390b14-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.765052 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f34ab3e0-1b78-4407-9a06-a61c65390b14","Type":"ContainerDied","Data":"fc475de29d52812c65d81ce3820fd058d9753e9b8f98807600d5e45cb0d906a4"} Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.765320 4861 scope.go:117] "RemoveContainer" containerID="b87265a5b453bc07c2d5ff0d669dfbab48f34876d0b306c7cb02ef2ff949eb08" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.765410 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.771970 4861 generic.go:334] "Generic (PLEG): container finished" podID="f6277b46-234e-41fc-a3aa-12a8c020111f" containerID="24b85095894d2efd9fd70cb000a7671765df5908ba42abc18ceab90dd41c81c7" exitCode=0 Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.772069 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f6277b46-234e-41fc-a3aa-12a8c020111f","Type":"ContainerDied","Data":"24b85095894d2efd9fd70cb000a7671765df5908ba42abc18ceab90dd41c81c7"} Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.833401 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.841549 4861 scope.go:117] "RemoveContainer" containerID="97fd45c46f613bd34f1e8dedb094888097899058df1093ed1b8a421f0370434c" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.853106 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.864109 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 14:33:32 crc kubenswrapper[4861]: E0219 14:33:32.865037 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f34ab3e0-1b78-4407-9a06-a61c65390b14" containerName="rabbitmq" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.865061 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f34ab3e0-1b78-4407-9a06-a61c65390b14" containerName="rabbitmq" Feb 19 14:33:32 crc kubenswrapper[4861]: E0219 14:33:32.865084 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45909a85-8b28-4c9e-bc14-4b5f0256792f" containerName="init" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.865094 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="45909a85-8b28-4c9e-bc14-4b5f0256792f" 
containerName="init" Feb 19 14:33:32 crc kubenswrapper[4861]: E0219 14:33:32.865107 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45909a85-8b28-4c9e-bc14-4b5f0256792f" containerName="dnsmasq-dns" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.865116 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="45909a85-8b28-4c9e-bc14-4b5f0256792f" containerName="dnsmasq-dns" Feb 19 14:33:32 crc kubenswrapper[4861]: E0219 14:33:32.865131 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f34ab3e0-1b78-4407-9a06-a61c65390b14" containerName="setup-container" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.865138 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f34ab3e0-1b78-4407-9a06-a61c65390b14" containerName="setup-container" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.865303 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="45909a85-8b28-4c9e-bc14-4b5f0256792f" containerName="dnsmasq-dns" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.865323 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f34ab3e0-1b78-4407-9a06-a61c65390b14" containerName="rabbitmq" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.867570 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.873702 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.874016 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.874171 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.874412 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-mcptc" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.874593 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.874754 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.877947 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.883614 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.952186 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1dd55ddf-97da-4b63-9239-1d5c18a70b92-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.952235 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/1dd55ddf-97da-4b63-9239-1d5c18a70b92-config-data\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.952317 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1dd55ddf-97da-4b63-9239-1d5c18a70b92-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.952372 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svh66\" (UniqueName: \"kubernetes.io/projected/1dd55ddf-97da-4b63-9239-1d5c18a70b92-kube-api-access-svh66\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.952430 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1dd55ddf-97da-4b63-9239-1d5c18a70b92-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.952480 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1dd55ddf-97da-4b63-9239-1d5c18a70b92-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.952508 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/1dd55ddf-97da-4b63-9239-1d5c18a70b92-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.952524 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1dd55ddf-97da-4b63-9239-1d5c18a70b92-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.952544 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1dd55ddf-97da-4b63-9239-1d5c18a70b92-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.952575 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2bdad6c3-0e83-436c-9061-ca554343e44b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bdad6c3-0e83-436c-9061-ca554343e44b\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:32 crc kubenswrapper[4861]: I0219 14:33:32.952609 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1dd55ddf-97da-4b63-9239-1d5c18a70b92-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.007153 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-66wxr" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 
14:33:33.007194 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-66wxr" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.038949 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.053824 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f6277b46-234e-41fc-a3aa-12a8c020111f-server-conf\") pod \"f6277b46-234e-41fc-a3aa-12a8c020111f\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.053865 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f6277b46-234e-41fc-a3aa-12a8c020111f-rabbitmq-confd\") pod \"f6277b46-234e-41fc-a3aa-12a8c020111f\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.053912 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f6277b46-234e-41fc-a3aa-12a8c020111f-config-data\") pod \"f6277b46-234e-41fc-a3aa-12a8c020111f\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.053939 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f6277b46-234e-41fc-a3aa-12a8c020111f-pod-info\") pod \"f6277b46-234e-41fc-a3aa-12a8c020111f\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.053964 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f6277b46-234e-41fc-a3aa-12a8c020111f-rabbitmq-plugins\") pod 
\"f6277b46-234e-41fc-a3aa-12a8c020111f\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.054010 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psgcd\" (UniqueName: \"kubernetes.io/projected/f6277b46-234e-41fc-a3aa-12a8c020111f-kube-api-access-psgcd\") pod \"f6277b46-234e-41fc-a3aa-12a8c020111f\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.054048 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f6277b46-234e-41fc-a3aa-12a8c020111f-rabbitmq-erlang-cookie\") pod \"f6277b46-234e-41fc-a3aa-12a8c020111f\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.054109 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f6277b46-234e-41fc-a3aa-12a8c020111f-erlang-cookie-secret\") pod \"f6277b46-234e-41fc-a3aa-12a8c020111f\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.054142 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f6277b46-234e-41fc-a3aa-12a8c020111f-plugins-conf\") pod \"f6277b46-234e-41fc-a3aa-12a8c020111f\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.054225 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-003d72e1-03bb-4405-9c14-caab9d3ef521\") pod \"f6277b46-234e-41fc-a3aa-12a8c020111f\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.054251 4861 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f6277b46-234e-41fc-a3aa-12a8c020111f-rabbitmq-tls\") pod \"f6277b46-234e-41fc-a3aa-12a8c020111f\" (UID: \"f6277b46-234e-41fc-a3aa-12a8c020111f\") " Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.054447 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6277b46-234e-41fc-a3aa-12a8c020111f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f6277b46-234e-41fc-a3aa-12a8c020111f" (UID: "f6277b46-234e-41fc-a3aa-12a8c020111f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.054477 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1dd55ddf-97da-4b63-9239-1d5c18a70b92-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.054516 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1dd55ddf-97da-4b63-9239-1d5c18a70b92-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.054538 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1dd55ddf-97da-4b63-9239-1d5c18a70b92-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.054555 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1dd55ddf-97da-4b63-9239-1d5c18a70b92-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.054575 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2bdad6c3-0e83-436c-9061-ca554343e44b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bdad6c3-0e83-436c-9061-ca554343e44b\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.054595 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1dd55ddf-97da-4b63-9239-1d5c18a70b92-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.054618 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1dd55ddf-97da-4b63-9239-1d5c18a70b92-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.054665 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1dd55ddf-97da-4b63-9239-1d5c18a70b92-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.054697 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1dd55ddf-97da-4b63-9239-1d5c18a70b92-config-data\") pod 
\"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.054731 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1dd55ddf-97da-4b63-9239-1d5c18a70b92-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.054747 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svh66\" (UniqueName: \"kubernetes.io/projected/1dd55ddf-97da-4b63-9239-1d5c18a70b92-kube-api-access-svh66\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.054745 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6277b46-234e-41fc-a3aa-12a8c020111f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f6277b46-234e-41fc-a3aa-12a8c020111f" (UID: "f6277b46-234e-41fc-a3aa-12a8c020111f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.054796 4861 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f6277b46-234e-41fc-a3aa-12a8c020111f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.054960 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6277b46-234e-41fc-a3aa-12a8c020111f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f6277b46-234e-41fc-a3aa-12a8c020111f" (UID: "f6277b46-234e-41fc-a3aa-12a8c020111f"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.056352 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1dd55ddf-97da-4b63-9239-1d5c18a70b92-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.056746 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1dd55ddf-97da-4b63-9239-1d5c18a70b92-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.056180 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1dd55ddf-97da-4b63-9239-1d5c18a70b92-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.059214 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6277b46-234e-41fc-a3aa-12a8c020111f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f6277b46-234e-41fc-a3aa-12a8c020111f" (UID: "f6277b46-234e-41fc-a3aa-12a8c020111f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.059322 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f6277b46-234e-41fc-a3aa-12a8c020111f-pod-info" (OuterVolumeSpecName: "pod-info") pod "f6277b46-234e-41fc-a3aa-12a8c020111f" (UID: "f6277b46-234e-41fc-a3aa-12a8c020111f"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.060170 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1dd55ddf-97da-4b63-9239-1d5c18a70b92-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.060374 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6277b46-234e-41fc-a3aa-12a8c020111f-kube-api-access-psgcd" (OuterVolumeSpecName: "kube-api-access-psgcd") pod "f6277b46-234e-41fc-a3aa-12a8c020111f" (UID: "f6277b46-234e-41fc-a3aa-12a8c020111f"). InnerVolumeSpecName "kube-api-access-psgcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.062915 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1dd55ddf-97da-4b63-9239-1d5c18a70b92-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.063788 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-66wxr" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.064062 4861 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.064101 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2bdad6c3-0e83-436c-9061-ca554343e44b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bdad6c3-0e83-436c-9061-ca554343e44b\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/963be3595212dd7ba9de40643baf551b75ffee4393dbed6f0bf088e5f6bd3faf/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.069644 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6277b46-234e-41fc-a3aa-12a8c020111f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f6277b46-234e-41fc-a3aa-12a8c020111f" (UID: "f6277b46-234e-41fc-a3aa-12a8c020111f"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.074804 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1dd55ddf-97da-4b63-9239-1d5c18a70b92-config-data\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.081847 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-003d72e1-03bb-4405-9c14-caab9d3ef521" (OuterVolumeSpecName: "persistence") pod "f6277b46-234e-41fc-a3aa-12a8c020111f" (UID: "f6277b46-234e-41fc-a3aa-12a8c020111f"). InnerVolumeSpecName "pvc-003d72e1-03bb-4405-9c14-caab9d3ef521". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.082547 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svh66\" (UniqueName: \"kubernetes.io/projected/1dd55ddf-97da-4b63-9239-1d5c18a70b92-kube-api-access-svh66\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.098473 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1dd55ddf-97da-4b63-9239-1d5c18a70b92-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.112451 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1dd55ddf-97da-4b63-9239-1d5c18a70b92-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.112946 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1dd55ddf-97da-4b63-9239-1d5c18a70b92-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.155755 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6277b46-234e-41fc-a3aa-12a8c020111f-config-data" (OuterVolumeSpecName: "config-data") pod "f6277b46-234e-41fc-a3aa-12a8c020111f" (UID: "f6277b46-234e-41fc-a3aa-12a8c020111f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.162726 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2bdad6c3-0e83-436c-9061-ca554343e44b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bdad6c3-0e83-436c-9061-ca554343e44b\") pod \"rabbitmq-server-0\" (UID: \"1dd55ddf-97da-4b63-9239-1d5c18a70b92\") " pod="openstack/rabbitmq-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.165838 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f6277b46-234e-41fc-a3aa-12a8c020111f-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.165883 4861 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f6277b46-234e-41fc-a3aa-12a8c020111f-pod-info\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.165931 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psgcd\" (UniqueName: \"kubernetes.io/projected/f6277b46-234e-41fc-a3aa-12a8c020111f-kube-api-access-psgcd\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.165947 4861 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f6277b46-234e-41fc-a3aa-12a8c020111f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.165956 4861 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f6277b46-234e-41fc-a3aa-12a8c020111f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.165965 4861 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/f6277b46-234e-41fc-a3aa-12a8c020111f-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.165988 4861 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-003d72e1-03bb-4405-9c14-caab9d3ef521\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-003d72e1-03bb-4405-9c14-caab9d3ef521\") on node \"crc\" " Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.165999 4861 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f6277b46-234e-41fc-a3aa-12a8c020111f-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.168064 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6277b46-234e-41fc-a3aa-12a8c020111f-server-conf" (OuterVolumeSpecName: "server-conf") pod "f6277b46-234e-41fc-a3aa-12a8c020111f" (UID: "f6277b46-234e-41fc-a3aa-12a8c020111f"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.182113 4861 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.182317 4861 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-003d72e1-03bb-4405-9c14-caab9d3ef521" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-003d72e1-03bb-4405-9c14-caab9d3ef521") on node "crc" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.204408 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.219101 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6277b46-234e-41fc-a3aa-12a8c020111f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f6277b46-234e-41fc-a3aa-12a8c020111f" (UID: "f6277b46-234e-41fc-a3aa-12a8c020111f"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.267467 4861 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f6277b46-234e-41fc-a3aa-12a8c020111f-server-conf\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.267505 4861 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f6277b46-234e-41fc-a3aa-12a8c020111f-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.267522 4861 reconciler_common.go:293] "Volume detached for volume \"pvc-003d72e1-03bb-4405-9c14-caab9d3ef521\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-003d72e1-03bb-4405-9c14-caab9d3ef521\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.640646 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.784291 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1dd55ddf-97da-4b63-9239-1d5c18a70b92","Type":"ContainerStarted","Data":"6707436314ad8d25e83e57f220065bd8fe589c5fab66bd130321ef01508b30db"} Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.787533 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"f6277b46-234e-41fc-a3aa-12a8c020111f","Type":"ContainerDied","Data":"4da0bf27f0fbd482c905dd4ebf1e10f5f283822a785fdebfe8593a90d885a8bc"} Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.787578 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.787584 4861 scope.go:117] "RemoveContainer" containerID="24b85095894d2efd9fd70cb000a7671765df5908ba42abc18ceab90dd41c81c7" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.814294 4861 scope.go:117] "RemoveContainer" containerID="727e343c0d53746bd08e9f5e1ac9a94f5e0aef7c43edc91aa9064806a6684d9b" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.829843 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.834270 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.834333 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.840948 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.861960 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 14:33:33 crc kubenswrapper[4861]: E0219 14:33:33.862380 4861 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6277b46-234e-41fc-a3aa-12a8c020111f" containerName="setup-container" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.862411 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6277b46-234e-41fc-a3aa-12a8c020111f" containerName="setup-container" Feb 19 14:33:33 crc kubenswrapper[4861]: E0219 14:33:33.862455 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6277b46-234e-41fc-a3aa-12a8c020111f" containerName="rabbitmq" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.862470 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6277b46-234e-41fc-a3aa-12a8c020111f" containerName="rabbitmq" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.862746 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6277b46-234e-41fc-a3aa-12a8c020111f" containerName="rabbitmq" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.863753 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.866114 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.866219 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.866534 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.866771 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.867600 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.867820 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-7bhvp" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.868002 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.875967 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/de081c0e-da11-4fbd-a24c-f38f6800df56-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.876013 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/de081c0e-da11-4fbd-a24c-f38f6800df56-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.876048 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/de081c0e-da11-4fbd-a24c-f38f6800df56-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.876088 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgvb7\" (UniqueName: \"kubernetes.io/projected/de081c0e-da11-4fbd-a24c-f38f6800df56-kube-api-access-wgvb7\") pod \"rabbitmq-cell1-server-0\" (UID: \"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.876138 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/de081c0e-da11-4fbd-a24c-f38f6800df56-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.876186 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-003d72e1-03bb-4405-9c14-caab9d3ef521\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-003d72e1-03bb-4405-9c14-caab9d3ef521\") pod \"rabbitmq-cell1-server-0\" (UID: \"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.876208 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/de081c0e-da11-4fbd-a24c-f38f6800df56-rabbitmq-confd\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.876247 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/de081c0e-da11-4fbd-a24c-f38f6800df56-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.876344 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/de081c0e-da11-4fbd-a24c-f38f6800df56-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.876388 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/de081c0e-da11-4fbd-a24c-f38f6800df56-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.876439 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/de081c0e-da11-4fbd-a24c-f38f6800df56-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.899478 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.901227 4861 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/community-operators-66wxr" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.952586 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-66wxr"] Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.977601 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/de081c0e-da11-4fbd-a24c-f38f6800df56-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.977653 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/de081c0e-da11-4fbd-a24c-f38f6800df56-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.977686 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/de081c0e-da11-4fbd-a24c-f38f6800df56-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.977720 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgvb7\" (UniqueName: \"kubernetes.io/projected/de081c0e-da11-4fbd-a24c-f38f6800df56-kube-api-access-wgvb7\") pod \"rabbitmq-cell1-server-0\" (UID: \"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.977746 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/de081c0e-da11-4fbd-a24c-f38f6800df56-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.977796 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/de081c0e-da11-4fbd-a24c-f38f6800df56-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.977831 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-003d72e1-03bb-4405-9c14-caab9d3ef521\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-003d72e1-03bb-4405-9c14-caab9d3ef521\") pod \"rabbitmq-cell1-server-0\" (UID: \"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.977870 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/de081c0e-da11-4fbd-a24c-f38f6800df56-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.977953 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/de081c0e-da11-4fbd-a24c-f38f6800df56-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.977983 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/de081c0e-da11-4fbd-a24c-f38f6800df56-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.978013 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/de081c0e-da11-4fbd-a24c-f38f6800df56-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.979665 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/de081c0e-da11-4fbd-a24c-f38f6800df56-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.980788 4861 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.980835 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-003d72e1-03bb-4405-9c14-caab9d3ef521\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-003d72e1-03bb-4405-9c14-caab9d3ef521\") pod \"rabbitmq-cell1-server-0\" (UID: \"de081c0e-da11-4fbd-a24c-f38f6800df56\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4cc59c36db607d97947b675ec04df78c135f680d7e2e88615f93dc59e7ee1dac/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.981294 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/de081c0e-da11-4fbd-a24c-f38f6800df56-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.981301 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/de081c0e-da11-4fbd-a24c-f38f6800df56-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.981969 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/de081c0e-da11-4fbd-a24c-f38f6800df56-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.982832 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/de081c0e-da11-4fbd-a24c-f38f6800df56-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.985412 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/de081c0e-da11-4fbd-a24c-f38f6800df56-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.985779 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/de081c0e-da11-4fbd-a24c-f38f6800df56-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.987193 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f34ab3e0-1b78-4407-9a06-a61c65390b14" path="/var/lib/kubelet/pods/f34ab3e0-1b78-4407-9a06-a61c65390b14/volumes" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.987664 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/de081c0e-da11-4fbd-a24c-f38f6800df56-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.987929 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6277b46-234e-41fc-a3aa-12a8c020111f" path="/var/lib/kubelet/pods/f6277b46-234e-41fc-a3aa-12a8c020111f/volumes" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.988212 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/de081c0e-da11-4fbd-a24c-f38f6800df56-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:33 crc kubenswrapper[4861]: I0219 14:33:33.999862 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgvb7\" (UniqueName: \"kubernetes.io/projected/de081c0e-da11-4fbd-a24c-f38f6800df56-kube-api-access-wgvb7\") pod \"rabbitmq-cell1-server-0\" (UID: \"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:34 crc kubenswrapper[4861]: I0219 14:33:34.013834 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-003d72e1-03bb-4405-9c14-caab9d3ef521\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-003d72e1-03bb-4405-9c14-caab9d3ef521\") pod \"rabbitmq-cell1-server-0\" (UID: \"de081c0e-da11-4fbd-a24c-f38f6800df56\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:34 crc kubenswrapper[4861]: I0219 14:33:34.192886 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:33:34 crc kubenswrapper[4861]: I0219 14:33:34.762683 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 14:33:34 crc kubenswrapper[4861]: I0219 14:33:34.798502 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"de081c0e-da11-4fbd-a24c-f38f6800df56","Type":"ContainerStarted","Data":"4e8e66f8d65cbe413952ffcb49d87ee40767224870dc92fb50f27247fb2bde74"} Feb 19 14:33:35 crc kubenswrapper[4861]: I0219 14:33:35.805487 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-66wxr" podUID="12fd7701-b67d-481e-952e-dc3acef69fc2" containerName="registry-server" containerID="cri-o://c56ecebf538916a12e217c09d920322365ecd45f63315c19bd274d710b1351e0" gracePeriod=2 Feb 19 14:33:35 crc kubenswrapper[4861]: I0219 14:33:35.806383 4861 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1dd55ddf-97da-4b63-9239-1d5c18a70b92","Type":"ContainerStarted","Data":"2deb6c576e39f91563539cda6fd8201a22bc28ce625f4715750a3cc6063bcbb3"} Feb 19 14:33:36 crc kubenswrapper[4861]: I0219 14:33:36.328375 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-66wxr" Feb 19 14:33:36 crc kubenswrapper[4861]: I0219 14:33:36.524017 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12fd7701-b67d-481e-952e-dc3acef69fc2-catalog-content\") pod \"12fd7701-b67d-481e-952e-dc3acef69fc2\" (UID: \"12fd7701-b67d-481e-952e-dc3acef69fc2\") " Feb 19 14:33:36 crc kubenswrapper[4861]: I0219 14:33:36.524156 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ncd5\" (UniqueName: \"kubernetes.io/projected/12fd7701-b67d-481e-952e-dc3acef69fc2-kube-api-access-8ncd5\") pod \"12fd7701-b67d-481e-952e-dc3acef69fc2\" (UID: \"12fd7701-b67d-481e-952e-dc3acef69fc2\") " Feb 19 14:33:36 crc kubenswrapper[4861]: I0219 14:33:36.524330 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12fd7701-b67d-481e-952e-dc3acef69fc2-utilities\") pod \"12fd7701-b67d-481e-952e-dc3acef69fc2\" (UID: \"12fd7701-b67d-481e-952e-dc3acef69fc2\") " Feb 19 14:33:36 crc kubenswrapper[4861]: I0219 14:33:36.525305 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12fd7701-b67d-481e-952e-dc3acef69fc2-utilities" (OuterVolumeSpecName: "utilities") pod "12fd7701-b67d-481e-952e-dc3acef69fc2" (UID: "12fd7701-b67d-481e-952e-dc3acef69fc2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:33:36 crc kubenswrapper[4861]: I0219 14:33:36.538861 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12fd7701-b67d-481e-952e-dc3acef69fc2-kube-api-access-8ncd5" (OuterVolumeSpecName: "kube-api-access-8ncd5") pod "12fd7701-b67d-481e-952e-dc3acef69fc2" (UID: "12fd7701-b67d-481e-952e-dc3acef69fc2"). InnerVolumeSpecName "kube-api-access-8ncd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:33:36 crc kubenswrapper[4861]: I0219 14:33:36.613699 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12fd7701-b67d-481e-952e-dc3acef69fc2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12fd7701-b67d-481e-952e-dc3acef69fc2" (UID: "12fd7701-b67d-481e-952e-dc3acef69fc2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:33:36 crc kubenswrapper[4861]: I0219 14:33:36.627209 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12fd7701-b67d-481e-952e-dc3acef69fc2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:36 crc kubenswrapper[4861]: I0219 14:33:36.627399 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ncd5\" (UniqueName: \"kubernetes.io/projected/12fd7701-b67d-481e-952e-dc3acef69fc2-kube-api-access-8ncd5\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:36 crc kubenswrapper[4861]: I0219 14:33:36.627637 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12fd7701-b67d-481e-952e-dc3acef69fc2-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:36 crc kubenswrapper[4861]: I0219 14:33:36.822956 4861 generic.go:334] "Generic (PLEG): container finished" podID="12fd7701-b67d-481e-952e-dc3acef69fc2" 
containerID="c56ecebf538916a12e217c09d920322365ecd45f63315c19bd274d710b1351e0" exitCode=0 Feb 19 14:33:36 crc kubenswrapper[4861]: I0219 14:33:36.823032 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-66wxr" Feb 19 14:33:36 crc kubenswrapper[4861]: I0219 14:33:36.823093 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-66wxr" event={"ID":"12fd7701-b67d-481e-952e-dc3acef69fc2","Type":"ContainerDied","Data":"c56ecebf538916a12e217c09d920322365ecd45f63315c19bd274d710b1351e0"} Feb 19 14:33:36 crc kubenswrapper[4861]: I0219 14:33:36.823318 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-66wxr" event={"ID":"12fd7701-b67d-481e-952e-dc3acef69fc2","Type":"ContainerDied","Data":"7818370d6f6c69707cedf60f54a5b2098574507b400de607644cfff15d4572f4"} Feb 19 14:33:36 crc kubenswrapper[4861]: I0219 14:33:36.823401 4861 scope.go:117] "RemoveContainer" containerID="c56ecebf538916a12e217c09d920322365ecd45f63315c19bd274d710b1351e0" Feb 19 14:33:36 crc kubenswrapper[4861]: I0219 14:33:36.826119 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"de081c0e-da11-4fbd-a24c-f38f6800df56","Type":"ContainerStarted","Data":"a526b8459cb2ada4472847a5e76a7acd8fa451a21c9f07c4171f020180b86b26"} Feb 19 14:33:36 crc kubenswrapper[4861]: I0219 14:33:36.854270 4861 scope.go:117] "RemoveContainer" containerID="618a7f826c07423b518d34a656cd753b705502404791bc70efc7e9853b943ccf" Feb 19 14:33:36 crc kubenswrapper[4861]: I0219 14:33:36.897587 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-66wxr"] Feb 19 14:33:36 crc kubenswrapper[4861]: I0219 14:33:36.897620 4861 scope.go:117] "RemoveContainer" containerID="81588ebb2f915dd4932216ae9911edeb37a27656083e87d20c2b3b979d9b997f" Feb 19 14:33:36 crc kubenswrapper[4861]: I0219 
14:33:36.902665 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-66wxr"] Feb 19 14:33:36 crc kubenswrapper[4861]: E0219 14:33:36.917269 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12fd7701_b67d_481e_952e_dc3acef69fc2.slice/crio-7818370d6f6c69707cedf60f54a5b2098574507b400de607644cfff15d4572f4\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12fd7701_b67d_481e_952e_dc3acef69fc2.slice\": RecentStats: unable to find data in memory cache]" Feb 19 14:33:36 crc kubenswrapper[4861]: I0219 14:33:36.934076 4861 scope.go:117] "RemoveContainer" containerID="c56ecebf538916a12e217c09d920322365ecd45f63315c19bd274d710b1351e0" Feb 19 14:33:36 crc kubenswrapper[4861]: E0219 14:33:36.934572 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c56ecebf538916a12e217c09d920322365ecd45f63315c19bd274d710b1351e0\": container with ID starting with c56ecebf538916a12e217c09d920322365ecd45f63315c19bd274d710b1351e0 not found: ID does not exist" containerID="c56ecebf538916a12e217c09d920322365ecd45f63315c19bd274d710b1351e0" Feb 19 14:33:36 crc kubenswrapper[4861]: I0219 14:33:36.934606 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c56ecebf538916a12e217c09d920322365ecd45f63315c19bd274d710b1351e0"} err="failed to get container status \"c56ecebf538916a12e217c09d920322365ecd45f63315c19bd274d710b1351e0\": rpc error: code = NotFound desc = could not find container \"c56ecebf538916a12e217c09d920322365ecd45f63315c19bd274d710b1351e0\": container with ID starting with c56ecebf538916a12e217c09d920322365ecd45f63315c19bd274d710b1351e0 not found: ID does not exist" Feb 19 14:33:36 crc kubenswrapper[4861]: I0219 14:33:36.934629 
4861 scope.go:117] "RemoveContainer" containerID="618a7f826c07423b518d34a656cd753b705502404791bc70efc7e9853b943ccf" Feb 19 14:33:36 crc kubenswrapper[4861]: E0219 14:33:36.935096 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"618a7f826c07423b518d34a656cd753b705502404791bc70efc7e9853b943ccf\": container with ID starting with 618a7f826c07423b518d34a656cd753b705502404791bc70efc7e9853b943ccf not found: ID does not exist" containerID="618a7f826c07423b518d34a656cd753b705502404791bc70efc7e9853b943ccf" Feb 19 14:33:36 crc kubenswrapper[4861]: I0219 14:33:36.935124 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"618a7f826c07423b518d34a656cd753b705502404791bc70efc7e9853b943ccf"} err="failed to get container status \"618a7f826c07423b518d34a656cd753b705502404791bc70efc7e9853b943ccf\": rpc error: code = NotFound desc = could not find container \"618a7f826c07423b518d34a656cd753b705502404791bc70efc7e9853b943ccf\": container with ID starting with 618a7f826c07423b518d34a656cd753b705502404791bc70efc7e9853b943ccf not found: ID does not exist" Feb 19 14:33:36 crc kubenswrapper[4861]: I0219 14:33:36.935141 4861 scope.go:117] "RemoveContainer" containerID="81588ebb2f915dd4932216ae9911edeb37a27656083e87d20c2b3b979d9b997f" Feb 19 14:33:36 crc kubenswrapper[4861]: E0219 14:33:36.935410 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81588ebb2f915dd4932216ae9911edeb37a27656083e87d20c2b3b979d9b997f\": container with ID starting with 81588ebb2f915dd4932216ae9911edeb37a27656083e87d20c2b3b979d9b997f not found: ID does not exist" containerID="81588ebb2f915dd4932216ae9911edeb37a27656083e87d20c2b3b979d9b997f" Feb 19 14:33:36 crc kubenswrapper[4861]: I0219 14:33:36.935492 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"81588ebb2f915dd4932216ae9911edeb37a27656083e87d20c2b3b979d9b997f"} err="failed to get container status \"81588ebb2f915dd4932216ae9911edeb37a27656083e87d20c2b3b979d9b997f\": rpc error: code = NotFound desc = could not find container \"81588ebb2f915dd4932216ae9911edeb37a27656083e87d20c2b3b979d9b997f\": container with ID starting with 81588ebb2f915dd4932216ae9911edeb37a27656083e87d20c2b3b979d9b997f not found: ID does not exist" Feb 19 14:33:37 crc kubenswrapper[4861]: I0219 14:33:37.988782 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12fd7701-b67d-481e-952e-dc3acef69fc2" path="/var/lib/kubelet/pods/12fd7701-b67d-481e-952e-dc3acef69fc2/volumes" Feb 19 14:33:39 crc kubenswrapper[4861]: I0219 14:33:39.738260 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4d9v7"] Feb 19 14:33:39 crc kubenswrapper[4861]: E0219 14:33:39.738882 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12fd7701-b67d-481e-952e-dc3acef69fc2" containerName="registry-server" Feb 19 14:33:39 crc kubenswrapper[4861]: I0219 14:33:39.738896 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="12fd7701-b67d-481e-952e-dc3acef69fc2" containerName="registry-server" Feb 19 14:33:39 crc kubenswrapper[4861]: E0219 14:33:39.738918 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12fd7701-b67d-481e-952e-dc3acef69fc2" containerName="extract-utilities" Feb 19 14:33:39 crc kubenswrapper[4861]: I0219 14:33:39.738927 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="12fd7701-b67d-481e-952e-dc3acef69fc2" containerName="extract-utilities" Feb 19 14:33:39 crc kubenswrapper[4861]: E0219 14:33:39.738949 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12fd7701-b67d-481e-952e-dc3acef69fc2" containerName="extract-content" Feb 19 14:33:39 crc kubenswrapper[4861]: I0219 14:33:39.738956 4861 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="12fd7701-b67d-481e-952e-dc3acef69fc2" containerName="extract-content" Feb 19 14:33:39 crc kubenswrapper[4861]: I0219 14:33:39.739086 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="12fd7701-b67d-481e-952e-dc3acef69fc2" containerName="registry-server" Feb 19 14:33:39 crc kubenswrapper[4861]: I0219 14:33:39.740107 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4d9v7" Feb 19 14:33:39 crc kubenswrapper[4861]: I0219 14:33:39.762603 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4d9v7"] Feb 19 14:33:39 crc kubenswrapper[4861]: I0219 14:33:39.889051 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30337e39-ce62-4089-9682-3b9488de020f-utilities\") pod \"certified-operators-4d9v7\" (UID: \"30337e39-ce62-4089-9682-3b9488de020f\") " pod="openshift-marketplace/certified-operators-4d9v7" Feb 19 14:33:39 crc kubenswrapper[4861]: I0219 14:33:39.889197 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30337e39-ce62-4089-9682-3b9488de020f-catalog-content\") pod \"certified-operators-4d9v7\" (UID: \"30337e39-ce62-4089-9682-3b9488de020f\") " pod="openshift-marketplace/certified-operators-4d9v7" Feb 19 14:33:39 crc kubenswrapper[4861]: I0219 14:33:39.889288 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn6gs\" (UniqueName: \"kubernetes.io/projected/30337e39-ce62-4089-9682-3b9488de020f-kube-api-access-bn6gs\") pod \"certified-operators-4d9v7\" (UID: \"30337e39-ce62-4089-9682-3b9488de020f\") " pod="openshift-marketplace/certified-operators-4d9v7" Feb 19 14:33:39 crc kubenswrapper[4861]: I0219 14:33:39.991376 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bn6gs\" (UniqueName: \"kubernetes.io/projected/30337e39-ce62-4089-9682-3b9488de020f-kube-api-access-bn6gs\") pod \"certified-operators-4d9v7\" (UID: \"30337e39-ce62-4089-9682-3b9488de020f\") " pod="openshift-marketplace/certified-operators-4d9v7" Feb 19 14:33:39 crc kubenswrapper[4861]: I0219 14:33:39.991586 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30337e39-ce62-4089-9682-3b9488de020f-utilities\") pod \"certified-operators-4d9v7\" (UID: \"30337e39-ce62-4089-9682-3b9488de020f\") " pod="openshift-marketplace/certified-operators-4d9v7" Feb 19 14:33:39 crc kubenswrapper[4861]: I0219 14:33:39.991644 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30337e39-ce62-4089-9682-3b9488de020f-catalog-content\") pod \"certified-operators-4d9v7\" (UID: \"30337e39-ce62-4089-9682-3b9488de020f\") " pod="openshift-marketplace/certified-operators-4d9v7" Feb 19 14:33:39 crc kubenswrapper[4861]: I0219 14:33:39.992167 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30337e39-ce62-4089-9682-3b9488de020f-utilities\") pod \"certified-operators-4d9v7\" (UID: \"30337e39-ce62-4089-9682-3b9488de020f\") " pod="openshift-marketplace/certified-operators-4d9v7" Feb 19 14:33:39 crc kubenswrapper[4861]: I0219 14:33:39.992190 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30337e39-ce62-4089-9682-3b9488de020f-catalog-content\") pod \"certified-operators-4d9v7\" (UID: \"30337e39-ce62-4089-9682-3b9488de020f\") " pod="openshift-marketplace/certified-operators-4d9v7" Feb 19 14:33:40 crc kubenswrapper[4861]: I0219 14:33:40.024546 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bn6gs\" (UniqueName: \"kubernetes.io/projected/30337e39-ce62-4089-9682-3b9488de020f-kube-api-access-bn6gs\") pod \"certified-operators-4d9v7\" (UID: \"30337e39-ce62-4089-9682-3b9488de020f\") " pod="openshift-marketplace/certified-operators-4d9v7" Feb 19 14:33:40 crc kubenswrapper[4861]: I0219 14:33:40.072246 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4d9v7" Feb 19 14:33:40 crc kubenswrapper[4861]: W0219 14:33:40.530789 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30337e39_ce62_4089_9682_3b9488de020f.slice/crio-8c1d11081ccc1b15a93770bc428682c0d137f39f31690ac00ad85a27ed76fb8e WatchSource:0}: Error finding container 8c1d11081ccc1b15a93770bc428682c0d137f39f31690ac00ad85a27ed76fb8e: Status 404 returned error can't find the container with id 8c1d11081ccc1b15a93770bc428682c0d137f39f31690ac00ad85a27ed76fb8e Feb 19 14:33:40 crc kubenswrapper[4861]: I0219 14:33:40.552240 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4d9v7"] Feb 19 14:33:40 crc kubenswrapper[4861]: I0219 14:33:40.863930 4861 generic.go:334] "Generic (PLEG): container finished" podID="30337e39-ce62-4089-9682-3b9488de020f" containerID="be60d2a7266a9faf3cf09578bc85a741e59064b9d62d82775752485f116184cd" exitCode=0 Feb 19 14:33:40 crc kubenswrapper[4861]: I0219 14:33:40.863976 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4d9v7" event={"ID":"30337e39-ce62-4089-9682-3b9488de020f","Type":"ContainerDied","Data":"be60d2a7266a9faf3cf09578bc85a741e59064b9d62d82775752485f116184cd"} Feb 19 14:33:40 crc kubenswrapper[4861]: I0219 14:33:40.864011 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4d9v7" 
event={"ID":"30337e39-ce62-4089-9682-3b9488de020f","Type":"ContainerStarted","Data":"8c1d11081ccc1b15a93770bc428682c0d137f39f31690ac00ad85a27ed76fb8e"} Feb 19 14:33:41 crc kubenswrapper[4861]: I0219 14:33:41.879155 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4d9v7" event={"ID":"30337e39-ce62-4089-9682-3b9488de020f","Type":"ContainerStarted","Data":"0511851165b8ebe00a5aed98afd9ef6c7a4a636a92a00d477369da017e71ca4e"} Feb 19 14:33:42 crc kubenswrapper[4861]: I0219 14:33:42.893158 4861 generic.go:334] "Generic (PLEG): container finished" podID="30337e39-ce62-4089-9682-3b9488de020f" containerID="0511851165b8ebe00a5aed98afd9ef6c7a4a636a92a00d477369da017e71ca4e" exitCode=0 Feb 19 14:33:42 crc kubenswrapper[4861]: I0219 14:33:42.893224 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4d9v7" event={"ID":"30337e39-ce62-4089-9682-3b9488de020f","Type":"ContainerDied","Data":"0511851165b8ebe00a5aed98afd9ef6c7a4a636a92a00d477369da017e71ca4e"} Feb 19 14:33:43 crc kubenswrapper[4861]: I0219 14:33:43.910345 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4d9v7" event={"ID":"30337e39-ce62-4089-9682-3b9488de020f","Type":"ContainerStarted","Data":"61b2d200e127baeb408cb00e30af2975998ffb5e5505c6aa9595163816c74969"} Feb 19 14:33:43 crc kubenswrapper[4861]: I0219 14:33:43.940214 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4d9v7" podStartSLOduration=2.394855175 podStartE2EDuration="4.940196378s" podCreationTimestamp="2026-02-19 14:33:39 +0000 UTC" firstStartedPulling="2026-02-19 14:33:40.866021921 +0000 UTC m=+5035.527125149" lastFinishedPulling="2026-02-19 14:33:43.411363094 +0000 UTC m=+5038.072466352" observedRunningTime="2026-02-19 14:33:43.936346485 +0000 UTC m=+5038.597449743" watchObservedRunningTime="2026-02-19 14:33:43.940196378 +0000 UTC 
m=+5038.601299616" Feb 19 14:33:50 crc kubenswrapper[4861]: I0219 14:33:50.073690 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4d9v7" Feb 19 14:33:50 crc kubenswrapper[4861]: I0219 14:33:50.074704 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4d9v7" Feb 19 14:33:50 crc kubenswrapper[4861]: I0219 14:33:50.160510 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4d9v7" Feb 19 14:33:51 crc kubenswrapper[4861]: I0219 14:33:51.030629 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4d9v7" Feb 19 14:33:51 crc kubenswrapper[4861]: I0219 14:33:51.087244 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4d9v7"] Feb 19 14:33:52 crc kubenswrapper[4861]: I0219 14:33:52.999097 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4d9v7" podUID="30337e39-ce62-4089-9682-3b9488de020f" containerName="registry-server" containerID="cri-o://61b2d200e127baeb408cb00e30af2975998ffb5e5505c6aa9595163816c74969" gracePeriod=2 Feb 19 14:33:54 crc kubenswrapper[4861]: I0219 14:33:54.011956 4861 generic.go:334] "Generic (PLEG): container finished" podID="30337e39-ce62-4089-9682-3b9488de020f" containerID="61b2d200e127baeb408cb00e30af2975998ffb5e5505c6aa9595163816c74969" exitCode=0 Feb 19 14:33:54 crc kubenswrapper[4861]: I0219 14:33:54.012039 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4d9v7" event={"ID":"30337e39-ce62-4089-9682-3b9488de020f","Type":"ContainerDied","Data":"61b2d200e127baeb408cb00e30af2975998ffb5e5505c6aa9595163816c74969"} Feb 19 14:33:54 crc kubenswrapper[4861]: I0219 14:33:54.629798 4861 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4d9v7" Feb 19 14:33:54 crc kubenswrapper[4861]: I0219 14:33:54.747391 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30337e39-ce62-4089-9682-3b9488de020f-catalog-content\") pod \"30337e39-ce62-4089-9682-3b9488de020f\" (UID: \"30337e39-ce62-4089-9682-3b9488de020f\") " Feb 19 14:33:54 crc kubenswrapper[4861]: I0219 14:33:54.747553 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30337e39-ce62-4089-9682-3b9488de020f-utilities\") pod \"30337e39-ce62-4089-9682-3b9488de020f\" (UID: \"30337e39-ce62-4089-9682-3b9488de020f\") " Feb 19 14:33:54 crc kubenswrapper[4861]: I0219 14:33:54.747699 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bn6gs\" (UniqueName: \"kubernetes.io/projected/30337e39-ce62-4089-9682-3b9488de020f-kube-api-access-bn6gs\") pod \"30337e39-ce62-4089-9682-3b9488de020f\" (UID: \"30337e39-ce62-4089-9682-3b9488de020f\") " Feb 19 14:33:54 crc kubenswrapper[4861]: I0219 14:33:54.748585 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30337e39-ce62-4089-9682-3b9488de020f-utilities" (OuterVolumeSpecName: "utilities") pod "30337e39-ce62-4089-9682-3b9488de020f" (UID: "30337e39-ce62-4089-9682-3b9488de020f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:33:54 crc kubenswrapper[4861]: I0219 14:33:54.755768 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30337e39-ce62-4089-9682-3b9488de020f-kube-api-access-bn6gs" (OuterVolumeSpecName: "kube-api-access-bn6gs") pod "30337e39-ce62-4089-9682-3b9488de020f" (UID: "30337e39-ce62-4089-9682-3b9488de020f"). 
InnerVolumeSpecName "kube-api-access-bn6gs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:33:54 crc kubenswrapper[4861]: I0219 14:33:54.817341 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30337e39-ce62-4089-9682-3b9488de020f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "30337e39-ce62-4089-9682-3b9488de020f" (UID: "30337e39-ce62-4089-9682-3b9488de020f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:33:54 crc kubenswrapper[4861]: I0219 14:33:54.850149 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30337e39-ce62-4089-9682-3b9488de020f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:54 crc kubenswrapper[4861]: I0219 14:33:54.850194 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30337e39-ce62-4089-9682-3b9488de020f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:54 crc kubenswrapper[4861]: I0219 14:33:54.850208 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bn6gs\" (UniqueName: \"kubernetes.io/projected/30337e39-ce62-4089-9682-3b9488de020f-kube-api-access-bn6gs\") on node \"crc\" DevicePath \"\"" Feb 19 14:33:55 crc kubenswrapper[4861]: I0219 14:33:55.026512 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4d9v7" Feb 19 14:33:55 crc kubenswrapper[4861]: I0219 14:33:55.026410 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4d9v7" event={"ID":"30337e39-ce62-4089-9682-3b9488de020f","Type":"ContainerDied","Data":"8c1d11081ccc1b15a93770bc428682c0d137f39f31690ac00ad85a27ed76fb8e"} Feb 19 14:33:55 crc kubenswrapper[4861]: I0219 14:33:55.026636 4861 scope.go:117] "RemoveContainer" containerID="61b2d200e127baeb408cb00e30af2975998ffb5e5505c6aa9595163816c74969" Feb 19 14:33:55 crc kubenswrapper[4861]: I0219 14:33:55.064134 4861 scope.go:117] "RemoveContainer" containerID="0511851165b8ebe00a5aed98afd9ef6c7a4a636a92a00d477369da017e71ca4e" Feb 19 14:33:55 crc kubenswrapper[4861]: I0219 14:33:55.099853 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4d9v7"] Feb 19 14:33:55 crc kubenswrapper[4861]: I0219 14:33:55.117999 4861 scope.go:117] "RemoveContainer" containerID="be60d2a7266a9faf3cf09578bc85a741e59064b9d62d82775752485f116184cd" Feb 19 14:33:55 crc kubenswrapper[4861]: I0219 14:33:55.128182 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4d9v7"] Feb 19 14:33:56 crc kubenswrapper[4861]: I0219 14:33:56.004079 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30337e39-ce62-4089-9682-3b9488de020f" path="/var/lib/kubelet/pods/30337e39-ce62-4089-9682-3b9488de020f/volumes" Feb 19 14:34:03 crc kubenswrapper[4861]: I0219 14:34:03.835490 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 14:34:03 crc kubenswrapper[4861]: I0219 14:34:03.836383 4861 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 14:34:08 crc kubenswrapper[4861]: I0219 14:34:08.160156 4861 generic.go:334] "Generic (PLEG): container finished" podID="1dd55ddf-97da-4b63-9239-1d5c18a70b92" containerID="2deb6c576e39f91563539cda6fd8201a22bc28ce625f4715750a3cc6063bcbb3" exitCode=0 Feb 19 14:34:08 crc kubenswrapper[4861]: I0219 14:34:08.160738 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1dd55ddf-97da-4b63-9239-1d5c18a70b92","Type":"ContainerDied","Data":"2deb6c576e39f91563539cda6fd8201a22bc28ce625f4715750a3cc6063bcbb3"} Feb 19 14:34:09 crc kubenswrapper[4861]: I0219 14:34:09.168828 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1dd55ddf-97da-4b63-9239-1d5c18a70b92","Type":"ContainerStarted","Data":"40b9b9c4b24ea9a4059e200f24ef1f2cde8d3961640034f907091d67f6a727ce"} Feb 19 14:34:09 crc kubenswrapper[4861]: I0219 14:34:09.170409 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 19 14:34:09 crc kubenswrapper[4861]: I0219 14:34:09.197979 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.197958301 podStartE2EDuration="37.197958301s" podCreationTimestamp="2026-02-19 14:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:34:09.191217439 +0000 UTC m=+5063.852320687" watchObservedRunningTime="2026-02-19 14:34:09.197958301 +0000 UTC m=+5063.859061539" Feb 19 14:34:10 crc kubenswrapper[4861]: I0219 14:34:10.179837 4861 generic.go:334] "Generic (PLEG): container 
finished" podID="de081c0e-da11-4fbd-a24c-f38f6800df56" containerID="a526b8459cb2ada4472847a5e76a7acd8fa451a21c9f07c4171f020180b86b26" exitCode=0 Feb 19 14:34:10 crc kubenswrapper[4861]: I0219 14:34:10.179953 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"de081c0e-da11-4fbd-a24c-f38f6800df56","Type":"ContainerDied","Data":"a526b8459cb2ada4472847a5e76a7acd8fa451a21c9f07c4171f020180b86b26"} Feb 19 14:34:11 crc kubenswrapper[4861]: I0219 14:34:11.192161 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"de081c0e-da11-4fbd-a24c-f38f6800df56","Type":"ContainerStarted","Data":"76c74634b71c8207f0793779f12e904fff86d00291dee45b268ec06da3ad2547"} Feb 19 14:34:11 crc kubenswrapper[4861]: I0219 14:34:11.192818 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:34:11 crc kubenswrapper[4861]: I0219 14:34:11.229716 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.229683451 podStartE2EDuration="38.229683451s" podCreationTimestamp="2026-02-19 14:33:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:34:11.228604422 +0000 UTC m=+5065.889707710" watchObservedRunningTime="2026-02-19 14:34:11.229683451 +0000 UTC m=+5065.890786719" Feb 19 14:34:23 crc kubenswrapper[4861]: I0219 14:34:23.206680 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 19 14:34:24 crc kubenswrapper[4861]: I0219 14:34:24.197757 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 19 14:34:27 crc kubenswrapper[4861]: I0219 14:34:27.135667 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Feb 
19 14:34:27 crc kubenswrapper[4861]: E0219 14:34:27.136754 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30337e39-ce62-4089-9682-3b9488de020f" containerName="registry-server" Feb 19 14:34:27 crc kubenswrapper[4861]: I0219 14:34:27.136782 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="30337e39-ce62-4089-9682-3b9488de020f" containerName="registry-server" Feb 19 14:34:27 crc kubenswrapper[4861]: E0219 14:34:27.136798 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30337e39-ce62-4089-9682-3b9488de020f" containerName="extract-content" Feb 19 14:34:27 crc kubenswrapper[4861]: I0219 14:34:27.136810 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="30337e39-ce62-4089-9682-3b9488de020f" containerName="extract-content" Feb 19 14:34:27 crc kubenswrapper[4861]: E0219 14:34:27.136854 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30337e39-ce62-4089-9682-3b9488de020f" containerName="extract-utilities" Feb 19 14:34:27 crc kubenswrapper[4861]: I0219 14:34:27.136867 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="30337e39-ce62-4089-9682-3b9488de020f" containerName="extract-utilities" Feb 19 14:34:27 crc kubenswrapper[4861]: I0219 14:34:27.137155 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="30337e39-ce62-4089-9682-3b9488de020f" containerName="registry-server" Feb 19 14:34:27 crc kubenswrapper[4861]: I0219 14:34:27.138086 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 19 14:34:27 crc kubenswrapper[4861]: I0219 14:34:27.141298 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-9c7sv" Feb 19 14:34:27 crc kubenswrapper[4861]: I0219 14:34:27.148200 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 19 14:34:27 crc kubenswrapper[4861]: I0219 14:34:27.198137 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vr9r\" (UniqueName: \"kubernetes.io/projected/70e2a1a9-5a29-468a-8c41-188b588d9d37-kube-api-access-2vr9r\") pod \"mariadb-client\" (UID: \"70e2a1a9-5a29-468a-8c41-188b588d9d37\") " pod="openstack/mariadb-client" Feb 19 14:34:27 crc kubenswrapper[4861]: I0219 14:34:27.300129 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vr9r\" (UniqueName: \"kubernetes.io/projected/70e2a1a9-5a29-468a-8c41-188b588d9d37-kube-api-access-2vr9r\") pod \"mariadb-client\" (UID: \"70e2a1a9-5a29-468a-8c41-188b588d9d37\") " pod="openstack/mariadb-client" Feb 19 14:34:27 crc kubenswrapper[4861]: I0219 14:34:27.338297 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vr9r\" (UniqueName: \"kubernetes.io/projected/70e2a1a9-5a29-468a-8c41-188b588d9d37-kube-api-access-2vr9r\") pod \"mariadb-client\" (UID: \"70e2a1a9-5a29-468a-8c41-188b588d9d37\") " pod="openstack/mariadb-client" Feb 19 14:34:27 crc kubenswrapper[4861]: I0219 14:34:27.480326 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 19 14:34:27 crc kubenswrapper[4861]: I0219 14:34:27.885413 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 19 14:34:27 crc kubenswrapper[4861]: W0219 14:34:27.886077 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70e2a1a9_5a29_468a_8c41_188b588d9d37.slice/crio-ce8e6c2cfd64ad010b18772eada1ddd39f2f1a925ce80fc0323a389e03fcca67 WatchSource:0}: Error finding container ce8e6c2cfd64ad010b18772eada1ddd39f2f1a925ce80fc0323a389e03fcca67: Status 404 returned error can't find the container with id ce8e6c2cfd64ad010b18772eada1ddd39f2f1a925ce80fc0323a389e03fcca67 Feb 19 14:34:28 crc kubenswrapper[4861]: I0219 14:34:28.356319 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"70e2a1a9-5a29-468a-8c41-188b588d9d37","Type":"ContainerStarted","Data":"ce8e6c2cfd64ad010b18772eada1ddd39f2f1a925ce80fc0323a389e03fcca67"} Feb 19 14:34:29 crc kubenswrapper[4861]: I0219 14:34:29.370107 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"70e2a1a9-5a29-468a-8c41-188b588d9d37","Type":"ContainerStarted","Data":"fdd04dc6756085ca5fd7c66bc5b8f6e301b621f9f10f6075b0915eba2a2e8fce"} Feb 19 14:34:29 crc kubenswrapper[4861]: I0219 14:34:29.396254 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=1.953652148 podStartE2EDuration="2.396225316s" podCreationTimestamp="2026-02-19 14:34:27 +0000 UTC" firstStartedPulling="2026-02-19 14:34:27.89039879 +0000 UTC m=+5082.551502028" lastFinishedPulling="2026-02-19 14:34:28.332971968 +0000 UTC m=+5082.994075196" observedRunningTime="2026-02-19 14:34:29.387677066 +0000 UTC m=+5084.048780324" watchObservedRunningTime="2026-02-19 14:34:29.396225316 +0000 UTC m=+5084.057328574" Feb 19 14:34:33 crc 
kubenswrapper[4861]: I0219 14:34:33.834899 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 14:34:33 crc kubenswrapper[4861]: I0219 14:34:33.835690 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 14:34:33 crc kubenswrapper[4861]: I0219 14:34:33.835755 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 14:34:33 crc kubenswrapper[4861]: I0219 14:34:33.836674 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"56bbaab0b2802fda73b95514c34af1462b3e65c35d413a549965bd83c5d35b4f"} pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 14:34:33 crc kubenswrapper[4861]: I0219 14:34:33.836794 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" containerID="cri-o://56bbaab0b2802fda73b95514c34af1462b3e65c35d413a549965bd83c5d35b4f" gracePeriod=600 Feb 19 14:34:34 crc kubenswrapper[4861]: I0219 14:34:34.418474 4861 generic.go:334] "Generic (PLEG): container finished" podID="478e6971-05ac-43f2-99a2-cd93644c6227" 
containerID="56bbaab0b2802fda73b95514c34af1462b3e65c35d413a549965bd83c5d35b4f" exitCode=0 Feb 19 14:34:34 crc kubenswrapper[4861]: I0219 14:34:34.418508 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerDied","Data":"56bbaab0b2802fda73b95514c34af1462b3e65c35d413a549965bd83c5d35b4f"} Feb 19 14:34:34 crc kubenswrapper[4861]: I0219 14:34:34.418795 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"55684c363f793a2ee4b1d80659dbf54faf32701883e18291ec4e32c294ea8bad"} Feb 19 14:34:34 crc kubenswrapper[4861]: I0219 14:34:34.418826 4861 scope.go:117] "RemoveContainer" containerID="ea1fb9dda6615e7bd0de449175c0c11a8d4523b9efb63c9fbf83b974def3474f" Feb 19 14:34:43 crc kubenswrapper[4861]: I0219 14:34:43.256034 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Feb 19 14:34:43 crc kubenswrapper[4861]: I0219 14:34:43.258131 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-client" podUID="70e2a1a9-5a29-468a-8c41-188b588d9d37" containerName="mariadb-client" containerID="cri-o://fdd04dc6756085ca5fd7c66bc5b8f6e301b621f9f10f6075b0915eba2a2e8fce" gracePeriod=30 Feb 19 14:34:43 crc kubenswrapper[4861]: I0219 14:34:43.505012 4861 generic.go:334] "Generic (PLEG): container finished" podID="70e2a1a9-5a29-468a-8c41-188b588d9d37" containerID="fdd04dc6756085ca5fd7c66bc5b8f6e301b621f9f10f6075b0915eba2a2e8fce" exitCode=143 Feb 19 14:34:43 crc kubenswrapper[4861]: I0219 14:34:43.505076 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"70e2a1a9-5a29-468a-8c41-188b588d9d37","Type":"ContainerDied","Data":"fdd04dc6756085ca5fd7c66bc5b8f6e301b621f9f10f6075b0915eba2a2e8fce"} Feb 
19 14:34:43 crc kubenswrapper[4861]: I0219 14:34:43.890223 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 19 14:34:43 crc kubenswrapper[4861]: I0219 14:34:43.980852 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vr9r\" (UniqueName: \"kubernetes.io/projected/70e2a1a9-5a29-468a-8c41-188b588d9d37-kube-api-access-2vr9r\") pod \"70e2a1a9-5a29-468a-8c41-188b588d9d37\" (UID: \"70e2a1a9-5a29-468a-8c41-188b588d9d37\") " Feb 19 14:34:43 crc kubenswrapper[4861]: I0219 14:34:43.990257 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70e2a1a9-5a29-468a-8c41-188b588d9d37-kube-api-access-2vr9r" (OuterVolumeSpecName: "kube-api-access-2vr9r") pod "70e2a1a9-5a29-468a-8c41-188b588d9d37" (UID: "70e2a1a9-5a29-468a-8c41-188b588d9d37"). InnerVolumeSpecName "kube-api-access-2vr9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:34:44 crc kubenswrapper[4861]: I0219 14:34:44.083175 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vr9r\" (UniqueName: \"kubernetes.io/projected/70e2a1a9-5a29-468a-8c41-188b588d9d37-kube-api-access-2vr9r\") on node \"crc\" DevicePath \"\"" Feb 19 14:34:44 crc kubenswrapper[4861]: I0219 14:34:44.518253 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"70e2a1a9-5a29-468a-8c41-188b588d9d37","Type":"ContainerDied","Data":"ce8e6c2cfd64ad010b18772eada1ddd39f2f1a925ce80fc0323a389e03fcca67"} Feb 19 14:34:44 crc kubenswrapper[4861]: I0219 14:34:44.518329 4861 scope.go:117] "RemoveContainer" containerID="fdd04dc6756085ca5fd7c66bc5b8f6e301b621f9f10f6075b0915eba2a2e8fce" Feb 19 14:34:44 crc kubenswrapper[4861]: I0219 14:34:44.518375 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 19 14:34:44 crc kubenswrapper[4861]: I0219 14:34:44.553097 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Feb 19 14:34:44 crc kubenswrapper[4861]: I0219 14:34:44.565519 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Feb 19 14:34:45 crc kubenswrapper[4861]: I0219 14:34:45.995402 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70e2a1a9-5a29-468a-8c41-188b588d9d37" path="/var/lib/kubelet/pods/70e2a1a9-5a29-468a-8c41-188b588d9d37/volumes" Feb 19 14:37:03 crc kubenswrapper[4861]: I0219 14:37:03.235282 4861 scope.go:117] "RemoveContainer" containerID="5024d127ccfb4712118b054f67084a25c3b047106a5c3bcb3039f88de9d26f03" Feb 19 14:37:03 crc kubenswrapper[4861]: I0219 14:37:03.834528 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 14:37:03 crc kubenswrapper[4861]: I0219 14:37:03.834624 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 14:37:33 crc kubenswrapper[4861]: I0219 14:37:33.834737 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 14:37:33 crc kubenswrapper[4861]: I0219 14:37:33.835533 4861 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 14:38:03 crc kubenswrapper[4861]: I0219 14:38:03.834779 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 14:38:03 crc kubenswrapper[4861]: I0219 14:38:03.835390 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 14:38:03 crc kubenswrapper[4861]: I0219 14:38:03.835483 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 14:38:03 crc kubenswrapper[4861]: I0219 14:38:03.836391 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"55684c363f793a2ee4b1d80659dbf54faf32701883e18291ec4e32c294ea8bad"} pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 14:38:03 crc kubenswrapper[4861]: I0219 14:38:03.836523 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" 
containerID="cri-o://55684c363f793a2ee4b1d80659dbf54faf32701883e18291ec4e32c294ea8bad" gracePeriod=600 Feb 19 14:38:03 crc kubenswrapper[4861]: E0219 14:38:03.979571 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:38:04 crc kubenswrapper[4861]: I0219 14:38:04.343932 4861 generic.go:334] "Generic (PLEG): container finished" podID="478e6971-05ac-43f2-99a2-cd93644c6227" containerID="55684c363f793a2ee4b1d80659dbf54faf32701883e18291ec4e32c294ea8bad" exitCode=0 Feb 19 14:38:04 crc kubenswrapper[4861]: I0219 14:38:04.344014 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerDied","Data":"55684c363f793a2ee4b1d80659dbf54faf32701883e18291ec4e32c294ea8bad"} Feb 19 14:38:04 crc kubenswrapper[4861]: I0219 14:38:04.344124 4861 scope.go:117] "RemoveContainer" containerID="56bbaab0b2802fda73b95514c34af1462b3e65c35d413a549965bd83c5d35b4f" Feb 19 14:38:04 crc kubenswrapper[4861]: I0219 14:38:04.344900 4861 scope.go:117] "RemoveContainer" containerID="55684c363f793a2ee4b1d80659dbf54faf32701883e18291ec4e32c294ea8bad" Feb 19 14:38:04 crc kubenswrapper[4861]: E0219 14:38:04.345333 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" 
podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:38:09 crc kubenswrapper[4861]: I0219 14:38:09.940919 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Feb 19 14:38:09 crc kubenswrapper[4861]: E0219 14:38:09.941972 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70e2a1a9-5a29-468a-8c41-188b588d9d37" containerName="mariadb-client" Feb 19 14:38:09 crc kubenswrapper[4861]: I0219 14:38:09.942006 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="70e2a1a9-5a29-468a-8c41-188b588d9d37" containerName="mariadb-client" Feb 19 14:38:09 crc kubenswrapper[4861]: I0219 14:38:09.942367 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="70e2a1a9-5a29-468a-8c41-188b588d9d37" containerName="mariadb-client" Feb 19 14:38:09 crc kubenswrapper[4861]: I0219 14:38:09.943525 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Feb 19 14:38:09 crc kubenswrapper[4861]: I0219 14:38:09.947096 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-9c7sv" Feb 19 14:38:09 crc kubenswrapper[4861]: I0219 14:38:09.957162 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Feb 19 14:38:10 crc kubenswrapper[4861]: I0219 14:38:10.095887 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-554ff08e-f895-4592-bfab-e235a569c035\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-554ff08e-f895-4592-bfab-e235a569c035\") pod \"mariadb-copy-data\" (UID: \"fd536e7b-cca2-47da-948a-629b72856c4b\") " pod="openstack/mariadb-copy-data" Feb 19 14:38:10 crc kubenswrapper[4861]: I0219 14:38:10.097796 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gbqd\" (UniqueName: 
\"kubernetes.io/projected/fd536e7b-cca2-47da-948a-629b72856c4b-kube-api-access-6gbqd\") pod \"mariadb-copy-data\" (UID: \"fd536e7b-cca2-47da-948a-629b72856c4b\") " pod="openstack/mariadb-copy-data" Feb 19 14:38:10 crc kubenswrapper[4861]: I0219 14:38:10.199667 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gbqd\" (UniqueName: \"kubernetes.io/projected/fd536e7b-cca2-47da-948a-629b72856c4b-kube-api-access-6gbqd\") pod \"mariadb-copy-data\" (UID: \"fd536e7b-cca2-47da-948a-629b72856c4b\") " pod="openstack/mariadb-copy-data" Feb 19 14:38:10 crc kubenswrapper[4861]: I0219 14:38:10.200030 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-554ff08e-f895-4592-bfab-e235a569c035\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-554ff08e-f895-4592-bfab-e235a569c035\") pod \"mariadb-copy-data\" (UID: \"fd536e7b-cca2-47da-948a-629b72856c4b\") " pod="openstack/mariadb-copy-data" Feb 19 14:38:10 crc kubenswrapper[4861]: I0219 14:38:10.205294 4861 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 14:38:10 crc kubenswrapper[4861]: I0219 14:38:10.205484 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-554ff08e-f895-4592-bfab-e235a569c035\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-554ff08e-f895-4592-bfab-e235a569c035\") pod \"mariadb-copy-data\" (UID: \"fd536e7b-cca2-47da-948a-629b72856c4b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2cb48cb56ed3d33b4b6c5de5311a6b0a24f580fd400755bcf1fc281166f84d5a/globalmount\"" pod="openstack/mariadb-copy-data" Feb 19 14:38:10 crc kubenswrapper[4861]: I0219 14:38:10.240656 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gbqd\" (UniqueName: \"kubernetes.io/projected/fd536e7b-cca2-47da-948a-629b72856c4b-kube-api-access-6gbqd\") pod \"mariadb-copy-data\" (UID: \"fd536e7b-cca2-47da-948a-629b72856c4b\") " pod="openstack/mariadb-copy-data" Feb 19 14:38:10 crc kubenswrapper[4861]: I0219 14:38:10.253127 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-554ff08e-f895-4592-bfab-e235a569c035\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-554ff08e-f895-4592-bfab-e235a569c035\") pod \"mariadb-copy-data\" (UID: \"fd536e7b-cca2-47da-948a-629b72856c4b\") " pod="openstack/mariadb-copy-data" Feb 19 14:38:10 crc kubenswrapper[4861]: I0219 14:38:10.278772 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Feb 19 14:38:10 crc kubenswrapper[4861]: I0219 14:38:10.679567 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Feb 19 14:38:11 crc kubenswrapper[4861]: I0219 14:38:11.417507 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"fd536e7b-cca2-47da-948a-629b72856c4b","Type":"ContainerStarted","Data":"43c87226c6f9ea9f69cb129d6a87d787629d9bb13ae26f934ee9bbe4f9c8f241"} Feb 19 14:38:11 crc kubenswrapper[4861]: I0219 14:38:11.418082 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"fd536e7b-cca2-47da-948a-629b72856c4b","Type":"ContainerStarted","Data":"9c327fde8a28858d15b556f58f0d59aa6185f56f64988f0d820bf88d29a499dd"} Feb 19 14:38:11 crc kubenswrapper[4861]: I0219 14:38:11.441948 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=3.441919366 podStartE2EDuration="3.441919366s" podCreationTimestamp="2026-02-19 14:38:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:38:11.435257958 +0000 UTC m=+5306.096361216" watchObservedRunningTime="2026-02-19 14:38:11.441919366 +0000 UTC m=+5306.103022624" Feb 19 14:38:14 crc kubenswrapper[4861]: I0219 14:38:14.389812 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Feb 19 14:38:14 crc kubenswrapper[4861]: I0219 14:38:14.395190 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 19 14:38:14 crc kubenswrapper[4861]: I0219 14:38:14.403113 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 19 14:38:14 crc kubenswrapper[4861]: I0219 14:38:14.477895 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k57jr\" (UniqueName: \"kubernetes.io/projected/1e24edc1-f277-445e-bcf6-ebe3a31c000d-kube-api-access-k57jr\") pod \"mariadb-client\" (UID: \"1e24edc1-f277-445e-bcf6-ebe3a31c000d\") " pod="openstack/mariadb-client" Feb 19 14:38:14 crc kubenswrapper[4861]: I0219 14:38:14.579873 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k57jr\" (UniqueName: \"kubernetes.io/projected/1e24edc1-f277-445e-bcf6-ebe3a31c000d-kube-api-access-k57jr\") pod \"mariadb-client\" (UID: \"1e24edc1-f277-445e-bcf6-ebe3a31c000d\") " pod="openstack/mariadb-client" Feb 19 14:38:14 crc kubenswrapper[4861]: I0219 14:38:14.617297 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k57jr\" (UniqueName: \"kubernetes.io/projected/1e24edc1-f277-445e-bcf6-ebe3a31c000d-kube-api-access-k57jr\") pod \"mariadb-client\" (UID: \"1e24edc1-f277-445e-bcf6-ebe3a31c000d\") " pod="openstack/mariadb-client" Feb 19 14:38:14 crc kubenswrapper[4861]: I0219 14:38:14.730068 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 19 14:38:15 crc kubenswrapper[4861]: I0219 14:38:15.049722 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 19 14:38:15 crc kubenswrapper[4861]: W0219 14:38:15.054695 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e24edc1_f277_445e_bcf6_ebe3a31c000d.slice/crio-249daef7f73587af23d8c934ea5e1bfc1e75d66ebf43dbb01825a280f461c825 WatchSource:0}: Error finding container 249daef7f73587af23d8c934ea5e1bfc1e75d66ebf43dbb01825a280f461c825: Status 404 returned error can't find the container with id 249daef7f73587af23d8c934ea5e1bfc1e75d66ebf43dbb01825a280f461c825 Feb 19 14:38:15 crc kubenswrapper[4861]: I0219 14:38:15.454866 4861 generic.go:334] "Generic (PLEG): container finished" podID="1e24edc1-f277-445e-bcf6-ebe3a31c000d" containerID="184a4301dc1907adda0921922c6c175f36c0e99f2eeab3495ea8f267bb443cbb" exitCode=0 Feb 19 14:38:15 crc kubenswrapper[4861]: I0219 14:38:15.455288 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"1e24edc1-f277-445e-bcf6-ebe3a31c000d","Type":"ContainerDied","Data":"184a4301dc1907adda0921922c6c175f36c0e99f2eeab3495ea8f267bb443cbb"} Feb 19 14:38:15 crc kubenswrapper[4861]: I0219 14:38:15.455405 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"1e24edc1-f277-445e-bcf6-ebe3a31c000d","Type":"ContainerStarted","Data":"249daef7f73587af23d8c934ea5e1bfc1e75d66ebf43dbb01825a280f461c825"} Feb 19 14:38:16 crc kubenswrapper[4861]: I0219 14:38:16.857143 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 19 14:38:16 crc kubenswrapper[4861]: I0219 14:38:16.883797 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_1e24edc1-f277-445e-bcf6-ebe3a31c000d/mariadb-client/0.log" Feb 19 14:38:16 crc kubenswrapper[4861]: I0219 14:38:16.910308 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Feb 19 14:38:16 crc kubenswrapper[4861]: I0219 14:38:16.915678 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Feb 19 14:38:16 crc kubenswrapper[4861]: I0219 14:38:16.916619 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k57jr\" (UniqueName: \"kubernetes.io/projected/1e24edc1-f277-445e-bcf6-ebe3a31c000d-kube-api-access-k57jr\") pod \"1e24edc1-f277-445e-bcf6-ebe3a31c000d\" (UID: \"1e24edc1-f277-445e-bcf6-ebe3a31c000d\") " Feb 19 14:38:16 crc kubenswrapper[4861]: I0219 14:38:16.922115 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e24edc1-f277-445e-bcf6-ebe3a31c000d-kube-api-access-k57jr" (OuterVolumeSpecName: "kube-api-access-k57jr") pod "1e24edc1-f277-445e-bcf6-ebe3a31c000d" (UID: "1e24edc1-f277-445e-bcf6-ebe3a31c000d"). InnerVolumeSpecName "kube-api-access-k57jr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:38:17 crc kubenswrapper[4861]: I0219 14:38:17.018853 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k57jr\" (UniqueName: \"kubernetes.io/projected/1e24edc1-f277-445e-bcf6-ebe3a31c000d-kube-api-access-k57jr\") on node \"crc\" DevicePath \"\"" Feb 19 14:38:17 crc kubenswrapper[4861]: I0219 14:38:17.038741 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Feb 19 14:38:17 crc kubenswrapper[4861]: E0219 14:38:17.039022 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e24edc1-f277-445e-bcf6-ebe3a31c000d" containerName="mariadb-client" Feb 19 14:38:17 crc kubenswrapper[4861]: I0219 14:38:17.039032 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e24edc1-f277-445e-bcf6-ebe3a31c000d" containerName="mariadb-client" Feb 19 14:38:17 crc kubenswrapper[4861]: I0219 14:38:17.039600 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e24edc1-f277-445e-bcf6-ebe3a31c000d" containerName="mariadb-client" Feb 19 14:38:17 crc kubenswrapper[4861]: I0219 14:38:17.040365 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 19 14:38:17 crc kubenswrapper[4861]: I0219 14:38:17.049870 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 19 14:38:17 crc kubenswrapper[4861]: I0219 14:38:17.121127 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z96z\" (UniqueName: \"kubernetes.io/projected/71816c52-46d5-433d-bbaf-9bae5d41fed5-kube-api-access-6z96z\") pod \"mariadb-client\" (UID: \"71816c52-46d5-433d-bbaf-9bae5d41fed5\") " pod="openstack/mariadb-client" Feb 19 14:38:17 crc kubenswrapper[4861]: I0219 14:38:17.222411 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z96z\" (UniqueName: \"kubernetes.io/projected/71816c52-46d5-433d-bbaf-9bae5d41fed5-kube-api-access-6z96z\") pod \"mariadb-client\" (UID: \"71816c52-46d5-433d-bbaf-9bae5d41fed5\") " pod="openstack/mariadb-client" Feb 19 14:38:17 crc kubenswrapper[4861]: I0219 14:38:17.246131 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z96z\" (UniqueName: \"kubernetes.io/projected/71816c52-46d5-433d-bbaf-9bae5d41fed5-kube-api-access-6z96z\") pod \"mariadb-client\" (UID: \"71816c52-46d5-433d-bbaf-9bae5d41fed5\") " pod="openstack/mariadb-client" Feb 19 14:38:17 crc kubenswrapper[4861]: I0219 14:38:17.362008 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 19 14:38:17 crc kubenswrapper[4861]: I0219 14:38:17.473491 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="249daef7f73587af23d8c934ea5e1bfc1e75d66ebf43dbb01825a280f461c825" Feb 19 14:38:17 crc kubenswrapper[4861]: I0219 14:38:17.473551 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 19 14:38:17 crc kubenswrapper[4861]: I0219 14:38:17.492851 4861 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="1e24edc1-f277-445e-bcf6-ebe3a31c000d" podUID="71816c52-46d5-433d-bbaf-9bae5d41fed5" Feb 19 14:38:17 crc kubenswrapper[4861]: I0219 14:38:17.875887 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 19 14:38:17 crc kubenswrapper[4861]: W0219 14:38:17.884392 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71816c52_46d5_433d_bbaf_9bae5d41fed5.slice/crio-2943d89a20af36ad94e5710bd3decac33e283c7366e6b7dfa4b74725f9e37972 WatchSource:0}: Error finding container 2943d89a20af36ad94e5710bd3decac33e283c7366e6b7dfa4b74725f9e37972: Status 404 returned error can't find the container with id 2943d89a20af36ad94e5710bd3decac33e283c7366e6b7dfa4b74725f9e37972 Feb 19 14:38:17 crc kubenswrapper[4861]: I0219 14:38:17.992113 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e24edc1-f277-445e-bcf6-ebe3a31c000d" path="/var/lib/kubelet/pods/1e24edc1-f277-445e-bcf6-ebe3a31c000d/volumes" Feb 19 14:38:18 crc kubenswrapper[4861]: I0219 14:38:18.484757 4861 generic.go:334] "Generic (PLEG): container finished" podID="71816c52-46d5-433d-bbaf-9bae5d41fed5" containerID="215306febb129dacc450702cb6f321766437a214d5debb5722adb03b47bc6898" exitCode=0 Feb 19 14:38:18 crc kubenswrapper[4861]: I0219 14:38:18.484825 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"71816c52-46d5-433d-bbaf-9bae5d41fed5","Type":"ContainerDied","Data":"215306febb129dacc450702cb6f321766437a214d5debb5722adb03b47bc6898"} Feb 19 14:38:18 crc kubenswrapper[4861]: I0219 14:38:18.485116 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" 
event={"ID":"71816c52-46d5-433d-bbaf-9bae5d41fed5","Type":"ContainerStarted","Data":"2943d89a20af36ad94e5710bd3decac33e283c7366e6b7dfa4b74725f9e37972"} Feb 19 14:38:19 crc kubenswrapper[4861]: I0219 14:38:19.855604 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 19 14:38:19 crc kubenswrapper[4861]: I0219 14:38:19.872133 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_71816c52-46d5-433d-bbaf-9bae5d41fed5/mariadb-client/0.log" Feb 19 14:38:19 crc kubenswrapper[4861]: I0219 14:38:19.895435 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Feb 19 14:38:19 crc kubenswrapper[4861]: I0219 14:38:19.900915 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Feb 19 14:38:19 crc kubenswrapper[4861]: I0219 14:38:19.977573 4861 scope.go:117] "RemoveContainer" containerID="55684c363f793a2ee4b1d80659dbf54faf32701883e18291ec4e32c294ea8bad" Feb 19 14:38:19 crc kubenswrapper[4861]: E0219 14:38:19.977837 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:38:19 crc kubenswrapper[4861]: I0219 14:38:19.990354 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z96z\" (UniqueName: \"kubernetes.io/projected/71816c52-46d5-433d-bbaf-9bae5d41fed5-kube-api-access-6z96z\") pod \"71816c52-46d5-433d-bbaf-9bae5d41fed5\" (UID: \"71816c52-46d5-433d-bbaf-9bae5d41fed5\") " Feb 19 14:38:19 crc kubenswrapper[4861]: I0219 14:38:19.996146 4861 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/71816c52-46d5-433d-bbaf-9bae5d41fed5-kube-api-access-6z96z" (OuterVolumeSpecName: "kube-api-access-6z96z") pod "71816c52-46d5-433d-bbaf-9bae5d41fed5" (UID: "71816c52-46d5-433d-bbaf-9bae5d41fed5"). InnerVolumeSpecName "kube-api-access-6z96z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:38:19 crc kubenswrapper[4861]: I0219 14:38:19.998059 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71816c52-46d5-433d-bbaf-9bae5d41fed5" path="/var/lib/kubelet/pods/71816c52-46d5-433d-bbaf-9bae5d41fed5/volumes" Feb 19 14:38:20 crc kubenswrapper[4861]: I0219 14:38:20.092829 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6z96z\" (UniqueName: \"kubernetes.io/projected/71816c52-46d5-433d-bbaf-9bae5d41fed5-kube-api-access-6z96z\") on node \"crc\" DevicePath \"\"" Feb 19 14:38:20 crc kubenswrapper[4861]: I0219 14:38:20.507557 4861 scope.go:117] "RemoveContainer" containerID="215306febb129dacc450702cb6f321766437a214d5debb5722adb03b47bc6898" Feb 19 14:38:20 crc kubenswrapper[4861]: I0219 14:38:20.507639 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 19 14:38:30 crc kubenswrapper[4861]: I0219 14:38:30.977804 4861 scope.go:117] "RemoveContainer" containerID="55684c363f793a2ee4b1d80659dbf54faf32701883e18291ec4e32c294ea8bad" Feb 19 14:38:30 crc kubenswrapper[4861]: E0219 14:38:30.978796 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:38:41 crc kubenswrapper[4861]: I0219 14:38:41.978790 4861 scope.go:117] "RemoveContainer" containerID="55684c363f793a2ee4b1d80659dbf54faf32701883e18291ec4e32c294ea8bad" Feb 19 14:38:41 crc kubenswrapper[4861]: E0219 14:38:41.979764 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:38:55 crc kubenswrapper[4861]: I0219 14:38:55.982356 4861 scope.go:117] "RemoveContainer" containerID="55684c363f793a2ee4b1d80659dbf54faf32701883e18291ec4e32c294ea8bad" Feb 19 14:38:55 crc kubenswrapper[4861]: E0219 14:38:55.983696 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.384097 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 14:38:58 crc kubenswrapper[4861]: E0219 14:38:58.384812 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71816c52-46d5-433d-bbaf-9bae5d41fed5" containerName="mariadb-client" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.384833 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="71816c52-46d5-433d-bbaf-9bae5d41fed5" containerName="mariadb-client" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.385114 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="71816c52-46d5-433d-bbaf-9bae5d41fed5" containerName="mariadb-client" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.386316 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.395051 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.402386 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.407085 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.407171 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.407221 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-ghmfp" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.407518 4861 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.422918 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.424474 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.426531 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4c0421f7-ce54-4c46-9c05-c82787e349cf-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4c0421f7-ce54-4c46-9c05-c82787e349cf\") " pod="openstack/ovsdbserver-nb-0" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.426676 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4c0421f7-ce54-4c46-9c05-c82787e349cf-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4c0421f7-ce54-4c46-9c05-c82787e349cf\") " pod="openstack/ovsdbserver-nb-0" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.426812 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2x6n\" (UniqueName: \"kubernetes.io/projected/4c0421f7-ce54-4c46-9c05-c82787e349cf-kube-api-access-q2x6n\") pod \"ovsdbserver-nb-0\" (UID: \"4c0421f7-ce54-4c46-9c05-c82787e349cf\") " pod="openstack/ovsdbserver-nb-0" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.426881 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c0421f7-ce54-4c46-9c05-c82787e349cf-config\") pod \"ovsdbserver-nb-0\" (UID: \"4c0421f7-ce54-4c46-9c05-c82787e349cf\") " pod="openstack/ovsdbserver-nb-0" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.427003 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c0421f7-ce54-4c46-9c05-c82787e349cf-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4c0421f7-ce54-4c46-9c05-c82787e349cf\") " pod="openstack/ovsdbserver-nb-0" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.427061 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c0421f7-ce54-4c46-9c05-c82787e349cf-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4c0421f7-ce54-4c46-9c05-c82787e349cf\") " pod="openstack/ovsdbserver-nb-0" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.427209 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d9fbc5b1-ed8e-4ad6-9360-1e37acbb62e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d9fbc5b1-ed8e-4ad6-9360-1e37acbb62e7\") pod \"ovsdbserver-nb-0\" (UID: \"4c0421f7-ce54-4c46-9c05-c82787e349cf\") " pod="openstack/ovsdbserver-nb-0" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.427268 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c0421f7-ce54-4c46-9c05-c82787e349cf-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4c0421f7-ce54-4c46-9c05-c82787e349cf\") " pod="openstack/ovsdbserver-nb-0" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.449523 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.450755 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-2" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.457371 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.468386 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.528377 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j62s8\" (UniqueName: \"kubernetes.io/projected/c6508e9c-43bc-47c5-b3ba-44ee12181b5e-kube-api-access-j62s8\") pod \"ovsdbserver-nb-2\" (UID: \"c6508e9c-43bc-47c5-b3ba-44ee12181b5e\") " pod="openstack/ovsdbserver-nb-2" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.528450 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6508e9c-43bc-47c5-b3ba-44ee12181b5e-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"c6508e9c-43bc-47c5-b3ba-44ee12181b5e\") " pod="openstack/ovsdbserver-nb-2" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.528504 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d9fbc5b1-ed8e-4ad6-9360-1e37acbb62e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d9fbc5b1-ed8e-4ad6-9360-1e37acbb62e7\") pod \"ovsdbserver-nb-0\" (UID: \"4c0421f7-ce54-4c46-9c05-c82787e349cf\") " pod="openstack/ovsdbserver-nb-0" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.528534 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6508e9c-43bc-47c5-b3ba-44ee12181b5e-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"c6508e9c-43bc-47c5-b3ba-44ee12181b5e\") " pod="openstack/ovsdbserver-nb-2" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.528559 4861 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c6508e9c-43bc-47c5-b3ba-44ee12181b5e-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"c6508e9c-43bc-47c5-b3ba-44ee12181b5e\") " pod="openstack/ovsdbserver-nb-2" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.528580 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c0421f7-ce54-4c46-9c05-c82787e349cf-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4c0421f7-ce54-4c46-9c05-c82787e349cf\") " pod="openstack/ovsdbserver-nb-0" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.528623 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4c0421f7-ce54-4c46-9c05-c82787e349cf-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4c0421f7-ce54-4c46-9c05-c82787e349cf\") " pod="openstack/ovsdbserver-nb-0" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.528656 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4c0421f7-ce54-4c46-9c05-c82787e349cf-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4c0421f7-ce54-4c46-9c05-c82787e349cf\") " pod="openstack/ovsdbserver-nb-0" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.528687 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2x6n\" (UniqueName: \"kubernetes.io/projected/4c0421f7-ce54-4c46-9c05-c82787e349cf-kube-api-access-q2x6n\") pod \"ovsdbserver-nb-0\" (UID: \"4c0421f7-ce54-4c46-9c05-c82787e349cf\") " pod="openstack/ovsdbserver-nb-0" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.528723 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c6508e9c-43bc-47c5-b3ba-44ee12181b5e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c6508e9c-43bc-47c5-b3ba-44ee12181b5e\") " pod="openstack/ovsdbserver-nb-2" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.528755 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6508e9c-43bc-47c5-b3ba-44ee12181b5e-config\") pod \"ovsdbserver-nb-2\" (UID: \"c6508e9c-43bc-47c5-b3ba-44ee12181b5e\") " pod="openstack/ovsdbserver-nb-2" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.528777 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c0421f7-ce54-4c46-9c05-c82787e349cf-config\") pod \"ovsdbserver-nb-0\" (UID: \"4c0421f7-ce54-4c46-9c05-c82787e349cf\") " pod="openstack/ovsdbserver-nb-0" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.528812 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-630638c2-704f-4e0b-9a59-886e33319488\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-630638c2-704f-4e0b-9a59-886e33319488\") pod \"ovsdbserver-nb-2\" (UID: \"c6508e9c-43bc-47c5-b3ba-44ee12181b5e\") " pod="openstack/ovsdbserver-nb-2" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.528849 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c0421f7-ce54-4c46-9c05-c82787e349cf-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4c0421f7-ce54-4c46-9c05-c82787e349cf\") " pod="openstack/ovsdbserver-nb-0" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.528871 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c0421f7-ce54-4c46-9c05-c82787e349cf-metrics-certs-tls-certs\") pod 
\"ovsdbserver-nb-0\" (UID: \"4c0421f7-ce54-4c46-9c05-c82787e349cf\") " pod="openstack/ovsdbserver-nb-0" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.528891 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6508e9c-43bc-47c5-b3ba-44ee12181b5e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c6508e9c-43bc-47c5-b3ba-44ee12181b5e\") " pod="openstack/ovsdbserver-nb-2" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.530816 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4c0421f7-ce54-4c46-9c05-c82787e349cf-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4c0421f7-ce54-4c46-9c05-c82787e349cf\") " pod="openstack/ovsdbserver-nb-0" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.531176 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4c0421f7-ce54-4c46-9c05-c82787e349cf-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4c0421f7-ce54-4c46-9c05-c82787e349cf\") " pod="openstack/ovsdbserver-nb-0" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.534987 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c0421f7-ce54-4c46-9c05-c82787e349cf-config\") pod \"ovsdbserver-nb-0\" (UID: \"4c0421f7-ce54-4c46-9c05-c82787e349cf\") " pod="openstack/ovsdbserver-nb-0" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.536517 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c0421f7-ce54-4c46-9c05-c82787e349cf-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4c0421f7-ce54-4c46-9c05-c82787e349cf\") " pod="openstack/ovsdbserver-nb-0" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.537079 4861 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c0421f7-ce54-4c46-9c05-c82787e349cf-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4c0421f7-ce54-4c46-9c05-c82787e349cf\") " pod="openstack/ovsdbserver-nb-0" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.539012 4861 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.539163 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d9fbc5b1-ed8e-4ad6-9360-1e37acbb62e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d9fbc5b1-ed8e-4ad6-9360-1e37acbb62e7\") pod \"ovsdbserver-nb-0\" (UID: \"4c0421f7-ce54-4c46-9c05-c82787e349cf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ac31556765d1204304774e96d427e16cb598a3539b335de7f6badf31c96b3ffd/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.540222 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c0421f7-ce54-4c46-9c05-c82787e349cf-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4c0421f7-ce54-4c46-9c05-c82787e349cf\") " pod="openstack/ovsdbserver-nb-0" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.570001 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2x6n\" (UniqueName: \"kubernetes.io/projected/4c0421f7-ce54-4c46-9c05-c82787e349cf-kube-api-access-q2x6n\") pod \"ovsdbserver-nb-0\" (UID: \"4c0421f7-ce54-4c46-9c05-c82787e349cf\") " pod="openstack/ovsdbserver-nb-0" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.607060 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d9fbc5b1-ed8e-4ad6-9360-1e37acbb62e7\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d9fbc5b1-ed8e-4ad6-9360-1e37acbb62e7\") pod \"ovsdbserver-nb-0\" (UID: \"4c0421f7-ce54-4c46-9c05-c82787e349cf\") " pod="openstack/ovsdbserver-nb-0" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.630485 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6508e9c-43bc-47c5-b3ba-44ee12181b5e-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"c6508e9c-43bc-47c5-b3ba-44ee12181b5e\") " pod="openstack/ovsdbserver-nb-2" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.630527 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c6508e9c-43bc-47c5-b3ba-44ee12181b5e-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"c6508e9c-43bc-47c5-b3ba-44ee12181b5e\") " pod="openstack/ovsdbserver-nb-2" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.630587 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a7303b9-35dd-4227-b008-5f0882fcb06d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"8a7303b9-35dd-4227-b008-5f0882fcb06d\") " pod="openstack/ovsdbserver-nb-1" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.630610 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a7303b9-35dd-4227-b008-5f0882fcb06d-config\") pod \"ovsdbserver-nb-1\" (UID: \"8a7303b9-35dd-4227-b008-5f0882fcb06d\") " pod="openstack/ovsdbserver-nb-1" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.630633 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6508e9c-43bc-47c5-b3ba-44ee12181b5e-metrics-certs-tls-certs\") pod 
\"ovsdbserver-nb-2\" (UID: \"c6508e9c-43bc-47c5-b3ba-44ee12181b5e\") " pod="openstack/ovsdbserver-nb-2" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.630655 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8a7303b9-35dd-4227-b008-5f0882fcb06d-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"8a7303b9-35dd-4227-b008-5f0882fcb06d\") " pod="openstack/ovsdbserver-nb-1" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.630672 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6508e9c-43bc-47c5-b3ba-44ee12181b5e-config\") pod \"ovsdbserver-nb-2\" (UID: \"c6508e9c-43bc-47c5-b3ba-44ee12181b5e\") " pod="openstack/ovsdbserver-nb-2" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.630700 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-630638c2-704f-4e0b-9a59-886e33319488\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-630638c2-704f-4e0b-9a59-886e33319488\") pod \"ovsdbserver-nb-2\" (UID: \"c6508e9c-43bc-47c5-b3ba-44ee12181b5e\") " pod="openstack/ovsdbserver-nb-2" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.630721 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a7303b9-35dd-4227-b008-5f0882fcb06d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"8a7303b9-35dd-4227-b008-5f0882fcb06d\") " pod="openstack/ovsdbserver-nb-1" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.630743 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6508e9c-43bc-47c5-b3ba-44ee12181b5e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c6508e9c-43bc-47c5-b3ba-44ee12181b5e\") " 
pod="openstack/ovsdbserver-nb-2" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.630761 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmw7g\" (UniqueName: \"kubernetes.io/projected/8a7303b9-35dd-4227-b008-5f0882fcb06d-kube-api-access-wmw7g\") pod \"ovsdbserver-nb-1\" (UID: \"8a7303b9-35dd-4227-b008-5f0882fcb06d\") " pod="openstack/ovsdbserver-nb-1" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.630786 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2db85302-e71b-40c6-81ff-adcc1b4095a6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2db85302-e71b-40c6-81ff-adcc1b4095a6\") pod \"ovsdbserver-nb-1\" (UID: \"8a7303b9-35dd-4227-b008-5f0882fcb06d\") " pod="openstack/ovsdbserver-nb-1" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.630805 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a7303b9-35dd-4227-b008-5f0882fcb06d-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"8a7303b9-35dd-4227-b008-5f0882fcb06d\") " pod="openstack/ovsdbserver-nb-1" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.630830 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j62s8\" (UniqueName: \"kubernetes.io/projected/c6508e9c-43bc-47c5-b3ba-44ee12181b5e-kube-api-access-j62s8\") pod \"ovsdbserver-nb-2\" (UID: \"c6508e9c-43bc-47c5-b3ba-44ee12181b5e\") " pod="openstack/ovsdbserver-nb-2" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.630848 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7303b9-35dd-4227-b008-5f0882fcb06d-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"8a7303b9-35dd-4227-b008-5f0882fcb06d\") " pod="openstack/ovsdbserver-nb-1" Feb 
19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.630866 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6508e9c-43bc-47c5-b3ba-44ee12181b5e-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"c6508e9c-43bc-47c5-b3ba-44ee12181b5e\") " pod="openstack/ovsdbserver-nb-2" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.631779 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6508e9c-43bc-47c5-b3ba-44ee12181b5e-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"c6508e9c-43bc-47c5-b3ba-44ee12181b5e\") " pod="openstack/ovsdbserver-nb-2" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.632632 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c6508e9c-43bc-47c5-b3ba-44ee12181b5e-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"c6508e9c-43bc-47c5-b3ba-44ee12181b5e\") " pod="openstack/ovsdbserver-nb-2" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.634643 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6508e9c-43bc-47c5-b3ba-44ee12181b5e-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"c6508e9c-43bc-47c5-b3ba-44ee12181b5e\") " pod="openstack/ovsdbserver-nb-2" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.635197 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6508e9c-43bc-47c5-b3ba-44ee12181b5e-config\") pod \"ovsdbserver-nb-2\" (UID: \"c6508e9c-43bc-47c5-b3ba-44ee12181b5e\") " pod="openstack/ovsdbserver-nb-2" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.636062 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c6508e9c-43bc-47c5-b3ba-44ee12181b5e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c6508e9c-43bc-47c5-b3ba-44ee12181b5e\") " pod="openstack/ovsdbserver-nb-2" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.636721 4861 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.636816 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-630638c2-704f-4e0b-9a59-886e33319488\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-630638c2-704f-4e0b-9a59-886e33319488\") pod \"ovsdbserver-nb-2\" (UID: \"c6508e9c-43bc-47c5-b3ba-44ee12181b5e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e78068a990af3fa5eef09e08911376f3143d4dc5af5b101483acc3f06e8561fd/globalmount\"" pod="openstack/ovsdbserver-nb-2" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.638007 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6508e9c-43bc-47c5-b3ba-44ee12181b5e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c6508e9c-43bc-47c5-b3ba-44ee12181b5e\") " pod="openstack/ovsdbserver-nb-2" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.652867 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j62s8\" (UniqueName: \"kubernetes.io/projected/c6508e9c-43bc-47c5-b3ba-44ee12181b5e-kube-api-access-j62s8\") pod \"ovsdbserver-nb-2\" (UID: \"c6508e9c-43bc-47c5-b3ba-44ee12181b5e\") " pod="openstack/ovsdbserver-nb-2" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.661986 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-630638c2-704f-4e0b-9a59-886e33319488\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-630638c2-704f-4e0b-9a59-886e33319488\") pod \"ovsdbserver-nb-2\" (UID: \"c6508e9c-43bc-47c5-b3ba-44ee12181b5e\") " pod="openstack/ovsdbserver-nb-2" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.719453 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.732625 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a7303b9-35dd-4227-b008-5f0882fcb06d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"8a7303b9-35dd-4227-b008-5f0882fcb06d\") " pod="openstack/ovsdbserver-nb-1" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.732670 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a7303b9-35dd-4227-b008-5f0882fcb06d-config\") pod \"ovsdbserver-nb-1\" (UID: \"8a7303b9-35dd-4227-b008-5f0882fcb06d\") " pod="openstack/ovsdbserver-nb-1" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.732705 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8a7303b9-35dd-4227-b008-5f0882fcb06d-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"8a7303b9-35dd-4227-b008-5f0882fcb06d\") " pod="openstack/ovsdbserver-nb-1" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.732741 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a7303b9-35dd-4227-b008-5f0882fcb06d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"8a7303b9-35dd-4227-b008-5f0882fcb06d\") " pod="openstack/ovsdbserver-nb-1" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.732765 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wmw7g\" (UniqueName: \"kubernetes.io/projected/8a7303b9-35dd-4227-b008-5f0882fcb06d-kube-api-access-wmw7g\") pod \"ovsdbserver-nb-1\" (UID: \"8a7303b9-35dd-4227-b008-5f0882fcb06d\") " pod="openstack/ovsdbserver-nb-1" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.732790 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2db85302-e71b-40c6-81ff-adcc1b4095a6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2db85302-e71b-40c6-81ff-adcc1b4095a6\") pod \"ovsdbserver-nb-1\" (UID: \"8a7303b9-35dd-4227-b008-5f0882fcb06d\") " pod="openstack/ovsdbserver-nb-1" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.732808 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a7303b9-35dd-4227-b008-5f0882fcb06d-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"8a7303b9-35dd-4227-b008-5f0882fcb06d\") " pod="openstack/ovsdbserver-nb-1" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.732834 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7303b9-35dd-4227-b008-5f0882fcb06d-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"8a7303b9-35dd-4227-b008-5f0882fcb06d\") " pod="openstack/ovsdbserver-nb-1" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.733914 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8a7303b9-35dd-4227-b008-5f0882fcb06d-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"8a7303b9-35dd-4227-b008-5f0882fcb06d\") " pod="openstack/ovsdbserver-nb-1" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.733923 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a7303b9-35dd-4227-b008-5f0882fcb06d-config\") pod \"ovsdbserver-nb-1\" (UID: 
\"8a7303b9-35dd-4227-b008-5f0882fcb06d\") " pod="openstack/ovsdbserver-nb-1" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.734235 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a7303b9-35dd-4227-b008-5f0882fcb06d-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"8a7303b9-35dd-4227-b008-5f0882fcb06d\") " pod="openstack/ovsdbserver-nb-1" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.735707 4861 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.735742 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2db85302-e71b-40c6-81ff-adcc1b4095a6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2db85302-e71b-40c6-81ff-adcc1b4095a6\") pod \"ovsdbserver-nb-1\" (UID: \"8a7303b9-35dd-4227-b008-5f0882fcb06d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/dd2e762c4f7ae797b44bbd5c4c4b5234a64f15bc61cf5714b113894c0a8ce0d4/globalmount\"" pod="openstack/ovsdbserver-nb-1" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.736328 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7303b9-35dd-4227-b008-5f0882fcb06d-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"8a7303b9-35dd-4227-b008-5f0882fcb06d\") " pod="openstack/ovsdbserver-nb-1" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.737013 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a7303b9-35dd-4227-b008-5f0882fcb06d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"8a7303b9-35dd-4227-b008-5f0882fcb06d\") " pod="openstack/ovsdbserver-nb-1" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.739569 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a7303b9-35dd-4227-b008-5f0882fcb06d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"8a7303b9-35dd-4227-b008-5f0882fcb06d\") " pod="openstack/ovsdbserver-nb-1" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.751305 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmw7g\" (UniqueName: \"kubernetes.io/projected/8a7303b9-35dd-4227-b008-5f0882fcb06d-kube-api-access-wmw7g\") pod \"ovsdbserver-nb-1\" (UID: \"8a7303b9-35dd-4227-b008-5f0882fcb06d\") " pod="openstack/ovsdbserver-nb-1" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.763921 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2db85302-e71b-40c6-81ff-adcc1b4095a6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2db85302-e71b-40c6-81ff-adcc1b4095a6\") pod \"ovsdbserver-nb-1\" (UID: \"8a7303b9-35dd-4227-b008-5f0882fcb06d\") " pod="openstack/ovsdbserver-nb-1" Feb 19 14:38:58 crc kubenswrapper[4861]: I0219 14:38:58.776283 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.049304 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.062862 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.152332 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Feb 19 14:38:59 crc kubenswrapper[4861]: W0219 14:38:59.167230 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6508e9c_43bc_47c5_b3ba_44ee12181b5e.slice/crio-a83d975456a42f8961592bca524a0726f056ac61b9bbf66bb8989caea17df6b9 WatchSource:0}: Error finding container a83d975456a42f8961592bca524a0726f056ac61b9bbf66bb8989caea17df6b9: Status 404 returned error can't find the container with id a83d975456a42f8961592bca524a0726f056ac61b9bbf66bb8989caea17df6b9 Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.531162 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Feb 19 14:38:59 crc kubenswrapper[4861]: W0219 14:38:59.535680 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a7303b9_35dd_4227_b008_5f0882fcb06d.slice/crio-bc7c805f97731a5374efa2bb1b8f35197d3b6ca0aa3690cff92c58d68cc3b562 WatchSource:0}: Error finding container bc7c805f97731a5374efa2bb1b8f35197d3b6ca0aa3690cff92c58d68cc3b562: Status 404 returned error can't find the container with id bc7c805f97731a5374efa2bb1b8f35197d3b6ca0aa3690cff92c58d68cc3b562 Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.598897 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.600242 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.603141 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-4db7p" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.603679 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.603696 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.603826 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.615677 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.617195 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.621816 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.664751 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.666047 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-2" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.680440 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.699993 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.748965 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/78d3d32d-8837-453a-8dcd-4cf0451fdb6f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"78d3d32d-8837-453a-8dcd-4cf0451fdb6f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.748996 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/78d3d32d-8837-453a-8dcd-4cf0451fdb6f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"78d3d32d-8837-453a-8dcd-4cf0451fdb6f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.749032 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-484d464d-b0ef-483e-ad26-7f641e27ab27\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-484d464d-b0ef-483e-ad26-7f641e27ab27\") pod \"ovsdbserver-sb-2\" (UID: \"a7e74a40-f854-4731-aacc-38bc3949ce38\") " pod="openstack/ovsdbserver-sb-2" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.749052 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2ea5f78-24ed-484c-94fb-de46d8dfdb09-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"c2ea5f78-24ed-484c-94fb-de46d8dfdb09\") " pod="openstack/ovsdbserver-sb-1" Feb 19 
14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.749066 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a7e74a40-f854-4731-aacc-38bc3949ce38-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"a7e74a40-f854-4731-aacc-38bc3949ce38\") " pod="openstack/ovsdbserver-sb-2" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.749082 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2ea5f78-24ed-484c-94fb-de46d8dfdb09-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"c2ea5f78-24ed-484c-94fb-de46d8dfdb09\") " pod="openstack/ovsdbserver-sb-1" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.749102 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a7e74a40-f854-4731-aacc-38bc3949ce38-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"a7e74a40-f854-4731-aacc-38bc3949ce38\") " pod="openstack/ovsdbserver-sb-2" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.749116 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2ea5f78-24ed-484c-94fb-de46d8dfdb09-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"c2ea5f78-24ed-484c-94fb-de46d8dfdb09\") " pod="openstack/ovsdbserver-sb-1" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.749264 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7a01c771-3acd-4edd-bd78-9366cd4cebb7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a01c771-3acd-4edd-bd78-9366cd4cebb7\") pod \"ovsdbserver-sb-0\" (UID: \"78d3d32d-8837-453a-8dcd-4cf0451fdb6f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 14:38:59 crc kubenswrapper[4861]: 
I0219 14:38:59.749326 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78d3d32d-8837-453a-8dcd-4cf0451fdb6f-config\") pod \"ovsdbserver-sb-0\" (UID: \"78d3d32d-8837-453a-8dcd-4cf0451fdb6f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.749369 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-994d2f31-9f92-4cb3-aff4-8d2b713576ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-994d2f31-9f92-4cb3-aff4-8d2b713576ee\") pod \"ovsdbserver-sb-1\" (UID: \"c2ea5f78-24ed-484c-94fb-de46d8dfdb09\") " pod="openstack/ovsdbserver-sb-1" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.749449 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7e74a40-f854-4731-aacc-38bc3949ce38-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"a7e74a40-f854-4731-aacc-38bc3949ce38\") " pod="openstack/ovsdbserver-sb-2" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.749504 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2ea5f78-24ed-484c-94fb-de46d8dfdb09-config\") pod \"ovsdbserver-sb-1\" (UID: \"c2ea5f78-24ed-484c-94fb-de46d8dfdb09\") " pod="openstack/ovsdbserver-sb-1" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.749549 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7e74a40-f854-4731-aacc-38bc3949ce38-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"a7e74a40-f854-4731-aacc-38bc3949ce38\") " pod="openstack/ovsdbserver-sb-2" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.749577 4861 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmmgq\" (UniqueName: \"kubernetes.io/projected/c2ea5f78-24ed-484c-94fb-de46d8dfdb09-kube-api-access-nmmgq\") pod \"ovsdbserver-sb-1\" (UID: \"c2ea5f78-24ed-484c-94fb-de46d8dfdb09\") " pod="openstack/ovsdbserver-sb-1" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.749605 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7e74a40-f854-4731-aacc-38bc3949ce38-config\") pod \"ovsdbserver-sb-2\" (UID: \"a7e74a40-f854-4731-aacc-38bc3949ce38\") " pod="openstack/ovsdbserver-sb-2" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.749635 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78d3d32d-8837-453a-8dcd-4cf0451fdb6f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"78d3d32d-8837-453a-8dcd-4cf0451fdb6f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.749662 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2ea5f78-24ed-484c-94fb-de46d8dfdb09-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"c2ea5f78-24ed-484c-94fb-de46d8dfdb09\") " pod="openstack/ovsdbserver-sb-1" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.749687 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgjgm\" (UniqueName: \"kubernetes.io/projected/a7e74a40-f854-4731-aacc-38bc3949ce38-kube-api-access-zgjgm\") pod \"ovsdbserver-sb-2\" (UID: \"a7e74a40-f854-4731-aacc-38bc3949ce38\") " pod="openstack/ovsdbserver-sb-2" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.749715 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/78d3d32d-8837-453a-8dcd-4cf0451fdb6f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"78d3d32d-8837-453a-8dcd-4cf0451fdb6f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.749760 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvc9f\" (UniqueName: \"kubernetes.io/projected/78d3d32d-8837-453a-8dcd-4cf0451fdb6f-kube-api-access-zvc9f\") pod \"ovsdbserver-sb-0\" (UID: \"78d3d32d-8837-453a-8dcd-4cf0451fdb6f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.749789 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7e74a40-f854-4731-aacc-38bc3949ce38-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"a7e74a40-f854-4731-aacc-38bc3949ce38\") " pod="openstack/ovsdbserver-sb-2" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.749824 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c2ea5f78-24ed-484c-94fb-de46d8dfdb09-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"c2ea5f78-24ed-484c-94fb-de46d8dfdb09\") " pod="openstack/ovsdbserver-sb-1" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.749886 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78d3d32d-8837-453a-8dcd-4cf0451fdb6f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"78d3d32d-8837-453a-8dcd-4cf0451fdb6f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.851413 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c2ea5f78-24ed-484c-94fb-de46d8dfdb09-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"c2ea5f78-24ed-484c-94fb-de46d8dfdb09\") " pod="openstack/ovsdbserver-sb-1" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.851817 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a7e74a40-f854-4731-aacc-38bc3949ce38-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"a7e74a40-f854-4731-aacc-38bc3949ce38\") " pod="openstack/ovsdbserver-sb-2" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.851867 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2ea5f78-24ed-484c-94fb-de46d8dfdb09-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"c2ea5f78-24ed-484c-94fb-de46d8dfdb09\") " pod="openstack/ovsdbserver-sb-1" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.851895 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a7e74a40-f854-4731-aacc-38bc3949ce38-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"a7e74a40-f854-4731-aacc-38bc3949ce38\") " pod="openstack/ovsdbserver-sb-2" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.851913 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2ea5f78-24ed-484c-94fb-de46d8dfdb09-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"c2ea5f78-24ed-484c-94fb-de46d8dfdb09\") " pod="openstack/ovsdbserver-sb-1" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.851981 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7a01c771-3acd-4edd-bd78-9366cd4cebb7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a01c771-3acd-4edd-bd78-9366cd4cebb7\") pod \"ovsdbserver-sb-0\" (UID: 
\"78d3d32d-8837-453a-8dcd-4cf0451fdb6f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.852003 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78d3d32d-8837-453a-8dcd-4cf0451fdb6f-config\") pod \"ovsdbserver-sb-0\" (UID: \"78d3d32d-8837-453a-8dcd-4cf0451fdb6f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.852045 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-994d2f31-9f92-4cb3-aff4-8d2b713576ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-994d2f31-9f92-4cb3-aff4-8d2b713576ee\") pod \"ovsdbserver-sb-1\" (UID: \"c2ea5f78-24ed-484c-94fb-de46d8dfdb09\") " pod="openstack/ovsdbserver-sb-1" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.852078 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7e74a40-f854-4731-aacc-38bc3949ce38-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"a7e74a40-f854-4731-aacc-38bc3949ce38\") " pod="openstack/ovsdbserver-sb-2" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.852095 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2ea5f78-24ed-484c-94fb-de46d8dfdb09-config\") pod \"ovsdbserver-sb-1\" (UID: \"c2ea5f78-24ed-484c-94fb-de46d8dfdb09\") " pod="openstack/ovsdbserver-sb-1" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.852110 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7e74a40-f854-4731-aacc-38bc3949ce38-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"a7e74a40-f854-4731-aacc-38bc3949ce38\") " pod="openstack/ovsdbserver-sb-2" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.852132 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmmgq\" (UniqueName: \"kubernetes.io/projected/c2ea5f78-24ed-484c-94fb-de46d8dfdb09-kube-api-access-nmmgq\") pod \"ovsdbserver-sb-1\" (UID: \"c2ea5f78-24ed-484c-94fb-de46d8dfdb09\") " pod="openstack/ovsdbserver-sb-1" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.852153 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7e74a40-f854-4731-aacc-38bc3949ce38-config\") pod \"ovsdbserver-sb-2\" (UID: \"a7e74a40-f854-4731-aacc-38bc3949ce38\") " pod="openstack/ovsdbserver-sb-2" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.852177 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78d3d32d-8837-453a-8dcd-4cf0451fdb6f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"78d3d32d-8837-453a-8dcd-4cf0451fdb6f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.852196 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2ea5f78-24ed-484c-94fb-de46d8dfdb09-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"c2ea5f78-24ed-484c-94fb-de46d8dfdb09\") " pod="openstack/ovsdbserver-sb-1" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.852213 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgjgm\" (UniqueName: \"kubernetes.io/projected/a7e74a40-f854-4731-aacc-38bc3949ce38-kube-api-access-zgjgm\") pod \"ovsdbserver-sb-2\" (UID: \"a7e74a40-f854-4731-aacc-38bc3949ce38\") " pod="openstack/ovsdbserver-sb-2" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.852229 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/78d3d32d-8837-453a-8dcd-4cf0451fdb6f-ovsdb-rundir\") pod 
\"ovsdbserver-sb-0\" (UID: \"78d3d32d-8837-453a-8dcd-4cf0451fdb6f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.852248 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvc9f\" (UniqueName: \"kubernetes.io/projected/78d3d32d-8837-453a-8dcd-4cf0451fdb6f-kube-api-access-zvc9f\") pod \"ovsdbserver-sb-0\" (UID: \"78d3d32d-8837-453a-8dcd-4cf0451fdb6f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.852266 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7e74a40-f854-4731-aacc-38bc3949ce38-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"a7e74a40-f854-4731-aacc-38bc3949ce38\") " pod="openstack/ovsdbserver-sb-2" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.852283 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c2ea5f78-24ed-484c-94fb-de46d8dfdb09-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"c2ea5f78-24ed-484c-94fb-de46d8dfdb09\") " pod="openstack/ovsdbserver-sb-1" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.852314 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78d3d32d-8837-453a-8dcd-4cf0451fdb6f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"78d3d32d-8837-453a-8dcd-4cf0451fdb6f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.852333 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/78d3d32d-8837-453a-8dcd-4cf0451fdb6f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"78d3d32d-8837-453a-8dcd-4cf0451fdb6f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 14:38:59 crc 
kubenswrapper[4861]: I0219 14:38:59.852351 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/78d3d32d-8837-453a-8dcd-4cf0451fdb6f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"78d3d32d-8837-453a-8dcd-4cf0451fdb6f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.852383 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-484d464d-b0ef-483e-ad26-7f641e27ab27\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-484d464d-b0ef-483e-ad26-7f641e27ab27\") pod \"ovsdbserver-sb-2\" (UID: \"a7e74a40-f854-4731-aacc-38bc3949ce38\") " pod="openstack/ovsdbserver-sb-2" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.852823 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a7e74a40-f854-4731-aacc-38bc3949ce38-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"a7e74a40-f854-4731-aacc-38bc3949ce38\") " pod="openstack/ovsdbserver-sb-2" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.853188 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"c6508e9c-43bc-47c5-b3ba-44ee12181b5e","Type":"ContainerStarted","Data":"e522ef822eabf8f5c5187ef5620b59839bdc93668726866faf74006b2f577cee"} Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.853234 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"c6508e9c-43bc-47c5-b3ba-44ee12181b5e","Type":"ContainerStarted","Data":"c9a139794819a88e677dda007a41659d64203c098608497aa88bee742da59065"} Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.853251 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" 
event={"ID":"c6508e9c-43bc-47c5-b3ba-44ee12181b5e","Type":"ContainerStarted","Data":"a83d975456a42f8961592bca524a0726f056ac61b9bbf66bb8989caea17df6b9"} Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.853541 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2ea5f78-24ed-484c-94fb-de46d8dfdb09-config\") pod \"ovsdbserver-sb-1\" (UID: \"c2ea5f78-24ed-484c-94fb-de46d8dfdb09\") " pod="openstack/ovsdbserver-sb-1" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.854663 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c2ea5f78-24ed-484c-94fb-de46d8dfdb09-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"c2ea5f78-24ed-484c-94fb-de46d8dfdb09\") " pod="openstack/ovsdbserver-sb-1" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.854907 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/78d3d32d-8837-453a-8dcd-4cf0451fdb6f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"78d3d32d-8837-453a-8dcd-4cf0451fdb6f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.855307 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"8a7303b9-35dd-4227-b008-5f0882fcb06d","Type":"ContainerStarted","Data":"a4db700c166ec05c5bb45bd2595419ee7bc1fc6578936771e981c65c83dc76a7"} Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.855354 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"8a7303b9-35dd-4227-b008-5f0882fcb06d","Type":"ContainerStarted","Data":"bc7c805f97731a5374efa2bb1b8f35197d3b6ca0aa3690cff92c58d68cc3b562"} Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.855823 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a7e74a40-f854-4731-aacc-38bc3949ce38-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"a7e74a40-f854-4731-aacc-38bc3949ce38\") " pod="openstack/ovsdbserver-sb-2" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.857331 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4c0421f7-ce54-4c46-9c05-c82787e349cf","Type":"ContainerStarted","Data":"dd96af3e8c9125cfdfcd9d12ae9ded89a5c43e6303d0dbdb382e04c85f3ef3bb"} Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.857409 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4c0421f7-ce54-4c46-9c05-c82787e349cf","Type":"ContainerStarted","Data":"ea6e7d93f8e507ee7ffc339ac48c6569db65eecfdee719d51387832be6b1e639"} Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.857433 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4c0421f7-ce54-4c46-9c05-c82787e349cf","Type":"ContainerStarted","Data":"8cc27e4512e627755b1fc6935d143dd3b69e3cb77df87f1d40999bad72de71b2"} Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.861929 4861 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.861986 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7a01c771-3acd-4edd-bd78-9366cd4cebb7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a01c771-3acd-4edd-bd78-9366cd4cebb7\") pod \"ovsdbserver-sb-0\" (UID: \"78d3d32d-8837-453a-8dcd-4cf0451fdb6f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0627499c2375e9842b018af05162d85bcfe97e6905b99d373a746fcef9966576/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.862018 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78d3d32d-8837-453a-8dcd-4cf0451fdb6f-config\") pod \"ovsdbserver-sb-0\" (UID: \"78d3d32d-8837-453a-8dcd-4cf0451fdb6f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.861932 4861 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.862192 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-994d2f31-9f92-4cb3-aff4-8d2b713576ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-994d2f31-9f92-4cb3-aff4-8d2b713576ee\") pod \"ovsdbserver-sb-1\" (UID: \"c2ea5f78-24ed-484c-94fb-de46d8dfdb09\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a83c39b011a703aee21bb5339ad9d54ee7be829a120f2728cd9fc7218359e529/globalmount\"" pod="openstack/ovsdbserver-sb-1" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.862134 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2ea5f78-24ed-484c-94fb-de46d8dfdb09-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"c2ea5f78-24ed-484c-94fb-de46d8dfdb09\") " pod="openstack/ovsdbserver-sb-1" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.862371 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2ea5f78-24ed-484c-94fb-de46d8dfdb09-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"c2ea5f78-24ed-484c-94fb-de46d8dfdb09\") " pod="openstack/ovsdbserver-sb-1" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.862593 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2ea5f78-24ed-484c-94fb-de46d8dfdb09-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"c2ea5f78-24ed-484c-94fb-de46d8dfdb09\") " pod="openstack/ovsdbserver-sb-1" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.862742 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a7e74a40-f854-4731-aacc-38bc3949ce38-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"a7e74a40-f854-4731-aacc-38bc3949ce38\") " 
pod="openstack/ovsdbserver-sb-2" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.862794 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7e74a40-f854-4731-aacc-38bc3949ce38-config\") pod \"ovsdbserver-sb-2\" (UID: \"a7e74a40-f854-4731-aacc-38bc3949ce38\") " pod="openstack/ovsdbserver-sb-2" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.862809 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7e74a40-f854-4731-aacc-38bc3949ce38-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"a7e74a40-f854-4731-aacc-38bc3949ce38\") " pod="openstack/ovsdbserver-sb-2" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.863295 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/78d3d32d-8837-453a-8dcd-4cf0451fdb6f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"78d3d32d-8837-453a-8dcd-4cf0451fdb6f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.863563 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7e74a40-f854-4731-aacc-38bc3949ce38-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"a7e74a40-f854-4731-aacc-38bc3949ce38\") " pod="openstack/ovsdbserver-sb-2" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.863622 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78d3d32d-8837-453a-8dcd-4cf0451fdb6f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"78d3d32d-8837-453a-8dcd-4cf0451fdb6f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.867750 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/78d3d32d-8837-453a-8dcd-4cf0451fdb6f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"78d3d32d-8837-453a-8dcd-4cf0451fdb6f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.868218 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/78d3d32d-8837-453a-8dcd-4cf0451fdb6f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"78d3d32d-8837-453a-8dcd-4cf0451fdb6f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.873467 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2ea5f78-24ed-484c-94fb-de46d8dfdb09-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"c2ea5f78-24ed-484c-94fb-de46d8dfdb09\") " pod="openstack/ovsdbserver-sb-1" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.874442 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=2.874428595 podStartE2EDuration="2.874428595s" podCreationTimestamp="2026-02-19 14:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:38:59.870234383 +0000 UTC m=+5354.531337611" watchObservedRunningTime="2026-02-19 14:38:59.874428595 +0000 UTC m=+5354.535531823" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.874795 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmmgq\" (UniqueName: \"kubernetes.io/projected/c2ea5f78-24ed-484c-94fb-de46d8dfdb09-kube-api-access-nmmgq\") pod \"ovsdbserver-sb-1\" (UID: \"c2ea5f78-24ed-484c-94fb-de46d8dfdb09\") " pod="openstack/ovsdbserver-sb-1" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.876390 4861 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice 
STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.876538 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-484d464d-b0ef-483e-ad26-7f641e27ab27\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-484d464d-b0ef-483e-ad26-7f641e27ab27\") pod \"ovsdbserver-sb-2\" (UID: \"a7e74a40-f854-4731-aacc-38bc3949ce38\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/89fbfcb2226eb55f0375f7bc570f7e8fd72afd538ed205dfd20097b80739f24f/globalmount\"" pod="openstack/ovsdbserver-sb-2" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.879358 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgjgm\" (UniqueName: \"kubernetes.io/projected/a7e74a40-f854-4731-aacc-38bc3949ce38-kube-api-access-zgjgm\") pod \"ovsdbserver-sb-2\" (UID: \"a7e74a40-f854-4731-aacc-38bc3949ce38\") " pod="openstack/ovsdbserver-sb-2" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.882285 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvc9f\" (UniqueName: \"kubernetes.io/projected/78d3d32d-8837-453a-8dcd-4cf0451fdb6f-kube-api-access-zvc9f\") pod \"ovsdbserver-sb-0\" (UID: \"78d3d32d-8837-453a-8dcd-4cf0451fdb6f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.889380 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=2.889367256 podStartE2EDuration="2.889367256s" podCreationTimestamp="2026-02-19 14:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:38:59.886467308 +0000 UTC m=+5354.547570536" watchObservedRunningTime="2026-02-19 14:38:59.889367256 +0000 UTC m=+5354.550470484" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 
14:38:59.905944 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7a01c771-3acd-4edd-bd78-9366cd4cebb7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a01c771-3acd-4edd-bd78-9366cd4cebb7\") pod \"ovsdbserver-sb-0\" (UID: \"78d3d32d-8837-453a-8dcd-4cf0451fdb6f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.908439 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-484d464d-b0ef-483e-ad26-7f641e27ab27\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-484d464d-b0ef-483e-ad26-7f641e27ab27\") pod \"ovsdbserver-sb-2\" (UID: \"a7e74a40-f854-4731-aacc-38bc3949ce38\") " pod="openstack/ovsdbserver-sb-2" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.908916 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-994d2f31-9f92-4cb3-aff4-8d2b713576ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-994d2f31-9f92-4cb3-aff4-8d2b713576ee\") pod \"ovsdbserver-sb-1\" (UID: \"c2ea5f78-24ed-484c-94fb-de46d8dfdb09\") " pod="openstack/ovsdbserver-sb-1" Feb 19 14:38:59 crc kubenswrapper[4861]: I0219 14:38:59.991898 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 14:39:00 crc kubenswrapper[4861]: I0219 14:39:00.013040 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Feb 19 14:39:00 crc kubenswrapper[4861]: I0219 14:39:00.020339 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-2" Feb 19 14:39:00 crc kubenswrapper[4861]: I0219 14:39:00.539313 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 14:39:00 crc kubenswrapper[4861]: I0219 14:39:00.626208 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Feb 19 14:39:00 crc kubenswrapper[4861]: W0219 14:39:00.634362 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2ea5f78_24ed_484c_94fb_de46d8dfdb09.slice/crio-a701e972725fbea833e724940ae9c43a08508c35e91fe757493b3e7c09cc6e0b WatchSource:0}: Error finding container a701e972725fbea833e724940ae9c43a08508c35e91fe757493b3e7c09cc6e0b: Status 404 returned error can't find the container with id a701e972725fbea833e724940ae9c43a08508c35e91fe757493b3e7c09cc6e0b Feb 19 14:39:00 crc kubenswrapper[4861]: I0219 14:39:00.866217 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"c2ea5f78-24ed-484c-94fb-de46d8dfdb09","Type":"ContainerStarted","Data":"a701e972725fbea833e724940ae9c43a08508c35e91fe757493b3e7c09cc6e0b"} Feb 19 14:39:00 crc kubenswrapper[4861]: I0219 14:39:00.867627 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"78d3d32d-8837-453a-8dcd-4cf0451fdb6f","Type":"ContainerStarted","Data":"dce826f19127214ba7779a374e90004bf947786be24c478810bcd4bd5277a1b1"} Feb 19 14:39:00 crc kubenswrapper[4861]: I0219 14:39:00.870881 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"8a7303b9-35dd-4227-b008-5f0882fcb06d","Type":"ContainerStarted","Data":"8bc1b4141c735c51d6ca21dbe0bf14b59d6f4b58a39f5425b0a8504d96be6eea"} Feb 19 14:39:00 crc kubenswrapper[4861]: I0219 14:39:00.953881 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" 
podStartSLOduration=3.953772738 podStartE2EDuration="3.953772738s" podCreationTimestamp="2026-02-19 14:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:39:00.950416998 +0000 UTC m=+5355.611520266" watchObservedRunningTime="2026-02-19 14:39:00.953772738 +0000 UTC m=+5355.614876006" Feb 19 14:39:01 crc kubenswrapper[4861]: I0219 14:39:01.708011 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Feb 19 14:39:01 crc kubenswrapper[4861]: I0219 14:39:01.720521 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 19 14:39:01 crc kubenswrapper[4861]: I0219 14:39:01.778780 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Feb 19 14:39:01 crc kubenswrapper[4861]: I0219 14:39:01.881461 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"a7e74a40-f854-4731-aacc-38bc3949ce38","Type":"ContainerStarted","Data":"f8f6f4fbeed38d5e6b6e474253f1dd76c2a833633a79163541c8e55216af4a1c"} Feb 19 14:39:01 crc kubenswrapper[4861]: I0219 14:39:01.883753 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"78d3d32d-8837-453a-8dcd-4cf0451fdb6f","Type":"ContainerStarted","Data":"7f068ac046f897fdc2aeec13dfae1b7f680a243d6549801e0abe0fd767cb6e1d"} Feb 19 14:39:01 crc kubenswrapper[4861]: I0219 14:39:01.883807 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"78d3d32d-8837-453a-8dcd-4cf0451fdb6f","Type":"ContainerStarted","Data":"9dbbb13772e50cea7dd15b79b5df104066aa5209ec88038f4c55ce765e4e18b2"} Feb 19 14:39:01 crc kubenswrapper[4861]: I0219 14:39:01.886926 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" 
event={"ID":"c2ea5f78-24ed-484c-94fb-de46d8dfdb09","Type":"ContainerStarted","Data":"40d1a6a7a27c6a6bb5400ab3982582ce7e8a11e80ce663f1c1213e1e5616d511"} Feb 19 14:39:01 crc kubenswrapper[4861]: I0219 14:39:01.886951 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"c2ea5f78-24ed-484c-94fb-de46d8dfdb09","Type":"ContainerStarted","Data":"8fa105b1e8dad715b5b698dd5ad8a60dcfd68298f565989b18ffc76a19d9ef2c"} Feb 19 14:39:01 crc kubenswrapper[4861]: I0219 14:39:01.920343 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.920304686 podStartE2EDuration="3.920304686s" podCreationTimestamp="2026-02-19 14:38:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:39:01.912122957 +0000 UTC m=+5356.573226215" watchObservedRunningTime="2026-02-19 14:39:01.920304686 +0000 UTC m=+5356.581407954" Feb 19 14:39:01 crc kubenswrapper[4861]: I0219 14:39:01.947379 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=3.947360251 podStartE2EDuration="3.947360251s" podCreationTimestamp="2026-02-19 14:38:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:39:01.935225156 +0000 UTC m=+5356.596328414" watchObservedRunningTime="2026-02-19 14:39:01.947360251 +0000 UTC m=+5356.608463479" Feb 19 14:39:02 crc kubenswrapper[4861]: I0219 14:39:02.050041 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Feb 19 14:39:02 crc kubenswrapper[4861]: I0219 14:39:02.099791 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Feb 19 14:39:02 crc kubenswrapper[4861]: I0219 14:39:02.902346 4861 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"a7e74a40-f854-4731-aacc-38bc3949ce38","Type":"ContainerStarted","Data":"f518f156bb224cf30b0f3ea759b7ac08add7bfa6e05fbf2f3cb0454067b643e0"} Feb 19 14:39:02 crc kubenswrapper[4861]: I0219 14:39:02.902415 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"a7e74a40-f854-4731-aacc-38bc3949ce38","Type":"ContainerStarted","Data":"7559d1f4ccfac1111fc758680be66c612059929ca176c22522c14e9ddeddbada"} Feb 19 14:39:02 crc kubenswrapper[4861]: I0219 14:39:02.903133 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Feb 19 14:39:02 crc kubenswrapper[4861]: I0219 14:39:02.940951 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=4.940917604 podStartE2EDuration="4.940917604s" podCreationTimestamp="2026-02-19 14:38:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:39:02.937678147 +0000 UTC m=+5357.598781435" watchObservedRunningTime="2026-02-19 14:39:02.940917604 +0000 UTC m=+5357.602020872" Feb 19 14:39:02 crc kubenswrapper[4861]: I0219 14:39:02.992826 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 19 14:39:03 crc kubenswrapper[4861]: I0219 14:39:03.014375 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Feb 19 14:39:03 crc kubenswrapper[4861]: I0219 14:39:03.020853 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Feb 19 14:39:03 crc kubenswrapper[4861]: I0219 14:39:03.330358 4861 scope.go:117] "RemoveContainer" containerID="c45290ada5c3afcf171e042cbb0899403cb495f6e88106a9d8a87de6882797a7" Feb 19 14:39:03 crc kubenswrapper[4861]: I0219 
14:39:03.719976 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 19 14:39:03 crc kubenswrapper[4861]: I0219 14:39:03.777258 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Feb 19 14:39:04 crc kubenswrapper[4861]: I0219 14:39:04.100251 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Feb 19 14:39:04 crc kubenswrapper[4861]: I0219 14:39:04.412556 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-697b8d7675-8sfft"] Feb 19 14:39:04 crc kubenswrapper[4861]: I0219 14:39:04.414303 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-697b8d7675-8sfft" Feb 19 14:39:04 crc kubenswrapper[4861]: I0219 14:39:04.416717 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 19 14:39:04 crc kubenswrapper[4861]: I0219 14:39:04.425575 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-697b8d7675-8sfft"] Feb 19 14:39:04 crc kubenswrapper[4861]: I0219 14:39:04.439237 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dzsm\" (UniqueName: \"kubernetes.io/projected/baaacefc-e096-47dd-a2ce-4d1fd7c7bb22-kube-api-access-8dzsm\") pod \"dnsmasq-dns-697b8d7675-8sfft\" (UID: \"baaacefc-e096-47dd-a2ce-4d1fd7c7bb22\") " pod="openstack/dnsmasq-dns-697b8d7675-8sfft" Feb 19 14:39:04 crc kubenswrapper[4861]: I0219 14:39:04.439292 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baaacefc-e096-47dd-a2ce-4d1fd7c7bb22-config\") pod \"dnsmasq-dns-697b8d7675-8sfft\" (UID: \"baaacefc-e096-47dd-a2ce-4d1fd7c7bb22\") " pod="openstack/dnsmasq-dns-697b8d7675-8sfft" Feb 19 14:39:04 crc kubenswrapper[4861]: I0219 
14:39:04.439313 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/baaacefc-e096-47dd-a2ce-4d1fd7c7bb22-ovsdbserver-nb\") pod \"dnsmasq-dns-697b8d7675-8sfft\" (UID: \"baaacefc-e096-47dd-a2ce-4d1fd7c7bb22\") " pod="openstack/dnsmasq-dns-697b8d7675-8sfft" Feb 19 14:39:04 crc kubenswrapper[4861]: I0219 14:39:04.439334 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/baaacefc-e096-47dd-a2ce-4d1fd7c7bb22-dns-svc\") pod \"dnsmasq-dns-697b8d7675-8sfft\" (UID: \"baaacefc-e096-47dd-a2ce-4d1fd7c7bb22\") " pod="openstack/dnsmasq-dns-697b8d7675-8sfft" Feb 19 14:39:04 crc kubenswrapper[4861]: I0219 14:39:04.540393 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baaacefc-e096-47dd-a2ce-4d1fd7c7bb22-config\") pod \"dnsmasq-dns-697b8d7675-8sfft\" (UID: \"baaacefc-e096-47dd-a2ce-4d1fd7c7bb22\") " pod="openstack/dnsmasq-dns-697b8d7675-8sfft" Feb 19 14:39:04 crc kubenswrapper[4861]: I0219 14:39:04.540466 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/baaacefc-e096-47dd-a2ce-4d1fd7c7bb22-ovsdbserver-nb\") pod \"dnsmasq-dns-697b8d7675-8sfft\" (UID: \"baaacefc-e096-47dd-a2ce-4d1fd7c7bb22\") " pod="openstack/dnsmasq-dns-697b8d7675-8sfft" Feb 19 14:39:04 crc kubenswrapper[4861]: I0219 14:39:04.540500 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/baaacefc-e096-47dd-a2ce-4d1fd7c7bb22-dns-svc\") pod \"dnsmasq-dns-697b8d7675-8sfft\" (UID: \"baaacefc-e096-47dd-a2ce-4d1fd7c7bb22\") " pod="openstack/dnsmasq-dns-697b8d7675-8sfft" Feb 19 14:39:04 crc kubenswrapper[4861]: I0219 14:39:04.540646 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8dzsm\" (UniqueName: \"kubernetes.io/projected/baaacefc-e096-47dd-a2ce-4d1fd7c7bb22-kube-api-access-8dzsm\") pod \"dnsmasq-dns-697b8d7675-8sfft\" (UID: \"baaacefc-e096-47dd-a2ce-4d1fd7c7bb22\") " pod="openstack/dnsmasq-dns-697b8d7675-8sfft" Feb 19 14:39:04 crc kubenswrapper[4861]: I0219 14:39:04.541381 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/baaacefc-e096-47dd-a2ce-4d1fd7c7bb22-ovsdbserver-nb\") pod \"dnsmasq-dns-697b8d7675-8sfft\" (UID: \"baaacefc-e096-47dd-a2ce-4d1fd7c7bb22\") " pod="openstack/dnsmasq-dns-697b8d7675-8sfft" Feb 19 14:39:04 crc kubenswrapper[4861]: I0219 14:39:04.541401 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baaacefc-e096-47dd-a2ce-4d1fd7c7bb22-config\") pod \"dnsmasq-dns-697b8d7675-8sfft\" (UID: \"baaacefc-e096-47dd-a2ce-4d1fd7c7bb22\") " pod="openstack/dnsmasq-dns-697b8d7675-8sfft" Feb 19 14:39:04 crc kubenswrapper[4861]: I0219 14:39:04.542168 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/baaacefc-e096-47dd-a2ce-4d1fd7c7bb22-dns-svc\") pod \"dnsmasq-dns-697b8d7675-8sfft\" (UID: \"baaacefc-e096-47dd-a2ce-4d1fd7c7bb22\") " pod="openstack/dnsmasq-dns-697b8d7675-8sfft" Feb 19 14:39:04 crc kubenswrapper[4861]: I0219 14:39:04.570288 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dzsm\" (UniqueName: \"kubernetes.io/projected/baaacefc-e096-47dd-a2ce-4d1fd7c7bb22-kube-api-access-8dzsm\") pod \"dnsmasq-dns-697b8d7675-8sfft\" (UID: \"baaacefc-e096-47dd-a2ce-4d1fd7c7bb22\") " pod="openstack/dnsmasq-dns-697b8d7675-8sfft" Feb 19 14:39:04 crc kubenswrapper[4861]: I0219 14:39:04.733538 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-697b8d7675-8sfft" Feb 19 14:39:04 crc kubenswrapper[4861]: I0219 14:39:04.770530 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 19 14:39:04 crc kubenswrapper[4861]: I0219 14:39:04.861065 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 19 14:39:04 crc kubenswrapper[4861]: I0219 14:39:04.869265 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Feb 19 14:39:04 crc kubenswrapper[4861]: I0219 14:39:04.946282 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Feb 19 14:39:04 crc kubenswrapper[4861]: I0219 14:39:04.992140 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 19 14:39:05 crc kubenswrapper[4861]: I0219 14:39:05.013863 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Feb 19 14:39:05 crc kubenswrapper[4861]: I0219 14:39:05.021145 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Feb 19 14:39:05 crc kubenswrapper[4861]: I0219 14:39:05.298764 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-697b8d7675-8sfft"] Feb 19 14:39:05 crc kubenswrapper[4861]: I0219 14:39:05.932594 4861 generic.go:334] "Generic (PLEG): container finished" podID="baaacefc-e096-47dd-a2ce-4d1fd7c7bb22" containerID="6b3606d3d0f96641e3ab156b3941dccd4b402ecd092c2221e50738949fd47b12" exitCode=0 Feb 19 14:39:05 crc kubenswrapper[4861]: I0219 14:39:05.933799 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-697b8d7675-8sfft" event={"ID":"baaacefc-e096-47dd-a2ce-4d1fd7c7bb22","Type":"ContainerDied","Data":"6b3606d3d0f96641e3ab156b3941dccd4b402ecd092c2221e50738949fd47b12"} Feb 19 14:39:05 crc 
kubenswrapper[4861]: I0219 14:39:05.933829 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-697b8d7675-8sfft" event={"ID":"baaacefc-e096-47dd-a2ce-4d1fd7c7bb22","Type":"ContainerStarted","Data":"c95d4c0db06bfc123a54bf11599521335a7d78ba88c956cb4a9b06ed06788555"} Feb 19 14:39:06 crc kubenswrapper[4861]: I0219 14:39:06.057436 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Feb 19 14:39:06 crc kubenswrapper[4861]: I0219 14:39:06.071263 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Feb 19 14:39:06 crc kubenswrapper[4861]: I0219 14:39:06.072060 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 19 14:39:06 crc kubenswrapper[4861]: I0219 14:39:06.112199 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Feb 19 14:39:06 crc kubenswrapper[4861]: I0219 14:39:06.122145 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 19 14:39:06 crc kubenswrapper[4861]: I0219 14:39:06.442513 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-697b8d7675-8sfft"] Feb 19 14:39:06 crc kubenswrapper[4861]: I0219 14:39:06.462337 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c6db6cd4c-6z77x"] Feb 19 14:39:06 crc kubenswrapper[4861]: I0219 14:39:06.463550 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c6db6cd4c-6z77x" Feb 19 14:39:06 crc kubenswrapper[4861]: I0219 14:39:06.467932 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 19 14:39:06 crc kubenswrapper[4861]: I0219 14:39:06.487389 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c6db6cd4c-6z77x"] Feb 19 14:39:06 crc kubenswrapper[4861]: I0219 14:39:06.574704 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a33c5e67-d791-4ea2-ae98-0e74eafa506c-ovsdbserver-sb\") pod \"dnsmasq-dns-6c6db6cd4c-6z77x\" (UID: \"a33c5e67-d791-4ea2-ae98-0e74eafa506c\") " pod="openstack/dnsmasq-dns-6c6db6cd4c-6z77x" Feb 19 14:39:06 crc kubenswrapper[4861]: I0219 14:39:06.574957 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a33c5e67-d791-4ea2-ae98-0e74eafa506c-dns-svc\") pod \"dnsmasq-dns-6c6db6cd4c-6z77x\" (UID: \"a33c5e67-d791-4ea2-ae98-0e74eafa506c\") " pod="openstack/dnsmasq-dns-6c6db6cd4c-6z77x" Feb 19 14:39:06 crc kubenswrapper[4861]: I0219 14:39:06.575057 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a33c5e67-d791-4ea2-ae98-0e74eafa506c-ovsdbserver-nb\") pod \"dnsmasq-dns-6c6db6cd4c-6z77x\" (UID: \"a33c5e67-d791-4ea2-ae98-0e74eafa506c\") " pod="openstack/dnsmasq-dns-6c6db6cd4c-6z77x" Feb 19 14:39:06 crc kubenswrapper[4861]: I0219 14:39:06.575161 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a33c5e67-d791-4ea2-ae98-0e74eafa506c-config\") pod \"dnsmasq-dns-6c6db6cd4c-6z77x\" (UID: \"a33c5e67-d791-4ea2-ae98-0e74eafa506c\") " pod="openstack/dnsmasq-dns-6c6db6cd4c-6z77x" 
Feb 19 14:39:06 crc kubenswrapper[4861]: I0219 14:39:06.575296 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rzpk\" (UniqueName: \"kubernetes.io/projected/a33c5e67-d791-4ea2-ae98-0e74eafa506c-kube-api-access-9rzpk\") pod \"dnsmasq-dns-6c6db6cd4c-6z77x\" (UID: \"a33c5e67-d791-4ea2-ae98-0e74eafa506c\") " pod="openstack/dnsmasq-dns-6c6db6cd4c-6z77x" Feb 19 14:39:06 crc kubenswrapper[4861]: I0219 14:39:06.676917 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a33c5e67-d791-4ea2-ae98-0e74eafa506c-dns-svc\") pod \"dnsmasq-dns-6c6db6cd4c-6z77x\" (UID: \"a33c5e67-d791-4ea2-ae98-0e74eafa506c\") " pod="openstack/dnsmasq-dns-6c6db6cd4c-6z77x" Feb 19 14:39:06 crc kubenswrapper[4861]: I0219 14:39:06.676969 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a33c5e67-d791-4ea2-ae98-0e74eafa506c-ovsdbserver-nb\") pod \"dnsmasq-dns-6c6db6cd4c-6z77x\" (UID: \"a33c5e67-d791-4ea2-ae98-0e74eafa506c\") " pod="openstack/dnsmasq-dns-6c6db6cd4c-6z77x" Feb 19 14:39:06 crc kubenswrapper[4861]: I0219 14:39:06.676992 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a33c5e67-d791-4ea2-ae98-0e74eafa506c-config\") pod \"dnsmasq-dns-6c6db6cd4c-6z77x\" (UID: \"a33c5e67-d791-4ea2-ae98-0e74eafa506c\") " pod="openstack/dnsmasq-dns-6c6db6cd4c-6z77x" Feb 19 14:39:06 crc kubenswrapper[4861]: I0219 14:39:06.677064 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rzpk\" (UniqueName: \"kubernetes.io/projected/a33c5e67-d791-4ea2-ae98-0e74eafa506c-kube-api-access-9rzpk\") pod \"dnsmasq-dns-6c6db6cd4c-6z77x\" (UID: \"a33c5e67-d791-4ea2-ae98-0e74eafa506c\") " pod="openstack/dnsmasq-dns-6c6db6cd4c-6z77x" Feb 19 14:39:06 crc 
kubenswrapper[4861]: I0219 14:39:06.677107 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a33c5e67-d791-4ea2-ae98-0e74eafa506c-ovsdbserver-sb\") pod \"dnsmasq-dns-6c6db6cd4c-6z77x\" (UID: \"a33c5e67-d791-4ea2-ae98-0e74eafa506c\") " pod="openstack/dnsmasq-dns-6c6db6cd4c-6z77x" Feb 19 14:39:06 crc kubenswrapper[4861]: I0219 14:39:06.677922 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a33c5e67-d791-4ea2-ae98-0e74eafa506c-ovsdbserver-sb\") pod \"dnsmasq-dns-6c6db6cd4c-6z77x\" (UID: \"a33c5e67-d791-4ea2-ae98-0e74eafa506c\") " pod="openstack/dnsmasq-dns-6c6db6cd4c-6z77x" Feb 19 14:39:06 crc kubenswrapper[4861]: I0219 14:39:06.677917 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a33c5e67-d791-4ea2-ae98-0e74eafa506c-dns-svc\") pod \"dnsmasq-dns-6c6db6cd4c-6z77x\" (UID: \"a33c5e67-d791-4ea2-ae98-0e74eafa506c\") " pod="openstack/dnsmasq-dns-6c6db6cd4c-6z77x" Feb 19 14:39:06 crc kubenswrapper[4861]: I0219 14:39:06.678011 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a33c5e67-d791-4ea2-ae98-0e74eafa506c-config\") pod \"dnsmasq-dns-6c6db6cd4c-6z77x\" (UID: \"a33c5e67-d791-4ea2-ae98-0e74eafa506c\") " pod="openstack/dnsmasq-dns-6c6db6cd4c-6z77x" Feb 19 14:39:06 crc kubenswrapper[4861]: I0219 14:39:06.678191 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a33c5e67-d791-4ea2-ae98-0e74eafa506c-ovsdbserver-nb\") pod \"dnsmasq-dns-6c6db6cd4c-6z77x\" (UID: \"a33c5e67-d791-4ea2-ae98-0e74eafa506c\") " pod="openstack/dnsmasq-dns-6c6db6cd4c-6z77x" Feb 19 14:39:06 crc kubenswrapper[4861]: I0219 14:39:06.704235 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9rzpk\" (UniqueName: \"kubernetes.io/projected/a33c5e67-d791-4ea2-ae98-0e74eafa506c-kube-api-access-9rzpk\") pod \"dnsmasq-dns-6c6db6cd4c-6z77x\" (UID: \"a33c5e67-d791-4ea2-ae98-0e74eafa506c\") " pod="openstack/dnsmasq-dns-6c6db6cd4c-6z77x" Feb 19 14:39:06 crc kubenswrapper[4861]: I0219 14:39:06.778002 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c6db6cd4c-6z77x" Feb 19 14:39:06 crc kubenswrapper[4861]: I0219 14:39:06.949032 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-697b8d7675-8sfft" event={"ID":"baaacefc-e096-47dd-a2ce-4d1fd7c7bb22","Type":"ContainerStarted","Data":"469a2e47e1a79aaff02130c71f11782bf261c2c0aee4a012a8a7cdec05257c04"} Feb 19 14:39:06 crc kubenswrapper[4861]: I0219 14:39:06.982763 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-697b8d7675-8sfft" podStartSLOduration=2.982733965 podStartE2EDuration="2.982733965s" podCreationTimestamp="2026-02-19 14:39:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:39:06.982077678 +0000 UTC m=+5361.643180966" watchObservedRunningTime="2026-02-19 14:39:06.982733965 +0000 UTC m=+5361.643837233" Feb 19 14:39:07 crc kubenswrapper[4861]: I0219 14:39:07.016736 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Feb 19 14:39:07 crc kubenswrapper[4861]: I0219 14:39:07.301869 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c6db6cd4c-6z77x"] Feb 19 14:39:07 crc kubenswrapper[4861]: I0219 14:39:07.958244 4861 generic.go:334] "Generic (PLEG): container finished" podID="a33c5e67-d791-4ea2-ae98-0e74eafa506c" containerID="5952272a08e7057591c61c984ef6b53c92780e0030899338b5949ee52f12e9a7" exitCode=0 Feb 19 14:39:07 crc kubenswrapper[4861]: I0219 14:39:07.958339 4861 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c6db6cd4c-6z77x" event={"ID":"a33c5e67-d791-4ea2-ae98-0e74eafa506c","Type":"ContainerDied","Data":"5952272a08e7057591c61c984ef6b53c92780e0030899338b5949ee52f12e9a7"} Feb 19 14:39:07 crc kubenswrapper[4861]: I0219 14:39:07.958661 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c6db6cd4c-6z77x" event={"ID":"a33c5e67-d791-4ea2-ae98-0e74eafa506c","Type":"ContainerStarted","Data":"e8d5c32646579f68d46685ac7d55b1e225390fa091b59683305e7de745276805"} Feb 19 14:39:07 crc kubenswrapper[4861]: I0219 14:39:07.958809 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-697b8d7675-8sfft" podUID="baaacefc-e096-47dd-a2ce-4d1fd7c7bb22" containerName="dnsmasq-dns" containerID="cri-o://469a2e47e1a79aaff02130c71f11782bf261c2c0aee4a012a8a7cdec05257c04" gracePeriod=10 Feb 19 14:39:07 crc kubenswrapper[4861]: I0219 14:39:07.958845 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-697b8d7675-8sfft" Feb 19 14:39:08 crc kubenswrapper[4861]: I0219 14:39:08.355480 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-697b8d7675-8sfft" Feb 19 14:39:08 crc kubenswrapper[4861]: I0219 14:39:08.507793 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dzsm\" (UniqueName: \"kubernetes.io/projected/baaacefc-e096-47dd-a2ce-4d1fd7c7bb22-kube-api-access-8dzsm\") pod \"baaacefc-e096-47dd-a2ce-4d1fd7c7bb22\" (UID: \"baaacefc-e096-47dd-a2ce-4d1fd7c7bb22\") " Feb 19 14:39:08 crc kubenswrapper[4861]: I0219 14:39:08.508145 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baaacefc-e096-47dd-a2ce-4d1fd7c7bb22-config\") pod \"baaacefc-e096-47dd-a2ce-4d1fd7c7bb22\" (UID: \"baaacefc-e096-47dd-a2ce-4d1fd7c7bb22\") " Feb 19 14:39:08 crc kubenswrapper[4861]: I0219 14:39:08.508265 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/baaacefc-e096-47dd-a2ce-4d1fd7c7bb22-ovsdbserver-nb\") pod \"baaacefc-e096-47dd-a2ce-4d1fd7c7bb22\" (UID: \"baaacefc-e096-47dd-a2ce-4d1fd7c7bb22\") " Feb 19 14:39:08 crc kubenswrapper[4861]: I0219 14:39:08.508345 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/baaacefc-e096-47dd-a2ce-4d1fd7c7bb22-dns-svc\") pod \"baaacefc-e096-47dd-a2ce-4d1fd7c7bb22\" (UID: \"baaacefc-e096-47dd-a2ce-4d1fd7c7bb22\") " Feb 19 14:39:08 crc kubenswrapper[4861]: I0219 14:39:08.513151 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baaacefc-e096-47dd-a2ce-4d1fd7c7bb22-kube-api-access-8dzsm" (OuterVolumeSpecName: "kube-api-access-8dzsm") pod "baaacefc-e096-47dd-a2ce-4d1fd7c7bb22" (UID: "baaacefc-e096-47dd-a2ce-4d1fd7c7bb22"). InnerVolumeSpecName "kube-api-access-8dzsm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:39:08 crc kubenswrapper[4861]: I0219 14:39:08.545047 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baaacefc-e096-47dd-a2ce-4d1fd7c7bb22-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "baaacefc-e096-47dd-a2ce-4d1fd7c7bb22" (UID: "baaacefc-e096-47dd-a2ce-4d1fd7c7bb22"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:39:08 crc kubenswrapper[4861]: I0219 14:39:08.549456 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baaacefc-e096-47dd-a2ce-4d1fd7c7bb22-config" (OuterVolumeSpecName: "config") pod "baaacefc-e096-47dd-a2ce-4d1fd7c7bb22" (UID: "baaacefc-e096-47dd-a2ce-4d1fd7c7bb22"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:39:08 crc kubenswrapper[4861]: I0219 14:39:08.561190 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baaacefc-e096-47dd-a2ce-4d1fd7c7bb22-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "baaacefc-e096-47dd-a2ce-4d1fd7c7bb22" (UID: "baaacefc-e096-47dd-a2ce-4d1fd7c7bb22"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:39:08 crc kubenswrapper[4861]: I0219 14:39:08.610336 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/baaacefc-e096-47dd-a2ce-4d1fd7c7bb22-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 14:39:08 crc kubenswrapper[4861]: I0219 14:39:08.610367 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/baaacefc-e096-47dd-a2ce-4d1fd7c7bb22-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 14:39:08 crc kubenswrapper[4861]: I0219 14:39:08.610377 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dzsm\" (UniqueName: \"kubernetes.io/projected/baaacefc-e096-47dd-a2ce-4d1fd7c7bb22-kube-api-access-8dzsm\") on node \"crc\" DevicePath \"\"" Feb 19 14:39:08 crc kubenswrapper[4861]: I0219 14:39:08.610387 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baaacefc-e096-47dd-a2ce-4d1fd7c7bb22-config\") on node \"crc\" DevicePath \"\"" Feb 19 14:39:08 crc kubenswrapper[4861]: I0219 14:39:08.973973 4861 generic.go:334] "Generic (PLEG): container finished" podID="baaacefc-e096-47dd-a2ce-4d1fd7c7bb22" containerID="469a2e47e1a79aaff02130c71f11782bf261c2c0aee4a012a8a7cdec05257c04" exitCode=0 Feb 19 14:39:08 crc kubenswrapper[4861]: I0219 14:39:08.974126 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-697b8d7675-8sfft" Feb 19 14:39:08 crc kubenswrapper[4861]: I0219 14:39:08.974152 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-697b8d7675-8sfft" event={"ID":"baaacefc-e096-47dd-a2ce-4d1fd7c7bb22","Type":"ContainerDied","Data":"469a2e47e1a79aaff02130c71f11782bf261c2c0aee4a012a8a7cdec05257c04"} Feb 19 14:39:08 crc kubenswrapper[4861]: I0219 14:39:08.974185 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-697b8d7675-8sfft" event={"ID":"baaacefc-e096-47dd-a2ce-4d1fd7c7bb22","Type":"ContainerDied","Data":"c95d4c0db06bfc123a54bf11599521335a7d78ba88c956cb4a9b06ed06788555"} Feb 19 14:39:08 crc kubenswrapper[4861]: I0219 14:39:08.974208 4861 scope.go:117] "RemoveContainer" containerID="469a2e47e1a79aaff02130c71f11782bf261c2c0aee4a012a8a7cdec05257c04" Feb 19 14:39:08 crc kubenswrapper[4861]: I0219 14:39:08.979220 4861 scope.go:117] "RemoveContainer" containerID="55684c363f793a2ee4b1d80659dbf54faf32701883e18291ec4e32c294ea8bad" Feb 19 14:39:08 crc kubenswrapper[4861]: E0219 14:39:08.979751 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:39:08 crc kubenswrapper[4861]: I0219 14:39:08.979924 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c6db6cd4c-6z77x" event={"ID":"a33c5e67-d791-4ea2-ae98-0e74eafa506c","Type":"ContainerStarted","Data":"da98c3a6360e56db7a5f04612be4278c9016356ab0bb08356f78d467d226f550"} Feb 19 14:39:08 crc kubenswrapper[4861]: I0219 14:39:08.980147 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-6c6db6cd4c-6z77x" Feb 19 14:39:09 crc kubenswrapper[4861]: I0219 14:39:09.020955 4861 scope.go:117] "RemoveContainer" containerID="6b3606d3d0f96641e3ab156b3941dccd4b402ecd092c2221e50738949fd47b12" Feb 19 14:39:09 crc kubenswrapper[4861]: I0219 14:39:09.043278 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c6db6cd4c-6z77x" podStartSLOduration=3.043245688 podStartE2EDuration="3.043245688s" podCreationTimestamp="2026-02-19 14:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:39:09.002853115 +0000 UTC m=+5363.663956353" watchObservedRunningTime="2026-02-19 14:39:09.043245688 +0000 UTC m=+5363.704348946" Feb 19 14:39:09 crc kubenswrapper[4861]: I0219 14:39:09.048464 4861 scope.go:117] "RemoveContainer" containerID="469a2e47e1a79aaff02130c71f11782bf261c2c0aee4a012a8a7cdec05257c04" Feb 19 14:39:09 crc kubenswrapper[4861]: E0219 14:39:09.049043 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"469a2e47e1a79aaff02130c71f11782bf261c2c0aee4a012a8a7cdec05257c04\": container with ID starting with 469a2e47e1a79aaff02130c71f11782bf261c2c0aee4a012a8a7cdec05257c04 not found: ID does not exist" containerID="469a2e47e1a79aaff02130c71f11782bf261c2c0aee4a012a8a7cdec05257c04" Feb 19 14:39:09 crc kubenswrapper[4861]: I0219 14:39:09.049082 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"469a2e47e1a79aaff02130c71f11782bf261c2c0aee4a012a8a7cdec05257c04"} err="failed to get container status \"469a2e47e1a79aaff02130c71f11782bf261c2c0aee4a012a8a7cdec05257c04\": rpc error: code = NotFound desc = could not find container \"469a2e47e1a79aaff02130c71f11782bf261c2c0aee4a012a8a7cdec05257c04\": container with ID starting with 
469a2e47e1a79aaff02130c71f11782bf261c2c0aee4a012a8a7cdec05257c04 not found: ID does not exist" Feb 19 14:39:09 crc kubenswrapper[4861]: I0219 14:39:09.049123 4861 scope.go:117] "RemoveContainer" containerID="6b3606d3d0f96641e3ab156b3941dccd4b402ecd092c2221e50738949fd47b12" Feb 19 14:39:09 crc kubenswrapper[4861]: E0219 14:39:09.050309 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b3606d3d0f96641e3ab156b3941dccd4b402ecd092c2221e50738949fd47b12\": container with ID starting with 6b3606d3d0f96641e3ab156b3941dccd4b402ecd092c2221e50738949fd47b12 not found: ID does not exist" containerID="6b3606d3d0f96641e3ab156b3941dccd4b402ecd092c2221e50738949fd47b12" Feb 19 14:39:09 crc kubenswrapper[4861]: I0219 14:39:09.050367 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b3606d3d0f96641e3ab156b3941dccd4b402ecd092c2221e50738949fd47b12"} err="failed to get container status \"6b3606d3d0f96641e3ab156b3941dccd4b402ecd092c2221e50738949fd47b12\": rpc error: code = NotFound desc = could not find container \"6b3606d3d0f96641e3ab156b3941dccd4b402ecd092c2221e50738949fd47b12\": container with ID starting with 6b3606d3d0f96641e3ab156b3941dccd4b402ecd092c2221e50738949fd47b12 not found: ID does not exist" Feb 19 14:39:09 crc kubenswrapper[4861]: I0219 14:39:09.061681 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-697b8d7675-8sfft"] Feb 19 14:39:09 crc kubenswrapper[4861]: I0219 14:39:09.070152 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-697b8d7675-8sfft"] Feb 19 14:39:09 crc kubenswrapper[4861]: I0219 14:39:09.291691 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Feb 19 14:39:09 crc kubenswrapper[4861]: E0219 14:39:09.292787 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baaacefc-e096-47dd-a2ce-4d1fd7c7bb22" containerName="init" 
Feb 19 14:39:09 crc kubenswrapper[4861]: I0219 14:39:09.293002 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="baaacefc-e096-47dd-a2ce-4d1fd7c7bb22" containerName="init" Feb 19 14:39:09 crc kubenswrapper[4861]: E0219 14:39:09.293223 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baaacefc-e096-47dd-a2ce-4d1fd7c7bb22" containerName="dnsmasq-dns" Feb 19 14:39:09 crc kubenswrapper[4861]: I0219 14:39:09.293356 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="baaacefc-e096-47dd-a2ce-4d1fd7c7bb22" containerName="dnsmasq-dns" Feb 19 14:39:09 crc kubenswrapper[4861]: I0219 14:39:09.293927 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="baaacefc-e096-47dd-a2ce-4d1fd7c7bb22" containerName="dnsmasq-dns" Feb 19 14:39:09 crc kubenswrapper[4861]: I0219 14:39:09.295039 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Feb 19 14:39:09 crc kubenswrapper[4861]: I0219 14:39:09.298542 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Feb 19 14:39:09 crc kubenswrapper[4861]: I0219 14:39:09.298564 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Feb 19 14:39:09 crc kubenswrapper[4861]: I0219 14:39:09.441160 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s66k\" (UniqueName: \"kubernetes.io/projected/3adbde3b-2980-403c-a7c5-87b1fd3f6d85-kube-api-access-5s66k\") pod \"ovn-copy-data\" (UID: \"3adbde3b-2980-403c-a7c5-87b1fd3f6d85\") " pod="openstack/ovn-copy-data" Feb 19 14:39:09 crc kubenswrapper[4861]: I0219 14:39:09.441350 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7b04d999-d2e3-4101-9a80-2b4d5f33e04d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7b04d999-d2e3-4101-9a80-2b4d5f33e04d\") pod \"ovn-copy-data\" (UID: 
\"3adbde3b-2980-403c-a7c5-87b1fd3f6d85\") " pod="openstack/ovn-copy-data" Feb 19 14:39:09 crc kubenswrapper[4861]: I0219 14:39:09.441443 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/3adbde3b-2980-403c-a7c5-87b1fd3f6d85-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"3adbde3b-2980-403c-a7c5-87b1fd3f6d85\") " pod="openstack/ovn-copy-data" Feb 19 14:39:09 crc kubenswrapper[4861]: I0219 14:39:09.542901 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s66k\" (UniqueName: \"kubernetes.io/projected/3adbde3b-2980-403c-a7c5-87b1fd3f6d85-kube-api-access-5s66k\") pod \"ovn-copy-data\" (UID: \"3adbde3b-2980-403c-a7c5-87b1fd3f6d85\") " pod="openstack/ovn-copy-data" Feb 19 14:39:09 crc kubenswrapper[4861]: I0219 14:39:09.543052 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7b04d999-d2e3-4101-9a80-2b4d5f33e04d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7b04d999-d2e3-4101-9a80-2b4d5f33e04d\") pod \"ovn-copy-data\" (UID: \"3adbde3b-2980-403c-a7c5-87b1fd3f6d85\") " pod="openstack/ovn-copy-data" Feb 19 14:39:09 crc kubenswrapper[4861]: I0219 14:39:09.543118 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/3adbde3b-2980-403c-a7c5-87b1fd3f6d85-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"3adbde3b-2980-403c-a7c5-87b1fd3f6d85\") " pod="openstack/ovn-copy-data" Feb 19 14:39:09 crc kubenswrapper[4861]: I0219 14:39:09.547525 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/3adbde3b-2980-403c-a7c5-87b1fd3f6d85-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"3adbde3b-2980-403c-a7c5-87b1fd3f6d85\") " pod="openstack/ovn-copy-data" Feb 19 14:39:09 crc kubenswrapper[4861]: I0219 14:39:09.548502 4861 
csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 14:39:09 crc kubenswrapper[4861]: I0219 14:39:09.548566 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7b04d999-d2e3-4101-9a80-2b4d5f33e04d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7b04d999-d2e3-4101-9a80-2b4d5f33e04d\") pod \"ovn-copy-data\" (UID: \"3adbde3b-2980-403c-a7c5-87b1fd3f6d85\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/45d0b5d16540d4fb807c6a819146f56bca73e8b07e908826518fa69e22672692/globalmount\"" pod="openstack/ovn-copy-data" Feb 19 14:39:09 crc kubenswrapper[4861]: I0219 14:39:09.580980 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s66k\" (UniqueName: \"kubernetes.io/projected/3adbde3b-2980-403c-a7c5-87b1fd3f6d85-kube-api-access-5s66k\") pod \"ovn-copy-data\" (UID: \"3adbde3b-2980-403c-a7c5-87b1fd3f6d85\") " pod="openstack/ovn-copy-data" Feb 19 14:39:09 crc kubenswrapper[4861]: I0219 14:39:09.594663 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7b04d999-d2e3-4101-9a80-2b4d5f33e04d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7b04d999-d2e3-4101-9a80-2b4d5f33e04d\") pod \"ovn-copy-data\" (UID: \"3adbde3b-2980-403c-a7c5-87b1fd3f6d85\") " pod="openstack/ovn-copy-data" Feb 19 14:39:09 crc kubenswrapper[4861]: I0219 14:39:09.628841 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Feb 19 14:39:09 crc kubenswrapper[4861]: I0219 14:39:09.992749 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baaacefc-e096-47dd-a2ce-4d1fd7c7bb22" path="/var/lib/kubelet/pods/baaacefc-e096-47dd-a2ce-4d1fd7c7bb22/volumes" Feb 19 14:39:10 crc kubenswrapper[4861]: I0219 14:39:10.215333 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Feb 19 14:39:10 crc kubenswrapper[4861]: I0219 14:39:10.230490 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 14:39:11 crc kubenswrapper[4861]: I0219 14:39:11.008508 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"3adbde3b-2980-403c-a7c5-87b1fd3f6d85","Type":"ContainerStarted","Data":"d664882b56b3f19acfacd90b6a7f183cf837c4380c268cc4f733cdaa553bdb45"} Feb 19 14:39:12 crc kubenswrapper[4861]: I0219 14:39:12.021377 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"3adbde3b-2980-403c-a7c5-87b1fd3f6d85","Type":"ContainerStarted","Data":"7ca65f2afa16e9d13bc5dc86dbd4cf77d45af9a55e0e3fe4c2f860e57b53e73c"} Feb 19 14:39:12 crc kubenswrapper[4861]: I0219 14:39:12.052252 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=3.463484077 podStartE2EDuration="4.052231184s" podCreationTimestamp="2026-02-19 14:39:08 +0000 UTC" firstStartedPulling="2026-02-19 14:39:10.230118894 +0000 UTC m=+5364.891222162" lastFinishedPulling="2026-02-19 14:39:10.818866001 +0000 UTC m=+5365.479969269" observedRunningTime="2026-02-19 14:39:12.045031321 +0000 UTC m=+5366.706134579" watchObservedRunningTime="2026-02-19 14:39:12.052231184 +0000 UTC m=+5366.713334422" Feb 19 14:39:16 crc kubenswrapper[4861]: I0219 14:39:16.779723 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-6c6db6cd4c-6z77x" Feb 19 14:39:16 crc kubenswrapper[4861]: I0219 14:39:16.869624 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54dc9c94cc-kpx92"] Feb 19 14:39:16 crc kubenswrapper[4861]: I0219 14:39:16.869989 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54dc9c94cc-kpx92" podUID="bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c" containerName="dnsmasq-dns" containerID="cri-o://6737d4f979f22de964de4ff8043abf584cc250047f2ffb9e8da565bfc46f9561" gracePeriod=10 Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.081394 4861 generic.go:334] "Generic (PLEG): container finished" podID="bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c" containerID="6737d4f979f22de964de4ff8043abf584cc250047f2ffb9e8da565bfc46f9561" exitCode=0 Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.081460 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dc9c94cc-kpx92" event={"ID":"bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c","Type":"ContainerDied","Data":"6737d4f979f22de964de4ff8043abf584cc250047f2ffb9e8da565bfc46f9561"} Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.411770 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.413494 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.415946 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.416128 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.418684 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54dc9c94cc-kpx92" Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.419408 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.419469 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-p54q7" Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.429202 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.525141 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c-config\") pod \"bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c\" (UID: \"bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c\") " Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.525569 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c-dns-svc\") pod \"bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c\" (UID: \"bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c\") " Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.525756 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fg9r4\" (UniqueName: \"kubernetes.io/projected/bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c-kube-api-access-fg9r4\") pod \"bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c\" (UID: \"bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c\") " Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.526520 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2aaf2d3-6d27-4808-88b7-fd79f1361924-config\") pod \"ovn-northd-0\" (UID: \"f2aaf2d3-6d27-4808-88b7-fd79f1361924\") " pod="openstack/ovn-northd-0" Feb 19 14:39:17 crc 
kubenswrapper[4861]: I0219 14:39:17.526705 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfsgt\" (UniqueName: \"kubernetes.io/projected/f2aaf2d3-6d27-4808-88b7-fd79f1361924-kube-api-access-hfsgt\") pod \"ovn-northd-0\" (UID: \"f2aaf2d3-6d27-4808-88b7-fd79f1361924\") " pod="openstack/ovn-northd-0" Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.526812 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2aaf2d3-6d27-4808-88b7-fd79f1361924-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f2aaf2d3-6d27-4808-88b7-fd79f1361924\") " pod="openstack/ovn-northd-0" Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.526920 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2aaf2d3-6d27-4808-88b7-fd79f1361924-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f2aaf2d3-6d27-4808-88b7-fd79f1361924\") " pod="openstack/ovn-northd-0" Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.527390 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f2aaf2d3-6d27-4808-88b7-fd79f1361924-scripts\") pod \"ovn-northd-0\" (UID: \"f2aaf2d3-6d27-4808-88b7-fd79f1361924\") " pod="openstack/ovn-northd-0" Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.527549 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f2aaf2d3-6d27-4808-88b7-fd79f1361924-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f2aaf2d3-6d27-4808-88b7-fd79f1361924\") " pod="openstack/ovn-northd-0" Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.527684 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2aaf2d3-6d27-4808-88b7-fd79f1361924-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f2aaf2d3-6d27-4808-88b7-fd79f1361924\") " pod="openstack/ovn-northd-0" Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.554701 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c-kube-api-access-fg9r4" (OuterVolumeSpecName: "kube-api-access-fg9r4") pod "bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c" (UID: "bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c"). InnerVolumeSpecName "kube-api-access-fg9r4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.566610 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c-config" (OuterVolumeSpecName: "config") pod "bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c" (UID: "bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.569275 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c" (UID: "bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.629908 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f2aaf2d3-6d27-4808-88b7-fd79f1361924-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f2aaf2d3-6d27-4808-88b7-fd79f1361924\") " pod="openstack/ovn-northd-0" Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.630631 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f2aaf2d3-6d27-4808-88b7-fd79f1361924-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f2aaf2d3-6d27-4808-88b7-fd79f1361924\") " pod="openstack/ovn-northd-0" Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.630606 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f2aaf2d3-6d27-4808-88b7-fd79f1361924-scripts\") pod \"ovn-northd-0\" (UID: \"f2aaf2d3-6d27-4808-88b7-fd79f1361924\") " pod="openstack/ovn-northd-0" Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.631482 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2aaf2d3-6d27-4808-88b7-fd79f1361924-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f2aaf2d3-6d27-4808-88b7-fd79f1361924\") " pod="openstack/ovn-northd-0" Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.631745 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2aaf2d3-6d27-4808-88b7-fd79f1361924-config\") pod \"ovn-northd-0\" (UID: \"f2aaf2d3-6d27-4808-88b7-fd79f1361924\") " pod="openstack/ovn-northd-0" Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.632148 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfsgt\" (UniqueName: 
\"kubernetes.io/projected/f2aaf2d3-6d27-4808-88b7-fd79f1361924-kube-api-access-hfsgt\") pod \"ovn-northd-0\" (UID: \"f2aaf2d3-6d27-4808-88b7-fd79f1361924\") " pod="openstack/ovn-northd-0" Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.632347 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2aaf2d3-6d27-4808-88b7-fd79f1361924-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f2aaf2d3-6d27-4808-88b7-fd79f1361924\") " pod="openstack/ovn-northd-0" Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.632597 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2aaf2d3-6d27-4808-88b7-fd79f1361924-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f2aaf2d3-6d27-4808-88b7-fd79f1361924\") " pod="openstack/ovn-northd-0" Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.632906 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c-config\") on node \"crc\" DevicePath \"\"" Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.633085 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.633248 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fg9r4\" (UniqueName: \"kubernetes.io/projected/bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c-kube-api-access-fg9r4\") on node \"crc\" DevicePath \"\"" Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.634707 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2aaf2d3-6d27-4808-88b7-fd79f1361924-config\") pod \"ovn-northd-0\" (UID: 
\"f2aaf2d3-6d27-4808-88b7-fd79f1361924\") " pod="openstack/ovn-northd-0" Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.636230 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f2aaf2d3-6d27-4808-88b7-fd79f1361924-scripts\") pod \"ovn-northd-0\" (UID: \"f2aaf2d3-6d27-4808-88b7-fd79f1361924\") " pod="openstack/ovn-northd-0" Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.636760 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2aaf2d3-6d27-4808-88b7-fd79f1361924-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f2aaf2d3-6d27-4808-88b7-fd79f1361924\") " pod="openstack/ovn-northd-0" Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.638958 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2aaf2d3-6d27-4808-88b7-fd79f1361924-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f2aaf2d3-6d27-4808-88b7-fd79f1361924\") " pod="openstack/ovn-northd-0" Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.640246 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2aaf2d3-6d27-4808-88b7-fd79f1361924-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f2aaf2d3-6d27-4808-88b7-fd79f1361924\") " pod="openstack/ovn-northd-0" Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.648603 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfsgt\" (UniqueName: \"kubernetes.io/projected/f2aaf2d3-6d27-4808-88b7-fd79f1361924-kube-api-access-hfsgt\") pod \"ovn-northd-0\" (UID: \"f2aaf2d3-6d27-4808-88b7-fd79f1361924\") " pod="openstack/ovn-northd-0" Feb 19 14:39:17 crc kubenswrapper[4861]: I0219 14:39:17.733539 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 14:39:18 crc kubenswrapper[4861]: I0219 14:39:18.092165 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dc9c94cc-kpx92" event={"ID":"bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c","Type":"ContainerDied","Data":"f662e7ecc12fb4bc9869e77212ab18d3232c62934da0b20d99c0b6ee28bc8fd4"} Feb 19 14:39:18 crc kubenswrapper[4861]: I0219 14:39:18.095095 4861 scope.go:117] "RemoveContainer" containerID="6737d4f979f22de964de4ff8043abf584cc250047f2ffb9e8da565bfc46f9561" Feb 19 14:39:18 crc kubenswrapper[4861]: I0219 14:39:18.093519 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54dc9c94cc-kpx92" Feb 19 14:39:18 crc kubenswrapper[4861]: I0219 14:39:18.122677 4861 scope.go:117] "RemoveContainer" containerID="1683b3b0813a65a16b6d3e3b0923bdd441cff0fde33ab158846432ca2d0f7f8d" Feb 19 14:39:18 crc kubenswrapper[4861]: I0219 14:39:18.126215 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54dc9c94cc-kpx92"] Feb 19 14:39:18 crc kubenswrapper[4861]: I0219 14:39:18.135800 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54dc9c94cc-kpx92"] Feb 19 14:39:18 crc kubenswrapper[4861]: I0219 14:39:18.308904 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 14:39:19 crc kubenswrapper[4861]: I0219 14:39:19.102103 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f2aaf2d3-6d27-4808-88b7-fd79f1361924","Type":"ContainerStarted","Data":"c380029019416b2908d5a9cfb1ba66615d5ce9fa38865c78881069921554d81c"} Feb 19 14:39:19 crc kubenswrapper[4861]: I0219 14:39:19.103615 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 19 14:39:19 crc kubenswrapper[4861]: I0219 14:39:19.103682 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"f2aaf2d3-6d27-4808-88b7-fd79f1361924","Type":"ContainerStarted","Data":"d4a50e9561a626211a7aa168c226d386cb7c3e354d6a8e0d61e53ad8824e322e"} Feb 19 14:39:19 crc kubenswrapper[4861]: I0219 14:39:19.103712 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f2aaf2d3-6d27-4808-88b7-fd79f1361924","Type":"ContainerStarted","Data":"efee5cd56f632466f9b7b715ce2a67ac8d03297c618579514b57772716a077c1"} Feb 19 14:39:19 crc kubenswrapper[4861]: I0219 14:39:19.995577 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c" path="/var/lib/kubelet/pods/bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c/volumes" Feb 19 14:39:21 crc kubenswrapper[4861]: I0219 14:39:21.976959 4861 scope.go:117] "RemoveContainer" containerID="55684c363f793a2ee4b1d80659dbf54faf32701883e18291ec4e32c294ea8bad" Feb 19 14:39:21 crc kubenswrapper[4861]: E0219 14:39:21.977558 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:39:22 crc kubenswrapper[4861]: I0219 14:39:22.501936 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=5.501917212 podStartE2EDuration="5.501917212s" podCreationTimestamp="2026-02-19 14:39:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:39:19.123285774 +0000 UTC m=+5373.784389022" watchObservedRunningTime="2026-02-19 14:39:22.501917212 +0000 UTC m=+5377.163020450" Feb 19 14:39:22 crc kubenswrapper[4861]: I0219 14:39:22.509560 4861 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-pcpkc"] Feb 19 14:39:22 crc kubenswrapper[4861]: E0219 14:39:22.509976 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c" containerName="dnsmasq-dns" Feb 19 14:39:22 crc kubenswrapper[4861]: I0219 14:39:22.510002 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c" containerName="dnsmasq-dns" Feb 19 14:39:22 crc kubenswrapper[4861]: E0219 14:39:22.510032 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c" containerName="init" Feb 19 14:39:22 crc kubenswrapper[4861]: I0219 14:39:22.510041 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c" containerName="init" Feb 19 14:39:22 crc kubenswrapper[4861]: I0219 14:39:22.510238 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfd4f0bf-b6e5-46d8-b707-46b2ccf0be9c" containerName="dnsmasq-dns" Feb 19 14:39:22 crc kubenswrapper[4861]: I0219 14:39:22.510865 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-pcpkc" Feb 19 14:39:22 crc kubenswrapper[4861]: I0219 14:39:22.529214 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f79bh\" (UniqueName: \"kubernetes.io/projected/d1b96a9f-15a7-43be-b323-7784fae3b57f-kube-api-access-f79bh\") pod \"keystone-db-create-pcpkc\" (UID: \"d1b96a9f-15a7-43be-b323-7784fae3b57f\") " pod="openstack/keystone-db-create-pcpkc" Feb 19 14:39:22 crc kubenswrapper[4861]: I0219 14:39:22.529519 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1b96a9f-15a7-43be-b323-7784fae3b57f-operator-scripts\") pod \"keystone-db-create-pcpkc\" (UID: \"d1b96a9f-15a7-43be-b323-7784fae3b57f\") " pod="openstack/keystone-db-create-pcpkc" Feb 19 14:39:22 crc kubenswrapper[4861]: I0219 14:39:22.531811 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e543-account-create-update-9xtc6"] Feb 19 14:39:22 crc kubenswrapper[4861]: I0219 14:39:22.533046 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e543-account-create-update-9xtc6" Feb 19 14:39:22 crc kubenswrapper[4861]: I0219 14:39:22.535083 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 19 14:39:22 crc kubenswrapper[4861]: I0219 14:39:22.539882 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-pcpkc"] Feb 19 14:39:22 crc kubenswrapper[4861]: I0219 14:39:22.550446 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e543-account-create-update-9xtc6"] Feb 19 14:39:22 crc kubenswrapper[4861]: I0219 14:39:22.630975 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1b96a9f-15a7-43be-b323-7784fae3b57f-operator-scripts\") pod \"keystone-db-create-pcpkc\" (UID: \"d1b96a9f-15a7-43be-b323-7784fae3b57f\") " pod="openstack/keystone-db-create-pcpkc" Feb 19 14:39:22 crc kubenswrapper[4861]: I0219 14:39:22.631645 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvj2g\" (UniqueName: \"kubernetes.io/projected/88155c43-9d6e-4602-a77e-43a05fd5f3a3-kube-api-access-gvj2g\") pod \"keystone-e543-account-create-update-9xtc6\" (UID: \"88155c43-9d6e-4602-a77e-43a05fd5f3a3\") " pod="openstack/keystone-e543-account-create-update-9xtc6" Feb 19 14:39:22 crc kubenswrapper[4861]: I0219 14:39:22.632092 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88155c43-9d6e-4602-a77e-43a05fd5f3a3-operator-scripts\") pod \"keystone-e543-account-create-update-9xtc6\" (UID: \"88155c43-9d6e-4602-a77e-43a05fd5f3a3\") " pod="openstack/keystone-e543-account-create-update-9xtc6" Feb 19 14:39:22 crc kubenswrapper[4861]: I0219 14:39:22.632109 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1b96a9f-15a7-43be-b323-7784fae3b57f-operator-scripts\") pod \"keystone-db-create-pcpkc\" (UID: \"d1b96a9f-15a7-43be-b323-7784fae3b57f\") " pod="openstack/keystone-db-create-pcpkc" Feb 19 14:39:22 crc kubenswrapper[4861]: I0219 14:39:22.632215 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f79bh\" (UniqueName: \"kubernetes.io/projected/d1b96a9f-15a7-43be-b323-7784fae3b57f-kube-api-access-f79bh\") pod \"keystone-db-create-pcpkc\" (UID: \"d1b96a9f-15a7-43be-b323-7784fae3b57f\") " pod="openstack/keystone-db-create-pcpkc" Feb 19 14:39:22 crc kubenswrapper[4861]: I0219 14:39:22.662050 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f79bh\" (UniqueName: \"kubernetes.io/projected/d1b96a9f-15a7-43be-b323-7784fae3b57f-kube-api-access-f79bh\") pod \"keystone-db-create-pcpkc\" (UID: \"d1b96a9f-15a7-43be-b323-7784fae3b57f\") " pod="openstack/keystone-db-create-pcpkc" Feb 19 14:39:22 crc kubenswrapper[4861]: I0219 14:39:22.734887 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvj2g\" (UniqueName: \"kubernetes.io/projected/88155c43-9d6e-4602-a77e-43a05fd5f3a3-kube-api-access-gvj2g\") pod \"keystone-e543-account-create-update-9xtc6\" (UID: \"88155c43-9d6e-4602-a77e-43a05fd5f3a3\") " pod="openstack/keystone-e543-account-create-update-9xtc6" Feb 19 14:39:22 crc kubenswrapper[4861]: I0219 14:39:22.734975 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88155c43-9d6e-4602-a77e-43a05fd5f3a3-operator-scripts\") pod \"keystone-e543-account-create-update-9xtc6\" (UID: \"88155c43-9d6e-4602-a77e-43a05fd5f3a3\") " pod="openstack/keystone-e543-account-create-update-9xtc6" Feb 19 14:39:22 crc kubenswrapper[4861]: I0219 14:39:22.735851 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88155c43-9d6e-4602-a77e-43a05fd5f3a3-operator-scripts\") pod \"keystone-e543-account-create-update-9xtc6\" (UID: \"88155c43-9d6e-4602-a77e-43a05fd5f3a3\") " pod="openstack/keystone-e543-account-create-update-9xtc6" Feb 19 14:39:22 crc kubenswrapper[4861]: I0219 14:39:22.771163 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvj2g\" (UniqueName: \"kubernetes.io/projected/88155c43-9d6e-4602-a77e-43a05fd5f3a3-kube-api-access-gvj2g\") pod \"keystone-e543-account-create-update-9xtc6\" (UID: \"88155c43-9d6e-4602-a77e-43a05fd5f3a3\") " pod="openstack/keystone-e543-account-create-update-9xtc6" Feb 19 14:39:22 crc kubenswrapper[4861]: I0219 14:39:22.828819 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-pcpkc" Feb 19 14:39:22 crc kubenswrapper[4861]: I0219 14:39:22.846283 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e543-account-create-update-9xtc6" Feb 19 14:39:23 crc kubenswrapper[4861]: I0219 14:39:23.322156 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e543-account-create-update-9xtc6"] Feb 19 14:39:23 crc kubenswrapper[4861]: W0219 14:39:23.325497 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88155c43_9d6e_4602_a77e_43a05fd5f3a3.slice/crio-7d3f4999733b6af0473a2acdc3e68e734135ed149fd61456a43311214863801b WatchSource:0}: Error finding container 7d3f4999733b6af0473a2acdc3e68e734135ed149fd61456a43311214863801b: Status 404 returned error can't find the container with id 7d3f4999733b6af0473a2acdc3e68e734135ed149fd61456a43311214863801b Feb 19 14:39:23 crc kubenswrapper[4861]: I0219 14:39:23.389802 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-pcpkc"] Feb 19 14:39:23 crc kubenswrapper[4861]: 
W0219 14:39:23.411055 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1b96a9f_15a7_43be_b323_7784fae3b57f.slice/crio-d1588e5e90a77e0822c18ce69a46758afb5fbb742257f7033a951086f3d6c8e0 WatchSource:0}: Error finding container d1588e5e90a77e0822c18ce69a46758afb5fbb742257f7033a951086f3d6c8e0: Status 404 returned error can't find the container with id d1588e5e90a77e0822c18ce69a46758afb5fbb742257f7033a951086f3d6c8e0 Feb 19 14:39:24 crc kubenswrapper[4861]: I0219 14:39:24.170779 4861 generic.go:334] "Generic (PLEG): container finished" podID="88155c43-9d6e-4602-a77e-43a05fd5f3a3" containerID="a4828d12f64ebd2d63693b3998d910bbf679448da5a1d146e13f10bee22e5aa2" exitCode=0 Feb 19 14:39:24 crc kubenswrapper[4861]: I0219 14:39:24.170886 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e543-account-create-update-9xtc6" event={"ID":"88155c43-9d6e-4602-a77e-43a05fd5f3a3","Type":"ContainerDied","Data":"a4828d12f64ebd2d63693b3998d910bbf679448da5a1d146e13f10bee22e5aa2"} Feb 19 14:39:24 crc kubenswrapper[4861]: I0219 14:39:24.171131 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e543-account-create-update-9xtc6" event={"ID":"88155c43-9d6e-4602-a77e-43a05fd5f3a3","Type":"ContainerStarted","Data":"7d3f4999733b6af0473a2acdc3e68e734135ed149fd61456a43311214863801b"} Feb 19 14:39:24 crc kubenswrapper[4861]: I0219 14:39:24.178499 4861 generic.go:334] "Generic (PLEG): container finished" podID="d1b96a9f-15a7-43be-b323-7784fae3b57f" containerID="9814a8456c638cbc3e80b966402a00f1f80442602b0da4e19b45f2cd1ef4f719" exitCode=0 Feb 19 14:39:24 crc kubenswrapper[4861]: I0219 14:39:24.178562 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pcpkc" event={"ID":"d1b96a9f-15a7-43be-b323-7784fae3b57f","Type":"ContainerDied","Data":"9814a8456c638cbc3e80b966402a00f1f80442602b0da4e19b45f2cd1ef4f719"} Feb 19 14:39:24 crc 
kubenswrapper[4861]: I0219 14:39:24.178593 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pcpkc" event={"ID":"d1b96a9f-15a7-43be-b323-7784fae3b57f","Type":"ContainerStarted","Data":"d1588e5e90a77e0822c18ce69a46758afb5fbb742257f7033a951086f3d6c8e0"} Feb 19 14:39:25 crc kubenswrapper[4861]: I0219 14:39:25.633525 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e543-account-create-update-9xtc6" Feb 19 14:39:25 crc kubenswrapper[4861]: I0219 14:39:25.710528 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-pcpkc" Feb 19 14:39:25 crc kubenswrapper[4861]: I0219 14:39:25.793301 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvj2g\" (UniqueName: \"kubernetes.io/projected/88155c43-9d6e-4602-a77e-43a05fd5f3a3-kube-api-access-gvj2g\") pod \"88155c43-9d6e-4602-a77e-43a05fd5f3a3\" (UID: \"88155c43-9d6e-4602-a77e-43a05fd5f3a3\") " Feb 19 14:39:25 crc kubenswrapper[4861]: I0219 14:39:25.793600 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88155c43-9d6e-4602-a77e-43a05fd5f3a3-operator-scripts\") pod \"88155c43-9d6e-4602-a77e-43a05fd5f3a3\" (UID: \"88155c43-9d6e-4602-a77e-43a05fd5f3a3\") " Feb 19 14:39:25 crc kubenswrapper[4861]: I0219 14:39:25.794984 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88155c43-9d6e-4602-a77e-43a05fd5f3a3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "88155c43-9d6e-4602-a77e-43a05fd5f3a3" (UID: "88155c43-9d6e-4602-a77e-43a05fd5f3a3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:39:25 crc kubenswrapper[4861]: I0219 14:39:25.800727 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88155c43-9d6e-4602-a77e-43a05fd5f3a3-kube-api-access-gvj2g" (OuterVolumeSpecName: "kube-api-access-gvj2g") pod "88155c43-9d6e-4602-a77e-43a05fd5f3a3" (UID: "88155c43-9d6e-4602-a77e-43a05fd5f3a3"). InnerVolumeSpecName "kube-api-access-gvj2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:39:25 crc kubenswrapper[4861]: I0219 14:39:25.896226 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f79bh\" (UniqueName: \"kubernetes.io/projected/d1b96a9f-15a7-43be-b323-7784fae3b57f-kube-api-access-f79bh\") pod \"d1b96a9f-15a7-43be-b323-7784fae3b57f\" (UID: \"d1b96a9f-15a7-43be-b323-7784fae3b57f\") " Feb 19 14:39:25 crc kubenswrapper[4861]: I0219 14:39:25.897645 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1b96a9f-15a7-43be-b323-7784fae3b57f-operator-scripts\") pod \"d1b96a9f-15a7-43be-b323-7784fae3b57f\" (UID: \"d1b96a9f-15a7-43be-b323-7784fae3b57f\") " Feb 19 14:39:25 crc kubenswrapper[4861]: I0219 14:39:25.898243 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1b96a9f-15a7-43be-b323-7784fae3b57f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d1b96a9f-15a7-43be-b323-7784fae3b57f" (UID: "d1b96a9f-15a7-43be-b323-7784fae3b57f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:39:25 crc kubenswrapper[4861]: I0219 14:39:25.899572 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88155c43-9d6e-4602-a77e-43a05fd5f3a3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:39:25 crc kubenswrapper[4861]: I0219 14:39:25.899687 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvj2g\" (UniqueName: \"kubernetes.io/projected/88155c43-9d6e-4602-a77e-43a05fd5f3a3-kube-api-access-gvj2g\") on node \"crc\" DevicePath \"\"" Feb 19 14:39:25 crc kubenswrapper[4861]: I0219 14:39:25.899787 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1b96a9f-15a7-43be-b323-7784fae3b57f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:39:25 crc kubenswrapper[4861]: I0219 14:39:25.901175 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1b96a9f-15a7-43be-b323-7784fae3b57f-kube-api-access-f79bh" (OuterVolumeSpecName: "kube-api-access-f79bh") pod "d1b96a9f-15a7-43be-b323-7784fae3b57f" (UID: "d1b96a9f-15a7-43be-b323-7784fae3b57f"). InnerVolumeSpecName "kube-api-access-f79bh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:39:26 crc kubenswrapper[4861]: I0219 14:39:26.001553 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f79bh\" (UniqueName: \"kubernetes.io/projected/d1b96a9f-15a7-43be-b323-7784fae3b57f-kube-api-access-f79bh\") on node \"crc\" DevicePath \"\"" Feb 19 14:39:26 crc kubenswrapper[4861]: I0219 14:39:26.200191 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pcpkc" event={"ID":"d1b96a9f-15a7-43be-b323-7784fae3b57f","Type":"ContainerDied","Data":"d1588e5e90a77e0822c18ce69a46758afb5fbb742257f7033a951086f3d6c8e0"} Feb 19 14:39:26 crc kubenswrapper[4861]: I0219 14:39:26.200252 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1588e5e90a77e0822c18ce69a46758afb5fbb742257f7033a951086f3d6c8e0" Feb 19 14:39:26 crc kubenswrapper[4861]: I0219 14:39:26.200222 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-pcpkc" Feb 19 14:39:26 crc kubenswrapper[4861]: I0219 14:39:26.203062 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e543-account-create-update-9xtc6" event={"ID":"88155c43-9d6e-4602-a77e-43a05fd5f3a3","Type":"ContainerDied","Data":"7d3f4999733b6af0473a2acdc3e68e734135ed149fd61456a43311214863801b"} Feb 19 14:39:26 crc kubenswrapper[4861]: I0219 14:39:26.203129 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e543-account-create-update-9xtc6" Feb 19 14:39:26 crc kubenswrapper[4861]: I0219 14:39:26.203140 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d3f4999733b6af0473a2acdc3e68e734135ed149fd61456a43311214863801b" Feb 19 14:39:28 crc kubenswrapper[4861]: I0219 14:39:28.015167 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-8rddw"] Feb 19 14:39:28 crc kubenswrapper[4861]: E0219 14:39:28.015842 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1b96a9f-15a7-43be-b323-7784fae3b57f" containerName="mariadb-database-create" Feb 19 14:39:28 crc kubenswrapper[4861]: I0219 14:39:28.015856 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1b96a9f-15a7-43be-b323-7784fae3b57f" containerName="mariadb-database-create" Feb 19 14:39:28 crc kubenswrapper[4861]: E0219 14:39:28.015890 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88155c43-9d6e-4602-a77e-43a05fd5f3a3" containerName="mariadb-account-create-update" Feb 19 14:39:28 crc kubenswrapper[4861]: I0219 14:39:28.015901 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="88155c43-9d6e-4602-a77e-43a05fd5f3a3" containerName="mariadb-account-create-update" Feb 19 14:39:28 crc kubenswrapper[4861]: I0219 14:39:28.016121 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1b96a9f-15a7-43be-b323-7784fae3b57f" containerName="mariadb-database-create" Feb 19 14:39:28 crc kubenswrapper[4861]: I0219 14:39:28.016140 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="88155c43-9d6e-4602-a77e-43a05fd5f3a3" containerName="mariadb-account-create-update" Feb 19 14:39:28 crc kubenswrapper[4861]: I0219 14:39:28.016877 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-8rddw" Feb 19 14:39:28 crc kubenswrapper[4861]: I0219 14:39:28.018983 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 14:39:28 crc kubenswrapper[4861]: I0219 14:39:28.020710 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2krkf" Feb 19 14:39:28 crc kubenswrapper[4861]: I0219 14:39:28.020947 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 14:39:28 crc kubenswrapper[4861]: I0219 14:39:28.021279 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 14:39:28 crc kubenswrapper[4861]: I0219 14:39:28.032497 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8rddw"] Feb 19 14:39:28 crc kubenswrapper[4861]: I0219 14:39:28.136819 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e903ee6f-b6d4-4b6f-9487-56a87b1c444a-config-data\") pod \"keystone-db-sync-8rddw\" (UID: \"e903ee6f-b6d4-4b6f-9487-56a87b1c444a\") " pod="openstack/keystone-db-sync-8rddw" Feb 19 14:39:28 crc kubenswrapper[4861]: I0219 14:39:28.136940 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xprth\" (UniqueName: \"kubernetes.io/projected/e903ee6f-b6d4-4b6f-9487-56a87b1c444a-kube-api-access-xprth\") pod \"keystone-db-sync-8rddw\" (UID: \"e903ee6f-b6d4-4b6f-9487-56a87b1c444a\") " pod="openstack/keystone-db-sync-8rddw" Feb 19 14:39:28 crc kubenswrapper[4861]: I0219 14:39:28.137086 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e903ee6f-b6d4-4b6f-9487-56a87b1c444a-combined-ca-bundle\") pod \"keystone-db-sync-8rddw\" (UID: 
\"e903ee6f-b6d4-4b6f-9487-56a87b1c444a\") " pod="openstack/keystone-db-sync-8rddw" Feb 19 14:39:28 crc kubenswrapper[4861]: I0219 14:39:28.238726 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xprth\" (UniqueName: \"kubernetes.io/projected/e903ee6f-b6d4-4b6f-9487-56a87b1c444a-kube-api-access-xprth\") pod \"keystone-db-sync-8rddw\" (UID: \"e903ee6f-b6d4-4b6f-9487-56a87b1c444a\") " pod="openstack/keystone-db-sync-8rddw" Feb 19 14:39:28 crc kubenswrapper[4861]: I0219 14:39:28.238813 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e903ee6f-b6d4-4b6f-9487-56a87b1c444a-combined-ca-bundle\") pod \"keystone-db-sync-8rddw\" (UID: \"e903ee6f-b6d4-4b6f-9487-56a87b1c444a\") " pod="openstack/keystone-db-sync-8rddw" Feb 19 14:39:28 crc kubenswrapper[4861]: I0219 14:39:28.238896 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e903ee6f-b6d4-4b6f-9487-56a87b1c444a-config-data\") pod \"keystone-db-sync-8rddw\" (UID: \"e903ee6f-b6d4-4b6f-9487-56a87b1c444a\") " pod="openstack/keystone-db-sync-8rddw" Feb 19 14:39:28 crc kubenswrapper[4861]: I0219 14:39:28.242746 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e903ee6f-b6d4-4b6f-9487-56a87b1c444a-config-data\") pod \"keystone-db-sync-8rddw\" (UID: \"e903ee6f-b6d4-4b6f-9487-56a87b1c444a\") " pod="openstack/keystone-db-sync-8rddw" Feb 19 14:39:28 crc kubenswrapper[4861]: I0219 14:39:28.244237 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e903ee6f-b6d4-4b6f-9487-56a87b1c444a-combined-ca-bundle\") pod \"keystone-db-sync-8rddw\" (UID: \"e903ee6f-b6d4-4b6f-9487-56a87b1c444a\") " pod="openstack/keystone-db-sync-8rddw" Feb 19 14:39:28 crc kubenswrapper[4861]: 
I0219 14:39:28.263455 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xprth\" (UniqueName: \"kubernetes.io/projected/e903ee6f-b6d4-4b6f-9487-56a87b1c444a-kube-api-access-xprth\") pod \"keystone-db-sync-8rddw\" (UID: \"e903ee6f-b6d4-4b6f-9487-56a87b1c444a\") " pod="openstack/keystone-db-sync-8rddw" Feb 19 14:39:28 crc kubenswrapper[4861]: I0219 14:39:28.342982 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8rddw" Feb 19 14:39:28 crc kubenswrapper[4861]: I0219 14:39:28.818857 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8rddw"] Feb 19 14:39:28 crc kubenswrapper[4861]: W0219 14:39:28.818897 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode903ee6f_b6d4_4b6f_9487_56a87b1c444a.slice/crio-02766f89bce2d5e4e0d0753a77af9dc3e4ef83151a6d483387115d3ea933b576 WatchSource:0}: Error finding container 02766f89bce2d5e4e0d0753a77af9dc3e4ef83151a6d483387115d3ea933b576: Status 404 returned error can't find the container with id 02766f89bce2d5e4e0d0753a77af9dc3e4ef83151a6d483387115d3ea933b576 Feb 19 14:39:29 crc kubenswrapper[4861]: I0219 14:39:29.234165 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8rddw" event={"ID":"e903ee6f-b6d4-4b6f-9487-56a87b1c444a","Type":"ContainerStarted","Data":"d3d208b9fd8c961f3ae6bc8dcfab32e93884363fed8b65ce4e16c984cc002cd9"} Feb 19 14:39:29 crc kubenswrapper[4861]: I0219 14:39:29.234407 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8rddw" event={"ID":"e903ee6f-b6d4-4b6f-9487-56a87b1c444a","Type":"ContainerStarted","Data":"02766f89bce2d5e4e0d0753a77af9dc3e4ef83151a6d483387115d3ea933b576"} Feb 19 14:39:29 crc kubenswrapper[4861]: I0219 14:39:29.257245 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/keystone-db-sync-8rddw" podStartSLOduration=2.257228506 podStartE2EDuration="2.257228506s" podCreationTimestamp="2026-02-19 14:39:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:39:29.255294095 +0000 UTC m=+5383.916397323" watchObservedRunningTime="2026-02-19 14:39:29.257228506 +0000 UTC m=+5383.918331734" Feb 19 14:39:31 crc kubenswrapper[4861]: I0219 14:39:31.260650 4861 generic.go:334] "Generic (PLEG): container finished" podID="e903ee6f-b6d4-4b6f-9487-56a87b1c444a" containerID="d3d208b9fd8c961f3ae6bc8dcfab32e93884363fed8b65ce4e16c984cc002cd9" exitCode=0 Feb 19 14:39:31 crc kubenswrapper[4861]: I0219 14:39:31.260766 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8rddw" event={"ID":"e903ee6f-b6d4-4b6f-9487-56a87b1c444a","Type":"ContainerDied","Data":"d3d208b9fd8c961f3ae6bc8dcfab32e93884363fed8b65ce4e16c984cc002cd9"} Feb 19 14:39:32 crc kubenswrapper[4861]: I0219 14:39:32.621522 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-8rddw" Feb 19 14:39:32 crc kubenswrapper[4861]: I0219 14:39:32.720734 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e903ee6f-b6d4-4b6f-9487-56a87b1c444a-config-data\") pod \"e903ee6f-b6d4-4b6f-9487-56a87b1c444a\" (UID: \"e903ee6f-b6d4-4b6f-9487-56a87b1c444a\") " Feb 19 14:39:32 crc kubenswrapper[4861]: I0219 14:39:32.720873 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e903ee6f-b6d4-4b6f-9487-56a87b1c444a-combined-ca-bundle\") pod \"e903ee6f-b6d4-4b6f-9487-56a87b1c444a\" (UID: \"e903ee6f-b6d4-4b6f-9487-56a87b1c444a\") " Feb 19 14:39:32 crc kubenswrapper[4861]: I0219 14:39:32.721001 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xprth\" (UniqueName: \"kubernetes.io/projected/e903ee6f-b6d4-4b6f-9487-56a87b1c444a-kube-api-access-xprth\") pod \"e903ee6f-b6d4-4b6f-9487-56a87b1c444a\" (UID: \"e903ee6f-b6d4-4b6f-9487-56a87b1c444a\") " Feb 19 14:39:32 crc kubenswrapper[4861]: I0219 14:39:32.750633 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e903ee6f-b6d4-4b6f-9487-56a87b1c444a-kube-api-access-xprth" (OuterVolumeSpecName: "kube-api-access-xprth") pod "e903ee6f-b6d4-4b6f-9487-56a87b1c444a" (UID: "e903ee6f-b6d4-4b6f-9487-56a87b1c444a"). InnerVolumeSpecName "kube-api-access-xprth". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:39:32 crc kubenswrapper[4861]: I0219 14:39:32.787901 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e903ee6f-b6d4-4b6f-9487-56a87b1c444a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e903ee6f-b6d4-4b6f-9487-56a87b1c444a" (UID: "e903ee6f-b6d4-4b6f-9487-56a87b1c444a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:39:32 crc kubenswrapper[4861]: I0219 14:39:32.794124 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e903ee6f-b6d4-4b6f-9487-56a87b1c444a-config-data" (OuterVolumeSpecName: "config-data") pod "e903ee6f-b6d4-4b6f-9487-56a87b1c444a" (UID: "e903ee6f-b6d4-4b6f-9487-56a87b1c444a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:39:32 crc kubenswrapper[4861]: I0219 14:39:32.823224 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e903ee6f-b6d4-4b6f-9487-56a87b1c444a-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:39:32 crc kubenswrapper[4861]: I0219 14:39:32.823259 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e903ee6f-b6d4-4b6f-9487-56a87b1c444a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:39:32 crc kubenswrapper[4861]: I0219 14:39:32.823272 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xprth\" (UniqueName: \"kubernetes.io/projected/e903ee6f-b6d4-4b6f-9487-56a87b1c444a-kube-api-access-xprth\") on node \"crc\" DevicePath \"\"" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.280944 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8rddw" event={"ID":"e903ee6f-b6d4-4b6f-9487-56a87b1c444a","Type":"ContainerDied","Data":"02766f89bce2d5e4e0d0753a77af9dc3e4ef83151a6d483387115d3ea933b576"} Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.281001 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02766f89bce2d5e4e0d0753a77af9dc3e4ef83151a6d483387115d3ea933b576" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.281034 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-8rddw" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.590616 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f8c84669-4wrkd"] Feb 19 14:39:33 crc kubenswrapper[4861]: E0219 14:39:33.590968 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e903ee6f-b6d4-4b6f-9487-56a87b1c444a" containerName="keystone-db-sync" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.590983 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="e903ee6f-b6d4-4b6f-9487-56a87b1c444a" containerName="keystone-db-sync" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.591118 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="e903ee6f-b6d4-4b6f-9487-56a87b1c444a" containerName="keystone-db-sync" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.591951 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f8c84669-4wrkd" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.600224 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-zklr2"] Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.601178 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zklr2" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.605930 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.606173 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.606280 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.606400 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.606525 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2krkf" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.614436 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f8c84669-4wrkd"] Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.623365 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zklr2"] Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.758915 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6cd929e-ff2c-446d-a11f-b229278b55f9-ovsdbserver-nb\") pod \"dnsmasq-dns-7f8c84669-4wrkd\" (UID: \"b6cd929e-ff2c-446d-a11f-b229278b55f9\") " pod="openstack/dnsmasq-dns-7f8c84669-4wrkd" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.759216 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6cd929e-ff2c-446d-a11f-b229278b55f9-dns-svc\") pod \"dnsmasq-dns-7f8c84669-4wrkd\" (UID: \"b6cd929e-ff2c-446d-a11f-b229278b55f9\") " pod="openstack/dnsmasq-dns-7f8c84669-4wrkd" 
Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.759263 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7823e3-80fc-487d-9ed4-4a208f77b028-combined-ca-bundle\") pod \"keystone-bootstrap-zklr2\" (UID: \"9f7823e3-80fc-487d-9ed4-4a208f77b028\") " pod="openstack/keystone-bootstrap-zklr2" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.759290 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9f7823e3-80fc-487d-9ed4-4a208f77b028-fernet-keys\") pod \"keystone-bootstrap-zklr2\" (UID: \"9f7823e3-80fc-487d-9ed4-4a208f77b028\") " pod="openstack/keystone-bootstrap-zklr2" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.759314 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f7823e3-80fc-487d-9ed4-4a208f77b028-config-data\") pod \"keystone-bootstrap-zklr2\" (UID: \"9f7823e3-80fc-487d-9ed4-4a208f77b028\") " pod="openstack/keystone-bootstrap-zklr2" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.759366 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6cd929e-ff2c-446d-a11f-b229278b55f9-ovsdbserver-sb\") pod \"dnsmasq-dns-7f8c84669-4wrkd\" (UID: \"b6cd929e-ff2c-446d-a11f-b229278b55f9\") " pod="openstack/dnsmasq-dns-7f8c84669-4wrkd" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.759400 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9f7823e3-80fc-487d-9ed4-4a208f77b028-credential-keys\") pod \"keystone-bootstrap-zklr2\" (UID: \"9f7823e3-80fc-487d-9ed4-4a208f77b028\") " pod="openstack/keystone-bootstrap-zklr2" Feb 19 
14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.759446 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6cd929e-ff2c-446d-a11f-b229278b55f9-config\") pod \"dnsmasq-dns-7f8c84669-4wrkd\" (UID: \"b6cd929e-ff2c-446d-a11f-b229278b55f9\") " pod="openstack/dnsmasq-dns-7f8c84669-4wrkd" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.759470 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f7823e3-80fc-487d-9ed4-4a208f77b028-scripts\") pod \"keystone-bootstrap-zklr2\" (UID: \"9f7823e3-80fc-487d-9ed4-4a208f77b028\") " pod="openstack/keystone-bootstrap-zklr2" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.759518 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xtgf\" (UniqueName: \"kubernetes.io/projected/b6cd929e-ff2c-446d-a11f-b229278b55f9-kube-api-access-6xtgf\") pod \"dnsmasq-dns-7f8c84669-4wrkd\" (UID: \"b6cd929e-ff2c-446d-a11f-b229278b55f9\") " pod="openstack/dnsmasq-dns-7f8c84669-4wrkd" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.759545 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v6nt\" (UniqueName: \"kubernetes.io/projected/9f7823e3-80fc-487d-9ed4-4a208f77b028-kube-api-access-9v6nt\") pod \"keystone-bootstrap-zklr2\" (UID: \"9f7823e3-80fc-487d-9ed4-4a208f77b028\") " pod="openstack/keystone-bootstrap-zklr2" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.860562 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7823e3-80fc-487d-9ed4-4a208f77b028-combined-ca-bundle\") pod \"keystone-bootstrap-zklr2\" (UID: \"9f7823e3-80fc-487d-9ed4-4a208f77b028\") " pod="openstack/keystone-bootstrap-zklr2" Feb 19 
14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.860610 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9f7823e3-80fc-487d-9ed4-4a208f77b028-fernet-keys\") pod \"keystone-bootstrap-zklr2\" (UID: \"9f7823e3-80fc-487d-9ed4-4a208f77b028\") " pod="openstack/keystone-bootstrap-zklr2" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.860633 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f7823e3-80fc-487d-9ed4-4a208f77b028-config-data\") pod \"keystone-bootstrap-zklr2\" (UID: \"9f7823e3-80fc-487d-9ed4-4a208f77b028\") " pod="openstack/keystone-bootstrap-zklr2" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.860677 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6cd929e-ff2c-446d-a11f-b229278b55f9-ovsdbserver-sb\") pod \"dnsmasq-dns-7f8c84669-4wrkd\" (UID: \"b6cd929e-ff2c-446d-a11f-b229278b55f9\") " pod="openstack/dnsmasq-dns-7f8c84669-4wrkd" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.860701 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9f7823e3-80fc-487d-9ed4-4a208f77b028-credential-keys\") pod \"keystone-bootstrap-zklr2\" (UID: \"9f7823e3-80fc-487d-9ed4-4a208f77b028\") " pod="openstack/keystone-bootstrap-zklr2" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.860726 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6cd929e-ff2c-446d-a11f-b229278b55f9-config\") pod \"dnsmasq-dns-7f8c84669-4wrkd\" (UID: \"b6cd929e-ff2c-446d-a11f-b229278b55f9\") " pod="openstack/dnsmasq-dns-7f8c84669-4wrkd" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.860748 4861 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f7823e3-80fc-487d-9ed4-4a208f77b028-scripts\") pod \"keystone-bootstrap-zklr2\" (UID: \"9f7823e3-80fc-487d-9ed4-4a208f77b028\") " pod="openstack/keystone-bootstrap-zklr2" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.860786 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xtgf\" (UniqueName: \"kubernetes.io/projected/b6cd929e-ff2c-446d-a11f-b229278b55f9-kube-api-access-6xtgf\") pod \"dnsmasq-dns-7f8c84669-4wrkd\" (UID: \"b6cd929e-ff2c-446d-a11f-b229278b55f9\") " pod="openstack/dnsmasq-dns-7f8c84669-4wrkd" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.860810 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v6nt\" (UniqueName: \"kubernetes.io/projected/9f7823e3-80fc-487d-9ed4-4a208f77b028-kube-api-access-9v6nt\") pod \"keystone-bootstrap-zklr2\" (UID: \"9f7823e3-80fc-487d-9ed4-4a208f77b028\") " pod="openstack/keystone-bootstrap-zklr2" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.860861 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6cd929e-ff2c-446d-a11f-b229278b55f9-ovsdbserver-nb\") pod \"dnsmasq-dns-7f8c84669-4wrkd\" (UID: \"b6cd929e-ff2c-446d-a11f-b229278b55f9\") " pod="openstack/dnsmasq-dns-7f8c84669-4wrkd" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.860882 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6cd929e-ff2c-446d-a11f-b229278b55f9-dns-svc\") pod \"dnsmasq-dns-7f8c84669-4wrkd\" (UID: \"b6cd929e-ff2c-446d-a11f-b229278b55f9\") " pod="openstack/dnsmasq-dns-7f8c84669-4wrkd" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.862006 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/b6cd929e-ff2c-446d-a11f-b229278b55f9-ovsdbserver-sb\") pod \"dnsmasq-dns-7f8c84669-4wrkd\" (UID: \"b6cd929e-ff2c-446d-a11f-b229278b55f9\") " pod="openstack/dnsmasq-dns-7f8c84669-4wrkd" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.862692 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6cd929e-ff2c-446d-a11f-b229278b55f9-ovsdbserver-nb\") pod \"dnsmasq-dns-7f8c84669-4wrkd\" (UID: \"b6cd929e-ff2c-446d-a11f-b229278b55f9\") " pod="openstack/dnsmasq-dns-7f8c84669-4wrkd" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.862697 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6cd929e-ff2c-446d-a11f-b229278b55f9-config\") pod \"dnsmasq-dns-7f8c84669-4wrkd\" (UID: \"b6cd929e-ff2c-446d-a11f-b229278b55f9\") " pod="openstack/dnsmasq-dns-7f8c84669-4wrkd" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.863105 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6cd929e-ff2c-446d-a11f-b229278b55f9-dns-svc\") pod \"dnsmasq-dns-7f8c84669-4wrkd\" (UID: \"b6cd929e-ff2c-446d-a11f-b229278b55f9\") " pod="openstack/dnsmasq-dns-7f8c84669-4wrkd" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.866195 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7823e3-80fc-487d-9ed4-4a208f77b028-combined-ca-bundle\") pod \"keystone-bootstrap-zklr2\" (UID: \"9f7823e3-80fc-487d-9ed4-4a208f77b028\") " pod="openstack/keystone-bootstrap-zklr2" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.873998 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9f7823e3-80fc-487d-9ed4-4a208f77b028-credential-keys\") pod \"keystone-bootstrap-zklr2\" (UID: 
\"9f7823e3-80fc-487d-9ed4-4a208f77b028\") " pod="openstack/keystone-bootstrap-zklr2" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.874553 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9f7823e3-80fc-487d-9ed4-4a208f77b028-fernet-keys\") pod \"keystone-bootstrap-zklr2\" (UID: \"9f7823e3-80fc-487d-9ed4-4a208f77b028\") " pod="openstack/keystone-bootstrap-zklr2" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.875682 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f7823e3-80fc-487d-9ed4-4a208f77b028-scripts\") pod \"keystone-bootstrap-zklr2\" (UID: \"9f7823e3-80fc-487d-9ed4-4a208f77b028\") " pod="openstack/keystone-bootstrap-zklr2" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.875849 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f7823e3-80fc-487d-9ed4-4a208f77b028-config-data\") pod \"keystone-bootstrap-zklr2\" (UID: \"9f7823e3-80fc-487d-9ed4-4a208f77b028\") " pod="openstack/keystone-bootstrap-zklr2" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.877994 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v6nt\" (UniqueName: \"kubernetes.io/projected/9f7823e3-80fc-487d-9ed4-4a208f77b028-kube-api-access-9v6nt\") pod \"keystone-bootstrap-zklr2\" (UID: \"9f7823e3-80fc-487d-9ed4-4a208f77b028\") " pod="openstack/keystone-bootstrap-zklr2" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.879655 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xtgf\" (UniqueName: \"kubernetes.io/projected/b6cd929e-ff2c-446d-a11f-b229278b55f9-kube-api-access-6xtgf\") pod \"dnsmasq-dns-7f8c84669-4wrkd\" (UID: \"b6cd929e-ff2c-446d-a11f-b229278b55f9\") " pod="openstack/dnsmasq-dns-7f8c84669-4wrkd" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 
14:39:33.913179 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f8c84669-4wrkd" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.923985 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zklr2" Feb 19 14:39:33 crc kubenswrapper[4861]: I0219 14:39:33.980178 4861 scope.go:117] "RemoveContainer" containerID="55684c363f793a2ee4b1d80659dbf54faf32701883e18291ec4e32c294ea8bad" Feb 19 14:39:33 crc kubenswrapper[4861]: E0219 14:39:33.980408 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:39:34 crc kubenswrapper[4861]: I0219 14:39:34.397584 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f8c84669-4wrkd"] Feb 19 14:39:34 crc kubenswrapper[4861]: W0219 14:39:34.401866 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6cd929e_ff2c_446d_a11f_b229278b55f9.slice/crio-2d3a7cc6466803eeec3f5ba7e25530eb56038d86481a5fd628a39f52ecc6e563 WatchSource:0}: Error finding container 2d3a7cc6466803eeec3f5ba7e25530eb56038d86481a5fd628a39f52ecc6e563: Status 404 returned error can't find the container with id 2d3a7cc6466803eeec3f5ba7e25530eb56038d86481a5fd628a39f52ecc6e563 Feb 19 14:39:34 crc kubenswrapper[4861]: I0219 14:39:34.455256 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zklr2"] Feb 19 14:39:34 crc kubenswrapper[4861]: W0219 14:39:34.458926 4861 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f7823e3_80fc_487d_9ed4_4a208f77b028.slice/crio-f27067762c95410283350f2c697774e61a9132546b5b69613c8a33ecebd17777 WatchSource:0}: Error finding container f27067762c95410283350f2c697774e61a9132546b5b69613c8a33ecebd17777: Status 404 returned error can't find the container with id f27067762c95410283350f2c697774e61a9132546b5b69613c8a33ecebd17777 Feb 19 14:39:35 crc kubenswrapper[4861]: I0219 14:39:35.295284 4861 generic.go:334] "Generic (PLEG): container finished" podID="b6cd929e-ff2c-446d-a11f-b229278b55f9" containerID="704cefad4a61e84d95e132a449b548b54f574a78091e78b2cbebc73069097c7d" exitCode=0 Feb 19 14:39:35 crc kubenswrapper[4861]: I0219 14:39:35.295378 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f8c84669-4wrkd" event={"ID":"b6cd929e-ff2c-446d-a11f-b229278b55f9","Type":"ContainerDied","Data":"704cefad4a61e84d95e132a449b548b54f574a78091e78b2cbebc73069097c7d"} Feb 19 14:39:35 crc kubenswrapper[4861]: I0219 14:39:35.295791 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f8c84669-4wrkd" event={"ID":"b6cd929e-ff2c-446d-a11f-b229278b55f9","Type":"ContainerStarted","Data":"2d3a7cc6466803eeec3f5ba7e25530eb56038d86481a5fd628a39f52ecc6e563"} Feb 19 14:39:35 crc kubenswrapper[4861]: I0219 14:39:35.297207 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zklr2" event={"ID":"9f7823e3-80fc-487d-9ed4-4a208f77b028","Type":"ContainerStarted","Data":"26e1ec544c1003d5dfc3cb8e5c17e9edc65a625143a299d4552fc87ac9c4b6d9"} Feb 19 14:39:35 crc kubenswrapper[4861]: I0219 14:39:35.297279 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zklr2" event={"ID":"9f7823e3-80fc-487d-9ed4-4a208f77b028","Type":"ContainerStarted","Data":"f27067762c95410283350f2c697774e61a9132546b5b69613c8a33ecebd17777"} Feb 19 14:39:35 crc kubenswrapper[4861]: I0219 14:39:35.415604 4861 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-zklr2" podStartSLOduration=2.415573142 podStartE2EDuration="2.415573142s" podCreationTimestamp="2026-02-19 14:39:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:39:35.398542006 +0000 UTC m=+5390.059645234" watchObservedRunningTime="2026-02-19 14:39:35.415573142 +0000 UTC m=+5390.076676370" Feb 19 14:39:36 crc kubenswrapper[4861]: I0219 14:39:36.313669 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f8c84669-4wrkd" event={"ID":"b6cd929e-ff2c-446d-a11f-b229278b55f9","Type":"ContainerStarted","Data":"fcb00bc02cb54c714621ad320d0146b07294b822376f8176f68ed3450c9e7f87"} Feb 19 14:39:36 crc kubenswrapper[4861]: I0219 14:39:36.314061 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f8c84669-4wrkd" Feb 19 14:39:36 crc kubenswrapper[4861]: I0219 14:39:36.339718 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f8c84669-4wrkd" podStartSLOduration=3.339684902 podStartE2EDuration="3.339684902s" podCreationTimestamp="2026-02-19 14:39:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:39:36.336314542 +0000 UTC m=+5390.997417850" watchObservedRunningTime="2026-02-19 14:39:36.339684902 +0000 UTC m=+5391.000788170" Feb 19 14:39:37 crc kubenswrapper[4861]: I0219 14:39:37.826175 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 19 14:39:38 crc kubenswrapper[4861]: I0219 14:39:38.332910 4861 generic.go:334] "Generic (PLEG): container finished" podID="9f7823e3-80fc-487d-9ed4-4a208f77b028" containerID="26e1ec544c1003d5dfc3cb8e5c17e9edc65a625143a299d4552fc87ac9c4b6d9" exitCode=0 Feb 19 14:39:38 crc 
kubenswrapper[4861]: I0219 14:39:38.333020 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zklr2" event={"ID":"9f7823e3-80fc-487d-9ed4-4a208f77b028","Type":"ContainerDied","Data":"26e1ec544c1003d5dfc3cb8e5c17e9edc65a625143a299d4552fc87ac9c4b6d9"} Feb 19 14:39:39 crc kubenswrapper[4861]: I0219 14:39:39.728665 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zklr2" Feb 19 14:39:39 crc kubenswrapper[4861]: I0219 14:39:39.770197 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f7823e3-80fc-487d-9ed4-4a208f77b028-config-data\") pod \"9f7823e3-80fc-487d-9ed4-4a208f77b028\" (UID: \"9f7823e3-80fc-487d-9ed4-4a208f77b028\") " Feb 19 14:39:39 crc kubenswrapper[4861]: I0219 14:39:39.770747 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9f7823e3-80fc-487d-9ed4-4a208f77b028-credential-keys\") pod \"9f7823e3-80fc-487d-9ed4-4a208f77b028\" (UID: \"9f7823e3-80fc-487d-9ed4-4a208f77b028\") " Feb 19 14:39:39 crc kubenswrapper[4861]: I0219 14:39:39.770808 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f7823e3-80fc-487d-9ed4-4a208f77b028-scripts\") pod \"9f7823e3-80fc-487d-9ed4-4a208f77b028\" (UID: \"9f7823e3-80fc-487d-9ed4-4a208f77b028\") " Feb 19 14:39:39 crc kubenswrapper[4861]: I0219 14:39:39.770865 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7823e3-80fc-487d-9ed4-4a208f77b028-combined-ca-bundle\") pod \"9f7823e3-80fc-487d-9ed4-4a208f77b028\" (UID: \"9f7823e3-80fc-487d-9ed4-4a208f77b028\") " Feb 19 14:39:39 crc kubenswrapper[4861]: I0219 14:39:39.770903 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-9v6nt\" (UniqueName: \"kubernetes.io/projected/9f7823e3-80fc-487d-9ed4-4a208f77b028-kube-api-access-9v6nt\") pod \"9f7823e3-80fc-487d-9ed4-4a208f77b028\" (UID: \"9f7823e3-80fc-487d-9ed4-4a208f77b028\") " Feb 19 14:39:39 crc kubenswrapper[4861]: I0219 14:39:39.771003 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9f7823e3-80fc-487d-9ed4-4a208f77b028-fernet-keys\") pod \"9f7823e3-80fc-487d-9ed4-4a208f77b028\" (UID: \"9f7823e3-80fc-487d-9ed4-4a208f77b028\") " Feb 19 14:39:39 crc kubenswrapper[4861]: I0219 14:39:39.778643 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f7823e3-80fc-487d-9ed4-4a208f77b028-scripts" (OuterVolumeSpecName: "scripts") pod "9f7823e3-80fc-487d-9ed4-4a208f77b028" (UID: "9f7823e3-80fc-487d-9ed4-4a208f77b028"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:39:39 crc kubenswrapper[4861]: I0219 14:39:39.779345 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f7823e3-80fc-487d-9ed4-4a208f77b028-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9f7823e3-80fc-487d-9ed4-4a208f77b028" (UID: "9f7823e3-80fc-487d-9ed4-4a208f77b028"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:39:39 crc kubenswrapper[4861]: I0219 14:39:39.782630 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f7823e3-80fc-487d-9ed4-4a208f77b028-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9f7823e3-80fc-487d-9ed4-4a208f77b028" (UID: "9f7823e3-80fc-487d-9ed4-4a208f77b028"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:39:39 crc kubenswrapper[4861]: I0219 14:39:39.795041 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f7823e3-80fc-487d-9ed4-4a208f77b028-kube-api-access-9v6nt" (OuterVolumeSpecName: "kube-api-access-9v6nt") pod "9f7823e3-80fc-487d-9ed4-4a208f77b028" (UID: "9f7823e3-80fc-487d-9ed4-4a208f77b028"). InnerVolumeSpecName "kube-api-access-9v6nt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:39:39 crc kubenswrapper[4861]: I0219 14:39:39.808919 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f7823e3-80fc-487d-9ed4-4a208f77b028-config-data" (OuterVolumeSpecName: "config-data") pod "9f7823e3-80fc-487d-9ed4-4a208f77b028" (UID: "9f7823e3-80fc-487d-9ed4-4a208f77b028"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:39:39 crc kubenswrapper[4861]: I0219 14:39:39.825813 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f7823e3-80fc-487d-9ed4-4a208f77b028-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f7823e3-80fc-487d-9ed4-4a208f77b028" (UID: "9f7823e3-80fc-487d-9ed4-4a208f77b028"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:39:39 crc kubenswrapper[4861]: I0219 14:39:39.873083 4861 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9f7823e3-80fc-487d-9ed4-4a208f77b028-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 14:39:39 crc kubenswrapper[4861]: I0219 14:39:39.873119 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f7823e3-80fc-487d-9ed4-4a208f77b028-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:39:39 crc kubenswrapper[4861]: I0219 14:39:39.873134 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7823e3-80fc-487d-9ed4-4a208f77b028-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:39:39 crc kubenswrapper[4861]: I0219 14:39:39.873144 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9v6nt\" (UniqueName: \"kubernetes.io/projected/9f7823e3-80fc-487d-9ed4-4a208f77b028-kube-api-access-9v6nt\") on node \"crc\" DevicePath \"\"" Feb 19 14:39:39 crc kubenswrapper[4861]: I0219 14:39:39.873155 4861 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9f7823e3-80fc-487d-9ed4-4a208f77b028-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 14:39:39 crc kubenswrapper[4861]: I0219 14:39:39.873163 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f7823e3-80fc-487d-9ed4-4a208f77b028-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 14:39:40.357040 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zklr2" event={"ID":"9f7823e3-80fc-487d-9ed4-4a208f77b028","Type":"ContainerDied","Data":"f27067762c95410283350f2c697774e61a9132546b5b69613c8a33ecebd17777"} Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 
14:39:40.357103 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f27067762c95410283350f2c697774e61a9132546b5b69613c8a33ecebd17777" Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 14:39:40.357163 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zklr2" Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 14:39:40.471684 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-zklr2"] Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 14:39:40.480385 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-zklr2"] Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 14:39:40.551391 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-v799p"] Feb 19 14:39:40 crc kubenswrapper[4861]: E0219 14:39:40.551868 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f7823e3-80fc-487d-9ed4-4a208f77b028" containerName="keystone-bootstrap" Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 14:39:40.551884 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f7823e3-80fc-487d-9ed4-4a208f77b028" containerName="keystone-bootstrap" Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 14:39:40.552056 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f7823e3-80fc-487d-9ed4-4a208f77b028" containerName="keystone-bootstrap" Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 14:39:40.552754 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-v799p" Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 14:39:40.559064 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2krkf" Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 14:39:40.559172 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 14:39:40.559186 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 14:39:40.559261 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 14:39:40.559558 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 14:39:40.564329 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-v799p"] Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 14:39:40.587390 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbbzt\" (UniqueName: \"kubernetes.io/projected/2625ca96-572a-45aa-9d89-e04784f50306-kube-api-access-qbbzt\") pod \"keystone-bootstrap-v799p\" (UID: \"2625ca96-572a-45aa-9d89-e04784f50306\") " pod="openstack/keystone-bootstrap-v799p" Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 14:39:40.587557 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2625ca96-572a-45aa-9d89-e04784f50306-scripts\") pod \"keystone-bootstrap-v799p\" (UID: \"2625ca96-572a-45aa-9d89-e04784f50306\") " pod="openstack/keystone-bootstrap-v799p" Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 14:39:40.587680 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2625ca96-572a-45aa-9d89-e04784f50306-fernet-keys\") pod \"keystone-bootstrap-v799p\" (UID: \"2625ca96-572a-45aa-9d89-e04784f50306\") " pod="openstack/keystone-bootstrap-v799p" Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 14:39:40.587861 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2625ca96-572a-45aa-9d89-e04784f50306-config-data\") pod \"keystone-bootstrap-v799p\" (UID: \"2625ca96-572a-45aa-9d89-e04784f50306\") " pod="openstack/keystone-bootstrap-v799p" Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 14:39:40.588029 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2625ca96-572a-45aa-9d89-e04784f50306-combined-ca-bundle\") pod \"keystone-bootstrap-v799p\" (UID: \"2625ca96-572a-45aa-9d89-e04784f50306\") " pod="openstack/keystone-bootstrap-v799p" Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 14:39:40.588146 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2625ca96-572a-45aa-9d89-e04784f50306-credential-keys\") pod \"keystone-bootstrap-v799p\" (UID: \"2625ca96-572a-45aa-9d89-e04784f50306\") " pod="openstack/keystone-bootstrap-v799p" Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 14:39:40.690087 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2625ca96-572a-45aa-9d89-e04784f50306-fernet-keys\") pod \"keystone-bootstrap-v799p\" (UID: \"2625ca96-572a-45aa-9d89-e04784f50306\") " pod="openstack/keystone-bootstrap-v799p" Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 14:39:40.690250 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2625ca96-572a-45aa-9d89-e04784f50306-config-data\") pod \"keystone-bootstrap-v799p\" (UID: \"2625ca96-572a-45aa-9d89-e04784f50306\") " pod="openstack/keystone-bootstrap-v799p" Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 14:39:40.690379 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2625ca96-572a-45aa-9d89-e04784f50306-combined-ca-bundle\") pod \"keystone-bootstrap-v799p\" (UID: \"2625ca96-572a-45aa-9d89-e04784f50306\") " pod="openstack/keystone-bootstrap-v799p" Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 14:39:40.690525 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2625ca96-572a-45aa-9d89-e04784f50306-credential-keys\") pod \"keystone-bootstrap-v799p\" (UID: \"2625ca96-572a-45aa-9d89-e04784f50306\") " pod="openstack/keystone-bootstrap-v799p" Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 14:39:40.690590 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbbzt\" (UniqueName: \"kubernetes.io/projected/2625ca96-572a-45aa-9d89-e04784f50306-kube-api-access-qbbzt\") pod \"keystone-bootstrap-v799p\" (UID: \"2625ca96-572a-45aa-9d89-e04784f50306\") " pod="openstack/keystone-bootstrap-v799p" Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 14:39:40.690651 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2625ca96-572a-45aa-9d89-e04784f50306-scripts\") pod \"keystone-bootstrap-v799p\" (UID: \"2625ca96-572a-45aa-9d89-e04784f50306\") " pod="openstack/keystone-bootstrap-v799p" Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 14:39:40.695917 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2625ca96-572a-45aa-9d89-e04784f50306-credential-keys\") pod 
\"keystone-bootstrap-v799p\" (UID: \"2625ca96-572a-45aa-9d89-e04784f50306\") " pod="openstack/keystone-bootstrap-v799p" Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 14:39:40.696579 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2625ca96-572a-45aa-9d89-e04784f50306-fernet-keys\") pod \"keystone-bootstrap-v799p\" (UID: \"2625ca96-572a-45aa-9d89-e04784f50306\") " pod="openstack/keystone-bootstrap-v799p" Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 14:39:40.696635 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2625ca96-572a-45aa-9d89-e04784f50306-config-data\") pod \"keystone-bootstrap-v799p\" (UID: \"2625ca96-572a-45aa-9d89-e04784f50306\") " pod="openstack/keystone-bootstrap-v799p" Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 14:39:40.700020 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2625ca96-572a-45aa-9d89-e04784f50306-combined-ca-bundle\") pod \"keystone-bootstrap-v799p\" (UID: \"2625ca96-572a-45aa-9d89-e04784f50306\") " pod="openstack/keystone-bootstrap-v799p" Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 14:39:40.706846 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2625ca96-572a-45aa-9d89-e04784f50306-scripts\") pod \"keystone-bootstrap-v799p\" (UID: \"2625ca96-572a-45aa-9d89-e04784f50306\") " pod="openstack/keystone-bootstrap-v799p" Feb 19 14:39:40 crc kubenswrapper[4861]: I0219 14:39:40.717193 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbbzt\" (UniqueName: \"kubernetes.io/projected/2625ca96-572a-45aa-9d89-e04784f50306-kube-api-access-qbbzt\") pod \"keystone-bootstrap-v799p\" (UID: \"2625ca96-572a-45aa-9d89-e04784f50306\") " pod="openstack/keystone-bootstrap-v799p" Feb 19 14:39:40 crc 
kubenswrapper[4861]: I0219 14:39:40.875296 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-v799p" Feb 19 14:39:41 crc kubenswrapper[4861]: I0219 14:39:41.401688 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-v799p"] Feb 19 14:39:41 crc kubenswrapper[4861]: W0219 14:39:41.410360 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2625ca96_572a_45aa_9d89_e04784f50306.slice/crio-57a91d3ae60cbae242017ba82edabd443c5fcc1e8dec45d5f3f7304d3f2b438b WatchSource:0}: Error finding container 57a91d3ae60cbae242017ba82edabd443c5fcc1e8dec45d5f3f7304d3f2b438b: Status 404 returned error can't find the container with id 57a91d3ae60cbae242017ba82edabd443c5fcc1e8dec45d5f3f7304d3f2b438b Feb 19 14:39:41 crc kubenswrapper[4861]: I0219 14:39:41.988274 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f7823e3-80fc-487d-9ed4-4a208f77b028" path="/var/lib/kubelet/pods/9f7823e3-80fc-487d-9ed4-4a208f77b028/volumes" Feb 19 14:39:42 crc kubenswrapper[4861]: I0219 14:39:42.380245 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-v799p" event={"ID":"2625ca96-572a-45aa-9d89-e04784f50306","Type":"ContainerStarted","Data":"cf529279aec1041debc0751a79088911f3ef8439d5033064b85b194f57887ff9"} Feb 19 14:39:42 crc kubenswrapper[4861]: I0219 14:39:42.380754 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-v799p" event={"ID":"2625ca96-572a-45aa-9d89-e04784f50306","Type":"ContainerStarted","Data":"57a91d3ae60cbae242017ba82edabd443c5fcc1e8dec45d5f3f7304d3f2b438b"} Feb 19 14:39:42 crc kubenswrapper[4861]: I0219 14:39:42.407324 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-v799p" podStartSLOduration=2.407299266 podStartE2EDuration="2.407299266s" 
podCreationTimestamp="2026-02-19 14:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:39:42.400576146 +0000 UTC m=+5397.061679394" watchObservedRunningTime="2026-02-19 14:39:42.407299266 +0000 UTC m=+5397.068402494" Feb 19 14:39:43 crc kubenswrapper[4861]: I0219 14:39:43.915630 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f8c84669-4wrkd" Feb 19 14:39:43 crc kubenswrapper[4861]: I0219 14:39:43.999717 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c6db6cd4c-6z77x"] Feb 19 14:39:44 crc kubenswrapper[4861]: I0219 14:39:44.000100 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c6db6cd4c-6z77x" podUID="a33c5e67-d791-4ea2-ae98-0e74eafa506c" containerName="dnsmasq-dns" containerID="cri-o://da98c3a6360e56db7a5f04612be4278c9016356ab0bb08356f78d467d226f550" gracePeriod=10 Feb 19 14:39:44 crc kubenswrapper[4861]: I0219 14:39:44.397878 4861 generic.go:334] "Generic (PLEG): container finished" podID="a33c5e67-d791-4ea2-ae98-0e74eafa506c" containerID="da98c3a6360e56db7a5f04612be4278c9016356ab0bb08356f78d467d226f550" exitCode=0 Feb 19 14:39:44 crc kubenswrapper[4861]: I0219 14:39:44.398238 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c6db6cd4c-6z77x" event={"ID":"a33c5e67-d791-4ea2-ae98-0e74eafa506c","Type":"ContainerDied","Data":"da98c3a6360e56db7a5f04612be4278c9016356ab0bb08356f78d467d226f550"} Feb 19 14:39:44 crc kubenswrapper[4861]: I0219 14:39:44.401212 4861 generic.go:334] "Generic (PLEG): container finished" podID="2625ca96-572a-45aa-9d89-e04784f50306" containerID="cf529279aec1041debc0751a79088911f3ef8439d5033064b85b194f57887ff9" exitCode=0 Feb 19 14:39:44 crc kubenswrapper[4861]: I0219 14:39:44.401254 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-v799p" event={"ID":"2625ca96-572a-45aa-9d89-e04784f50306","Type":"ContainerDied","Data":"cf529279aec1041debc0751a79088911f3ef8439d5033064b85b194f57887ff9"} Feb 19 14:39:44 crc kubenswrapper[4861]: I0219 14:39:44.440024 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c6db6cd4c-6z77x" Feb 19 14:39:44 crc kubenswrapper[4861]: I0219 14:39:44.602439 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a33c5e67-d791-4ea2-ae98-0e74eafa506c-ovsdbserver-nb\") pod \"a33c5e67-d791-4ea2-ae98-0e74eafa506c\" (UID: \"a33c5e67-d791-4ea2-ae98-0e74eafa506c\") " Feb 19 14:39:44 crc kubenswrapper[4861]: I0219 14:39:44.602503 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a33c5e67-d791-4ea2-ae98-0e74eafa506c-ovsdbserver-sb\") pod \"a33c5e67-d791-4ea2-ae98-0e74eafa506c\" (UID: \"a33c5e67-d791-4ea2-ae98-0e74eafa506c\") " Feb 19 14:39:44 crc kubenswrapper[4861]: I0219 14:39:44.602552 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a33c5e67-d791-4ea2-ae98-0e74eafa506c-dns-svc\") pod \"a33c5e67-d791-4ea2-ae98-0e74eafa506c\" (UID: \"a33c5e67-d791-4ea2-ae98-0e74eafa506c\") " Feb 19 14:39:44 crc kubenswrapper[4861]: I0219 14:39:44.602632 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rzpk\" (UniqueName: \"kubernetes.io/projected/a33c5e67-d791-4ea2-ae98-0e74eafa506c-kube-api-access-9rzpk\") pod \"a33c5e67-d791-4ea2-ae98-0e74eafa506c\" (UID: \"a33c5e67-d791-4ea2-ae98-0e74eafa506c\") " Feb 19 14:39:44 crc kubenswrapper[4861]: I0219 14:39:44.602667 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a33c5e67-d791-4ea2-ae98-0e74eafa506c-config\") pod \"a33c5e67-d791-4ea2-ae98-0e74eafa506c\" (UID: \"a33c5e67-d791-4ea2-ae98-0e74eafa506c\") " Feb 19 14:39:44 crc kubenswrapper[4861]: I0219 14:39:44.612863 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a33c5e67-d791-4ea2-ae98-0e74eafa506c-kube-api-access-9rzpk" (OuterVolumeSpecName: "kube-api-access-9rzpk") pod "a33c5e67-d791-4ea2-ae98-0e74eafa506c" (UID: "a33c5e67-d791-4ea2-ae98-0e74eafa506c"). InnerVolumeSpecName "kube-api-access-9rzpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:39:44 crc kubenswrapper[4861]: I0219 14:39:44.655782 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a33c5e67-d791-4ea2-ae98-0e74eafa506c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a33c5e67-d791-4ea2-ae98-0e74eafa506c" (UID: "a33c5e67-d791-4ea2-ae98-0e74eafa506c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:39:44 crc kubenswrapper[4861]: I0219 14:39:44.658803 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a33c5e67-d791-4ea2-ae98-0e74eafa506c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a33c5e67-d791-4ea2-ae98-0e74eafa506c" (UID: "a33c5e67-d791-4ea2-ae98-0e74eafa506c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:39:44 crc kubenswrapper[4861]: I0219 14:39:44.671948 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a33c5e67-d791-4ea2-ae98-0e74eafa506c-config" (OuterVolumeSpecName: "config") pod "a33c5e67-d791-4ea2-ae98-0e74eafa506c" (UID: "a33c5e67-d791-4ea2-ae98-0e74eafa506c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:39:44 crc kubenswrapper[4861]: I0219 14:39:44.677884 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a33c5e67-d791-4ea2-ae98-0e74eafa506c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a33c5e67-d791-4ea2-ae98-0e74eafa506c" (UID: "a33c5e67-d791-4ea2-ae98-0e74eafa506c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:39:44 crc kubenswrapper[4861]: I0219 14:39:44.704501 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rzpk\" (UniqueName: \"kubernetes.io/projected/a33c5e67-d791-4ea2-ae98-0e74eafa506c-kube-api-access-9rzpk\") on node \"crc\" DevicePath \"\"" Feb 19 14:39:44 crc kubenswrapper[4861]: I0219 14:39:44.704532 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a33c5e67-d791-4ea2-ae98-0e74eafa506c-config\") on node \"crc\" DevicePath \"\"" Feb 19 14:39:44 crc kubenswrapper[4861]: I0219 14:39:44.704542 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a33c5e67-d791-4ea2-ae98-0e74eafa506c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 14:39:44 crc kubenswrapper[4861]: I0219 14:39:44.704550 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a33c5e67-d791-4ea2-ae98-0e74eafa506c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 14:39:44 crc kubenswrapper[4861]: I0219 14:39:44.704559 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a33c5e67-d791-4ea2-ae98-0e74eafa506c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 14:39:45 crc kubenswrapper[4861]: I0219 14:39:45.410654 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c6db6cd4c-6z77x" Feb 19 14:39:45 crc kubenswrapper[4861]: I0219 14:39:45.410633 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c6db6cd4c-6z77x" event={"ID":"a33c5e67-d791-4ea2-ae98-0e74eafa506c","Type":"ContainerDied","Data":"e8d5c32646579f68d46685ac7d55b1e225390fa091b59683305e7de745276805"} Feb 19 14:39:45 crc kubenswrapper[4861]: I0219 14:39:45.410807 4861 scope.go:117] "RemoveContainer" containerID="da98c3a6360e56db7a5f04612be4278c9016356ab0bb08356f78d467d226f550" Feb 19 14:39:45 crc kubenswrapper[4861]: I0219 14:39:45.442192 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c6db6cd4c-6z77x"] Feb 19 14:39:45 crc kubenswrapper[4861]: I0219 14:39:45.450296 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c6db6cd4c-6z77x"] Feb 19 14:39:45 crc kubenswrapper[4861]: I0219 14:39:45.455214 4861 scope.go:117] "RemoveContainer" containerID="5952272a08e7057591c61c984ef6b53c92780e0030899338b5949ee52f12e9a7" Feb 19 14:39:45 crc kubenswrapper[4861]: I0219 14:39:45.773484 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-v799p" Feb 19 14:39:45 crc kubenswrapper[4861]: I0219 14:39:45.922650 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2625ca96-572a-45aa-9d89-e04784f50306-config-data\") pod \"2625ca96-572a-45aa-9d89-e04784f50306\" (UID: \"2625ca96-572a-45aa-9d89-e04784f50306\") " Feb 19 14:39:45 crc kubenswrapper[4861]: I0219 14:39:45.922695 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2625ca96-572a-45aa-9d89-e04784f50306-credential-keys\") pod \"2625ca96-572a-45aa-9d89-e04784f50306\" (UID: \"2625ca96-572a-45aa-9d89-e04784f50306\") " Feb 19 14:39:45 crc kubenswrapper[4861]: I0219 14:39:45.922730 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2625ca96-572a-45aa-9d89-e04784f50306-combined-ca-bundle\") pod \"2625ca96-572a-45aa-9d89-e04784f50306\" (UID: \"2625ca96-572a-45aa-9d89-e04784f50306\") " Feb 19 14:39:45 crc kubenswrapper[4861]: I0219 14:39:45.922763 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbbzt\" (UniqueName: \"kubernetes.io/projected/2625ca96-572a-45aa-9d89-e04784f50306-kube-api-access-qbbzt\") pod \"2625ca96-572a-45aa-9d89-e04784f50306\" (UID: \"2625ca96-572a-45aa-9d89-e04784f50306\") " Feb 19 14:39:45 crc kubenswrapper[4861]: I0219 14:39:45.922795 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2625ca96-572a-45aa-9d89-e04784f50306-fernet-keys\") pod \"2625ca96-572a-45aa-9d89-e04784f50306\" (UID: \"2625ca96-572a-45aa-9d89-e04784f50306\") " Feb 19 14:39:45 crc kubenswrapper[4861]: I0219 14:39:45.922822 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/2625ca96-572a-45aa-9d89-e04784f50306-scripts\") pod \"2625ca96-572a-45aa-9d89-e04784f50306\" (UID: \"2625ca96-572a-45aa-9d89-e04784f50306\") " Feb 19 14:39:45 crc kubenswrapper[4861]: I0219 14:39:45.928652 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2625ca96-572a-45aa-9d89-e04784f50306-scripts" (OuterVolumeSpecName: "scripts") pod "2625ca96-572a-45aa-9d89-e04784f50306" (UID: "2625ca96-572a-45aa-9d89-e04784f50306"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:39:45 crc kubenswrapper[4861]: I0219 14:39:45.929518 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2625ca96-572a-45aa-9d89-e04784f50306-kube-api-access-qbbzt" (OuterVolumeSpecName: "kube-api-access-qbbzt") pod "2625ca96-572a-45aa-9d89-e04784f50306" (UID: "2625ca96-572a-45aa-9d89-e04784f50306"). InnerVolumeSpecName "kube-api-access-qbbzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:39:45 crc kubenswrapper[4861]: I0219 14:39:45.929639 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2625ca96-572a-45aa-9d89-e04784f50306-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2625ca96-572a-45aa-9d89-e04784f50306" (UID: "2625ca96-572a-45aa-9d89-e04784f50306"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:39:45 crc kubenswrapper[4861]: I0219 14:39:45.934060 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2625ca96-572a-45aa-9d89-e04784f50306-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "2625ca96-572a-45aa-9d89-e04784f50306" (UID: "2625ca96-572a-45aa-9d89-e04784f50306"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:39:45 crc kubenswrapper[4861]: I0219 14:39:45.950718 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2625ca96-572a-45aa-9d89-e04784f50306-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2625ca96-572a-45aa-9d89-e04784f50306" (UID: "2625ca96-572a-45aa-9d89-e04784f50306"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:39:45 crc kubenswrapper[4861]: I0219 14:39:45.972630 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2625ca96-572a-45aa-9d89-e04784f50306-config-data" (OuterVolumeSpecName: "config-data") pod "2625ca96-572a-45aa-9d89-e04784f50306" (UID: "2625ca96-572a-45aa-9d89-e04784f50306"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:39:45 crc kubenswrapper[4861]: I0219 14:39:45.992153 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a33c5e67-d791-4ea2-ae98-0e74eafa506c" path="/var/lib/kubelet/pods/a33c5e67-d791-4ea2-ae98-0e74eafa506c/volumes" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.024793 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2625ca96-572a-45aa-9d89-e04784f50306-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.024883 4861 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2625ca96-572a-45aa-9d89-e04784f50306-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.024936 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2625ca96-572a-45aa-9d89-e04784f50306-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:39:46 crc 
kubenswrapper[4861]: I0219 14:39:46.024958 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbbzt\" (UniqueName: \"kubernetes.io/projected/2625ca96-572a-45aa-9d89-e04784f50306-kube-api-access-qbbzt\") on node \"crc\" DevicePath \"\"" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.024982 4861 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2625ca96-572a-45aa-9d89-e04784f50306-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.025042 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2625ca96-572a-45aa-9d89-e04784f50306-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.424617 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-v799p" event={"ID":"2625ca96-572a-45aa-9d89-e04784f50306","Type":"ContainerDied","Data":"57a91d3ae60cbae242017ba82edabd443c5fcc1e8dec45d5f3f7304d3f2b438b"} Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.425141 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57a91d3ae60cbae242017ba82edabd443c5fcc1e8dec45d5f3f7304d3f2b438b" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.425362 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-v799p" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.634602 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-795578fb95-dhqzr"] Feb 19 14:39:46 crc kubenswrapper[4861]: E0219 14:39:46.635153 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2625ca96-572a-45aa-9d89-e04784f50306" containerName="keystone-bootstrap" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.635186 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2625ca96-572a-45aa-9d89-e04784f50306" containerName="keystone-bootstrap" Feb 19 14:39:46 crc kubenswrapper[4861]: E0219 14:39:46.635220 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a33c5e67-d791-4ea2-ae98-0e74eafa506c" containerName="dnsmasq-dns" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.635235 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a33c5e67-d791-4ea2-ae98-0e74eafa506c" containerName="dnsmasq-dns" Feb 19 14:39:46 crc kubenswrapper[4861]: E0219 14:39:46.635260 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a33c5e67-d791-4ea2-ae98-0e74eafa506c" containerName="init" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.635273 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a33c5e67-d791-4ea2-ae98-0e74eafa506c" containerName="init" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.635635 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2625ca96-572a-45aa-9d89-e04784f50306" containerName="keystone-bootstrap" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.635681 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a33c5e67-d791-4ea2-ae98-0e74eafa506c" containerName="dnsmasq-dns" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.636674 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-795578fb95-dhqzr" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.639813 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.640485 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.640875 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.641102 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.641201 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.642200 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2krkf" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.656871 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-795578fb95-dhqzr"] Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.738323 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92j4d\" (UniqueName: \"kubernetes.io/projected/b24d56bd-e82b-4921-944e-d77ffda92dbf-kube-api-access-92j4d\") pod \"keystone-795578fb95-dhqzr\" (UID: \"b24d56bd-e82b-4921-944e-d77ffda92dbf\") " pod="openstack/keystone-795578fb95-dhqzr" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.738383 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b24d56bd-e82b-4921-944e-d77ffda92dbf-scripts\") pod \"keystone-795578fb95-dhqzr\" (UID: \"b24d56bd-e82b-4921-944e-d77ffda92dbf\") " 
pod="openstack/keystone-795578fb95-dhqzr" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.738408 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b24d56bd-e82b-4921-944e-d77ffda92dbf-public-tls-certs\") pod \"keystone-795578fb95-dhqzr\" (UID: \"b24d56bd-e82b-4921-944e-d77ffda92dbf\") " pod="openstack/keystone-795578fb95-dhqzr" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.738670 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b24d56bd-e82b-4921-944e-d77ffda92dbf-credential-keys\") pod \"keystone-795578fb95-dhqzr\" (UID: \"b24d56bd-e82b-4921-944e-d77ffda92dbf\") " pod="openstack/keystone-795578fb95-dhqzr" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.738818 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b24d56bd-e82b-4921-944e-d77ffda92dbf-fernet-keys\") pod \"keystone-795578fb95-dhqzr\" (UID: \"b24d56bd-e82b-4921-944e-d77ffda92dbf\") " pod="openstack/keystone-795578fb95-dhqzr" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.738861 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b24d56bd-e82b-4921-944e-d77ffda92dbf-config-data\") pod \"keystone-795578fb95-dhqzr\" (UID: \"b24d56bd-e82b-4921-944e-d77ffda92dbf\") " pod="openstack/keystone-795578fb95-dhqzr" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.738962 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b24d56bd-e82b-4921-944e-d77ffda92dbf-combined-ca-bundle\") pod \"keystone-795578fb95-dhqzr\" (UID: \"b24d56bd-e82b-4921-944e-d77ffda92dbf\") " 
pod="openstack/keystone-795578fb95-dhqzr" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.739128 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b24d56bd-e82b-4921-944e-d77ffda92dbf-internal-tls-certs\") pod \"keystone-795578fb95-dhqzr\" (UID: \"b24d56bd-e82b-4921-944e-d77ffda92dbf\") " pod="openstack/keystone-795578fb95-dhqzr" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.841378 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b24d56bd-e82b-4921-944e-d77ffda92dbf-fernet-keys\") pod \"keystone-795578fb95-dhqzr\" (UID: \"b24d56bd-e82b-4921-944e-d77ffda92dbf\") " pod="openstack/keystone-795578fb95-dhqzr" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.841486 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b24d56bd-e82b-4921-944e-d77ffda92dbf-config-data\") pod \"keystone-795578fb95-dhqzr\" (UID: \"b24d56bd-e82b-4921-944e-d77ffda92dbf\") " pod="openstack/keystone-795578fb95-dhqzr" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.841526 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b24d56bd-e82b-4921-944e-d77ffda92dbf-combined-ca-bundle\") pod \"keystone-795578fb95-dhqzr\" (UID: \"b24d56bd-e82b-4921-944e-d77ffda92dbf\") " pod="openstack/keystone-795578fb95-dhqzr" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.842895 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b24d56bd-e82b-4921-944e-d77ffda92dbf-internal-tls-certs\") pod \"keystone-795578fb95-dhqzr\" (UID: \"b24d56bd-e82b-4921-944e-d77ffda92dbf\") " pod="openstack/keystone-795578fb95-dhqzr" Feb 19 14:39:46 crc 
kubenswrapper[4861]: I0219 14:39:46.843062 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92j4d\" (UniqueName: \"kubernetes.io/projected/b24d56bd-e82b-4921-944e-d77ffda92dbf-kube-api-access-92j4d\") pod \"keystone-795578fb95-dhqzr\" (UID: \"b24d56bd-e82b-4921-944e-d77ffda92dbf\") " pod="openstack/keystone-795578fb95-dhqzr" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.843112 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b24d56bd-e82b-4921-944e-d77ffda92dbf-scripts\") pod \"keystone-795578fb95-dhqzr\" (UID: \"b24d56bd-e82b-4921-944e-d77ffda92dbf\") " pod="openstack/keystone-795578fb95-dhqzr" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.843144 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b24d56bd-e82b-4921-944e-d77ffda92dbf-public-tls-certs\") pod \"keystone-795578fb95-dhqzr\" (UID: \"b24d56bd-e82b-4921-944e-d77ffda92dbf\") " pod="openstack/keystone-795578fb95-dhqzr" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.843224 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b24d56bd-e82b-4921-944e-d77ffda92dbf-credential-keys\") pod \"keystone-795578fb95-dhqzr\" (UID: \"b24d56bd-e82b-4921-944e-d77ffda92dbf\") " pod="openstack/keystone-795578fb95-dhqzr" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.847546 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b24d56bd-e82b-4921-944e-d77ffda92dbf-config-data\") pod \"keystone-795578fb95-dhqzr\" (UID: \"b24d56bd-e82b-4921-944e-d77ffda92dbf\") " pod="openstack/keystone-795578fb95-dhqzr" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.847852 4861 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b24d56bd-e82b-4921-944e-d77ffda92dbf-fernet-keys\") pod \"keystone-795578fb95-dhqzr\" (UID: \"b24d56bd-e82b-4921-944e-d77ffda92dbf\") " pod="openstack/keystone-795578fb95-dhqzr" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.848545 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b24d56bd-e82b-4921-944e-d77ffda92dbf-combined-ca-bundle\") pod \"keystone-795578fb95-dhqzr\" (UID: \"b24d56bd-e82b-4921-944e-d77ffda92dbf\") " pod="openstack/keystone-795578fb95-dhqzr" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.848585 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b24d56bd-e82b-4921-944e-d77ffda92dbf-credential-keys\") pod \"keystone-795578fb95-dhqzr\" (UID: \"b24d56bd-e82b-4921-944e-d77ffda92dbf\") " pod="openstack/keystone-795578fb95-dhqzr" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.848858 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b24d56bd-e82b-4921-944e-d77ffda92dbf-internal-tls-certs\") pod \"keystone-795578fb95-dhqzr\" (UID: \"b24d56bd-e82b-4921-944e-d77ffda92dbf\") " pod="openstack/keystone-795578fb95-dhqzr" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.849828 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b24d56bd-e82b-4921-944e-d77ffda92dbf-scripts\") pod \"keystone-795578fb95-dhqzr\" (UID: \"b24d56bd-e82b-4921-944e-d77ffda92dbf\") " pod="openstack/keystone-795578fb95-dhqzr" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.850624 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b24d56bd-e82b-4921-944e-d77ffda92dbf-public-tls-certs\") pod 
\"keystone-795578fb95-dhqzr\" (UID: \"b24d56bd-e82b-4921-944e-d77ffda92dbf\") " pod="openstack/keystone-795578fb95-dhqzr" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.870809 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92j4d\" (UniqueName: \"kubernetes.io/projected/b24d56bd-e82b-4921-944e-d77ffda92dbf-kube-api-access-92j4d\") pod \"keystone-795578fb95-dhqzr\" (UID: \"b24d56bd-e82b-4921-944e-d77ffda92dbf\") " pod="openstack/keystone-795578fb95-dhqzr" Feb 19 14:39:46 crc kubenswrapper[4861]: I0219 14:39:46.960464 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-795578fb95-dhqzr" Feb 19 14:39:47 crc kubenswrapper[4861]: I0219 14:39:47.478923 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-795578fb95-dhqzr"] Feb 19 14:39:47 crc kubenswrapper[4861]: W0219 14:39:47.482410 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb24d56bd_e82b_4921_944e_d77ffda92dbf.slice/crio-8bc827f6c9c7bdacc95a49a7b0e90087fccace3ede27ad19658608c823c3eb8d WatchSource:0}: Error finding container 8bc827f6c9c7bdacc95a49a7b0e90087fccace3ede27ad19658608c823c3eb8d: Status 404 returned error can't find the container with id 8bc827f6c9c7bdacc95a49a7b0e90087fccace3ede27ad19658608c823c3eb8d Feb 19 14:39:47 crc kubenswrapper[4861]: I0219 14:39:47.977701 4861 scope.go:117] "RemoveContainer" containerID="55684c363f793a2ee4b1d80659dbf54faf32701883e18291ec4e32c294ea8bad" Feb 19 14:39:47 crc kubenswrapper[4861]: E0219 14:39:47.978493 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:39:48 crc kubenswrapper[4861]: I0219 14:39:48.458052 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-795578fb95-dhqzr" event={"ID":"b24d56bd-e82b-4921-944e-d77ffda92dbf","Type":"ContainerStarted","Data":"f9394db328001ea8e567f5e2bbdd5e7aed34132f6237862d2374493e148b6f93"} Feb 19 14:39:48 crc kubenswrapper[4861]: I0219 14:39:48.458120 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-795578fb95-dhqzr" event={"ID":"b24d56bd-e82b-4921-944e-d77ffda92dbf","Type":"ContainerStarted","Data":"8bc827f6c9c7bdacc95a49a7b0e90087fccace3ede27ad19658608c823c3eb8d"} Feb 19 14:39:48 crc kubenswrapper[4861]: I0219 14:39:48.458480 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-795578fb95-dhqzr" Feb 19 14:39:48 crc kubenswrapper[4861]: I0219 14:39:48.484442 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-795578fb95-dhqzr" podStartSLOduration=2.484407404 podStartE2EDuration="2.484407404s" podCreationTimestamp="2026-02-19 14:39:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:39:48.479879323 +0000 UTC m=+5403.140982621" watchObservedRunningTime="2026-02-19 14:39:48.484407404 +0000 UTC m=+5403.145510622" Feb 19 14:40:01 crc kubenswrapper[4861]: I0219 14:40:01.978154 4861 scope.go:117] "RemoveContainer" containerID="55684c363f793a2ee4b1d80659dbf54faf32701883e18291ec4e32c294ea8bad" Feb 19 14:40:01 crc kubenswrapper[4861]: E0219 14:40:01.980463 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:40:12 crc kubenswrapper[4861]: I0219 14:40:12.979760 4861 scope.go:117] "RemoveContainer" containerID="55684c363f793a2ee4b1d80659dbf54faf32701883e18291ec4e32c294ea8bad" Feb 19 14:40:12 crc kubenswrapper[4861]: E0219 14:40:12.980949 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:40:18 crc kubenswrapper[4861]: I0219 14:40:18.374404 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-795578fb95-dhqzr" Feb 19 14:40:22 crc kubenswrapper[4861]: I0219 14:40:22.485804 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 19 14:40:22 crc kubenswrapper[4861]: I0219 14:40:22.487563 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 14:40:22 crc kubenswrapper[4861]: I0219 14:40:22.491750 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 19 14:40:22 crc kubenswrapper[4861]: I0219 14:40:22.495375 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 19 14:40:22 crc kubenswrapper[4861]: I0219 14:40:22.495933 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 14:40:22 crc kubenswrapper[4861]: I0219 14:40:22.496468 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-jd26d" Feb 19 14:40:22 crc kubenswrapper[4861]: I0219 14:40:22.633470 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5a69b964-cce9-4112-86e5-3984e1706034-openstack-config\") pod \"openstackclient\" (UID: \"5a69b964-cce9-4112-86e5-3984e1706034\") " pod="openstack/openstackclient" Feb 19 14:40:22 crc kubenswrapper[4861]: I0219 14:40:22.633546 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbnl6\" (UniqueName: \"kubernetes.io/projected/5a69b964-cce9-4112-86e5-3984e1706034-kube-api-access-xbnl6\") pod \"openstackclient\" (UID: \"5a69b964-cce9-4112-86e5-3984e1706034\") " pod="openstack/openstackclient" Feb 19 14:40:22 crc kubenswrapper[4861]: I0219 14:40:22.633679 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5a69b964-cce9-4112-86e5-3984e1706034-openstack-config-secret\") pod \"openstackclient\" (UID: \"5a69b964-cce9-4112-86e5-3984e1706034\") " pod="openstack/openstackclient" Feb 19 14:40:22 crc kubenswrapper[4861]: I0219 14:40:22.633763 4861 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a69b964-cce9-4112-86e5-3984e1706034-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5a69b964-cce9-4112-86e5-3984e1706034\") " pod="openstack/openstackclient" Feb 19 14:40:22 crc kubenswrapper[4861]: I0219 14:40:22.735633 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5a69b964-cce9-4112-86e5-3984e1706034-openstack-config\") pod \"openstackclient\" (UID: \"5a69b964-cce9-4112-86e5-3984e1706034\") " pod="openstack/openstackclient" Feb 19 14:40:22 crc kubenswrapper[4861]: I0219 14:40:22.735703 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbnl6\" (UniqueName: \"kubernetes.io/projected/5a69b964-cce9-4112-86e5-3984e1706034-kube-api-access-xbnl6\") pod \"openstackclient\" (UID: \"5a69b964-cce9-4112-86e5-3984e1706034\") " pod="openstack/openstackclient" Feb 19 14:40:22 crc kubenswrapper[4861]: I0219 14:40:22.735823 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5a69b964-cce9-4112-86e5-3984e1706034-openstack-config-secret\") pod \"openstackclient\" (UID: \"5a69b964-cce9-4112-86e5-3984e1706034\") " pod="openstack/openstackclient" Feb 19 14:40:22 crc kubenswrapper[4861]: I0219 14:40:22.735905 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a69b964-cce9-4112-86e5-3984e1706034-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5a69b964-cce9-4112-86e5-3984e1706034\") " pod="openstack/openstackclient" Feb 19 14:40:22 crc kubenswrapper[4861]: I0219 14:40:22.737207 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/5a69b964-cce9-4112-86e5-3984e1706034-openstack-config\") pod \"openstackclient\" (UID: \"5a69b964-cce9-4112-86e5-3984e1706034\") " pod="openstack/openstackclient" Feb 19 14:40:22 crc kubenswrapper[4861]: I0219 14:40:22.746098 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5a69b964-cce9-4112-86e5-3984e1706034-openstack-config-secret\") pod \"openstackclient\" (UID: \"5a69b964-cce9-4112-86e5-3984e1706034\") " pod="openstack/openstackclient" Feb 19 14:40:22 crc kubenswrapper[4861]: I0219 14:40:22.746117 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a69b964-cce9-4112-86e5-3984e1706034-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5a69b964-cce9-4112-86e5-3984e1706034\") " pod="openstack/openstackclient" Feb 19 14:40:22 crc kubenswrapper[4861]: I0219 14:40:22.763765 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbnl6\" (UniqueName: \"kubernetes.io/projected/5a69b964-cce9-4112-86e5-3984e1706034-kube-api-access-xbnl6\") pod \"openstackclient\" (UID: \"5a69b964-cce9-4112-86e5-3984e1706034\") " pod="openstack/openstackclient" Feb 19 14:40:22 crc kubenswrapper[4861]: I0219 14:40:22.818169 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 14:40:23 crc kubenswrapper[4861]: I0219 14:40:23.307721 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 14:40:23 crc kubenswrapper[4861]: W0219 14:40:23.316020 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a69b964_cce9_4112_86e5_3984e1706034.slice/crio-3e035c62ff1903cf4bcbd93cd3b43afa030ef39f5631563b1d11b4dc2ac7b783 WatchSource:0}: Error finding container 3e035c62ff1903cf4bcbd93cd3b43afa030ef39f5631563b1d11b4dc2ac7b783: Status 404 returned error can't find the container with id 3e035c62ff1903cf4bcbd93cd3b43afa030ef39f5631563b1d11b4dc2ac7b783 Feb 19 14:40:23 crc kubenswrapper[4861]: I0219 14:40:23.823862 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5a69b964-cce9-4112-86e5-3984e1706034","Type":"ContainerStarted","Data":"c39e3dac87ba6fcb1b70ca634289dd191ac6f2d291a3e187907bfb4fbe752960"} Feb 19 14:40:23 crc kubenswrapper[4861]: I0219 14:40:23.824222 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5a69b964-cce9-4112-86e5-3984e1706034","Type":"ContainerStarted","Data":"3e035c62ff1903cf4bcbd93cd3b43afa030ef39f5631563b1d11b4dc2ac7b783"} Feb 19 14:40:25 crc kubenswrapper[4861]: I0219 14:40:25.986957 4861 scope.go:117] "RemoveContainer" containerID="55684c363f793a2ee4b1d80659dbf54faf32701883e18291ec4e32c294ea8bad" Feb 19 14:40:25 crc kubenswrapper[4861]: E0219 14:40:25.987525 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" 
podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:40:36 crc kubenswrapper[4861]: I0219 14:40:36.977573 4861 scope.go:117] "RemoveContainer" containerID="55684c363f793a2ee4b1d80659dbf54faf32701883e18291ec4e32c294ea8bad" Feb 19 14:40:36 crc kubenswrapper[4861]: E0219 14:40:36.978828 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:40:49 crc kubenswrapper[4861]: I0219 14:40:49.977252 4861 scope.go:117] "RemoveContainer" containerID="55684c363f793a2ee4b1d80659dbf54faf32701883e18291ec4e32c294ea8bad" Feb 19 14:40:49 crc kubenswrapper[4861]: E0219 14:40:49.978642 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:41:03 crc kubenswrapper[4861]: I0219 14:41:03.981668 4861 scope.go:117] "RemoveContainer" containerID="55684c363f793a2ee4b1d80659dbf54faf32701883e18291ec4e32c294ea8bad" Feb 19 14:41:03 crc kubenswrapper[4861]: E0219 14:41:03.982751 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:41:16 crc kubenswrapper[4861]: I0219 14:41:16.977485 4861 scope.go:117] "RemoveContainer" containerID="55684c363f793a2ee4b1d80659dbf54faf32701883e18291ec4e32c294ea8bad" Feb 19 14:41:16 crc kubenswrapper[4861]: E0219 14:41:16.978402 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:41:23 crc kubenswrapper[4861]: I0219 14:41:23.435924 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=61.43590111 podStartE2EDuration="1m1.43590111s" podCreationTimestamp="2026-02-19 14:40:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:40:23.846693294 +0000 UTC m=+5438.507796522" watchObservedRunningTime="2026-02-19 14:41:23.43590111 +0000 UTC m=+5498.097004338" Feb 19 14:41:23 crc kubenswrapper[4861]: I0219 14:41:23.439157 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vrcw7"] Feb 19 14:41:23 crc kubenswrapper[4861]: I0219 14:41:23.441010 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vrcw7" Feb 19 14:41:23 crc kubenswrapper[4861]: I0219 14:41:23.446408 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrcw7"] Feb 19 14:41:23 crc kubenswrapper[4861]: I0219 14:41:23.540395 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf595191-6233-4481-800d-d6a94b0b8a01-catalog-content\") pod \"redhat-marketplace-vrcw7\" (UID: \"cf595191-6233-4481-800d-d6a94b0b8a01\") " pod="openshift-marketplace/redhat-marketplace-vrcw7" Feb 19 14:41:23 crc kubenswrapper[4861]: I0219 14:41:23.540467 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf595191-6233-4481-800d-d6a94b0b8a01-utilities\") pod \"redhat-marketplace-vrcw7\" (UID: \"cf595191-6233-4481-800d-d6a94b0b8a01\") " pod="openshift-marketplace/redhat-marketplace-vrcw7" Feb 19 14:41:23 crc kubenswrapper[4861]: I0219 14:41:23.540611 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlbnr\" (UniqueName: \"kubernetes.io/projected/cf595191-6233-4481-800d-d6a94b0b8a01-kube-api-access-jlbnr\") pod \"redhat-marketplace-vrcw7\" (UID: \"cf595191-6233-4481-800d-d6a94b0b8a01\") " pod="openshift-marketplace/redhat-marketplace-vrcw7" Feb 19 14:41:23 crc kubenswrapper[4861]: I0219 14:41:23.641772 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlbnr\" (UniqueName: \"kubernetes.io/projected/cf595191-6233-4481-800d-d6a94b0b8a01-kube-api-access-jlbnr\") pod \"redhat-marketplace-vrcw7\" (UID: \"cf595191-6233-4481-800d-d6a94b0b8a01\") " pod="openshift-marketplace/redhat-marketplace-vrcw7" Feb 19 14:41:23 crc kubenswrapper[4861]: I0219 14:41:23.641896 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf595191-6233-4481-800d-d6a94b0b8a01-catalog-content\") pod \"redhat-marketplace-vrcw7\" (UID: \"cf595191-6233-4481-800d-d6a94b0b8a01\") " pod="openshift-marketplace/redhat-marketplace-vrcw7" Feb 19 14:41:23 crc kubenswrapper[4861]: I0219 14:41:23.641938 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf595191-6233-4481-800d-d6a94b0b8a01-utilities\") pod \"redhat-marketplace-vrcw7\" (UID: \"cf595191-6233-4481-800d-d6a94b0b8a01\") " pod="openshift-marketplace/redhat-marketplace-vrcw7" Feb 19 14:41:23 crc kubenswrapper[4861]: I0219 14:41:23.642400 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf595191-6233-4481-800d-d6a94b0b8a01-utilities\") pod \"redhat-marketplace-vrcw7\" (UID: \"cf595191-6233-4481-800d-d6a94b0b8a01\") " pod="openshift-marketplace/redhat-marketplace-vrcw7" Feb 19 14:41:23 crc kubenswrapper[4861]: I0219 14:41:23.642584 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf595191-6233-4481-800d-d6a94b0b8a01-catalog-content\") pod \"redhat-marketplace-vrcw7\" (UID: \"cf595191-6233-4481-800d-d6a94b0b8a01\") " pod="openshift-marketplace/redhat-marketplace-vrcw7" Feb 19 14:41:23 crc kubenswrapper[4861]: I0219 14:41:23.664680 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlbnr\" (UniqueName: \"kubernetes.io/projected/cf595191-6233-4481-800d-d6a94b0b8a01-kube-api-access-jlbnr\") pod \"redhat-marketplace-vrcw7\" (UID: \"cf595191-6233-4481-800d-d6a94b0b8a01\") " pod="openshift-marketplace/redhat-marketplace-vrcw7" Feb 19 14:41:23 crc kubenswrapper[4861]: I0219 14:41:23.785384 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vrcw7" Feb 19 14:41:24 crc kubenswrapper[4861]: I0219 14:41:24.245790 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrcw7"] Feb 19 14:41:24 crc kubenswrapper[4861]: I0219 14:41:24.404555 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrcw7" event={"ID":"cf595191-6233-4481-800d-d6a94b0b8a01","Type":"ContainerStarted","Data":"8642bbe46513d0472c84e77515bcf4f87195f4dfac6d0fae15f6743e459f6056"} Feb 19 14:41:25 crc kubenswrapper[4861]: I0219 14:41:25.414519 4861 generic.go:334] "Generic (PLEG): container finished" podID="cf595191-6233-4481-800d-d6a94b0b8a01" containerID="21b9698cfaaef58a4f269bb37ae16b2fd59364004165ae3f6d579f2b70a37f0a" exitCode=0 Feb 19 14:41:25 crc kubenswrapper[4861]: I0219 14:41:25.414559 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrcw7" event={"ID":"cf595191-6233-4481-800d-d6a94b0b8a01","Type":"ContainerDied","Data":"21b9698cfaaef58a4f269bb37ae16b2fd59364004165ae3f6d579f2b70a37f0a"} Feb 19 14:41:25 crc kubenswrapper[4861]: I0219 14:41:25.581476 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rh95x"] Feb 19 14:41:25 crc kubenswrapper[4861]: I0219 14:41:25.586677 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rh95x" Feb 19 14:41:25 crc kubenswrapper[4861]: I0219 14:41:25.595320 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rh95x"] Feb 19 14:41:25 crc kubenswrapper[4861]: I0219 14:41:25.674972 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kzjw\" (UniqueName: \"kubernetes.io/projected/9e15db02-a607-4445-b11c-b84abac43d0d-kube-api-access-8kzjw\") pod \"redhat-operators-rh95x\" (UID: \"9e15db02-a607-4445-b11c-b84abac43d0d\") " pod="openshift-marketplace/redhat-operators-rh95x" Feb 19 14:41:25 crc kubenswrapper[4861]: I0219 14:41:25.675043 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e15db02-a607-4445-b11c-b84abac43d0d-catalog-content\") pod \"redhat-operators-rh95x\" (UID: \"9e15db02-a607-4445-b11c-b84abac43d0d\") " pod="openshift-marketplace/redhat-operators-rh95x" Feb 19 14:41:25 crc kubenswrapper[4861]: I0219 14:41:25.675205 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e15db02-a607-4445-b11c-b84abac43d0d-utilities\") pod \"redhat-operators-rh95x\" (UID: \"9e15db02-a607-4445-b11c-b84abac43d0d\") " pod="openshift-marketplace/redhat-operators-rh95x" Feb 19 14:41:25 crc kubenswrapper[4861]: I0219 14:41:25.777279 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kzjw\" (UniqueName: \"kubernetes.io/projected/9e15db02-a607-4445-b11c-b84abac43d0d-kube-api-access-8kzjw\") pod \"redhat-operators-rh95x\" (UID: \"9e15db02-a607-4445-b11c-b84abac43d0d\") " pod="openshift-marketplace/redhat-operators-rh95x" Feb 19 14:41:25 crc kubenswrapper[4861]: I0219 14:41:25.777352 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e15db02-a607-4445-b11c-b84abac43d0d-catalog-content\") pod \"redhat-operators-rh95x\" (UID: \"9e15db02-a607-4445-b11c-b84abac43d0d\") " pod="openshift-marketplace/redhat-operators-rh95x" Feb 19 14:41:25 crc kubenswrapper[4861]: I0219 14:41:25.777441 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e15db02-a607-4445-b11c-b84abac43d0d-utilities\") pod \"redhat-operators-rh95x\" (UID: \"9e15db02-a607-4445-b11c-b84abac43d0d\") " pod="openshift-marketplace/redhat-operators-rh95x" Feb 19 14:41:25 crc kubenswrapper[4861]: I0219 14:41:25.778056 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e15db02-a607-4445-b11c-b84abac43d0d-utilities\") pod \"redhat-operators-rh95x\" (UID: \"9e15db02-a607-4445-b11c-b84abac43d0d\") " pod="openshift-marketplace/redhat-operators-rh95x" Feb 19 14:41:25 crc kubenswrapper[4861]: I0219 14:41:25.778649 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e15db02-a607-4445-b11c-b84abac43d0d-catalog-content\") pod \"redhat-operators-rh95x\" (UID: \"9e15db02-a607-4445-b11c-b84abac43d0d\") " pod="openshift-marketplace/redhat-operators-rh95x" Feb 19 14:41:25 crc kubenswrapper[4861]: I0219 14:41:25.822512 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kzjw\" (UniqueName: \"kubernetes.io/projected/9e15db02-a607-4445-b11c-b84abac43d0d-kube-api-access-8kzjw\") pod \"redhat-operators-rh95x\" (UID: \"9e15db02-a607-4445-b11c-b84abac43d0d\") " pod="openshift-marketplace/redhat-operators-rh95x" Feb 19 14:41:25 crc kubenswrapper[4861]: I0219 14:41:25.910301 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rh95x" Feb 19 14:41:26 crc kubenswrapper[4861]: I0219 14:41:26.351891 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rh95x"] Feb 19 14:41:26 crc kubenswrapper[4861]: W0219 14:41:26.357765 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e15db02_a607_4445_b11c_b84abac43d0d.slice/crio-0d1f69ddd05322f9baeb62acc99e90e55226c10772fda342212bc60b2b5f80a4 WatchSource:0}: Error finding container 0d1f69ddd05322f9baeb62acc99e90e55226c10772fda342212bc60b2b5f80a4: Status 404 returned error can't find the container with id 0d1f69ddd05322f9baeb62acc99e90e55226c10772fda342212bc60b2b5f80a4 Feb 19 14:41:26 crc kubenswrapper[4861]: I0219 14:41:26.426987 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rh95x" event={"ID":"9e15db02-a607-4445-b11c-b84abac43d0d","Type":"ContainerStarted","Data":"0d1f69ddd05322f9baeb62acc99e90e55226c10772fda342212bc60b2b5f80a4"} Feb 19 14:41:26 crc kubenswrapper[4861]: I0219 14:41:26.429530 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrcw7" event={"ID":"cf595191-6233-4481-800d-d6a94b0b8a01","Type":"ContainerStarted","Data":"b549092eda2fba7c94d8683b960a06354a8d75c384567b91febdacd595b05830"} Feb 19 14:41:27 crc kubenswrapper[4861]: I0219 14:41:27.440689 4861 generic.go:334] "Generic (PLEG): container finished" podID="9e15db02-a607-4445-b11c-b84abac43d0d" containerID="3625d836a3d80c8bb6efe9fabddfd0c741afb6bd6c4e2d347eb991f6a6953515" exitCode=0 Feb 19 14:41:27 crc kubenswrapper[4861]: I0219 14:41:27.440817 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rh95x" 
event={"ID":"9e15db02-a607-4445-b11c-b84abac43d0d","Type":"ContainerDied","Data":"3625d836a3d80c8bb6efe9fabddfd0c741afb6bd6c4e2d347eb991f6a6953515"} Feb 19 14:41:27 crc kubenswrapper[4861]: I0219 14:41:27.446007 4861 generic.go:334] "Generic (PLEG): container finished" podID="cf595191-6233-4481-800d-d6a94b0b8a01" containerID="b549092eda2fba7c94d8683b960a06354a8d75c384567b91febdacd595b05830" exitCode=0 Feb 19 14:41:27 crc kubenswrapper[4861]: I0219 14:41:27.446052 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrcw7" event={"ID":"cf595191-6233-4481-800d-d6a94b0b8a01","Type":"ContainerDied","Data":"b549092eda2fba7c94d8683b960a06354a8d75c384567b91febdacd595b05830"} Feb 19 14:41:28 crc kubenswrapper[4861]: I0219 14:41:28.456722 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrcw7" event={"ID":"cf595191-6233-4481-800d-d6a94b0b8a01","Type":"ContainerStarted","Data":"baeca92d5999f9cc2605fad88ec97a395286af7b30ce687dee2f778147293654"} Feb 19 14:41:28 crc kubenswrapper[4861]: I0219 14:41:28.489835 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vrcw7" podStartSLOduration=3.057041195 podStartE2EDuration="5.4898151s" podCreationTimestamp="2026-02-19 14:41:23 +0000 UTC" firstStartedPulling="2026-02-19 14:41:25.416594792 +0000 UTC m=+5500.077698020" lastFinishedPulling="2026-02-19 14:41:27.849368657 +0000 UTC m=+5502.510471925" observedRunningTime="2026-02-19 14:41:28.481096717 +0000 UTC m=+5503.142199955" watchObservedRunningTime="2026-02-19 14:41:28.4898151 +0000 UTC m=+5503.150918348" Feb 19 14:41:29 crc kubenswrapper[4861]: I0219 14:41:29.469312 4861 generic.go:334] "Generic (PLEG): container finished" podID="9e15db02-a607-4445-b11c-b84abac43d0d" containerID="ddeb1ca45effa4be295c87a374db49e226815ff0af903d83a99b02942b7e3ddc" exitCode=0 Feb 19 14:41:29 crc kubenswrapper[4861]: I0219 14:41:29.469462 
4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rh95x" event={"ID":"9e15db02-a607-4445-b11c-b84abac43d0d","Type":"ContainerDied","Data":"ddeb1ca45effa4be295c87a374db49e226815ff0af903d83a99b02942b7e3ddc"} Feb 19 14:41:30 crc kubenswrapper[4861]: I0219 14:41:30.481967 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rh95x" event={"ID":"9e15db02-a607-4445-b11c-b84abac43d0d","Type":"ContainerStarted","Data":"1f2594582d3999d103a78aa26c5144893c501d07b555f1e463e5bacf415cfab4"} Feb 19 14:41:30 crc kubenswrapper[4861]: I0219 14:41:30.502189 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rh95x" podStartSLOduration=3.080234978 podStartE2EDuration="5.502161932s" podCreationTimestamp="2026-02-19 14:41:25 +0000 UTC" firstStartedPulling="2026-02-19 14:41:27.443733 +0000 UTC m=+5502.104836228" lastFinishedPulling="2026-02-19 14:41:29.865659964 +0000 UTC m=+5504.526763182" observedRunningTime="2026-02-19 14:41:30.498789131 +0000 UTC m=+5505.159892379" watchObservedRunningTime="2026-02-19 14:41:30.502161932 +0000 UTC m=+5505.163265200" Feb 19 14:41:30 crc kubenswrapper[4861]: I0219 14:41:30.977376 4861 scope.go:117] "RemoveContainer" containerID="55684c363f793a2ee4b1d80659dbf54faf32701883e18291ec4e32c294ea8bad" Feb 19 14:41:30 crc kubenswrapper[4861]: E0219 14:41:30.977702 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:41:33 crc kubenswrapper[4861]: I0219 14:41:33.785726 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-marketplace-vrcw7" Feb 19 14:41:33 crc kubenswrapper[4861]: I0219 14:41:33.786263 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vrcw7" Feb 19 14:41:33 crc kubenswrapper[4861]: I0219 14:41:33.854171 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vrcw7" Feb 19 14:41:34 crc kubenswrapper[4861]: I0219 14:41:34.564406 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vrcw7" Feb 19 14:41:35 crc kubenswrapper[4861]: I0219 14:41:35.174909 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrcw7"] Feb 19 14:41:35 crc kubenswrapper[4861]: I0219 14:41:35.911539 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rh95x" Feb 19 14:41:35 crc kubenswrapper[4861]: I0219 14:41:35.912769 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rh95x" Feb 19 14:41:36 crc kubenswrapper[4861]: I0219 14:41:36.538566 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vrcw7" podUID="cf595191-6233-4481-800d-d6a94b0b8a01" containerName="registry-server" containerID="cri-o://baeca92d5999f9cc2605fad88ec97a395286af7b30ce687dee2f778147293654" gracePeriod=2 Feb 19 14:41:36 crc kubenswrapper[4861]: I0219 14:41:36.986121 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rh95x" podUID="9e15db02-a607-4445-b11c-b84abac43d0d" containerName="registry-server" probeResult="failure" output=< Feb 19 14:41:36 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Feb 19 14:41:36 crc kubenswrapper[4861]: > Feb 19 14:41:37 crc 
kubenswrapper[4861]: I0219 14:41:37.550490 4861 generic.go:334] "Generic (PLEG): container finished" podID="cf595191-6233-4481-800d-d6a94b0b8a01" containerID="baeca92d5999f9cc2605fad88ec97a395286af7b30ce687dee2f778147293654" exitCode=0 Feb 19 14:41:37 crc kubenswrapper[4861]: I0219 14:41:37.550550 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrcw7" event={"ID":"cf595191-6233-4481-800d-d6a94b0b8a01","Type":"ContainerDied","Data":"baeca92d5999f9cc2605fad88ec97a395286af7b30ce687dee2f778147293654"} Feb 19 14:41:37 crc kubenswrapper[4861]: I0219 14:41:37.983850 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vrcw7" Feb 19 14:41:38 crc kubenswrapper[4861]: I0219 14:41:38.106968 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlbnr\" (UniqueName: \"kubernetes.io/projected/cf595191-6233-4481-800d-d6a94b0b8a01-kube-api-access-jlbnr\") pod \"cf595191-6233-4481-800d-d6a94b0b8a01\" (UID: \"cf595191-6233-4481-800d-d6a94b0b8a01\") " Feb 19 14:41:38 crc kubenswrapper[4861]: I0219 14:41:38.107202 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf595191-6233-4481-800d-d6a94b0b8a01-catalog-content\") pod \"cf595191-6233-4481-800d-d6a94b0b8a01\" (UID: \"cf595191-6233-4481-800d-d6a94b0b8a01\") " Feb 19 14:41:38 crc kubenswrapper[4861]: I0219 14:41:38.107232 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf595191-6233-4481-800d-d6a94b0b8a01-utilities\") pod \"cf595191-6233-4481-800d-d6a94b0b8a01\" (UID: \"cf595191-6233-4481-800d-d6a94b0b8a01\") " Feb 19 14:41:38 crc kubenswrapper[4861]: I0219 14:41:38.108025 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/cf595191-6233-4481-800d-d6a94b0b8a01-utilities" (OuterVolumeSpecName: "utilities") pod "cf595191-6233-4481-800d-d6a94b0b8a01" (UID: "cf595191-6233-4481-800d-d6a94b0b8a01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:41:38 crc kubenswrapper[4861]: I0219 14:41:38.128406 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf595191-6233-4481-800d-d6a94b0b8a01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf595191-6233-4481-800d-d6a94b0b8a01" (UID: "cf595191-6233-4481-800d-d6a94b0b8a01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:41:38 crc kubenswrapper[4861]: I0219 14:41:38.129152 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf595191-6233-4481-800d-d6a94b0b8a01-kube-api-access-jlbnr" (OuterVolumeSpecName: "kube-api-access-jlbnr") pod "cf595191-6233-4481-800d-d6a94b0b8a01" (UID: "cf595191-6233-4481-800d-d6a94b0b8a01"). InnerVolumeSpecName "kube-api-access-jlbnr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:41:38 crc kubenswrapper[4861]: I0219 14:41:38.208821 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf595191-6233-4481-800d-d6a94b0b8a01-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 14:41:38 crc kubenswrapper[4861]: I0219 14:41:38.209268 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf595191-6233-4481-800d-d6a94b0b8a01-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 14:41:38 crc kubenswrapper[4861]: I0219 14:41:38.209283 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlbnr\" (UniqueName: \"kubernetes.io/projected/cf595191-6233-4481-800d-d6a94b0b8a01-kube-api-access-jlbnr\") on node \"crc\" DevicePath \"\"" Feb 19 14:41:38 crc kubenswrapper[4861]: I0219 14:41:38.565616 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrcw7" event={"ID":"cf595191-6233-4481-800d-d6a94b0b8a01","Type":"ContainerDied","Data":"8642bbe46513d0472c84e77515bcf4f87195f4dfac6d0fae15f6743e459f6056"} Feb 19 14:41:38 crc kubenswrapper[4861]: I0219 14:41:38.565711 4861 scope.go:117] "RemoveContainer" containerID="baeca92d5999f9cc2605fad88ec97a395286af7b30ce687dee2f778147293654" Feb 19 14:41:38 crc kubenswrapper[4861]: I0219 14:41:38.565714 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vrcw7" Feb 19 14:41:38 crc kubenswrapper[4861]: I0219 14:41:38.600410 4861 scope.go:117] "RemoveContainer" containerID="b549092eda2fba7c94d8683b960a06354a8d75c384567b91febdacd595b05830" Feb 19 14:41:38 crc kubenswrapper[4861]: I0219 14:41:38.633750 4861 scope.go:117] "RemoveContainer" containerID="21b9698cfaaef58a4f269bb37ae16b2fd59364004165ae3f6d579f2b70a37f0a" Feb 19 14:41:38 crc kubenswrapper[4861]: I0219 14:41:38.639588 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrcw7"] Feb 19 14:41:38 crc kubenswrapper[4861]: I0219 14:41:38.655611 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrcw7"] Feb 19 14:41:39 crc kubenswrapper[4861]: I0219 14:41:39.997663 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf595191-6233-4481-800d-d6a94b0b8a01" path="/var/lib/kubelet/pods/cf595191-6233-4481-800d-d6a94b0b8a01/volumes" Feb 19 14:41:44 crc kubenswrapper[4861]: I0219 14:41:44.978148 4861 scope.go:117] "RemoveContainer" containerID="55684c363f793a2ee4b1d80659dbf54faf32701883e18291ec4e32c294ea8bad" Feb 19 14:41:44 crc kubenswrapper[4861]: E0219 14:41:44.980084 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:41:45 crc kubenswrapper[4861]: I0219 14:41:45.973315 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rh95x" Feb 19 14:41:46 crc kubenswrapper[4861]: I0219 14:41:46.056614 4861 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rh95x" Feb 19 14:41:46 crc kubenswrapper[4861]: I0219 14:41:46.223317 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rh95x"] Feb 19 14:41:47 crc kubenswrapper[4861]: I0219 14:41:47.659123 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rh95x" podUID="9e15db02-a607-4445-b11c-b84abac43d0d" containerName="registry-server" containerID="cri-o://1f2594582d3999d103a78aa26c5144893c501d07b555f1e463e5bacf415cfab4" gracePeriod=2 Feb 19 14:41:47 crc kubenswrapper[4861]: E0219 14:41:47.813593 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e15db02_a607_4445_b11c_b84abac43d0d.slice/crio-conmon-1f2594582d3999d103a78aa26c5144893c501d07b555f1e463e5bacf415cfab4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e15db02_a607_4445_b11c_b84abac43d0d.slice/crio-1f2594582d3999d103a78aa26c5144893c501d07b555f1e463e5bacf415cfab4.scope\": RecentStats: unable to find data in memory cache]" Feb 19 14:41:48 crc kubenswrapper[4861]: I0219 14:41:48.147119 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rh95x" Feb 19 14:41:48 crc kubenswrapper[4861]: I0219 14:41:48.304192 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e15db02-a607-4445-b11c-b84abac43d0d-catalog-content\") pod \"9e15db02-a607-4445-b11c-b84abac43d0d\" (UID: \"9e15db02-a607-4445-b11c-b84abac43d0d\") " Feb 19 14:41:48 crc kubenswrapper[4861]: I0219 14:41:48.304276 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e15db02-a607-4445-b11c-b84abac43d0d-utilities\") pod \"9e15db02-a607-4445-b11c-b84abac43d0d\" (UID: \"9e15db02-a607-4445-b11c-b84abac43d0d\") " Feb 19 14:41:48 crc kubenswrapper[4861]: I0219 14:41:48.304414 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kzjw\" (UniqueName: \"kubernetes.io/projected/9e15db02-a607-4445-b11c-b84abac43d0d-kube-api-access-8kzjw\") pod \"9e15db02-a607-4445-b11c-b84abac43d0d\" (UID: \"9e15db02-a607-4445-b11c-b84abac43d0d\") " Feb 19 14:41:48 crc kubenswrapper[4861]: I0219 14:41:48.307026 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e15db02-a607-4445-b11c-b84abac43d0d-utilities" (OuterVolumeSpecName: "utilities") pod "9e15db02-a607-4445-b11c-b84abac43d0d" (UID: "9e15db02-a607-4445-b11c-b84abac43d0d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:41:48 crc kubenswrapper[4861]: I0219 14:41:48.320196 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e15db02-a607-4445-b11c-b84abac43d0d-kube-api-access-8kzjw" (OuterVolumeSpecName: "kube-api-access-8kzjw") pod "9e15db02-a607-4445-b11c-b84abac43d0d" (UID: "9e15db02-a607-4445-b11c-b84abac43d0d"). InnerVolumeSpecName "kube-api-access-8kzjw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:41:48 crc kubenswrapper[4861]: I0219 14:41:48.407699 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kzjw\" (UniqueName: \"kubernetes.io/projected/9e15db02-a607-4445-b11c-b84abac43d0d-kube-api-access-8kzjw\") on node \"crc\" DevicePath \"\"" Feb 19 14:41:48 crc kubenswrapper[4861]: I0219 14:41:48.407755 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e15db02-a607-4445-b11c-b84abac43d0d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 14:41:48 crc kubenswrapper[4861]: I0219 14:41:48.484874 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e15db02-a607-4445-b11c-b84abac43d0d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e15db02-a607-4445-b11c-b84abac43d0d" (UID: "9e15db02-a607-4445-b11c-b84abac43d0d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:41:48 crc kubenswrapper[4861]: I0219 14:41:48.509733 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e15db02-a607-4445-b11c-b84abac43d0d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 14:41:48 crc kubenswrapper[4861]: I0219 14:41:48.673896 4861 generic.go:334] "Generic (PLEG): container finished" podID="9e15db02-a607-4445-b11c-b84abac43d0d" containerID="1f2594582d3999d103a78aa26c5144893c501d07b555f1e463e5bacf415cfab4" exitCode=0 Feb 19 14:41:48 crc kubenswrapper[4861]: I0219 14:41:48.673984 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rh95x" event={"ID":"9e15db02-a607-4445-b11c-b84abac43d0d","Type":"ContainerDied","Data":"1f2594582d3999d103a78aa26c5144893c501d07b555f1e463e5bacf415cfab4"} Feb 19 14:41:48 crc kubenswrapper[4861]: I0219 14:41:48.674032 4861 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-rh95x" event={"ID":"9e15db02-a607-4445-b11c-b84abac43d0d","Type":"ContainerDied","Data":"0d1f69ddd05322f9baeb62acc99e90e55226c10772fda342212bc60b2b5f80a4"} Feb 19 14:41:48 crc kubenswrapper[4861]: I0219 14:41:48.674061 4861 scope.go:117] "RemoveContainer" containerID="1f2594582d3999d103a78aa26c5144893c501d07b555f1e463e5bacf415cfab4" Feb 19 14:41:48 crc kubenswrapper[4861]: I0219 14:41:48.674244 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rh95x" Feb 19 14:41:48 crc kubenswrapper[4861]: I0219 14:41:48.695392 4861 scope.go:117] "RemoveContainer" containerID="ddeb1ca45effa4be295c87a374db49e226815ff0af903d83a99b02942b7e3ddc" Feb 19 14:41:48 crc kubenswrapper[4861]: I0219 14:41:48.719627 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rh95x"] Feb 19 14:41:48 crc kubenswrapper[4861]: I0219 14:41:48.731905 4861 scope.go:117] "RemoveContainer" containerID="3625d836a3d80c8bb6efe9fabddfd0c741afb6bd6c4e2d347eb991f6a6953515" Feb 19 14:41:48 crc kubenswrapper[4861]: I0219 14:41:48.736489 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rh95x"] Feb 19 14:41:48 crc kubenswrapper[4861]: I0219 14:41:48.773606 4861 scope.go:117] "RemoveContainer" containerID="1f2594582d3999d103a78aa26c5144893c501d07b555f1e463e5bacf415cfab4" Feb 19 14:41:48 crc kubenswrapper[4861]: E0219 14:41:48.777456 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f2594582d3999d103a78aa26c5144893c501d07b555f1e463e5bacf415cfab4\": container with ID starting with 1f2594582d3999d103a78aa26c5144893c501d07b555f1e463e5bacf415cfab4 not found: ID does not exist" containerID="1f2594582d3999d103a78aa26c5144893c501d07b555f1e463e5bacf415cfab4" Feb 19 14:41:48 crc kubenswrapper[4861]: I0219 14:41:48.777497 4861 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f2594582d3999d103a78aa26c5144893c501d07b555f1e463e5bacf415cfab4"} err="failed to get container status \"1f2594582d3999d103a78aa26c5144893c501d07b555f1e463e5bacf415cfab4\": rpc error: code = NotFound desc = could not find container \"1f2594582d3999d103a78aa26c5144893c501d07b555f1e463e5bacf415cfab4\": container with ID starting with 1f2594582d3999d103a78aa26c5144893c501d07b555f1e463e5bacf415cfab4 not found: ID does not exist" Feb 19 14:41:48 crc kubenswrapper[4861]: I0219 14:41:48.777526 4861 scope.go:117] "RemoveContainer" containerID="ddeb1ca45effa4be295c87a374db49e226815ff0af903d83a99b02942b7e3ddc" Feb 19 14:41:48 crc kubenswrapper[4861]: E0219 14:41:48.777939 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddeb1ca45effa4be295c87a374db49e226815ff0af903d83a99b02942b7e3ddc\": container with ID starting with ddeb1ca45effa4be295c87a374db49e226815ff0af903d83a99b02942b7e3ddc not found: ID does not exist" containerID="ddeb1ca45effa4be295c87a374db49e226815ff0af903d83a99b02942b7e3ddc" Feb 19 14:41:48 crc kubenswrapper[4861]: I0219 14:41:48.777962 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddeb1ca45effa4be295c87a374db49e226815ff0af903d83a99b02942b7e3ddc"} err="failed to get container status \"ddeb1ca45effa4be295c87a374db49e226815ff0af903d83a99b02942b7e3ddc\": rpc error: code = NotFound desc = could not find container \"ddeb1ca45effa4be295c87a374db49e226815ff0af903d83a99b02942b7e3ddc\": container with ID starting with ddeb1ca45effa4be295c87a374db49e226815ff0af903d83a99b02942b7e3ddc not found: ID does not exist" Feb 19 14:41:48 crc kubenswrapper[4861]: I0219 14:41:48.777975 4861 scope.go:117] "RemoveContainer" containerID="3625d836a3d80c8bb6efe9fabddfd0c741afb6bd6c4e2d347eb991f6a6953515" Feb 19 14:41:48 crc kubenswrapper[4861]: E0219 
14:41:48.778271 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3625d836a3d80c8bb6efe9fabddfd0c741afb6bd6c4e2d347eb991f6a6953515\": container with ID starting with 3625d836a3d80c8bb6efe9fabddfd0c741afb6bd6c4e2d347eb991f6a6953515 not found: ID does not exist" containerID="3625d836a3d80c8bb6efe9fabddfd0c741afb6bd6c4e2d347eb991f6a6953515" Feb 19 14:41:48 crc kubenswrapper[4861]: I0219 14:41:48.778292 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3625d836a3d80c8bb6efe9fabddfd0c741afb6bd6c4e2d347eb991f6a6953515"} err="failed to get container status \"3625d836a3d80c8bb6efe9fabddfd0c741afb6bd6c4e2d347eb991f6a6953515\": rpc error: code = NotFound desc = could not find container \"3625d836a3d80c8bb6efe9fabddfd0c741afb6bd6c4e2d347eb991f6a6953515\": container with ID starting with 3625d836a3d80c8bb6efe9fabddfd0c741afb6bd6c4e2d347eb991f6a6953515 not found: ID does not exist" Feb 19 14:41:49 crc kubenswrapper[4861]: I0219 14:41:49.997797 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e15db02-a607-4445-b11c-b84abac43d0d" path="/var/lib/kubelet/pods/9e15db02-a607-4445-b11c-b84abac43d0d/volumes" Feb 19 14:41:56 crc kubenswrapper[4861]: I0219 14:41:56.978205 4861 scope.go:117] "RemoveContainer" containerID="55684c363f793a2ee4b1d80659dbf54faf32701883e18291ec4e32c294ea8bad" Feb 19 14:41:56 crc kubenswrapper[4861]: E0219 14:41:56.978928 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:42:04 crc kubenswrapper[4861]: I0219 14:42:04.950264 
4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-qm9t5"] Feb 19 14:42:04 crc kubenswrapper[4861]: E0219 14:42:04.951253 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf595191-6233-4481-800d-d6a94b0b8a01" containerName="registry-server" Feb 19 14:42:04 crc kubenswrapper[4861]: I0219 14:42:04.951273 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf595191-6233-4481-800d-d6a94b0b8a01" containerName="registry-server" Feb 19 14:42:04 crc kubenswrapper[4861]: E0219 14:42:04.951293 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf595191-6233-4481-800d-d6a94b0b8a01" containerName="extract-content" Feb 19 14:42:04 crc kubenswrapper[4861]: I0219 14:42:04.951303 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf595191-6233-4481-800d-d6a94b0b8a01" containerName="extract-content" Feb 19 14:42:04 crc kubenswrapper[4861]: E0219 14:42:04.951331 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e15db02-a607-4445-b11c-b84abac43d0d" containerName="extract-utilities" Feb 19 14:42:04 crc kubenswrapper[4861]: I0219 14:42:04.951342 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e15db02-a607-4445-b11c-b84abac43d0d" containerName="extract-utilities" Feb 19 14:42:04 crc kubenswrapper[4861]: E0219 14:42:04.951366 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf595191-6233-4481-800d-d6a94b0b8a01" containerName="extract-utilities" Feb 19 14:42:04 crc kubenswrapper[4861]: I0219 14:42:04.951376 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf595191-6233-4481-800d-d6a94b0b8a01" containerName="extract-utilities" Feb 19 14:42:04 crc kubenswrapper[4861]: E0219 14:42:04.951402 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e15db02-a607-4445-b11c-b84abac43d0d" containerName="extract-content" Feb 19 14:42:04 crc kubenswrapper[4861]: I0219 14:42:04.951413 4861 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="9e15db02-a607-4445-b11c-b84abac43d0d" containerName="extract-content" Feb 19 14:42:04 crc kubenswrapper[4861]: E0219 14:42:04.951455 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e15db02-a607-4445-b11c-b84abac43d0d" containerName="registry-server" Feb 19 14:42:04 crc kubenswrapper[4861]: I0219 14:42:04.951465 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e15db02-a607-4445-b11c-b84abac43d0d" containerName="registry-server" Feb 19 14:42:04 crc kubenswrapper[4861]: I0219 14:42:04.951700 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf595191-6233-4481-800d-d6a94b0b8a01" containerName="registry-server" Feb 19 14:42:04 crc kubenswrapper[4861]: I0219 14:42:04.951729 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e15db02-a607-4445-b11c-b84abac43d0d" containerName="registry-server" Feb 19 14:42:04 crc kubenswrapper[4861]: I0219 14:42:04.952590 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-qm9t5" Feb 19 14:42:04 crc kubenswrapper[4861]: I0219 14:42:04.964824 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-qm9t5"] Feb 19 14:42:05 crc kubenswrapper[4861]: I0219 14:42:05.036691 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-57b5-account-create-update-rg69w"] Feb 19 14:42:05 crc kubenswrapper[4861]: I0219 14:42:05.037903 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-57b5-account-create-update-rg69w" Feb 19 14:42:05 crc kubenswrapper[4861]: I0219 14:42:05.040252 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 19 14:42:05 crc kubenswrapper[4861]: I0219 14:42:05.045514 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-57b5-account-create-update-rg69w"] Feb 19 14:42:05 crc kubenswrapper[4861]: I0219 14:42:05.138080 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzhlm\" (UniqueName: \"kubernetes.io/projected/4e02f475-200d-492f-8ca6-b8848de71272-kube-api-access-bzhlm\") pod \"barbican-db-create-qm9t5\" (UID: \"4e02f475-200d-492f-8ca6-b8848de71272\") " pod="openstack/barbican-db-create-qm9t5" Feb 19 14:42:05 crc kubenswrapper[4861]: I0219 14:42:05.138833 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e02f475-200d-492f-8ca6-b8848de71272-operator-scripts\") pod \"barbican-db-create-qm9t5\" (UID: \"4e02f475-200d-492f-8ca6-b8848de71272\") " pod="openstack/barbican-db-create-qm9t5" Feb 19 14:42:05 crc kubenswrapper[4861]: I0219 14:42:05.240581 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzhlm\" (UniqueName: \"kubernetes.io/projected/4e02f475-200d-492f-8ca6-b8848de71272-kube-api-access-bzhlm\") pod \"barbican-db-create-qm9t5\" (UID: \"4e02f475-200d-492f-8ca6-b8848de71272\") " pod="openstack/barbican-db-create-qm9t5" Feb 19 14:42:05 crc kubenswrapper[4861]: I0219 14:42:05.240829 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e02f475-200d-492f-8ca6-b8848de71272-operator-scripts\") pod \"barbican-db-create-qm9t5\" (UID: \"4e02f475-200d-492f-8ca6-b8848de71272\") " 
pod="openstack/barbican-db-create-qm9t5" Feb 19 14:42:05 crc kubenswrapper[4861]: I0219 14:42:05.240943 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3dabfb1-c198-4e76-a1d4-fe8f895e48bf-operator-scripts\") pod \"barbican-57b5-account-create-update-rg69w\" (UID: \"d3dabfb1-c198-4e76-a1d4-fe8f895e48bf\") " pod="openstack/barbican-57b5-account-create-update-rg69w" Feb 19 14:42:05 crc kubenswrapper[4861]: I0219 14:42:05.241017 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdttt\" (UniqueName: \"kubernetes.io/projected/d3dabfb1-c198-4e76-a1d4-fe8f895e48bf-kube-api-access-fdttt\") pod \"barbican-57b5-account-create-update-rg69w\" (UID: \"d3dabfb1-c198-4e76-a1d4-fe8f895e48bf\") " pod="openstack/barbican-57b5-account-create-update-rg69w" Feb 19 14:42:05 crc kubenswrapper[4861]: I0219 14:42:05.243110 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e02f475-200d-492f-8ca6-b8848de71272-operator-scripts\") pod \"barbican-db-create-qm9t5\" (UID: \"4e02f475-200d-492f-8ca6-b8848de71272\") " pod="openstack/barbican-db-create-qm9t5" Feb 19 14:42:05 crc kubenswrapper[4861]: I0219 14:42:05.275273 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzhlm\" (UniqueName: \"kubernetes.io/projected/4e02f475-200d-492f-8ca6-b8848de71272-kube-api-access-bzhlm\") pod \"barbican-db-create-qm9t5\" (UID: \"4e02f475-200d-492f-8ca6-b8848de71272\") " pod="openstack/barbican-db-create-qm9t5" Feb 19 14:42:05 crc kubenswrapper[4861]: I0219 14:42:05.283927 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-qm9t5" Feb 19 14:42:05 crc kubenswrapper[4861]: I0219 14:42:05.342637 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3dabfb1-c198-4e76-a1d4-fe8f895e48bf-operator-scripts\") pod \"barbican-57b5-account-create-update-rg69w\" (UID: \"d3dabfb1-c198-4e76-a1d4-fe8f895e48bf\") " pod="openstack/barbican-57b5-account-create-update-rg69w" Feb 19 14:42:05 crc kubenswrapper[4861]: I0219 14:42:05.342674 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdttt\" (UniqueName: \"kubernetes.io/projected/d3dabfb1-c198-4e76-a1d4-fe8f895e48bf-kube-api-access-fdttt\") pod \"barbican-57b5-account-create-update-rg69w\" (UID: \"d3dabfb1-c198-4e76-a1d4-fe8f895e48bf\") " pod="openstack/barbican-57b5-account-create-update-rg69w" Feb 19 14:42:05 crc kubenswrapper[4861]: I0219 14:42:05.343684 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3dabfb1-c198-4e76-a1d4-fe8f895e48bf-operator-scripts\") pod \"barbican-57b5-account-create-update-rg69w\" (UID: \"d3dabfb1-c198-4e76-a1d4-fe8f895e48bf\") " pod="openstack/barbican-57b5-account-create-update-rg69w" Feb 19 14:42:05 crc kubenswrapper[4861]: I0219 14:42:05.360978 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdttt\" (UniqueName: \"kubernetes.io/projected/d3dabfb1-c198-4e76-a1d4-fe8f895e48bf-kube-api-access-fdttt\") pod \"barbican-57b5-account-create-update-rg69w\" (UID: \"d3dabfb1-c198-4e76-a1d4-fe8f895e48bf\") " pod="openstack/barbican-57b5-account-create-update-rg69w" Feb 19 14:42:05 crc kubenswrapper[4861]: I0219 14:42:05.367880 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-57b5-account-create-update-rg69w" Feb 19 14:42:05 crc kubenswrapper[4861]: I0219 14:42:05.771158 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-qm9t5"] Feb 19 14:42:05 crc kubenswrapper[4861]: I0219 14:42:05.851873 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-qm9t5" event={"ID":"4e02f475-200d-492f-8ca6-b8848de71272","Type":"ContainerStarted","Data":"138a6c27cc464ad8a865ab27ba76f0c9b13e51ed281fe56019cd3ad5c8065d8c"} Feb 19 14:42:05 crc kubenswrapper[4861]: I0219 14:42:05.854699 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-57b5-account-create-update-rg69w"] Feb 19 14:42:05 crc kubenswrapper[4861]: W0219 14:42:05.875580 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3dabfb1_c198_4e76_a1d4_fe8f895e48bf.slice/crio-d917d1f24ff83f6cf8916d29f659220be38e585eb5f0846356ee08a0bd375dfd WatchSource:0}: Error finding container d917d1f24ff83f6cf8916d29f659220be38e585eb5f0846356ee08a0bd375dfd: Status 404 returned error can't find the container with id d917d1f24ff83f6cf8916d29f659220be38e585eb5f0846356ee08a0bd375dfd Feb 19 14:42:06 crc kubenswrapper[4861]: I0219 14:42:06.873492 4861 generic.go:334] "Generic (PLEG): container finished" podID="4e02f475-200d-492f-8ca6-b8848de71272" containerID="2a7c8d29dd409300d78efabce9b9a3aac8ae88905d33d01cc032e2eb1b0f3998" exitCode=0 Feb 19 14:42:06 crc kubenswrapper[4861]: I0219 14:42:06.875186 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-qm9t5" event={"ID":"4e02f475-200d-492f-8ca6-b8848de71272","Type":"ContainerDied","Data":"2a7c8d29dd409300d78efabce9b9a3aac8ae88905d33d01cc032e2eb1b0f3998"} Feb 19 14:42:06 crc kubenswrapper[4861]: I0219 14:42:06.878565 4861 generic.go:334] "Generic (PLEG): container finished" 
podID="d3dabfb1-c198-4e76-a1d4-fe8f895e48bf" containerID="f2abd28c71a3dde201ea408a183b5d29c7ce3cff3a661bae175bb6817eddb0be" exitCode=0 Feb 19 14:42:06 crc kubenswrapper[4861]: I0219 14:42:06.878634 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-57b5-account-create-update-rg69w" event={"ID":"d3dabfb1-c198-4e76-a1d4-fe8f895e48bf","Type":"ContainerDied","Data":"f2abd28c71a3dde201ea408a183b5d29c7ce3cff3a661bae175bb6817eddb0be"} Feb 19 14:42:06 crc kubenswrapper[4861]: I0219 14:42:06.878673 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-57b5-account-create-update-rg69w" event={"ID":"d3dabfb1-c198-4e76-a1d4-fe8f895e48bf","Type":"ContainerStarted","Data":"d917d1f24ff83f6cf8916d29f659220be38e585eb5f0846356ee08a0bd375dfd"} Feb 19 14:42:08 crc kubenswrapper[4861]: I0219 14:42:08.326951 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-qm9t5" Feb 19 14:42:08 crc kubenswrapper[4861]: I0219 14:42:08.334985 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-57b5-account-create-update-rg69w" Feb 19 14:42:08 crc kubenswrapper[4861]: I0219 14:42:08.499753 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzhlm\" (UniqueName: \"kubernetes.io/projected/4e02f475-200d-492f-8ca6-b8848de71272-kube-api-access-bzhlm\") pod \"4e02f475-200d-492f-8ca6-b8848de71272\" (UID: \"4e02f475-200d-492f-8ca6-b8848de71272\") " Feb 19 14:42:08 crc kubenswrapper[4861]: I0219 14:42:08.500110 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdttt\" (UniqueName: \"kubernetes.io/projected/d3dabfb1-c198-4e76-a1d4-fe8f895e48bf-kube-api-access-fdttt\") pod \"d3dabfb1-c198-4e76-a1d4-fe8f895e48bf\" (UID: \"d3dabfb1-c198-4e76-a1d4-fe8f895e48bf\") " Feb 19 14:42:08 crc kubenswrapper[4861]: I0219 14:42:08.500490 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3dabfb1-c198-4e76-a1d4-fe8f895e48bf-operator-scripts\") pod \"d3dabfb1-c198-4e76-a1d4-fe8f895e48bf\" (UID: \"d3dabfb1-c198-4e76-a1d4-fe8f895e48bf\") " Feb 19 14:42:08 crc kubenswrapper[4861]: I0219 14:42:08.500633 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e02f475-200d-492f-8ca6-b8848de71272-operator-scripts\") pod \"4e02f475-200d-492f-8ca6-b8848de71272\" (UID: \"4e02f475-200d-492f-8ca6-b8848de71272\") " Feb 19 14:42:08 crc kubenswrapper[4861]: I0219 14:42:08.501291 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3dabfb1-c198-4e76-a1d4-fe8f895e48bf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d3dabfb1-c198-4e76-a1d4-fe8f895e48bf" (UID: "d3dabfb1-c198-4e76-a1d4-fe8f895e48bf"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:42:08 crc kubenswrapper[4861]: I0219 14:42:08.501333 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e02f475-200d-492f-8ca6-b8848de71272-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4e02f475-200d-492f-8ca6-b8848de71272" (UID: "4e02f475-200d-492f-8ca6-b8848de71272"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:42:08 crc kubenswrapper[4861]: I0219 14:42:08.508562 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e02f475-200d-492f-8ca6-b8848de71272-kube-api-access-bzhlm" (OuterVolumeSpecName: "kube-api-access-bzhlm") pod "4e02f475-200d-492f-8ca6-b8848de71272" (UID: "4e02f475-200d-492f-8ca6-b8848de71272"). InnerVolumeSpecName "kube-api-access-bzhlm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:42:08 crc kubenswrapper[4861]: I0219 14:42:08.518720 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3dabfb1-c198-4e76-a1d4-fe8f895e48bf-kube-api-access-fdttt" (OuterVolumeSpecName: "kube-api-access-fdttt") pod "d3dabfb1-c198-4e76-a1d4-fe8f895e48bf" (UID: "d3dabfb1-c198-4e76-a1d4-fe8f895e48bf"). InnerVolumeSpecName "kube-api-access-fdttt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:42:08 crc kubenswrapper[4861]: I0219 14:42:08.602485 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdttt\" (UniqueName: \"kubernetes.io/projected/d3dabfb1-c198-4e76-a1d4-fe8f895e48bf-kube-api-access-fdttt\") on node \"crc\" DevicePath \"\"" Feb 19 14:42:08 crc kubenswrapper[4861]: I0219 14:42:08.602534 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3dabfb1-c198-4e76-a1d4-fe8f895e48bf-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:42:08 crc kubenswrapper[4861]: I0219 14:42:08.602553 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e02f475-200d-492f-8ca6-b8848de71272-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:42:08 crc kubenswrapper[4861]: I0219 14:42:08.602577 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzhlm\" (UniqueName: \"kubernetes.io/projected/4e02f475-200d-492f-8ca6-b8848de71272-kube-api-access-bzhlm\") on node \"crc\" DevicePath \"\"" Feb 19 14:42:08 crc kubenswrapper[4861]: I0219 14:42:08.901911 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-57b5-account-create-update-rg69w" event={"ID":"d3dabfb1-c198-4e76-a1d4-fe8f895e48bf","Type":"ContainerDied","Data":"d917d1f24ff83f6cf8916d29f659220be38e585eb5f0846356ee08a0bd375dfd"} Feb 19 14:42:08 crc kubenswrapper[4861]: I0219 14:42:08.902243 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d917d1f24ff83f6cf8916d29f659220be38e585eb5f0846356ee08a0bd375dfd" Feb 19 14:42:08 crc kubenswrapper[4861]: I0219 14:42:08.901961 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-57b5-account-create-update-rg69w" Feb 19 14:42:08 crc kubenswrapper[4861]: I0219 14:42:08.904414 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-qm9t5" event={"ID":"4e02f475-200d-492f-8ca6-b8848de71272","Type":"ContainerDied","Data":"138a6c27cc464ad8a865ab27ba76f0c9b13e51ed281fe56019cd3ad5c8065d8c"} Feb 19 14:42:08 crc kubenswrapper[4861]: I0219 14:42:08.904535 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="138a6c27cc464ad8a865ab27ba76f0c9b13e51ed281fe56019cd3ad5c8065d8c" Feb 19 14:42:08 crc kubenswrapper[4861]: I0219 14:42:08.904786 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-qm9t5" Feb 19 14:42:09 crc kubenswrapper[4861]: I0219 14:42:09.977552 4861 scope.go:117] "RemoveContainer" containerID="55684c363f793a2ee4b1d80659dbf54faf32701883e18291ec4e32c294ea8bad" Feb 19 14:42:09 crc kubenswrapper[4861]: E0219 14:42:09.978242 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:42:10 crc kubenswrapper[4861]: I0219 14:42:10.345818 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-n6pq7"] Feb 19 14:42:10 crc kubenswrapper[4861]: E0219 14:42:10.348126 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e02f475-200d-492f-8ca6-b8848de71272" containerName="mariadb-database-create" Feb 19 14:42:10 crc kubenswrapper[4861]: I0219 14:42:10.348234 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e02f475-200d-492f-8ca6-b8848de71272" 
containerName="mariadb-database-create" Feb 19 14:42:10 crc kubenswrapper[4861]: E0219 14:42:10.348375 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3dabfb1-c198-4e76-a1d4-fe8f895e48bf" containerName="mariadb-account-create-update" Feb 19 14:42:10 crc kubenswrapper[4861]: I0219 14:42:10.348499 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3dabfb1-c198-4e76-a1d4-fe8f895e48bf" containerName="mariadb-account-create-update" Feb 19 14:42:10 crc kubenswrapper[4861]: I0219 14:42:10.348951 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3dabfb1-c198-4e76-a1d4-fe8f895e48bf" containerName="mariadb-account-create-update" Feb 19 14:42:10 crc kubenswrapper[4861]: I0219 14:42:10.349005 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e02f475-200d-492f-8ca6-b8848de71272" containerName="mariadb-database-create" Feb 19 14:42:10 crc kubenswrapper[4861]: I0219 14:42:10.350248 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-n6pq7" Feb 19 14:42:10 crc kubenswrapper[4861]: I0219 14:42:10.353296 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 19 14:42:10 crc kubenswrapper[4861]: I0219 14:42:10.357906 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-hqdwr" Feb 19 14:42:10 crc kubenswrapper[4861]: I0219 14:42:10.361873 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-n6pq7"] Feb 19 14:42:10 crc kubenswrapper[4861]: I0219 14:42:10.537839 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f-combined-ca-bundle\") pod \"barbican-db-sync-n6pq7\" (UID: \"4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f\") " pod="openstack/barbican-db-sync-n6pq7" Feb 19 14:42:10 crc kubenswrapper[4861]: I0219 14:42:10.538252 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prw7h\" (UniqueName: \"kubernetes.io/projected/4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f-kube-api-access-prw7h\") pod \"barbican-db-sync-n6pq7\" (UID: \"4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f\") " pod="openstack/barbican-db-sync-n6pq7" Feb 19 14:42:10 crc kubenswrapper[4861]: I0219 14:42:10.538523 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f-db-sync-config-data\") pod \"barbican-db-sync-n6pq7\" (UID: \"4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f\") " pod="openstack/barbican-db-sync-n6pq7" Feb 19 14:42:10 crc kubenswrapper[4861]: I0219 14:42:10.640494 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prw7h\" (UniqueName: 
\"kubernetes.io/projected/4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f-kube-api-access-prw7h\") pod \"barbican-db-sync-n6pq7\" (UID: \"4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f\") " pod="openstack/barbican-db-sync-n6pq7" Feb 19 14:42:10 crc kubenswrapper[4861]: I0219 14:42:10.640621 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f-db-sync-config-data\") pod \"barbican-db-sync-n6pq7\" (UID: \"4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f\") " pod="openstack/barbican-db-sync-n6pq7" Feb 19 14:42:10 crc kubenswrapper[4861]: I0219 14:42:10.640737 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f-combined-ca-bundle\") pod \"barbican-db-sync-n6pq7\" (UID: \"4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f\") " pod="openstack/barbican-db-sync-n6pq7" Feb 19 14:42:10 crc kubenswrapper[4861]: I0219 14:42:10.647061 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f-db-sync-config-data\") pod \"barbican-db-sync-n6pq7\" (UID: \"4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f\") " pod="openstack/barbican-db-sync-n6pq7" Feb 19 14:42:10 crc kubenswrapper[4861]: I0219 14:42:10.650644 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f-combined-ca-bundle\") pod \"barbican-db-sync-n6pq7\" (UID: \"4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f\") " pod="openstack/barbican-db-sync-n6pq7" Feb 19 14:42:10 crc kubenswrapper[4861]: I0219 14:42:10.662041 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prw7h\" (UniqueName: \"kubernetes.io/projected/4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f-kube-api-access-prw7h\") pod 
\"barbican-db-sync-n6pq7\" (UID: \"4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f\") " pod="openstack/barbican-db-sync-n6pq7" Feb 19 14:42:10 crc kubenswrapper[4861]: I0219 14:42:10.686336 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-n6pq7" Feb 19 14:42:11 crc kubenswrapper[4861]: I0219 14:42:11.235950 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-n6pq7"] Feb 19 14:42:11 crc kubenswrapper[4861]: I0219 14:42:11.950924 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-n6pq7" event={"ID":"4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f","Type":"ContainerStarted","Data":"0cfc31d75d350cf6d4d224794857aecaadaa916a2b4482c792794e647e04fa99"} Feb 19 14:42:11 crc kubenswrapper[4861]: I0219 14:42:11.951393 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-n6pq7" event={"ID":"4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f","Type":"ContainerStarted","Data":"d6b9dcd7701051e71ba32493c4192d5916a4a9e0ae423e7862a6d9ac4a224eb7"} Feb 19 14:42:11 crc kubenswrapper[4861]: I0219 14:42:11.976443 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-n6pq7" podStartSLOduration=1.9764102650000002 podStartE2EDuration="1.976410265s" podCreationTimestamp="2026-02-19 14:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:42:11.967298881 +0000 UTC m=+5546.628402159" watchObservedRunningTime="2026-02-19 14:42:11.976410265 +0000 UTC m=+5546.637513493" Feb 19 14:42:13 crc kubenswrapper[4861]: I0219 14:42:13.971382 4861 generic.go:334] "Generic (PLEG): container finished" podID="4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f" containerID="0cfc31d75d350cf6d4d224794857aecaadaa916a2b4482c792794e647e04fa99" exitCode=0 Feb 19 14:42:13 crc kubenswrapper[4861]: I0219 14:42:13.971514 4861 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/barbican-db-sync-n6pq7" event={"ID":"4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f","Type":"ContainerDied","Data":"0cfc31d75d350cf6d4d224794857aecaadaa916a2b4482c792794e647e04fa99"} Feb 19 14:42:15 crc kubenswrapper[4861]: I0219 14:42:15.342484 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-n6pq7" Feb 19 14:42:15 crc kubenswrapper[4861]: I0219 14:42:15.536568 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f-db-sync-config-data\") pod \"4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f\" (UID: \"4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f\") " Feb 19 14:42:15 crc kubenswrapper[4861]: I0219 14:42:15.536793 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prw7h\" (UniqueName: \"kubernetes.io/projected/4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f-kube-api-access-prw7h\") pod \"4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f\" (UID: \"4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f\") " Feb 19 14:42:15 crc kubenswrapper[4861]: I0219 14:42:15.536913 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f-combined-ca-bundle\") pod \"4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f\" (UID: \"4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f\") " Feb 19 14:42:15 crc kubenswrapper[4861]: I0219 14:42:15.548591 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f-kube-api-access-prw7h" (OuterVolumeSpecName: "kube-api-access-prw7h") pod "4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f" (UID: "4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f"). InnerVolumeSpecName "kube-api-access-prw7h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:42:15 crc kubenswrapper[4861]: I0219 14:42:15.552189 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f" (UID: "4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:42:15 crc kubenswrapper[4861]: I0219 14:42:15.577537 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f" (UID: "4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:42:15 crc kubenswrapper[4861]: I0219 14:42:15.639208 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:42:15 crc kubenswrapper[4861]: I0219 14:42:15.639250 4861 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:42:15 crc kubenswrapper[4861]: I0219 14:42:15.639263 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prw7h\" (UniqueName: \"kubernetes.io/projected/4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f-kube-api-access-prw7h\") on node \"crc\" DevicePath \"\"" Feb 19 14:42:15 crc kubenswrapper[4861]: I0219 14:42:15.993041 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-n6pq7" Feb 19 14:42:15 crc kubenswrapper[4861]: I0219 14:42:15.993775 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-n6pq7" event={"ID":"4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f","Type":"ContainerDied","Data":"d6b9dcd7701051e71ba32493c4192d5916a4a9e0ae423e7862a6d9ac4a224eb7"} Feb 19 14:42:15 crc kubenswrapper[4861]: I0219 14:42:15.993833 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6b9dcd7701051e71ba32493c4192d5916a4a9e0ae423e7862a6d9ac4a224eb7" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.176478 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-f7b4998cd-bqbdt"] Feb 19 14:42:16 crc kubenswrapper[4861]: E0219 14:42:16.177210 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f" containerName="barbican-db-sync" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.177376 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f" containerName="barbican-db-sync" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.177795 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f" containerName="barbican-db-sync" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.179283 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-f7b4998cd-bqbdt" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.183513 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.191093 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.194051 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-hqdwr" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.195160 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7ff999c95-tks6q"] Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.196865 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7ff999c95-tks6q" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.199282 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.210360 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-f7b4998cd-bqbdt"] Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.219502 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7ff999c95-tks6q"] Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.252350 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f4b849877-8n64j"] Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.254092 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f4b849877-8n64j" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.276725 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f4b849877-8n64j"] Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.353252 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhn48\" (UniqueName: \"kubernetes.io/projected/caeb0702-4acd-43f7-bb98-659931b75efa-kube-api-access-xhn48\") pod \"barbican-worker-7ff999c95-tks6q\" (UID: \"caeb0702-4acd-43f7-bb98-659931b75efa\") " pod="openstack/barbican-worker-7ff999c95-tks6q" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.353329 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/054d3fa9-6f6e-4d14-8759-b626a8ff268b-logs\") pod \"barbican-keystone-listener-f7b4998cd-bqbdt\" (UID: \"054d3fa9-6f6e-4d14-8759-b626a8ff268b\") " pod="openstack/barbican-keystone-listener-f7b4998cd-bqbdt" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.353359 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/caeb0702-4acd-43f7-bb98-659931b75efa-logs\") pod \"barbican-worker-7ff999c95-tks6q\" (UID: \"caeb0702-4acd-43f7-bb98-659931b75efa\") " pod="openstack/barbican-worker-7ff999c95-tks6q" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.353379 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/054d3fa9-6f6e-4d14-8759-b626a8ff268b-combined-ca-bundle\") pod \"barbican-keystone-listener-f7b4998cd-bqbdt\" (UID: \"054d3fa9-6f6e-4d14-8759-b626a8ff268b\") " pod="openstack/barbican-keystone-listener-f7b4998cd-bqbdt" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.353401 4861 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/054d3fa9-6f6e-4d14-8759-b626a8ff268b-config-data-custom\") pod \"barbican-keystone-listener-f7b4998cd-bqbdt\" (UID: \"054d3fa9-6f6e-4d14-8759-b626a8ff268b\") " pod="openstack/barbican-keystone-listener-f7b4998cd-bqbdt" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.353478 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/054d3fa9-6f6e-4d14-8759-b626a8ff268b-config-data\") pod \"barbican-keystone-listener-f7b4998cd-bqbdt\" (UID: \"054d3fa9-6f6e-4d14-8759-b626a8ff268b\") " pod="openstack/barbican-keystone-listener-f7b4998cd-bqbdt" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.353657 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caeb0702-4acd-43f7-bb98-659931b75efa-config-data\") pod \"barbican-worker-7ff999c95-tks6q\" (UID: \"caeb0702-4acd-43f7-bb98-659931b75efa\") " pod="openstack/barbican-worker-7ff999c95-tks6q" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.354309 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caeb0702-4acd-43f7-bb98-659931b75efa-combined-ca-bundle\") pod \"barbican-worker-7ff999c95-tks6q\" (UID: \"caeb0702-4acd-43f7-bb98-659931b75efa\") " pod="openstack/barbican-worker-7ff999c95-tks6q" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.354368 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fjhk\" (UniqueName: \"kubernetes.io/projected/054d3fa9-6f6e-4d14-8759-b626a8ff268b-kube-api-access-2fjhk\") pod \"barbican-keystone-listener-f7b4998cd-bqbdt\" (UID: 
\"054d3fa9-6f6e-4d14-8759-b626a8ff268b\") " pod="openstack/barbican-keystone-listener-f7b4998cd-bqbdt" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.354403 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/caeb0702-4acd-43f7-bb98-659931b75efa-config-data-custom\") pod \"barbican-worker-7ff999c95-tks6q\" (UID: \"caeb0702-4acd-43f7-bb98-659931b75efa\") " pod="openstack/barbican-worker-7ff999c95-tks6q" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.366742 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-84c8b67fb4-7nc22"] Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.368069 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-84c8b67fb4-7nc22" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.374525 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.381538 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-84c8b67fb4-7nc22"] Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.455645 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/054d3fa9-6f6e-4d14-8759-b626a8ff268b-combined-ca-bundle\") pod \"barbican-keystone-listener-f7b4998cd-bqbdt\" (UID: \"054d3fa9-6f6e-4d14-8759-b626a8ff268b\") " pod="openstack/barbican-keystone-listener-f7b4998cd-bqbdt" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.455698 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/054d3fa9-6f6e-4d14-8759-b626a8ff268b-config-data-custom\") pod \"barbican-keystone-listener-f7b4998cd-bqbdt\" (UID: \"054d3fa9-6f6e-4d14-8759-b626a8ff268b\") " 
pod="openstack/barbican-keystone-listener-f7b4998cd-bqbdt" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.455716 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/054d3fa9-6f6e-4d14-8759-b626a8ff268b-config-data\") pod \"barbican-keystone-listener-f7b4998cd-bqbdt\" (UID: \"054d3fa9-6f6e-4d14-8759-b626a8ff268b\") " pod="openstack/barbican-keystone-listener-f7b4998cd-bqbdt" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.455751 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-624z2\" (UniqueName: \"kubernetes.io/projected/441b87bd-385d-439e-9248-b0f8dfac75c4-kube-api-access-624z2\") pod \"dnsmasq-dns-5f4b849877-8n64j\" (UID: \"441b87bd-385d-439e-9248-b0f8dfac75c4\") " pod="openstack/dnsmasq-dns-5f4b849877-8n64j" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.455772 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/441b87bd-385d-439e-9248-b0f8dfac75c4-ovsdbserver-sb\") pod \"dnsmasq-dns-5f4b849877-8n64j\" (UID: \"441b87bd-385d-439e-9248-b0f8dfac75c4\") " pod="openstack/dnsmasq-dns-5f4b849877-8n64j" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.455788 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/860d3f08-c471-43ac-bfb2-6c2171565946-config-data\") pod \"barbican-api-84c8b67fb4-7nc22\" (UID: \"860d3f08-c471-43ac-bfb2-6c2171565946\") " pod="openstack/barbican-api-84c8b67fb4-7nc22" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.455802 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/860d3f08-c471-43ac-bfb2-6c2171565946-config-data-custom\") pod 
\"barbican-api-84c8b67fb4-7nc22\" (UID: \"860d3f08-c471-43ac-bfb2-6c2171565946\") " pod="openstack/barbican-api-84c8b67fb4-7nc22" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.455819 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/860d3f08-c471-43ac-bfb2-6c2171565946-combined-ca-bundle\") pod \"barbican-api-84c8b67fb4-7nc22\" (UID: \"860d3f08-c471-43ac-bfb2-6c2171565946\") " pod="openstack/barbican-api-84c8b67fb4-7nc22" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.455854 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caeb0702-4acd-43f7-bb98-659931b75efa-config-data\") pod \"barbican-worker-7ff999c95-tks6q\" (UID: \"caeb0702-4acd-43f7-bb98-659931b75efa\") " pod="openstack/barbican-worker-7ff999c95-tks6q" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.455881 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caeb0702-4acd-43f7-bb98-659931b75efa-combined-ca-bundle\") pod \"barbican-worker-7ff999c95-tks6q\" (UID: \"caeb0702-4acd-43f7-bb98-659931b75efa\") " pod="openstack/barbican-worker-7ff999c95-tks6q" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.455898 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85p8f\" (UniqueName: \"kubernetes.io/projected/860d3f08-c471-43ac-bfb2-6c2171565946-kube-api-access-85p8f\") pod \"barbican-api-84c8b67fb4-7nc22\" (UID: \"860d3f08-c471-43ac-bfb2-6c2171565946\") " pod="openstack/barbican-api-84c8b67fb4-7nc22" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.455922 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fjhk\" (UniqueName: 
\"kubernetes.io/projected/054d3fa9-6f6e-4d14-8759-b626a8ff268b-kube-api-access-2fjhk\") pod \"barbican-keystone-listener-f7b4998cd-bqbdt\" (UID: \"054d3fa9-6f6e-4d14-8759-b626a8ff268b\") " pod="openstack/barbican-keystone-listener-f7b4998cd-bqbdt" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.455947 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/caeb0702-4acd-43f7-bb98-659931b75efa-config-data-custom\") pod \"barbican-worker-7ff999c95-tks6q\" (UID: \"caeb0702-4acd-43f7-bb98-659931b75efa\") " pod="openstack/barbican-worker-7ff999c95-tks6q" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.455964 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/441b87bd-385d-439e-9248-b0f8dfac75c4-config\") pod \"dnsmasq-dns-5f4b849877-8n64j\" (UID: \"441b87bd-385d-439e-9248-b0f8dfac75c4\") " pod="openstack/dnsmasq-dns-5f4b849877-8n64j" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.455983 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhn48\" (UniqueName: \"kubernetes.io/projected/caeb0702-4acd-43f7-bb98-659931b75efa-kube-api-access-xhn48\") pod \"barbican-worker-7ff999c95-tks6q\" (UID: \"caeb0702-4acd-43f7-bb98-659931b75efa\") " pod="openstack/barbican-worker-7ff999c95-tks6q" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.456018 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/860d3f08-c471-43ac-bfb2-6c2171565946-logs\") pod \"barbican-api-84c8b67fb4-7nc22\" (UID: \"860d3f08-c471-43ac-bfb2-6c2171565946\") " pod="openstack/barbican-api-84c8b67fb4-7nc22" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.456064 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/054d3fa9-6f6e-4d14-8759-b626a8ff268b-logs\") pod \"barbican-keystone-listener-f7b4998cd-bqbdt\" (UID: \"054d3fa9-6f6e-4d14-8759-b626a8ff268b\") " pod="openstack/barbican-keystone-listener-f7b4998cd-bqbdt" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.456080 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/441b87bd-385d-439e-9248-b0f8dfac75c4-dns-svc\") pod \"dnsmasq-dns-5f4b849877-8n64j\" (UID: \"441b87bd-385d-439e-9248-b0f8dfac75c4\") " pod="openstack/dnsmasq-dns-5f4b849877-8n64j" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.456096 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/441b87bd-385d-439e-9248-b0f8dfac75c4-ovsdbserver-nb\") pod \"dnsmasq-dns-5f4b849877-8n64j\" (UID: \"441b87bd-385d-439e-9248-b0f8dfac75c4\") " pod="openstack/dnsmasq-dns-5f4b849877-8n64j" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.456123 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/caeb0702-4acd-43f7-bb98-659931b75efa-logs\") pod \"barbican-worker-7ff999c95-tks6q\" (UID: \"caeb0702-4acd-43f7-bb98-659931b75efa\") " pod="openstack/barbican-worker-7ff999c95-tks6q" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.456589 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/caeb0702-4acd-43f7-bb98-659931b75efa-logs\") pod \"barbican-worker-7ff999c95-tks6q\" (UID: \"caeb0702-4acd-43f7-bb98-659931b75efa\") " pod="openstack/barbican-worker-7ff999c95-tks6q" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.457115 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/054d3fa9-6f6e-4d14-8759-b626a8ff268b-logs\") pod \"barbican-keystone-listener-f7b4998cd-bqbdt\" (UID: \"054d3fa9-6f6e-4d14-8759-b626a8ff268b\") " pod="openstack/barbican-keystone-listener-f7b4998cd-bqbdt" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.462295 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/054d3fa9-6f6e-4d14-8759-b626a8ff268b-config-data-custom\") pod \"barbican-keystone-listener-f7b4998cd-bqbdt\" (UID: \"054d3fa9-6f6e-4d14-8759-b626a8ff268b\") " pod="openstack/barbican-keystone-listener-f7b4998cd-bqbdt" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.462888 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/caeb0702-4acd-43f7-bb98-659931b75efa-config-data-custom\") pod \"barbican-worker-7ff999c95-tks6q\" (UID: \"caeb0702-4acd-43f7-bb98-659931b75efa\") " pod="openstack/barbican-worker-7ff999c95-tks6q" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.466374 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caeb0702-4acd-43f7-bb98-659931b75efa-config-data\") pod \"barbican-worker-7ff999c95-tks6q\" (UID: \"caeb0702-4acd-43f7-bb98-659931b75efa\") " pod="openstack/barbican-worker-7ff999c95-tks6q" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.476104 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/054d3fa9-6f6e-4d14-8759-b626a8ff268b-combined-ca-bundle\") pod \"barbican-keystone-listener-f7b4998cd-bqbdt\" (UID: \"054d3fa9-6f6e-4d14-8759-b626a8ff268b\") " pod="openstack/barbican-keystone-listener-f7b4998cd-bqbdt" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.476596 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/caeb0702-4acd-43f7-bb98-659931b75efa-combined-ca-bundle\") pod \"barbican-worker-7ff999c95-tks6q\" (UID: \"caeb0702-4acd-43f7-bb98-659931b75efa\") " pod="openstack/barbican-worker-7ff999c95-tks6q" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.476680 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/054d3fa9-6f6e-4d14-8759-b626a8ff268b-config-data\") pod \"barbican-keystone-listener-f7b4998cd-bqbdt\" (UID: \"054d3fa9-6f6e-4d14-8759-b626a8ff268b\") " pod="openstack/barbican-keystone-listener-f7b4998cd-bqbdt" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.481897 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fjhk\" (UniqueName: \"kubernetes.io/projected/054d3fa9-6f6e-4d14-8759-b626a8ff268b-kube-api-access-2fjhk\") pod \"barbican-keystone-listener-f7b4998cd-bqbdt\" (UID: \"054d3fa9-6f6e-4d14-8759-b626a8ff268b\") " pod="openstack/barbican-keystone-listener-f7b4998cd-bqbdt" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.484588 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhn48\" (UniqueName: \"kubernetes.io/projected/caeb0702-4acd-43f7-bb98-659931b75efa-kube-api-access-xhn48\") pod \"barbican-worker-7ff999c95-tks6q\" (UID: \"caeb0702-4acd-43f7-bb98-659931b75efa\") " pod="openstack/barbican-worker-7ff999c95-tks6q" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.500490 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-f7b4998cd-bqbdt" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.518744 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7ff999c95-tks6q" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.557176 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85p8f\" (UniqueName: \"kubernetes.io/projected/860d3f08-c471-43ac-bfb2-6c2171565946-kube-api-access-85p8f\") pod \"barbican-api-84c8b67fb4-7nc22\" (UID: \"860d3f08-c471-43ac-bfb2-6c2171565946\") " pod="openstack/barbican-api-84c8b67fb4-7nc22" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.557245 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/441b87bd-385d-439e-9248-b0f8dfac75c4-config\") pod \"dnsmasq-dns-5f4b849877-8n64j\" (UID: \"441b87bd-385d-439e-9248-b0f8dfac75c4\") " pod="openstack/dnsmasq-dns-5f4b849877-8n64j" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.557299 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/860d3f08-c471-43ac-bfb2-6c2171565946-logs\") pod \"barbican-api-84c8b67fb4-7nc22\" (UID: \"860d3f08-c471-43ac-bfb2-6c2171565946\") " pod="openstack/barbican-api-84c8b67fb4-7nc22" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.557338 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/441b87bd-385d-439e-9248-b0f8dfac75c4-ovsdbserver-nb\") pod \"dnsmasq-dns-5f4b849877-8n64j\" (UID: \"441b87bd-385d-439e-9248-b0f8dfac75c4\") " pod="openstack/dnsmasq-dns-5f4b849877-8n64j" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.557357 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/441b87bd-385d-439e-9248-b0f8dfac75c4-dns-svc\") pod \"dnsmasq-dns-5f4b849877-8n64j\" (UID: \"441b87bd-385d-439e-9248-b0f8dfac75c4\") " pod="openstack/dnsmasq-dns-5f4b849877-8n64j" Feb 19 
14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.557396 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-624z2\" (UniqueName: \"kubernetes.io/projected/441b87bd-385d-439e-9248-b0f8dfac75c4-kube-api-access-624z2\") pod \"dnsmasq-dns-5f4b849877-8n64j\" (UID: \"441b87bd-385d-439e-9248-b0f8dfac75c4\") " pod="openstack/dnsmasq-dns-5f4b849877-8n64j" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.557434 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/441b87bd-385d-439e-9248-b0f8dfac75c4-ovsdbserver-sb\") pod \"dnsmasq-dns-5f4b849877-8n64j\" (UID: \"441b87bd-385d-439e-9248-b0f8dfac75c4\") " pod="openstack/dnsmasq-dns-5f4b849877-8n64j" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.557453 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/860d3f08-c471-43ac-bfb2-6c2171565946-config-data\") pod \"barbican-api-84c8b67fb4-7nc22\" (UID: \"860d3f08-c471-43ac-bfb2-6c2171565946\") " pod="openstack/barbican-api-84c8b67fb4-7nc22" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.557477 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/860d3f08-c471-43ac-bfb2-6c2171565946-config-data-custom\") pod \"barbican-api-84c8b67fb4-7nc22\" (UID: \"860d3f08-c471-43ac-bfb2-6c2171565946\") " pod="openstack/barbican-api-84c8b67fb4-7nc22" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.557495 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/860d3f08-c471-43ac-bfb2-6c2171565946-combined-ca-bundle\") pod \"barbican-api-84c8b67fb4-7nc22\" (UID: \"860d3f08-c471-43ac-bfb2-6c2171565946\") " pod="openstack/barbican-api-84c8b67fb4-7nc22" Feb 19 14:42:16 crc kubenswrapper[4861]: 
I0219 14:42:16.557870 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/860d3f08-c471-43ac-bfb2-6c2171565946-logs\") pod \"barbican-api-84c8b67fb4-7nc22\" (UID: \"860d3f08-c471-43ac-bfb2-6c2171565946\") " pod="openstack/barbican-api-84c8b67fb4-7nc22" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.558250 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/441b87bd-385d-439e-9248-b0f8dfac75c4-config\") pod \"dnsmasq-dns-5f4b849877-8n64j\" (UID: \"441b87bd-385d-439e-9248-b0f8dfac75c4\") " pod="openstack/dnsmasq-dns-5f4b849877-8n64j" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.558250 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/441b87bd-385d-439e-9248-b0f8dfac75c4-ovsdbserver-nb\") pod \"dnsmasq-dns-5f4b849877-8n64j\" (UID: \"441b87bd-385d-439e-9248-b0f8dfac75c4\") " pod="openstack/dnsmasq-dns-5f4b849877-8n64j" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.558296 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/441b87bd-385d-439e-9248-b0f8dfac75c4-dns-svc\") pod \"dnsmasq-dns-5f4b849877-8n64j\" (UID: \"441b87bd-385d-439e-9248-b0f8dfac75c4\") " pod="openstack/dnsmasq-dns-5f4b849877-8n64j" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.558551 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/441b87bd-385d-439e-9248-b0f8dfac75c4-ovsdbserver-sb\") pod \"dnsmasq-dns-5f4b849877-8n64j\" (UID: \"441b87bd-385d-439e-9248-b0f8dfac75c4\") " pod="openstack/dnsmasq-dns-5f4b849877-8n64j" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.562182 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/860d3f08-c471-43ac-bfb2-6c2171565946-combined-ca-bundle\") pod \"barbican-api-84c8b67fb4-7nc22\" (UID: \"860d3f08-c471-43ac-bfb2-6c2171565946\") " pod="openstack/barbican-api-84c8b67fb4-7nc22" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.564217 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/860d3f08-c471-43ac-bfb2-6c2171565946-config-data-custom\") pod \"barbican-api-84c8b67fb4-7nc22\" (UID: \"860d3f08-c471-43ac-bfb2-6c2171565946\") " pod="openstack/barbican-api-84c8b67fb4-7nc22" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.576266 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/860d3f08-c471-43ac-bfb2-6c2171565946-config-data\") pod \"barbican-api-84c8b67fb4-7nc22\" (UID: \"860d3f08-c471-43ac-bfb2-6c2171565946\") " pod="openstack/barbican-api-84c8b67fb4-7nc22" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.579997 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-624z2\" (UniqueName: \"kubernetes.io/projected/441b87bd-385d-439e-9248-b0f8dfac75c4-kube-api-access-624z2\") pod \"dnsmasq-dns-5f4b849877-8n64j\" (UID: \"441b87bd-385d-439e-9248-b0f8dfac75c4\") " pod="openstack/dnsmasq-dns-5f4b849877-8n64j" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.580465 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85p8f\" (UniqueName: \"kubernetes.io/projected/860d3f08-c471-43ac-bfb2-6c2171565946-kube-api-access-85p8f\") pod \"barbican-api-84c8b67fb4-7nc22\" (UID: \"860d3f08-c471-43ac-bfb2-6c2171565946\") " pod="openstack/barbican-api-84c8b67fb4-7nc22" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.587446 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f4b849877-8n64j" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.690000 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-84c8b67fb4-7nc22" Feb 19 14:42:16 crc kubenswrapper[4861]: I0219 14:42:16.910111 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-f7b4998cd-bqbdt"] Feb 19 14:42:17 crc kubenswrapper[4861]: I0219 14:42:17.011989 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-f7b4998cd-bqbdt" event={"ID":"054d3fa9-6f6e-4d14-8759-b626a8ff268b","Type":"ContainerStarted","Data":"59ff852af0ae432963bda38e905e5a11118c2580dcefe1c3e91217d3d18f2a48"} Feb 19 14:42:17 crc kubenswrapper[4861]: I0219 14:42:17.028130 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7ff999c95-tks6q"] Feb 19 14:42:17 crc kubenswrapper[4861]: I0219 14:42:17.189040 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f4b849877-8n64j"] Feb 19 14:42:17 crc kubenswrapper[4861]: W0219 14:42:17.190566 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod441b87bd_385d_439e_9248_b0f8dfac75c4.slice/crio-aa399d5a6ef47394f3f81549b2b761871a839c477c079220f397b7b8e7f69dab WatchSource:0}: Error finding container aa399d5a6ef47394f3f81549b2b761871a839c477c079220f397b7b8e7f69dab: Status 404 returned error can't find the container with id aa399d5a6ef47394f3f81549b2b761871a839c477c079220f397b7b8e7f69dab Feb 19 14:42:17 crc kubenswrapper[4861]: I0219 14:42:17.347304 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-84c8b67fb4-7nc22"] Feb 19 14:42:17 crc kubenswrapper[4861]: W0219 14:42:17.351577 4861 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod860d3f08_c471_43ac_bfb2_6c2171565946.slice/crio-8b951b0583cdb6b8003c30aaaaa75fdbadf4fd774560b799c264ce1ee41b2437 WatchSource:0}: Error finding container 8b951b0583cdb6b8003c30aaaaa75fdbadf4fd774560b799c264ce1ee41b2437: Status 404 returned error can't find the container with id 8b951b0583cdb6b8003c30aaaaa75fdbadf4fd774560b799c264ce1ee41b2437 Feb 19 14:42:18 crc kubenswrapper[4861]: I0219 14:42:18.050536 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84c8b67fb4-7nc22" event={"ID":"860d3f08-c471-43ac-bfb2-6c2171565946","Type":"ContainerStarted","Data":"81633ec2333924c728872895e05ec167ac06da717e2b78ae89f8f2002e2d59bc"} Feb 19 14:42:18 crc kubenswrapper[4861]: I0219 14:42:18.050791 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84c8b67fb4-7nc22" event={"ID":"860d3f08-c471-43ac-bfb2-6c2171565946","Type":"ContainerStarted","Data":"b3b25d25e7cc581112af9f09b261d988b5f074abc783fa556dc707c74bda74d6"} Feb 19 14:42:18 crc kubenswrapper[4861]: I0219 14:42:18.050802 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84c8b67fb4-7nc22" event={"ID":"860d3f08-c471-43ac-bfb2-6c2171565946","Type":"ContainerStarted","Data":"8b951b0583cdb6b8003c30aaaaa75fdbadf4fd774560b799c264ce1ee41b2437"} Feb 19 14:42:18 crc kubenswrapper[4861]: I0219 14:42:18.051772 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-84c8b67fb4-7nc22" Feb 19 14:42:18 crc kubenswrapper[4861]: I0219 14:42:18.051792 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-84c8b67fb4-7nc22" Feb 19 14:42:18 crc kubenswrapper[4861]: I0219 14:42:18.070894 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-f7b4998cd-bqbdt" 
event={"ID":"054d3fa9-6f6e-4d14-8759-b626a8ff268b","Type":"ContainerStarted","Data":"fb70596c983dff57c9b2b0e1a32c40d88f6fd64e3cd4b832742590ca46f42ce6"} Feb 19 14:42:18 crc kubenswrapper[4861]: I0219 14:42:18.070935 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-f7b4998cd-bqbdt" event={"ID":"054d3fa9-6f6e-4d14-8759-b626a8ff268b","Type":"ContainerStarted","Data":"6ce3ba52a7a6a75dcefcbe61843aa244d2a68f9ea0d59ecda74c7ee660975e7f"} Feb 19 14:42:18 crc kubenswrapper[4861]: I0219 14:42:18.113788 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-84c8b67fb4-7nc22" podStartSLOduration=2.113718127 podStartE2EDuration="2.113718127s" podCreationTimestamp="2026-02-19 14:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:42:18.074528726 +0000 UTC m=+5552.735631954" watchObservedRunningTime="2026-02-19 14:42:18.113718127 +0000 UTC m=+5552.774821355" Feb 19 14:42:18 crc kubenswrapper[4861]: I0219 14:42:18.114482 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-f7b4998cd-bqbdt" podStartSLOduration=2.114477498 podStartE2EDuration="2.114477498s" podCreationTimestamp="2026-02-19 14:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:42:18.112547636 +0000 UTC m=+5552.773650884" watchObservedRunningTime="2026-02-19 14:42:18.114477498 +0000 UTC m=+5552.775580726" Feb 19 14:42:18 crc kubenswrapper[4861]: I0219 14:42:18.119942 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7ff999c95-tks6q" event={"ID":"caeb0702-4acd-43f7-bb98-659931b75efa","Type":"ContainerStarted","Data":"fb227506cde89e5c4c096b7c8098922f0ac77a9bff30426d9dd42da2b0745773"} Feb 19 14:42:18 crc kubenswrapper[4861]: 
I0219 14:42:18.119995 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7ff999c95-tks6q" event={"ID":"caeb0702-4acd-43f7-bb98-659931b75efa","Type":"ContainerStarted","Data":"7cf1296eabd344917800eca11c7546fedca553d321646de34e67bdafa4b6045c"} Feb 19 14:42:18 crc kubenswrapper[4861]: I0219 14:42:18.120007 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7ff999c95-tks6q" event={"ID":"caeb0702-4acd-43f7-bb98-659931b75efa","Type":"ContainerStarted","Data":"b3585769d2c63bd4324b55123620470d8e1162f3fa548b1736cc0a8ec491fd95"} Feb 19 14:42:18 crc kubenswrapper[4861]: I0219 14:42:18.127073 4861 generic.go:334] "Generic (PLEG): container finished" podID="441b87bd-385d-439e-9248-b0f8dfac75c4" containerID="ef60757308e0f9d4d48a4c154a35fb0b22f239fddf8787fb88dd6bba2b643105" exitCode=0 Feb 19 14:42:18 crc kubenswrapper[4861]: I0219 14:42:18.127111 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f4b849877-8n64j" event={"ID":"441b87bd-385d-439e-9248-b0f8dfac75c4","Type":"ContainerDied","Data":"ef60757308e0f9d4d48a4c154a35fb0b22f239fddf8787fb88dd6bba2b643105"} Feb 19 14:42:18 crc kubenswrapper[4861]: I0219 14:42:18.127152 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f4b849877-8n64j" event={"ID":"441b87bd-385d-439e-9248-b0f8dfac75c4","Type":"ContainerStarted","Data":"aa399d5a6ef47394f3f81549b2b761871a839c477c079220f397b7b8e7f69dab"} Feb 19 14:42:18 crc kubenswrapper[4861]: I0219 14:42:18.139558 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7ff999c95-tks6q" podStartSLOduration=2.13953965 podStartE2EDuration="2.13953965s" podCreationTimestamp="2026-02-19 14:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:42:18.137875105 +0000 UTC m=+5552.798978353" 
watchObservedRunningTime="2026-02-19 14:42:18.13953965 +0000 UTC m=+5552.800642878" Feb 19 14:42:18 crc kubenswrapper[4861]: I0219 14:42:18.894343 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-c6db896c8-9k2wc"] Feb 19 14:42:18 crc kubenswrapper[4861]: I0219 14:42:18.895976 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-c6db896c8-9k2wc" Feb 19 14:42:18 crc kubenswrapper[4861]: I0219 14:42:18.897887 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 19 14:42:18 crc kubenswrapper[4861]: I0219 14:42:18.898733 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 19 14:42:18 crc kubenswrapper[4861]: I0219 14:42:18.958172 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-c6db896c8-9k2wc"] Feb 19 14:42:19 crc kubenswrapper[4861]: I0219 14:42:19.045689 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0f381bb-c798-4ace-a1d6-97da2274a601-config-data-custom\") pod \"barbican-api-c6db896c8-9k2wc\" (UID: \"f0f381bb-c798-4ace-a1d6-97da2274a601\") " pod="openstack/barbican-api-c6db896c8-9k2wc" Feb 19 14:42:19 crc kubenswrapper[4861]: I0219 14:42:19.045765 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0f381bb-c798-4ace-a1d6-97da2274a601-logs\") pod \"barbican-api-c6db896c8-9k2wc\" (UID: \"f0f381bb-c798-4ace-a1d6-97da2274a601\") " pod="openstack/barbican-api-c6db896c8-9k2wc" Feb 19 14:42:19 crc kubenswrapper[4861]: I0219 14:42:19.045856 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f0f381bb-c798-4ace-a1d6-97da2274a601-public-tls-certs\") pod \"barbican-api-c6db896c8-9k2wc\" (UID: \"f0f381bb-c798-4ace-a1d6-97da2274a601\") " pod="openstack/barbican-api-c6db896c8-9k2wc" Feb 19 14:42:19 crc kubenswrapper[4861]: I0219 14:42:19.045895 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0f381bb-c798-4ace-a1d6-97da2274a601-internal-tls-certs\") pod \"barbican-api-c6db896c8-9k2wc\" (UID: \"f0f381bb-c798-4ace-a1d6-97da2274a601\") " pod="openstack/barbican-api-c6db896c8-9k2wc" Feb 19 14:42:19 crc kubenswrapper[4861]: I0219 14:42:19.046018 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv2ls\" (UniqueName: \"kubernetes.io/projected/f0f381bb-c798-4ace-a1d6-97da2274a601-kube-api-access-mv2ls\") pod \"barbican-api-c6db896c8-9k2wc\" (UID: \"f0f381bb-c798-4ace-a1d6-97da2274a601\") " pod="openstack/barbican-api-c6db896c8-9k2wc" Feb 19 14:42:19 crc kubenswrapper[4861]: I0219 14:42:19.046107 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0f381bb-c798-4ace-a1d6-97da2274a601-combined-ca-bundle\") pod \"barbican-api-c6db896c8-9k2wc\" (UID: \"f0f381bb-c798-4ace-a1d6-97da2274a601\") " pod="openstack/barbican-api-c6db896c8-9k2wc" Feb 19 14:42:19 crc kubenswrapper[4861]: I0219 14:42:19.046161 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0f381bb-c798-4ace-a1d6-97da2274a601-config-data\") pod \"barbican-api-c6db896c8-9k2wc\" (UID: \"f0f381bb-c798-4ace-a1d6-97da2274a601\") " pod="openstack/barbican-api-c6db896c8-9k2wc" Feb 19 14:42:19 crc kubenswrapper[4861]: I0219 14:42:19.137664 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5f4b849877-8n64j" event={"ID":"441b87bd-385d-439e-9248-b0f8dfac75c4","Type":"ContainerStarted","Data":"914239992d790c752f0d279b19aa183b1e0253cc4200d9602d7f4a04f0667eaf"} Feb 19 14:42:19 crc kubenswrapper[4861]: I0219 14:42:19.147853 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0f381bb-c798-4ace-a1d6-97da2274a601-internal-tls-certs\") pod \"barbican-api-c6db896c8-9k2wc\" (UID: \"f0f381bb-c798-4ace-a1d6-97da2274a601\") " pod="openstack/barbican-api-c6db896c8-9k2wc" Feb 19 14:42:19 crc kubenswrapper[4861]: I0219 14:42:19.147904 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0f381bb-c798-4ace-a1d6-97da2274a601-public-tls-certs\") pod \"barbican-api-c6db896c8-9k2wc\" (UID: \"f0f381bb-c798-4ace-a1d6-97da2274a601\") " pod="openstack/barbican-api-c6db896c8-9k2wc" Feb 19 14:42:19 crc kubenswrapper[4861]: I0219 14:42:19.147998 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv2ls\" (UniqueName: \"kubernetes.io/projected/f0f381bb-c798-4ace-a1d6-97da2274a601-kube-api-access-mv2ls\") pod \"barbican-api-c6db896c8-9k2wc\" (UID: \"f0f381bb-c798-4ace-a1d6-97da2274a601\") " pod="openstack/barbican-api-c6db896c8-9k2wc" Feb 19 14:42:19 crc kubenswrapper[4861]: I0219 14:42:19.148050 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0f381bb-c798-4ace-a1d6-97da2274a601-combined-ca-bundle\") pod \"barbican-api-c6db896c8-9k2wc\" (UID: \"f0f381bb-c798-4ace-a1d6-97da2274a601\") " pod="openstack/barbican-api-c6db896c8-9k2wc" Feb 19 14:42:19 crc kubenswrapper[4861]: I0219 14:42:19.148077 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f0f381bb-c798-4ace-a1d6-97da2274a601-config-data\") pod \"barbican-api-c6db896c8-9k2wc\" (UID: \"f0f381bb-c798-4ace-a1d6-97da2274a601\") " pod="openstack/barbican-api-c6db896c8-9k2wc" Feb 19 14:42:19 crc kubenswrapper[4861]: I0219 14:42:19.148186 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0f381bb-c798-4ace-a1d6-97da2274a601-config-data-custom\") pod \"barbican-api-c6db896c8-9k2wc\" (UID: \"f0f381bb-c798-4ace-a1d6-97da2274a601\") " pod="openstack/barbican-api-c6db896c8-9k2wc" Feb 19 14:42:19 crc kubenswrapper[4861]: I0219 14:42:19.148249 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0f381bb-c798-4ace-a1d6-97da2274a601-logs\") pod \"barbican-api-c6db896c8-9k2wc\" (UID: \"f0f381bb-c798-4ace-a1d6-97da2274a601\") " pod="openstack/barbican-api-c6db896c8-9k2wc" Feb 19 14:42:19 crc kubenswrapper[4861]: I0219 14:42:19.149503 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0f381bb-c798-4ace-a1d6-97da2274a601-logs\") pod \"barbican-api-c6db896c8-9k2wc\" (UID: \"f0f381bb-c798-4ace-a1d6-97da2274a601\") " pod="openstack/barbican-api-c6db896c8-9k2wc" Feb 19 14:42:19 crc kubenswrapper[4861]: I0219 14:42:19.153562 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0f381bb-c798-4ace-a1d6-97da2274a601-config-data\") pod \"barbican-api-c6db896c8-9k2wc\" (UID: \"f0f381bb-c798-4ace-a1d6-97da2274a601\") " pod="openstack/barbican-api-c6db896c8-9k2wc" Feb 19 14:42:19 crc kubenswrapper[4861]: I0219 14:42:19.153963 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0f381bb-c798-4ace-a1d6-97da2274a601-internal-tls-certs\") pod \"barbican-api-c6db896c8-9k2wc\" (UID: 
\"f0f381bb-c798-4ace-a1d6-97da2274a601\") " pod="openstack/barbican-api-c6db896c8-9k2wc" Feb 19 14:42:19 crc kubenswrapper[4861]: I0219 14:42:19.155935 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0f381bb-c798-4ace-a1d6-97da2274a601-combined-ca-bundle\") pod \"barbican-api-c6db896c8-9k2wc\" (UID: \"f0f381bb-c798-4ace-a1d6-97da2274a601\") " pod="openstack/barbican-api-c6db896c8-9k2wc" Feb 19 14:42:19 crc kubenswrapper[4861]: I0219 14:42:19.163991 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0f381bb-c798-4ace-a1d6-97da2274a601-public-tls-certs\") pod \"barbican-api-c6db896c8-9k2wc\" (UID: \"f0f381bb-c798-4ace-a1d6-97da2274a601\") " pod="openstack/barbican-api-c6db896c8-9k2wc" Feb 19 14:42:19 crc kubenswrapper[4861]: I0219 14:42:19.167538 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f4b849877-8n64j" podStartSLOduration=3.167518835 podStartE2EDuration="3.167518835s" podCreationTimestamp="2026-02-19 14:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:42:19.162979973 +0000 UTC m=+5553.824083201" watchObservedRunningTime="2026-02-19 14:42:19.167518835 +0000 UTC m=+5553.828622053" Feb 19 14:42:19 crc kubenswrapper[4861]: I0219 14:42:19.168628 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0f381bb-c798-4ace-a1d6-97da2274a601-config-data-custom\") pod \"barbican-api-c6db896c8-9k2wc\" (UID: \"f0f381bb-c798-4ace-a1d6-97da2274a601\") " pod="openstack/barbican-api-c6db896c8-9k2wc" Feb 19 14:42:19 crc kubenswrapper[4861]: I0219 14:42:19.174532 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv2ls\" (UniqueName: 
\"kubernetes.io/projected/f0f381bb-c798-4ace-a1d6-97da2274a601-kube-api-access-mv2ls\") pod \"barbican-api-c6db896c8-9k2wc\" (UID: \"f0f381bb-c798-4ace-a1d6-97da2274a601\") " pod="openstack/barbican-api-c6db896c8-9k2wc" Feb 19 14:42:19 crc kubenswrapper[4861]: I0219 14:42:19.215323 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-c6db896c8-9k2wc" Feb 19 14:42:19 crc kubenswrapper[4861]: I0219 14:42:19.711363 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-c6db896c8-9k2wc"] Feb 19 14:42:20 crc kubenswrapper[4861]: I0219 14:42:20.152926 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c6db896c8-9k2wc" event={"ID":"f0f381bb-c798-4ace-a1d6-97da2274a601","Type":"ContainerStarted","Data":"dd1dd3fdce064a0b28c0896566dce5b5a49fa8dc1dade4167740da10e92db7af"} Feb 19 14:42:20 crc kubenswrapper[4861]: I0219 14:42:20.153503 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c6db896c8-9k2wc" event={"ID":"f0f381bb-c798-4ace-a1d6-97da2274a601","Type":"ContainerStarted","Data":"238fab13df93ad008e39d7efda93859fd19ec6ad22458b2c79a5f7130f3e0660"} Feb 19 14:42:20 crc kubenswrapper[4861]: I0219 14:42:20.153905 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f4b849877-8n64j" Feb 19 14:42:20 crc kubenswrapper[4861]: I0219 14:42:20.153951 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-c6db896c8-9k2wc" Feb 19 14:42:20 crc kubenswrapper[4861]: I0219 14:42:20.153977 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c6db896c8-9k2wc" event={"ID":"f0f381bb-c798-4ace-a1d6-97da2274a601","Type":"ContainerStarted","Data":"1dd99667788772281171ed7870a813c7088d0944a2ce30a2274c59dfef7f64b2"} Feb 19 14:42:20 crc kubenswrapper[4861]: I0219 14:42:20.154009 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/barbican-api-c6db896c8-9k2wc" Feb 19 14:42:20 crc kubenswrapper[4861]: I0219 14:42:20.183020 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-c6db896c8-9k2wc" podStartSLOduration=2.183000185 podStartE2EDuration="2.183000185s" podCreationTimestamp="2026-02-19 14:42:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:42:20.174926538 +0000 UTC m=+5554.836029766" watchObservedRunningTime="2026-02-19 14:42:20.183000185 +0000 UTC m=+5554.844103413" Feb 19 14:42:20 crc kubenswrapper[4861]: I0219 14:42:20.978120 4861 scope.go:117] "RemoveContainer" containerID="55684c363f793a2ee4b1d80659dbf54faf32701883e18291ec4e32c294ea8bad" Feb 19 14:42:20 crc kubenswrapper[4861]: E0219 14:42:20.979025 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:42:25 crc kubenswrapper[4861]: I0219 14:42:25.596249 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-c6db896c8-9k2wc" Feb 19 14:42:26 crc kubenswrapper[4861]: I0219 14:42:26.589609 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f4b849877-8n64j" Feb 19 14:42:26 crc kubenswrapper[4861]: I0219 14:42:26.668219 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f8c84669-4wrkd"] Feb 19 14:42:26 crc kubenswrapper[4861]: I0219 14:42:26.668496 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f8c84669-4wrkd" 
podUID="b6cd929e-ff2c-446d-a11f-b229278b55f9" containerName="dnsmasq-dns" containerID="cri-o://fcb00bc02cb54c714621ad320d0146b07294b822376f8176f68ed3450c9e7f87" gracePeriod=10 Feb 19 14:42:26 crc kubenswrapper[4861]: I0219 14:42:26.995532 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-c6db896c8-9k2wc" Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.071976 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-84c8b67fb4-7nc22"] Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.072186 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-84c8b67fb4-7nc22" podUID="860d3f08-c471-43ac-bfb2-6c2171565946" containerName="barbican-api-log" containerID="cri-o://b3b25d25e7cc581112af9f09b261d988b5f074abc783fa556dc707c74bda74d6" gracePeriod=30 Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.072482 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-84c8b67fb4-7nc22" podUID="860d3f08-c471-43ac-bfb2-6c2171565946" containerName="barbican-api" containerID="cri-o://81633ec2333924c728872895e05ec167ac06da717e2b78ae89f8f2002e2d59bc" gracePeriod=30 Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.081728 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-84c8b67fb4-7nc22" podUID="860d3f08-c471-43ac-bfb2-6c2171565946" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.1.40:9311/healthcheck\": EOF" Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.081727 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-84c8b67fb4-7nc22" podUID="860d3f08-c471-43ac-bfb2-6c2171565946" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.1.40:9311/healthcheck\": EOF" Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.082929 4861 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/barbican-api-84c8b67fb4-7nc22" podUID="860d3f08-c471-43ac-bfb2-6c2171565946" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.1.40:9311/healthcheck\": EOF" Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.083247 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-84c8b67fb4-7nc22" podUID="860d3f08-c471-43ac-bfb2-6c2171565946" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.1.40:9311/healthcheck\": EOF" Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.168217 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f8c84669-4wrkd" Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.228293 4861 generic.go:334] "Generic (PLEG): container finished" podID="860d3f08-c471-43ac-bfb2-6c2171565946" containerID="b3b25d25e7cc581112af9f09b261d988b5f074abc783fa556dc707c74bda74d6" exitCode=143 Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.228357 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84c8b67fb4-7nc22" event={"ID":"860d3f08-c471-43ac-bfb2-6c2171565946","Type":"ContainerDied","Data":"b3b25d25e7cc581112af9f09b261d988b5f074abc783fa556dc707c74bda74d6"} Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.232001 4861 generic.go:334] "Generic (PLEG): container finished" podID="b6cd929e-ff2c-446d-a11f-b229278b55f9" containerID="fcb00bc02cb54c714621ad320d0146b07294b822376f8176f68ed3450c9e7f87" exitCode=0 Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.232028 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f8c84669-4wrkd" event={"ID":"b6cd929e-ff2c-446d-a11f-b229278b55f9","Type":"ContainerDied","Data":"fcb00bc02cb54c714621ad320d0146b07294b822376f8176f68ed3450c9e7f87"} Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.232045 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7f8c84669-4wrkd" event={"ID":"b6cd929e-ff2c-446d-a11f-b229278b55f9","Type":"ContainerDied","Data":"2d3a7cc6466803eeec3f5ba7e25530eb56038d86481a5fd628a39f52ecc6e563"} Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.232062 4861 scope.go:117] "RemoveContainer" containerID="fcb00bc02cb54c714621ad320d0146b07294b822376f8176f68ed3450c9e7f87" Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.232174 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f8c84669-4wrkd" Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.273510 4861 scope.go:117] "RemoveContainer" containerID="704cefad4a61e84d95e132a449b548b54f574a78091e78b2cbebc73069097c7d" Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.335272 4861 scope.go:117] "RemoveContainer" containerID="fcb00bc02cb54c714621ad320d0146b07294b822376f8176f68ed3450c9e7f87" Feb 19 14:42:27 crc kubenswrapper[4861]: E0219 14:42:27.335763 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcb00bc02cb54c714621ad320d0146b07294b822376f8176f68ed3450c9e7f87\": container with ID starting with fcb00bc02cb54c714621ad320d0146b07294b822376f8176f68ed3450c9e7f87 not found: ID does not exist" containerID="fcb00bc02cb54c714621ad320d0146b07294b822376f8176f68ed3450c9e7f87" Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.335793 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcb00bc02cb54c714621ad320d0146b07294b822376f8176f68ed3450c9e7f87"} err="failed to get container status \"fcb00bc02cb54c714621ad320d0146b07294b822376f8176f68ed3450c9e7f87\": rpc error: code = NotFound desc = could not find container \"fcb00bc02cb54c714621ad320d0146b07294b822376f8176f68ed3450c9e7f87\": container with ID starting with fcb00bc02cb54c714621ad320d0146b07294b822376f8176f68ed3450c9e7f87 not found: ID does not exist" Feb 19 14:42:27 crc 
kubenswrapper[4861]: I0219 14:42:27.335831 4861 scope.go:117] "RemoveContainer" containerID="704cefad4a61e84d95e132a449b548b54f574a78091e78b2cbebc73069097c7d" Feb 19 14:42:27 crc kubenswrapper[4861]: E0219 14:42:27.336035 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"704cefad4a61e84d95e132a449b548b54f574a78091e78b2cbebc73069097c7d\": container with ID starting with 704cefad4a61e84d95e132a449b548b54f574a78091e78b2cbebc73069097c7d not found: ID does not exist" containerID="704cefad4a61e84d95e132a449b548b54f574a78091e78b2cbebc73069097c7d" Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.336068 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"704cefad4a61e84d95e132a449b548b54f574a78091e78b2cbebc73069097c7d"} err="failed to get container status \"704cefad4a61e84d95e132a449b548b54f574a78091e78b2cbebc73069097c7d\": rpc error: code = NotFound desc = could not find container \"704cefad4a61e84d95e132a449b548b54f574a78091e78b2cbebc73069097c7d\": container with ID starting with 704cefad4a61e84d95e132a449b548b54f574a78091e78b2cbebc73069097c7d not found: ID does not exist" Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.357143 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6cd929e-ff2c-446d-a11f-b229278b55f9-config\") pod \"b6cd929e-ff2c-446d-a11f-b229278b55f9\" (UID: \"b6cd929e-ff2c-446d-a11f-b229278b55f9\") " Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.357235 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6cd929e-ff2c-446d-a11f-b229278b55f9-dns-svc\") pod \"b6cd929e-ff2c-446d-a11f-b229278b55f9\" (UID: \"b6cd929e-ff2c-446d-a11f-b229278b55f9\") " Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.357319 4861 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6cd929e-ff2c-446d-a11f-b229278b55f9-ovsdbserver-sb\") pod \"b6cd929e-ff2c-446d-a11f-b229278b55f9\" (UID: \"b6cd929e-ff2c-446d-a11f-b229278b55f9\") " Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.357405 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xtgf\" (UniqueName: \"kubernetes.io/projected/b6cd929e-ff2c-446d-a11f-b229278b55f9-kube-api-access-6xtgf\") pod \"b6cd929e-ff2c-446d-a11f-b229278b55f9\" (UID: \"b6cd929e-ff2c-446d-a11f-b229278b55f9\") " Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.357626 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6cd929e-ff2c-446d-a11f-b229278b55f9-ovsdbserver-nb\") pod \"b6cd929e-ff2c-446d-a11f-b229278b55f9\" (UID: \"b6cd929e-ff2c-446d-a11f-b229278b55f9\") " Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.383838 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd929e-ff2c-446d-a11f-b229278b55f9-kube-api-access-6xtgf" (OuterVolumeSpecName: "kube-api-access-6xtgf") pod "b6cd929e-ff2c-446d-a11f-b229278b55f9" (UID: "b6cd929e-ff2c-446d-a11f-b229278b55f9"). InnerVolumeSpecName "kube-api-access-6xtgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.412790 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd929e-ff2c-446d-a11f-b229278b55f9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b6cd929e-ff2c-446d-a11f-b229278b55f9" (UID: "b6cd929e-ff2c-446d-a11f-b229278b55f9"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.421212 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd929e-ff2c-446d-a11f-b229278b55f9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b6cd929e-ff2c-446d-a11f-b229278b55f9" (UID: "b6cd929e-ff2c-446d-a11f-b229278b55f9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.451099 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd929e-ff2c-446d-a11f-b229278b55f9-config" (OuterVolumeSpecName: "config") pod "b6cd929e-ff2c-446d-a11f-b229278b55f9" (UID: "b6cd929e-ff2c-446d-a11f-b229278b55f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.453136 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd929e-ff2c-446d-a11f-b229278b55f9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b6cd929e-ff2c-446d-a11f-b229278b55f9" (UID: "b6cd929e-ff2c-446d-a11f-b229278b55f9"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.459481 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xtgf\" (UniqueName: \"kubernetes.io/projected/b6cd929e-ff2c-446d-a11f-b229278b55f9-kube-api-access-6xtgf\") on node \"crc\" DevicePath \"\"" Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.459507 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6cd929e-ff2c-446d-a11f-b229278b55f9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.459518 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6cd929e-ff2c-446d-a11f-b229278b55f9-config\") on node \"crc\" DevicePath \"\"" Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.459528 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6cd929e-ff2c-446d-a11f-b229278b55f9-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.459536 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6cd929e-ff2c-446d-a11f-b229278b55f9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.559454 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f8c84669-4wrkd"] Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.565489 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f8c84669-4wrkd"] Feb 19 14:42:27 crc kubenswrapper[4861]: I0219 14:42:27.990286 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd929e-ff2c-446d-a11f-b229278b55f9" path="/var/lib/kubelet/pods/b6cd929e-ff2c-446d-a11f-b229278b55f9/volumes" Feb 19 14:42:31 crc kubenswrapper[4861]: 
I0219 14:42:31.547464 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-84c8b67fb4-7nc22" podUID="860d3f08-c471-43ac-bfb2-6c2171565946" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.1.40:9311/healthcheck\": read tcp 10.217.0.2:54132->10.217.1.40:9311: read: connection reset by peer" Feb 19 14:42:31 crc kubenswrapper[4861]: I0219 14:42:31.547629 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-84c8b67fb4-7nc22" podUID="860d3f08-c471-43ac-bfb2-6c2171565946" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.1.40:9311/healthcheck\": read tcp 10.217.0.2:54116->10.217.1.40:9311: read: connection reset by peer" Feb 19 14:42:31 crc kubenswrapper[4861]: I0219 14:42:31.691489 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-84c8b67fb4-7nc22" podUID="860d3f08-c471-43ac-bfb2-6c2171565946" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.1.40:9311/healthcheck\": dial tcp 10.217.1.40:9311: connect: connection refused" Feb 19 14:42:31 crc kubenswrapper[4861]: I0219 14:42:31.691613 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-84c8b67fb4-7nc22" podUID="860d3f08-c471-43ac-bfb2-6c2171565946" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.1.40:9311/healthcheck\": dial tcp 10.217.1.40:9311: connect: connection refused" Feb 19 14:42:32 crc kubenswrapper[4861]: I0219 14:42:32.044343 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-84c8b67fb4-7nc22" Feb 19 14:42:32 crc kubenswrapper[4861]: I0219 14:42:32.154699 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/860d3f08-c471-43ac-bfb2-6c2171565946-combined-ca-bundle\") pod \"860d3f08-c471-43ac-bfb2-6c2171565946\" (UID: \"860d3f08-c471-43ac-bfb2-6c2171565946\") " Feb 19 14:42:32 crc kubenswrapper[4861]: I0219 14:42:32.154753 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85p8f\" (UniqueName: \"kubernetes.io/projected/860d3f08-c471-43ac-bfb2-6c2171565946-kube-api-access-85p8f\") pod \"860d3f08-c471-43ac-bfb2-6c2171565946\" (UID: \"860d3f08-c471-43ac-bfb2-6c2171565946\") " Feb 19 14:42:32 crc kubenswrapper[4861]: I0219 14:42:32.154873 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/860d3f08-c471-43ac-bfb2-6c2171565946-logs\") pod \"860d3f08-c471-43ac-bfb2-6c2171565946\" (UID: \"860d3f08-c471-43ac-bfb2-6c2171565946\") " Feb 19 14:42:32 crc kubenswrapper[4861]: I0219 14:42:32.154915 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/860d3f08-c471-43ac-bfb2-6c2171565946-config-data-custom\") pod \"860d3f08-c471-43ac-bfb2-6c2171565946\" (UID: \"860d3f08-c471-43ac-bfb2-6c2171565946\") " Feb 19 14:42:32 crc kubenswrapper[4861]: I0219 14:42:32.154991 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/860d3f08-c471-43ac-bfb2-6c2171565946-config-data\") pod \"860d3f08-c471-43ac-bfb2-6c2171565946\" (UID: \"860d3f08-c471-43ac-bfb2-6c2171565946\") " Feb 19 14:42:32 crc kubenswrapper[4861]: I0219 14:42:32.155520 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/860d3f08-c471-43ac-bfb2-6c2171565946-logs" (OuterVolumeSpecName: "logs") pod "860d3f08-c471-43ac-bfb2-6c2171565946" (UID: "860d3f08-c471-43ac-bfb2-6c2171565946"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:42:32 crc kubenswrapper[4861]: I0219 14:42:32.155891 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/860d3f08-c471-43ac-bfb2-6c2171565946-logs\") on node \"crc\" DevicePath \"\"" Feb 19 14:42:32 crc kubenswrapper[4861]: I0219 14:42:32.161186 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/860d3f08-c471-43ac-bfb2-6c2171565946-kube-api-access-85p8f" (OuterVolumeSpecName: "kube-api-access-85p8f") pod "860d3f08-c471-43ac-bfb2-6c2171565946" (UID: "860d3f08-c471-43ac-bfb2-6c2171565946"). InnerVolumeSpecName "kube-api-access-85p8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:42:32 crc kubenswrapper[4861]: I0219 14:42:32.161405 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/860d3f08-c471-43ac-bfb2-6c2171565946-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "860d3f08-c471-43ac-bfb2-6c2171565946" (UID: "860d3f08-c471-43ac-bfb2-6c2171565946"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:42:32 crc kubenswrapper[4861]: I0219 14:42:32.201660 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/860d3f08-c471-43ac-bfb2-6c2171565946-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "860d3f08-c471-43ac-bfb2-6c2171565946" (UID: "860d3f08-c471-43ac-bfb2-6c2171565946"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:42:32 crc kubenswrapper[4861]: I0219 14:42:32.226514 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/860d3f08-c471-43ac-bfb2-6c2171565946-config-data" (OuterVolumeSpecName: "config-data") pod "860d3f08-c471-43ac-bfb2-6c2171565946" (UID: "860d3f08-c471-43ac-bfb2-6c2171565946"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:42:32 crc kubenswrapper[4861]: I0219 14:42:32.258502 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/860d3f08-c471-43ac-bfb2-6c2171565946-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 14:42:32 crc kubenswrapper[4861]: I0219 14:42:32.258561 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/860d3f08-c471-43ac-bfb2-6c2171565946-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:42:32 crc kubenswrapper[4861]: I0219 14:42:32.258579 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/860d3f08-c471-43ac-bfb2-6c2171565946-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:42:32 crc kubenswrapper[4861]: I0219 14:42:32.258600 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85p8f\" (UniqueName: \"kubernetes.io/projected/860d3f08-c471-43ac-bfb2-6c2171565946-kube-api-access-85p8f\") on node \"crc\" DevicePath \"\"" Feb 19 14:42:32 crc kubenswrapper[4861]: I0219 14:42:32.282051 4861 generic.go:334] "Generic (PLEG): container finished" podID="860d3f08-c471-43ac-bfb2-6c2171565946" containerID="81633ec2333924c728872895e05ec167ac06da717e2b78ae89f8f2002e2d59bc" exitCode=0 Feb 19 14:42:32 crc kubenswrapper[4861]: I0219 14:42:32.282100 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84c8b67fb4-7nc22" 
event={"ID":"860d3f08-c471-43ac-bfb2-6c2171565946","Type":"ContainerDied","Data":"81633ec2333924c728872895e05ec167ac06da717e2b78ae89f8f2002e2d59bc"} Feb 19 14:42:32 crc kubenswrapper[4861]: I0219 14:42:32.282168 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84c8b67fb4-7nc22" event={"ID":"860d3f08-c471-43ac-bfb2-6c2171565946","Type":"ContainerDied","Data":"8b951b0583cdb6b8003c30aaaaa75fdbadf4fd774560b799c264ce1ee41b2437"} Feb 19 14:42:32 crc kubenswrapper[4861]: I0219 14:42:32.282175 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-84c8b67fb4-7nc22" Feb 19 14:42:32 crc kubenswrapper[4861]: I0219 14:42:32.282195 4861 scope.go:117] "RemoveContainer" containerID="81633ec2333924c728872895e05ec167ac06da717e2b78ae89f8f2002e2d59bc" Feb 19 14:42:32 crc kubenswrapper[4861]: I0219 14:42:32.308799 4861 scope.go:117] "RemoveContainer" containerID="b3b25d25e7cc581112af9f09b261d988b5f074abc783fa556dc707c74bda74d6" Feb 19 14:42:32 crc kubenswrapper[4861]: I0219 14:42:32.333626 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-84c8b67fb4-7nc22"] Feb 19 14:42:32 crc kubenswrapper[4861]: I0219 14:42:32.335062 4861 scope.go:117] "RemoveContainer" containerID="81633ec2333924c728872895e05ec167ac06da717e2b78ae89f8f2002e2d59bc" Feb 19 14:42:32 crc kubenswrapper[4861]: E0219 14:42:32.335526 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81633ec2333924c728872895e05ec167ac06da717e2b78ae89f8f2002e2d59bc\": container with ID starting with 81633ec2333924c728872895e05ec167ac06da717e2b78ae89f8f2002e2d59bc not found: ID does not exist" containerID="81633ec2333924c728872895e05ec167ac06da717e2b78ae89f8f2002e2d59bc" Feb 19 14:42:32 crc kubenswrapper[4861]: I0219 14:42:32.335576 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"81633ec2333924c728872895e05ec167ac06da717e2b78ae89f8f2002e2d59bc"} err="failed to get container status \"81633ec2333924c728872895e05ec167ac06da717e2b78ae89f8f2002e2d59bc\": rpc error: code = NotFound desc = could not find container \"81633ec2333924c728872895e05ec167ac06da717e2b78ae89f8f2002e2d59bc\": container with ID starting with 81633ec2333924c728872895e05ec167ac06da717e2b78ae89f8f2002e2d59bc not found: ID does not exist" Feb 19 14:42:32 crc kubenswrapper[4861]: I0219 14:42:32.335613 4861 scope.go:117] "RemoveContainer" containerID="b3b25d25e7cc581112af9f09b261d988b5f074abc783fa556dc707c74bda74d6" Feb 19 14:42:32 crc kubenswrapper[4861]: E0219 14:42:32.336058 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3b25d25e7cc581112af9f09b261d988b5f074abc783fa556dc707c74bda74d6\": container with ID starting with b3b25d25e7cc581112af9f09b261d988b5f074abc783fa556dc707c74bda74d6 not found: ID does not exist" containerID="b3b25d25e7cc581112af9f09b261d988b5f074abc783fa556dc707c74bda74d6" Feb 19 14:42:32 crc kubenswrapper[4861]: I0219 14:42:32.336109 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3b25d25e7cc581112af9f09b261d988b5f074abc783fa556dc707c74bda74d6"} err="failed to get container status \"b3b25d25e7cc581112af9f09b261d988b5f074abc783fa556dc707c74bda74d6\": rpc error: code = NotFound desc = could not find container \"b3b25d25e7cc581112af9f09b261d988b5f074abc783fa556dc707c74bda74d6\": container with ID starting with b3b25d25e7cc581112af9f09b261d988b5f074abc783fa556dc707c74bda74d6 not found: ID does not exist" Feb 19 14:42:32 crc kubenswrapper[4861]: I0219 14:42:32.348961 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-84c8b67fb4-7nc22"] Feb 19 14:42:32 crc kubenswrapper[4861]: I0219 14:42:32.977761 4861 scope.go:117] "RemoveContainer" 
containerID="55684c363f793a2ee4b1d80659dbf54faf32701883e18291ec4e32c294ea8bad" Feb 19 14:42:32 crc kubenswrapper[4861]: E0219 14:42:32.978286 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:42:33 crc kubenswrapper[4861]: I0219 14:42:33.988756 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="860d3f08-c471-43ac-bfb2-6c2171565946" path="/var/lib/kubelet/pods/860d3f08-c471-43ac-bfb2-6c2171565946/volumes" Feb 19 14:42:47 crc kubenswrapper[4861]: I0219 14:42:47.977794 4861 scope.go:117] "RemoveContainer" containerID="55684c363f793a2ee4b1d80659dbf54faf32701883e18291ec4e32c294ea8bad" Feb 19 14:42:47 crc kubenswrapper[4861]: E0219 14:42:47.979039 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:43:02 crc kubenswrapper[4861]: I0219 14:43:02.431371 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-lqkrc"] Feb 19 14:43:02 crc kubenswrapper[4861]: I0219 14:43:02.443598 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-lqkrc"] Feb 19 14:43:02 crc kubenswrapper[4861]: I0219 14:43:02.978638 4861 scope.go:117] "RemoveContainer" 
containerID="55684c363f793a2ee4b1d80659dbf54faf32701883e18291ec4e32c294ea8bad" Feb 19 14:43:02 crc kubenswrapper[4861]: E0219 14:43:02.979017 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:43:03 crc kubenswrapper[4861]: I0219 14:43:03.848872 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-pn49r"] Feb 19 14:43:03 crc kubenswrapper[4861]: E0219 14:43:03.849368 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6cd929e-ff2c-446d-a11f-b229278b55f9" containerName="init" Feb 19 14:43:03 crc kubenswrapper[4861]: I0219 14:43:03.849391 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6cd929e-ff2c-446d-a11f-b229278b55f9" containerName="init" Feb 19 14:43:03 crc kubenswrapper[4861]: E0219 14:43:03.849437 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="860d3f08-c471-43ac-bfb2-6c2171565946" containerName="barbican-api-log" Feb 19 14:43:03 crc kubenswrapper[4861]: I0219 14:43:03.849446 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="860d3f08-c471-43ac-bfb2-6c2171565946" containerName="barbican-api-log" Feb 19 14:43:03 crc kubenswrapper[4861]: E0219 14:43:03.849462 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6cd929e-ff2c-446d-a11f-b229278b55f9" containerName="dnsmasq-dns" Feb 19 14:43:03 crc kubenswrapper[4861]: I0219 14:43:03.849471 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6cd929e-ff2c-446d-a11f-b229278b55f9" containerName="dnsmasq-dns" Feb 19 14:43:03 crc kubenswrapper[4861]: E0219 14:43:03.849492 4861 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="860d3f08-c471-43ac-bfb2-6c2171565946" containerName="barbican-api" Feb 19 14:43:03 crc kubenswrapper[4861]: I0219 14:43:03.849502 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="860d3f08-c471-43ac-bfb2-6c2171565946" containerName="barbican-api" Feb 19 14:43:03 crc kubenswrapper[4861]: I0219 14:43:03.849703 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6cd929e-ff2c-446d-a11f-b229278b55f9" containerName="dnsmasq-dns" Feb 19 14:43:03 crc kubenswrapper[4861]: I0219 14:43:03.849728 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="860d3f08-c471-43ac-bfb2-6c2171565946" containerName="barbican-api" Feb 19 14:43:03 crc kubenswrapper[4861]: I0219 14:43:03.849748 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="860d3f08-c471-43ac-bfb2-6c2171565946" containerName="barbican-api-log" Feb 19 14:43:03 crc kubenswrapper[4861]: I0219 14:43:03.850415 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-pn49r" Feb 19 14:43:03 crc kubenswrapper[4861]: I0219 14:43:03.864114 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-pn49r"] Feb 19 14:43:03 crc kubenswrapper[4861]: I0219 14:43:03.917725 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7aa1b35-d0af-428e-9801-e73311fa6e9c-operator-scripts\") pod \"neutron-db-create-pn49r\" (UID: \"c7aa1b35-d0af-428e-9801-e73311fa6e9c\") " pod="openstack/neutron-db-create-pn49r" Feb 19 14:43:03 crc kubenswrapper[4861]: I0219 14:43:03.917799 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlfxd\" (UniqueName: \"kubernetes.io/projected/c7aa1b35-d0af-428e-9801-e73311fa6e9c-kube-api-access-hlfxd\") pod \"neutron-db-create-pn49r\" (UID: \"c7aa1b35-d0af-428e-9801-e73311fa6e9c\") " 
pod="openstack/neutron-db-create-pn49r" Feb 19 14:43:03 crc kubenswrapper[4861]: I0219 14:43:03.947066 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b5a5-account-create-update-5vbwj"] Feb 19 14:43:03 crc kubenswrapper[4861]: I0219 14:43:03.948019 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b5a5-account-create-update-5vbwj" Feb 19 14:43:03 crc kubenswrapper[4861]: I0219 14:43:03.953790 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 19 14:43:03 crc kubenswrapper[4861]: I0219 14:43:03.964111 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b5a5-account-create-update-5vbwj"] Feb 19 14:43:04 crc kubenswrapper[4861]: I0219 14:43:04.002890 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1994ba6-a5ca-410a-a998-e82972a07ecd" path="/var/lib/kubelet/pods/c1994ba6-a5ca-410a-a998-e82972a07ecd/volumes" Feb 19 14:43:04 crc kubenswrapper[4861]: I0219 14:43:04.020014 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6545x\" (UniqueName: \"kubernetes.io/projected/050b6f04-ae24-4a20-9e72-e46486c55baf-kube-api-access-6545x\") pod \"neutron-b5a5-account-create-update-5vbwj\" (UID: \"050b6f04-ae24-4a20-9e72-e46486c55baf\") " pod="openstack/neutron-b5a5-account-create-update-5vbwj" Feb 19 14:43:04 crc kubenswrapper[4861]: I0219 14:43:04.020083 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/050b6f04-ae24-4a20-9e72-e46486c55baf-operator-scripts\") pod \"neutron-b5a5-account-create-update-5vbwj\" (UID: \"050b6f04-ae24-4a20-9e72-e46486c55baf\") " pod="openstack/neutron-b5a5-account-create-update-5vbwj" Feb 19 14:43:04 crc kubenswrapper[4861]: I0219 14:43:04.020119 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7aa1b35-d0af-428e-9801-e73311fa6e9c-operator-scripts\") pod \"neutron-db-create-pn49r\" (UID: \"c7aa1b35-d0af-428e-9801-e73311fa6e9c\") " pod="openstack/neutron-db-create-pn49r" Feb 19 14:43:04 crc kubenswrapper[4861]: I0219 14:43:04.020158 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlfxd\" (UniqueName: \"kubernetes.io/projected/c7aa1b35-d0af-428e-9801-e73311fa6e9c-kube-api-access-hlfxd\") pod \"neutron-db-create-pn49r\" (UID: \"c7aa1b35-d0af-428e-9801-e73311fa6e9c\") " pod="openstack/neutron-db-create-pn49r" Feb 19 14:43:04 crc kubenswrapper[4861]: I0219 14:43:04.021225 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7aa1b35-d0af-428e-9801-e73311fa6e9c-operator-scripts\") pod \"neutron-db-create-pn49r\" (UID: \"c7aa1b35-d0af-428e-9801-e73311fa6e9c\") " pod="openstack/neutron-db-create-pn49r" Feb 19 14:43:04 crc kubenswrapper[4861]: I0219 14:43:04.040633 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlfxd\" (UniqueName: \"kubernetes.io/projected/c7aa1b35-d0af-428e-9801-e73311fa6e9c-kube-api-access-hlfxd\") pod \"neutron-db-create-pn49r\" (UID: \"c7aa1b35-d0af-428e-9801-e73311fa6e9c\") " pod="openstack/neutron-db-create-pn49r" Feb 19 14:43:04 crc kubenswrapper[4861]: I0219 14:43:04.121662 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/050b6f04-ae24-4a20-9e72-e46486c55baf-operator-scripts\") pod \"neutron-b5a5-account-create-update-5vbwj\" (UID: \"050b6f04-ae24-4a20-9e72-e46486c55baf\") " pod="openstack/neutron-b5a5-account-create-update-5vbwj" Feb 19 14:43:04 crc kubenswrapper[4861]: I0219 14:43:04.122069 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-6545x\" (UniqueName: \"kubernetes.io/projected/050b6f04-ae24-4a20-9e72-e46486c55baf-kube-api-access-6545x\") pod \"neutron-b5a5-account-create-update-5vbwj\" (UID: \"050b6f04-ae24-4a20-9e72-e46486c55baf\") " pod="openstack/neutron-b5a5-account-create-update-5vbwj" Feb 19 14:43:04 crc kubenswrapper[4861]: I0219 14:43:04.122403 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/050b6f04-ae24-4a20-9e72-e46486c55baf-operator-scripts\") pod \"neutron-b5a5-account-create-update-5vbwj\" (UID: \"050b6f04-ae24-4a20-9e72-e46486c55baf\") " pod="openstack/neutron-b5a5-account-create-update-5vbwj" Feb 19 14:43:04 crc kubenswrapper[4861]: I0219 14:43:04.142474 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6545x\" (UniqueName: \"kubernetes.io/projected/050b6f04-ae24-4a20-9e72-e46486c55baf-kube-api-access-6545x\") pod \"neutron-b5a5-account-create-update-5vbwj\" (UID: \"050b6f04-ae24-4a20-9e72-e46486c55baf\") " pod="openstack/neutron-b5a5-account-create-update-5vbwj" Feb 19 14:43:04 crc kubenswrapper[4861]: I0219 14:43:04.170561 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-pn49r" Feb 19 14:43:04 crc kubenswrapper[4861]: I0219 14:43:04.272411 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b5a5-account-create-update-5vbwj" Feb 19 14:43:04 crc kubenswrapper[4861]: I0219 14:43:04.749753 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-pn49r"] Feb 19 14:43:04 crc kubenswrapper[4861]: I0219 14:43:04.857733 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b5a5-account-create-update-5vbwj"] Feb 19 14:43:04 crc kubenswrapper[4861]: W0219 14:43:04.873728 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod050b6f04_ae24_4a20_9e72_e46486c55baf.slice/crio-0bc7c7ab55f490592bf46a1cad8fc443824c0693b9f00ae9e35950eee286aa5b WatchSource:0}: Error finding container 0bc7c7ab55f490592bf46a1cad8fc443824c0693b9f00ae9e35950eee286aa5b: Status 404 returned error can't find the container with id 0bc7c7ab55f490592bf46a1cad8fc443824c0693b9f00ae9e35950eee286aa5b Feb 19 14:43:05 crc kubenswrapper[4861]: I0219 14:43:05.606376 4861 generic.go:334] "Generic (PLEG): container finished" podID="c7aa1b35-d0af-428e-9801-e73311fa6e9c" containerID="0ba5cc269e53ef935c359889eb10350c2acca5bb017c645e15db3985152ea7cf" exitCode=0 Feb 19 14:43:05 crc kubenswrapper[4861]: I0219 14:43:05.606485 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-pn49r" event={"ID":"c7aa1b35-d0af-428e-9801-e73311fa6e9c","Type":"ContainerDied","Data":"0ba5cc269e53ef935c359889eb10350c2acca5bb017c645e15db3985152ea7cf"} Feb 19 14:43:05 crc kubenswrapper[4861]: I0219 14:43:05.606969 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-pn49r" event={"ID":"c7aa1b35-d0af-428e-9801-e73311fa6e9c","Type":"ContainerStarted","Data":"6ab099bc47ba275096f5e68dcd5adc62f09bea149c17ee801a299efa1ebb4d9c"} Feb 19 14:43:05 crc kubenswrapper[4861]: I0219 14:43:05.609556 4861 generic.go:334] "Generic (PLEG): container finished" podID="050b6f04-ae24-4a20-9e72-e46486c55baf" 
containerID="3c80d0ddac8a0ffc2131c21c56cfc8f85641ddbe72ae4b9755d56dec5a72c1f7" exitCode=0 Feb 19 14:43:05 crc kubenswrapper[4861]: I0219 14:43:05.609631 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b5a5-account-create-update-5vbwj" event={"ID":"050b6f04-ae24-4a20-9e72-e46486c55baf","Type":"ContainerDied","Data":"3c80d0ddac8a0ffc2131c21c56cfc8f85641ddbe72ae4b9755d56dec5a72c1f7"} Feb 19 14:43:05 crc kubenswrapper[4861]: I0219 14:43:05.609676 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b5a5-account-create-update-5vbwj" event={"ID":"050b6f04-ae24-4a20-9e72-e46486c55baf","Type":"ContainerStarted","Data":"0bc7c7ab55f490592bf46a1cad8fc443824c0693b9f00ae9e35950eee286aa5b"} Feb 19 14:43:07 crc kubenswrapper[4861]: I0219 14:43:07.005290 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b5a5-account-create-update-5vbwj" Feb 19 14:43:07 crc kubenswrapper[4861]: I0219 14:43:07.012155 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-pn49r" Feb 19 14:43:07 crc kubenswrapper[4861]: I0219 14:43:07.095550 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/050b6f04-ae24-4a20-9e72-e46486c55baf-operator-scripts\") pod \"050b6f04-ae24-4a20-9e72-e46486c55baf\" (UID: \"050b6f04-ae24-4a20-9e72-e46486c55baf\") " Feb 19 14:43:07 crc kubenswrapper[4861]: I0219 14:43:07.095708 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6545x\" (UniqueName: \"kubernetes.io/projected/050b6f04-ae24-4a20-9e72-e46486c55baf-kube-api-access-6545x\") pod \"050b6f04-ae24-4a20-9e72-e46486c55baf\" (UID: \"050b6f04-ae24-4a20-9e72-e46486c55baf\") " Feb 19 14:43:07 crc kubenswrapper[4861]: I0219 14:43:07.095757 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlfxd\" (UniqueName: \"kubernetes.io/projected/c7aa1b35-d0af-428e-9801-e73311fa6e9c-kube-api-access-hlfxd\") pod \"c7aa1b35-d0af-428e-9801-e73311fa6e9c\" (UID: \"c7aa1b35-d0af-428e-9801-e73311fa6e9c\") " Feb 19 14:43:07 crc kubenswrapper[4861]: I0219 14:43:07.095868 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7aa1b35-d0af-428e-9801-e73311fa6e9c-operator-scripts\") pod \"c7aa1b35-d0af-428e-9801-e73311fa6e9c\" (UID: \"c7aa1b35-d0af-428e-9801-e73311fa6e9c\") " Feb 19 14:43:07 crc kubenswrapper[4861]: I0219 14:43:07.096491 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/050b6f04-ae24-4a20-9e72-e46486c55baf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "050b6f04-ae24-4a20-9e72-e46486c55baf" (UID: "050b6f04-ae24-4a20-9e72-e46486c55baf"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:43:07 crc kubenswrapper[4861]: I0219 14:43:07.097537 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7aa1b35-d0af-428e-9801-e73311fa6e9c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c7aa1b35-d0af-428e-9801-e73311fa6e9c" (UID: "c7aa1b35-d0af-428e-9801-e73311fa6e9c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:43:07 crc kubenswrapper[4861]: I0219 14:43:07.112599 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7aa1b35-d0af-428e-9801-e73311fa6e9c-kube-api-access-hlfxd" (OuterVolumeSpecName: "kube-api-access-hlfxd") pod "c7aa1b35-d0af-428e-9801-e73311fa6e9c" (UID: "c7aa1b35-d0af-428e-9801-e73311fa6e9c"). InnerVolumeSpecName "kube-api-access-hlfxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:43:07 crc kubenswrapper[4861]: I0219 14:43:07.112702 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/050b6f04-ae24-4a20-9e72-e46486c55baf-kube-api-access-6545x" (OuterVolumeSpecName: "kube-api-access-6545x") pod "050b6f04-ae24-4a20-9e72-e46486c55baf" (UID: "050b6f04-ae24-4a20-9e72-e46486c55baf"). InnerVolumeSpecName "kube-api-access-6545x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:43:07 crc kubenswrapper[4861]: I0219 14:43:07.198497 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6545x\" (UniqueName: \"kubernetes.io/projected/050b6f04-ae24-4a20-9e72-e46486c55baf-kube-api-access-6545x\") on node \"crc\" DevicePath \"\"" Feb 19 14:43:07 crc kubenswrapper[4861]: I0219 14:43:07.198556 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlfxd\" (UniqueName: \"kubernetes.io/projected/c7aa1b35-d0af-428e-9801-e73311fa6e9c-kube-api-access-hlfxd\") on node \"crc\" DevicePath \"\"" Feb 19 14:43:07 crc kubenswrapper[4861]: I0219 14:43:07.198581 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7aa1b35-d0af-428e-9801-e73311fa6e9c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:43:07 crc kubenswrapper[4861]: I0219 14:43:07.198604 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/050b6f04-ae24-4a20-9e72-e46486c55baf-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:43:07 crc kubenswrapper[4861]: I0219 14:43:07.629973 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b5a5-account-create-update-5vbwj" Feb 19 14:43:07 crc kubenswrapper[4861]: I0219 14:43:07.629973 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b5a5-account-create-update-5vbwj" event={"ID":"050b6f04-ae24-4a20-9e72-e46486c55baf","Type":"ContainerDied","Data":"0bc7c7ab55f490592bf46a1cad8fc443824c0693b9f00ae9e35950eee286aa5b"} Feb 19 14:43:07 crc kubenswrapper[4861]: I0219 14:43:07.630171 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bc7c7ab55f490592bf46a1cad8fc443824c0693b9f00ae9e35950eee286aa5b" Feb 19 14:43:07 crc kubenswrapper[4861]: I0219 14:43:07.631878 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-pn49r" event={"ID":"c7aa1b35-d0af-428e-9801-e73311fa6e9c","Type":"ContainerDied","Data":"6ab099bc47ba275096f5e68dcd5adc62f09bea149c17ee801a299efa1ebb4d9c"} Feb 19 14:43:07 crc kubenswrapper[4861]: I0219 14:43:07.631925 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ab099bc47ba275096f5e68dcd5adc62f09bea149c17ee801a299efa1ebb4d9c" Feb 19 14:43:07 crc kubenswrapper[4861]: I0219 14:43:07.631963 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-pn49r" Feb 19 14:43:09 crc kubenswrapper[4861]: I0219 14:43:09.268365 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-tp684"] Feb 19 14:43:09 crc kubenswrapper[4861]: E0219 14:43:09.268976 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050b6f04-ae24-4a20-9e72-e46486c55baf" containerName="mariadb-account-create-update" Feb 19 14:43:09 crc kubenswrapper[4861]: I0219 14:43:09.268988 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="050b6f04-ae24-4a20-9e72-e46486c55baf" containerName="mariadb-account-create-update" Feb 19 14:43:09 crc kubenswrapper[4861]: E0219 14:43:09.268997 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7aa1b35-d0af-428e-9801-e73311fa6e9c" containerName="mariadb-database-create" Feb 19 14:43:09 crc kubenswrapper[4861]: I0219 14:43:09.269003 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7aa1b35-d0af-428e-9801-e73311fa6e9c" containerName="mariadb-database-create" Feb 19 14:43:09 crc kubenswrapper[4861]: I0219 14:43:09.269145 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="050b6f04-ae24-4a20-9e72-e46486c55baf" containerName="mariadb-account-create-update" Feb 19 14:43:09 crc kubenswrapper[4861]: I0219 14:43:09.269165 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7aa1b35-d0af-428e-9801-e73311fa6e9c" containerName="mariadb-database-create" Feb 19 14:43:09 crc kubenswrapper[4861]: I0219 14:43:09.269697 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-tp684" Feb 19 14:43:09 crc kubenswrapper[4861]: I0219 14:43:09.272066 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-k6xf4" Feb 19 14:43:09 crc kubenswrapper[4861]: I0219 14:43:09.272439 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 19 14:43:09 crc kubenswrapper[4861]: I0219 14:43:09.273238 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 19 14:43:09 crc kubenswrapper[4861]: I0219 14:43:09.285887 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-tp684"] Feb 19 14:43:09 crc kubenswrapper[4861]: I0219 14:43:09.337922 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0cbdc791-bf33-404f-a1ef-179edd71b787-config\") pod \"neutron-db-sync-tp684\" (UID: \"0cbdc791-bf33-404f-a1ef-179edd71b787\") " pod="openstack/neutron-db-sync-tp684" Feb 19 14:43:09 crc kubenswrapper[4861]: I0219 14:43:09.338070 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbdzc\" (UniqueName: \"kubernetes.io/projected/0cbdc791-bf33-404f-a1ef-179edd71b787-kube-api-access-zbdzc\") pod \"neutron-db-sync-tp684\" (UID: \"0cbdc791-bf33-404f-a1ef-179edd71b787\") " pod="openstack/neutron-db-sync-tp684" Feb 19 14:43:09 crc kubenswrapper[4861]: I0219 14:43:09.338179 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cbdc791-bf33-404f-a1ef-179edd71b787-combined-ca-bundle\") pod \"neutron-db-sync-tp684\" (UID: \"0cbdc791-bf33-404f-a1ef-179edd71b787\") " pod="openstack/neutron-db-sync-tp684" Feb 19 14:43:09 crc kubenswrapper[4861]: I0219 14:43:09.439626 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cbdc791-bf33-404f-a1ef-179edd71b787-combined-ca-bundle\") pod \"neutron-db-sync-tp684\" (UID: \"0cbdc791-bf33-404f-a1ef-179edd71b787\") " pod="openstack/neutron-db-sync-tp684" Feb 19 14:43:09 crc kubenswrapper[4861]: I0219 14:43:09.439765 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0cbdc791-bf33-404f-a1ef-179edd71b787-config\") pod \"neutron-db-sync-tp684\" (UID: \"0cbdc791-bf33-404f-a1ef-179edd71b787\") " pod="openstack/neutron-db-sync-tp684" Feb 19 14:43:09 crc kubenswrapper[4861]: I0219 14:43:09.439802 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbdzc\" (UniqueName: \"kubernetes.io/projected/0cbdc791-bf33-404f-a1ef-179edd71b787-kube-api-access-zbdzc\") pod \"neutron-db-sync-tp684\" (UID: \"0cbdc791-bf33-404f-a1ef-179edd71b787\") " pod="openstack/neutron-db-sync-tp684" Feb 19 14:43:09 crc kubenswrapper[4861]: I0219 14:43:09.444833 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cbdc791-bf33-404f-a1ef-179edd71b787-combined-ca-bundle\") pod \"neutron-db-sync-tp684\" (UID: \"0cbdc791-bf33-404f-a1ef-179edd71b787\") " pod="openstack/neutron-db-sync-tp684" Feb 19 14:43:09 crc kubenswrapper[4861]: I0219 14:43:09.454832 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbdzc\" (UniqueName: \"kubernetes.io/projected/0cbdc791-bf33-404f-a1ef-179edd71b787-kube-api-access-zbdzc\") pod \"neutron-db-sync-tp684\" (UID: \"0cbdc791-bf33-404f-a1ef-179edd71b787\") " pod="openstack/neutron-db-sync-tp684" Feb 19 14:43:09 crc kubenswrapper[4861]: I0219 14:43:09.458211 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/0cbdc791-bf33-404f-a1ef-179edd71b787-config\") pod \"neutron-db-sync-tp684\" (UID: \"0cbdc791-bf33-404f-a1ef-179edd71b787\") " pod="openstack/neutron-db-sync-tp684" Feb 19 14:43:09 crc kubenswrapper[4861]: I0219 14:43:09.587342 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-tp684" Feb 19 14:43:09 crc kubenswrapper[4861]: I0219 14:43:09.856478 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-tp684"] Feb 19 14:43:10 crc kubenswrapper[4861]: I0219 14:43:10.659913 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tp684" event={"ID":"0cbdc791-bf33-404f-a1ef-179edd71b787","Type":"ContainerStarted","Data":"78691913cbec1ce0ed55c6a435fddd8520c983a4f81cfa73d094d402a7840043"} Feb 19 14:43:10 crc kubenswrapper[4861]: I0219 14:43:10.659965 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tp684" event={"ID":"0cbdc791-bf33-404f-a1ef-179edd71b787","Type":"ContainerStarted","Data":"f4d725cd0a6faa0583fd09496d47b769f40231fb2fc88dbd24b604938d185f41"} Feb 19 14:43:15 crc kubenswrapper[4861]: I0219 14:43:15.991241 4861 scope.go:117] "RemoveContainer" containerID="55684c363f793a2ee4b1d80659dbf54faf32701883e18291ec4e32c294ea8bad" Feb 19 14:43:16 crc kubenswrapper[4861]: I0219 14:43:16.728297 4861 generic.go:334] "Generic (PLEG): container finished" podID="0cbdc791-bf33-404f-a1ef-179edd71b787" containerID="78691913cbec1ce0ed55c6a435fddd8520c983a4f81cfa73d094d402a7840043" exitCode=0 Feb 19 14:43:16 crc kubenswrapper[4861]: I0219 14:43:16.728588 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tp684" event={"ID":"0cbdc791-bf33-404f-a1ef-179edd71b787","Type":"ContainerDied","Data":"78691913cbec1ce0ed55c6a435fddd8520c983a4f81cfa73d094d402a7840043"} Feb 19 14:43:16 crc kubenswrapper[4861]: I0219 14:43:16.732835 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"857acc905a3020c924da1c5bc09451d3afdf4f6b0afc35920779725d181fa1fb"} Feb 19 14:43:18 crc kubenswrapper[4861]: I0219 14:43:18.224154 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-tp684" Feb 19 14:43:18 crc kubenswrapper[4861]: I0219 14:43:18.326400 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cbdc791-bf33-404f-a1ef-179edd71b787-combined-ca-bundle\") pod \"0cbdc791-bf33-404f-a1ef-179edd71b787\" (UID: \"0cbdc791-bf33-404f-a1ef-179edd71b787\") " Feb 19 14:43:18 crc kubenswrapper[4861]: I0219 14:43:18.326844 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0cbdc791-bf33-404f-a1ef-179edd71b787-config\") pod \"0cbdc791-bf33-404f-a1ef-179edd71b787\" (UID: \"0cbdc791-bf33-404f-a1ef-179edd71b787\") " Feb 19 14:43:18 crc kubenswrapper[4861]: I0219 14:43:18.326906 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbdzc\" (UniqueName: \"kubernetes.io/projected/0cbdc791-bf33-404f-a1ef-179edd71b787-kube-api-access-zbdzc\") pod \"0cbdc791-bf33-404f-a1ef-179edd71b787\" (UID: \"0cbdc791-bf33-404f-a1ef-179edd71b787\") " Feb 19 14:43:18 crc kubenswrapper[4861]: I0219 14:43:18.344635 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cbdc791-bf33-404f-a1ef-179edd71b787-kube-api-access-zbdzc" (OuterVolumeSpecName: "kube-api-access-zbdzc") pod "0cbdc791-bf33-404f-a1ef-179edd71b787" (UID: "0cbdc791-bf33-404f-a1ef-179edd71b787"). InnerVolumeSpecName "kube-api-access-zbdzc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:43:18 crc kubenswrapper[4861]: I0219 14:43:18.365336 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cbdc791-bf33-404f-a1ef-179edd71b787-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0cbdc791-bf33-404f-a1ef-179edd71b787" (UID: "0cbdc791-bf33-404f-a1ef-179edd71b787"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:43:18 crc kubenswrapper[4861]: I0219 14:43:18.377528 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cbdc791-bf33-404f-a1ef-179edd71b787-config" (OuterVolumeSpecName: "config") pod "0cbdc791-bf33-404f-a1ef-179edd71b787" (UID: "0cbdc791-bf33-404f-a1ef-179edd71b787"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:43:18 crc kubenswrapper[4861]: I0219 14:43:18.429315 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0cbdc791-bf33-404f-a1ef-179edd71b787-config\") on node \"crc\" DevicePath \"\"" Feb 19 14:43:18 crc kubenswrapper[4861]: I0219 14:43:18.429359 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbdzc\" (UniqueName: \"kubernetes.io/projected/0cbdc791-bf33-404f-a1ef-179edd71b787-kube-api-access-zbdzc\") on node \"crc\" DevicePath \"\"" Feb 19 14:43:18 crc kubenswrapper[4861]: I0219 14:43:18.429375 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cbdc791-bf33-404f-a1ef-179edd71b787-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:43:18 crc kubenswrapper[4861]: I0219 14:43:18.756778 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tp684" 
event={"ID":"0cbdc791-bf33-404f-a1ef-179edd71b787","Type":"ContainerDied","Data":"f4d725cd0a6faa0583fd09496d47b769f40231fb2fc88dbd24b604938d185f41"} Feb 19 14:43:18 crc kubenswrapper[4861]: I0219 14:43:18.756837 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4d725cd0a6faa0583fd09496d47b769f40231fb2fc88dbd24b604938d185f41" Feb 19 14:43:18 crc kubenswrapper[4861]: I0219 14:43:18.756852 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-tp684" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.075802 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c87879445-g9jvc"] Feb 19 14:43:19 crc kubenswrapper[4861]: E0219 14:43:19.076388 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cbdc791-bf33-404f-a1ef-179edd71b787" containerName="neutron-db-sync" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.076405 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cbdc791-bf33-404f-a1ef-179edd71b787" containerName="neutron-db-sync" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.076597 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cbdc791-bf33-404f-a1ef-179edd71b787" containerName="neutron-db-sync" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.077401 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c87879445-g9jvc" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.098526 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c87879445-g9jvc"] Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.151086 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5584a713-cb0e-4f6c-bb04-86231af4306e-dns-svc\") pod \"dnsmasq-dns-7c87879445-g9jvc\" (UID: \"5584a713-cb0e-4f6c-bb04-86231af4306e\") " pod="openstack/dnsmasq-dns-7c87879445-g9jvc" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.151135 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5584a713-cb0e-4f6c-bb04-86231af4306e-ovsdbserver-sb\") pod \"dnsmasq-dns-7c87879445-g9jvc\" (UID: \"5584a713-cb0e-4f6c-bb04-86231af4306e\") " pod="openstack/dnsmasq-dns-7c87879445-g9jvc" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.151204 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5584a713-cb0e-4f6c-bb04-86231af4306e-ovsdbserver-nb\") pod \"dnsmasq-dns-7c87879445-g9jvc\" (UID: \"5584a713-cb0e-4f6c-bb04-86231af4306e\") " pod="openstack/dnsmasq-dns-7c87879445-g9jvc" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.151269 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn5fn\" (UniqueName: \"kubernetes.io/projected/5584a713-cb0e-4f6c-bb04-86231af4306e-kube-api-access-tn5fn\") pod \"dnsmasq-dns-7c87879445-g9jvc\" (UID: \"5584a713-cb0e-4f6c-bb04-86231af4306e\") " pod="openstack/dnsmasq-dns-7c87879445-g9jvc" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.151297 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5584a713-cb0e-4f6c-bb04-86231af4306e-config\") pod \"dnsmasq-dns-7c87879445-g9jvc\" (UID: \"5584a713-cb0e-4f6c-bb04-86231af4306e\") " pod="openstack/dnsmasq-dns-7c87879445-g9jvc" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.157974 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-864d776f98-pqsqf"] Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.159297 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-864d776f98-pqsqf" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.164639 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-k6xf4" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.164653 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.164839 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.164859 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.168362 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-864d776f98-pqsqf"] Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.252918 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5584a713-cb0e-4f6c-bb04-86231af4306e-ovsdbserver-nb\") pod \"dnsmasq-dns-7c87879445-g9jvc\" (UID: \"5584a713-cb0e-4f6c-bb04-86231af4306e\") " pod="openstack/dnsmasq-dns-7c87879445-g9jvc" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.253014 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a077ecb8-bed9-445f-b3dd-c2b163da8df4-ovndb-tls-certs\") pod \"neutron-864d776f98-pqsqf\" (UID: \"a077ecb8-bed9-445f-b3dd-c2b163da8df4\") " pod="openstack/neutron-864d776f98-pqsqf" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.253062 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn5fn\" (UniqueName: \"kubernetes.io/projected/5584a713-cb0e-4f6c-bb04-86231af4306e-kube-api-access-tn5fn\") pod \"dnsmasq-dns-7c87879445-g9jvc\" (UID: \"5584a713-cb0e-4f6c-bb04-86231af4306e\") " pod="openstack/dnsmasq-dns-7c87879445-g9jvc" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.253083 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a077ecb8-bed9-445f-b3dd-c2b163da8df4-httpd-config\") pod \"neutron-864d776f98-pqsqf\" (UID: \"a077ecb8-bed9-445f-b3dd-c2b163da8df4\") " pod="openstack/neutron-864d776f98-pqsqf" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.253139 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a077ecb8-bed9-445f-b3dd-c2b163da8df4-combined-ca-bundle\") pod \"neutron-864d776f98-pqsqf\" (UID: \"a077ecb8-bed9-445f-b3dd-c2b163da8df4\") " pod="openstack/neutron-864d776f98-pqsqf" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.253159 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lclx\" (UniqueName: \"kubernetes.io/projected/a077ecb8-bed9-445f-b3dd-c2b163da8df4-kube-api-access-9lclx\") pod \"neutron-864d776f98-pqsqf\" (UID: \"a077ecb8-bed9-445f-b3dd-c2b163da8df4\") " pod="openstack/neutron-864d776f98-pqsqf" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.253217 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/5584a713-cb0e-4f6c-bb04-86231af4306e-config\") pod \"dnsmasq-dns-7c87879445-g9jvc\" (UID: \"5584a713-cb0e-4f6c-bb04-86231af4306e\") " pod="openstack/dnsmasq-dns-7c87879445-g9jvc" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.253246 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a077ecb8-bed9-445f-b3dd-c2b163da8df4-config\") pod \"neutron-864d776f98-pqsqf\" (UID: \"a077ecb8-bed9-445f-b3dd-c2b163da8df4\") " pod="openstack/neutron-864d776f98-pqsqf" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.253290 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5584a713-cb0e-4f6c-bb04-86231af4306e-dns-svc\") pod \"dnsmasq-dns-7c87879445-g9jvc\" (UID: \"5584a713-cb0e-4f6c-bb04-86231af4306e\") " pod="openstack/dnsmasq-dns-7c87879445-g9jvc" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.253316 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5584a713-cb0e-4f6c-bb04-86231af4306e-ovsdbserver-sb\") pod \"dnsmasq-dns-7c87879445-g9jvc\" (UID: \"5584a713-cb0e-4f6c-bb04-86231af4306e\") " pod="openstack/dnsmasq-dns-7c87879445-g9jvc" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.253857 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5584a713-cb0e-4f6c-bb04-86231af4306e-ovsdbserver-nb\") pod \"dnsmasq-dns-7c87879445-g9jvc\" (UID: \"5584a713-cb0e-4f6c-bb04-86231af4306e\") " pod="openstack/dnsmasq-dns-7c87879445-g9jvc" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.254049 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5584a713-cb0e-4f6c-bb04-86231af4306e-config\") pod 
\"dnsmasq-dns-7c87879445-g9jvc\" (UID: \"5584a713-cb0e-4f6c-bb04-86231af4306e\") " pod="openstack/dnsmasq-dns-7c87879445-g9jvc" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.254234 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5584a713-cb0e-4f6c-bb04-86231af4306e-ovsdbserver-sb\") pod \"dnsmasq-dns-7c87879445-g9jvc\" (UID: \"5584a713-cb0e-4f6c-bb04-86231af4306e\") " pod="openstack/dnsmasq-dns-7c87879445-g9jvc" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.254263 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5584a713-cb0e-4f6c-bb04-86231af4306e-dns-svc\") pod \"dnsmasq-dns-7c87879445-g9jvc\" (UID: \"5584a713-cb0e-4f6c-bb04-86231af4306e\") " pod="openstack/dnsmasq-dns-7c87879445-g9jvc" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.270209 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn5fn\" (UniqueName: \"kubernetes.io/projected/5584a713-cb0e-4f6c-bb04-86231af4306e-kube-api-access-tn5fn\") pod \"dnsmasq-dns-7c87879445-g9jvc\" (UID: \"5584a713-cb0e-4f6c-bb04-86231af4306e\") " pod="openstack/dnsmasq-dns-7c87879445-g9jvc" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.354867 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a077ecb8-bed9-445f-b3dd-c2b163da8df4-ovndb-tls-certs\") pod \"neutron-864d776f98-pqsqf\" (UID: \"a077ecb8-bed9-445f-b3dd-c2b163da8df4\") " pod="openstack/neutron-864d776f98-pqsqf" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.354929 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a077ecb8-bed9-445f-b3dd-c2b163da8df4-httpd-config\") pod \"neutron-864d776f98-pqsqf\" (UID: \"a077ecb8-bed9-445f-b3dd-c2b163da8df4\") " 
pod="openstack/neutron-864d776f98-pqsqf" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.354953 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a077ecb8-bed9-445f-b3dd-c2b163da8df4-combined-ca-bundle\") pod \"neutron-864d776f98-pqsqf\" (UID: \"a077ecb8-bed9-445f-b3dd-c2b163da8df4\") " pod="openstack/neutron-864d776f98-pqsqf" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.354971 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lclx\" (UniqueName: \"kubernetes.io/projected/a077ecb8-bed9-445f-b3dd-c2b163da8df4-kube-api-access-9lclx\") pod \"neutron-864d776f98-pqsqf\" (UID: \"a077ecb8-bed9-445f-b3dd-c2b163da8df4\") " pod="openstack/neutron-864d776f98-pqsqf" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.355001 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a077ecb8-bed9-445f-b3dd-c2b163da8df4-config\") pod \"neutron-864d776f98-pqsqf\" (UID: \"a077ecb8-bed9-445f-b3dd-c2b163da8df4\") " pod="openstack/neutron-864d776f98-pqsqf" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.362016 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a077ecb8-bed9-445f-b3dd-c2b163da8df4-combined-ca-bundle\") pod \"neutron-864d776f98-pqsqf\" (UID: \"a077ecb8-bed9-445f-b3dd-c2b163da8df4\") " pod="openstack/neutron-864d776f98-pqsqf" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.363920 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a077ecb8-bed9-445f-b3dd-c2b163da8df4-httpd-config\") pod \"neutron-864d776f98-pqsqf\" (UID: \"a077ecb8-bed9-445f-b3dd-c2b163da8df4\") " pod="openstack/neutron-864d776f98-pqsqf" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.368206 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a077ecb8-bed9-445f-b3dd-c2b163da8df4-config\") pod \"neutron-864d776f98-pqsqf\" (UID: \"a077ecb8-bed9-445f-b3dd-c2b163da8df4\") " pod="openstack/neutron-864d776f98-pqsqf" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.372839 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a077ecb8-bed9-445f-b3dd-c2b163da8df4-ovndb-tls-certs\") pod \"neutron-864d776f98-pqsqf\" (UID: \"a077ecb8-bed9-445f-b3dd-c2b163da8df4\") " pod="openstack/neutron-864d776f98-pqsqf" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.428866 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lclx\" (UniqueName: \"kubernetes.io/projected/a077ecb8-bed9-445f-b3dd-c2b163da8df4-kube-api-access-9lclx\") pod \"neutron-864d776f98-pqsqf\" (UID: \"a077ecb8-bed9-445f-b3dd-c2b163da8df4\") " pod="openstack/neutron-864d776f98-pqsqf" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.433593 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c87879445-g9jvc" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.479041 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-864d776f98-pqsqf" Feb 19 14:43:19 crc kubenswrapper[4861]: I0219 14:43:19.886244 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c87879445-g9jvc"] Feb 19 14:43:20 crc kubenswrapper[4861]: I0219 14:43:20.094658 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-864d776f98-pqsqf"] Feb 19 14:43:20 crc kubenswrapper[4861]: I0219 14:43:20.771207 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-864d776f98-pqsqf" event={"ID":"a077ecb8-bed9-445f-b3dd-c2b163da8df4","Type":"ContainerStarted","Data":"faf8f03706240364ac68f90661a6506f31274b574898381f811b42952c1b0d93"} Feb 19 14:43:20 crc kubenswrapper[4861]: I0219 14:43:20.771616 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-864d776f98-pqsqf" event={"ID":"a077ecb8-bed9-445f-b3dd-c2b163da8df4","Type":"ContainerStarted","Data":"ff4e45d1f824761cf7accc1789111a997f10d1402ae2354c701faa3ce9be97b3"} Feb 19 14:43:20 crc kubenswrapper[4861]: I0219 14:43:20.771639 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-864d776f98-pqsqf" event={"ID":"a077ecb8-bed9-445f-b3dd-c2b163da8df4","Type":"ContainerStarted","Data":"b0b798c4527acae3a0b1aff1a51c03766e889f878a54ec4f0f6fc16aced5d206"} Feb 19 14:43:20 crc kubenswrapper[4861]: I0219 14:43:20.772067 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-864d776f98-pqsqf" Feb 19 14:43:20 crc kubenswrapper[4861]: I0219 14:43:20.774070 4861 generic.go:334] "Generic (PLEG): container finished" podID="5584a713-cb0e-4f6c-bb04-86231af4306e" containerID="acb5a62917b9c01c641ec9d06285697e06afe46dc095c0d53168af838c6ffb72" exitCode=0 Feb 19 14:43:20 crc kubenswrapper[4861]: I0219 14:43:20.774115 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c87879445-g9jvc" 
event={"ID":"5584a713-cb0e-4f6c-bb04-86231af4306e","Type":"ContainerDied","Data":"acb5a62917b9c01c641ec9d06285697e06afe46dc095c0d53168af838c6ffb72"} Feb 19 14:43:20 crc kubenswrapper[4861]: I0219 14:43:20.774143 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c87879445-g9jvc" event={"ID":"5584a713-cb0e-4f6c-bb04-86231af4306e","Type":"ContainerStarted","Data":"d874c6ccd32b74491b34883642cb3d60efd25224eca350f83e9ebd570080cbae"} Feb 19 14:43:20 crc kubenswrapper[4861]: I0219 14:43:20.799205 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-864d776f98-pqsqf" podStartSLOduration=1.799185621 podStartE2EDuration="1.799185621s" podCreationTimestamp="2026-02-19 14:43:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:43:20.792954714 +0000 UTC m=+5615.454057942" watchObservedRunningTime="2026-02-19 14:43:20.799185621 +0000 UTC m=+5615.460288849" Feb 19 14:43:21 crc kubenswrapper[4861]: I0219 14:43:21.251640 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6d5dc9cd8f-tqndz"] Feb 19 14:43:21 crc kubenswrapper[4861]: I0219 14:43:21.253858 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6d5dc9cd8f-tqndz" Feb 19 14:43:21 crc kubenswrapper[4861]: I0219 14:43:21.259260 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 19 14:43:21 crc kubenswrapper[4861]: I0219 14:43:21.259474 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 19 14:43:21 crc kubenswrapper[4861]: I0219 14:43:21.264957 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d5dc9cd8f-tqndz"] Feb 19 14:43:21 crc kubenswrapper[4861]: I0219 14:43:21.387369 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1432e96f-0b8e-465e-b7dc-a70f5dd0b010-combined-ca-bundle\") pod \"neutron-6d5dc9cd8f-tqndz\" (UID: \"1432e96f-0b8e-465e-b7dc-a70f5dd0b010\") " pod="openstack/neutron-6d5dc9cd8f-tqndz" Feb 19 14:43:21 crc kubenswrapper[4861]: I0219 14:43:21.387559 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1432e96f-0b8e-465e-b7dc-a70f5dd0b010-public-tls-certs\") pod \"neutron-6d5dc9cd8f-tqndz\" (UID: \"1432e96f-0b8e-465e-b7dc-a70f5dd0b010\") " pod="openstack/neutron-6d5dc9cd8f-tqndz" Feb 19 14:43:21 crc kubenswrapper[4861]: I0219 14:43:21.387624 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1432e96f-0b8e-465e-b7dc-a70f5dd0b010-ovndb-tls-certs\") pod \"neutron-6d5dc9cd8f-tqndz\" (UID: \"1432e96f-0b8e-465e-b7dc-a70f5dd0b010\") " pod="openstack/neutron-6d5dc9cd8f-tqndz" Feb 19 14:43:21 crc kubenswrapper[4861]: I0219 14:43:21.387814 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/1432e96f-0b8e-465e-b7dc-a70f5dd0b010-httpd-config\") pod \"neutron-6d5dc9cd8f-tqndz\" (UID: \"1432e96f-0b8e-465e-b7dc-a70f5dd0b010\") " pod="openstack/neutron-6d5dc9cd8f-tqndz" Feb 19 14:43:21 crc kubenswrapper[4861]: I0219 14:43:21.387958 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1432e96f-0b8e-465e-b7dc-a70f5dd0b010-internal-tls-certs\") pod \"neutron-6d5dc9cd8f-tqndz\" (UID: \"1432e96f-0b8e-465e-b7dc-a70f5dd0b010\") " pod="openstack/neutron-6d5dc9cd8f-tqndz" Feb 19 14:43:21 crc kubenswrapper[4861]: I0219 14:43:21.388019 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g2ht\" (UniqueName: \"kubernetes.io/projected/1432e96f-0b8e-465e-b7dc-a70f5dd0b010-kube-api-access-8g2ht\") pod \"neutron-6d5dc9cd8f-tqndz\" (UID: \"1432e96f-0b8e-465e-b7dc-a70f5dd0b010\") " pod="openstack/neutron-6d5dc9cd8f-tqndz" Feb 19 14:43:21 crc kubenswrapper[4861]: I0219 14:43:21.388084 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1432e96f-0b8e-465e-b7dc-a70f5dd0b010-config\") pod \"neutron-6d5dc9cd8f-tqndz\" (UID: \"1432e96f-0b8e-465e-b7dc-a70f5dd0b010\") " pod="openstack/neutron-6d5dc9cd8f-tqndz" Feb 19 14:43:21 crc kubenswrapper[4861]: I0219 14:43:21.489537 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1432e96f-0b8e-465e-b7dc-a70f5dd0b010-httpd-config\") pod \"neutron-6d5dc9cd8f-tqndz\" (UID: \"1432e96f-0b8e-465e-b7dc-a70f5dd0b010\") " pod="openstack/neutron-6d5dc9cd8f-tqndz" Feb 19 14:43:21 crc kubenswrapper[4861]: I0219 14:43:21.489603 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1432e96f-0b8e-465e-b7dc-a70f5dd0b010-internal-tls-certs\") pod \"neutron-6d5dc9cd8f-tqndz\" (UID: \"1432e96f-0b8e-465e-b7dc-a70f5dd0b010\") " pod="openstack/neutron-6d5dc9cd8f-tqndz" Feb 19 14:43:21 crc kubenswrapper[4861]: I0219 14:43:21.489631 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g2ht\" (UniqueName: \"kubernetes.io/projected/1432e96f-0b8e-465e-b7dc-a70f5dd0b010-kube-api-access-8g2ht\") pod \"neutron-6d5dc9cd8f-tqndz\" (UID: \"1432e96f-0b8e-465e-b7dc-a70f5dd0b010\") " pod="openstack/neutron-6d5dc9cd8f-tqndz" Feb 19 14:43:21 crc kubenswrapper[4861]: I0219 14:43:21.489663 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1432e96f-0b8e-465e-b7dc-a70f5dd0b010-config\") pod \"neutron-6d5dc9cd8f-tqndz\" (UID: \"1432e96f-0b8e-465e-b7dc-a70f5dd0b010\") " pod="openstack/neutron-6d5dc9cd8f-tqndz" Feb 19 14:43:21 crc kubenswrapper[4861]: I0219 14:43:21.489728 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1432e96f-0b8e-465e-b7dc-a70f5dd0b010-combined-ca-bundle\") pod \"neutron-6d5dc9cd8f-tqndz\" (UID: \"1432e96f-0b8e-465e-b7dc-a70f5dd0b010\") " pod="openstack/neutron-6d5dc9cd8f-tqndz" Feb 19 14:43:21 crc kubenswrapper[4861]: I0219 14:43:21.489756 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1432e96f-0b8e-465e-b7dc-a70f5dd0b010-public-tls-certs\") pod \"neutron-6d5dc9cd8f-tqndz\" (UID: \"1432e96f-0b8e-465e-b7dc-a70f5dd0b010\") " pod="openstack/neutron-6d5dc9cd8f-tqndz" Feb 19 14:43:21 crc kubenswrapper[4861]: I0219 14:43:21.489773 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1432e96f-0b8e-465e-b7dc-a70f5dd0b010-ovndb-tls-certs\") pod 
\"neutron-6d5dc9cd8f-tqndz\" (UID: \"1432e96f-0b8e-465e-b7dc-a70f5dd0b010\") " pod="openstack/neutron-6d5dc9cd8f-tqndz" Feb 19 14:43:21 crc kubenswrapper[4861]: I0219 14:43:21.494099 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1432e96f-0b8e-465e-b7dc-a70f5dd0b010-ovndb-tls-certs\") pod \"neutron-6d5dc9cd8f-tqndz\" (UID: \"1432e96f-0b8e-465e-b7dc-a70f5dd0b010\") " pod="openstack/neutron-6d5dc9cd8f-tqndz" Feb 19 14:43:21 crc kubenswrapper[4861]: I0219 14:43:21.494469 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1432e96f-0b8e-465e-b7dc-a70f5dd0b010-public-tls-certs\") pod \"neutron-6d5dc9cd8f-tqndz\" (UID: \"1432e96f-0b8e-465e-b7dc-a70f5dd0b010\") " pod="openstack/neutron-6d5dc9cd8f-tqndz" Feb 19 14:43:21 crc kubenswrapper[4861]: I0219 14:43:21.494578 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1432e96f-0b8e-465e-b7dc-a70f5dd0b010-internal-tls-certs\") pod \"neutron-6d5dc9cd8f-tqndz\" (UID: \"1432e96f-0b8e-465e-b7dc-a70f5dd0b010\") " pod="openstack/neutron-6d5dc9cd8f-tqndz" Feb 19 14:43:21 crc kubenswrapper[4861]: I0219 14:43:21.495988 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1432e96f-0b8e-465e-b7dc-a70f5dd0b010-httpd-config\") pod \"neutron-6d5dc9cd8f-tqndz\" (UID: \"1432e96f-0b8e-465e-b7dc-a70f5dd0b010\") " pod="openstack/neutron-6d5dc9cd8f-tqndz" Feb 19 14:43:21 crc kubenswrapper[4861]: I0219 14:43:21.501167 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1432e96f-0b8e-465e-b7dc-a70f5dd0b010-combined-ca-bundle\") pod \"neutron-6d5dc9cd8f-tqndz\" (UID: \"1432e96f-0b8e-465e-b7dc-a70f5dd0b010\") " pod="openstack/neutron-6d5dc9cd8f-tqndz" Feb 19 
14:43:21 crc kubenswrapper[4861]: I0219 14:43:21.501211 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1432e96f-0b8e-465e-b7dc-a70f5dd0b010-config\") pod \"neutron-6d5dc9cd8f-tqndz\" (UID: \"1432e96f-0b8e-465e-b7dc-a70f5dd0b010\") " pod="openstack/neutron-6d5dc9cd8f-tqndz" Feb 19 14:43:21 crc kubenswrapper[4861]: I0219 14:43:21.510891 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g2ht\" (UniqueName: \"kubernetes.io/projected/1432e96f-0b8e-465e-b7dc-a70f5dd0b010-kube-api-access-8g2ht\") pod \"neutron-6d5dc9cd8f-tqndz\" (UID: \"1432e96f-0b8e-465e-b7dc-a70f5dd0b010\") " pod="openstack/neutron-6d5dc9cd8f-tqndz" Feb 19 14:43:21 crc kubenswrapper[4861]: I0219 14:43:21.602545 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d5dc9cd8f-tqndz" Feb 19 14:43:21 crc kubenswrapper[4861]: I0219 14:43:21.787704 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c87879445-g9jvc" event={"ID":"5584a713-cb0e-4f6c-bb04-86231af4306e","Type":"ContainerStarted","Data":"6900fb320c0c1ccb08b35e7cc9e21533347b784c29738f096a6461d273809b6c"} Feb 19 14:43:21 crc kubenswrapper[4861]: I0219 14:43:21.788332 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c87879445-g9jvc" Feb 19 14:43:22 crc kubenswrapper[4861]: I0219 14:43:22.159694 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c87879445-g9jvc" podStartSLOduration=3.159666983 podStartE2EDuration="3.159666983s" podCreationTimestamp="2026-02-19 14:43:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:43:21.805611279 +0000 UTC m=+5616.466714507" watchObservedRunningTime="2026-02-19 14:43:22.159666983 +0000 UTC m=+5616.820770231" Feb 19 14:43:22 crc 
kubenswrapper[4861]: I0219 14:43:22.163121 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d5dc9cd8f-tqndz"] Feb 19 14:43:22 crc kubenswrapper[4861]: W0219 14:43:22.167973 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1432e96f_0b8e_465e_b7dc_a70f5dd0b010.slice/crio-d87119b55330821ca33805298f3204c931e4aab2735bde2ffb76dc9204ac4095 WatchSource:0}: Error finding container d87119b55330821ca33805298f3204c931e4aab2735bde2ffb76dc9204ac4095: Status 404 returned error can't find the container with id d87119b55330821ca33805298f3204c931e4aab2735bde2ffb76dc9204ac4095 Feb 19 14:43:22 crc kubenswrapper[4861]: I0219 14:43:22.794993 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d5dc9cd8f-tqndz" event={"ID":"1432e96f-0b8e-465e-b7dc-a70f5dd0b010","Type":"ContainerStarted","Data":"11fc2a6cb18406cb3f5665a2dd7346a403f7493d2d8afa0ed1bf7495a3a78e6f"} Feb 19 14:43:22 crc kubenswrapper[4861]: I0219 14:43:22.795382 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d5dc9cd8f-tqndz" event={"ID":"1432e96f-0b8e-465e-b7dc-a70f5dd0b010","Type":"ContainerStarted","Data":"9eae151b9036f50f16a20502bb6469130de89ef037be4a12f05716e5f15a70b1"} Feb 19 14:43:22 crc kubenswrapper[4861]: I0219 14:43:22.795409 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d5dc9cd8f-tqndz" event={"ID":"1432e96f-0b8e-465e-b7dc-a70f5dd0b010","Type":"ContainerStarted","Data":"d87119b55330821ca33805298f3204c931e4aab2735bde2ffb76dc9204ac4095"} Feb 19 14:43:23 crc kubenswrapper[4861]: I0219 14:43:23.808572 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6d5dc9cd8f-tqndz" Feb 19 14:43:29 crc kubenswrapper[4861]: I0219 14:43:29.436827 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c87879445-g9jvc" Feb 19 14:43:29 crc 
kubenswrapper[4861]: I0219 14:43:29.468763 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6d5dc9cd8f-tqndz" podStartSLOduration=8.468724805 podStartE2EDuration="8.468724805s" podCreationTimestamp="2026-02-19 14:43:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:43:22.819869125 +0000 UTC m=+5617.480972353" watchObservedRunningTime="2026-02-19 14:43:29.468724805 +0000 UTC m=+5624.129828073"
Feb 19 14:43:29 crc kubenswrapper[4861]: I0219 14:43:29.508011 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f4b849877-8n64j"]
Feb 19 14:43:29 crc kubenswrapper[4861]: I0219 14:43:29.508715 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f4b849877-8n64j" podUID="441b87bd-385d-439e-9248-b0f8dfac75c4" containerName="dnsmasq-dns" containerID="cri-o://914239992d790c752f0d279b19aa183b1e0253cc4200d9602d7f4a04f0667eaf" gracePeriod=10
Feb 19 14:43:29 crc kubenswrapper[4861]: I0219 14:43:29.871384 4861 generic.go:334] "Generic (PLEG): container finished" podID="441b87bd-385d-439e-9248-b0f8dfac75c4" containerID="914239992d790c752f0d279b19aa183b1e0253cc4200d9602d7f4a04f0667eaf" exitCode=0
Feb 19 14:43:29 crc kubenswrapper[4861]: I0219 14:43:29.871454 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f4b849877-8n64j" event={"ID":"441b87bd-385d-439e-9248-b0f8dfac75c4","Type":"ContainerDied","Data":"914239992d790c752f0d279b19aa183b1e0253cc4200d9602d7f4a04f0667eaf"}
Feb 19 14:43:30 crc kubenswrapper[4861]: I0219 14:43:29.998689 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f4b849877-8n64j"
Feb 19 14:43:30 crc kubenswrapper[4861]: I0219 14:43:30.071671 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/441b87bd-385d-439e-9248-b0f8dfac75c4-config\") pod \"441b87bd-385d-439e-9248-b0f8dfac75c4\" (UID: \"441b87bd-385d-439e-9248-b0f8dfac75c4\") "
Feb 19 14:43:30 crc kubenswrapper[4861]: I0219 14:43:30.071718 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/441b87bd-385d-439e-9248-b0f8dfac75c4-ovsdbserver-nb\") pod \"441b87bd-385d-439e-9248-b0f8dfac75c4\" (UID: \"441b87bd-385d-439e-9248-b0f8dfac75c4\") "
Feb 19 14:43:30 crc kubenswrapper[4861]: I0219 14:43:30.071766 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/441b87bd-385d-439e-9248-b0f8dfac75c4-ovsdbserver-sb\") pod \"441b87bd-385d-439e-9248-b0f8dfac75c4\" (UID: \"441b87bd-385d-439e-9248-b0f8dfac75c4\") "
Feb 19 14:43:30 crc kubenswrapper[4861]: I0219 14:43:30.071832 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/441b87bd-385d-439e-9248-b0f8dfac75c4-dns-svc\") pod \"441b87bd-385d-439e-9248-b0f8dfac75c4\" (UID: \"441b87bd-385d-439e-9248-b0f8dfac75c4\") "
Feb 19 14:43:30 crc kubenswrapper[4861]: I0219 14:43:30.071897 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-624z2\" (UniqueName: \"kubernetes.io/projected/441b87bd-385d-439e-9248-b0f8dfac75c4-kube-api-access-624z2\") pod \"441b87bd-385d-439e-9248-b0f8dfac75c4\" (UID: \"441b87bd-385d-439e-9248-b0f8dfac75c4\") "
Feb 19 14:43:30 crc kubenswrapper[4861]: I0219 14:43:30.092557 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/441b87bd-385d-439e-9248-b0f8dfac75c4-kube-api-access-624z2" (OuterVolumeSpecName: "kube-api-access-624z2") pod "441b87bd-385d-439e-9248-b0f8dfac75c4" (UID: "441b87bd-385d-439e-9248-b0f8dfac75c4"). InnerVolumeSpecName "kube-api-access-624z2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 14:43:30 crc kubenswrapper[4861]: I0219 14:43:30.173629 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-624z2\" (UniqueName: \"kubernetes.io/projected/441b87bd-385d-439e-9248-b0f8dfac75c4-kube-api-access-624z2\") on node \"crc\" DevicePath \"\""
Feb 19 14:43:30 crc kubenswrapper[4861]: I0219 14:43:30.182096 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/441b87bd-385d-439e-9248-b0f8dfac75c4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "441b87bd-385d-439e-9248-b0f8dfac75c4" (UID: "441b87bd-385d-439e-9248-b0f8dfac75c4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 14:43:30 crc kubenswrapper[4861]: I0219 14:43:30.184306 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/441b87bd-385d-439e-9248-b0f8dfac75c4-config" (OuterVolumeSpecName: "config") pod "441b87bd-385d-439e-9248-b0f8dfac75c4" (UID: "441b87bd-385d-439e-9248-b0f8dfac75c4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 14:43:30 crc kubenswrapper[4861]: I0219 14:43:30.210843 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/441b87bd-385d-439e-9248-b0f8dfac75c4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "441b87bd-385d-439e-9248-b0f8dfac75c4" (UID: "441b87bd-385d-439e-9248-b0f8dfac75c4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 14:43:30 crc kubenswrapper[4861]: I0219 14:43:30.211101 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/441b87bd-385d-439e-9248-b0f8dfac75c4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "441b87bd-385d-439e-9248-b0f8dfac75c4" (UID: "441b87bd-385d-439e-9248-b0f8dfac75c4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 14:43:30 crc kubenswrapper[4861]: I0219 14:43:30.274742 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/441b87bd-385d-439e-9248-b0f8dfac75c4-config\") on node \"crc\" DevicePath \"\""
Feb 19 14:43:30 crc kubenswrapper[4861]: I0219 14:43:30.274779 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/441b87bd-385d-439e-9248-b0f8dfac75c4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 19 14:43:30 crc kubenswrapper[4861]: I0219 14:43:30.274792 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/441b87bd-385d-439e-9248-b0f8dfac75c4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 19 14:43:30 crc kubenswrapper[4861]: I0219 14:43:30.274800 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/441b87bd-385d-439e-9248-b0f8dfac75c4-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 14:43:30 crc kubenswrapper[4861]: I0219 14:43:30.881319 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f4b849877-8n64j" event={"ID":"441b87bd-385d-439e-9248-b0f8dfac75c4","Type":"ContainerDied","Data":"aa399d5a6ef47394f3f81549b2b761871a839c477c079220f397b7b8e7f69dab"}
Feb 19 14:43:30 crc kubenswrapper[4861]: I0219 14:43:30.881375 4861 scope.go:117] "RemoveContainer" containerID="914239992d790c752f0d279b19aa183b1e0253cc4200d9602d7f4a04f0667eaf"
Feb 19 14:43:30 crc kubenswrapper[4861]: I0219 14:43:30.881527 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f4b849877-8n64j"
Feb 19 14:43:30 crc kubenswrapper[4861]: I0219 14:43:30.915600 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f4b849877-8n64j"]
Feb 19 14:43:30 crc kubenswrapper[4861]: I0219 14:43:30.916156 4861 scope.go:117] "RemoveContainer" containerID="ef60757308e0f9d4d48a4c154a35fb0b22f239fddf8787fb88dd6bba2b643105"
Feb 19 14:43:30 crc kubenswrapper[4861]: I0219 14:43:30.923489 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f4b849877-8n64j"]
Feb 19 14:43:31 crc kubenswrapper[4861]: I0219 14:43:31.997210 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="441b87bd-385d-439e-9248-b0f8dfac75c4" path="/var/lib/kubelet/pods/441b87bd-385d-439e-9248-b0f8dfac75c4/volumes"
Feb 19 14:43:49 crc kubenswrapper[4861]: I0219 14:43:49.500736 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-864d776f98-pqsqf"
Feb 19 14:43:51 crc kubenswrapper[4861]: I0219 14:43:51.698916 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6d5dc9cd8f-tqndz"
Feb 19 14:43:51 crc kubenswrapper[4861]: I0219 14:43:51.749278 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-864d776f98-pqsqf"]
Feb 19 14:43:51 crc kubenswrapper[4861]: I0219 14:43:51.749499 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-864d776f98-pqsqf" podUID="a077ecb8-bed9-445f-b3dd-c2b163da8df4" containerName="neutron-api" containerID="cri-o://ff4e45d1f824761cf7accc1789111a997f10d1402ae2354c701faa3ce9be97b3" gracePeriod=30
Feb 19 14:43:51 crc kubenswrapper[4861]: I0219 14:43:51.749600 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-864d776f98-pqsqf" podUID="a077ecb8-bed9-445f-b3dd-c2b163da8df4" containerName="neutron-httpd" containerID="cri-o://faf8f03706240364ac68f90661a6506f31274b574898381f811b42952c1b0d93" gracePeriod=30
Feb 19 14:43:52 crc kubenswrapper[4861]: I0219 14:43:52.123744 4861 generic.go:334] "Generic (PLEG): container finished" podID="a077ecb8-bed9-445f-b3dd-c2b163da8df4" containerID="faf8f03706240364ac68f90661a6506f31274b574898381f811b42952c1b0d93" exitCode=0
Feb 19 14:43:52 crc kubenswrapper[4861]: I0219 14:43:52.123800 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-864d776f98-pqsqf" event={"ID":"a077ecb8-bed9-445f-b3dd-c2b163da8df4","Type":"ContainerDied","Data":"faf8f03706240364ac68f90661a6506f31274b574898381f811b42952c1b0d93"}
Feb 19 14:43:53 crc kubenswrapper[4861]: I0219 14:43:53.229569 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nnrlf"]
Feb 19 14:43:53 crc kubenswrapper[4861]: E0219 14:43:53.230208 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="441b87bd-385d-439e-9248-b0f8dfac75c4" containerName="dnsmasq-dns"
Feb 19 14:43:53 crc kubenswrapper[4861]: I0219 14:43:53.230222 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="441b87bd-385d-439e-9248-b0f8dfac75c4" containerName="dnsmasq-dns"
Feb 19 14:43:53 crc kubenswrapper[4861]: E0219 14:43:53.230243 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="441b87bd-385d-439e-9248-b0f8dfac75c4" containerName="init"
Feb 19 14:43:53 crc kubenswrapper[4861]: I0219 14:43:53.230249 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="441b87bd-385d-439e-9248-b0f8dfac75c4" containerName="init"
Feb 19 14:43:53 crc kubenswrapper[4861]: I0219 14:43:53.230460 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="441b87bd-385d-439e-9248-b0f8dfac75c4" containerName="dnsmasq-dns"
Feb 19 14:43:53 crc kubenswrapper[4861]: I0219 14:43:53.231657 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nnrlf"
Feb 19 14:43:53 crc kubenswrapper[4861]: I0219 14:43:53.257627 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nnrlf"]
Feb 19 14:43:53 crc kubenswrapper[4861]: I0219 14:43:53.356311 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9ba32b-3d06-43dd-aa42-af6f400940d4-catalog-content\") pod \"certified-operators-nnrlf\" (UID: \"1a9ba32b-3d06-43dd-aa42-af6f400940d4\") " pod="openshift-marketplace/certified-operators-nnrlf"
Feb 19 14:43:53 crc kubenswrapper[4861]: I0219 14:43:53.356404 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9ba32b-3d06-43dd-aa42-af6f400940d4-utilities\") pod \"certified-operators-nnrlf\" (UID: \"1a9ba32b-3d06-43dd-aa42-af6f400940d4\") " pod="openshift-marketplace/certified-operators-nnrlf"
Feb 19 14:43:53 crc kubenswrapper[4861]: I0219 14:43:53.356557 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngbl7\" (UniqueName: \"kubernetes.io/projected/1a9ba32b-3d06-43dd-aa42-af6f400940d4-kube-api-access-ngbl7\") pod \"certified-operators-nnrlf\" (UID: \"1a9ba32b-3d06-43dd-aa42-af6f400940d4\") " pod="openshift-marketplace/certified-operators-nnrlf"
Feb 19 14:43:53 crc kubenswrapper[4861]: I0219 14:43:53.458753 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngbl7\" (UniqueName: \"kubernetes.io/projected/1a9ba32b-3d06-43dd-aa42-af6f400940d4-kube-api-access-ngbl7\") pod \"certified-operators-nnrlf\" (UID: \"1a9ba32b-3d06-43dd-aa42-af6f400940d4\") " pod="openshift-marketplace/certified-operators-nnrlf"
Feb 19 14:43:53 crc kubenswrapper[4861]: I0219 14:43:53.458912 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9ba32b-3d06-43dd-aa42-af6f400940d4-catalog-content\") pod \"certified-operators-nnrlf\" (UID: \"1a9ba32b-3d06-43dd-aa42-af6f400940d4\") " pod="openshift-marketplace/certified-operators-nnrlf"
Feb 19 14:43:53 crc kubenswrapper[4861]: I0219 14:43:53.459004 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9ba32b-3d06-43dd-aa42-af6f400940d4-utilities\") pod \"certified-operators-nnrlf\" (UID: \"1a9ba32b-3d06-43dd-aa42-af6f400940d4\") " pod="openshift-marketplace/certified-operators-nnrlf"
Feb 19 14:43:53 crc kubenswrapper[4861]: I0219 14:43:53.459888 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9ba32b-3d06-43dd-aa42-af6f400940d4-utilities\") pod \"certified-operators-nnrlf\" (UID: \"1a9ba32b-3d06-43dd-aa42-af6f400940d4\") " pod="openshift-marketplace/certified-operators-nnrlf"
Feb 19 14:43:53 crc kubenswrapper[4861]: I0219 14:43:53.460682 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9ba32b-3d06-43dd-aa42-af6f400940d4-catalog-content\") pod \"certified-operators-nnrlf\" (UID: \"1a9ba32b-3d06-43dd-aa42-af6f400940d4\") " pod="openshift-marketplace/certified-operators-nnrlf"
Feb 19 14:43:53 crc kubenswrapper[4861]: I0219 14:43:53.492296 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngbl7\" (UniqueName: \"kubernetes.io/projected/1a9ba32b-3d06-43dd-aa42-af6f400940d4-kube-api-access-ngbl7\") pod \"certified-operators-nnrlf\" (UID: \"1a9ba32b-3d06-43dd-aa42-af6f400940d4\") " pod="openshift-marketplace/certified-operators-nnrlf"
Feb 19 14:43:53 crc kubenswrapper[4861]: I0219 14:43:53.547439 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nnrlf"
Feb 19 14:43:54 crc kubenswrapper[4861]: I0219 14:43:54.029631 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nnrlf"]
Feb 19 14:43:54 crc kubenswrapper[4861]: I0219 14:43:54.137695 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nnrlf" event={"ID":"1a9ba32b-3d06-43dd-aa42-af6f400940d4","Type":"ContainerStarted","Data":"85ec97b5952e2d1a99cf792df288c108e801b5d6574420565eac546dd5bb843a"}
Feb 19 14:43:55 crc kubenswrapper[4861]: I0219 14:43:55.156896 4861 generic.go:334] "Generic (PLEG): container finished" podID="1a9ba32b-3d06-43dd-aa42-af6f400940d4" containerID="9b868492d37a446e22e07c486c6386fcdbde30cddf74067ba35858eea80ad1b6" exitCode=0
Feb 19 14:43:55 crc kubenswrapper[4861]: I0219 14:43:55.156975 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nnrlf" event={"ID":"1a9ba32b-3d06-43dd-aa42-af6f400940d4","Type":"ContainerDied","Data":"9b868492d37a446e22e07c486c6386fcdbde30cddf74067ba35858eea80ad1b6"}
Feb 19 14:43:55 crc kubenswrapper[4861]: I0219 14:43:55.779175 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-864d776f98-pqsqf"
Feb 19 14:43:55 crc kubenswrapper[4861]: I0219 14:43:55.913577 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a077ecb8-bed9-445f-b3dd-c2b163da8df4-config\") pod \"a077ecb8-bed9-445f-b3dd-c2b163da8df4\" (UID: \"a077ecb8-bed9-445f-b3dd-c2b163da8df4\") "
Feb 19 14:43:55 crc kubenswrapper[4861]: I0219 14:43:55.913881 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a077ecb8-bed9-445f-b3dd-c2b163da8df4-httpd-config\") pod \"a077ecb8-bed9-445f-b3dd-c2b163da8df4\" (UID: \"a077ecb8-bed9-445f-b3dd-c2b163da8df4\") "
Feb 19 14:43:55 crc kubenswrapper[4861]: I0219 14:43:55.913910 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a077ecb8-bed9-445f-b3dd-c2b163da8df4-ovndb-tls-certs\") pod \"a077ecb8-bed9-445f-b3dd-c2b163da8df4\" (UID: \"a077ecb8-bed9-445f-b3dd-c2b163da8df4\") "
Feb 19 14:43:55 crc kubenswrapper[4861]: I0219 14:43:55.914018 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lclx\" (UniqueName: \"kubernetes.io/projected/a077ecb8-bed9-445f-b3dd-c2b163da8df4-kube-api-access-9lclx\") pod \"a077ecb8-bed9-445f-b3dd-c2b163da8df4\" (UID: \"a077ecb8-bed9-445f-b3dd-c2b163da8df4\") "
Feb 19 14:43:55 crc kubenswrapper[4861]: I0219 14:43:55.914039 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a077ecb8-bed9-445f-b3dd-c2b163da8df4-combined-ca-bundle\") pod \"a077ecb8-bed9-445f-b3dd-c2b163da8df4\" (UID: \"a077ecb8-bed9-445f-b3dd-c2b163da8df4\") "
Feb 19 14:43:55 crc kubenswrapper[4861]: I0219 14:43:55.919545 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a077ecb8-bed9-445f-b3dd-c2b163da8df4-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "a077ecb8-bed9-445f-b3dd-c2b163da8df4" (UID: "a077ecb8-bed9-445f-b3dd-c2b163da8df4"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 14:43:55 crc kubenswrapper[4861]: I0219 14:43:55.928562 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a077ecb8-bed9-445f-b3dd-c2b163da8df4-kube-api-access-9lclx" (OuterVolumeSpecName: "kube-api-access-9lclx") pod "a077ecb8-bed9-445f-b3dd-c2b163da8df4" (UID: "a077ecb8-bed9-445f-b3dd-c2b163da8df4"). InnerVolumeSpecName "kube-api-access-9lclx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 14:43:55 crc kubenswrapper[4861]: I0219 14:43:55.956962 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a077ecb8-bed9-445f-b3dd-c2b163da8df4-config" (OuterVolumeSpecName: "config") pod "a077ecb8-bed9-445f-b3dd-c2b163da8df4" (UID: "a077ecb8-bed9-445f-b3dd-c2b163da8df4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 14:43:55 crc kubenswrapper[4861]: I0219 14:43:55.974303 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a077ecb8-bed9-445f-b3dd-c2b163da8df4-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "a077ecb8-bed9-445f-b3dd-c2b163da8df4" (UID: "a077ecb8-bed9-445f-b3dd-c2b163da8df4"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 14:43:55 crc kubenswrapper[4861]: I0219 14:43:55.974596 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a077ecb8-bed9-445f-b3dd-c2b163da8df4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a077ecb8-bed9-445f-b3dd-c2b163da8df4" (UID: "a077ecb8-bed9-445f-b3dd-c2b163da8df4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 14:43:56 crc kubenswrapper[4861]: I0219 14:43:56.019825 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a077ecb8-bed9-445f-b3dd-c2b163da8df4-config\") on node \"crc\" DevicePath \"\""
Feb 19 14:43:56 crc kubenswrapper[4861]: I0219 14:43:56.019870 4861 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a077ecb8-bed9-445f-b3dd-c2b163da8df4-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 19 14:43:56 crc kubenswrapper[4861]: I0219 14:43:56.019886 4861 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a077ecb8-bed9-445f-b3dd-c2b163da8df4-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 19 14:43:56 crc kubenswrapper[4861]: I0219 14:43:56.019899 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lclx\" (UniqueName: \"kubernetes.io/projected/a077ecb8-bed9-445f-b3dd-c2b163da8df4-kube-api-access-9lclx\") on node \"crc\" DevicePath \"\""
Feb 19 14:43:56 crc kubenswrapper[4861]: I0219 14:43:56.019908 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a077ecb8-bed9-445f-b3dd-c2b163da8df4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 14:43:56 crc kubenswrapper[4861]: I0219 14:43:56.169148 4861 generic.go:334] "Generic (PLEG): container finished" podID="a077ecb8-bed9-445f-b3dd-c2b163da8df4" containerID="ff4e45d1f824761cf7accc1789111a997f10d1402ae2354c701faa3ce9be97b3" exitCode=0
Feb 19 14:43:56 crc kubenswrapper[4861]: I0219 14:43:56.169195 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-864d776f98-pqsqf" event={"ID":"a077ecb8-bed9-445f-b3dd-c2b163da8df4","Type":"ContainerDied","Data":"ff4e45d1f824761cf7accc1789111a997f10d1402ae2354c701faa3ce9be97b3"}
Feb 19 14:43:56 crc kubenswrapper[4861]: I0219 14:43:56.169226 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-864d776f98-pqsqf" event={"ID":"a077ecb8-bed9-445f-b3dd-c2b163da8df4","Type":"ContainerDied","Data":"b0b798c4527acae3a0b1aff1a51c03766e889f878a54ec4f0f6fc16aced5d206"}
Feb 19 14:43:56 crc kubenswrapper[4861]: I0219 14:43:56.169246 4861 scope.go:117] "RemoveContainer" containerID="faf8f03706240364ac68f90661a6506f31274b574898381f811b42952c1b0d93"
Feb 19 14:43:56 crc kubenswrapper[4861]: I0219 14:43:56.169310 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-864d776f98-pqsqf"
Feb 19 14:43:56 crc kubenswrapper[4861]: I0219 14:43:56.202899 4861 scope.go:117] "RemoveContainer" containerID="ff4e45d1f824761cf7accc1789111a997f10d1402ae2354c701faa3ce9be97b3"
Feb 19 14:43:56 crc kubenswrapper[4861]: I0219 14:43:56.208498 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-864d776f98-pqsqf"]
Feb 19 14:43:56 crc kubenswrapper[4861]: I0219 14:43:56.218170 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-864d776f98-pqsqf"]
Feb 19 14:43:56 crc kubenswrapper[4861]: I0219 14:43:56.225104 4861 scope.go:117] "RemoveContainer" containerID="faf8f03706240364ac68f90661a6506f31274b574898381f811b42952c1b0d93"
Feb 19 14:43:56 crc kubenswrapper[4861]: E0219 14:43:56.225554 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faf8f03706240364ac68f90661a6506f31274b574898381f811b42952c1b0d93\": container with ID starting with faf8f03706240364ac68f90661a6506f31274b574898381f811b42952c1b0d93 not found: ID does not exist" containerID="faf8f03706240364ac68f90661a6506f31274b574898381f811b42952c1b0d93"
Feb 19 14:43:56 crc kubenswrapper[4861]: I0219 14:43:56.225588 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faf8f03706240364ac68f90661a6506f31274b574898381f811b42952c1b0d93"} err="failed to get container status \"faf8f03706240364ac68f90661a6506f31274b574898381f811b42952c1b0d93\": rpc error: code = NotFound desc = could not find container \"faf8f03706240364ac68f90661a6506f31274b574898381f811b42952c1b0d93\": container with ID starting with faf8f03706240364ac68f90661a6506f31274b574898381f811b42952c1b0d93 not found: ID does not exist"
Feb 19 14:43:56 crc kubenswrapper[4861]: I0219 14:43:56.225613 4861 scope.go:117] "RemoveContainer" containerID="ff4e45d1f824761cf7accc1789111a997f10d1402ae2354c701faa3ce9be97b3"
Feb 19 14:43:56 crc kubenswrapper[4861]: E0219 14:43:56.225922 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff4e45d1f824761cf7accc1789111a997f10d1402ae2354c701faa3ce9be97b3\": container with ID starting with ff4e45d1f824761cf7accc1789111a997f10d1402ae2354c701faa3ce9be97b3 not found: ID does not exist" containerID="ff4e45d1f824761cf7accc1789111a997f10d1402ae2354c701faa3ce9be97b3"
Feb 19 14:43:56 crc kubenswrapper[4861]: I0219 14:43:56.225942 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff4e45d1f824761cf7accc1789111a997f10d1402ae2354c701faa3ce9be97b3"} err="failed to get container status \"ff4e45d1f824761cf7accc1789111a997f10d1402ae2354c701faa3ce9be97b3\": rpc error: code = NotFound desc = could not find container \"ff4e45d1f824761cf7accc1789111a997f10d1402ae2354c701faa3ce9be97b3\": container with ID starting with ff4e45d1f824761cf7accc1789111a997f10d1402ae2354c701faa3ce9be97b3 not found: ID does not exist"
Feb 19 14:43:57 crc kubenswrapper[4861]: I0219 14:43:57.990801 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a077ecb8-bed9-445f-b3dd-c2b163da8df4" path="/var/lib/kubelet/pods/a077ecb8-bed9-445f-b3dd-c2b163da8df4/volumes"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.356390 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-tz57m"]
Feb 19 14:44:01 crc kubenswrapper[4861]: E0219 14:44:01.357532 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a077ecb8-bed9-445f-b3dd-c2b163da8df4" containerName="neutron-httpd"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.357556 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a077ecb8-bed9-445f-b3dd-c2b163da8df4" containerName="neutron-httpd"
Feb 19 14:44:01 crc kubenswrapper[4861]: E0219 14:44:01.357596 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a077ecb8-bed9-445f-b3dd-c2b163da8df4" containerName="neutron-api"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.357608 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a077ecb8-bed9-445f-b3dd-c2b163da8df4" containerName="neutron-api"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.357893 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a077ecb8-bed9-445f-b3dd-c2b163da8df4" containerName="neutron-api"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.358703 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a077ecb8-bed9-445f-b3dd-c2b163da8df4" containerName="neutron-httpd"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.360869 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-tz57m"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.364677 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-ct4m4"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.364913 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.365024 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.365641 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.365747 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.406514 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38973f38-cefa-4543-807f-da43a6a21e7b-combined-ca-bundle\") pod \"swift-ring-rebalance-tz57m\" (UID: \"38973f38-cefa-4543-807f-da43a6a21e7b\") " pod="openstack/swift-ring-rebalance-tz57m"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.406619 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/38973f38-cefa-4543-807f-da43a6a21e7b-etc-swift\") pod \"swift-ring-rebalance-tz57m\" (UID: \"38973f38-cefa-4543-807f-da43a6a21e7b\") " pod="openstack/swift-ring-rebalance-tz57m"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.406700 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf5s9\" (UniqueName: \"kubernetes.io/projected/38973f38-cefa-4543-807f-da43a6a21e7b-kube-api-access-hf5s9\") pod \"swift-ring-rebalance-tz57m\" (UID: \"38973f38-cefa-4543-807f-da43a6a21e7b\") " pod="openstack/swift-ring-rebalance-tz57m"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.406758 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/38973f38-cefa-4543-807f-da43a6a21e7b-swiftconf\") pod \"swift-ring-rebalance-tz57m\" (UID: \"38973f38-cefa-4543-807f-da43a6a21e7b\") " pod="openstack/swift-ring-rebalance-tz57m"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.406787 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/38973f38-cefa-4543-807f-da43a6a21e7b-scripts\") pod \"swift-ring-rebalance-tz57m\" (UID: \"38973f38-cefa-4543-807f-da43a6a21e7b\") " pod="openstack/swift-ring-rebalance-tz57m"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.406808 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/38973f38-cefa-4543-807f-da43a6a21e7b-ring-data-devices\") pod \"swift-ring-rebalance-tz57m\" (UID: \"38973f38-cefa-4543-807f-da43a6a21e7b\") " pod="openstack/swift-ring-rebalance-tz57m"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.406826 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/38973f38-cefa-4543-807f-da43a6a21e7b-dispersionconf\") pod \"swift-ring-rebalance-tz57m\" (UID: \"38973f38-cefa-4543-807f-da43a6a21e7b\") " pod="openstack/swift-ring-rebalance-tz57m"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.414871 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-tz57m"]
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.487234 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78696b78bf-8w2t9"]
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.491333 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78696b78bf-8w2t9"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.507978 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3574166a-39e5-4b93-bdfc-ecef1a067f5c-dns-svc\") pod \"dnsmasq-dns-78696b78bf-8w2t9\" (UID: \"3574166a-39e5-4b93-bdfc-ecef1a067f5c\") " pod="openstack/dnsmasq-dns-78696b78bf-8w2t9"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.508012 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3574166a-39e5-4b93-bdfc-ecef1a067f5c-ovsdbserver-sb\") pod \"dnsmasq-dns-78696b78bf-8w2t9\" (UID: \"3574166a-39e5-4b93-bdfc-ecef1a067f5c\") " pod="openstack/dnsmasq-dns-78696b78bf-8w2t9"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.508058 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3574166a-39e5-4b93-bdfc-ecef1a067f5c-ovsdbserver-nb\") pod \"dnsmasq-dns-78696b78bf-8w2t9\" (UID: \"3574166a-39e5-4b93-bdfc-ecef1a067f5c\") " pod="openstack/dnsmasq-dns-78696b78bf-8w2t9"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.508084 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/38973f38-cefa-4543-807f-da43a6a21e7b-etc-swift\") pod \"swift-ring-rebalance-tz57m\" (UID: \"38973f38-cefa-4543-807f-da43a6a21e7b\") " pod="openstack/swift-ring-rebalance-tz57m"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.508126 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3574166a-39e5-4b93-bdfc-ecef1a067f5c-config\") pod \"dnsmasq-dns-78696b78bf-8w2t9\" (UID: \"3574166a-39e5-4b93-bdfc-ecef1a067f5c\") " pod="openstack/dnsmasq-dns-78696b78bf-8w2t9"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.508150 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf5s9\" (UniqueName: \"kubernetes.io/projected/38973f38-cefa-4543-807f-da43a6a21e7b-kube-api-access-hf5s9\") pod \"swift-ring-rebalance-tz57m\" (UID: \"38973f38-cefa-4543-807f-da43a6a21e7b\") " pod="openstack/swift-ring-rebalance-tz57m"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.508194 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/38973f38-cefa-4543-807f-da43a6a21e7b-swiftconf\") pod \"swift-ring-rebalance-tz57m\" (UID: \"38973f38-cefa-4543-807f-da43a6a21e7b\") " pod="openstack/swift-ring-rebalance-tz57m"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.508217 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/38973f38-cefa-4543-807f-da43a6a21e7b-scripts\") pod \"swift-ring-rebalance-tz57m\" (UID: \"38973f38-cefa-4543-807f-da43a6a21e7b\") " pod="openstack/swift-ring-rebalance-tz57m"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.508233 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l8fj\" (UniqueName: \"kubernetes.io/projected/3574166a-39e5-4b93-bdfc-ecef1a067f5c-kube-api-access-9l8fj\") pod \"dnsmasq-dns-78696b78bf-8w2t9\" (UID: \"3574166a-39e5-4b93-bdfc-ecef1a067f5c\") " pod="openstack/dnsmasq-dns-78696b78bf-8w2t9"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.508255 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/38973f38-cefa-4543-807f-da43a6a21e7b-ring-data-devices\") pod \"swift-ring-rebalance-tz57m\" (UID: \"38973f38-cefa-4543-807f-da43a6a21e7b\") " pod="openstack/swift-ring-rebalance-tz57m"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.508272 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/38973f38-cefa-4543-807f-da43a6a21e7b-dispersionconf\") pod \"swift-ring-rebalance-tz57m\" (UID: \"38973f38-cefa-4543-807f-da43a6a21e7b\") " pod="openstack/swift-ring-rebalance-tz57m"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.508296 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38973f38-cefa-4543-807f-da43a6a21e7b-combined-ca-bundle\") pod \"swift-ring-rebalance-tz57m\" (UID: \"38973f38-cefa-4543-807f-da43a6a21e7b\") " pod="openstack/swift-ring-rebalance-tz57m"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.530634 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/38973f38-cefa-4543-807f-da43a6a21e7b-etc-swift\") pod \"swift-ring-rebalance-tz57m\" (UID: \"38973f38-cefa-4543-807f-da43a6a21e7b\") " pod="openstack/swift-ring-rebalance-tz57m"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.532606 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/38973f38-cefa-4543-807f-da43a6a21e7b-scripts\") pod \"swift-ring-rebalance-tz57m\" (UID: \"38973f38-cefa-4543-807f-da43a6a21e7b\") " pod="openstack/swift-ring-rebalance-tz57m"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.536167 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38973f38-cefa-4543-807f-da43a6a21e7b-combined-ca-bundle\") pod \"swift-ring-rebalance-tz57m\" (UID: \"38973f38-cefa-4543-807f-da43a6a21e7b\") " pod="openstack/swift-ring-rebalance-tz57m"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.536569 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/38973f38-cefa-4543-807f-da43a6a21e7b-ring-data-devices\") pod \"swift-ring-rebalance-tz57m\" (UID: \"38973f38-cefa-4543-807f-da43a6a21e7b\") " pod="openstack/swift-ring-rebalance-tz57m"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.537623 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/38973f38-cefa-4543-807f-da43a6a21e7b-swiftconf\") pod \"swift-ring-rebalance-tz57m\" (UID: \"38973f38-cefa-4543-807f-da43a6a21e7b\") " pod="openstack/swift-ring-rebalance-tz57m"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.547266 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/38973f38-cefa-4543-807f-da43a6a21e7b-dispersionconf\") pod \"swift-ring-rebalance-tz57m\" (UID: \"38973f38-cefa-4543-807f-da43a6a21e7b\") " pod="openstack/swift-ring-rebalance-tz57m"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.557031 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf5s9\" (UniqueName: \"kubernetes.io/projected/38973f38-cefa-4543-807f-da43a6a21e7b-kube-api-access-hf5s9\") pod \"swift-ring-rebalance-tz57m\" (UID: \"38973f38-cefa-4543-807f-da43a6a21e7b\") " pod="openstack/swift-ring-rebalance-tz57m"
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.588506 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78696b78bf-8w2t9"]
Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.630896 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l8fj\" (UniqueName:
\"kubernetes.io/projected/3574166a-39e5-4b93-bdfc-ecef1a067f5c-kube-api-access-9l8fj\") pod \"dnsmasq-dns-78696b78bf-8w2t9\" (UID: \"3574166a-39e5-4b93-bdfc-ecef1a067f5c\") " pod="openstack/dnsmasq-dns-78696b78bf-8w2t9" Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.631179 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3574166a-39e5-4b93-bdfc-ecef1a067f5c-dns-svc\") pod \"dnsmasq-dns-78696b78bf-8w2t9\" (UID: \"3574166a-39e5-4b93-bdfc-ecef1a067f5c\") " pod="openstack/dnsmasq-dns-78696b78bf-8w2t9" Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.631264 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3574166a-39e5-4b93-bdfc-ecef1a067f5c-ovsdbserver-sb\") pod \"dnsmasq-dns-78696b78bf-8w2t9\" (UID: \"3574166a-39e5-4b93-bdfc-ecef1a067f5c\") " pod="openstack/dnsmasq-dns-78696b78bf-8w2t9" Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.631354 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3574166a-39e5-4b93-bdfc-ecef1a067f5c-ovsdbserver-nb\") pod \"dnsmasq-dns-78696b78bf-8w2t9\" (UID: \"3574166a-39e5-4b93-bdfc-ecef1a067f5c\") " pod="openstack/dnsmasq-dns-78696b78bf-8w2t9" Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.631464 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3574166a-39e5-4b93-bdfc-ecef1a067f5c-config\") pod \"dnsmasq-dns-78696b78bf-8w2t9\" (UID: \"3574166a-39e5-4b93-bdfc-ecef1a067f5c\") " pod="openstack/dnsmasq-dns-78696b78bf-8w2t9" Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.632296 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3574166a-39e5-4b93-bdfc-ecef1a067f5c-config\") pod 
\"dnsmasq-dns-78696b78bf-8w2t9\" (UID: \"3574166a-39e5-4b93-bdfc-ecef1a067f5c\") " pod="openstack/dnsmasq-dns-78696b78bf-8w2t9" Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.635374 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3574166a-39e5-4b93-bdfc-ecef1a067f5c-ovsdbserver-sb\") pod \"dnsmasq-dns-78696b78bf-8w2t9\" (UID: \"3574166a-39e5-4b93-bdfc-ecef1a067f5c\") " pod="openstack/dnsmasq-dns-78696b78bf-8w2t9" Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.635570 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3574166a-39e5-4b93-bdfc-ecef1a067f5c-ovsdbserver-nb\") pod \"dnsmasq-dns-78696b78bf-8w2t9\" (UID: \"3574166a-39e5-4b93-bdfc-ecef1a067f5c\") " pod="openstack/dnsmasq-dns-78696b78bf-8w2t9" Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.639561 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3574166a-39e5-4b93-bdfc-ecef1a067f5c-dns-svc\") pod \"dnsmasq-dns-78696b78bf-8w2t9\" (UID: \"3574166a-39e5-4b93-bdfc-ecef1a067f5c\") " pod="openstack/dnsmasq-dns-78696b78bf-8w2t9" Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.649303 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l8fj\" (UniqueName: \"kubernetes.io/projected/3574166a-39e5-4b93-bdfc-ecef1a067f5c-kube-api-access-9l8fj\") pod \"dnsmasq-dns-78696b78bf-8w2t9\" (UID: \"3574166a-39e5-4b93-bdfc-ecef1a067f5c\") " pod="openstack/dnsmasq-dns-78696b78bf-8w2t9" Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.692190 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-tz57m" Feb 19 14:44:01 crc kubenswrapper[4861]: I0219 14:44:01.904580 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78696b78bf-8w2t9" Feb 19 14:44:02 crc kubenswrapper[4861]: I0219 14:44:02.168281 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-tz57m"] Feb 19 14:44:02 crc kubenswrapper[4861]: W0219 14:44:02.169682 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38973f38_cefa_4543_807f_da43a6a21e7b.slice/crio-854344b79c32a161ecab836ef6a97bb2060b7e8818147572e0017a7b732d3aed WatchSource:0}: Error finding container 854344b79c32a161ecab836ef6a97bb2060b7e8818147572e0017a7b732d3aed: Status 404 returned error can't find the container with id 854344b79c32a161ecab836ef6a97bb2060b7e8818147572e0017a7b732d3aed Feb 19 14:44:02 crc kubenswrapper[4861]: I0219 14:44:02.228468 4861 generic.go:334] "Generic (PLEG): container finished" podID="1a9ba32b-3d06-43dd-aa42-af6f400940d4" containerID="54d687d498c1aa2e849eb252669625b947d8fd8f15516b32bb21781d3196969b" exitCode=0 Feb 19 14:44:02 crc kubenswrapper[4861]: I0219 14:44:02.228537 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nnrlf" event={"ID":"1a9ba32b-3d06-43dd-aa42-af6f400940d4","Type":"ContainerDied","Data":"54d687d498c1aa2e849eb252669625b947d8fd8f15516b32bb21781d3196969b"} Feb 19 14:44:02 crc kubenswrapper[4861]: I0219 14:44:02.230045 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-tz57m" event={"ID":"38973f38-cefa-4543-807f-da43a6a21e7b","Type":"ContainerStarted","Data":"854344b79c32a161ecab836ef6a97bb2060b7e8818147572e0017a7b732d3aed"} Feb 19 14:44:02 crc kubenswrapper[4861]: I0219 14:44:02.362308 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78696b78bf-8w2t9"] Feb 19 14:44:03 crc kubenswrapper[4861]: I0219 14:44:03.237891 4861 generic.go:334] "Generic (PLEG): container finished" podID="3574166a-39e5-4b93-bdfc-ecef1a067f5c" 
containerID="6ccf76ea0fbba9457283f22dbba27c52fdfa85db28e52ec6e2a7c9902b1e13b6" exitCode=0 Feb 19 14:44:03 crc kubenswrapper[4861]: I0219 14:44:03.237934 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78696b78bf-8w2t9" event={"ID":"3574166a-39e5-4b93-bdfc-ecef1a067f5c","Type":"ContainerDied","Data":"6ccf76ea0fbba9457283f22dbba27c52fdfa85db28e52ec6e2a7c9902b1e13b6"} Feb 19 14:44:03 crc kubenswrapper[4861]: I0219 14:44:03.238168 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78696b78bf-8w2t9" event={"ID":"3574166a-39e5-4b93-bdfc-ecef1a067f5c","Type":"ContainerStarted","Data":"19e023803c2805f9ac89c1da6df24ab78a347af0d6a58b1ea63ef764fd5fd1c6"} Feb 19 14:44:03 crc kubenswrapper[4861]: I0219 14:44:03.240260 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nnrlf" event={"ID":"1a9ba32b-3d06-43dd-aa42-af6f400940d4","Type":"ContainerStarted","Data":"e6a0aec463a5225cb2bd989a933ddcd2d332bcac48535dd0d17b6b2c570ffa94"} Feb 19 14:44:03 crc kubenswrapper[4861]: I0219 14:44:03.246822 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-tz57m" event={"ID":"38973f38-cefa-4543-807f-da43a6a21e7b","Type":"ContainerStarted","Data":"1ce6c132e5a19d7923d4a5f10c6fdce302fb68dd096c93853c3efbcbccab43fb"} Feb 19 14:44:03 crc kubenswrapper[4861]: I0219 14:44:03.344908 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nnrlf" podStartSLOduration=2.896021805 podStartE2EDuration="10.344888676s" podCreationTimestamp="2026-02-19 14:43:53 +0000 UTC" firstStartedPulling="2026-02-19 14:43:55.159873565 +0000 UTC m=+5649.820976813" lastFinishedPulling="2026-02-19 14:44:02.608740456 +0000 UTC m=+5657.269843684" observedRunningTime="2026-02-19 14:44:03.333958363 +0000 UTC m=+5657.995061591" watchObservedRunningTime="2026-02-19 14:44:03.344888676 +0000 UTC m=+5658.005991904" Feb 19 
14:44:03 crc kubenswrapper[4861]: I0219 14:44:03.548272 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nnrlf" Feb 19 14:44:03 crc kubenswrapper[4861]: I0219 14:44:03.548318 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nnrlf" Feb 19 14:44:03 crc kubenswrapper[4861]: I0219 14:44:03.602813 4861 scope.go:117] "RemoveContainer" containerID="7758324d50f17937a95fa5dd794d47d161d0b3c5bba9c0ca7a12e73f10c62c1c" Feb 19 14:44:04 crc kubenswrapper[4861]: I0219 14:44:04.256811 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78696b78bf-8w2t9" event={"ID":"3574166a-39e5-4b93-bdfc-ecef1a067f5c","Type":"ContainerStarted","Data":"75ced1b5b504f0e678661279db984764c31435d214007f3dff1b38b737b2f355"} Feb 19 14:44:04 crc kubenswrapper[4861]: I0219 14:44:04.278490 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78696b78bf-8w2t9" podStartSLOduration=3.278475431 podStartE2EDuration="3.278475431s" podCreationTimestamp="2026-02-19 14:44:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:44:04.276364334 +0000 UTC m=+5658.937467562" watchObservedRunningTime="2026-02-19 14:44:04.278475431 +0000 UTC m=+5658.939578659" Feb 19 14:44:04 crc kubenswrapper[4861]: I0219 14:44:04.285530 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-tz57m" podStartSLOduration=3.285513889 podStartE2EDuration="3.285513889s" podCreationTimestamp="2026-02-19 14:44:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:44:03.396743067 +0000 UTC m=+5658.057846285" watchObservedRunningTime="2026-02-19 14:44:04.285513889 +0000 UTC 
m=+5658.946617117" Feb 19 14:44:04 crc kubenswrapper[4861]: I0219 14:44:04.390827 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-57f9db5886-8ptzp"] Feb 19 14:44:04 crc kubenswrapper[4861]: I0219 14:44:04.392362 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-57f9db5886-8ptzp" Feb 19 14:44:04 crc kubenswrapper[4861]: I0219 14:44:04.397952 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 19 14:44:04 crc kubenswrapper[4861]: I0219 14:44:04.404631 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-57f9db5886-8ptzp"] Feb 19 14:44:04 crc kubenswrapper[4861]: I0219 14:44:04.482381 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzp9c\" (UniqueName: \"kubernetes.io/projected/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-kube-api-access-gzp9c\") pod \"swift-proxy-57f9db5886-8ptzp\" (UID: \"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96\") " pod="openstack/swift-proxy-57f9db5886-8ptzp" Feb 19 14:44:04 crc kubenswrapper[4861]: I0219 14:44:04.482448 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-combined-ca-bundle\") pod \"swift-proxy-57f9db5886-8ptzp\" (UID: \"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96\") " pod="openstack/swift-proxy-57f9db5886-8ptzp" Feb 19 14:44:04 crc kubenswrapper[4861]: I0219 14:44:04.482467 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-etc-swift\") pod \"swift-proxy-57f9db5886-8ptzp\" (UID: \"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96\") " pod="openstack/swift-proxy-57f9db5886-8ptzp" Feb 19 14:44:04 crc kubenswrapper[4861]: I0219 14:44:04.482506 4861 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-run-httpd\") pod \"swift-proxy-57f9db5886-8ptzp\" (UID: \"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96\") " pod="openstack/swift-proxy-57f9db5886-8ptzp" Feb 19 14:44:04 crc kubenswrapper[4861]: I0219 14:44:04.482522 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-log-httpd\") pod \"swift-proxy-57f9db5886-8ptzp\" (UID: \"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96\") " pod="openstack/swift-proxy-57f9db5886-8ptzp" Feb 19 14:44:04 crc kubenswrapper[4861]: I0219 14:44:04.482579 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-config-data\") pod \"swift-proxy-57f9db5886-8ptzp\" (UID: \"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96\") " pod="openstack/swift-proxy-57f9db5886-8ptzp" Feb 19 14:44:04 crc kubenswrapper[4861]: I0219 14:44:04.591212 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-run-httpd\") pod \"swift-proxy-57f9db5886-8ptzp\" (UID: \"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96\") " pod="openstack/swift-proxy-57f9db5886-8ptzp" Feb 19 14:44:04 crc kubenswrapper[4861]: I0219 14:44:04.591267 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-log-httpd\") pod \"swift-proxy-57f9db5886-8ptzp\" (UID: \"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96\") " pod="openstack/swift-proxy-57f9db5886-8ptzp" Feb 19 14:44:04 crc kubenswrapper[4861]: I0219 14:44:04.591371 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-config-data\") pod \"swift-proxy-57f9db5886-8ptzp\" (UID: \"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96\") " pod="openstack/swift-proxy-57f9db5886-8ptzp" Feb 19 14:44:04 crc kubenswrapper[4861]: I0219 14:44:04.591489 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzp9c\" (UniqueName: \"kubernetes.io/projected/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-kube-api-access-gzp9c\") pod \"swift-proxy-57f9db5886-8ptzp\" (UID: \"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96\") " pod="openstack/swift-proxy-57f9db5886-8ptzp" Feb 19 14:44:04 crc kubenswrapper[4861]: I0219 14:44:04.591533 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-combined-ca-bundle\") pod \"swift-proxy-57f9db5886-8ptzp\" (UID: \"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96\") " pod="openstack/swift-proxy-57f9db5886-8ptzp" Feb 19 14:44:04 crc kubenswrapper[4861]: I0219 14:44:04.591559 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-etc-swift\") pod \"swift-proxy-57f9db5886-8ptzp\" (UID: \"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96\") " pod="openstack/swift-proxy-57f9db5886-8ptzp" Feb 19 14:44:04 crc kubenswrapper[4861]: I0219 14:44:04.593372 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-log-httpd\") pod \"swift-proxy-57f9db5886-8ptzp\" (UID: \"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96\") " pod="openstack/swift-proxy-57f9db5886-8ptzp" Feb 19 14:44:04 crc kubenswrapper[4861]: I0219 14:44:04.593586 4861 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/certified-operators-nnrlf" podUID="1a9ba32b-3d06-43dd-aa42-af6f400940d4" containerName="registry-server" probeResult="failure" output=< Feb 19 14:44:04 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Feb 19 14:44:04 crc kubenswrapper[4861]: > Feb 19 14:44:04 crc kubenswrapper[4861]: I0219 14:44:04.595231 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-run-httpd\") pod \"swift-proxy-57f9db5886-8ptzp\" (UID: \"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96\") " pod="openstack/swift-proxy-57f9db5886-8ptzp" Feb 19 14:44:04 crc kubenswrapper[4861]: I0219 14:44:04.600575 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-combined-ca-bundle\") pod \"swift-proxy-57f9db5886-8ptzp\" (UID: \"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96\") " pod="openstack/swift-proxy-57f9db5886-8ptzp" Feb 19 14:44:04 crc kubenswrapper[4861]: I0219 14:44:04.606783 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-etc-swift\") pod \"swift-proxy-57f9db5886-8ptzp\" (UID: \"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96\") " pod="openstack/swift-proxy-57f9db5886-8ptzp" Feb 19 14:44:04 crc kubenswrapper[4861]: I0219 14:44:04.607220 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-config-data\") pod \"swift-proxy-57f9db5886-8ptzp\" (UID: \"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96\") " pod="openstack/swift-proxy-57f9db5886-8ptzp" Feb 19 14:44:04 crc kubenswrapper[4861]: I0219 14:44:04.612078 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzp9c\" (UniqueName: 
\"kubernetes.io/projected/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-kube-api-access-gzp9c\") pod \"swift-proxy-57f9db5886-8ptzp\" (UID: \"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96\") " pod="openstack/swift-proxy-57f9db5886-8ptzp" Feb 19 14:44:04 crc kubenswrapper[4861]: I0219 14:44:04.727937 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-57f9db5886-8ptzp" Feb 19 14:44:05 crc kubenswrapper[4861]: I0219 14:44:05.263363 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78696b78bf-8w2t9" Feb 19 14:44:05 crc kubenswrapper[4861]: I0219 14:44:05.378168 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-57f9db5886-8ptzp"] Feb 19 14:44:06 crc kubenswrapper[4861]: I0219 14:44:06.271371 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-57f9db5886-8ptzp" event={"ID":"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96","Type":"ContainerStarted","Data":"318c109f1aa79a7b410e01eb4bc480ab20823839977b5c2a3e148af50768d2bd"} Feb 19 14:44:06 crc kubenswrapper[4861]: I0219 14:44:06.271684 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-57f9db5886-8ptzp" event={"ID":"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96","Type":"ContainerStarted","Data":"d8c0864cd521b62c3818652836fa953130ca0802099770bdf3a579aa899900c4"} Feb 19 14:44:06 crc kubenswrapper[4861]: I0219 14:44:06.271697 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-57f9db5886-8ptzp" event={"ID":"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96","Type":"ContainerStarted","Data":"11d93d22c5fb1f392c8e0045dc5a179899554fe165cfade0de35deb1eeb65e2a"} Feb 19 14:44:06 crc kubenswrapper[4861]: I0219 14:44:06.300392 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-57f9db5886-8ptzp" podStartSLOduration=2.300371728 podStartE2EDuration="2.300371728s" podCreationTimestamp="2026-02-19 14:44:04 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:44:06.293926455 +0000 UTC m=+5660.955029703" watchObservedRunningTime="2026-02-19 14:44:06.300371728 +0000 UTC m=+5660.961474956" Feb 19 14:44:06 crc kubenswrapper[4861]: I0219 14:44:06.951557 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6d4ffc8498-zcrlh"] Feb 19 14:44:06 crc kubenswrapper[4861]: I0219 14:44:06.953560 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6d4ffc8498-zcrlh" Feb 19 14:44:06 crc kubenswrapper[4861]: I0219 14:44:06.957174 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 19 14:44:06 crc kubenswrapper[4861]: I0219 14:44:06.957271 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 19 14:44:06 crc kubenswrapper[4861]: I0219 14:44:06.967615 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6d4ffc8498-zcrlh"] Feb 19 14:44:07 crc kubenswrapper[4861]: I0219 14:44:07.130386 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73ba1cca-8934-4068-8a44-00dc7b5a3726-config-data\") pod \"swift-proxy-6d4ffc8498-zcrlh\" (UID: \"73ba1cca-8934-4068-8a44-00dc7b5a3726\") " pod="openstack/swift-proxy-6d4ffc8498-zcrlh" Feb 19 14:44:07 crc kubenswrapper[4861]: I0219 14:44:07.130579 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73ba1cca-8934-4068-8a44-00dc7b5a3726-combined-ca-bundle\") pod \"swift-proxy-6d4ffc8498-zcrlh\" (UID: \"73ba1cca-8934-4068-8a44-00dc7b5a3726\") " pod="openstack/swift-proxy-6d4ffc8498-zcrlh" Feb 19 14:44:07 crc kubenswrapper[4861]: I0219 14:44:07.130756 4861 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/73ba1cca-8934-4068-8a44-00dc7b5a3726-etc-swift\") pod \"swift-proxy-6d4ffc8498-zcrlh\" (UID: \"73ba1cca-8934-4068-8a44-00dc7b5a3726\") " pod="openstack/swift-proxy-6d4ffc8498-zcrlh" Feb 19 14:44:07 crc kubenswrapper[4861]: I0219 14:44:07.131349 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73ba1cca-8934-4068-8a44-00dc7b5a3726-public-tls-certs\") pod \"swift-proxy-6d4ffc8498-zcrlh\" (UID: \"73ba1cca-8934-4068-8a44-00dc7b5a3726\") " pod="openstack/swift-proxy-6d4ffc8498-zcrlh" Feb 19 14:44:07 crc kubenswrapper[4861]: I0219 14:44:07.131465 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/73ba1cca-8934-4068-8a44-00dc7b5a3726-run-httpd\") pod \"swift-proxy-6d4ffc8498-zcrlh\" (UID: \"73ba1cca-8934-4068-8a44-00dc7b5a3726\") " pod="openstack/swift-proxy-6d4ffc8498-zcrlh" Feb 19 14:44:07 crc kubenswrapper[4861]: I0219 14:44:07.131554 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cfr2\" (UniqueName: \"kubernetes.io/projected/73ba1cca-8934-4068-8a44-00dc7b5a3726-kube-api-access-8cfr2\") pod \"swift-proxy-6d4ffc8498-zcrlh\" (UID: \"73ba1cca-8934-4068-8a44-00dc7b5a3726\") " pod="openstack/swift-proxy-6d4ffc8498-zcrlh" Feb 19 14:44:07 crc kubenswrapper[4861]: I0219 14:44:07.131577 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/73ba1cca-8934-4068-8a44-00dc7b5a3726-log-httpd\") pod \"swift-proxy-6d4ffc8498-zcrlh\" (UID: \"73ba1cca-8934-4068-8a44-00dc7b5a3726\") " pod="openstack/swift-proxy-6d4ffc8498-zcrlh" Feb 19 14:44:07 crc kubenswrapper[4861]: 
I0219 14:44:07.131621 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73ba1cca-8934-4068-8a44-00dc7b5a3726-internal-tls-certs\") pod \"swift-proxy-6d4ffc8498-zcrlh\" (UID: \"73ba1cca-8934-4068-8a44-00dc7b5a3726\") " pod="openstack/swift-proxy-6d4ffc8498-zcrlh" Feb 19 14:44:07 crc kubenswrapper[4861]: I0219 14:44:07.232442 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73ba1cca-8934-4068-8a44-00dc7b5a3726-config-data\") pod \"swift-proxy-6d4ffc8498-zcrlh\" (UID: \"73ba1cca-8934-4068-8a44-00dc7b5a3726\") " pod="openstack/swift-proxy-6d4ffc8498-zcrlh" Feb 19 14:44:07 crc kubenswrapper[4861]: I0219 14:44:07.232506 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73ba1cca-8934-4068-8a44-00dc7b5a3726-combined-ca-bundle\") pod \"swift-proxy-6d4ffc8498-zcrlh\" (UID: \"73ba1cca-8934-4068-8a44-00dc7b5a3726\") " pod="openstack/swift-proxy-6d4ffc8498-zcrlh" Feb 19 14:44:07 crc kubenswrapper[4861]: I0219 14:44:07.232560 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/73ba1cca-8934-4068-8a44-00dc7b5a3726-etc-swift\") pod \"swift-proxy-6d4ffc8498-zcrlh\" (UID: \"73ba1cca-8934-4068-8a44-00dc7b5a3726\") " pod="openstack/swift-proxy-6d4ffc8498-zcrlh" Feb 19 14:44:07 crc kubenswrapper[4861]: I0219 14:44:07.232584 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73ba1cca-8934-4068-8a44-00dc7b5a3726-public-tls-certs\") pod \"swift-proxy-6d4ffc8498-zcrlh\" (UID: \"73ba1cca-8934-4068-8a44-00dc7b5a3726\") " pod="openstack/swift-proxy-6d4ffc8498-zcrlh" Feb 19 14:44:07 crc kubenswrapper[4861]: I0219 14:44:07.232611 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/73ba1cca-8934-4068-8a44-00dc7b5a3726-run-httpd\") pod \"swift-proxy-6d4ffc8498-zcrlh\" (UID: \"73ba1cca-8934-4068-8a44-00dc7b5a3726\") " pod="openstack/swift-proxy-6d4ffc8498-zcrlh" Feb 19 14:44:07 crc kubenswrapper[4861]: I0219 14:44:07.232641 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/73ba1cca-8934-4068-8a44-00dc7b5a3726-log-httpd\") pod \"swift-proxy-6d4ffc8498-zcrlh\" (UID: \"73ba1cca-8934-4068-8a44-00dc7b5a3726\") " pod="openstack/swift-proxy-6d4ffc8498-zcrlh" Feb 19 14:44:07 crc kubenswrapper[4861]: I0219 14:44:07.232659 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cfr2\" (UniqueName: \"kubernetes.io/projected/73ba1cca-8934-4068-8a44-00dc7b5a3726-kube-api-access-8cfr2\") pod \"swift-proxy-6d4ffc8498-zcrlh\" (UID: \"73ba1cca-8934-4068-8a44-00dc7b5a3726\") " pod="openstack/swift-proxy-6d4ffc8498-zcrlh" Feb 19 14:44:07 crc kubenswrapper[4861]: I0219 14:44:07.232679 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73ba1cca-8934-4068-8a44-00dc7b5a3726-internal-tls-certs\") pod \"swift-proxy-6d4ffc8498-zcrlh\" (UID: \"73ba1cca-8934-4068-8a44-00dc7b5a3726\") " pod="openstack/swift-proxy-6d4ffc8498-zcrlh" Feb 19 14:44:07 crc kubenswrapper[4861]: I0219 14:44:07.233404 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/73ba1cca-8934-4068-8a44-00dc7b5a3726-log-httpd\") pod \"swift-proxy-6d4ffc8498-zcrlh\" (UID: \"73ba1cca-8934-4068-8a44-00dc7b5a3726\") " pod="openstack/swift-proxy-6d4ffc8498-zcrlh" Feb 19 14:44:07 crc kubenswrapper[4861]: I0219 14:44:07.233465 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/73ba1cca-8934-4068-8a44-00dc7b5a3726-run-httpd\") pod \"swift-proxy-6d4ffc8498-zcrlh\" (UID: \"73ba1cca-8934-4068-8a44-00dc7b5a3726\") " pod="openstack/swift-proxy-6d4ffc8498-zcrlh" Feb 19 14:44:07 crc kubenswrapper[4861]: I0219 14:44:07.237883 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73ba1cca-8934-4068-8a44-00dc7b5a3726-public-tls-certs\") pod \"swift-proxy-6d4ffc8498-zcrlh\" (UID: \"73ba1cca-8934-4068-8a44-00dc7b5a3726\") " pod="openstack/swift-proxy-6d4ffc8498-zcrlh" Feb 19 14:44:07 crc kubenswrapper[4861]: I0219 14:44:07.238459 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73ba1cca-8934-4068-8a44-00dc7b5a3726-internal-tls-certs\") pod \"swift-proxy-6d4ffc8498-zcrlh\" (UID: \"73ba1cca-8934-4068-8a44-00dc7b5a3726\") " pod="openstack/swift-proxy-6d4ffc8498-zcrlh" Feb 19 14:44:07 crc kubenswrapper[4861]: I0219 14:44:07.239041 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73ba1cca-8934-4068-8a44-00dc7b5a3726-config-data\") pod \"swift-proxy-6d4ffc8498-zcrlh\" (UID: \"73ba1cca-8934-4068-8a44-00dc7b5a3726\") " pod="openstack/swift-proxy-6d4ffc8498-zcrlh" Feb 19 14:44:07 crc kubenswrapper[4861]: I0219 14:44:07.239627 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73ba1cca-8934-4068-8a44-00dc7b5a3726-combined-ca-bundle\") pod \"swift-proxy-6d4ffc8498-zcrlh\" (UID: \"73ba1cca-8934-4068-8a44-00dc7b5a3726\") " pod="openstack/swift-proxy-6d4ffc8498-zcrlh" Feb 19 14:44:07 crc kubenswrapper[4861]: I0219 14:44:07.247705 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/73ba1cca-8934-4068-8a44-00dc7b5a3726-etc-swift\") 
pod \"swift-proxy-6d4ffc8498-zcrlh\" (UID: \"73ba1cca-8934-4068-8a44-00dc7b5a3726\") " pod="openstack/swift-proxy-6d4ffc8498-zcrlh" Feb 19 14:44:07 crc kubenswrapper[4861]: I0219 14:44:07.256065 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cfr2\" (UniqueName: \"kubernetes.io/projected/73ba1cca-8934-4068-8a44-00dc7b5a3726-kube-api-access-8cfr2\") pod \"swift-proxy-6d4ffc8498-zcrlh\" (UID: \"73ba1cca-8934-4068-8a44-00dc7b5a3726\") " pod="openstack/swift-proxy-6d4ffc8498-zcrlh" Feb 19 14:44:07 crc kubenswrapper[4861]: I0219 14:44:07.283033 4861 generic.go:334] "Generic (PLEG): container finished" podID="38973f38-cefa-4543-807f-da43a6a21e7b" containerID="1ce6c132e5a19d7923d4a5f10c6fdce302fb68dd096c93853c3efbcbccab43fb" exitCode=0 Feb 19 14:44:07 crc kubenswrapper[4861]: I0219 14:44:07.283126 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-tz57m" event={"ID":"38973f38-cefa-4543-807f-da43a6a21e7b","Type":"ContainerDied","Data":"1ce6c132e5a19d7923d4a5f10c6fdce302fb68dd096c93853c3efbcbccab43fb"} Feb 19 14:44:07 crc kubenswrapper[4861]: I0219 14:44:07.284258 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6d4ffc8498-zcrlh" Feb 19 14:44:07 crc kubenswrapper[4861]: I0219 14:44:07.284432 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-57f9db5886-8ptzp" Feb 19 14:44:07 crc kubenswrapper[4861]: I0219 14:44:07.284483 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-57f9db5886-8ptzp" Feb 19 14:44:07 crc kubenswrapper[4861]: I0219 14:44:07.960060 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6d4ffc8498-zcrlh"] Feb 19 14:44:07 crc kubenswrapper[4861]: W0219 14:44:07.967141 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73ba1cca_8934_4068_8a44_00dc7b5a3726.slice/crio-b561f5399aa1321dc467693982e020ad5786fafde2db1d00b717fa4cb83aa1cc WatchSource:0}: Error finding container b561f5399aa1321dc467693982e020ad5786fafde2db1d00b717fa4cb83aa1cc: Status 404 returned error can't find the container with id b561f5399aa1321dc467693982e020ad5786fafde2db1d00b717fa4cb83aa1cc Feb 19 14:44:08 crc kubenswrapper[4861]: I0219 14:44:08.309121 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d4ffc8498-zcrlh" event={"ID":"73ba1cca-8934-4068-8a44-00dc7b5a3726","Type":"ContainerStarted","Data":"b561f5399aa1321dc467693982e020ad5786fafde2db1d00b717fa4cb83aa1cc"} Feb 19 14:44:08 crc kubenswrapper[4861]: I0219 14:44:08.640701 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-tz57m" Feb 19 14:44:08 crc kubenswrapper[4861]: I0219 14:44:08.767931 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38973f38-cefa-4543-807f-da43a6a21e7b-combined-ca-bundle\") pod \"38973f38-cefa-4543-807f-da43a6a21e7b\" (UID: \"38973f38-cefa-4543-807f-da43a6a21e7b\") " Feb 19 14:44:08 crc kubenswrapper[4861]: I0219 14:44:08.768376 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/38973f38-cefa-4543-807f-da43a6a21e7b-etc-swift\") pod \"38973f38-cefa-4543-807f-da43a6a21e7b\" (UID: \"38973f38-cefa-4543-807f-da43a6a21e7b\") " Feb 19 14:44:08 crc kubenswrapper[4861]: I0219 14:44:08.768514 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf5s9\" (UniqueName: \"kubernetes.io/projected/38973f38-cefa-4543-807f-da43a6a21e7b-kube-api-access-hf5s9\") pod \"38973f38-cefa-4543-807f-da43a6a21e7b\" (UID: \"38973f38-cefa-4543-807f-da43a6a21e7b\") " Feb 19 14:44:08 crc kubenswrapper[4861]: I0219 14:44:08.768560 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/38973f38-cefa-4543-807f-da43a6a21e7b-ring-data-devices\") pod \"38973f38-cefa-4543-807f-da43a6a21e7b\" (UID: \"38973f38-cefa-4543-807f-da43a6a21e7b\") " Feb 19 14:44:08 crc kubenswrapper[4861]: I0219 14:44:08.768624 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/38973f38-cefa-4543-807f-da43a6a21e7b-swiftconf\") pod \"38973f38-cefa-4543-807f-da43a6a21e7b\" (UID: \"38973f38-cefa-4543-807f-da43a6a21e7b\") " Feb 19 14:44:08 crc kubenswrapper[4861]: I0219 14:44:08.768779 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dispersionconf\" (UniqueName: \"kubernetes.io/secret/38973f38-cefa-4543-807f-da43a6a21e7b-dispersionconf\") pod \"38973f38-cefa-4543-807f-da43a6a21e7b\" (UID: \"38973f38-cefa-4543-807f-da43a6a21e7b\") " Feb 19 14:44:08 crc kubenswrapper[4861]: I0219 14:44:08.768832 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/38973f38-cefa-4543-807f-da43a6a21e7b-scripts\") pod \"38973f38-cefa-4543-807f-da43a6a21e7b\" (UID: \"38973f38-cefa-4543-807f-da43a6a21e7b\") " Feb 19 14:44:08 crc kubenswrapper[4861]: I0219 14:44:08.770290 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38973f38-cefa-4543-807f-da43a6a21e7b-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "38973f38-cefa-4543-807f-da43a6a21e7b" (UID: "38973f38-cefa-4543-807f-da43a6a21e7b"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:44:08 crc kubenswrapper[4861]: I0219 14:44:08.771045 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38973f38-cefa-4543-807f-da43a6a21e7b-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "38973f38-cefa-4543-807f-da43a6a21e7b" (UID: "38973f38-cefa-4543-807f-da43a6a21e7b"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:44:08 crc kubenswrapper[4861]: I0219 14:44:08.775619 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38973f38-cefa-4543-807f-da43a6a21e7b-kube-api-access-hf5s9" (OuterVolumeSpecName: "kube-api-access-hf5s9") pod "38973f38-cefa-4543-807f-da43a6a21e7b" (UID: "38973f38-cefa-4543-807f-da43a6a21e7b"). InnerVolumeSpecName "kube-api-access-hf5s9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:44:08 crc kubenswrapper[4861]: I0219 14:44:08.783708 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38973f38-cefa-4543-807f-da43a6a21e7b-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "38973f38-cefa-4543-807f-da43a6a21e7b" (UID: "38973f38-cefa-4543-807f-da43a6a21e7b"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:44:08 crc kubenswrapper[4861]: I0219 14:44:08.794092 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38973f38-cefa-4543-807f-da43a6a21e7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38973f38-cefa-4543-807f-da43a6a21e7b" (UID: "38973f38-cefa-4543-807f-da43a6a21e7b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:44:08 crc kubenswrapper[4861]: I0219 14:44:08.794997 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38973f38-cefa-4543-807f-da43a6a21e7b-scripts" (OuterVolumeSpecName: "scripts") pod "38973f38-cefa-4543-807f-da43a6a21e7b" (UID: "38973f38-cefa-4543-807f-da43a6a21e7b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:44:08 crc kubenswrapper[4861]: I0219 14:44:08.812016 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38973f38-cefa-4543-807f-da43a6a21e7b-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "38973f38-cefa-4543-807f-da43a6a21e7b" (UID: "38973f38-cefa-4543-807f-da43a6a21e7b"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:44:08 crc kubenswrapper[4861]: I0219 14:44:08.871747 4861 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/38973f38-cefa-4543-807f-da43a6a21e7b-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 19 14:44:08 crc kubenswrapper[4861]: I0219 14:44:08.871792 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/38973f38-cefa-4543-807f-da43a6a21e7b-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:44:08 crc kubenswrapper[4861]: I0219 14:44:08.871804 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38973f38-cefa-4543-807f-da43a6a21e7b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:44:08 crc kubenswrapper[4861]: I0219 14:44:08.871819 4861 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/38973f38-cefa-4543-807f-da43a6a21e7b-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 19 14:44:08 crc kubenswrapper[4861]: I0219 14:44:08.871831 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf5s9\" (UniqueName: \"kubernetes.io/projected/38973f38-cefa-4543-807f-da43a6a21e7b-kube-api-access-hf5s9\") on node \"crc\" DevicePath \"\"" Feb 19 14:44:08 crc kubenswrapper[4861]: I0219 14:44:08.871844 4861 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/38973f38-cefa-4543-807f-da43a6a21e7b-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 19 14:44:08 crc kubenswrapper[4861]: I0219 14:44:08.871855 4861 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/38973f38-cefa-4543-807f-da43a6a21e7b-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 19 14:44:09 crc kubenswrapper[4861]: I0219 14:44:09.322832 4861 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-tz57m" event={"ID":"38973f38-cefa-4543-807f-da43a6a21e7b","Type":"ContainerDied","Data":"854344b79c32a161ecab836ef6a97bb2060b7e8818147572e0017a7b732d3aed"} Feb 19 14:44:09 crc kubenswrapper[4861]: I0219 14:44:09.322885 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="854344b79c32a161ecab836ef6a97bb2060b7e8818147572e0017a7b732d3aed" Feb 19 14:44:09 crc kubenswrapper[4861]: I0219 14:44:09.322910 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-tz57m" Feb 19 14:44:09 crc kubenswrapper[4861]: I0219 14:44:09.330066 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d4ffc8498-zcrlh" event={"ID":"73ba1cca-8934-4068-8a44-00dc7b5a3726","Type":"ContainerStarted","Data":"1aff42ec30b4652a7aa041d9555a575183005598ecad242b285d2169a4bcd3d4"} Feb 19 14:44:09 crc kubenswrapper[4861]: I0219 14:44:09.330138 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d4ffc8498-zcrlh" event={"ID":"73ba1cca-8934-4068-8a44-00dc7b5a3726","Type":"ContainerStarted","Data":"11d9a57770577de77130f929fc13cdd5fe82595407746d22433451f9b7cdfb53"} Feb 19 14:44:09 crc kubenswrapper[4861]: I0219 14:44:09.331551 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6d4ffc8498-zcrlh" Feb 19 14:44:09 crc kubenswrapper[4861]: I0219 14:44:09.331605 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6d4ffc8498-zcrlh" Feb 19 14:44:09 crc kubenswrapper[4861]: I0219 14:44:09.369487 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6d4ffc8498-zcrlh" podStartSLOduration=3.369459306 podStartE2EDuration="3.369459306s" podCreationTimestamp="2026-02-19 14:44:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:44:09.358390479 +0000 UTC m=+5664.019493727" watchObservedRunningTime="2026-02-19 14:44:09.369459306 +0000 UTC m=+5664.030562564" Feb 19 14:44:11 crc kubenswrapper[4861]: I0219 14:44:11.906728 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78696b78bf-8w2t9" Feb 19 14:44:12 crc kubenswrapper[4861]: I0219 14:44:12.024406 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c87879445-g9jvc"] Feb 19 14:44:12 crc kubenswrapper[4861]: I0219 14:44:12.024857 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c87879445-g9jvc" podUID="5584a713-cb0e-4f6c-bb04-86231af4306e" containerName="dnsmasq-dns" containerID="cri-o://6900fb320c0c1ccb08b35e7cc9e21533347b784c29738f096a6461d273809b6c" gracePeriod=10 Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.111358 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c87879445-g9jvc" Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.255174 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tn5fn\" (UniqueName: \"kubernetes.io/projected/5584a713-cb0e-4f6c-bb04-86231af4306e-kube-api-access-tn5fn\") pod \"5584a713-cb0e-4f6c-bb04-86231af4306e\" (UID: \"5584a713-cb0e-4f6c-bb04-86231af4306e\") " Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.255584 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5584a713-cb0e-4f6c-bb04-86231af4306e-ovsdbserver-sb\") pod \"5584a713-cb0e-4f6c-bb04-86231af4306e\" (UID: \"5584a713-cb0e-4f6c-bb04-86231af4306e\") " Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.255661 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5584a713-cb0e-4f6c-bb04-86231af4306e-config\") pod \"5584a713-cb0e-4f6c-bb04-86231af4306e\" (UID: \"5584a713-cb0e-4f6c-bb04-86231af4306e\") " Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.255765 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5584a713-cb0e-4f6c-bb04-86231af4306e-ovsdbserver-nb\") pod \"5584a713-cb0e-4f6c-bb04-86231af4306e\" (UID: \"5584a713-cb0e-4f6c-bb04-86231af4306e\") " Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.255841 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5584a713-cb0e-4f6c-bb04-86231af4306e-dns-svc\") pod \"5584a713-cb0e-4f6c-bb04-86231af4306e\" (UID: \"5584a713-cb0e-4f6c-bb04-86231af4306e\") " Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.267450 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/5584a713-cb0e-4f6c-bb04-86231af4306e-kube-api-access-tn5fn" (OuterVolumeSpecName: "kube-api-access-tn5fn") pod "5584a713-cb0e-4f6c-bb04-86231af4306e" (UID: "5584a713-cb0e-4f6c-bb04-86231af4306e"). InnerVolumeSpecName "kube-api-access-tn5fn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.315268 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5584a713-cb0e-4f6c-bb04-86231af4306e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5584a713-cb0e-4f6c-bb04-86231af4306e" (UID: "5584a713-cb0e-4f6c-bb04-86231af4306e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.318138 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5584a713-cb0e-4f6c-bb04-86231af4306e-config" (OuterVolumeSpecName: "config") pod "5584a713-cb0e-4f6c-bb04-86231af4306e" (UID: "5584a713-cb0e-4f6c-bb04-86231af4306e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.330042 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5584a713-cb0e-4f6c-bb04-86231af4306e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5584a713-cb0e-4f6c-bb04-86231af4306e" (UID: "5584a713-cb0e-4f6c-bb04-86231af4306e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.333091 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5584a713-cb0e-4f6c-bb04-86231af4306e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5584a713-cb0e-4f6c-bb04-86231af4306e" (UID: "5584a713-cb0e-4f6c-bb04-86231af4306e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.358008 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5584a713-cb0e-4f6c-bb04-86231af4306e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.358046 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5584a713-cb0e-4f6c-bb04-86231af4306e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.358059 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tn5fn\" (UniqueName: \"kubernetes.io/projected/5584a713-cb0e-4f6c-bb04-86231af4306e-kube-api-access-tn5fn\") on node \"crc\" DevicePath \"\"" Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.358073 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5584a713-cb0e-4f6c-bb04-86231af4306e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.358085 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5584a713-cb0e-4f6c-bb04-86231af4306e-config\") on node \"crc\" DevicePath \"\"" Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.372094 4861 generic.go:334] "Generic (PLEG): container finished" podID="5584a713-cb0e-4f6c-bb04-86231af4306e" containerID="6900fb320c0c1ccb08b35e7cc9e21533347b784c29738f096a6461d273809b6c" exitCode=0 Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.372138 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c87879445-g9jvc" event={"ID":"5584a713-cb0e-4f6c-bb04-86231af4306e","Type":"ContainerDied","Data":"6900fb320c0c1ccb08b35e7cc9e21533347b784c29738f096a6461d273809b6c"} Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 
14:44:13.372168 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c87879445-g9jvc" event={"ID":"5584a713-cb0e-4f6c-bb04-86231af4306e","Type":"ContainerDied","Data":"d874c6ccd32b74491b34883642cb3d60efd25224eca350f83e9ebd570080cbae"} Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.372191 4861 scope.go:117] "RemoveContainer" containerID="6900fb320c0c1ccb08b35e7cc9e21533347b784c29738f096a6461d273809b6c" Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.372321 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c87879445-g9jvc" Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.402159 4861 scope.go:117] "RemoveContainer" containerID="acb5a62917b9c01c641ec9d06285697e06afe46dc095c0d53168af838c6ffb72" Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.418677 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c87879445-g9jvc"] Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.426791 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c87879445-g9jvc"] Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.444920 4861 scope.go:117] "RemoveContainer" containerID="6900fb320c0c1ccb08b35e7cc9e21533347b784c29738f096a6461d273809b6c" Feb 19 14:44:13 crc kubenswrapper[4861]: E0219 14:44:13.445412 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6900fb320c0c1ccb08b35e7cc9e21533347b784c29738f096a6461d273809b6c\": container with ID starting with 6900fb320c0c1ccb08b35e7cc9e21533347b784c29738f096a6461d273809b6c not found: ID does not exist" containerID="6900fb320c0c1ccb08b35e7cc9e21533347b784c29738f096a6461d273809b6c" Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.445478 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6900fb320c0c1ccb08b35e7cc9e21533347b784c29738f096a6461d273809b6c"} err="failed to get container status \"6900fb320c0c1ccb08b35e7cc9e21533347b784c29738f096a6461d273809b6c\": rpc error: code = NotFound desc = could not find container \"6900fb320c0c1ccb08b35e7cc9e21533347b784c29738f096a6461d273809b6c\": container with ID starting with 6900fb320c0c1ccb08b35e7cc9e21533347b784c29738f096a6461d273809b6c not found: ID does not exist" Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.445507 4861 scope.go:117] "RemoveContainer" containerID="acb5a62917b9c01c641ec9d06285697e06afe46dc095c0d53168af838c6ffb72" Feb 19 14:44:13 crc kubenswrapper[4861]: E0219 14:44:13.445917 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acb5a62917b9c01c641ec9d06285697e06afe46dc095c0d53168af838c6ffb72\": container with ID starting with acb5a62917b9c01c641ec9d06285697e06afe46dc095c0d53168af838c6ffb72 not found: ID does not exist" containerID="acb5a62917b9c01c641ec9d06285697e06afe46dc095c0d53168af838c6ffb72" Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.445941 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acb5a62917b9c01c641ec9d06285697e06afe46dc095c0d53168af838c6ffb72"} err="failed to get container status \"acb5a62917b9c01c641ec9d06285697e06afe46dc095c0d53168af838c6ffb72\": rpc error: code = NotFound desc = could not find container \"acb5a62917b9c01c641ec9d06285697e06afe46dc095c0d53168af838c6ffb72\": container with ID starting with acb5a62917b9c01c641ec9d06285697e06afe46dc095c0d53168af838c6ffb72 not found: ID does not exist" Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.623710 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nnrlf" Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.703702 4861 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nnrlf" Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.826153 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nnrlf"] Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.910005 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mkqzt"] Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.910579 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mkqzt" podUID="400ed650-9346-403d-a70f-27012222dc66" containerName="registry-server" containerID="cri-o://0533eb6f5499487c0171cd677c8ec3bd5eb015e7e22adbd9d3c2f36b61836c1c" gracePeriod=2 Feb 19 14:44:13 crc kubenswrapper[4861]: I0219 14:44:13.994538 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5584a713-cb0e-4f6c-bb04-86231af4306e" path="/var/lib/kubelet/pods/5584a713-cb0e-4f6c-bb04-86231af4306e/volumes" Feb 19 14:44:14 crc kubenswrapper[4861]: I0219 14:44:14.384516 4861 generic.go:334] "Generic (PLEG): container finished" podID="400ed650-9346-403d-a70f-27012222dc66" containerID="0533eb6f5499487c0171cd677c8ec3bd5eb015e7e22adbd9d3c2f36b61836c1c" exitCode=0 Feb 19 14:44:14 crc kubenswrapper[4861]: I0219 14:44:14.384607 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkqzt" event={"ID":"400ed650-9346-403d-a70f-27012222dc66","Type":"ContainerDied","Data":"0533eb6f5499487c0171cd677c8ec3bd5eb015e7e22adbd9d3c2f36b61836c1c"} Feb 19 14:44:14 crc kubenswrapper[4861]: I0219 14:44:14.751930 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-57f9db5886-8ptzp" Feb 19 14:44:14 crc kubenswrapper[4861]: I0219 14:44:14.751987 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-57f9db5886-8ptzp" Feb 
19 14:44:14 crc kubenswrapper[4861]: I0219 14:44:14.985456 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mkqzt" Feb 19 14:44:15 crc kubenswrapper[4861]: I0219 14:44:15.091963 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/400ed650-9346-403d-a70f-27012222dc66-utilities\") pod \"400ed650-9346-403d-a70f-27012222dc66\" (UID: \"400ed650-9346-403d-a70f-27012222dc66\") " Feb 19 14:44:15 crc kubenswrapper[4861]: I0219 14:44:15.092008 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmnxb\" (UniqueName: \"kubernetes.io/projected/400ed650-9346-403d-a70f-27012222dc66-kube-api-access-zmnxb\") pod \"400ed650-9346-403d-a70f-27012222dc66\" (UID: \"400ed650-9346-403d-a70f-27012222dc66\") " Feb 19 14:44:15 crc kubenswrapper[4861]: I0219 14:44:15.092133 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/400ed650-9346-403d-a70f-27012222dc66-catalog-content\") pod \"400ed650-9346-403d-a70f-27012222dc66\" (UID: \"400ed650-9346-403d-a70f-27012222dc66\") " Feb 19 14:44:15 crc kubenswrapper[4861]: I0219 14:44:15.093562 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/400ed650-9346-403d-a70f-27012222dc66-utilities" (OuterVolumeSpecName: "utilities") pod "400ed650-9346-403d-a70f-27012222dc66" (UID: "400ed650-9346-403d-a70f-27012222dc66"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:44:15 crc kubenswrapper[4861]: I0219 14:44:15.098972 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/400ed650-9346-403d-a70f-27012222dc66-kube-api-access-zmnxb" (OuterVolumeSpecName: "kube-api-access-zmnxb") pod "400ed650-9346-403d-a70f-27012222dc66" (UID: "400ed650-9346-403d-a70f-27012222dc66"). InnerVolumeSpecName "kube-api-access-zmnxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:44:15 crc kubenswrapper[4861]: I0219 14:44:15.147347 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/400ed650-9346-403d-a70f-27012222dc66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "400ed650-9346-403d-a70f-27012222dc66" (UID: "400ed650-9346-403d-a70f-27012222dc66"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:44:15 crc kubenswrapper[4861]: I0219 14:44:15.193856 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/400ed650-9346-403d-a70f-27012222dc66-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 14:44:15 crc kubenswrapper[4861]: I0219 14:44:15.193900 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmnxb\" (UniqueName: \"kubernetes.io/projected/400ed650-9346-403d-a70f-27012222dc66-kube-api-access-zmnxb\") on node \"crc\" DevicePath \"\"" Feb 19 14:44:15 crc kubenswrapper[4861]: I0219 14:44:15.193912 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/400ed650-9346-403d-a70f-27012222dc66-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 14:44:15 crc kubenswrapper[4861]: I0219 14:44:15.393884 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mkqzt" Feb 19 14:44:15 crc kubenswrapper[4861]: I0219 14:44:15.393942 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkqzt" event={"ID":"400ed650-9346-403d-a70f-27012222dc66","Type":"ContainerDied","Data":"af537c87900f130d267ba143cf0a2a90d8f43d18caa81611bc62badc935c37d5"} Feb 19 14:44:15 crc kubenswrapper[4861]: I0219 14:44:15.393986 4861 scope.go:117] "RemoveContainer" containerID="0533eb6f5499487c0171cd677c8ec3bd5eb015e7e22adbd9d3c2f36b61836c1c" Feb 19 14:44:15 crc kubenswrapper[4861]: I0219 14:44:15.420169 4861 scope.go:117] "RemoveContainer" containerID="03577f0d8c95aa010ad3f914c5e40ccd7a68d0ccb0232757d982daebf9d80b90" Feb 19 14:44:15 crc kubenswrapper[4861]: I0219 14:44:15.430065 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mkqzt"] Feb 19 14:44:15 crc kubenswrapper[4861]: I0219 14:44:15.441472 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mkqzt"] Feb 19 14:44:15 crc kubenswrapper[4861]: I0219 14:44:15.442386 4861 scope.go:117] "RemoveContainer" containerID="37973ffde48904e40fdf448303b0ae96ae33377d00d7056cbf8f5105e0df644b" Feb 19 14:44:16 crc kubenswrapper[4861]: I0219 14:44:16.017952 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="400ed650-9346-403d-a70f-27012222dc66" path="/var/lib/kubelet/pods/400ed650-9346-403d-a70f-27012222dc66/volumes" Feb 19 14:44:17 crc kubenswrapper[4861]: I0219 14:44:17.092186 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tc2h2"] Feb 19 14:44:17 crc kubenswrapper[4861]: E0219 14:44:17.092855 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="400ed650-9346-403d-a70f-27012222dc66" containerName="registry-server" Feb 19 14:44:17 crc kubenswrapper[4861]: I0219 14:44:17.092877 4861 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="400ed650-9346-403d-a70f-27012222dc66" containerName="registry-server" Feb 19 14:44:17 crc kubenswrapper[4861]: E0219 14:44:17.092900 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5584a713-cb0e-4f6c-bb04-86231af4306e" containerName="dnsmasq-dns" Feb 19 14:44:17 crc kubenswrapper[4861]: I0219 14:44:17.092912 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5584a713-cb0e-4f6c-bb04-86231af4306e" containerName="dnsmasq-dns" Feb 19 14:44:17 crc kubenswrapper[4861]: E0219 14:44:17.092946 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="400ed650-9346-403d-a70f-27012222dc66" containerName="extract-utilities" Feb 19 14:44:17 crc kubenswrapper[4861]: I0219 14:44:17.092959 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="400ed650-9346-403d-a70f-27012222dc66" containerName="extract-utilities" Feb 19 14:44:17 crc kubenswrapper[4861]: E0219 14:44:17.092993 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38973f38-cefa-4543-807f-da43a6a21e7b" containerName="swift-ring-rebalance" Feb 19 14:44:17 crc kubenswrapper[4861]: I0219 14:44:17.093005 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="38973f38-cefa-4543-807f-da43a6a21e7b" containerName="swift-ring-rebalance" Feb 19 14:44:17 crc kubenswrapper[4861]: E0219 14:44:17.093034 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5584a713-cb0e-4f6c-bb04-86231af4306e" containerName="init" Feb 19 14:44:17 crc kubenswrapper[4861]: I0219 14:44:17.093046 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5584a713-cb0e-4f6c-bb04-86231af4306e" containerName="init" Feb 19 14:44:17 crc kubenswrapper[4861]: E0219 14:44:17.093059 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="400ed650-9346-403d-a70f-27012222dc66" containerName="extract-content" Feb 19 14:44:17 crc kubenswrapper[4861]: I0219 14:44:17.093071 4861 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="400ed650-9346-403d-a70f-27012222dc66" containerName="extract-content" Feb 19 14:44:17 crc kubenswrapper[4861]: I0219 14:44:17.093345 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="400ed650-9346-403d-a70f-27012222dc66" containerName="registry-server" Feb 19 14:44:17 crc kubenswrapper[4861]: I0219 14:44:17.093388 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="38973f38-cefa-4543-807f-da43a6a21e7b" containerName="swift-ring-rebalance" Feb 19 14:44:17 crc kubenswrapper[4861]: I0219 14:44:17.093413 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="5584a713-cb0e-4f6c-bb04-86231af4306e" containerName="dnsmasq-dns" Feb 19 14:44:17 crc kubenswrapper[4861]: I0219 14:44:17.096257 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tc2h2" Feb 19 14:44:17 crc kubenswrapper[4861]: I0219 14:44:17.114064 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tc2h2"] Feb 19 14:44:17 crc kubenswrapper[4861]: I0219 14:44:17.239165 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64e8d05a-9bcf-43f4-9fac-7b909da2c431-catalog-content\") pod \"community-operators-tc2h2\" (UID: \"64e8d05a-9bcf-43f4-9fac-7b909da2c431\") " pod="openshift-marketplace/community-operators-tc2h2" Feb 19 14:44:17 crc kubenswrapper[4861]: I0219 14:44:17.239231 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv7fg\" (UniqueName: \"kubernetes.io/projected/64e8d05a-9bcf-43f4-9fac-7b909da2c431-kube-api-access-zv7fg\") pod \"community-operators-tc2h2\" (UID: \"64e8d05a-9bcf-43f4-9fac-7b909da2c431\") " pod="openshift-marketplace/community-operators-tc2h2" Feb 19 14:44:17 crc kubenswrapper[4861]: I0219 14:44:17.239403 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64e8d05a-9bcf-43f4-9fac-7b909da2c431-utilities\") pod \"community-operators-tc2h2\" (UID: \"64e8d05a-9bcf-43f4-9fac-7b909da2c431\") " pod="openshift-marketplace/community-operators-tc2h2" Feb 19 14:44:17 crc kubenswrapper[4861]: I0219 14:44:17.290711 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6d4ffc8498-zcrlh" Feb 19 14:44:17 crc kubenswrapper[4861]: I0219 14:44:17.292797 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6d4ffc8498-zcrlh" Feb 19 14:44:17 crc kubenswrapper[4861]: I0219 14:44:17.340784 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv7fg\" (UniqueName: \"kubernetes.io/projected/64e8d05a-9bcf-43f4-9fac-7b909da2c431-kube-api-access-zv7fg\") pod \"community-operators-tc2h2\" (UID: \"64e8d05a-9bcf-43f4-9fac-7b909da2c431\") " pod="openshift-marketplace/community-operators-tc2h2" Feb 19 14:44:17 crc kubenswrapper[4861]: I0219 14:44:17.340879 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64e8d05a-9bcf-43f4-9fac-7b909da2c431-utilities\") pod \"community-operators-tc2h2\" (UID: \"64e8d05a-9bcf-43f4-9fac-7b909da2c431\") " pod="openshift-marketplace/community-operators-tc2h2" Feb 19 14:44:17 crc kubenswrapper[4861]: I0219 14:44:17.341010 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64e8d05a-9bcf-43f4-9fac-7b909da2c431-catalog-content\") pod \"community-operators-tc2h2\" (UID: \"64e8d05a-9bcf-43f4-9fac-7b909da2c431\") " pod="openshift-marketplace/community-operators-tc2h2" Feb 19 14:44:17 crc kubenswrapper[4861]: I0219 14:44:17.341535 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64e8d05a-9bcf-43f4-9fac-7b909da2c431-catalog-content\") pod \"community-operators-tc2h2\" (UID: \"64e8d05a-9bcf-43f4-9fac-7b909da2c431\") " pod="openshift-marketplace/community-operators-tc2h2" Feb 19 14:44:17 crc kubenswrapper[4861]: I0219 14:44:17.341775 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64e8d05a-9bcf-43f4-9fac-7b909da2c431-utilities\") pod \"community-operators-tc2h2\" (UID: \"64e8d05a-9bcf-43f4-9fac-7b909da2c431\") " pod="openshift-marketplace/community-operators-tc2h2" Feb 19 14:44:17 crc kubenswrapper[4861]: I0219 14:44:17.368501 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv7fg\" (UniqueName: \"kubernetes.io/projected/64e8d05a-9bcf-43f4-9fac-7b909da2c431-kube-api-access-zv7fg\") pod \"community-operators-tc2h2\" (UID: \"64e8d05a-9bcf-43f4-9fac-7b909da2c431\") " pod="openshift-marketplace/community-operators-tc2h2" Feb 19 14:44:17 crc kubenswrapper[4861]: I0219 14:44:17.390516 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-57f9db5886-8ptzp"] Feb 19 14:44:17 crc kubenswrapper[4861]: I0219 14:44:17.390776 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-57f9db5886-8ptzp" podUID="40b382c6-8de6-4f4c-8bb2-6f5cb54aee96" containerName="proxy-httpd" containerID="cri-o://d8c0864cd521b62c3818652836fa953130ca0802099770bdf3a579aa899900c4" gracePeriod=30 Feb 19 14:44:17 crc kubenswrapper[4861]: I0219 14:44:17.390896 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-57f9db5886-8ptzp" podUID="40b382c6-8de6-4f4c-8bb2-6f5cb54aee96" containerName="proxy-server" containerID="cri-o://318c109f1aa79a7b410e01eb4bc480ab20823839977b5c2a3e148af50768d2bd" gracePeriod=30 Feb 19 14:44:17 crc kubenswrapper[4861]: I0219 14:44:17.430938 4861 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tc2h2" Feb 19 14:44:17 crc kubenswrapper[4861]: I0219 14:44:17.807707 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tc2h2"] Feb 19 14:44:18 crc kubenswrapper[4861]: I0219 14:44:18.428502 4861 generic.go:334] "Generic (PLEG): container finished" podID="40b382c6-8de6-4f4c-8bb2-6f5cb54aee96" containerID="318c109f1aa79a7b410e01eb4bc480ab20823839977b5c2a3e148af50768d2bd" exitCode=0 Feb 19 14:44:18 crc kubenswrapper[4861]: I0219 14:44:18.428752 4861 generic.go:334] "Generic (PLEG): container finished" podID="40b382c6-8de6-4f4c-8bb2-6f5cb54aee96" containerID="d8c0864cd521b62c3818652836fa953130ca0802099770bdf3a579aa899900c4" exitCode=0 Feb 19 14:44:18 crc kubenswrapper[4861]: I0219 14:44:18.428557 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-57f9db5886-8ptzp" event={"ID":"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96","Type":"ContainerDied","Data":"318c109f1aa79a7b410e01eb4bc480ab20823839977b5c2a3e148af50768d2bd"} Feb 19 14:44:18 crc kubenswrapper[4861]: I0219 14:44:18.428825 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-57f9db5886-8ptzp" event={"ID":"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96","Type":"ContainerDied","Data":"d8c0864cd521b62c3818652836fa953130ca0802099770bdf3a579aa899900c4"} Feb 19 14:44:18 crc kubenswrapper[4861]: I0219 14:44:18.437049 4861 generic.go:334] "Generic (PLEG): container finished" podID="64e8d05a-9bcf-43f4-9fac-7b909da2c431" containerID="a04930d74b6c70debda497083d7b3fdaef00f42573a6776006ac9193c67230f0" exitCode=0 Feb 19 14:44:18 crc kubenswrapper[4861]: I0219 14:44:18.437091 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tc2h2" 
event={"ID":"64e8d05a-9bcf-43f4-9fac-7b909da2c431","Type":"ContainerDied","Data":"a04930d74b6c70debda497083d7b3fdaef00f42573a6776006ac9193c67230f0"} Feb 19 14:44:18 crc kubenswrapper[4861]: I0219 14:44:18.437115 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tc2h2" event={"ID":"64e8d05a-9bcf-43f4-9fac-7b909da2c431","Type":"ContainerStarted","Data":"96fc89d057959d8e5c5b3b73c06d647d541b4a994f500c1d46eff83a8bacf1d7"} Feb 19 14:44:18 crc kubenswrapper[4861]: I0219 14:44:18.438950 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 14:44:18 crc kubenswrapper[4861]: I0219 14:44:18.550497 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-57f9db5886-8ptzp" Feb 19 14:44:18 crc kubenswrapper[4861]: I0219 14:44:18.666605 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-etc-swift\") pod \"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96\" (UID: \"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96\") " Feb 19 14:44:18 crc kubenswrapper[4861]: I0219 14:44:18.666826 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-log-httpd\") pod \"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96\" (UID: \"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96\") " Feb 19 14:44:18 crc kubenswrapper[4861]: I0219 14:44:18.666937 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzp9c\" (UniqueName: \"kubernetes.io/projected/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-kube-api-access-gzp9c\") pod \"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96\" (UID: \"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96\") " Feb 19 14:44:18 crc kubenswrapper[4861]: I0219 14:44:18.666998 4861 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-config-data\") pod \"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96\" (UID: \"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96\") " Feb 19 14:44:18 crc kubenswrapper[4861]: I0219 14:44:18.667030 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-run-httpd\") pod \"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96\" (UID: \"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96\") " Feb 19 14:44:18 crc kubenswrapper[4861]: I0219 14:44:18.667115 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-combined-ca-bundle\") pod \"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96\" (UID: \"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96\") " Feb 19 14:44:18 crc kubenswrapper[4861]: I0219 14:44:18.667373 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "40b382c6-8de6-4f4c-8bb2-6f5cb54aee96" (UID: "40b382c6-8de6-4f4c-8bb2-6f5cb54aee96"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:44:18 crc kubenswrapper[4861]: I0219 14:44:18.667607 4861 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 14:44:18 crc kubenswrapper[4861]: I0219 14:44:18.667658 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "40b382c6-8de6-4f4c-8bb2-6f5cb54aee96" (UID: "40b382c6-8de6-4f4c-8bb2-6f5cb54aee96"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:44:18 crc kubenswrapper[4861]: I0219 14:44:18.681556 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "40b382c6-8de6-4f4c-8bb2-6f5cb54aee96" (UID: "40b382c6-8de6-4f4c-8bb2-6f5cb54aee96"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:44:18 crc kubenswrapper[4861]: I0219 14:44:18.687722 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-kube-api-access-gzp9c" (OuterVolumeSpecName: "kube-api-access-gzp9c") pod "40b382c6-8de6-4f4c-8bb2-6f5cb54aee96" (UID: "40b382c6-8de6-4f4c-8bb2-6f5cb54aee96"). InnerVolumeSpecName "kube-api-access-gzp9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:44:18 crc kubenswrapper[4861]: I0219 14:44:18.718069 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40b382c6-8de6-4f4c-8bb2-6f5cb54aee96" (UID: "40b382c6-8de6-4f4c-8bb2-6f5cb54aee96"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:44:18 crc kubenswrapper[4861]: I0219 14:44:18.731510 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-config-data" (OuterVolumeSpecName: "config-data") pod "40b382c6-8de6-4f4c-8bb2-6f5cb54aee96" (UID: "40b382c6-8de6-4f4c-8bb2-6f5cb54aee96"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:44:18 crc kubenswrapper[4861]: I0219 14:44:18.769061 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzp9c\" (UniqueName: \"kubernetes.io/projected/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-kube-api-access-gzp9c\") on node \"crc\" DevicePath \"\"" Feb 19 14:44:18 crc kubenswrapper[4861]: I0219 14:44:18.769103 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:44:18 crc kubenswrapper[4861]: I0219 14:44:18.769114 4861 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 14:44:18 crc kubenswrapper[4861]: I0219 14:44:18.769134 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:44:18 crc kubenswrapper[4861]: I0219 14:44:18.769143 4861 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 19 14:44:19 crc kubenswrapper[4861]: I0219 14:44:19.451027 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-57f9db5886-8ptzp" Feb 19 14:44:19 crc kubenswrapper[4861]: I0219 14:44:19.451145 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-57f9db5886-8ptzp" event={"ID":"40b382c6-8de6-4f4c-8bb2-6f5cb54aee96","Type":"ContainerDied","Data":"11d93d22c5fb1f392c8e0045dc5a179899554fe165cfade0de35deb1eeb65e2a"} Feb 19 14:44:19 crc kubenswrapper[4861]: I0219 14:44:19.451684 4861 scope.go:117] "RemoveContainer" containerID="318c109f1aa79a7b410e01eb4bc480ab20823839977b5c2a3e148af50768d2bd" Feb 19 14:44:19 crc kubenswrapper[4861]: I0219 14:44:19.457804 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tc2h2" event={"ID":"64e8d05a-9bcf-43f4-9fac-7b909da2c431","Type":"ContainerStarted","Data":"accdee46578cf84d1a22413a2517c1250dcb436efcfd89aa8e14fc77c9a58c8d"} Feb 19 14:44:19 crc kubenswrapper[4861]: I0219 14:44:19.490557 4861 scope.go:117] "RemoveContainer" containerID="d8c0864cd521b62c3818652836fa953130ca0802099770bdf3a579aa899900c4" Feb 19 14:44:19 crc kubenswrapper[4861]: I0219 14:44:19.518700 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-57f9db5886-8ptzp"] Feb 19 14:44:19 crc kubenswrapper[4861]: I0219 14:44:19.528323 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-57f9db5886-8ptzp"] Feb 19 14:44:19 crc kubenswrapper[4861]: I0219 14:44:19.998392 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40b382c6-8de6-4f4c-8bb2-6f5cb54aee96" path="/var/lib/kubelet/pods/40b382c6-8de6-4f4c-8bb2-6f5cb54aee96/volumes" Feb 19 14:44:20 crc kubenswrapper[4861]: I0219 14:44:20.475837 4861 generic.go:334] "Generic (PLEG): container finished" podID="64e8d05a-9bcf-43f4-9fac-7b909da2c431" containerID="accdee46578cf84d1a22413a2517c1250dcb436efcfd89aa8e14fc77c9a58c8d" exitCode=0 Feb 19 14:44:20 crc kubenswrapper[4861]: I0219 14:44:20.475911 4861 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-tc2h2" event={"ID":"64e8d05a-9bcf-43f4-9fac-7b909da2c431","Type":"ContainerDied","Data":"accdee46578cf84d1a22413a2517c1250dcb436efcfd89aa8e14fc77c9a58c8d"} Feb 19 14:44:21 crc kubenswrapper[4861]: I0219 14:44:21.488873 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tc2h2" event={"ID":"64e8d05a-9bcf-43f4-9fac-7b909da2c431","Type":"ContainerStarted","Data":"9a9b18f2e9b2cc8b6bc02079c3f112d9990fe7c96d3ae6637634cdb3b081a96a"} Feb 19 14:44:21 crc kubenswrapper[4861]: I0219 14:44:21.518750 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tc2h2" podStartSLOduration=2.083248132 podStartE2EDuration="4.518722369s" podCreationTimestamp="2026-02-19 14:44:17 +0000 UTC" firstStartedPulling="2026-02-19 14:44:18.438715398 +0000 UTC m=+5673.099818626" lastFinishedPulling="2026-02-19 14:44:20.874189595 +0000 UTC m=+5675.535292863" observedRunningTime="2026-02-19 14:44:21.514568647 +0000 UTC m=+5676.175671925" watchObservedRunningTime="2026-02-19 14:44:21.518722369 +0000 UTC m=+5676.179825637" Feb 19 14:44:27 crc kubenswrapper[4861]: I0219 14:44:27.431886 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tc2h2" Feb 19 14:44:27 crc kubenswrapper[4861]: I0219 14:44:27.432945 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tc2h2" Feb 19 14:44:27 crc kubenswrapper[4861]: I0219 14:44:27.497306 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tc2h2" Feb 19 14:44:27 crc kubenswrapper[4861]: I0219 14:44:27.608814 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tc2h2" Feb 19 14:44:27 crc kubenswrapper[4861]: I0219 
14:44:27.736989 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tc2h2"] Feb 19 14:44:29 crc kubenswrapper[4861]: I0219 14:44:29.584849 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tc2h2" podUID="64e8d05a-9bcf-43f4-9fac-7b909da2c431" containerName="registry-server" containerID="cri-o://9a9b18f2e9b2cc8b6bc02079c3f112d9990fe7c96d3ae6637634cdb3b081a96a" gracePeriod=2 Feb 19 14:44:30 crc kubenswrapper[4861]: I0219 14:44:30.598405 4861 generic.go:334] "Generic (PLEG): container finished" podID="64e8d05a-9bcf-43f4-9fac-7b909da2c431" containerID="9a9b18f2e9b2cc8b6bc02079c3f112d9990fe7c96d3ae6637634cdb3b081a96a" exitCode=0 Feb 19 14:44:30 crc kubenswrapper[4861]: I0219 14:44:30.598534 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tc2h2" event={"ID":"64e8d05a-9bcf-43f4-9fac-7b909da2c431","Type":"ContainerDied","Data":"9a9b18f2e9b2cc8b6bc02079c3f112d9990fe7c96d3ae6637634cdb3b081a96a"} Feb 19 14:44:30 crc kubenswrapper[4861]: I0219 14:44:30.598839 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tc2h2" event={"ID":"64e8d05a-9bcf-43f4-9fac-7b909da2c431","Type":"ContainerDied","Data":"96fc89d057959d8e5c5b3b73c06d647d541b4a994f500c1d46eff83a8bacf1d7"} Feb 19 14:44:30 crc kubenswrapper[4861]: I0219 14:44:30.598871 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96fc89d057959d8e5c5b3b73c06d647d541b4a994f500c1d46eff83a8bacf1d7" Feb 19 14:44:30 crc kubenswrapper[4861]: I0219 14:44:30.646129 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tc2h2" Feb 19 14:44:30 crc kubenswrapper[4861]: I0219 14:44:30.707594 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zv7fg\" (UniqueName: \"kubernetes.io/projected/64e8d05a-9bcf-43f4-9fac-7b909da2c431-kube-api-access-zv7fg\") pod \"64e8d05a-9bcf-43f4-9fac-7b909da2c431\" (UID: \"64e8d05a-9bcf-43f4-9fac-7b909da2c431\") " Feb 19 14:44:30 crc kubenswrapper[4861]: I0219 14:44:30.707711 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64e8d05a-9bcf-43f4-9fac-7b909da2c431-catalog-content\") pod \"64e8d05a-9bcf-43f4-9fac-7b909da2c431\" (UID: \"64e8d05a-9bcf-43f4-9fac-7b909da2c431\") " Feb 19 14:44:30 crc kubenswrapper[4861]: I0219 14:44:30.707824 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64e8d05a-9bcf-43f4-9fac-7b909da2c431-utilities\") pod \"64e8d05a-9bcf-43f4-9fac-7b909da2c431\" (UID: \"64e8d05a-9bcf-43f4-9fac-7b909da2c431\") " Feb 19 14:44:30 crc kubenswrapper[4861]: I0219 14:44:30.708972 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64e8d05a-9bcf-43f4-9fac-7b909da2c431-utilities" (OuterVolumeSpecName: "utilities") pod "64e8d05a-9bcf-43f4-9fac-7b909da2c431" (UID: "64e8d05a-9bcf-43f4-9fac-7b909da2c431"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:44:30 crc kubenswrapper[4861]: I0219 14:44:30.716433 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64e8d05a-9bcf-43f4-9fac-7b909da2c431-kube-api-access-zv7fg" (OuterVolumeSpecName: "kube-api-access-zv7fg") pod "64e8d05a-9bcf-43f4-9fac-7b909da2c431" (UID: "64e8d05a-9bcf-43f4-9fac-7b909da2c431"). InnerVolumeSpecName "kube-api-access-zv7fg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:44:30 crc kubenswrapper[4861]: I0219 14:44:30.766605 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64e8d05a-9bcf-43f4-9fac-7b909da2c431-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64e8d05a-9bcf-43f4-9fac-7b909da2c431" (UID: "64e8d05a-9bcf-43f4-9fac-7b909da2c431"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:44:30 crc kubenswrapper[4861]: I0219 14:44:30.810779 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zv7fg\" (UniqueName: \"kubernetes.io/projected/64e8d05a-9bcf-43f4-9fac-7b909da2c431-kube-api-access-zv7fg\") on node \"crc\" DevicePath \"\"" Feb 19 14:44:30 crc kubenswrapper[4861]: I0219 14:44:30.811001 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64e8d05a-9bcf-43f4-9fac-7b909da2c431-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 14:44:30 crc kubenswrapper[4861]: I0219 14:44:30.811085 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64e8d05a-9bcf-43f4-9fac-7b909da2c431-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 14:44:31 crc kubenswrapper[4861]: I0219 14:44:31.607238 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tc2h2" Feb 19 14:44:31 crc kubenswrapper[4861]: I0219 14:44:31.662050 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tc2h2"] Feb 19 14:44:31 crc kubenswrapper[4861]: I0219 14:44:31.676416 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tc2h2"] Feb 19 14:44:31 crc kubenswrapper[4861]: I0219 14:44:31.991956 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64e8d05a-9bcf-43f4-9fac-7b909da2c431" path="/var/lib/kubelet/pods/64e8d05a-9bcf-43f4-9fac-7b909da2c431/volumes" Feb 19 14:44:50 crc kubenswrapper[4861]: I0219 14:44:50.604749 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-242fk"] Feb 19 14:44:50 crc kubenswrapper[4861]: E0219 14:44:50.605756 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64e8d05a-9bcf-43f4-9fac-7b909da2c431" containerName="registry-server" Feb 19 14:44:50 crc kubenswrapper[4861]: I0219 14:44:50.605776 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="64e8d05a-9bcf-43f4-9fac-7b909da2c431" containerName="registry-server" Feb 19 14:44:50 crc kubenswrapper[4861]: E0219 14:44:50.605788 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64e8d05a-9bcf-43f4-9fac-7b909da2c431" containerName="extract-content" Feb 19 14:44:50 crc kubenswrapper[4861]: I0219 14:44:50.605795 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="64e8d05a-9bcf-43f4-9fac-7b909da2c431" containerName="extract-content" Feb 19 14:44:50 crc kubenswrapper[4861]: E0219 14:44:50.605822 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64e8d05a-9bcf-43f4-9fac-7b909da2c431" containerName="extract-utilities" Feb 19 14:44:50 crc kubenswrapper[4861]: I0219 14:44:50.605830 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="64e8d05a-9bcf-43f4-9fac-7b909da2c431" 
containerName="extract-utilities" Feb 19 14:44:50 crc kubenswrapper[4861]: E0219 14:44:50.605845 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40b382c6-8de6-4f4c-8bb2-6f5cb54aee96" containerName="proxy-httpd" Feb 19 14:44:50 crc kubenswrapper[4861]: I0219 14:44:50.605852 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="40b382c6-8de6-4f4c-8bb2-6f5cb54aee96" containerName="proxy-httpd" Feb 19 14:44:50 crc kubenswrapper[4861]: E0219 14:44:50.605866 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40b382c6-8de6-4f4c-8bb2-6f5cb54aee96" containerName="proxy-server" Feb 19 14:44:50 crc kubenswrapper[4861]: I0219 14:44:50.605873 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="40b382c6-8de6-4f4c-8bb2-6f5cb54aee96" containerName="proxy-server" Feb 19 14:44:50 crc kubenswrapper[4861]: I0219 14:44:50.606049 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="64e8d05a-9bcf-43f4-9fac-7b909da2c431" containerName="registry-server" Feb 19 14:44:50 crc kubenswrapper[4861]: I0219 14:44:50.606073 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="40b382c6-8de6-4f4c-8bb2-6f5cb54aee96" containerName="proxy-server" Feb 19 14:44:50 crc kubenswrapper[4861]: I0219 14:44:50.606082 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="40b382c6-8de6-4f4c-8bb2-6f5cb54aee96" containerName="proxy-httpd" Feb 19 14:44:50 crc kubenswrapper[4861]: I0219 14:44:50.606778 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-242fk" Feb 19 14:44:50 crc kubenswrapper[4861]: I0219 14:44:50.621096 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-242fk"] Feb 19 14:44:50 crc kubenswrapper[4861]: I0219 14:44:50.657619 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5727b8cb-677d-413a-84f4-370e89e58665-operator-scripts\") pod \"cinder-db-create-242fk\" (UID: \"5727b8cb-677d-413a-84f4-370e89e58665\") " pod="openstack/cinder-db-create-242fk" Feb 19 14:44:50 crc kubenswrapper[4861]: I0219 14:44:50.657705 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfvp6\" (UniqueName: \"kubernetes.io/projected/5727b8cb-677d-413a-84f4-370e89e58665-kube-api-access-mfvp6\") pod \"cinder-db-create-242fk\" (UID: \"5727b8cb-677d-413a-84f4-370e89e58665\") " pod="openstack/cinder-db-create-242fk" Feb 19 14:44:50 crc kubenswrapper[4861]: I0219 14:44:50.728011 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-23f0-account-create-update-46kcp"] Feb 19 14:44:50 crc kubenswrapper[4861]: I0219 14:44:50.728990 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-23f0-account-create-update-46kcp"
Feb 19 14:44:50 crc kubenswrapper[4861]: I0219 14:44:50.731110 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Feb 19 14:44:50 crc kubenswrapper[4861]: I0219 14:44:50.744841 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-23f0-account-create-update-46kcp"]
Feb 19 14:44:50 crc kubenswrapper[4861]: I0219 14:44:50.759662 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5727b8cb-677d-413a-84f4-370e89e58665-operator-scripts\") pod \"cinder-db-create-242fk\" (UID: \"5727b8cb-677d-413a-84f4-370e89e58665\") " pod="openstack/cinder-db-create-242fk"
Feb 19 14:44:50 crc kubenswrapper[4861]: I0219 14:44:50.759716 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfvp6\" (UniqueName: \"kubernetes.io/projected/5727b8cb-677d-413a-84f4-370e89e58665-kube-api-access-mfvp6\") pod \"cinder-db-create-242fk\" (UID: \"5727b8cb-677d-413a-84f4-370e89e58665\") " pod="openstack/cinder-db-create-242fk"
Feb 19 14:44:50 crc kubenswrapper[4861]: I0219 14:44:50.760390 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5727b8cb-677d-413a-84f4-370e89e58665-operator-scripts\") pod \"cinder-db-create-242fk\" (UID: \"5727b8cb-677d-413a-84f4-370e89e58665\") " pod="openstack/cinder-db-create-242fk"
Feb 19 14:44:50 crc kubenswrapper[4861]: I0219 14:44:50.787090 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfvp6\" (UniqueName: \"kubernetes.io/projected/5727b8cb-677d-413a-84f4-370e89e58665-kube-api-access-mfvp6\") pod \"cinder-db-create-242fk\" (UID: \"5727b8cb-677d-413a-84f4-370e89e58665\") " pod="openstack/cinder-db-create-242fk"
Feb 19 14:44:50 crc kubenswrapper[4861]: I0219 14:44:50.861583 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/756d209a-0de4-4605-a66d-d772d75bcee8-operator-scripts\") pod \"cinder-23f0-account-create-update-46kcp\" (UID: \"756d209a-0de4-4605-a66d-d772d75bcee8\") " pod="openstack/cinder-23f0-account-create-update-46kcp"
Feb 19 14:44:50 crc kubenswrapper[4861]: I0219 14:44:50.861649 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt6m4\" (UniqueName: \"kubernetes.io/projected/756d209a-0de4-4605-a66d-d772d75bcee8-kube-api-access-nt6m4\") pod \"cinder-23f0-account-create-update-46kcp\" (UID: \"756d209a-0de4-4605-a66d-d772d75bcee8\") " pod="openstack/cinder-23f0-account-create-update-46kcp"
Feb 19 14:44:50 crc kubenswrapper[4861]: I0219 14:44:50.923988 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-242fk"
Feb 19 14:44:50 crc kubenswrapper[4861]: I0219 14:44:50.963142 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/756d209a-0de4-4605-a66d-d772d75bcee8-operator-scripts\") pod \"cinder-23f0-account-create-update-46kcp\" (UID: \"756d209a-0de4-4605-a66d-d772d75bcee8\") " pod="openstack/cinder-23f0-account-create-update-46kcp"
Feb 19 14:44:50 crc kubenswrapper[4861]: I0219 14:44:50.963211 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt6m4\" (UniqueName: \"kubernetes.io/projected/756d209a-0de4-4605-a66d-d772d75bcee8-kube-api-access-nt6m4\") pod \"cinder-23f0-account-create-update-46kcp\" (UID: \"756d209a-0de4-4605-a66d-d772d75bcee8\") " pod="openstack/cinder-23f0-account-create-update-46kcp"
Feb 19 14:44:50 crc kubenswrapper[4861]: I0219 14:44:50.963927 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/756d209a-0de4-4605-a66d-d772d75bcee8-operator-scripts\") pod \"cinder-23f0-account-create-update-46kcp\" (UID: \"756d209a-0de4-4605-a66d-d772d75bcee8\") " pod="openstack/cinder-23f0-account-create-update-46kcp"
Feb 19 14:44:50 crc kubenswrapper[4861]: I0219 14:44:50.986294 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt6m4\" (UniqueName: \"kubernetes.io/projected/756d209a-0de4-4605-a66d-d772d75bcee8-kube-api-access-nt6m4\") pod \"cinder-23f0-account-create-update-46kcp\" (UID: \"756d209a-0de4-4605-a66d-d772d75bcee8\") " pod="openstack/cinder-23f0-account-create-update-46kcp"
Feb 19 14:44:51 crc kubenswrapper[4861]: I0219 14:44:51.041389 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-23f0-account-create-update-46kcp"
Feb 19 14:44:51 crc kubenswrapper[4861]: I0219 14:44:51.442387 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-242fk"]
Feb 19 14:44:51 crc kubenswrapper[4861]: I0219 14:44:51.556352 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-23f0-account-create-update-46kcp"]
Feb 19 14:44:51 crc kubenswrapper[4861]: W0219 14:44:51.564298 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod756d209a_0de4_4605_a66d_d772d75bcee8.slice/crio-16a158b552a1d683b222aaa4fc6249e602d6ae6c211e35eb9dd87222fe455061 WatchSource:0}: Error finding container 16a158b552a1d683b222aaa4fc6249e602d6ae6c211e35eb9dd87222fe455061: Status 404 returned error can't find the container with id 16a158b552a1d683b222aaa4fc6249e602d6ae6c211e35eb9dd87222fe455061
Feb 19 14:44:51 crc kubenswrapper[4861]: I0219 14:44:51.844997 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-23f0-account-create-update-46kcp" event={"ID":"756d209a-0de4-4605-a66d-d772d75bcee8","Type":"ContainerStarted","Data":"9c6bdca29d06f28e535a56722429724b038d7b7a0110e36d8c533ed82314345e"}
Feb 19 14:44:51 crc kubenswrapper[4861]: I0219 14:44:51.845049 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-23f0-account-create-update-46kcp" event={"ID":"756d209a-0de4-4605-a66d-d772d75bcee8","Type":"ContainerStarted","Data":"16a158b552a1d683b222aaa4fc6249e602d6ae6c211e35eb9dd87222fe455061"}
Feb 19 14:44:51 crc kubenswrapper[4861]: I0219 14:44:51.847464 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-242fk" event={"ID":"5727b8cb-677d-413a-84f4-370e89e58665","Type":"ContainerStarted","Data":"7420bd8a4b2b558c95f92511194f5301c6efc3ee92ad89c472eaccd9d4c6cf96"}
Feb 19 14:44:51 crc kubenswrapper[4861]: I0219 14:44:51.847514 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-242fk" event={"ID":"5727b8cb-677d-413a-84f4-370e89e58665","Type":"ContainerStarted","Data":"117b1ee1db9fad1a7089c04652c6b2228afdb16170c477a523580e59de1942db"}
Feb 19 14:44:51 crc kubenswrapper[4861]: I0219 14:44:51.869374 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-23f0-account-create-update-46kcp" podStartSLOduration=1.869354701 podStartE2EDuration="1.869354701s" podCreationTimestamp="2026-02-19 14:44:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:44:51.865885358 +0000 UTC m=+5706.526988586" watchObservedRunningTime="2026-02-19 14:44:51.869354701 +0000 UTC m=+5706.530457949"
Feb 19 14:44:51 crc kubenswrapper[4861]: I0219 14:44:51.879557 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-242fk" podStartSLOduration=1.879530394 podStartE2EDuration="1.879530394s" podCreationTimestamp="2026-02-19 14:44:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:44:51.878480836 +0000 UTC m=+5706.539584074" watchObservedRunningTime="2026-02-19 14:44:51.879530394 +0000 UTC m=+5706.540633652"
Feb 19 14:44:52 crc kubenswrapper[4861]: I0219 14:44:52.859977 4861 generic.go:334] "Generic (PLEG): container finished" podID="5727b8cb-677d-413a-84f4-370e89e58665" containerID="7420bd8a4b2b558c95f92511194f5301c6efc3ee92ad89c472eaccd9d4c6cf96" exitCode=0
Feb 19 14:44:52 crc kubenswrapper[4861]: I0219 14:44:52.860048 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-242fk" event={"ID":"5727b8cb-677d-413a-84f4-370e89e58665","Type":"ContainerDied","Data":"7420bd8a4b2b558c95f92511194f5301c6efc3ee92ad89c472eaccd9d4c6cf96"}
Feb 19 14:44:52 crc kubenswrapper[4861]: I0219 14:44:52.862878 4861 generic.go:334] "Generic (PLEG): container finished" podID="756d209a-0de4-4605-a66d-d772d75bcee8" containerID="9c6bdca29d06f28e535a56722429724b038d7b7a0110e36d8c533ed82314345e" exitCode=0
Feb 19 14:44:52 crc kubenswrapper[4861]: I0219 14:44:52.862942 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-23f0-account-create-update-46kcp" event={"ID":"756d209a-0de4-4605-a66d-d772d75bcee8","Type":"ContainerDied","Data":"9c6bdca29d06f28e535a56722429724b038d7b7a0110e36d8c533ed82314345e"}
Feb 19 14:44:54 crc kubenswrapper[4861]: I0219 14:44:54.269395 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-242fk"
Feb 19 14:44:54 crc kubenswrapper[4861]: I0219 14:44:54.270111 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-23f0-account-create-update-46kcp"
Feb 19 14:44:54 crc kubenswrapper[4861]: I0219 14:44:54.329092 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/756d209a-0de4-4605-a66d-d772d75bcee8-operator-scripts\") pod \"756d209a-0de4-4605-a66d-d772d75bcee8\" (UID: \"756d209a-0de4-4605-a66d-d772d75bcee8\") "
Feb 19 14:44:54 crc kubenswrapper[4861]: I0219 14:44:54.329191 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfvp6\" (UniqueName: \"kubernetes.io/projected/5727b8cb-677d-413a-84f4-370e89e58665-kube-api-access-mfvp6\") pod \"5727b8cb-677d-413a-84f4-370e89e58665\" (UID: \"5727b8cb-677d-413a-84f4-370e89e58665\") "
Feb 19 14:44:54 crc kubenswrapper[4861]: I0219 14:44:54.329280 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5727b8cb-677d-413a-84f4-370e89e58665-operator-scripts\") pod \"5727b8cb-677d-413a-84f4-370e89e58665\" (UID: \"5727b8cb-677d-413a-84f4-370e89e58665\") "
Feb 19 14:44:54 crc kubenswrapper[4861]: I0219 14:44:54.329319 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt6m4\" (UniqueName: \"kubernetes.io/projected/756d209a-0de4-4605-a66d-d772d75bcee8-kube-api-access-nt6m4\") pod \"756d209a-0de4-4605-a66d-d772d75bcee8\" (UID: \"756d209a-0de4-4605-a66d-d772d75bcee8\") "
Feb 19 14:44:54 crc kubenswrapper[4861]: I0219 14:44:54.330028 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/756d209a-0de4-4605-a66d-d772d75bcee8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "756d209a-0de4-4605-a66d-d772d75bcee8" (UID: "756d209a-0de4-4605-a66d-d772d75bcee8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 14:44:54 crc kubenswrapper[4861]: I0219 14:44:54.330204 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5727b8cb-677d-413a-84f4-370e89e58665-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5727b8cb-677d-413a-84f4-370e89e58665" (UID: "5727b8cb-677d-413a-84f4-370e89e58665"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 14:44:54 crc kubenswrapper[4861]: I0219 14:44:54.337643 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5727b8cb-677d-413a-84f4-370e89e58665-kube-api-access-mfvp6" (OuterVolumeSpecName: "kube-api-access-mfvp6") pod "5727b8cb-677d-413a-84f4-370e89e58665" (UID: "5727b8cb-677d-413a-84f4-370e89e58665"). InnerVolumeSpecName "kube-api-access-mfvp6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 14:44:54 crc kubenswrapper[4861]: I0219 14:44:54.346121 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/756d209a-0de4-4605-a66d-d772d75bcee8-kube-api-access-nt6m4" (OuterVolumeSpecName: "kube-api-access-nt6m4") pod "756d209a-0de4-4605-a66d-d772d75bcee8" (UID: "756d209a-0de4-4605-a66d-d772d75bcee8"). InnerVolumeSpecName "kube-api-access-nt6m4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 14:44:54 crc kubenswrapper[4861]: I0219 14:44:54.431818 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nt6m4\" (UniqueName: \"kubernetes.io/projected/756d209a-0de4-4605-a66d-d772d75bcee8-kube-api-access-nt6m4\") on node \"crc\" DevicePath \"\""
Feb 19 14:44:54 crc kubenswrapper[4861]: I0219 14:44:54.432397 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/756d209a-0de4-4605-a66d-d772d75bcee8-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 14:44:54 crc kubenswrapper[4861]: I0219 14:44:54.432415 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfvp6\" (UniqueName: \"kubernetes.io/projected/5727b8cb-677d-413a-84f4-370e89e58665-kube-api-access-mfvp6\") on node \"crc\" DevicePath \"\""
Feb 19 14:44:54 crc kubenswrapper[4861]: I0219 14:44:54.432449 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5727b8cb-677d-413a-84f4-370e89e58665-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 14:44:54 crc kubenswrapper[4861]: I0219 14:44:54.905020 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-242fk" event={"ID":"5727b8cb-677d-413a-84f4-370e89e58665","Type":"ContainerDied","Data":"117b1ee1db9fad1a7089c04652c6b2228afdb16170c477a523580e59de1942db"}
Feb 19 14:44:54 crc kubenswrapper[4861]: I0219 14:44:54.905593 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="117b1ee1db9fad1a7089c04652c6b2228afdb16170c477a523580e59de1942db"
Feb 19 14:44:54 crc kubenswrapper[4861]: I0219 14:44:54.905772 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-242fk"
Feb 19 14:44:54 crc kubenswrapper[4861]: I0219 14:44:54.910060 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-23f0-account-create-update-46kcp" event={"ID":"756d209a-0de4-4605-a66d-d772d75bcee8","Type":"ContainerDied","Data":"16a158b552a1d683b222aaa4fc6249e602d6ae6c211e35eb9dd87222fe455061"}
Feb 19 14:44:54 crc kubenswrapper[4861]: I0219 14:44:54.910124 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16a158b552a1d683b222aaa4fc6249e602d6ae6c211e35eb9dd87222fe455061"
Feb 19 14:44:54 crc kubenswrapper[4861]: I0219 14:44:54.910199 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-23f0-account-create-update-46kcp"
Feb 19 14:44:56 crc kubenswrapper[4861]: I0219 14:44:56.014391 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-vckzg"]
Feb 19 14:44:56 crc kubenswrapper[4861]: E0219 14:44:56.014896 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="756d209a-0de4-4605-a66d-d772d75bcee8" containerName="mariadb-account-create-update"
Feb 19 14:44:56 crc kubenswrapper[4861]: I0219 14:44:56.014915 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="756d209a-0de4-4605-a66d-d772d75bcee8" containerName="mariadb-account-create-update"
Feb 19 14:44:56 crc kubenswrapper[4861]: E0219 14:44:56.014937 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5727b8cb-677d-413a-84f4-370e89e58665" containerName="mariadb-database-create"
Feb 19 14:44:56 crc kubenswrapper[4861]: I0219 14:44:56.014946 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5727b8cb-677d-413a-84f4-370e89e58665" containerName="mariadb-database-create"
Feb 19 14:44:56 crc kubenswrapper[4861]: I0219 14:44:56.015163 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="756d209a-0de4-4605-a66d-d772d75bcee8" containerName="mariadb-account-create-update"
Feb 19 14:44:56 crc kubenswrapper[4861]: I0219 14:44:56.015184 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="5727b8cb-677d-413a-84f4-370e89e58665" containerName="mariadb-database-create"
Feb 19 14:44:56 crc kubenswrapper[4861]: I0219 14:44:56.016019 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vckzg"
Feb 19 14:44:56 crc kubenswrapper[4861]: I0219 14:44:56.017291 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-zbl8z"
Feb 19 14:44:56 crc kubenswrapper[4861]: I0219 14:44:56.018817 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Feb 19 14:44:56 crc kubenswrapper[4861]: I0219 14:44:56.019268 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Feb 19 14:44:56 crc kubenswrapper[4861]: I0219 14:44:56.034148 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-vckzg"]
Feb 19 14:44:56 crc kubenswrapper[4861]: I0219 14:44:56.163285 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92xfz\" (UniqueName: \"kubernetes.io/projected/2117b6a7-23dd-4679-8860-ff0545229385-kube-api-access-92xfz\") pod \"cinder-db-sync-vckzg\" (UID: \"2117b6a7-23dd-4679-8860-ff0545229385\") " pod="openstack/cinder-db-sync-vckzg"
Feb 19 14:44:56 crc kubenswrapper[4861]: I0219 14:44:56.163551 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2117b6a7-23dd-4679-8860-ff0545229385-scripts\") pod \"cinder-db-sync-vckzg\" (UID: \"2117b6a7-23dd-4679-8860-ff0545229385\") " pod="openstack/cinder-db-sync-vckzg"
Feb 19 14:44:56 crc kubenswrapper[4861]: I0219 14:44:56.163619 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2117b6a7-23dd-4679-8860-ff0545229385-db-sync-config-data\") pod \"cinder-db-sync-vckzg\" (UID: \"2117b6a7-23dd-4679-8860-ff0545229385\") " pod="openstack/cinder-db-sync-vckzg"
Feb 19 14:44:56 crc kubenswrapper[4861]: I0219 14:44:56.163641 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2117b6a7-23dd-4679-8860-ff0545229385-etc-machine-id\") pod \"cinder-db-sync-vckzg\" (UID: \"2117b6a7-23dd-4679-8860-ff0545229385\") " pod="openstack/cinder-db-sync-vckzg"
Feb 19 14:44:56 crc kubenswrapper[4861]: I0219 14:44:56.163654 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2117b6a7-23dd-4679-8860-ff0545229385-config-data\") pod \"cinder-db-sync-vckzg\" (UID: \"2117b6a7-23dd-4679-8860-ff0545229385\") " pod="openstack/cinder-db-sync-vckzg"
Feb 19 14:44:56 crc kubenswrapper[4861]: I0219 14:44:56.163711 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2117b6a7-23dd-4679-8860-ff0545229385-combined-ca-bundle\") pod \"cinder-db-sync-vckzg\" (UID: \"2117b6a7-23dd-4679-8860-ff0545229385\") " pod="openstack/cinder-db-sync-vckzg"
Feb 19 14:44:56 crc kubenswrapper[4861]: I0219 14:44:56.264619 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2117b6a7-23dd-4679-8860-ff0545229385-combined-ca-bundle\") pod \"cinder-db-sync-vckzg\" (UID: \"2117b6a7-23dd-4679-8860-ff0545229385\") " pod="openstack/cinder-db-sync-vckzg"
Feb 19 14:44:56 crc kubenswrapper[4861]: I0219 14:44:56.264676 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92xfz\" (UniqueName: \"kubernetes.io/projected/2117b6a7-23dd-4679-8860-ff0545229385-kube-api-access-92xfz\") pod \"cinder-db-sync-vckzg\" (UID: \"2117b6a7-23dd-4679-8860-ff0545229385\") " pod="openstack/cinder-db-sync-vckzg"
Feb 19 14:44:56 crc kubenswrapper[4861]: I0219 14:44:56.264715 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2117b6a7-23dd-4679-8860-ff0545229385-scripts\") pod \"cinder-db-sync-vckzg\" (UID: \"2117b6a7-23dd-4679-8860-ff0545229385\") " pod="openstack/cinder-db-sync-vckzg"
Feb 19 14:44:56 crc kubenswrapper[4861]: I0219 14:44:56.264780 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2117b6a7-23dd-4679-8860-ff0545229385-db-sync-config-data\") pod \"cinder-db-sync-vckzg\" (UID: \"2117b6a7-23dd-4679-8860-ff0545229385\") " pod="openstack/cinder-db-sync-vckzg"
Feb 19 14:44:56 crc kubenswrapper[4861]: I0219 14:44:56.264806 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2117b6a7-23dd-4679-8860-ff0545229385-config-data\") pod \"cinder-db-sync-vckzg\" (UID: \"2117b6a7-23dd-4679-8860-ff0545229385\") " pod="openstack/cinder-db-sync-vckzg"
Feb 19 14:44:56 crc kubenswrapper[4861]: I0219 14:44:56.264823 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2117b6a7-23dd-4679-8860-ff0545229385-etc-machine-id\") pod \"cinder-db-sync-vckzg\" (UID: \"2117b6a7-23dd-4679-8860-ff0545229385\") " pod="openstack/cinder-db-sync-vckzg"
Feb 19 14:44:56 crc kubenswrapper[4861]: I0219 14:44:56.264894 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2117b6a7-23dd-4679-8860-ff0545229385-etc-machine-id\") pod \"cinder-db-sync-vckzg\" (UID: \"2117b6a7-23dd-4679-8860-ff0545229385\") " pod="openstack/cinder-db-sync-vckzg"
Feb 19 14:44:56 crc kubenswrapper[4861]: I0219 14:44:56.271550 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2117b6a7-23dd-4679-8860-ff0545229385-config-data\") pod \"cinder-db-sync-vckzg\" (UID: \"2117b6a7-23dd-4679-8860-ff0545229385\") " pod="openstack/cinder-db-sync-vckzg"
Feb 19 14:44:56 crc kubenswrapper[4861]: I0219 14:44:56.271984 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2117b6a7-23dd-4679-8860-ff0545229385-combined-ca-bundle\") pod \"cinder-db-sync-vckzg\" (UID: \"2117b6a7-23dd-4679-8860-ff0545229385\") " pod="openstack/cinder-db-sync-vckzg"
Feb 19 14:44:56 crc kubenswrapper[4861]: I0219 14:44:56.272018 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2117b6a7-23dd-4679-8860-ff0545229385-scripts\") pod \"cinder-db-sync-vckzg\" (UID: \"2117b6a7-23dd-4679-8860-ff0545229385\") " pod="openstack/cinder-db-sync-vckzg"
Feb 19 14:44:56 crc kubenswrapper[4861]: I0219 14:44:56.283413 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2117b6a7-23dd-4679-8860-ff0545229385-db-sync-config-data\") pod \"cinder-db-sync-vckzg\" (UID: \"2117b6a7-23dd-4679-8860-ff0545229385\") " pod="openstack/cinder-db-sync-vckzg"
Feb 19 14:44:56 crc kubenswrapper[4861]: I0219 14:44:56.288012 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92xfz\" (UniqueName: \"kubernetes.io/projected/2117b6a7-23dd-4679-8860-ff0545229385-kube-api-access-92xfz\") pod \"cinder-db-sync-vckzg\" (UID: \"2117b6a7-23dd-4679-8860-ff0545229385\") " pod="openstack/cinder-db-sync-vckzg"
Feb 19 14:44:56 crc kubenswrapper[4861]: I0219 14:44:56.336480 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vckzg"
Feb 19 14:44:56 crc kubenswrapper[4861]: I0219 14:44:56.615996 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-vckzg"]
Feb 19 14:44:56 crc kubenswrapper[4861]: W0219 14:44:56.623794 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2117b6a7_23dd_4679_8860_ff0545229385.slice/crio-a2ab18f008333f62f92183872adab52cf74377911adc8cc54308207f46d43225 WatchSource:0}: Error finding container a2ab18f008333f62f92183872adab52cf74377911adc8cc54308207f46d43225: Status 404 returned error can't find the container with id a2ab18f008333f62f92183872adab52cf74377911adc8cc54308207f46d43225
Feb 19 14:44:56 crc kubenswrapper[4861]: I0219 14:44:56.932173 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vckzg" event={"ID":"2117b6a7-23dd-4679-8860-ff0545229385","Type":"ContainerStarted","Data":"a2ab18f008333f62f92183872adab52cf74377911adc8cc54308207f46d43225"}
Feb 19 14:44:57 crc kubenswrapper[4861]: I0219 14:44:57.944412 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vckzg" event={"ID":"2117b6a7-23dd-4679-8860-ff0545229385","Type":"ContainerStarted","Data":"312efb867dfa595de92c5bb6ec280afe469def8361e0ce5a48029d7e5a432278"}
Feb 19 14:44:57 crc kubenswrapper[4861]: I0219 14:44:57.967930 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-vckzg" podStartSLOduration=2.967911804 podStartE2EDuration="2.967911804s" podCreationTimestamp="2026-02-19 14:44:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:44:57.961722968 +0000 UTC m=+5712.622826196" watchObservedRunningTime="2026-02-19 14:44:57.967911804 +0000 UTC m=+5712.629015032"
Feb 19 14:44:59 crc kubenswrapper[4861]: I0219 14:44:59.968134 4861 generic.go:334] "Generic (PLEG): container finished" podID="2117b6a7-23dd-4679-8860-ff0545229385" containerID="312efb867dfa595de92c5bb6ec280afe469def8361e0ce5a48029d7e5a432278" exitCode=0
Feb 19 14:44:59 crc kubenswrapper[4861]: I0219 14:44:59.968290 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vckzg" event={"ID":"2117b6a7-23dd-4679-8860-ff0545229385","Type":"ContainerDied","Data":"312efb867dfa595de92c5bb6ec280afe469def8361e0ce5a48029d7e5a432278"}
Feb 19 14:45:00 crc kubenswrapper[4861]: I0219 14:45:00.143559 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525205-g8jq9"]
Feb 19 14:45:00 crc kubenswrapper[4861]: I0219 14:45:00.144984 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525205-g8jq9"
Feb 19 14:45:00 crc kubenswrapper[4861]: I0219 14:45:00.147692 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 19 14:45:00 crc kubenswrapper[4861]: I0219 14:45:00.148006 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 19 14:45:00 crc kubenswrapper[4861]: I0219 14:45:00.156558 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525205-g8jq9"]
Feb 19 14:45:00 crc kubenswrapper[4861]: I0219 14:45:00.251110 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1bd39f2-e28e-475c-bc50-c430217d93a6-config-volume\") pod \"collect-profiles-29525205-g8jq9\" (UID: \"a1bd39f2-e28e-475c-bc50-c430217d93a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525205-g8jq9"
Feb 19 14:45:00 crc kubenswrapper[4861]: I0219 14:45:00.251185 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwhlt\" (UniqueName: \"kubernetes.io/projected/a1bd39f2-e28e-475c-bc50-c430217d93a6-kube-api-access-fwhlt\") pod \"collect-profiles-29525205-g8jq9\" (UID: \"a1bd39f2-e28e-475c-bc50-c430217d93a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525205-g8jq9"
Feb 19 14:45:00 crc kubenswrapper[4861]: I0219 14:45:00.251302 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a1bd39f2-e28e-475c-bc50-c430217d93a6-secret-volume\") pod \"collect-profiles-29525205-g8jq9\" (UID: \"a1bd39f2-e28e-475c-bc50-c430217d93a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525205-g8jq9"
Feb 19 14:45:00 crc kubenswrapper[4861]: I0219 14:45:00.352965 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a1bd39f2-e28e-475c-bc50-c430217d93a6-secret-volume\") pod \"collect-profiles-29525205-g8jq9\" (UID: \"a1bd39f2-e28e-475c-bc50-c430217d93a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525205-g8jq9"
Feb 19 14:45:00 crc kubenswrapper[4861]: I0219 14:45:00.353070 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1bd39f2-e28e-475c-bc50-c430217d93a6-config-volume\") pod \"collect-profiles-29525205-g8jq9\" (UID: \"a1bd39f2-e28e-475c-bc50-c430217d93a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525205-g8jq9"
Feb 19 14:45:00 crc kubenswrapper[4861]: I0219 14:45:00.353134 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwhlt\" (UniqueName: \"kubernetes.io/projected/a1bd39f2-e28e-475c-bc50-c430217d93a6-kube-api-access-fwhlt\") pod \"collect-profiles-29525205-g8jq9\" (UID: \"a1bd39f2-e28e-475c-bc50-c430217d93a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525205-g8jq9"
Feb 19 14:45:00 crc kubenswrapper[4861]: I0219 14:45:00.355026 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1bd39f2-e28e-475c-bc50-c430217d93a6-config-volume\") pod \"collect-profiles-29525205-g8jq9\" (UID: \"a1bd39f2-e28e-475c-bc50-c430217d93a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525205-g8jq9"
Feb 19 14:45:00 crc kubenswrapper[4861]: I0219 14:45:00.360361 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a1bd39f2-e28e-475c-bc50-c430217d93a6-secret-volume\") pod \"collect-profiles-29525205-g8jq9\" (UID: \"a1bd39f2-e28e-475c-bc50-c430217d93a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525205-g8jq9"
Feb 19 14:45:00 crc kubenswrapper[4861]: I0219 14:45:00.375111 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwhlt\" (UniqueName: \"kubernetes.io/projected/a1bd39f2-e28e-475c-bc50-c430217d93a6-kube-api-access-fwhlt\") pod \"collect-profiles-29525205-g8jq9\" (UID: \"a1bd39f2-e28e-475c-bc50-c430217d93a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525205-g8jq9"
Feb 19 14:45:00 crc kubenswrapper[4861]: I0219 14:45:00.472831 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525205-g8jq9"
Feb 19 14:45:01 crc kubenswrapper[4861]: I0219 14:45:01.143841 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525205-g8jq9"]
Feb 19 14:45:01 crc kubenswrapper[4861]: W0219 14:45:01.150319 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1bd39f2_e28e_475c_bc50_c430217d93a6.slice/crio-b2d02486134d08e55a5584d09b3ace496d3ee7271404861e6126e2c135aae73c WatchSource:0}: Error finding container b2d02486134d08e55a5584d09b3ace496d3ee7271404861e6126e2c135aae73c: Status 404 returned error can't find the container with id b2d02486134d08e55a5584d09b3ace496d3ee7271404861e6126e2c135aae73c
Feb 19 14:45:01 crc kubenswrapper[4861]: I0219 14:45:01.535992 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vckzg"
Feb 19 14:45:01 crc kubenswrapper[4861]: I0219 14:45:01.722235 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2117b6a7-23dd-4679-8860-ff0545229385-config-data\") pod \"2117b6a7-23dd-4679-8860-ff0545229385\" (UID: \"2117b6a7-23dd-4679-8860-ff0545229385\") "
Feb 19 14:45:01 crc kubenswrapper[4861]: I0219 14:45:01.722334 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92xfz\" (UniqueName: \"kubernetes.io/projected/2117b6a7-23dd-4679-8860-ff0545229385-kube-api-access-92xfz\") pod \"2117b6a7-23dd-4679-8860-ff0545229385\" (UID: \"2117b6a7-23dd-4679-8860-ff0545229385\") "
Feb 19 14:45:01 crc kubenswrapper[4861]: I0219 14:45:01.722364 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2117b6a7-23dd-4679-8860-ff0545229385-combined-ca-bundle\") pod \"2117b6a7-23dd-4679-8860-ff0545229385\" (UID: \"2117b6a7-23dd-4679-8860-ff0545229385\") "
Feb 19 14:45:01 crc kubenswrapper[4861]: I0219 14:45:01.722490 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2117b6a7-23dd-4679-8860-ff0545229385-etc-machine-id\") pod \"2117b6a7-23dd-4679-8860-ff0545229385\" (UID: \"2117b6a7-23dd-4679-8860-ff0545229385\") "
Feb 19 14:45:01 crc kubenswrapper[4861]: I0219 14:45:01.722563 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2117b6a7-23dd-4679-8860-ff0545229385-scripts\") pod \"2117b6a7-23dd-4679-8860-ff0545229385\" (UID: \"2117b6a7-23dd-4679-8860-ff0545229385\") "
Feb 19 14:45:01 crc kubenswrapper[4861]: I0219 14:45:01.722659 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2117b6a7-23dd-4679-8860-ff0545229385-db-sync-config-data\") pod \"2117b6a7-23dd-4679-8860-ff0545229385\" (UID: \"2117b6a7-23dd-4679-8860-ff0545229385\") "
Feb 19 14:45:01 crc kubenswrapper[4861]: I0219 14:45:01.722888 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2117b6a7-23dd-4679-8860-ff0545229385-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2117b6a7-23dd-4679-8860-ff0545229385" (UID: "2117b6a7-23dd-4679-8860-ff0545229385"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 14:45:01 crc kubenswrapper[4861]: I0219 14:45:01.723579 4861 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2117b6a7-23dd-4679-8860-ff0545229385-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 19 14:45:01 crc kubenswrapper[4861]: I0219 14:45:01.727585 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2117b6a7-23dd-4679-8860-ff0545229385-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2117b6a7-23dd-4679-8860-ff0545229385" (UID: "2117b6a7-23dd-4679-8860-ff0545229385"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 14:45:01 crc kubenswrapper[4861]: I0219 14:45:01.727658 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2117b6a7-23dd-4679-8860-ff0545229385-scripts" (OuterVolumeSpecName: "scripts") pod "2117b6a7-23dd-4679-8860-ff0545229385" (UID: "2117b6a7-23dd-4679-8860-ff0545229385"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 14:45:01 crc kubenswrapper[4861]: I0219 14:45:01.728326 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2117b6a7-23dd-4679-8860-ff0545229385-kube-api-access-92xfz" (OuterVolumeSpecName: "kube-api-access-92xfz") pod "2117b6a7-23dd-4679-8860-ff0545229385" (UID: "2117b6a7-23dd-4679-8860-ff0545229385"). InnerVolumeSpecName "kube-api-access-92xfz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 14:45:01 crc kubenswrapper[4861]: I0219 14:45:01.749223 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2117b6a7-23dd-4679-8860-ff0545229385-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2117b6a7-23dd-4679-8860-ff0545229385" (UID: "2117b6a7-23dd-4679-8860-ff0545229385"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 14:45:01 crc kubenswrapper[4861]: I0219 14:45:01.794514 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2117b6a7-23dd-4679-8860-ff0545229385-config-data" (OuterVolumeSpecName: "config-data") pod "2117b6a7-23dd-4679-8860-ff0545229385" (UID: "2117b6a7-23dd-4679-8860-ff0545229385"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 14:45:01 crc kubenswrapper[4861]: I0219 14:45:01.827737 4861 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2117b6a7-23dd-4679-8860-ff0545229385-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 14:45:01 crc kubenswrapper[4861]: I0219 14:45:01.827795 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2117b6a7-23dd-4679-8860-ff0545229385-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 14:45:01 crc kubenswrapper[4861]: I0219 14:45:01.827815 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92xfz\" (UniqueName: \"kubernetes.io/projected/2117b6a7-23dd-4679-8860-ff0545229385-kube-api-access-92xfz\") on node \"crc\" DevicePath \"\""
Feb 19 14:45:01 crc kubenswrapper[4861]: I0219 14:45:01.827833 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2117b6a7-23dd-4679-8860-ff0545229385-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 14:45:01 crc kubenswrapper[4861]: I0219 14:45:01.827851 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2117b6a7-23dd-4679-8860-ff0545229385-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.132603 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vckzg" event={"ID":"2117b6a7-23dd-4679-8860-ff0545229385","Type":"ContainerDied","Data":"a2ab18f008333f62f92183872adab52cf74377911adc8cc54308207f46d43225"}
Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.132663 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2ab18f008333f62f92183872adab52cf74377911adc8cc54308207f46d43225"
Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.132734 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vckzg"
Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.136413 4861 generic.go:334] "Generic (PLEG): container finished" podID="a1bd39f2-e28e-475c-bc50-c430217d93a6" containerID="a1ab56604003f58281ece631e51ce10f2928728c254b8745236182358b3d1e2c" exitCode=0
Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.136475 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525205-g8jq9" event={"ID":"a1bd39f2-e28e-475c-bc50-c430217d93a6","Type":"ContainerDied","Data":"a1ab56604003f58281ece631e51ce10f2928728c254b8745236182358b3d1e2c"}
Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.136500 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525205-g8jq9" event={"ID":"a1bd39f2-e28e-475c-bc50-c430217d93a6","Type":"ContainerStarted","Data":"b2d02486134d08e55a5584d09b3ace496d3ee7271404861e6126e2c135aae73c"}
Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.349009 4861
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c6dfd666f-qdfhh"] Feb 19 14:45:02 crc kubenswrapper[4861]: E0219 14:45:02.349361 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2117b6a7-23dd-4679-8860-ff0545229385" containerName="cinder-db-sync" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.349377 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2117b6a7-23dd-4679-8860-ff0545229385" containerName="cinder-db-sync" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.349582 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2117b6a7-23dd-4679-8860-ff0545229385" containerName="cinder-db-sync" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.350510 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c6dfd666f-qdfhh" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.371934 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c6dfd666f-qdfhh"] Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.514815 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.516156 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.518370 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-zbl8z" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.518412 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.526811 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.526904 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.534274 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.542932 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqvg5\" (UniqueName: \"kubernetes.io/projected/29d3d5f0-231b-4430-856f-569740c64481-kube-api-access-sqvg5\") pod \"dnsmasq-dns-c6dfd666f-qdfhh\" (UID: \"29d3d5f0-231b-4430-856f-569740c64481\") " pod="openstack/dnsmasq-dns-c6dfd666f-qdfhh" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.546221 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29d3d5f0-231b-4430-856f-569740c64481-dns-svc\") pod \"dnsmasq-dns-c6dfd666f-qdfhh\" (UID: \"29d3d5f0-231b-4430-856f-569740c64481\") " pod="openstack/dnsmasq-dns-c6dfd666f-qdfhh" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.546596 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29d3d5f0-231b-4430-856f-569740c64481-ovsdbserver-sb\") pod \"dnsmasq-dns-c6dfd666f-qdfhh\" (UID: 
\"29d3d5f0-231b-4430-856f-569740c64481\") " pod="openstack/dnsmasq-dns-c6dfd666f-qdfhh" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.546682 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29d3d5f0-231b-4430-856f-569740c64481-config\") pod \"dnsmasq-dns-c6dfd666f-qdfhh\" (UID: \"29d3d5f0-231b-4430-856f-569740c64481\") " pod="openstack/dnsmasq-dns-c6dfd666f-qdfhh" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.546812 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29d3d5f0-231b-4430-856f-569740c64481-ovsdbserver-nb\") pod \"dnsmasq-dns-c6dfd666f-qdfhh\" (UID: \"29d3d5f0-231b-4430-856f-569740c64481\") " pod="openstack/dnsmasq-dns-c6dfd666f-qdfhh" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.648114 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49507243-f515-44b4-98bf-400a41550f93-config-data\") pod \"cinder-api-0\" (UID: \"49507243-f515-44b4-98bf-400a41550f93\") " pod="openstack/cinder-api-0" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.648501 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7g7w\" (UniqueName: \"kubernetes.io/projected/49507243-f515-44b4-98bf-400a41550f93-kube-api-access-r7g7w\") pod \"cinder-api-0\" (UID: \"49507243-f515-44b4-98bf-400a41550f93\") " pod="openstack/cinder-api-0" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.648614 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29d3d5f0-231b-4430-856f-569740c64481-ovsdbserver-sb\") pod \"dnsmasq-dns-c6dfd666f-qdfhh\" (UID: \"29d3d5f0-231b-4430-856f-569740c64481\") " 
pod="openstack/dnsmasq-dns-c6dfd666f-qdfhh" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.648737 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29d3d5f0-231b-4430-856f-569740c64481-config\") pod \"dnsmasq-dns-c6dfd666f-qdfhh\" (UID: \"29d3d5f0-231b-4430-856f-569740c64481\") " pod="openstack/dnsmasq-dns-c6dfd666f-qdfhh" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.648859 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49507243-f515-44b4-98bf-400a41550f93-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"49507243-f515-44b4-98bf-400a41550f93\") " pod="openstack/cinder-api-0" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.648946 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29d3d5f0-231b-4430-856f-569740c64481-ovsdbserver-nb\") pod \"dnsmasq-dns-c6dfd666f-qdfhh\" (UID: \"29d3d5f0-231b-4430-856f-569740c64481\") " pod="openstack/dnsmasq-dns-c6dfd666f-qdfhh" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.649071 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/49507243-f515-44b4-98bf-400a41550f93-etc-machine-id\") pod \"cinder-api-0\" (UID: \"49507243-f515-44b4-98bf-400a41550f93\") " pod="openstack/cinder-api-0" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.649187 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49507243-f515-44b4-98bf-400a41550f93-config-data-custom\") pod \"cinder-api-0\" (UID: \"49507243-f515-44b4-98bf-400a41550f93\") " pod="openstack/cinder-api-0" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 
14:45:02.649285 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqvg5\" (UniqueName: \"kubernetes.io/projected/29d3d5f0-231b-4430-856f-569740c64481-kube-api-access-sqvg5\") pod \"dnsmasq-dns-c6dfd666f-qdfhh\" (UID: \"29d3d5f0-231b-4430-856f-569740c64481\") " pod="openstack/dnsmasq-dns-c6dfd666f-qdfhh" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.649400 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49507243-f515-44b4-98bf-400a41550f93-scripts\") pod \"cinder-api-0\" (UID: \"49507243-f515-44b4-98bf-400a41550f93\") " pod="openstack/cinder-api-0" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.649529 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29d3d5f0-231b-4430-856f-569740c64481-dns-svc\") pod \"dnsmasq-dns-c6dfd666f-qdfhh\" (UID: \"29d3d5f0-231b-4430-856f-569740c64481\") " pod="openstack/dnsmasq-dns-c6dfd666f-qdfhh" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.649622 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49507243-f515-44b4-98bf-400a41550f93-logs\") pod \"cinder-api-0\" (UID: \"49507243-f515-44b4-98bf-400a41550f93\") " pod="openstack/cinder-api-0" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.649642 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29d3d5f0-231b-4430-856f-569740c64481-ovsdbserver-sb\") pod \"dnsmasq-dns-c6dfd666f-qdfhh\" (UID: \"29d3d5f0-231b-4430-856f-569740c64481\") " pod="openstack/dnsmasq-dns-c6dfd666f-qdfhh" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.649655 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/29d3d5f0-231b-4430-856f-569740c64481-config\") pod \"dnsmasq-dns-c6dfd666f-qdfhh\" (UID: \"29d3d5f0-231b-4430-856f-569740c64481\") " pod="openstack/dnsmasq-dns-c6dfd666f-qdfhh" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.649850 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29d3d5f0-231b-4430-856f-569740c64481-ovsdbserver-nb\") pod \"dnsmasq-dns-c6dfd666f-qdfhh\" (UID: \"29d3d5f0-231b-4430-856f-569740c64481\") " pod="openstack/dnsmasq-dns-c6dfd666f-qdfhh" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.650362 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29d3d5f0-231b-4430-856f-569740c64481-dns-svc\") pod \"dnsmasq-dns-c6dfd666f-qdfhh\" (UID: \"29d3d5f0-231b-4430-856f-569740c64481\") " pod="openstack/dnsmasq-dns-c6dfd666f-qdfhh" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.675362 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqvg5\" (UniqueName: \"kubernetes.io/projected/29d3d5f0-231b-4430-856f-569740c64481-kube-api-access-sqvg5\") pod \"dnsmasq-dns-c6dfd666f-qdfhh\" (UID: \"29d3d5f0-231b-4430-856f-569740c64481\") " pod="openstack/dnsmasq-dns-c6dfd666f-qdfhh" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.681823 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c6dfd666f-qdfhh" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.750987 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7g7w\" (UniqueName: \"kubernetes.io/projected/49507243-f515-44b4-98bf-400a41550f93-kube-api-access-r7g7w\") pod \"cinder-api-0\" (UID: \"49507243-f515-44b4-98bf-400a41550f93\") " pod="openstack/cinder-api-0" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.751700 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49507243-f515-44b4-98bf-400a41550f93-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"49507243-f515-44b4-98bf-400a41550f93\") " pod="openstack/cinder-api-0" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.752400 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/49507243-f515-44b4-98bf-400a41550f93-etc-machine-id\") pod \"cinder-api-0\" (UID: \"49507243-f515-44b4-98bf-400a41550f93\") " pod="openstack/cinder-api-0" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.752447 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49507243-f515-44b4-98bf-400a41550f93-config-data-custom\") pod \"cinder-api-0\" (UID: \"49507243-f515-44b4-98bf-400a41550f93\") " pod="openstack/cinder-api-0" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.752479 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49507243-f515-44b4-98bf-400a41550f93-scripts\") pod \"cinder-api-0\" (UID: \"49507243-f515-44b4-98bf-400a41550f93\") " pod="openstack/cinder-api-0" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.752495 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49507243-f515-44b4-98bf-400a41550f93-logs\") pod \"cinder-api-0\" (UID: \"49507243-f515-44b4-98bf-400a41550f93\") " pod="openstack/cinder-api-0" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.752536 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49507243-f515-44b4-98bf-400a41550f93-config-data\") pod \"cinder-api-0\" (UID: \"49507243-f515-44b4-98bf-400a41550f93\") " pod="openstack/cinder-api-0" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.754192 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/49507243-f515-44b4-98bf-400a41550f93-etc-machine-id\") pod \"cinder-api-0\" (UID: \"49507243-f515-44b4-98bf-400a41550f93\") " pod="openstack/cinder-api-0" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.754792 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49507243-f515-44b4-98bf-400a41550f93-logs\") pod \"cinder-api-0\" (UID: \"49507243-f515-44b4-98bf-400a41550f93\") " pod="openstack/cinder-api-0" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.755271 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49507243-f515-44b4-98bf-400a41550f93-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"49507243-f515-44b4-98bf-400a41550f93\") " pod="openstack/cinder-api-0" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.756831 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49507243-f515-44b4-98bf-400a41550f93-config-data\") pod \"cinder-api-0\" (UID: \"49507243-f515-44b4-98bf-400a41550f93\") " pod="openstack/cinder-api-0" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.756894 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49507243-f515-44b4-98bf-400a41550f93-config-data-custom\") pod \"cinder-api-0\" (UID: \"49507243-f515-44b4-98bf-400a41550f93\") " pod="openstack/cinder-api-0" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.758782 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49507243-f515-44b4-98bf-400a41550f93-scripts\") pod \"cinder-api-0\" (UID: \"49507243-f515-44b4-98bf-400a41550f93\") " pod="openstack/cinder-api-0" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.773049 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7g7w\" (UniqueName: \"kubernetes.io/projected/49507243-f515-44b4-98bf-400a41550f93-kube-api-access-r7g7w\") pod \"cinder-api-0\" (UID: \"49507243-f515-44b4-98bf-400a41550f93\") " pod="openstack/cinder-api-0" Feb 19 14:45:02 crc kubenswrapper[4861]: I0219 14:45:02.836673 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 14:45:03 crc kubenswrapper[4861]: I0219 14:45:03.079615 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c6dfd666f-qdfhh"] Feb 19 14:45:03 crc kubenswrapper[4861]: I0219 14:45:03.153709 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c6dfd666f-qdfhh" event={"ID":"29d3d5f0-231b-4430-856f-569740c64481","Type":"ContainerStarted","Data":"805229783d986ed375d04f13ef9113c42097c0d7a1e07789d1f20e6fe42f379b"} Feb 19 14:45:03 crc kubenswrapper[4861]: I0219 14:45:03.457565 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 14:45:03 crc kubenswrapper[4861]: I0219 14:45:03.468471 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525205-g8jq9" Feb 19 14:45:03 crc kubenswrapper[4861]: I0219 14:45:03.581480 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwhlt\" (UniqueName: \"kubernetes.io/projected/a1bd39f2-e28e-475c-bc50-c430217d93a6-kube-api-access-fwhlt\") pod \"a1bd39f2-e28e-475c-bc50-c430217d93a6\" (UID: \"a1bd39f2-e28e-475c-bc50-c430217d93a6\") " Feb 19 14:45:03 crc kubenswrapper[4861]: I0219 14:45:03.581832 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a1bd39f2-e28e-475c-bc50-c430217d93a6-secret-volume\") pod \"a1bd39f2-e28e-475c-bc50-c430217d93a6\" (UID: \"a1bd39f2-e28e-475c-bc50-c430217d93a6\") " Feb 19 14:45:03 crc kubenswrapper[4861]: I0219 14:45:03.581859 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1bd39f2-e28e-475c-bc50-c430217d93a6-config-volume\") pod \"a1bd39f2-e28e-475c-bc50-c430217d93a6\" (UID: \"a1bd39f2-e28e-475c-bc50-c430217d93a6\") " Feb 19 14:45:03 crc kubenswrapper[4861]: I0219 14:45:03.583021 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1bd39f2-e28e-475c-bc50-c430217d93a6-config-volume" (OuterVolumeSpecName: "config-volume") pod "a1bd39f2-e28e-475c-bc50-c430217d93a6" (UID: "a1bd39f2-e28e-475c-bc50-c430217d93a6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:45:03 crc kubenswrapper[4861]: I0219 14:45:03.593277 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1bd39f2-e28e-475c-bc50-c430217d93a6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a1bd39f2-e28e-475c-bc50-c430217d93a6" (UID: "a1bd39f2-e28e-475c-bc50-c430217d93a6"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:45:03 crc kubenswrapper[4861]: I0219 14:45:03.593302 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1bd39f2-e28e-475c-bc50-c430217d93a6-kube-api-access-fwhlt" (OuterVolumeSpecName: "kube-api-access-fwhlt") pod "a1bd39f2-e28e-475c-bc50-c430217d93a6" (UID: "a1bd39f2-e28e-475c-bc50-c430217d93a6"). InnerVolumeSpecName "kube-api-access-fwhlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:45:03 crc kubenswrapper[4861]: I0219 14:45:03.660775 4861 scope.go:117] "RemoveContainer" containerID="184a4301dc1907adda0921922c6c175f36c0e99f2eeab3495ea8f267bb443cbb" Feb 19 14:45:03 crc kubenswrapper[4861]: I0219 14:45:03.684041 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwhlt\" (UniqueName: \"kubernetes.io/projected/a1bd39f2-e28e-475c-bc50-c430217d93a6-kube-api-access-fwhlt\") on node \"crc\" DevicePath \"\"" Feb 19 14:45:03 crc kubenswrapper[4861]: I0219 14:45:03.684089 4861 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a1bd39f2-e28e-475c-bc50-c430217d93a6-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 14:45:03 crc kubenswrapper[4861]: I0219 14:45:03.684103 4861 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1bd39f2-e28e-475c-bc50-c430217d93a6-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 14:45:04 crc kubenswrapper[4861]: I0219 14:45:04.162548 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"49507243-f515-44b4-98bf-400a41550f93","Type":"ContainerStarted","Data":"07d44ec89610e1f9437a16fa21c36c54bffc7d7613232324bdbfcc09607f5f46"} Feb 19 14:45:04 crc kubenswrapper[4861]: I0219 14:45:04.162586 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"49507243-f515-44b4-98bf-400a41550f93","Type":"ContainerStarted","Data":"557d00680d4b07408b8870c0ff6557fe0583b439aabe4900189e10de1fe70c1d"} Feb 19 14:45:04 crc kubenswrapper[4861]: I0219 14:45:04.163878 4861 generic.go:334] "Generic (PLEG): container finished" podID="29d3d5f0-231b-4430-856f-569740c64481" containerID="5acb5779a78067664b8cd17e1eb481e23907da7df95ca405e248cdfd7de9aa45" exitCode=0 Feb 19 14:45:04 crc kubenswrapper[4861]: I0219 14:45:04.163915 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c6dfd666f-qdfhh" event={"ID":"29d3d5f0-231b-4430-856f-569740c64481","Type":"ContainerDied","Data":"5acb5779a78067664b8cd17e1eb481e23907da7df95ca405e248cdfd7de9aa45"} Feb 19 14:45:04 crc kubenswrapper[4861]: I0219 14:45:04.168184 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525205-g8jq9" event={"ID":"a1bd39f2-e28e-475c-bc50-c430217d93a6","Type":"ContainerDied","Data":"b2d02486134d08e55a5584d09b3ace496d3ee7271404861e6126e2c135aae73c"} Feb 19 14:45:04 crc kubenswrapper[4861]: I0219 14:45:04.168302 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2d02486134d08e55a5584d09b3ace496d3ee7271404861e6126e2c135aae73c" Feb 19 14:45:04 crc kubenswrapper[4861]: I0219 14:45:04.168264 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525205-g8jq9" Feb 19 14:45:04 crc kubenswrapper[4861]: I0219 14:45:04.568542 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525160-b8qcp"] Feb 19 14:45:04 crc kubenswrapper[4861]: I0219 14:45:04.574625 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525160-b8qcp"] Feb 19 14:45:04 crc kubenswrapper[4861]: I0219 14:45:04.747593 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 14:45:05 crc kubenswrapper[4861]: I0219 14:45:05.185703 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"49507243-f515-44b4-98bf-400a41550f93","Type":"ContainerStarted","Data":"97bbd40de59623bded6ac29b2aad1ccf26b943d299d7976c99f439c422149c5d"} Feb 19 14:45:05 crc kubenswrapper[4861]: I0219 14:45:05.207478 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c6dfd666f-qdfhh" event={"ID":"29d3d5f0-231b-4430-856f-569740c64481","Type":"ContainerStarted","Data":"e1e0fe80cdcfa0c0b4daf679883d67cc4ce186f1a250fb2626530e525320409b"} Feb 19 14:45:05 crc kubenswrapper[4861]: I0219 14:45:05.207549 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 19 14:45:05 crc kubenswrapper[4861]: I0219 14:45:05.207566 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c6dfd666f-qdfhh" Feb 19 14:45:05 crc kubenswrapper[4861]: I0219 14:45:05.255233 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c6dfd666f-qdfhh" podStartSLOduration=3.255031169 podStartE2EDuration="3.255031169s" podCreationTimestamp="2026-02-19 14:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 14:45:05.250661292 +0000 UTC m=+5719.911764550" watchObservedRunningTime="2026-02-19 14:45:05.255031169 +0000 UTC m=+5719.916134447" Feb 19 14:45:05 crc kubenswrapper[4861]: I0219 14:45:05.260667 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.260640519 podStartE2EDuration="3.260640519s" podCreationTimestamp="2026-02-19 14:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:45:05.21850598 +0000 UTC m=+5719.879609238" watchObservedRunningTime="2026-02-19 14:45:05.260640519 +0000 UTC m=+5719.921743757" Feb 19 14:45:05 crc kubenswrapper[4861]: I0219 14:45:05.997080 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dc07f64-3837-42e3-a7cf-1a99295110d6" path="/var/lib/kubelet/pods/5dc07f64-3837-42e3-a7cf-1a99295110d6/volumes" Feb 19 14:45:06 crc kubenswrapper[4861]: I0219 14:45:06.205286 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="49507243-f515-44b4-98bf-400a41550f93" containerName="cinder-api-log" containerID="cri-o://07d44ec89610e1f9437a16fa21c36c54bffc7d7613232324bdbfcc09607f5f46" gracePeriod=30 Feb 19 14:45:06 crc kubenswrapper[4861]: I0219 14:45:06.206025 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="49507243-f515-44b4-98bf-400a41550f93" containerName="cinder-api" containerID="cri-o://97bbd40de59623bded6ac29b2aad1ccf26b943d299d7976c99f439c422149c5d" gracePeriod=30 Feb 19 14:45:06 crc kubenswrapper[4861]: I0219 14:45:06.798004 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 14:45:06 crc kubenswrapper[4861]: I0219 14:45:06.947116 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49507243-f515-44b4-98bf-400a41550f93-combined-ca-bundle\") pod \"49507243-f515-44b4-98bf-400a41550f93\" (UID: \"49507243-f515-44b4-98bf-400a41550f93\") " Feb 19 14:45:06 crc kubenswrapper[4861]: I0219 14:45:06.947223 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49507243-f515-44b4-98bf-400a41550f93-scripts\") pod \"49507243-f515-44b4-98bf-400a41550f93\" (UID: \"49507243-f515-44b4-98bf-400a41550f93\") " Feb 19 14:45:06 crc kubenswrapper[4861]: I0219 14:45:06.947275 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7g7w\" (UniqueName: \"kubernetes.io/projected/49507243-f515-44b4-98bf-400a41550f93-kube-api-access-r7g7w\") pod \"49507243-f515-44b4-98bf-400a41550f93\" (UID: \"49507243-f515-44b4-98bf-400a41550f93\") " Feb 19 14:45:06 crc kubenswrapper[4861]: I0219 14:45:06.947349 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49507243-f515-44b4-98bf-400a41550f93-config-data-custom\") pod \"49507243-f515-44b4-98bf-400a41550f93\" (UID: \"49507243-f515-44b4-98bf-400a41550f93\") " Feb 19 14:45:06 crc kubenswrapper[4861]: I0219 14:45:06.947477 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/49507243-f515-44b4-98bf-400a41550f93-etc-machine-id\") pod \"49507243-f515-44b4-98bf-400a41550f93\" (UID: \"49507243-f515-44b4-98bf-400a41550f93\") " Feb 19 14:45:06 crc kubenswrapper[4861]: I0219 14:45:06.947635 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/49507243-f515-44b4-98bf-400a41550f93-logs\") pod \"49507243-f515-44b4-98bf-400a41550f93\" (UID: \"49507243-f515-44b4-98bf-400a41550f93\") " Feb 19 14:45:06 crc kubenswrapper[4861]: I0219 14:45:06.947671 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49507243-f515-44b4-98bf-400a41550f93-config-data\") pod \"49507243-f515-44b4-98bf-400a41550f93\" (UID: \"49507243-f515-44b4-98bf-400a41550f93\") " Feb 19 14:45:06 crc kubenswrapper[4861]: I0219 14:45:06.947852 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49507243-f515-44b4-98bf-400a41550f93-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "49507243-f515-44b4-98bf-400a41550f93" (UID: "49507243-f515-44b4-98bf-400a41550f93"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 14:45:06 crc kubenswrapper[4861]: I0219 14:45:06.948201 4861 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/49507243-f515-44b4-98bf-400a41550f93-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 14:45:06 crc kubenswrapper[4861]: I0219 14:45:06.948505 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49507243-f515-44b4-98bf-400a41550f93-logs" (OuterVolumeSpecName: "logs") pod "49507243-f515-44b4-98bf-400a41550f93" (UID: "49507243-f515-44b4-98bf-400a41550f93"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:45:06 crc kubenswrapper[4861]: I0219 14:45:06.952863 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49507243-f515-44b4-98bf-400a41550f93-kube-api-access-r7g7w" (OuterVolumeSpecName: "kube-api-access-r7g7w") pod "49507243-f515-44b4-98bf-400a41550f93" (UID: "49507243-f515-44b4-98bf-400a41550f93"). 
InnerVolumeSpecName "kube-api-access-r7g7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:45:06 crc kubenswrapper[4861]: I0219 14:45:06.953357 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49507243-f515-44b4-98bf-400a41550f93-scripts" (OuterVolumeSpecName: "scripts") pod "49507243-f515-44b4-98bf-400a41550f93" (UID: "49507243-f515-44b4-98bf-400a41550f93"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:45:06 crc kubenswrapper[4861]: I0219 14:45:06.954111 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49507243-f515-44b4-98bf-400a41550f93-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "49507243-f515-44b4-98bf-400a41550f93" (UID: "49507243-f515-44b4-98bf-400a41550f93"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:45:06 crc kubenswrapper[4861]: I0219 14:45:06.992329 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49507243-f515-44b4-98bf-400a41550f93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49507243-f515-44b4-98bf-400a41550f93" (UID: "49507243-f515-44b4-98bf-400a41550f93"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:45:06 crc kubenswrapper[4861]: I0219 14:45:06.995257 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49507243-f515-44b4-98bf-400a41550f93-config-data" (OuterVolumeSpecName: "config-data") pod "49507243-f515-44b4-98bf-400a41550f93" (UID: "49507243-f515-44b4-98bf-400a41550f93"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.049819 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49507243-f515-44b4-98bf-400a41550f93-logs\") on node \"crc\" DevicePath \"\"" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.049862 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49507243-f515-44b4-98bf-400a41550f93-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.049875 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49507243-f515-44b4-98bf-400a41550f93-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.049889 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49507243-f515-44b4-98bf-400a41550f93-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.049900 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7g7w\" (UniqueName: \"kubernetes.io/projected/49507243-f515-44b4-98bf-400a41550f93-kube-api-access-r7g7w\") on node \"crc\" DevicePath \"\"" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.049911 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49507243-f515-44b4-98bf-400a41550f93-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.211960 4861 generic.go:334] "Generic (PLEG): container finished" podID="49507243-f515-44b4-98bf-400a41550f93" containerID="97bbd40de59623bded6ac29b2aad1ccf26b943d299d7976c99f439c422149c5d" exitCode=0 Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.211999 4861 generic.go:334] "Generic (PLEG): 
container finished" podID="49507243-f515-44b4-98bf-400a41550f93" containerID="07d44ec89610e1f9437a16fa21c36c54bffc7d7613232324bdbfcc09607f5f46" exitCode=143 Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.212023 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.212031 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"49507243-f515-44b4-98bf-400a41550f93","Type":"ContainerDied","Data":"97bbd40de59623bded6ac29b2aad1ccf26b943d299d7976c99f439c422149c5d"} Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.212083 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"49507243-f515-44b4-98bf-400a41550f93","Type":"ContainerDied","Data":"07d44ec89610e1f9437a16fa21c36c54bffc7d7613232324bdbfcc09607f5f46"} Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.212107 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"49507243-f515-44b4-98bf-400a41550f93","Type":"ContainerDied","Data":"557d00680d4b07408b8870c0ff6557fe0583b439aabe4900189e10de1fe70c1d"} Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.212131 4861 scope.go:117] "RemoveContainer" containerID="97bbd40de59623bded6ac29b2aad1ccf26b943d299d7976c99f439c422149c5d" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.246046 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.251114 4861 scope.go:117] "RemoveContainer" containerID="07d44ec89610e1f9437a16fa21c36c54bffc7d7613232324bdbfcc09607f5f46" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.257736 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.268560 4861 scope.go:117] "RemoveContainer" 
containerID="97bbd40de59623bded6ac29b2aad1ccf26b943d299d7976c99f439c422149c5d" Feb 19 14:45:07 crc kubenswrapper[4861]: E0219 14:45:07.269093 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97bbd40de59623bded6ac29b2aad1ccf26b943d299d7976c99f439c422149c5d\": container with ID starting with 97bbd40de59623bded6ac29b2aad1ccf26b943d299d7976c99f439c422149c5d not found: ID does not exist" containerID="97bbd40de59623bded6ac29b2aad1ccf26b943d299d7976c99f439c422149c5d" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.269164 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97bbd40de59623bded6ac29b2aad1ccf26b943d299d7976c99f439c422149c5d"} err="failed to get container status \"97bbd40de59623bded6ac29b2aad1ccf26b943d299d7976c99f439c422149c5d\": rpc error: code = NotFound desc = could not find container \"97bbd40de59623bded6ac29b2aad1ccf26b943d299d7976c99f439c422149c5d\": container with ID starting with 97bbd40de59623bded6ac29b2aad1ccf26b943d299d7976c99f439c422149c5d not found: ID does not exist" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.269205 4861 scope.go:117] "RemoveContainer" containerID="07d44ec89610e1f9437a16fa21c36c54bffc7d7613232324bdbfcc09607f5f46" Feb 19 14:45:07 crc kubenswrapper[4861]: E0219 14:45:07.269518 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07d44ec89610e1f9437a16fa21c36c54bffc7d7613232324bdbfcc09607f5f46\": container with ID starting with 07d44ec89610e1f9437a16fa21c36c54bffc7d7613232324bdbfcc09607f5f46 not found: ID does not exist" containerID="07d44ec89610e1f9437a16fa21c36c54bffc7d7613232324bdbfcc09607f5f46" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.269600 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"07d44ec89610e1f9437a16fa21c36c54bffc7d7613232324bdbfcc09607f5f46"} err="failed to get container status \"07d44ec89610e1f9437a16fa21c36c54bffc7d7613232324bdbfcc09607f5f46\": rpc error: code = NotFound desc = could not find container \"07d44ec89610e1f9437a16fa21c36c54bffc7d7613232324bdbfcc09607f5f46\": container with ID starting with 07d44ec89610e1f9437a16fa21c36c54bffc7d7613232324bdbfcc09607f5f46 not found: ID does not exist" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.269682 4861 scope.go:117] "RemoveContainer" containerID="97bbd40de59623bded6ac29b2aad1ccf26b943d299d7976c99f439c422149c5d" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.271237 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97bbd40de59623bded6ac29b2aad1ccf26b943d299d7976c99f439c422149c5d"} err="failed to get container status \"97bbd40de59623bded6ac29b2aad1ccf26b943d299d7976c99f439c422149c5d\": rpc error: code = NotFound desc = could not find container \"97bbd40de59623bded6ac29b2aad1ccf26b943d299d7976c99f439c422149c5d\": container with ID starting with 97bbd40de59623bded6ac29b2aad1ccf26b943d299d7976c99f439c422149c5d not found: ID does not exist" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.271282 4861 scope.go:117] "RemoveContainer" containerID="07d44ec89610e1f9437a16fa21c36c54bffc7d7613232324bdbfcc09607f5f46" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.272618 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07d44ec89610e1f9437a16fa21c36c54bffc7d7613232324bdbfcc09607f5f46"} err="failed to get container status \"07d44ec89610e1f9437a16fa21c36c54bffc7d7613232324bdbfcc09607f5f46\": rpc error: code = NotFound desc = could not find container \"07d44ec89610e1f9437a16fa21c36c54bffc7d7613232324bdbfcc09607f5f46\": container with ID starting with 07d44ec89610e1f9437a16fa21c36c54bffc7d7613232324bdbfcc09607f5f46 not found: ID does not 
exist" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.288645 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 19 14:45:07 crc kubenswrapper[4861]: E0219 14:45:07.289059 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49507243-f515-44b4-98bf-400a41550f93" containerName="cinder-api-log" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.289081 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="49507243-f515-44b4-98bf-400a41550f93" containerName="cinder-api-log" Feb 19 14:45:07 crc kubenswrapper[4861]: E0219 14:45:07.289107 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49507243-f515-44b4-98bf-400a41550f93" containerName="cinder-api" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.289115 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="49507243-f515-44b4-98bf-400a41550f93" containerName="cinder-api" Feb 19 14:45:07 crc kubenswrapper[4861]: E0219 14:45:07.289130 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1bd39f2-e28e-475c-bc50-c430217d93a6" containerName="collect-profiles" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.289138 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1bd39f2-e28e-475c-bc50-c430217d93a6" containerName="collect-profiles" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.289334 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="49507243-f515-44b4-98bf-400a41550f93" containerName="cinder-api" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.289356 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1bd39f2-e28e-475c-bc50-c430217d93a6" containerName="collect-profiles" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.289376 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="49507243-f515-44b4-98bf-400a41550f93" containerName="cinder-api-log" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.290506 
4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.294310 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.294666 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.295086 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.298867 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-zbl8z" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.299111 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.299153 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.308854 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.457706 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-logs\") pod \"cinder-api-0\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " pod="openstack/cinder-api-0" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.458118 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-scripts\") pod \"cinder-api-0\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " pod="openstack/cinder-api-0" Feb 19 14:45:07 crc 
kubenswrapper[4861]: I0219 14:45:07.458181 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn7h4\" (UniqueName: \"kubernetes.io/projected/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-kube-api-access-tn7h4\") pod \"cinder-api-0\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " pod="openstack/cinder-api-0" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.458240 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " pod="openstack/cinder-api-0" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.458274 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " pod="openstack/cinder-api-0" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.458397 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " pod="openstack/cinder-api-0" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.458631 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " pod="openstack/cinder-api-0" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.458723 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-config-data\") pod \"cinder-api-0\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " pod="openstack/cinder-api-0" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.458869 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-config-data-custom\") pod \"cinder-api-0\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " pod="openstack/cinder-api-0" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.560202 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-logs\") pod \"cinder-api-0\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " pod="openstack/cinder-api-0" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.560285 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-scripts\") pod \"cinder-api-0\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " pod="openstack/cinder-api-0" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.560340 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn7h4\" (UniqueName: \"kubernetes.io/projected/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-kube-api-access-tn7h4\") pod \"cinder-api-0\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " pod="openstack/cinder-api-0" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.560365 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-internal-tls-certs\") pod \"cinder-api-0\" (UID: 
\"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " pod="openstack/cinder-api-0" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.560397 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " pod="openstack/cinder-api-0" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.560436 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " pod="openstack/cinder-api-0" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.560493 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " pod="openstack/cinder-api-0" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.560535 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-config-data\") pod \"cinder-api-0\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " pod="openstack/cinder-api-0" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.560575 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-config-data-custom\") pod \"cinder-api-0\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " pod="openstack/cinder-api-0" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.560918 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " pod="openstack/cinder-api-0" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.561586 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-logs\") pod \"cinder-api-0\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " pod="openstack/cinder-api-0" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.564530 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " pod="openstack/cinder-api-0" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.564729 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " pod="openstack/cinder-api-0" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.565954 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " pod="openstack/cinder-api-0" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.566029 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-scripts\") pod \"cinder-api-0\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " pod="openstack/cinder-api-0" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.567036 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-config-data-custom\") pod \"cinder-api-0\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " pod="openstack/cinder-api-0" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.581510 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-config-data\") pod \"cinder-api-0\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " pod="openstack/cinder-api-0" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.586586 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn7h4\" (UniqueName: \"kubernetes.io/projected/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-kube-api-access-tn7h4\") pod \"cinder-api-0\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " pod="openstack/cinder-api-0" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.609881 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 14:45:07 crc kubenswrapper[4861]: I0219 14:45:07.990342 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49507243-f515-44b4-98bf-400a41550f93" path="/var/lib/kubelet/pods/49507243-f515-44b4-98bf-400a41550f93/volumes" Feb 19 14:45:08 crc kubenswrapper[4861]: I0219 14:45:08.122483 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 14:45:08 crc kubenswrapper[4861]: I0219 14:45:08.237633 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b","Type":"ContainerStarted","Data":"5bd507454200f4580baadb57a8ef3b9613442f3308de9faa97dc93cd7c362159"} Feb 19 14:45:09 crc kubenswrapper[4861]: I0219 14:45:09.251662 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b","Type":"ContainerStarted","Data":"e23e1c4d7beb19256a4bcbc57058e9fe7cacf228a594d2a1555b250902a08819"} Feb 19 14:45:10 crc kubenswrapper[4861]: I0219 14:45:10.263315 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b","Type":"ContainerStarted","Data":"5ed537253e47b8d478d5f159f95673082a3bb2f7840039bb1e42a3b3de15079f"} Feb 19 14:45:10 crc kubenswrapper[4861]: I0219 14:45:10.263796 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 19 14:45:10 crc kubenswrapper[4861]: I0219 14:45:10.294518 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.294392359 podStartE2EDuration="3.294392359s" podCreationTimestamp="2026-02-19 14:45:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:45:10.290482295 +0000 UTC m=+5724.951585623" 
watchObservedRunningTime="2026-02-19 14:45:10.294392359 +0000 UTC m=+5724.955495627" Feb 19 14:45:12 crc kubenswrapper[4861]: I0219 14:45:12.683770 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-c6dfd666f-qdfhh" Feb 19 14:45:12 crc kubenswrapper[4861]: I0219 14:45:12.772801 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78696b78bf-8w2t9"] Feb 19 14:45:12 crc kubenswrapper[4861]: I0219 14:45:12.773126 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78696b78bf-8w2t9" podUID="3574166a-39e5-4b93-bdfc-ecef1a067f5c" containerName="dnsmasq-dns" containerID="cri-o://75ced1b5b504f0e678661279db984764c31435d214007f3dff1b38b737b2f355" gracePeriod=10 Feb 19 14:45:13 crc kubenswrapper[4861]: I0219 14:45:13.283368 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78696b78bf-8w2t9" Feb 19 14:45:13 crc kubenswrapper[4861]: I0219 14:45:13.297785 4861 generic.go:334] "Generic (PLEG): container finished" podID="3574166a-39e5-4b93-bdfc-ecef1a067f5c" containerID="75ced1b5b504f0e678661279db984764c31435d214007f3dff1b38b737b2f355" exitCode=0 Feb 19 14:45:13 crc kubenswrapper[4861]: I0219 14:45:13.297828 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78696b78bf-8w2t9" event={"ID":"3574166a-39e5-4b93-bdfc-ecef1a067f5c","Type":"ContainerDied","Data":"75ced1b5b504f0e678661279db984764c31435d214007f3dff1b38b737b2f355"} Feb 19 14:45:13 crc kubenswrapper[4861]: I0219 14:45:13.297856 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78696b78bf-8w2t9" event={"ID":"3574166a-39e5-4b93-bdfc-ecef1a067f5c","Type":"ContainerDied","Data":"19e023803c2805f9ac89c1da6df24ab78a347af0d6a58b1ea63ef764fd5fd1c6"} Feb 19 14:45:13 crc kubenswrapper[4861]: I0219 14:45:13.297873 4861 scope.go:117] "RemoveContainer" 
containerID="75ced1b5b504f0e678661279db984764c31435d214007f3dff1b38b737b2f355" Feb 19 14:45:13 crc kubenswrapper[4861]: I0219 14:45:13.297877 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78696b78bf-8w2t9" Feb 19 14:45:13 crc kubenswrapper[4861]: I0219 14:45:13.327937 4861 scope.go:117] "RemoveContainer" containerID="6ccf76ea0fbba9457283f22dbba27c52fdfa85db28e52ec6e2a7c9902b1e13b6" Feb 19 14:45:13 crc kubenswrapper[4861]: I0219 14:45:13.350824 4861 scope.go:117] "RemoveContainer" containerID="75ced1b5b504f0e678661279db984764c31435d214007f3dff1b38b737b2f355" Feb 19 14:45:13 crc kubenswrapper[4861]: E0219 14:45:13.357324 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75ced1b5b504f0e678661279db984764c31435d214007f3dff1b38b737b2f355\": container with ID starting with 75ced1b5b504f0e678661279db984764c31435d214007f3dff1b38b737b2f355 not found: ID does not exist" containerID="75ced1b5b504f0e678661279db984764c31435d214007f3dff1b38b737b2f355" Feb 19 14:45:13 crc kubenswrapper[4861]: I0219 14:45:13.357358 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75ced1b5b504f0e678661279db984764c31435d214007f3dff1b38b737b2f355"} err="failed to get container status \"75ced1b5b504f0e678661279db984764c31435d214007f3dff1b38b737b2f355\": rpc error: code = NotFound desc = could not find container \"75ced1b5b504f0e678661279db984764c31435d214007f3dff1b38b737b2f355\": container with ID starting with 75ced1b5b504f0e678661279db984764c31435d214007f3dff1b38b737b2f355 not found: ID does not exist" Feb 19 14:45:13 crc kubenswrapper[4861]: I0219 14:45:13.357381 4861 scope.go:117] "RemoveContainer" containerID="6ccf76ea0fbba9457283f22dbba27c52fdfa85db28e52ec6e2a7c9902b1e13b6" Feb 19 14:45:13 crc kubenswrapper[4861]: E0219 14:45:13.357757 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"6ccf76ea0fbba9457283f22dbba27c52fdfa85db28e52ec6e2a7c9902b1e13b6\": container with ID starting with 6ccf76ea0fbba9457283f22dbba27c52fdfa85db28e52ec6e2a7c9902b1e13b6 not found: ID does not exist" containerID="6ccf76ea0fbba9457283f22dbba27c52fdfa85db28e52ec6e2a7c9902b1e13b6" Feb 19 14:45:13 crc kubenswrapper[4861]: I0219 14:45:13.357806 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ccf76ea0fbba9457283f22dbba27c52fdfa85db28e52ec6e2a7c9902b1e13b6"} err="failed to get container status \"6ccf76ea0fbba9457283f22dbba27c52fdfa85db28e52ec6e2a7c9902b1e13b6\": rpc error: code = NotFound desc = could not find container \"6ccf76ea0fbba9457283f22dbba27c52fdfa85db28e52ec6e2a7c9902b1e13b6\": container with ID starting with 6ccf76ea0fbba9457283f22dbba27c52fdfa85db28e52ec6e2a7c9902b1e13b6 not found: ID does not exist" Feb 19 14:45:13 crc kubenswrapper[4861]: I0219 14:45:13.383964 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3574166a-39e5-4b93-bdfc-ecef1a067f5c-config\") pod \"3574166a-39e5-4b93-bdfc-ecef1a067f5c\" (UID: \"3574166a-39e5-4b93-bdfc-ecef1a067f5c\") " Feb 19 14:45:13 crc kubenswrapper[4861]: I0219 14:45:13.384017 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3574166a-39e5-4b93-bdfc-ecef1a067f5c-ovsdbserver-nb\") pod \"3574166a-39e5-4b93-bdfc-ecef1a067f5c\" (UID: \"3574166a-39e5-4b93-bdfc-ecef1a067f5c\") " Feb 19 14:45:13 crc kubenswrapper[4861]: I0219 14:45:13.384059 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3574166a-39e5-4b93-bdfc-ecef1a067f5c-ovsdbserver-sb\") pod \"3574166a-39e5-4b93-bdfc-ecef1a067f5c\" (UID: \"3574166a-39e5-4b93-bdfc-ecef1a067f5c\") " Feb 19 14:45:13 crc 
kubenswrapper[4861]: I0219 14:45:13.384209 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l8fj\" (UniqueName: \"kubernetes.io/projected/3574166a-39e5-4b93-bdfc-ecef1a067f5c-kube-api-access-9l8fj\") pod \"3574166a-39e5-4b93-bdfc-ecef1a067f5c\" (UID: \"3574166a-39e5-4b93-bdfc-ecef1a067f5c\") " Feb 19 14:45:13 crc kubenswrapper[4861]: I0219 14:45:13.384232 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3574166a-39e5-4b93-bdfc-ecef1a067f5c-dns-svc\") pod \"3574166a-39e5-4b93-bdfc-ecef1a067f5c\" (UID: \"3574166a-39e5-4b93-bdfc-ecef1a067f5c\") " Feb 19 14:45:13 crc kubenswrapper[4861]: I0219 14:45:13.406652 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3574166a-39e5-4b93-bdfc-ecef1a067f5c-kube-api-access-9l8fj" (OuterVolumeSpecName: "kube-api-access-9l8fj") pod "3574166a-39e5-4b93-bdfc-ecef1a067f5c" (UID: "3574166a-39e5-4b93-bdfc-ecef1a067f5c"). InnerVolumeSpecName "kube-api-access-9l8fj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:45:13 crc kubenswrapper[4861]: I0219 14:45:13.431115 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3574166a-39e5-4b93-bdfc-ecef1a067f5c-config" (OuterVolumeSpecName: "config") pod "3574166a-39e5-4b93-bdfc-ecef1a067f5c" (UID: "3574166a-39e5-4b93-bdfc-ecef1a067f5c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:45:13 crc kubenswrapper[4861]: I0219 14:45:13.443766 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3574166a-39e5-4b93-bdfc-ecef1a067f5c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3574166a-39e5-4b93-bdfc-ecef1a067f5c" (UID: "3574166a-39e5-4b93-bdfc-ecef1a067f5c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:45:13 crc kubenswrapper[4861]: I0219 14:45:13.445029 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3574166a-39e5-4b93-bdfc-ecef1a067f5c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3574166a-39e5-4b93-bdfc-ecef1a067f5c" (UID: "3574166a-39e5-4b93-bdfc-ecef1a067f5c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:45:13 crc kubenswrapper[4861]: I0219 14:45:13.450160 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3574166a-39e5-4b93-bdfc-ecef1a067f5c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3574166a-39e5-4b93-bdfc-ecef1a067f5c" (UID: "3574166a-39e5-4b93-bdfc-ecef1a067f5c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:45:13 crc kubenswrapper[4861]: I0219 14:45:13.485772 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l8fj\" (UniqueName: \"kubernetes.io/projected/3574166a-39e5-4b93-bdfc-ecef1a067f5c-kube-api-access-9l8fj\") on node \"crc\" DevicePath \"\"" Feb 19 14:45:13 crc kubenswrapper[4861]: I0219 14:45:13.485812 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3574166a-39e5-4b93-bdfc-ecef1a067f5c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 14:45:13 crc kubenswrapper[4861]: I0219 14:45:13.485834 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3574166a-39e5-4b93-bdfc-ecef1a067f5c-config\") on node \"crc\" DevicePath \"\"" Feb 19 14:45:13 crc kubenswrapper[4861]: I0219 14:45:13.485847 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3574166a-39e5-4b93-bdfc-ecef1a067f5c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 
19 14:45:13 crc kubenswrapper[4861]: I0219 14:45:13.485855 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3574166a-39e5-4b93-bdfc-ecef1a067f5c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 14:45:13 crc kubenswrapper[4861]: I0219 14:45:13.645392 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78696b78bf-8w2t9"] Feb 19 14:45:13 crc kubenswrapper[4861]: I0219 14:45:13.653673 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78696b78bf-8w2t9"] Feb 19 14:45:13 crc kubenswrapper[4861]: I0219 14:45:13.986012 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3574166a-39e5-4b93-bdfc-ecef1a067f5c" path="/var/lib/kubelet/pods/3574166a-39e5-4b93-bdfc-ecef1a067f5c/volumes" Feb 19 14:45:19 crc kubenswrapper[4861]: I0219 14:45:19.358996 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 19 14:45:33 crc kubenswrapper[4861]: I0219 14:45:33.835118 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 14:45:33 crc kubenswrapper[4861]: I0219 14:45:33.835636 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 14:45:34 crc kubenswrapper[4861]: I0219 14:45:34.917588 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 14:45:34 crc kubenswrapper[4861]: E0219 14:45:34.917916 4861 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3574166a-39e5-4b93-bdfc-ecef1a067f5c" containerName="init" Feb 19 14:45:34 crc kubenswrapper[4861]: I0219 14:45:34.917928 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3574166a-39e5-4b93-bdfc-ecef1a067f5c" containerName="init" Feb 19 14:45:34 crc kubenswrapper[4861]: E0219 14:45:34.917938 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3574166a-39e5-4b93-bdfc-ecef1a067f5c" containerName="dnsmasq-dns" Feb 19 14:45:34 crc kubenswrapper[4861]: I0219 14:45:34.917943 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3574166a-39e5-4b93-bdfc-ecef1a067f5c" containerName="dnsmasq-dns" Feb 19 14:45:34 crc kubenswrapper[4861]: I0219 14:45:34.918120 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="3574166a-39e5-4b93-bdfc-ecef1a067f5c" containerName="dnsmasq-dns" Feb 19 14:45:34 crc kubenswrapper[4861]: I0219 14:45:34.918949 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 14:45:34 crc kubenswrapper[4861]: I0219 14:45:34.922920 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 19 14:45:34 crc kubenswrapper[4861]: I0219 14:45:34.931822 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 14:45:34 crc kubenswrapper[4861]: I0219 14:45:34.969710 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff128094-c5a1-49e9-a84f-4caa894ef482-scripts\") pod \"cinder-scheduler-0\" (UID: \"ff128094-c5a1-49e9-a84f-4caa894ef482\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:34 crc kubenswrapper[4861]: I0219 14:45:34.969777 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdptq\" (UniqueName: 
\"kubernetes.io/projected/ff128094-c5a1-49e9-a84f-4caa894ef482-kube-api-access-wdptq\") pod \"cinder-scheduler-0\" (UID: \"ff128094-c5a1-49e9-a84f-4caa894ef482\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:34 crc kubenswrapper[4861]: I0219 14:45:34.969819 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff128094-c5a1-49e9-a84f-4caa894ef482-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ff128094-c5a1-49e9-a84f-4caa894ef482\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:34 crc kubenswrapper[4861]: I0219 14:45:34.969850 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff128094-c5a1-49e9-a84f-4caa894ef482-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ff128094-c5a1-49e9-a84f-4caa894ef482\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:34 crc kubenswrapper[4861]: I0219 14:45:34.969894 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff128094-c5a1-49e9-a84f-4caa894ef482-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ff128094-c5a1-49e9-a84f-4caa894ef482\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:34 crc kubenswrapper[4861]: I0219 14:45:34.969932 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff128094-c5a1-49e9-a84f-4caa894ef482-config-data\") pod \"cinder-scheduler-0\" (UID: \"ff128094-c5a1-49e9-a84f-4caa894ef482\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:35 crc kubenswrapper[4861]: I0219 14:45:35.071834 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ff128094-c5a1-49e9-a84f-4caa894ef482-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ff128094-c5a1-49e9-a84f-4caa894ef482\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:35 crc kubenswrapper[4861]: I0219 14:45:35.071906 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff128094-c5a1-49e9-a84f-4caa894ef482-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ff128094-c5a1-49e9-a84f-4caa894ef482\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:35 crc kubenswrapper[4861]: I0219 14:45:35.072002 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff128094-c5a1-49e9-a84f-4caa894ef482-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ff128094-c5a1-49e9-a84f-4caa894ef482\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:35 crc kubenswrapper[4861]: I0219 14:45:35.072079 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff128094-c5a1-49e9-a84f-4caa894ef482-config-data\") pod \"cinder-scheduler-0\" (UID: \"ff128094-c5a1-49e9-a84f-4caa894ef482\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:35 crc kubenswrapper[4861]: I0219 14:45:35.072142 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff128094-c5a1-49e9-a84f-4caa894ef482-scripts\") pod \"cinder-scheduler-0\" (UID: \"ff128094-c5a1-49e9-a84f-4caa894ef482\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:35 crc kubenswrapper[4861]: I0219 14:45:35.072173 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdptq\" (UniqueName: \"kubernetes.io/projected/ff128094-c5a1-49e9-a84f-4caa894ef482-kube-api-access-wdptq\") pod \"cinder-scheduler-0\" (UID: \"ff128094-c5a1-49e9-a84f-4caa894ef482\") " 
pod="openstack/cinder-scheduler-0" Feb 19 14:45:35 crc kubenswrapper[4861]: I0219 14:45:35.073142 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff128094-c5a1-49e9-a84f-4caa894ef482-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ff128094-c5a1-49e9-a84f-4caa894ef482\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:35 crc kubenswrapper[4861]: I0219 14:45:35.077823 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff128094-c5a1-49e9-a84f-4caa894ef482-scripts\") pod \"cinder-scheduler-0\" (UID: \"ff128094-c5a1-49e9-a84f-4caa894ef482\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:35 crc kubenswrapper[4861]: I0219 14:45:35.078644 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff128094-c5a1-49e9-a84f-4caa894ef482-config-data\") pod \"cinder-scheduler-0\" (UID: \"ff128094-c5a1-49e9-a84f-4caa894ef482\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:35 crc kubenswrapper[4861]: I0219 14:45:35.078717 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff128094-c5a1-49e9-a84f-4caa894ef482-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ff128094-c5a1-49e9-a84f-4caa894ef482\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:35 crc kubenswrapper[4861]: I0219 14:45:35.083206 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff128094-c5a1-49e9-a84f-4caa894ef482-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ff128094-c5a1-49e9-a84f-4caa894ef482\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:35 crc kubenswrapper[4861]: I0219 14:45:35.093831 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdptq\" (UniqueName: 
\"kubernetes.io/projected/ff128094-c5a1-49e9-a84f-4caa894ef482-kube-api-access-wdptq\") pod \"cinder-scheduler-0\" (UID: \"ff128094-c5a1-49e9-a84f-4caa894ef482\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:35 crc kubenswrapper[4861]: I0219 14:45:35.234720 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 14:45:35 crc kubenswrapper[4861]: I0219 14:45:35.721505 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 14:45:36 crc kubenswrapper[4861]: I0219 14:45:36.389399 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 14:45:36 crc kubenswrapper[4861]: I0219 14:45:36.389878 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="b4b3247a-c12c-4ba2-a17a-10de0bf29f6b" containerName="cinder-api-log" containerID="cri-o://e23e1c4d7beb19256a4bcbc57058e9fe7cacf228a594d2a1555b250902a08819" gracePeriod=30 Feb 19 14:45:36 crc kubenswrapper[4861]: I0219 14:45:36.390261 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="b4b3247a-c12c-4ba2-a17a-10de0bf29f6b" containerName="cinder-api" containerID="cri-o://5ed537253e47b8d478d5f159f95673082a3bb2f7840039bb1e42a3b3de15079f" gracePeriod=30 Feb 19 14:45:36 crc kubenswrapper[4861]: I0219 14:45:36.579009 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff128094-c5a1-49e9-a84f-4caa894ef482","Type":"ContainerStarted","Data":"c16552007a21a0275babf99499b03b733a0cc1d2a327d406573f3ebffe2c80f0"} Feb 19 14:45:36 crc kubenswrapper[4861]: I0219 14:45:36.579292 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff128094-c5a1-49e9-a84f-4caa894ef482","Type":"ContainerStarted","Data":"709f79ae7d41cd447fe18a9ae1e9e1e4313febc501954531d32fdc7f59a7e56f"} Feb 19 14:45:36 crc 
kubenswrapper[4861]: I0219 14:45:36.581908 4861 generic.go:334] "Generic (PLEG): container finished" podID="b4b3247a-c12c-4ba2-a17a-10de0bf29f6b" containerID="e23e1c4d7beb19256a4bcbc57058e9fe7cacf228a594d2a1555b250902a08819" exitCode=143 Feb 19 14:45:36 crc kubenswrapper[4861]: I0219 14:45:36.581962 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b","Type":"ContainerDied","Data":"e23e1c4d7beb19256a4bcbc57058e9fe7cacf228a594d2a1555b250902a08819"} Feb 19 14:45:37 crc kubenswrapper[4861]: I0219 14:45:37.593829 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff128094-c5a1-49e9-a84f-4caa894ef482","Type":"ContainerStarted","Data":"da07511ceca9a806a0dd7202f6ebe2d5494bc97e3c1a67f768474767ae32ffcc"} Feb 19 14:45:37 crc kubenswrapper[4861]: I0219 14:45:37.618279 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.61825475 podStartE2EDuration="3.61825475s" podCreationTimestamp="2026-02-19 14:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:45:37.613866862 +0000 UTC m=+5752.274970100" watchObservedRunningTime="2026-02-19 14:45:37.61825475 +0000 UTC m=+5752.279357988" Feb 19 14:45:39 crc kubenswrapper[4861]: I0219 14:45:39.537947 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="b4b3247a-c12c-4ba2-a17a-10de0bf29f6b" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.1.60:8776/healthcheck\": read tcp 10.217.0.2:33086->10.217.1.60:8776: read: connection reset by peer" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.029213 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.073871 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tn7h4\" (UniqueName: \"kubernetes.io/projected/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-kube-api-access-tn7h4\") pod \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.073989 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-combined-ca-bundle\") pod \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.074014 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-public-tls-certs\") pod \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.074051 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-scripts\") pod \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.074094 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-config-data\") pod \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.074148 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-logs\") pod \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.074191 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-config-data-custom\") pod \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.074225 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-internal-tls-certs\") pod \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.074253 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-etc-machine-id\") pod \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\" (UID: \"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b\") " Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.085028 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b4b3247a-c12c-4ba2-a17a-10de0bf29f6b" (UID: "b4b3247a-c12c-4ba2-a17a-10de0bf29f6b"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.092330 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-logs" (OuterVolumeSpecName: "logs") pod "b4b3247a-c12c-4ba2-a17a-10de0bf29f6b" (UID: "b4b3247a-c12c-4ba2-a17a-10de0bf29f6b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.100382 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-scripts" (OuterVolumeSpecName: "scripts") pod "b4b3247a-c12c-4ba2-a17a-10de0bf29f6b" (UID: "b4b3247a-c12c-4ba2-a17a-10de0bf29f6b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.100503 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b4b3247a-c12c-4ba2-a17a-10de0bf29f6b" (UID: "b4b3247a-c12c-4ba2-a17a-10de0bf29f6b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.113223 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-kube-api-access-tn7h4" (OuterVolumeSpecName: "kube-api-access-tn7h4") pod "b4b3247a-c12c-4ba2-a17a-10de0bf29f6b" (UID: "b4b3247a-c12c-4ba2-a17a-10de0bf29f6b"). InnerVolumeSpecName "kube-api-access-tn7h4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.135302 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4b3247a-c12c-4ba2-a17a-10de0bf29f6b" (UID: "b4b3247a-c12c-4ba2-a17a-10de0bf29f6b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.158184 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-config-data" (OuterVolumeSpecName: "config-data") pod "b4b3247a-c12c-4ba2-a17a-10de0bf29f6b" (UID: "b4b3247a-c12c-4ba2-a17a-10de0bf29f6b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.166660 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b4b3247a-c12c-4ba2-a17a-10de0bf29f6b" (UID: "b4b3247a-c12c-4ba2-a17a-10de0bf29f6b"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.175864 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.175896 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.175907 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.175915 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-logs\") on node \"crc\" DevicePath \"\"" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.175924 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.175932 4861 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.175940 4861 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.175949 4861 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-tn7h4\" (UniqueName: \"kubernetes.io/projected/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-kube-api-access-tn7h4\") on node \"crc\" DevicePath \"\"" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.184528 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b4b3247a-c12c-4ba2-a17a-10de0bf29f6b" (UID: "b4b3247a-c12c-4ba2-a17a-10de0bf29f6b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.235687 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.278010 4861 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.623363 4861 generic.go:334] "Generic (PLEG): container finished" podID="b4b3247a-c12c-4ba2-a17a-10de0bf29f6b" containerID="5ed537253e47b8d478d5f159f95673082a3bb2f7840039bb1e42a3b3de15079f" exitCode=0 Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.623453 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.623462 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b","Type":"ContainerDied","Data":"5ed537253e47b8d478d5f159f95673082a3bb2f7840039bb1e42a3b3de15079f"} Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.623542 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b4b3247a-c12c-4ba2-a17a-10de0bf29f6b","Type":"ContainerDied","Data":"5bd507454200f4580baadb57a8ef3b9613442f3308de9faa97dc93cd7c362159"} Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.623573 4861 scope.go:117] "RemoveContainer" containerID="5ed537253e47b8d478d5f159f95673082a3bb2f7840039bb1e42a3b3de15079f" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.682025 4861 scope.go:117] "RemoveContainer" containerID="e23e1c4d7beb19256a4bcbc57058e9fe7cacf228a594d2a1555b250902a08819" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.700033 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.708543 4861 scope.go:117] "RemoveContainer" containerID="5ed537253e47b8d478d5f159f95673082a3bb2f7840039bb1e42a3b3de15079f" Feb 19 14:45:40 crc kubenswrapper[4861]: E0219 14:45:40.709099 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ed537253e47b8d478d5f159f95673082a3bb2f7840039bb1e42a3b3de15079f\": container with ID starting with 5ed537253e47b8d478d5f159f95673082a3bb2f7840039bb1e42a3b3de15079f not found: ID does not exist" containerID="5ed537253e47b8d478d5f159f95673082a3bb2f7840039bb1e42a3b3de15079f" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.709140 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5ed537253e47b8d478d5f159f95673082a3bb2f7840039bb1e42a3b3de15079f"} err="failed to get container status \"5ed537253e47b8d478d5f159f95673082a3bb2f7840039bb1e42a3b3de15079f\": rpc error: code = NotFound desc = could not find container \"5ed537253e47b8d478d5f159f95673082a3bb2f7840039bb1e42a3b3de15079f\": container with ID starting with 5ed537253e47b8d478d5f159f95673082a3bb2f7840039bb1e42a3b3de15079f not found: ID does not exist" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.709167 4861 scope.go:117] "RemoveContainer" containerID="e23e1c4d7beb19256a4bcbc57058e9fe7cacf228a594d2a1555b250902a08819" Feb 19 14:45:40 crc kubenswrapper[4861]: E0219 14:45:40.709629 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e23e1c4d7beb19256a4bcbc57058e9fe7cacf228a594d2a1555b250902a08819\": container with ID starting with e23e1c4d7beb19256a4bcbc57058e9fe7cacf228a594d2a1555b250902a08819 not found: ID does not exist" containerID="e23e1c4d7beb19256a4bcbc57058e9fe7cacf228a594d2a1555b250902a08819" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.709681 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e23e1c4d7beb19256a4bcbc57058e9fe7cacf228a594d2a1555b250902a08819"} err="failed to get container status \"e23e1c4d7beb19256a4bcbc57058e9fe7cacf228a594d2a1555b250902a08819\": rpc error: code = NotFound desc = could not find container \"e23e1c4d7beb19256a4bcbc57058e9fe7cacf228a594d2a1555b250902a08819\": container with ID starting with e23e1c4d7beb19256a4bcbc57058e9fe7cacf228a594d2a1555b250902a08819 not found: ID does not exist" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.713031 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.731517 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 19 
14:45:40 crc kubenswrapper[4861]: E0219 14:45:40.731869 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4b3247a-c12c-4ba2-a17a-10de0bf29f6b" containerName="cinder-api-log" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.731886 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4b3247a-c12c-4ba2-a17a-10de0bf29f6b" containerName="cinder-api-log" Feb 19 14:45:40 crc kubenswrapper[4861]: E0219 14:45:40.731901 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4b3247a-c12c-4ba2-a17a-10de0bf29f6b" containerName="cinder-api" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.731907 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4b3247a-c12c-4ba2-a17a-10de0bf29f6b" containerName="cinder-api" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.732224 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4b3247a-c12c-4ba2-a17a-10de0bf29f6b" containerName="cinder-api-log" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.732243 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4b3247a-c12c-4ba2-a17a-10de0bf29f6b" containerName="cinder-api" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.733453 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.737542 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.737774 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.737970 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.757665 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.785083 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/27466fa7-89f2-400e-9baa-0f05e1450feb-public-tls-certs\") pod \"cinder-api-0\" (UID: \"27466fa7-89f2-400e-9baa-0f05e1450feb\") " pod="openstack/cinder-api-0" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.785239 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27466fa7-89f2-400e-9baa-0f05e1450feb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"27466fa7-89f2-400e-9baa-0f05e1450feb\") " pod="openstack/cinder-api-0" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.785345 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27466fa7-89f2-400e-9baa-0f05e1450feb-config-data-custom\") pod \"cinder-api-0\" (UID: \"27466fa7-89f2-400e-9baa-0f05e1450feb\") " pod="openstack/cinder-api-0" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.785384 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/27466fa7-89f2-400e-9baa-0f05e1450feb-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"27466fa7-89f2-400e-9baa-0f05e1450feb\") " pod="openstack/cinder-api-0" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.785550 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27466fa7-89f2-400e-9baa-0f05e1450feb-config-data\") pod \"cinder-api-0\" (UID: \"27466fa7-89f2-400e-9baa-0f05e1450feb\") " pod="openstack/cinder-api-0" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.785574 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27466fa7-89f2-400e-9baa-0f05e1450feb-logs\") pod \"cinder-api-0\" (UID: \"27466fa7-89f2-400e-9baa-0f05e1450feb\") " pod="openstack/cinder-api-0" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.785616 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh28f\" (UniqueName: \"kubernetes.io/projected/27466fa7-89f2-400e-9baa-0f05e1450feb-kube-api-access-wh28f\") pod \"cinder-api-0\" (UID: \"27466fa7-89f2-400e-9baa-0f05e1450feb\") " pod="openstack/cinder-api-0" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.785730 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/27466fa7-89f2-400e-9baa-0f05e1450feb-etc-machine-id\") pod \"cinder-api-0\" (UID: \"27466fa7-89f2-400e-9baa-0f05e1450feb\") " pod="openstack/cinder-api-0" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.785805 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27466fa7-89f2-400e-9baa-0f05e1450feb-scripts\") pod \"cinder-api-0\" (UID: 
\"27466fa7-89f2-400e-9baa-0f05e1450feb\") " pod="openstack/cinder-api-0" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.887337 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27466fa7-89f2-400e-9baa-0f05e1450feb-config-data\") pod \"cinder-api-0\" (UID: \"27466fa7-89f2-400e-9baa-0f05e1450feb\") " pod="openstack/cinder-api-0" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.887389 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27466fa7-89f2-400e-9baa-0f05e1450feb-logs\") pod \"cinder-api-0\" (UID: \"27466fa7-89f2-400e-9baa-0f05e1450feb\") " pod="openstack/cinder-api-0" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.887430 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh28f\" (UniqueName: \"kubernetes.io/projected/27466fa7-89f2-400e-9baa-0f05e1450feb-kube-api-access-wh28f\") pod \"cinder-api-0\" (UID: \"27466fa7-89f2-400e-9baa-0f05e1450feb\") " pod="openstack/cinder-api-0" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.887476 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/27466fa7-89f2-400e-9baa-0f05e1450feb-etc-machine-id\") pod \"cinder-api-0\" (UID: \"27466fa7-89f2-400e-9baa-0f05e1450feb\") " pod="openstack/cinder-api-0" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.887511 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27466fa7-89f2-400e-9baa-0f05e1450feb-scripts\") pod \"cinder-api-0\" (UID: \"27466fa7-89f2-400e-9baa-0f05e1450feb\") " pod="openstack/cinder-api-0" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.887531 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/27466fa7-89f2-400e-9baa-0f05e1450feb-public-tls-certs\") pod \"cinder-api-0\" (UID: \"27466fa7-89f2-400e-9baa-0f05e1450feb\") " pod="openstack/cinder-api-0" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.887559 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27466fa7-89f2-400e-9baa-0f05e1450feb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"27466fa7-89f2-400e-9baa-0f05e1450feb\") " pod="openstack/cinder-api-0" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.887591 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27466fa7-89f2-400e-9baa-0f05e1450feb-config-data-custom\") pod \"cinder-api-0\" (UID: \"27466fa7-89f2-400e-9baa-0f05e1450feb\") " pod="openstack/cinder-api-0" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.887612 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/27466fa7-89f2-400e-9baa-0f05e1450feb-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"27466fa7-89f2-400e-9baa-0f05e1450feb\") " pod="openstack/cinder-api-0" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.887774 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/27466fa7-89f2-400e-9baa-0f05e1450feb-etc-machine-id\") pod \"cinder-api-0\" (UID: \"27466fa7-89f2-400e-9baa-0f05e1450feb\") " pod="openstack/cinder-api-0" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.887799 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27466fa7-89f2-400e-9baa-0f05e1450feb-logs\") pod \"cinder-api-0\" (UID: \"27466fa7-89f2-400e-9baa-0f05e1450feb\") " pod="openstack/cinder-api-0" Feb 19 14:45:40 crc kubenswrapper[4861]: 
I0219 14:45:40.893109 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/27466fa7-89f2-400e-9baa-0f05e1450feb-public-tls-certs\") pod \"cinder-api-0\" (UID: \"27466fa7-89f2-400e-9baa-0f05e1450feb\") " pod="openstack/cinder-api-0" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.893149 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27466fa7-89f2-400e-9baa-0f05e1450feb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"27466fa7-89f2-400e-9baa-0f05e1450feb\") " pod="openstack/cinder-api-0" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.893178 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27466fa7-89f2-400e-9baa-0f05e1450feb-config-data-custom\") pod \"cinder-api-0\" (UID: \"27466fa7-89f2-400e-9baa-0f05e1450feb\") " pod="openstack/cinder-api-0" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.899316 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/27466fa7-89f2-400e-9baa-0f05e1450feb-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"27466fa7-89f2-400e-9baa-0f05e1450feb\") " pod="openstack/cinder-api-0" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.903594 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27466fa7-89f2-400e-9baa-0f05e1450feb-config-data\") pod \"cinder-api-0\" (UID: \"27466fa7-89f2-400e-9baa-0f05e1450feb\") " pod="openstack/cinder-api-0" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.903966 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27466fa7-89f2-400e-9baa-0f05e1450feb-scripts\") pod \"cinder-api-0\" (UID: 
\"27466fa7-89f2-400e-9baa-0f05e1450feb\") " pod="openstack/cinder-api-0" Feb 19 14:45:40 crc kubenswrapper[4861]: I0219 14:45:40.907145 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh28f\" (UniqueName: \"kubernetes.io/projected/27466fa7-89f2-400e-9baa-0f05e1450feb-kube-api-access-wh28f\") pod \"cinder-api-0\" (UID: \"27466fa7-89f2-400e-9baa-0f05e1450feb\") " pod="openstack/cinder-api-0" Feb 19 14:45:41 crc kubenswrapper[4861]: I0219 14:45:41.065955 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 14:45:41 crc kubenswrapper[4861]: I0219 14:45:41.561888 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 14:45:41 crc kubenswrapper[4861]: I0219 14:45:41.634156 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"27466fa7-89f2-400e-9baa-0f05e1450feb","Type":"ContainerStarted","Data":"91ade7fe4b91481dd4b530aef74fb9b852314e46fd40e7af6f9d4c9a052387df"} Feb 19 14:45:41 crc kubenswrapper[4861]: I0219 14:45:41.993073 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4b3247a-c12c-4ba2-a17a-10de0bf29f6b" path="/var/lib/kubelet/pods/b4b3247a-c12c-4ba2-a17a-10de0bf29f6b/volumes" Feb 19 14:45:42 crc kubenswrapper[4861]: I0219 14:45:42.647808 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"27466fa7-89f2-400e-9baa-0f05e1450feb","Type":"ContainerStarted","Data":"d298628670e5de6664b1d05fb27c95cf32b0e83c05686c044b47efcff6c884a6"} Feb 19 14:45:43 crc kubenswrapper[4861]: I0219 14:45:43.658971 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"27466fa7-89f2-400e-9baa-0f05e1450feb","Type":"ContainerStarted","Data":"26f268ead9e61e62cf01b600e2264a5a4d0535a4fe2c5cd01b5d0fd6292c04e8"} Feb 19 14:45:43 crc kubenswrapper[4861]: I0219 14:45:43.659220 4861 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 19 14:45:43 crc kubenswrapper[4861]: I0219 14:45:43.689438 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.689395207 podStartE2EDuration="3.689395207s" podCreationTimestamp="2026-02-19 14:45:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:45:43.685649777 +0000 UTC m=+5758.346753005" watchObservedRunningTime="2026-02-19 14:45:43.689395207 +0000 UTC m=+5758.350498435" Feb 19 14:45:45 crc kubenswrapper[4861]: I0219 14:45:45.436086 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 19 14:45:45 crc kubenswrapper[4861]: I0219 14:45:45.503198 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 14:45:45 crc kubenswrapper[4861]: I0219 14:45:45.674626 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ff128094-c5a1-49e9-a84f-4caa894ef482" containerName="cinder-scheduler" containerID="cri-o://c16552007a21a0275babf99499b03b733a0cc1d2a327d406573f3ebffe2c80f0" gracePeriod=30 Feb 19 14:45:45 crc kubenswrapper[4861]: I0219 14:45:45.674800 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ff128094-c5a1-49e9-a84f-4caa894ef482" containerName="probe" containerID="cri-o://da07511ceca9a806a0dd7202f6ebe2d5494bc97e3c1a67f768474767ae32ffcc" gracePeriod=30 Feb 19 14:45:46 crc kubenswrapper[4861]: I0219 14:45:46.690569 4861 generic.go:334] "Generic (PLEG): container finished" podID="ff128094-c5a1-49e9-a84f-4caa894ef482" containerID="da07511ceca9a806a0dd7202f6ebe2d5494bc97e3c1a67f768474767ae32ffcc" exitCode=0 Feb 19 14:45:46 crc kubenswrapper[4861]: I0219 14:45:46.690695 4861 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff128094-c5a1-49e9-a84f-4caa894ef482","Type":"ContainerDied","Data":"da07511ceca9a806a0dd7202f6ebe2d5494bc97e3c1a67f768474767ae32ffcc"} Feb 19 14:45:47 crc kubenswrapper[4861]: I0219 14:45:47.583962 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 14:45:47 crc kubenswrapper[4861]: I0219 14:45:47.627720 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff128094-c5a1-49e9-a84f-4caa894ef482-combined-ca-bundle\") pod \"ff128094-c5a1-49e9-a84f-4caa894ef482\" (UID: \"ff128094-c5a1-49e9-a84f-4caa894ef482\") " Feb 19 14:45:47 crc kubenswrapper[4861]: I0219 14:45:47.627800 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff128094-c5a1-49e9-a84f-4caa894ef482-config-data\") pod \"ff128094-c5a1-49e9-a84f-4caa894ef482\" (UID: \"ff128094-c5a1-49e9-a84f-4caa894ef482\") " Feb 19 14:45:47 crc kubenswrapper[4861]: I0219 14:45:47.627902 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff128094-c5a1-49e9-a84f-4caa894ef482-scripts\") pod \"ff128094-c5a1-49e9-a84f-4caa894ef482\" (UID: \"ff128094-c5a1-49e9-a84f-4caa894ef482\") " Feb 19 14:45:47 crc kubenswrapper[4861]: I0219 14:45:47.627974 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff128094-c5a1-49e9-a84f-4caa894ef482-config-data-custom\") pod \"ff128094-c5a1-49e9-a84f-4caa894ef482\" (UID: \"ff128094-c5a1-49e9-a84f-4caa894ef482\") " Feb 19 14:45:47 crc kubenswrapper[4861]: I0219 14:45:47.628018 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/ff128094-c5a1-49e9-a84f-4caa894ef482-etc-machine-id\") pod \"ff128094-c5a1-49e9-a84f-4caa894ef482\" (UID: \"ff128094-c5a1-49e9-a84f-4caa894ef482\") " Feb 19 14:45:47 crc kubenswrapper[4861]: I0219 14:45:47.628036 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdptq\" (UniqueName: \"kubernetes.io/projected/ff128094-c5a1-49e9-a84f-4caa894ef482-kube-api-access-wdptq\") pod \"ff128094-c5a1-49e9-a84f-4caa894ef482\" (UID: \"ff128094-c5a1-49e9-a84f-4caa894ef482\") " Feb 19 14:45:47 crc kubenswrapper[4861]: I0219 14:45:47.629595 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff128094-c5a1-49e9-a84f-4caa894ef482-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ff128094-c5a1-49e9-a84f-4caa894ef482" (UID: "ff128094-c5a1-49e9-a84f-4caa894ef482"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 14:45:47 crc kubenswrapper[4861]: I0219 14:45:47.633091 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff128094-c5a1-49e9-a84f-4caa894ef482-kube-api-access-wdptq" (OuterVolumeSpecName: "kube-api-access-wdptq") pod "ff128094-c5a1-49e9-a84f-4caa894ef482" (UID: "ff128094-c5a1-49e9-a84f-4caa894ef482"). InnerVolumeSpecName "kube-api-access-wdptq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:45:47 crc kubenswrapper[4861]: I0219 14:45:47.633185 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff128094-c5a1-49e9-a84f-4caa894ef482-scripts" (OuterVolumeSpecName: "scripts") pod "ff128094-c5a1-49e9-a84f-4caa894ef482" (UID: "ff128094-c5a1-49e9-a84f-4caa894ef482"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:45:47 crc kubenswrapper[4861]: I0219 14:45:47.633249 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff128094-c5a1-49e9-a84f-4caa894ef482-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ff128094-c5a1-49e9-a84f-4caa894ef482" (UID: "ff128094-c5a1-49e9-a84f-4caa894ef482"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:45:47 crc kubenswrapper[4861]: I0219 14:45:47.684352 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff128094-c5a1-49e9-a84f-4caa894ef482-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff128094-c5a1-49e9-a84f-4caa894ef482" (UID: "ff128094-c5a1-49e9-a84f-4caa894ef482"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:45:47 crc kubenswrapper[4861]: I0219 14:45:47.699688 4861 generic.go:334] "Generic (PLEG): container finished" podID="ff128094-c5a1-49e9-a84f-4caa894ef482" containerID="c16552007a21a0275babf99499b03b733a0cc1d2a327d406573f3ebffe2c80f0" exitCode=0 Feb 19 14:45:47 crc kubenswrapper[4861]: I0219 14:45:47.699743 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff128094-c5a1-49e9-a84f-4caa894ef482","Type":"ContainerDied","Data":"c16552007a21a0275babf99499b03b733a0cc1d2a327d406573f3ebffe2c80f0"} Feb 19 14:45:47 crc kubenswrapper[4861]: I0219 14:45:47.699775 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff128094-c5a1-49e9-a84f-4caa894ef482","Type":"ContainerDied","Data":"709f79ae7d41cd447fe18a9ae1e9e1e4313febc501954531d32fdc7f59a7e56f"} Feb 19 14:45:47 crc kubenswrapper[4861]: I0219 14:45:47.699795 4861 scope.go:117] "RemoveContainer" containerID="da07511ceca9a806a0dd7202f6ebe2d5494bc97e3c1a67f768474767ae32ffcc" Feb 19 
14:45:47 crc kubenswrapper[4861]: I0219 14:45:47.699935 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 14:45:47 crc kubenswrapper[4861]: I0219 14:45:47.730912 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff128094-c5a1-49e9-a84f-4caa894ef482-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:45:47 crc kubenswrapper[4861]: I0219 14:45:47.730943 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff128094-c5a1-49e9-a84f-4caa894ef482-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:45:47 crc kubenswrapper[4861]: I0219 14:45:47.730952 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff128094-c5a1-49e9-a84f-4caa894ef482-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 14:45:47 crc kubenswrapper[4861]: I0219 14:45:47.730961 4861 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff128094-c5a1-49e9-a84f-4caa894ef482-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 14:45:47 crc kubenswrapper[4861]: I0219 14:45:47.730969 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdptq\" (UniqueName: \"kubernetes.io/projected/ff128094-c5a1-49e9-a84f-4caa894ef482-kube-api-access-wdptq\") on node \"crc\" DevicePath \"\"" Feb 19 14:45:47 crc kubenswrapper[4861]: I0219 14:45:47.748019 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff128094-c5a1-49e9-a84f-4caa894ef482-config-data" (OuterVolumeSpecName: "config-data") pod "ff128094-c5a1-49e9-a84f-4caa894ef482" (UID: "ff128094-c5a1-49e9-a84f-4caa894ef482"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:45:47 crc kubenswrapper[4861]: I0219 14:45:47.767977 4861 scope.go:117] "RemoveContainer" containerID="c16552007a21a0275babf99499b03b733a0cc1d2a327d406573f3ebffe2c80f0" Feb 19 14:45:47 crc kubenswrapper[4861]: I0219 14:45:47.800936 4861 scope.go:117] "RemoveContainer" containerID="da07511ceca9a806a0dd7202f6ebe2d5494bc97e3c1a67f768474767ae32ffcc" Feb 19 14:45:47 crc kubenswrapper[4861]: E0219 14:45:47.801829 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da07511ceca9a806a0dd7202f6ebe2d5494bc97e3c1a67f768474767ae32ffcc\": container with ID starting with da07511ceca9a806a0dd7202f6ebe2d5494bc97e3c1a67f768474767ae32ffcc not found: ID does not exist" containerID="da07511ceca9a806a0dd7202f6ebe2d5494bc97e3c1a67f768474767ae32ffcc" Feb 19 14:45:47 crc kubenswrapper[4861]: I0219 14:45:47.801874 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da07511ceca9a806a0dd7202f6ebe2d5494bc97e3c1a67f768474767ae32ffcc"} err="failed to get container status \"da07511ceca9a806a0dd7202f6ebe2d5494bc97e3c1a67f768474767ae32ffcc\": rpc error: code = NotFound desc = could not find container \"da07511ceca9a806a0dd7202f6ebe2d5494bc97e3c1a67f768474767ae32ffcc\": container with ID starting with da07511ceca9a806a0dd7202f6ebe2d5494bc97e3c1a67f768474767ae32ffcc not found: ID does not exist" Feb 19 14:45:47 crc kubenswrapper[4861]: I0219 14:45:47.801904 4861 scope.go:117] "RemoveContainer" containerID="c16552007a21a0275babf99499b03b733a0cc1d2a327d406573f3ebffe2c80f0" Feb 19 14:45:47 crc kubenswrapper[4861]: E0219 14:45:47.802377 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c16552007a21a0275babf99499b03b733a0cc1d2a327d406573f3ebffe2c80f0\": container with ID starting with 
c16552007a21a0275babf99499b03b733a0cc1d2a327d406573f3ebffe2c80f0 not found: ID does not exist" containerID="c16552007a21a0275babf99499b03b733a0cc1d2a327d406573f3ebffe2c80f0" Feb 19 14:45:47 crc kubenswrapper[4861]: I0219 14:45:47.802398 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c16552007a21a0275babf99499b03b733a0cc1d2a327d406573f3ebffe2c80f0"} err="failed to get container status \"c16552007a21a0275babf99499b03b733a0cc1d2a327d406573f3ebffe2c80f0\": rpc error: code = NotFound desc = could not find container \"c16552007a21a0275babf99499b03b733a0cc1d2a327d406573f3ebffe2c80f0\": container with ID starting with c16552007a21a0275babf99499b03b733a0cc1d2a327d406573f3ebffe2c80f0 not found: ID does not exist" Feb 19 14:45:47 crc kubenswrapper[4861]: I0219 14:45:47.832876 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff128094-c5a1-49e9-a84f-4caa894ef482-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:45:48 crc kubenswrapper[4861]: I0219 14:45:48.072142 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 14:45:48 crc kubenswrapper[4861]: I0219 14:45:48.094339 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 14:45:48 crc kubenswrapper[4861]: I0219 14:45:48.104067 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 14:45:48 crc kubenswrapper[4861]: E0219 14:45:48.104586 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff128094-c5a1-49e9-a84f-4caa894ef482" containerName="cinder-scheduler" Feb 19 14:45:48 crc kubenswrapper[4861]: I0219 14:45:48.104609 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff128094-c5a1-49e9-a84f-4caa894ef482" containerName="cinder-scheduler" Feb 19 14:45:48 crc kubenswrapper[4861]: E0219 14:45:48.104634 4861 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="ff128094-c5a1-49e9-a84f-4caa894ef482" containerName="probe" Feb 19 14:45:48 crc kubenswrapper[4861]: I0219 14:45:48.104643 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff128094-c5a1-49e9-a84f-4caa894ef482" containerName="probe" Feb 19 14:45:48 crc kubenswrapper[4861]: I0219 14:45:48.104878 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff128094-c5a1-49e9-a84f-4caa894ef482" containerName="cinder-scheduler" Feb 19 14:45:48 crc kubenswrapper[4861]: I0219 14:45:48.104902 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff128094-c5a1-49e9-a84f-4caa894ef482" containerName="probe" Feb 19 14:45:48 crc kubenswrapper[4861]: I0219 14:45:48.106044 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 14:45:48 crc kubenswrapper[4861]: I0219 14:45:48.110748 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 14:45:48 crc kubenswrapper[4861]: I0219 14:45:48.110803 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 19 14:45:48 crc kubenswrapper[4861]: I0219 14:45:48.140214 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64c54201-87f2-4db9-8ce9-e1023ac576b1-scripts\") pod \"cinder-scheduler-0\" (UID: \"64c54201-87f2-4db9-8ce9-e1023ac576b1\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:48 crc kubenswrapper[4861]: I0219 14:45:48.140314 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c54201-87f2-4db9-8ce9-e1023ac576b1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"64c54201-87f2-4db9-8ce9-e1023ac576b1\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:48 crc kubenswrapper[4861]: I0219 14:45:48.140364 4861 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64c54201-87f2-4db9-8ce9-e1023ac576b1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"64c54201-87f2-4db9-8ce9-e1023ac576b1\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:48 crc kubenswrapper[4861]: I0219 14:45:48.140386 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjnvx\" (UniqueName: \"kubernetes.io/projected/64c54201-87f2-4db9-8ce9-e1023ac576b1-kube-api-access-xjnvx\") pod \"cinder-scheduler-0\" (UID: \"64c54201-87f2-4db9-8ce9-e1023ac576b1\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:48 crc kubenswrapper[4861]: I0219 14:45:48.140527 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c54201-87f2-4db9-8ce9-e1023ac576b1-config-data\") pod \"cinder-scheduler-0\" (UID: \"64c54201-87f2-4db9-8ce9-e1023ac576b1\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:48 crc kubenswrapper[4861]: I0219 14:45:48.140619 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64c54201-87f2-4db9-8ce9-e1023ac576b1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"64c54201-87f2-4db9-8ce9-e1023ac576b1\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:48 crc kubenswrapper[4861]: I0219 14:45:48.242329 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c54201-87f2-4db9-8ce9-e1023ac576b1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"64c54201-87f2-4db9-8ce9-e1023ac576b1\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:48 crc kubenswrapper[4861]: I0219 14:45:48.242392 4861 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64c54201-87f2-4db9-8ce9-e1023ac576b1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"64c54201-87f2-4db9-8ce9-e1023ac576b1\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:48 crc kubenswrapper[4861]: I0219 14:45:48.242413 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjnvx\" (UniqueName: \"kubernetes.io/projected/64c54201-87f2-4db9-8ce9-e1023ac576b1-kube-api-access-xjnvx\") pod \"cinder-scheduler-0\" (UID: \"64c54201-87f2-4db9-8ce9-e1023ac576b1\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:48 crc kubenswrapper[4861]: I0219 14:45:48.242474 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c54201-87f2-4db9-8ce9-e1023ac576b1-config-data\") pod \"cinder-scheduler-0\" (UID: \"64c54201-87f2-4db9-8ce9-e1023ac576b1\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:48 crc kubenswrapper[4861]: I0219 14:45:48.242524 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64c54201-87f2-4db9-8ce9-e1023ac576b1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"64c54201-87f2-4db9-8ce9-e1023ac576b1\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:48 crc kubenswrapper[4861]: I0219 14:45:48.242552 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64c54201-87f2-4db9-8ce9-e1023ac576b1-scripts\") pod \"cinder-scheduler-0\" (UID: \"64c54201-87f2-4db9-8ce9-e1023ac576b1\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:48 crc kubenswrapper[4861]: I0219 14:45:48.243759 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64c54201-87f2-4db9-8ce9-e1023ac576b1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"64c54201-87f2-4db9-8ce9-e1023ac576b1\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:48 crc kubenswrapper[4861]: I0219 14:45:48.246680 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c54201-87f2-4db9-8ce9-e1023ac576b1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"64c54201-87f2-4db9-8ce9-e1023ac576b1\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:48 crc kubenswrapper[4861]: I0219 14:45:48.247172 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64c54201-87f2-4db9-8ce9-e1023ac576b1-scripts\") pod \"cinder-scheduler-0\" (UID: \"64c54201-87f2-4db9-8ce9-e1023ac576b1\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:48 crc kubenswrapper[4861]: I0219 14:45:48.247564 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c54201-87f2-4db9-8ce9-e1023ac576b1-config-data\") pod \"cinder-scheduler-0\" (UID: \"64c54201-87f2-4db9-8ce9-e1023ac576b1\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:48 crc kubenswrapper[4861]: I0219 14:45:48.247685 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64c54201-87f2-4db9-8ce9-e1023ac576b1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"64c54201-87f2-4db9-8ce9-e1023ac576b1\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:48 crc kubenswrapper[4861]: I0219 14:45:48.264137 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjnvx\" (UniqueName: \"kubernetes.io/projected/64c54201-87f2-4db9-8ce9-e1023ac576b1-kube-api-access-xjnvx\") pod \"cinder-scheduler-0\" (UID: \"64c54201-87f2-4db9-8ce9-e1023ac576b1\") " pod="openstack/cinder-scheduler-0" Feb 19 14:45:48 crc kubenswrapper[4861]: I0219 14:45:48.441103 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 14:45:48 crc kubenswrapper[4861]: I0219 14:45:48.936542 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 14:45:48 crc kubenswrapper[4861]: W0219 14:45:48.946877 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64c54201_87f2_4db9_8ce9_e1023ac576b1.slice/crio-67538775fa161678d3bc0ce00e30f151eb36b37ed503c05f01088cf7ac21a557 WatchSource:0}: Error finding container 67538775fa161678d3bc0ce00e30f151eb36b37ed503c05f01088cf7ac21a557: Status 404 returned error can't find the container with id 67538775fa161678d3bc0ce00e30f151eb36b37ed503c05f01088cf7ac21a557 Feb 19 14:45:49 crc kubenswrapper[4861]: I0219 14:45:49.729210 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"64c54201-87f2-4db9-8ce9-e1023ac576b1","Type":"ContainerStarted","Data":"abe500bbb1f0506542890b5c470a52620a2d63f0cd06cad182080284dc777b81"} Feb 19 14:45:49 crc kubenswrapper[4861]: I0219 14:45:49.729473 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"64c54201-87f2-4db9-8ce9-e1023ac576b1","Type":"ContainerStarted","Data":"67538775fa161678d3bc0ce00e30f151eb36b37ed503c05f01088cf7ac21a557"} Feb 19 14:45:50 crc kubenswrapper[4861]: I0219 14:45:50.008355 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff128094-c5a1-49e9-a84f-4caa894ef482" path="/var/lib/kubelet/pods/ff128094-c5a1-49e9-a84f-4caa894ef482/volumes" Feb 19 14:45:50 crc kubenswrapper[4861]: I0219 14:45:50.739959 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"64c54201-87f2-4db9-8ce9-e1023ac576b1","Type":"ContainerStarted","Data":"1d0f0e252d0bbe30b7042ea2bc2974d65e20599deced434182a56e6b2b03bc1e"} Feb 19 14:45:50 crc kubenswrapper[4861]: I0219 14:45:50.772014 4861 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.771993617 podStartE2EDuration="2.771993617s" podCreationTimestamp="2026-02-19 14:45:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:45:50.762573625 +0000 UTC m=+5765.423676893" watchObservedRunningTime="2026-02-19 14:45:50.771993617 +0000 UTC m=+5765.433096845" Feb 19 14:45:52 crc kubenswrapper[4861]: I0219 14:45:52.771804 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 19 14:45:53 crc kubenswrapper[4861]: I0219 14:45:53.441687 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 19 14:45:58 crc kubenswrapper[4861]: I0219 14:45:58.642202 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 19 14:45:59 crc kubenswrapper[4861]: I0219 14:45:59.710937 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-4647b"] Feb 19 14:45:59 crc kubenswrapper[4861]: I0219 14:45:59.712678 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-4647b" Feb 19 14:45:59 crc kubenswrapper[4861]: I0219 14:45:59.732276 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-4647b"] Feb 19 14:45:59 crc kubenswrapper[4861]: I0219 14:45:59.788406 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e42b279-7a4e-4c32-a97a-092a2812d883-operator-scripts\") pod \"glance-db-create-4647b\" (UID: \"0e42b279-7a4e-4c32-a97a-092a2812d883\") " pod="openstack/glance-db-create-4647b" Feb 19 14:45:59 crc kubenswrapper[4861]: I0219 14:45:59.788530 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtd6q\" (UniqueName: \"kubernetes.io/projected/0e42b279-7a4e-4c32-a97a-092a2812d883-kube-api-access-qtd6q\") pod \"glance-db-create-4647b\" (UID: \"0e42b279-7a4e-4c32-a97a-092a2812d883\") " pod="openstack/glance-db-create-4647b" Feb 19 14:45:59 crc kubenswrapper[4861]: I0219 14:45:59.812610 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-3a12-account-create-update-ttzsw"] Feb 19 14:45:59 crc kubenswrapper[4861]: I0219 14:45:59.814254 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-3a12-account-create-update-ttzsw" Feb 19 14:45:59 crc kubenswrapper[4861]: I0219 14:45:59.817701 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 19 14:45:59 crc kubenswrapper[4861]: I0219 14:45:59.825343 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3a12-account-create-update-ttzsw"] Feb 19 14:45:59 crc kubenswrapper[4861]: I0219 14:45:59.890586 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e42b279-7a4e-4c32-a97a-092a2812d883-operator-scripts\") pod \"glance-db-create-4647b\" (UID: \"0e42b279-7a4e-4c32-a97a-092a2812d883\") " pod="openstack/glance-db-create-4647b" Feb 19 14:45:59 crc kubenswrapper[4861]: I0219 14:45:59.890647 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed2990bb-b686-400b-bddf-5d4bd0e0540d-operator-scripts\") pod \"glance-3a12-account-create-update-ttzsw\" (UID: \"ed2990bb-b686-400b-bddf-5d4bd0e0540d\") " pod="openstack/glance-3a12-account-create-update-ttzsw" Feb 19 14:45:59 crc kubenswrapper[4861]: I0219 14:45:59.890703 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtd6q\" (UniqueName: \"kubernetes.io/projected/0e42b279-7a4e-4c32-a97a-092a2812d883-kube-api-access-qtd6q\") pod \"glance-db-create-4647b\" (UID: \"0e42b279-7a4e-4c32-a97a-092a2812d883\") " pod="openstack/glance-db-create-4647b" Feb 19 14:45:59 crc kubenswrapper[4861]: I0219 14:45:59.890722 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7sg5\" (UniqueName: \"kubernetes.io/projected/ed2990bb-b686-400b-bddf-5d4bd0e0540d-kube-api-access-z7sg5\") pod \"glance-3a12-account-create-update-ttzsw\" (UID: 
\"ed2990bb-b686-400b-bddf-5d4bd0e0540d\") " pod="openstack/glance-3a12-account-create-update-ttzsw" Feb 19 14:45:59 crc kubenswrapper[4861]: I0219 14:45:59.891367 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e42b279-7a4e-4c32-a97a-092a2812d883-operator-scripts\") pod \"glance-db-create-4647b\" (UID: \"0e42b279-7a4e-4c32-a97a-092a2812d883\") " pod="openstack/glance-db-create-4647b" Feb 19 14:45:59 crc kubenswrapper[4861]: I0219 14:45:59.911088 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtd6q\" (UniqueName: \"kubernetes.io/projected/0e42b279-7a4e-4c32-a97a-092a2812d883-kube-api-access-qtd6q\") pod \"glance-db-create-4647b\" (UID: \"0e42b279-7a4e-4c32-a97a-092a2812d883\") " pod="openstack/glance-db-create-4647b" Feb 19 14:45:59 crc kubenswrapper[4861]: I0219 14:45:59.991959 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7sg5\" (UniqueName: \"kubernetes.io/projected/ed2990bb-b686-400b-bddf-5d4bd0e0540d-kube-api-access-z7sg5\") pod \"glance-3a12-account-create-update-ttzsw\" (UID: \"ed2990bb-b686-400b-bddf-5d4bd0e0540d\") " pod="openstack/glance-3a12-account-create-update-ttzsw" Feb 19 14:45:59 crc kubenswrapper[4861]: I0219 14:45:59.992097 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed2990bb-b686-400b-bddf-5d4bd0e0540d-operator-scripts\") pod \"glance-3a12-account-create-update-ttzsw\" (UID: \"ed2990bb-b686-400b-bddf-5d4bd0e0540d\") " pod="openstack/glance-3a12-account-create-update-ttzsw" Feb 19 14:45:59 crc kubenswrapper[4861]: I0219 14:45:59.992778 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed2990bb-b686-400b-bddf-5d4bd0e0540d-operator-scripts\") pod \"glance-3a12-account-create-update-ttzsw\" 
(UID: \"ed2990bb-b686-400b-bddf-5d4bd0e0540d\") " pod="openstack/glance-3a12-account-create-update-ttzsw" Feb 19 14:46:00 crc kubenswrapper[4861]: I0219 14:46:00.008234 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7sg5\" (UniqueName: \"kubernetes.io/projected/ed2990bb-b686-400b-bddf-5d4bd0e0540d-kube-api-access-z7sg5\") pod \"glance-3a12-account-create-update-ttzsw\" (UID: \"ed2990bb-b686-400b-bddf-5d4bd0e0540d\") " pod="openstack/glance-3a12-account-create-update-ttzsw" Feb 19 14:46:00 crc kubenswrapper[4861]: I0219 14:46:00.048545 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4647b" Feb 19 14:46:00 crc kubenswrapper[4861]: I0219 14:46:00.128261 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3a12-account-create-update-ttzsw" Feb 19 14:46:00 crc kubenswrapper[4861]: I0219 14:46:00.479478 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-4647b"] Feb 19 14:46:00 crc kubenswrapper[4861]: I0219 14:46:00.593573 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3a12-account-create-update-ttzsw"] Feb 19 14:46:00 crc kubenswrapper[4861]: W0219 14:46:00.595984 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded2990bb_b686_400b_bddf_5d4bd0e0540d.slice/crio-81f553e13e5493051d3aefe16ff81e01f76b1462e3d5c6c8ce0ff905e3861e5d WatchSource:0}: Error finding container 81f553e13e5493051d3aefe16ff81e01f76b1462e3d5c6c8ce0ff905e3861e5d: Status 404 returned error can't find the container with id 81f553e13e5493051d3aefe16ff81e01f76b1462e3d5c6c8ce0ff905e3861e5d Feb 19 14:46:00 crc kubenswrapper[4861]: I0219 14:46:00.866036 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3a12-account-create-update-ttzsw" 
event={"ID":"ed2990bb-b686-400b-bddf-5d4bd0e0540d","Type":"ContainerStarted","Data":"368b35b799fba08bbb76bba5e4694653585ceb0527f589f4d91887b0e2cbfdfe"} Feb 19 14:46:00 crc kubenswrapper[4861]: I0219 14:46:00.866240 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3a12-account-create-update-ttzsw" event={"ID":"ed2990bb-b686-400b-bddf-5d4bd0e0540d","Type":"ContainerStarted","Data":"81f553e13e5493051d3aefe16ff81e01f76b1462e3d5c6c8ce0ff905e3861e5d"} Feb 19 14:46:00 crc kubenswrapper[4861]: I0219 14:46:00.868209 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4647b" event={"ID":"0e42b279-7a4e-4c32-a97a-092a2812d883","Type":"ContainerStarted","Data":"12d16936ae933e2248a1dc3f68ce378fc5410d009a1d5699d9af48641ecfc069"} Feb 19 14:46:00 crc kubenswrapper[4861]: I0219 14:46:00.868235 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4647b" event={"ID":"0e42b279-7a4e-4c32-a97a-092a2812d883","Type":"ContainerStarted","Data":"9c7da0e2777df4d3dbdb86eac916a2fa61d0ae1295f80652608142df3a71236f"} Feb 19 14:46:00 crc kubenswrapper[4861]: I0219 14:46:00.894943 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-3a12-account-create-update-ttzsw" podStartSLOduration=1.894924444 podStartE2EDuration="1.894924444s" podCreationTimestamp="2026-02-19 14:45:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:46:00.892924711 +0000 UTC m=+5775.554027979" watchObservedRunningTime="2026-02-19 14:46:00.894924444 +0000 UTC m=+5775.556027672" Feb 19 14:46:00 crc kubenswrapper[4861]: I0219 14:46:00.918544 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-4647b" podStartSLOduration=1.918517187 podStartE2EDuration="1.918517187s" podCreationTimestamp="2026-02-19 14:45:59 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:46:00.910991865 +0000 UTC m=+5775.572095093" watchObservedRunningTime="2026-02-19 14:46:00.918517187 +0000 UTC m=+5775.579620455" Feb 19 14:46:01 crc kubenswrapper[4861]: I0219 14:46:01.879693 4861 generic.go:334] "Generic (PLEG): container finished" podID="ed2990bb-b686-400b-bddf-5d4bd0e0540d" containerID="368b35b799fba08bbb76bba5e4694653585ceb0527f589f4d91887b0e2cbfdfe" exitCode=0 Feb 19 14:46:01 crc kubenswrapper[4861]: I0219 14:46:01.879762 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3a12-account-create-update-ttzsw" event={"ID":"ed2990bb-b686-400b-bddf-5d4bd0e0540d","Type":"ContainerDied","Data":"368b35b799fba08bbb76bba5e4694653585ceb0527f589f4d91887b0e2cbfdfe"} Feb 19 14:46:01 crc kubenswrapper[4861]: I0219 14:46:01.883413 4861 generic.go:334] "Generic (PLEG): container finished" podID="0e42b279-7a4e-4c32-a97a-092a2812d883" containerID="12d16936ae933e2248a1dc3f68ce378fc5410d009a1d5699d9af48641ecfc069" exitCode=0 Feb 19 14:46:01 crc kubenswrapper[4861]: I0219 14:46:01.883483 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4647b" event={"ID":"0e42b279-7a4e-4c32-a97a-092a2812d883","Type":"ContainerDied","Data":"12d16936ae933e2248a1dc3f68ce378fc5410d009a1d5699d9af48641ecfc069"} Feb 19 14:46:03 crc kubenswrapper[4861]: I0219 14:46:03.307511 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-4647b" Feb 19 14:46:03 crc kubenswrapper[4861]: I0219 14:46:03.359413 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtd6q\" (UniqueName: \"kubernetes.io/projected/0e42b279-7a4e-4c32-a97a-092a2812d883-kube-api-access-qtd6q\") pod \"0e42b279-7a4e-4c32-a97a-092a2812d883\" (UID: \"0e42b279-7a4e-4c32-a97a-092a2812d883\") " Feb 19 14:46:03 crc kubenswrapper[4861]: I0219 14:46:03.359506 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e42b279-7a4e-4c32-a97a-092a2812d883-operator-scripts\") pod \"0e42b279-7a4e-4c32-a97a-092a2812d883\" (UID: \"0e42b279-7a4e-4c32-a97a-092a2812d883\") " Feb 19 14:46:03 crc kubenswrapper[4861]: I0219 14:46:03.361249 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e42b279-7a4e-4c32-a97a-092a2812d883-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0e42b279-7a4e-4c32-a97a-092a2812d883" (UID: "0e42b279-7a4e-4c32-a97a-092a2812d883"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:46:03 crc kubenswrapper[4861]: I0219 14:46:03.366373 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e42b279-7a4e-4c32-a97a-092a2812d883-kube-api-access-qtd6q" (OuterVolumeSpecName: "kube-api-access-qtd6q") pod "0e42b279-7a4e-4c32-a97a-092a2812d883" (UID: "0e42b279-7a4e-4c32-a97a-092a2812d883"). InnerVolumeSpecName "kube-api-access-qtd6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:46:03 crc kubenswrapper[4861]: I0219 14:46:03.423076 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-3a12-account-create-update-ttzsw" Feb 19 14:46:03 crc kubenswrapper[4861]: I0219 14:46:03.462465 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtd6q\" (UniqueName: \"kubernetes.io/projected/0e42b279-7a4e-4c32-a97a-092a2812d883-kube-api-access-qtd6q\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:03 crc kubenswrapper[4861]: I0219 14:46:03.462495 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e42b279-7a4e-4c32-a97a-092a2812d883-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:03 crc kubenswrapper[4861]: I0219 14:46:03.564038 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed2990bb-b686-400b-bddf-5d4bd0e0540d-operator-scripts\") pod \"ed2990bb-b686-400b-bddf-5d4bd0e0540d\" (UID: \"ed2990bb-b686-400b-bddf-5d4bd0e0540d\") " Feb 19 14:46:03 crc kubenswrapper[4861]: I0219 14:46:03.564112 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7sg5\" (UniqueName: \"kubernetes.io/projected/ed2990bb-b686-400b-bddf-5d4bd0e0540d-kube-api-access-z7sg5\") pod \"ed2990bb-b686-400b-bddf-5d4bd0e0540d\" (UID: \"ed2990bb-b686-400b-bddf-5d4bd0e0540d\") " Feb 19 14:46:03 crc kubenswrapper[4861]: I0219 14:46:03.566948 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed2990bb-b686-400b-bddf-5d4bd0e0540d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ed2990bb-b686-400b-bddf-5d4bd0e0540d" (UID: "ed2990bb-b686-400b-bddf-5d4bd0e0540d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:46:03 crc kubenswrapper[4861]: I0219 14:46:03.570470 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed2990bb-b686-400b-bddf-5d4bd0e0540d-kube-api-access-z7sg5" (OuterVolumeSpecName: "kube-api-access-z7sg5") pod "ed2990bb-b686-400b-bddf-5d4bd0e0540d" (UID: "ed2990bb-b686-400b-bddf-5d4bd0e0540d"). InnerVolumeSpecName "kube-api-access-z7sg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:46:03 crc kubenswrapper[4861]: I0219 14:46:03.666909 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed2990bb-b686-400b-bddf-5d4bd0e0540d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:03 crc kubenswrapper[4861]: I0219 14:46:03.666946 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7sg5\" (UniqueName: \"kubernetes.io/projected/ed2990bb-b686-400b-bddf-5d4bd0e0540d-kube-api-access-z7sg5\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:03 crc kubenswrapper[4861]: I0219 14:46:03.760142 4861 scope.go:117] "RemoveContainer" containerID="5d8f659b8cdb944567b3d3ad2fdb465a5d2a23eb4ec599744d1e124836cca6d0" Feb 19 14:46:03 crc kubenswrapper[4861]: I0219 14:46:03.791405 4861 scope.go:117] "RemoveContainer" containerID="26e1ec544c1003d5dfc3cb8e5c17e9edc65a625143a299d4552fc87ac9c4b6d9" Feb 19 14:46:03 crc kubenswrapper[4861]: I0219 14:46:03.834609 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 14:46:03 crc kubenswrapper[4861]: I0219 14:46:03.834768 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" 
podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 14:46:03 crc kubenswrapper[4861]: I0219 14:46:03.919520 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3a12-account-create-update-ttzsw" Feb 19 14:46:03 crc kubenswrapper[4861]: I0219 14:46:03.919521 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3a12-account-create-update-ttzsw" event={"ID":"ed2990bb-b686-400b-bddf-5d4bd0e0540d","Type":"ContainerDied","Data":"81f553e13e5493051d3aefe16ff81e01f76b1462e3d5c6c8ce0ff905e3861e5d"} Feb 19 14:46:03 crc kubenswrapper[4861]: I0219 14:46:03.919681 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81f553e13e5493051d3aefe16ff81e01f76b1462e3d5c6c8ce0ff905e3861e5d" Feb 19 14:46:03 crc kubenswrapper[4861]: I0219 14:46:03.931903 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4647b" event={"ID":"0e42b279-7a4e-4c32-a97a-092a2812d883","Type":"ContainerDied","Data":"9c7da0e2777df4d3dbdb86eac916a2fa61d0ae1295f80652608142df3a71236f"} Feb 19 14:46:03 crc kubenswrapper[4861]: I0219 14:46:03.931965 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c7da0e2777df4d3dbdb86eac916a2fa61d0ae1295f80652608142df3a71236f" Feb 19 14:46:03 crc kubenswrapper[4861]: I0219 14:46:03.931970 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-4647b" Feb 19 14:46:05 crc kubenswrapper[4861]: I0219 14:46:05.049619 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-dx6qp"] Feb 19 14:46:05 crc kubenswrapper[4861]: E0219 14:46:05.050536 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e42b279-7a4e-4c32-a97a-092a2812d883" containerName="mariadb-database-create" Feb 19 14:46:05 crc kubenswrapper[4861]: I0219 14:46:05.050559 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e42b279-7a4e-4c32-a97a-092a2812d883" containerName="mariadb-database-create" Feb 19 14:46:05 crc kubenswrapper[4861]: E0219 14:46:05.050590 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed2990bb-b686-400b-bddf-5d4bd0e0540d" containerName="mariadb-account-create-update" Feb 19 14:46:05 crc kubenswrapper[4861]: I0219 14:46:05.050602 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed2990bb-b686-400b-bddf-5d4bd0e0540d" containerName="mariadb-account-create-update" Feb 19 14:46:05 crc kubenswrapper[4861]: I0219 14:46:05.050882 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed2990bb-b686-400b-bddf-5d4bd0e0540d" containerName="mariadb-account-create-update" Feb 19 14:46:05 crc kubenswrapper[4861]: I0219 14:46:05.050928 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e42b279-7a4e-4c32-a97a-092a2812d883" containerName="mariadb-database-create" Feb 19 14:46:05 crc kubenswrapper[4861]: I0219 14:46:05.051766 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-dx6qp" Feb 19 14:46:05 crc kubenswrapper[4861]: I0219 14:46:05.055312 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-t7qfz" Feb 19 14:46:05 crc kubenswrapper[4861]: I0219 14:46:05.055545 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 19 14:46:05 crc kubenswrapper[4861]: I0219 14:46:05.076077 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-dx6qp"] Feb 19 14:46:05 crc kubenswrapper[4861]: I0219 14:46:05.213218 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee47b3de-49e7-4496-9c6f-3ddcebf5e933-config-data\") pod \"glance-db-sync-dx6qp\" (UID: \"ee47b3de-49e7-4496-9c6f-3ddcebf5e933\") " pod="openstack/glance-db-sync-dx6qp" Feb 19 14:46:05 crc kubenswrapper[4861]: I0219 14:46:05.213339 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee47b3de-49e7-4496-9c6f-3ddcebf5e933-combined-ca-bundle\") pod \"glance-db-sync-dx6qp\" (UID: \"ee47b3de-49e7-4496-9c6f-3ddcebf5e933\") " pod="openstack/glance-db-sync-dx6qp" Feb 19 14:46:05 crc kubenswrapper[4861]: I0219 14:46:05.213736 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ee47b3de-49e7-4496-9c6f-3ddcebf5e933-db-sync-config-data\") pod \"glance-db-sync-dx6qp\" (UID: \"ee47b3de-49e7-4496-9c6f-3ddcebf5e933\") " pod="openstack/glance-db-sync-dx6qp" Feb 19 14:46:05 crc kubenswrapper[4861]: I0219 14:46:05.213828 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt2vk\" (UniqueName: 
\"kubernetes.io/projected/ee47b3de-49e7-4496-9c6f-3ddcebf5e933-kube-api-access-pt2vk\") pod \"glance-db-sync-dx6qp\" (UID: \"ee47b3de-49e7-4496-9c6f-3ddcebf5e933\") " pod="openstack/glance-db-sync-dx6qp" Feb 19 14:46:05 crc kubenswrapper[4861]: I0219 14:46:05.315852 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee47b3de-49e7-4496-9c6f-3ddcebf5e933-combined-ca-bundle\") pod \"glance-db-sync-dx6qp\" (UID: \"ee47b3de-49e7-4496-9c6f-3ddcebf5e933\") " pod="openstack/glance-db-sync-dx6qp" Feb 19 14:46:05 crc kubenswrapper[4861]: I0219 14:46:05.316011 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ee47b3de-49e7-4496-9c6f-3ddcebf5e933-db-sync-config-data\") pod \"glance-db-sync-dx6qp\" (UID: \"ee47b3de-49e7-4496-9c6f-3ddcebf5e933\") " pod="openstack/glance-db-sync-dx6qp" Feb 19 14:46:05 crc kubenswrapper[4861]: I0219 14:46:05.316047 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt2vk\" (UniqueName: \"kubernetes.io/projected/ee47b3de-49e7-4496-9c6f-3ddcebf5e933-kube-api-access-pt2vk\") pod \"glance-db-sync-dx6qp\" (UID: \"ee47b3de-49e7-4496-9c6f-3ddcebf5e933\") " pod="openstack/glance-db-sync-dx6qp" Feb 19 14:46:05 crc kubenswrapper[4861]: I0219 14:46:05.316084 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee47b3de-49e7-4496-9c6f-3ddcebf5e933-config-data\") pod \"glance-db-sync-dx6qp\" (UID: \"ee47b3de-49e7-4496-9c6f-3ddcebf5e933\") " pod="openstack/glance-db-sync-dx6qp" Feb 19 14:46:05 crc kubenswrapper[4861]: I0219 14:46:05.321634 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ee47b3de-49e7-4496-9c6f-3ddcebf5e933-db-sync-config-data\") pod \"glance-db-sync-dx6qp\" 
(UID: \"ee47b3de-49e7-4496-9c6f-3ddcebf5e933\") " pod="openstack/glance-db-sync-dx6qp" Feb 19 14:46:05 crc kubenswrapper[4861]: I0219 14:46:05.326031 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee47b3de-49e7-4496-9c6f-3ddcebf5e933-combined-ca-bundle\") pod \"glance-db-sync-dx6qp\" (UID: \"ee47b3de-49e7-4496-9c6f-3ddcebf5e933\") " pod="openstack/glance-db-sync-dx6qp" Feb 19 14:46:05 crc kubenswrapper[4861]: I0219 14:46:05.345850 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee47b3de-49e7-4496-9c6f-3ddcebf5e933-config-data\") pod \"glance-db-sync-dx6qp\" (UID: \"ee47b3de-49e7-4496-9c6f-3ddcebf5e933\") " pod="openstack/glance-db-sync-dx6qp" Feb 19 14:46:05 crc kubenswrapper[4861]: I0219 14:46:05.349295 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt2vk\" (UniqueName: \"kubernetes.io/projected/ee47b3de-49e7-4496-9c6f-3ddcebf5e933-kube-api-access-pt2vk\") pod \"glance-db-sync-dx6qp\" (UID: \"ee47b3de-49e7-4496-9c6f-3ddcebf5e933\") " pod="openstack/glance-db-sync-dx6qp" Feb 19 14:46:05 crc kubenswrapper[4861]: I0219 14:46:05.390664 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-dx6qp" Feb 19 14:46:05 crc kubenswrapper[4861]: I0219 14:46:05.950698 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-dx6qp"] Feb 19 14:46:06 crc kubenswrapper[4861]: I0219 14:46:06.962980 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dx6qp" event={"ID":"ee47b3de-49e7-4496-9c6f-3ddcebf5e933","Type":"ContainerStarted","Data":"523df994df4f0738f489e91b8585f738e3e6a1dffa3d7d9e2468c7dcb0679ae8"} Feb 19 14:46:06 crc kubenswrapper[4861]: I0219 14:46:06.964676 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dx6qp" event={"ID":"ee47b3de-49e7-4496-9c6f-3ddcebf5e933","Type":"ContainerStarted","Data":"3e5944ae8962f225f9153923f7cee6d292a5817f684e061171bb6c3255c67941"} Feb 19 14:46:06 crc kubenswrapper[4861]: I0219 14:46:06.988033 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-dx6qp" podStartSLOduration=1.98798878 podStartE2EDuration="1.98798878s" podCreationTimestamp="2026-02-19 14:46:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:46:06.978136686 +0000 UTC m=+5781.639239924" watchObservedRunningTime="2026-02-19 14:46:06.98798878 +0000 UTC m=+5781.649092018" Feb 19 14:46:10 crc kubenswrapper[4861]: I0219 14:46:10.014279 4861 generic.go:334] "Generic (PLEG): container finished" podID="ee47b3de-49e7-4496-9c6f-3ddcebf5e933" containerID="523df994df4f0738f489e91b8585f738e3e6a1dffa3d7d9e2468c7dcb0679ae8" exitCode=0 Feb 19 14:46:10 crc kubenswrapper[4861]: I0219 14:46:10.014399 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dx6qp" event={"ID":"ee47b3de-49e7-4496-9c6f-3ddcebf5e933","Type":"ContainerDied","Data":"523df994df4f0738f489e91b8585f738e3e6a1dffa3d7d9e2468c7dcb0679ae8"} Feb 19 14:46:11 crc kubenswrapper[4861]: I0219 
14:46:11.562780 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-dx6qp" Feb 19 14:46:11 crc kubenswrapper[4861]: I0219 14:46:11.650952 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ee47b3de-49e7-4496-9c6f-3ddcebf5e933-db-sync-config-data\") pod \"ee47b3de-49e7-4496-9c6f-3ddcebf5e933\" (UID: \"ee47b3de-49e7-4496-9c6f-3ddcebf5e933\") " Feb 19 14:46:11 crc kubenswrapper[4861]: I0219 14:46:11.651105 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee47b3de-49e7-4496-9c6f-3ddcebf5e933-config-data\") pod \"ee47b3de-49e7-4496-9c6f-3ddcebf5e933\" (UID: \"ee47b3de-49e7-4496-9c6f-3ddcebf5e933\") " Feb 19 14:46:11 crc kubenswrapper[4861]: I0219 14:46:11.651179 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee47b3de-49e7-4496-9c6f-3ddcebf5e933-combined-ca-bundle\") pod \"ee47b3de-49e7-4496-9c6f-3ddcebf5e933\" (UID: \"ee47b3de-49e7-4496-9c6f-3ddcebf5e933\") " Feb 19 14:46:11 crc kubenswrapper[4861]: I0219 14:46:11.651353 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt2vk\" (UniqueName: \"kubernetes.io/projected/ee47b3de-49e7-4496-9c6f-3ddcebf5e933-kube-api-access-pt2vk\") pod \"ee47b3de-49e7-4496-9c6f-3ddcebf5e933\" (UID: \"ee47b3de-49e7-4496-9c6f-3ddcebf5e933\") " Feb 19 14:46:11 crc kubenswrapper[4861]: I0219 14:46:11.659991 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee47b3de-49e7-4496-9c6f-3ddcebf5e933-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ee47b3de-49e7-4496-9c6f-3ddcebf5e933" (UID: "ee47b3de-49e7-4496-9c6f-3ddcebf5e933"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:46:11 crc kubenswrapper[4861]: I0219 14:46:11.663697 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee47b3de-49e7-4496-9c6f-3ddcebf5e933-kube-api-access-pt2vk" (OuterVolumeSpecName: "kube-api-access-pt2vk") pod "ee47b3de-49e7-4496-9c6f-3ddcebf5e933" (UID: "ee47b3de-49e7-4496-9c6f-3ddcebf5e933"). InnerVolumeSpecName "kube-api-access-pt2vk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:46:11 crc kubenswrapper[4861]: I0219 14:46:11.697530 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee47b3de-49e7-4496-9c6f-3ddcebf5e933-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee47b3de-49e7-4496-9c6f-3ddcebf5e933" (UID: "ee47b3de-49e7-4496-9c6f-3ddcebf5e933"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:46:11 crc kubenswrapper[4861]: I0219 14:46:11.735417 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee47b3de-49e7-4496-9c6f-3ddcebf5e933-config-data" (OuterVolumeSpecName: "config-data") pod "ee47b3de-49e7-4496-9c6f-3ddcebf5e933" (UID: "ee47b3de-49e7-4496-9c6f-3ddcebf5e933"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:46:11 crc kubenswrapper[4861]: I0219 14:46:11.755016 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee47b3de-49e7-4496-9c6f-3ddcebf5e933-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:11 crc kubenswrapper[4861]: I0219 14:46:11.755069 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee47b3de-49e7-4496-9c6f-3ddcebf5e933-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:11 crc kubenswrapper[4861]: I0219 14:46:11.755090 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt2vk\" (UniqueName: \"kubernetes.io/projected/ee47b3de-49e7-4496-9c6f-3ddcebf5e933-kube-api-access-pt2vk\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:11 crc kubenswrapper[4861]: I0219 14:46:11.755111 4861 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ee47b3de-49e7-4496-9c6f-3ddcebf5e933-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.045208 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dx6qp" event={"ID":"ee47b3de-49e7-4496-9c6f-3ddcebf5e933","Type":"ContainerDied","Data":"3e5944ae8962f225f9153923f7cee6d292a5817f684e061171bb6c3255c67941"} Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.045258 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e5944ae8962f225f9153923f7cee6d292a5817f684e061171bb6c3255c67941" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.045340 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-dx6qp" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.425529 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54cfc9f5fc-d6nbn"] Feb 19 14:46:12 crc kubenswrapper[4861]: E0219 14:46:12.426755 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee47b3de-49e7-4496-9c6f-3ddcebf5e933" containerName="glance-db-sync" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.426779 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee47b3de-49e7-4496-9c6f-3ddcebf5e933" containerName="glance-db-sync" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.426977 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee47b3de-49e7-4496-9c6f-3ddcebf5e933" containerName="glance-db-sync" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.428069 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54cfc9f5fc-d6nbn" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.450474 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54cfc9f5fc-d6nbn"] Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.484994 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.486467 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.488179 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.488347 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-t7qfz" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.489531 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.508015 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.547643 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.548918 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.559094 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.563716 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.571402 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3-config\") pod \"dnsmasq-dns-54cfc9f5fc-d6nbn\" (UID: \"3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3\") " pod="openstack/dnsmasq-dns-54cfc9f5fc-d6nbn" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.571481 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh4nr\" (UniqueName: \"kubernetes.io/projected/3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3-kube-api-access-wh4nr\") pod \"dnsmasq-dns-54cfc9f5fc-d6nbn\" (UID: \"3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3\") " pod="openstack/dnsmasq-dns-54cfc9f5fc-d6nbn" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.571507 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-config-data\") pod \"glance-default-external-api-0\" (UID: \"8fd48119-143a-4fa3-89ce-31c8a4ece5e9\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.571527 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8fd48119-143a-4fa3-89ce-31c8a4ece5e9\") " 
pod="openstack/glance-default-external-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.571560 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3-ovsdbserver-nb\") pod \"dnsmasq-dns-54cfc9f5fc-d6nbn\" (UID: \"3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3\") " pod="openstack/dnsmasq-dns-54cfc9f5fc-d6nbn" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.571587 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8fd48119-143a-4fa3-89ce-31c8a4ece5e9\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.571609 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wktd7\" (UniqueName: \"kubernetes.io/projected/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-kube-api-access-wktd7\") pod \"glance-default-external-api-0\" (UID: \"8fd48119-143a-4fa3-89ce-31c8a4ece5e9\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.571626 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3-dns-svc\") pod \"dnsmasq-dns-54cfc9f5fc-d6nbn\" (UID: \"3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3\") " pod="openstack/dnsmasq-dns-54cfc9f5fc-d6nbn" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.571656 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3-ovsdbserver-sb\") pod \"dnsmasq-dns-54cfc9f5fc-d6nbn\" 
(UID: \"3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3\") " pod="openstack/dnsmasq-dns-54cfc9f5fc-d6nbn" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.571749 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-logs\") pod \"glance-default-external-api-0\" (UID: \"8fd48119-143a-4fa3-89ce-31c8a4ece5e9\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.571813 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-scripts\") pod \"glance-default-external-api-0\" (UID: \"8fd48119-143a-4fa3-89ce-31c8a4ece5e9\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.673148 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3-ovsdbserver-sb\") pod \"dnsmasq-dns-54cfc9f5fc-d6nbn\" (UID: \"3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3\") " pod="openstack/dnsmasq-dns-54cfc9f5fc-d6nbn" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.673202 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-logs\") pod \"glance-default-external-api-0\" (UID: \"8fd48119-143a-4fa3-89ce-31c8a4ece5e9\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.673228 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-scripts\") pod \"glance-default-external-api-0\" (UID: \"8fd48119-143a-4fa3-89ce-31c8a4ece5e9\") " 
pod="openstack/glance-default-external-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.673253 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f381671-eec3-43de-9c55-d0ee249fc54c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7f381671-eec3-43de-9c55-d0ee249fc54c\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.673312 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f381671-eec3-43de-9c55-d0ee249fc54c-logs\") pod \"glance-default-internal-api-0\" (UID: \"7f381671-eec3-43de-9c55-d0ee249fc54c\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.673365 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3-config\") pod \"dnsmasq-dns-54cfc9f5fc-d6nbn\" (UID: \"3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3\") " pod="openstack/dnsmasq-dns-54cfc9f5fc-d6nbn" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.673433 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh4nr\" (UniqueName: \"kubernetes.io/projected/3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3-kube-api-access-wh4nr\") pod \"dnsmasq-dns-54cfc9f5fc-d6nbn\" (UID: \"3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3\") " pod="openstack/dnsmasq-dns-54cfc9f5fc-d6nbn" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.673456 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-config-data\") pod \"glance-default-external-api-0\" (UID: \"8fd48119-143a-4fa3-89ce-31c8a4ece5e9\") " 
pod="openstack/glance-default-external-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.673475 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8fd48119-143a-4fa3-89ce-31c8a4ece5e9\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.673497 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7f381671-eec3-43de-9c55-d0ee249fc54c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7f381671-eec3-43de-9c55-d0ee249fc54c\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.673536 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f381671-eec3-43de-9c55-d0ee249fc54c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7f381671-eec3-43de-9c55-d0ee249fc54c\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.673561 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3-ovsdbserver-nb\") pod \"dnsmasq-dns-54cfc9f5fc-d6nbn\" (UID: \"3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3\") " pod="openstack/dnsmasq-dns-54cfc9f5fc-d6nbn" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.673585 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f381671-eec3-43de-9c55-d0ee249fc54c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7f381671-eec3-43de-9c55-d0ee249fc54c\") " 
pod="openstack/glance-default-internal-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.673606 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7qp8\" (UniqueName: \"kubernetes.io/projected/7f381671-eec3-43de-9c55-d0ee249fc54c-kube-api-access-c7qp8\") pod \"glance-default-internal-api-0\" (UID: \"7f381671-eec3-43de-9c55-d0ee249fc54c\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.673625 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8fd48119-143a-4fa3-89ce-31c8a4ece5e9\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.673648 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3-dns-svc\") pod \"dnsmasq-dns-54cfc9f5fc-d6nbn\" (UID: \"3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3\") " pod="openstack/dnsmasq-dns-54cfc9f5fc-d6nbn" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.673666 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wktd7\" (UniqueName: \"kubernetes.io/projected/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-kube-api-access-wktd7\") pod \"glance-default-external-api-0\" (UID: \"8fd48119-143a-4fa3-89ce-31c8a4ece5e9\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.673776 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-logs\") pod \"glance-default-external-api-0\" (UID: \"8fd48119-143a-4fa3-89ce-31c8a4ece5e9\") " 
pod="openstack/glance-default-external-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.674248 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8fd48119-143a-4fa3-89ce-31c8a4ece5e9\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.674613 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3-ovsdbserver-nb\") pod \"dnsmasq-dns-54cfc9f5fc-d6nbn\" (UID: \"3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3\") " pod="openstack/dnsmasq-dns-54cfc9f5fc-d6nbn" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.674647 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3-config\") pod \"dnsmasq-dns-54cfc9f5fc-d6nbn\" (UID: \"3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3\") " pod="openstack/dnsmasq-dns-54cfc9f5fc-d6nbn" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.674972 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3-dns-svc\") pod \"dnsmasq-dns-54cfc9f5fc-d6nbn\" (UID: \"3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3\") " pod="openstack/dnsmasq-dns-54cfc9f5fc-d6nbn" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.675027 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3-ovsdbserver-sb\") pod \"dnsmasq-dns-54cfc9f5fc-d6nbn\" (UID: \"3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3\") " pod="openstack/dnsmasq-dns-54cfc9f5fc-d6nbn" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.679210 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8fd48119-143a-4fa3-89ce-31c8a4ece5e9\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.679516 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-config-data\") pod \"glance-default-external-api-0\" (UID: \"8fd48119-143a-4fa3-89ce-31c8a4ece5e9\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.689932 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-scripts\") pod \"glance-default-external-api-0\" (UID: \"8fd48119-143a-4fa3-89ce-31c8a4ece5e9\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.691020 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wktd7\" (UniqueName: \"kubernetes.io/projected/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-kube-api-access-wktd7\") pod \"glance-default-external-api-0\" (UID: \"8fd48119-143a-4fa3-89ce-31c8a4ece5e9\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.691068 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh4nr\" (UniqueName: \"kubernetes.io/projected/3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3-kube-api-access-wh4nr\") pod \"dnsmasq-dns-54cfc9f5fc-d6nbn\" (UID: \"3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3\") " pod="openstack/dnsmasq-dns-54cfc9f5fc-d6nbn" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.743350 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54cfc9f5fc-d6nbn" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.775403 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7f381671-eec3-43de-9c55-d0ee249fc54c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7f381671-eec3-43de-9c55-d0ee249fc54c\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.775471 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f381671-eec3-43de-9c55-d0ee249fc54c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7f381671-eec3-43de-9c55-d0ee249fc54c\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.775493 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f381671-eec3-43de-9c55-d0ee249fc54c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7f381671-eec3-43de-9c55-d0ee249fc54c\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.775514 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7qp8\" (UniqueName: \"kubernetes.io/projected/7f381671-eec3-43de-9c55-d0ee249fc54c-kube-api-access-c7qp8\") pod \"glance-default-internal-api-0\" (UID: \"7f381671-eec3-43de-9c55-d0ee249fc54c\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.775555 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f381671-eec3-43de-9c55-d0ee249fc54c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7f381671-eec3-43de-9c55-d0ee249fc54c\") " 
pod="openstack/glance-default-internal-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.775591 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f381671-eec3-43de-9c55-d0ee249fc54c-logs\") pod \"glance-default-internal-api-0\" (UID: \"7f381671-eec3-43de-9c55-d0ee249fc54c\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.775985 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f381671-eec3-43de-9c55-d0ee249fc54c-logs\") pod \"glance-default-internal-api-0\" (UID: \"7f381671-eec3-43de-9c55-d0ee249fc54c\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.775974 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7f381671-eec3-43de-9c55-d0ee249fc54c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7f381671-eec3-43de-9c55-d0ee249fc54c\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.779090 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f381671-eec3-43de-9c55-d0ee249fc54c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7f381671-eec3-43de-9c55-d0ee249fc54c\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.780852 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f381671-eec3-43de-9c55-d0ee249fc54c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7f381671-eec3-43de-9c55-d0ee249fc54c\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.781315 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f381671-eec3-43de-9c55-d0ee249fc54c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7f381671-eec3-43de-9c55-d0ee249fc54c\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.795691 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7qp8\" (UniqueName: \"kubernetes.io/projected/7f381671-eec3-43de-9c55-d0ee249fc54c-kube-api-access-c7qp8\") pod \"glance-default-internal-api-0\" (UID: \"7f381671-eec3-43de-9c55-d0ee249fc54c\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.804930 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 14:46:12 crc kubenswrapper[4861]: I0219 14:46:12.870749 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 14:46:13 crc kubenswrapper[4861]: I0219 14:46:13.250414 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54cfc9f5fc-d6nbn"] Feb 19 14:46:13 crc kubenswrapper[4861]: I0219 14:46:13.348702 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 14:46:13 crc kubenswrapper[4861]: I0219 14:46:13.425270 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 14:46:13 crc kubenswrapper[4861]: W0219 14:46:13.431577 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fd48119_143a_4fa3_89ce_31c8a4ece5e9.slice/crio-d52cc9c3d02c548e4ec1765aaf5a4eeee3f3a763f5a3c9079331a2d467ea12ea WatchSource:0}: Error finding container d52cc9c3d02c548e4ec1765aaf5a4eeee3f3a763f5a3c9079331a2d467ea12ea: Status 404 returned error can't find the container with id d52cc9c3d02c548e4ec1765aaf5a4eeee3f3a763f5a3c9079331a2d467ea12ea Feb 19 14:46:13 crc kubenswrapper[4861]: I0219 14:46:13.518963 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 14:46:13 crc kubenswrapper[4861]: W0219 14:46:13.548558 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f381671_eec3_43de_9c55_d0ee249fc54c.slice/crio-a3b44ed1ad10139cf9dd956d97b122a2e4e60129c57067b0961dbe83846bce63 WatchSource:0}: Error finding container a3b44ed1ad10139cf9dd956d97b122a2e4e60129c57067b0961dbe83846bce63: Status 404 returned error can't find the container with id a3b44ed1ad10139cf9dd956d97b122a2e4e60129c57067b0961dbe83846bce63 Feb 19 14:46:14 crc kubenswrapper[4861]: I0219 14:46:14.067038 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"7f381671-eec3-43de-9c55-d0ee249fc54c","Type":"ContainerStarted","Data":"a3b44ed1ad10139cf9dd956d97b122a2e4e60129c57067b0961dbe83846bce63"} Feb 19 14:46:14 crc kubenswrapper[4861]: I0219 14:46:14.070481 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8fd48119-143a-4fa3-89ce-31c8a4ece5e9","Type":"ContainerStarted","Data":"a9db9713448423ef32230ce4ec752c69bc041d682f2dd4722f781109eb080387"} Feb 19 14:46:14 crc kubenswrapper[4861]: I0219 14:46:14.070523 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8fd48119-143a-4fa3-89ce-31c8a4ece5e9","Type":"ContainerStarted","Data":"d52cc9c3d02c548e4ec1765aaf5a4eeee3f3a763f5a3c9079331a2d467ea12ea"} Feb 19 14:46:14 crc kubenswrapper[4861]: I0219 14:46:14.072691 4861 generic.go:334] "Generic (PLEG): container finished" podID="3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3" containerID="e0fa03fcfd189e60661eb3a0dadecdbf3bbaa8b8d2457af8804710563faf36e8" exitCode=0 Feb 19 14:46:14 crc kubenswrapper[4861]: I0219 14:46:14.072723 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54cfc9f5fc-d6nbn" event={"ID":"3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3","Type":"ContainerDied","Data":"e0fa03fcfd189e60661eb3a0dadecdbf3bbaa8b8d2457af8804710563faf36e8"} Feb 19 14:46:14 crc kubenswrapper[4861]: I0219 14:46:14.072740 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54cfc9f5fc-d6nbn" event={"ID":"3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3","Type":"ContainerStarted","Data":"9da8baeb312e95b526c6ac26f2e3d70752f12fa8553fdf0bb09454f3b077a1e8"} Feb 19 14:46:14 crc kubenswrapper[4861]: I0219 14:46:14.717225 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 14:46:15 crc kubenswrapper[4861]: I0219 14:46:15.096395 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"7f381671-eec3-43de-9c55-d0ee249fc54c","Type":"ContainerStarted","Data":"1f47b672eb7f082d09fdda28aa5a58be28c03c22b6350e9afe4361b8933c2408"} Feb 19 14:46:15 crc kubenswrapper[4861]: I0219 14:46:15.096653 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7f381671-eec3-43de-9c55-d0ee249fc54c" containerName="glance-log" containerID="cri-o://2bf8c83f55153153c89647a36b917955919f2236b3796a47ee5f641a84d6806b" gracePeriod=30 Feb 19 14:46:15 crc kubenswrapper[4861]: I0219 14:46:15.096764 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7f381671-eec3-43de-9c55-d0ee249fc54c" containerName="glance-httpd" containerID="cri-o://1f47b672eb7f082d09fdda28aa5a58be28c03c22b6350e9afe4361b8933c2408" gracePeriod=30 Feb 19 14:46:15 crc kubenswrapper[4861]: I0219 14:46:15.096666 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7f381671-eec3-43de-9c55-d0ee249fc54c","Type":"ContainerStarted","Data":"2bf8c83f55153153c89647a36b917955919f2236b3796a47ee5f641a84d6806b"} Feb 19 14:46:15 crc kubenswrapper[4861]: I0219 14:46:15.100666 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8fd48119-143a-4fa3-89ce-31c8a4ece5e9","Type":"ContainerStarted","Data":"fd1f2ff6ec5b9f79bcabdab6ef9df0abb13f292f4f437dcc5067bc7439fc8d00"} Feb 19 14:46:15 crc kubenswrapper[4861]: I0219 14:46:15.100786 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8fd48119-143a-4fa3-89ce-31c8a4ece5e9" containerName="glance-log" containerID="cri-o://a9db9713448423ef32230ce4ec752c69bc041d682f2dd4722f781109eb080387" gracePeriod=30 Feb 19 14:46:15 crc kubenswrapper[4861]: I0219 14:46:15.100863 4861 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/glance-default-external-api-0" podUID="8fd48119-143a-4fa3-89ce-31c8a4ece5e9" containerName="glance-httpd" containerID="cri-o://fd1f2ff6ec5b9f79bcabdab6ef9df0abb13f292f4f437dcc5067bc7439fc8d00" gracePeriod=30 Feb 19 14:46:15 crc kubenswrapper[4861]: I0219 14:46:15.117354 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54cfc9f5fc-d6nbn" event={"ID":"3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3","Type":"ContainerStarted","Data":"1a65cf4711d9589efd701aaf40bbad8372a2d68d401d56d3a9a543aeec0609ef"} Feb 19 14:46:15 crc kubenswrapper[4861]: I0219 14:46:15.117569 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54cfc9f5fc-d6nbn" Feb 19 14:46:15 crc kubenswrapper[4861]: I0219 14:46:15.126356 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.126340671 podStartE2EDuration="3.126340671s" podCreationTimestamp="2026-02-19 14:46:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:46:15.124188582 +0000 UTC m=+5789.785291820" watchObservedRunningTime="2026-02-19 14:46:15.126340671 +0000 UTC m=+5789.787443899" Feb 19 14:46:15 crc kubenswrapper[4861]: I0219 14:46:15.149016 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.148999548 podStartE2EDuration="3.148999548s" podCreationTimestamp="2026-02-19 14:46:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:46:15.142149834 +0000 UTC m=+5789.803253062" watchObservedRunningTime="2026-02-19 14:46:15.148999548 +0000 UTC m=+5789.810102776" Feb 19 14:46:15 crc kubenswrapper[4861]: I0219 14:46:15.166948 4861 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/dnsmasq-dns-54cfc9f5fc-d6nbn" podStartSLOduration=3.166923799 podStartE2EDuration="3.166923799s" podCreationTimestamp="2026-02-19 14:46:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:46:15.162832579 +0000 UTC m=+5789.823935807" watchObservedRunningTime="2026-02-19 14:46:15.166923799 +0000 UTC m=+5789.828027027" Feb 19 14:46:15 crc kubenswrapper[4861]: I0219 14:46:15.623469 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 14:46:15 crc kubenswrapper[4861]: I0219 14:46:15.730351 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7f381671-eec3-43de-9c55-d0ee249fc54c-httpd-run\") pod \"7f381671-eec3-43de-9c55-d0ee249fc54c\" (UID: \"7f381671-eec3-43de-9c55-d0ee249fc54c\") " Feb 19 14:46:15 crc kubenswrapper[4861]: I0219 14:46:15.730696 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7qp8\" (UniqueName: \"kubernetes.io/projected/7f381671-eec3-43de-9c55-d0ee249fc54c-kube-api-access-c7qp8\") pod \"7f381671-eec3-43de-9c55-d0ee249fc54c\" (UID: \"7f381671-eec3-43de-9c55-d0ee249fc54c\") " Feb 19 14:46:15 crc kubenswrapper[4861]: I0219 14:46:15.730729 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f381671-eec3-43de-9c55-d0ee249fc54c-scripts\") pod \"7f381671-eec3-43de-9c55-d0ee249fc54c\" (UID: \"7f381671-eec3-43de-9c55-d0ee249fc54c\") " Feb 19 14:46:15 crc kubenswrapper[4861]: I0219 14:46:15.730888 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f381671-eec3-43de-9c55-d0ee249fc54c-config-data\") pod \"7f381671-eec3-43de-9c55-d0ee249fc54c\" (UID: 
\"7f381671-eec3-43de-9c55-d0ee249fc54c\") " Feb 19 14:46:15 crc kubenswrapper[4861]: I0219 14:46:15.730910 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f381671-eec3-43de-9c55-d0ee249fc54c-combined-ca-bundle\") pod \"7f381671-eec3-43de-9c55-d0ee249fc54c\" (UID: \"7f381671-eec3-43de-9c55-d0ee249fc54c\") " Feb 19 14:46:15 crc kubenswrapper[4861]: I0219 14:46:15.730964 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f381671-eec3-43de-9c55-d0ee249fc54c-logs\") pod \"7f381671-eec3-43de-9c55-d0ee249fc54c\" (UID: \"7f381671-eec3-43de-9c55-d0ee249fc54c\") " Feb 19 14:46:15 crc kubenswrapper[4861]: I0219 14:46:15.731097 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f381671-eec3-43de-9c55-d0ee249fc54c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7f381671-eec3-43de-9c55-d0ee249fc54c" (UID: "7f381671-eec3-43de-9c55-d0ee249fc54c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:46:15 crc kubenswrapper[4861]: I0219 14:46:15.731405 4861 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7f381671-eec3-43de-9c55-d0ee249fc54c-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:15 crc kubenswrapper[4861]: I0219 14:46:15.731734 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f381671-eec3-43de-9c55-d0ee249fc54c-logs" (OuterVolumeSpecName: "logs") pod "7f381671-eec3-43de-9c55-d0ee249fc54c" (UID: "7f381671-eec3-43de-9c55-d0ee249fc54c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:46:15 crc kubenswrapper[4861]: I0219 14:46:15.742714 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f381671-eec3-43de-9c55-d0ee249fc54c-scripts" (OuterVolumeSpecName: "scripts") pod "7f381671-eec3-43de-9c55-d0ee249fc54c" (UID: "7f381671-eec3-43de-9c55-d0ee249fc54c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:46:15 crc kubenswrapper[4861]: I0219 14:46:15.756957 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f381671-eec3-43de-9c55-d0ee249fc54c-kube-api-access-c7qp8" (OuterVolumeSpecName: "kube-api-access-c7qp8") pod "7f381671-eec3-43de-9c55-d0ee249fc54c" (UID: "7f381671-eec3-43de-9c55-d0ee249fc54c"). InnerVolumeSpecName "kube-api-access-c7qp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:46:15 crc kubenswrapper[4861]: I0219 14:46:15.768388 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f381671-eec3-43de-9c55-d0ee249fc54c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f381671-eec3-43de-9c55-d0ee249fc54c" (UID: "7f381671-eec3-43de-9c55-d0ee249fc54c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:46:15 crc kubenswrapper[4861]: I0219 14:46:15.793178 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f381671-eec3-43de-9c55-d0ee249fc54c-config-data" (OuterVolumeSpecName: "config-data") pod "7f381671-eec3-43de-9c55-d0ee249fc54c" (UID: "7f381671-eec3-43de-9c55-d0ee249fc54c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:46:15 crc kubenswrapper[4861]: I0219 14:46:15.845260 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f381671-eec3-43de-9c55-d0ee249fc54c-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:15 crc kubenswrapper[4861]: I0219 14:46:15.845318 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f381671-eec3-43de-9c55-d0ee249fc54c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:15 crc kubenswrapper[4861]: I0219 14:46:15.845332 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f381671-eec3-43de-9c55-d0ee249fc54c-logs\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:15 crc kubenswrapper[4861]: I0219 14:46:15.845340 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7qp8\" (UniqueName: \"kubernetes.io/projected/7f381671-eec3-43de-9c55-d0ee249fc54c-kube-api-access-c7qp8\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:15 crc kubenswrapper[4861]: I0219 14:46:15.845350 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f381671-eec3-43de-9c55-d0ee249fc54c-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:15 crc kubenswrapper[4861]: I0219 14:46:15.912973 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.047835 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-logs\") pod \"8fd48119-143a-4fa3-89ce-31c8a4ece5e9\" (UID: \"8fd48119-143a-4fa3-89ce-31c8a4ece5e9\") " Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.047956 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-scripts\") pod \"8fd48119-143a-4fa3-89ce-31c8a4ece5e9\" (UID: \"8fd48119-143a-4fa3-89ce-31c8a4ece5e9\") " Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.047998 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wktd7\" (UniqueName: \"kubernetes.io/projected/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-kube-api-access-wktd7\") pod \"8fd48119-143a-4fa3-89ce-31c8a4ece5e9\" (UID: \"8fd48119-143a-4fa3-89ce-31c8a4ece5e9\") " Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.048046 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-httpd-run\") pod \"8fd48119-143a-4fa3-89ce-31c8a4ece5e9\" (UID: \"8fd48119-143a-4fa3-89ce-31c8a4ece5e9\") " Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.048089 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-combined-ca-bundle\") pod \"8fd48119-143a-4fa3-89ce-31c8a4ece5e9\" (UID: \"8fd48119-143a-4fa3-89ce-31c8a4ece5e9\") " Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.048106 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-config-data\") pod \"8fd48119-143a-4fa3-89ce-31c8a4ece5e9\" (UID: \"8fd48119-143a-4fa3-89ce-31c8a4ece5e9\") " Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.048676 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8fd48119-143a-4fa3-89ce-31c8a4ece5e9" (UID: "8fd48119-143a-4fa3-89ce-31c8a4ece5e9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.048752 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-logs" (OuterVolumeSpecName: "logs") pod "8fd48119-143a-4fa3-89ce-31c8a4ece5e9" (UID: "8fd48119-143a-4fa3-89ce-31c8a4ece5e9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.049623 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-logs\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.049643 4861 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.051871 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-kube-api-access-wktd7" (OuterVolumeSpecName: "kube-api-access-wktd7") pod "8fd48119-143a-4fa3-89ce-31c8a4ece5e9" (UID: "8fd48119-143a-4fa3-89ce-31c8a4ece5e9"). InnerVolumeSpecName "kube-api-access-wktd7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.055627 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-scripts" (OuterVolumeSpecName: "scripts") pod "8fd48119-143a-4fa3-89ce-31c8a4ece5e9" (UID: "8fd48119-143a-4fa3-89ce-31c8a4ece5e9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.073798 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8fd48119-143a-4fa3-89ce-31c8a4ece5e9" (UID: "8fd48119-143a-4fa3-89ce-31c8a4ece5e9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.095470 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-config-data" (OuterVolumeSpecName: "config-data") pod "8fd48119-143a-4fa3-89ce-31c8a4ece5e9" (UID: "8fd48119-143a-4fa3-89ce-31c8a4ece5e9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.128817 4861 generic.go:334] "Generic (PLEG): container finished" podID="8fd48119-143a-4fa3-89ce-31c8a4ece5e9" containerID="fd1f2ff6ec5b9f79bcabdab6ef9df0abb13f292f4f437dcc5067bc7439fc8d00" exitCode=0 Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.131553 4861 generic.go:334] "Generic (PLEG): container finished" podID="8fd48119-143a-4fa3-89ce-31c8a4ece5e9" containerID="a9db9713448423ef32230ce4ec752c69bc041d682f2dd4722f781109eb080387" exitCode=143 Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.129019 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8fd48119-143a-4fa3-89ce-31c8a4ece5e9","Type":"ContainerDied","Data":"fd1f2ff6ec5b9f79bcabdab6ef9df0abb13f292f4f437dcc5067bc7439fc8d00"} Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.129096 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.131864 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8fd48119-143a-4fa3-89ce-31c8a4ece5e9","Type":"ContainerDied","Data":"a9db9713448423ef32230ce4ec752c69bc041d682f2dd4722f781109eb080387"} Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.132027 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8fd48119-143a-4fa3-89ce-31c8a4ece5e9","Type":"ContainerDied","Data":"d52cc9c3d02c548e4ec1765aaf5a4eeee3f3a763f5a3c9079331a2d467ea12ea"} Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.132143 4861 scope.go:117] "RemoveContainer" containerID="fd1f2ff6ec5b9f79bcabdab6ef9df0abb13f292f4f437dcc5067bc7439fc8d00" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.137347 4861 generic.go:334] "Generic (PLEG): container finished" 
podID="7f381671-eec3-43de-9c55-d0ee249fc54c" containerID="1f47b672eb7f082d09fdda28aa5a58be28c03c22b6350e9afe4361b8933c2408" exitCode=143 Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.137375 4861 generic.go:334] "Generic (PLEG): container finished" podID="7f381671-eec3-43de-9c55-d0ee249fc54c" containerID="2bf8c83f55153153c89647a36b917955919f2236b3796a47ee5f641a84d6806b" exitCode=143 Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.137498 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.137531 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7f381671-eec3-43de-9c55-d0ee249fc54c","Type":"ContainerDied","Data":"1f47b672eb7f082d09fdda28aa5a58be28c03c22b6350e9afe4361b8933c2408"} Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.137560 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7f381671-eec3-43de-9c55-d0ee249fc54c","Type":"ContainerDied","Data":"2bf8c83f55153153c89647a36b917955919f2236b3796a47ee5f641a84d6806b"} Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.137572 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7f381671-eec3-43de-9c55-d0ee249fc54c","Type":"ContainerDied","Data":"a3b44ed1ad10139cf9dd956d97b122a2e4e60129c57067b0961dbe83846bce63"} Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.152073 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.152442 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.152524 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.152592 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wktd7\" (UniqueName: \"kubernetes.io/projected/8fd48119-143a-4fa3-89ce-31c8a4ece5e9-kube-api-access-wktd7\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.164614 4861 scope.go:117] "RemoveContainer" containerID="a9db9713448423ef32230ce4ec752c69bc041d682f2dd4722f781109eb080387" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.174009 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.185155 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.191850 4861 scope.go:117] "RemoveContainer" containerID="fd1f2ff6ec5b9f79bcabdab6ef9df0abb13f292f4f437dcc5067bc7439fc8d00" Feb 19 14:46:16 crc kubenswrapper[4861]: E0219 14:46:16.193794 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd1f2ff6ec5b9f79bcabdab6ef9df0abb13f292f4f437dcc5067bc7439fc8d00\": container with ID starting with fd1f2ff6ec5b9f79bcabdab6ef9df0abb13f292f4f437dcc5067bc7439fc8d00 not found: ID does not exist" containerID="fd1f2ff6ec5b9f79bcabdab6ef9df0abb13f292f4f437dcc5067bc7439fc8d00" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.193935 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fd1f2ff6ec5b9f79bcabdab6ef9df0abb13f292f4f437dcc5067bc7439fc8d00"} err="failed to get container status \"fd1f2ff6ec5b9f79bcabdab6ef9df0abb13f292f4f437dcc5067bc7439fc8d00\": rpc error: code = NotFound desc = could not find container \"fd1f2ff6ec5b9f79bcabdab6ef9df0abb13f292f4f437dcc5067bc7439fc8d00\": container with ID starting with fd1f2ff6ec5b9f79bcabdab6ef9df0abb13f292f4f437dcc5067bc7439fc8d00 not found: ID does not exist" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.194032 4861 scope.go:117] "RemoveContainer" containerID="a9db9713448423ef32230ce4ec752c69bc041d682f2dd4722f781109eb080387" Feb 19 14:46:16 crc kubenswrapper[4861]: E0219 14:46:16.195213 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9db9713448423ef32230ce4ec752c69bc041d682f2dd4722f781109eb080387\": container with ID starting with a9db9713448423ef32230ce4ec752c69bc041d682f2dd4722f781109eb080387 not found: ID does not exist" containerID="a9db9713448423ef32230ce4ec752c69bc041d682f2dd4722f781109eb080387" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.195262 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9db9713448423ef32230ce4ec752c69bc041d682f2dd4722f781109eb080387"} err="failed to get container status \"a9db9713448423ef32230ce4ec752c69bc041d682f2dd4722f781109eb080387\": rpc error: code = NotFound desc = could not find container \"a9db9713448423ef32230ce4ec752c69bc041d682f2dd4722f781109eb080387\": container with ID starting with a9db9713448423ef32230ce4ec752c69bc041d682f2dd4722f781109eb080387 not found: ID does not exist" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.195290 4861 scope.go:117] "RemoveContainer" containerID="fd1f2ff6ec5b9f79bcabdab6ef9df0abb13f292f4f437dcc5067bc7439fc8d00" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.196165 4861 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"fd1f2ff6ec5b9f79bcabdab6ef9df0abb13f292f4f437dcc5067bc7439fc8d00"} err="failed to get container status \"fd1f2ff6ec5b9f79bcabdab6ef9df0abb13f292f4f437dcc5067bc7439fc8d00\": rpc error: code = NotFound desc = could not find container \"fd1f2ff6ec5b9f79bcabdab6ef9df0abb13f292f4f437dcc5067bc7439fc8d00\": container with ID starting with fd1f2ff6ec5b9f79bcabdab6ef9df0abb13f292f4f437dcc5067bc7439fc8d00 not found: ID does not exist" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.196189 4861 scope.go:117] "RemoveContainer" containerID="a9db9713448423ef32230ce4ec752c69bc041d682f2dd4722f781109eb080387" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.196550 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9db9713448423ef32230ce4ec752c69bc041d682f2dd4722f781109eb080387"} err="failed to get container status \"a9db9713448423ef32230ce4ec752c69bc041d682f2dd4722f781109eb080387\": rpc error: code = NotFound desc = could not find container \"a9db9713448423ef32230ce4ec752c69bc041d682f2dd4722f781109eb080387\": container with ID starting with a9db9713448423ef32230ce4ec752c69bc041d682f2dd4722f781109eb080387 not found: ID does not exist" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.196601 4861 scope.go:117] "RemoveContainer" containerID="1f47b672eb7f082d09fdda28aa5a58be28c03c22b6350e9afe4361b8933c2408" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.207018 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.217599 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.218606 4861 scope.go:117] "RemoveContainer" containerID="2bf8c83f55153153c89647a36b917955919f2236b3796a47ee5f641a84d6806b" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 
14:46:16.235475 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 14:46:16 crc kubenswrapper[4861]: E0219 14:46:16.235847 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd48119-143a-4fa3-89ce-31c8a4ece5e9" containerName="glance-httpd" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.235859 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd48119-143a-4fa3-89ce-31c8a4ece5e9" containerName="glance-httpd" Feb 19 14:46:16 crc kubenswrapper[4861]: E0219 14:46:16.235868 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f381671-eec3-43de-9c55-d0ee249fc54c" containerName="glance-httpd" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.235874 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f381671-eec3-43de-9c55-d0ee249fc54c" containerName="glance-httpd" Feb 19 14:46:16 crc kubenswrapper[4861]: E0219 14:46:16.235892 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f381671-eec3-43de-9c55-d0ee249fc54c" containerName="glance-log" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.235898 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f381671-eec3-43de-9c55-d0ee249fc54c" containerName="glance-log" Feb 19 14:46:16 crc kubenswrapper[4861]: E0219 14:46:16.235906 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd48119-143a-4fa3-89ce-31c8a4ece5e9" containerName="glance-log" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.235911 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd48119-143a-4fa3-89ce-31c8a4ece5e9" containerName="glance-log" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.236056 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f381671-eec3-43de-9c55-d0ee249fc54c" containerName="glance-log" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.236068 4861 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8fd48119-143a-4fa3-89ce-31c8a4ece5e9" containerName="glance-httpd" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.236076 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fd48119-143a-4fa3-89ce-31c8a4ece5e9" containerName="glance-log" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.236096 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f381671-eec3-43de-9c55-d0ee249fc54c" containerName="glance-httpd" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.237030 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.239918 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.240157 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-t7qfz" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.242362 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.243262 4861 scope.go:117] "RemoveContainer" containerID="1f47b672eb7f082d09fdda28aa5a58be28c03c22b6350e9afe4361b8933c2408" Feb 19 14:46:16 crc kubenswrapper[4861]: E0219 14:46:16.244389 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f47b672eb7f082d09fdda28aa5a58be28c03c22b6350e9afe4361b8933c2408\": container with ID starting with 1f47b672eb7f082d09fdda28aa5a58be28c03c22b6350e9afe4361b8933c2408 not found: ID does not exist" containerID="1f47b672eb7f082d09fdda28aa5a58be28c03c22b6350e9afe4361b8933c2408" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.244600 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1f47b672eb7f082d09fdda28aa5a58be28c03c22b6350e9afe4361b8933c2408"} err="failed to get container status \"1f47b672eb7f082d09fdda28aa5a58be28c03c22b6350e9afe4361b8933c2408\": rpc error: code = NotFound desc = could not find container \"1f47b672eb7f082d09fdda28aa5a58be28c03c22b6350e9afe4361b8933c2408\": container with ID starting with 1f47b672eb7f082d09fdda28aa5a58be28c03c22b6350e9afe4361b8933c2408 not found: ID does not exist" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.244629 4861 scope.go:117] "RemoveContainer" containerID="2bf8c83f55153153c89647a36b917955919f2236b3796a47ee5f641a84d6806b" Feb 19 14:46:16 crc kubenswrapper[4861]: E0219 14:46:16.245142 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bf8c83f55153153c89647a36b917955919f2236b3796a47ee5f641a84d6806b\": container with ID starting with 2bf8c83f55153153c89647a36b917955919f2236b3796a47ee5f641a84d6806b not found: ID does not exist" containerID="2bf8c83f55153153c89647a36b917955919f2236b3796a47ee5f641a84d6806b" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.245169 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bf8c83f55153153c89647a36b917955919f2236b3796a47ee5f641a84d6806b"} err="failed to get container status \"2bf8c83f55153153c89647a36b917955919f2236b3796a47ee5f641a84d6806b\": rpc error: code = NotFound desc = could not find container \"2bf8c83f55153153c89647a36b917955919f2236b3796a47ee5f641a84d6806b\": container with ID starting with 2bf8c83f55153153c89647a36b917955919f2236b3796a47ee5f641a84d6806b not found: ID does not exist" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.245184 4861 scope.go:117] "RemoveContainer" containerID="1f47b672eb7f082d09fdda28aa5a58be28c03c22b6350e9afe4361b8933c2408" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.245533 4861 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"1f47b672eb7f082d09fdda28aa5a58be28c03c22b6350e9afe4361b8933c2408"} err="failed to get container status \"1f47b672eb7f082d09fdda28aa5a58be28c03c22b6350e9afe4361b8933c2408\": rpc error: code = NotFound desc = could not find container \"1f47b672eb7f082d09fdda28aa5a58be28c03c22b6350e9afe4361b8933c2408\": container with ID starting with 1f47b672eb7f082d09fdda28aa5a58be28c03c22b6350e9afe4361b8933c2408 not found: ID does not exist" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.245553 4861 scope.go:117] "RemoveContainer" containerID="2bf8c83f55153153c89647a36b917955919f2236b3796a47ee5f641a84d6806b" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.245790 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bf8c83f55153153c89647a36b917955919f2236b3796a47ee5f641a84d6806b"} err="failed to get container status \"2bf8c83f55153153c89647a36b917955919f2236b3796a47ee5f641a84d6806b\": rpc error: code = NotFound desc = could not find container \"2bf8c83f55153153c89647a36b917955919f2236b3796a47ee5f641a84d6806b\": container with ID starting with 2bf8c83f55153153c89647a36b917955919f2236b3796a47ee5f641a84d6806b not found: ID does not exist" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.246472 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.248005 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.250229 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.252135 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.252246 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.270102 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.277187 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.355967 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.356102 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/021ec6a5-260b-477c-93e4-34bfaf2fc552-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"021ec6a5-260b-477c-93e4-34bfaf2fc552\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.356195 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/021ec6a5-260b-477c-93e4-34bfaf2fc552-config-data\") pod \"glance-default-internal-api-0\" (UID: \"021ec6a5-260b-477c-93e4-34bfaf2fc552\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.356227 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/021ec6a5-260b-477c-93e4-34bfaf2fc552-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"021ec6a5-260b-477c-93e4-34bfaf2fc552\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.356259 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-logs\") pod \"glance-default-external-api-0\" (UID: \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.356292 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-scripts\") pod \"glance-default-external-api-0\" (UID: \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.356375 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qnj7\" (UniqueName: \"kubernetes.io/projected/021ec6a5-260b-477c-93e4-34bfaf2fc552-kube-api-access-4qnj7\") pod \"glance-default-internal-api-0\" (UID: \"021ec6a5-260b-477c-93e4-34bfaf2fc552\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.356461 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.356515 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvxx2\" (UniqueName: \"kubernetes.io/projected/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-kube-api-access-fvxx2\") pod \"glance-default-external-api-0\" (UID: \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.356537 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/021ec6a5-260b-477c-93e4-34bfaf2fc552-scripts\") pod \"glance-default-internal-api-0\" (UID: \"021ec6a5-260b-477c-93e4-34bfaf2fc552\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.356563 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/021ec6a5-260b-477c-93e4-34bfaf2fc552-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"021ec6a5-260b-477c-93e4-34bfaf2fc552\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.356612 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-config-data\") pod \"glance-default-external-api-0\" (UID: \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.356772 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/021ec6a5-260b-477c-93e4-34bfaf2fc552-logs\") pod \"glance-default-internal-api-0\" (UID: \"021ec6a5-260b-477c-93e4-34bfaf2fc552\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.356874 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.458693 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/021ec6a5-260b-477c-93e4-34bfaf2fc552-config-data\") pod \"glance-default-internal-api-0\" (UID: \"021ec6a5-260b-477c-93e4-34bfaf2fc552\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.458757 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/021ec6a5-260b-477c-93e4-34bfaf2fc552-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"021ec6a5-260b-477c-93e4-34bfaf2fc552\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.458790 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-logs\") pod \"glance-default-external-api-0\" (UID: \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.458824 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-scripts\") pod \"glance-default-external-api-0\" (UID: \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.458852 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qnj7\" (UniqueName: \"kubernetes.io/projected/021ec6a5-260b-477c-93e4-34bfaf2fc552-kube-api-access-4qnj7\") pod \"glance-default-internal-api-0\" (UID: \"021ec6a5-260b-477c-93e4-34bfaf2fc552\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.458892 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.458927 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvxx2\" (UniqueName: \"kubernetes.io/projected/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-kube-api-access-fvxx2\") pod \"glance-default-external-api-0\" (UID: \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.458954 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/021ec6a5-260b-477c-93e4-34bfaf2fc552-scripts\") pod \"glance-default-internal-api-0\" (UID: \"021ec6a5-260b-477c-93e4-34bfaf2fc552\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.458980 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/021ec6a5-260b-477c-93e4-34bfaf2fc552-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"021ec6a5-260b-477c-93e4-34bfaf2fc552\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.458999 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-config-data\") pod \"glance-default-external-api-0\" (UID: \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.459025 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/021ec6a5-260b-477c-93e4-34bfaf2fc552-logs\") pod \"glance-default-internal-api-0\" (UID: \"021ec6a5-260b-477c-93e4-34bfaf2fc552\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.459043 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.459074 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.459125 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/021ec6a5-260b-477c-93e4-34bfaf2fc552-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"021ec6a5-260b-477c-93e4-34bfaf2fc552\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.459565 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-logs\") pod \"glance-default-external-api-0\" (UID: \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.460929 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/021ec6a5-260b-477c-93e4-34bfaf2fc552-logs\") pod \"glance-default-internal-api-0\" (UID: \"021ec6a5-260b-477c-93e4-34bfaf2fc552\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.461248 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/021ec6a5-260b-477c-93e4-34bfaf2fc552-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"021ec6a5-260b-477c-93e4-34bfaf2fc552\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.462955 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.463062 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/021ec6a5-260b-477c-93e4-34bfaf2fc552-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"021ec6a5-260b-477c-93e4-34bfaf2fc552\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.463071 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/021ec6a5-260b-477c-93e4-34bfaf2fc552-scripts\") pod \"glance-default-internal-api-0\" (UID: \"021ec6a5-260b-477c-93e4-34bfaf2fc552\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.464377 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/021ec6a5-260b-477c-93e4-34bfaf2fc552-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"021ec6a5-260b-477c-93e4-34bfaf2fc552\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.463175 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/021ec6a5-260b-477c-93e4-34bfaf2fc552-config-data\") pod \"glance-default-internal-api-0\" (UID: \"021ec6a5-260b-477c-93e4-34bfaf2fc552\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.465767 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-config-data\") pod \"glance-default-external-api-0\" (UID: \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.466105 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:16 crc 
kubenswrapper[4861]: I0219 14:46:16.466190 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-scripts\") pod \"glance-default-external-api-0\" (UID: \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.471412 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.479471 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qnj7\" (UniqueName: \"kubernetes.io/projected/021ec6a5-260b-477c-93e4-34bfaf2fc552-kube-api-access-4qnj7\") pod \"glance-default-internal-api-0\" (UID: \"021ec6a5-260b-477c-93e4-34bfaf2fc552\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.483546 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvxx2\" (UniqueName: \"kubernetes.io/projected/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-kube-api-access-fvxx2\") pod \"glance-default-external-api-0\" (UID: \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\") " pod="openstack/glance-default-external-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.554327 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 14:46:16 crc kubenswrapper[4861]: I0219 14:46:16.570054 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 14:46:17 crc kubenswrapper[4861]: W0219 14:46:17.192400 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod021ec6a5_260b_477c_93e4_34bfaf2fc552.slice/crio-7d406ee8a2f950e0caa564c493a07feb0cab600048f9f6095acf665618f224e2 WatchSource:0}: Error finding container 7d406ee8a2f950e0caa564c493a07feb0cab600048f9f6095acf665618f224e2: Status 404 returned error can't find the container with id 7d406ee8a2f950e0caa564c493a07feb0cab600048f9f6095acf665618f224e2 Feb 19 14:46:17 crc kubenswrapper[4861]: I0219 14:46:17.211646 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 14:46:17 crc kubenswrapper[4861]: I0219 14:46:17.277675 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 14:46:17 crc kubenswrapper[4861]: W0219 14:46:17.282593 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0cf07c1e_8241_487d_99bb_6e4ae9d8cf49.slice/crio-2c190b311a0933bc1aca58c5989b54fb9723a1b92167a26e7816f63a5be4f28a WatchSource:0}: Error finding container 2c190b311a0933bc1aca58c5989b54fb9723a1b92167a26e7816f63a5be4f28a: Status 404 returned error can't find the container with id 2c190b311a0933bc1aca58c5989b54fb9723a1b92167a26e7816f63a5be4f28a Feb 19 14:46:17 crc kubenswrapper[4861]: I0219 14:46:17.992814 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f381671-eec3-43de-9c55-d0ee249fc54c" path="/var/lib/kubelet/pods/7f381671-eec3-43de-9c55-d0ee249fc54c/volumes" Feb 19 14:46:17 crc kubenswrapper[4861]: I0219 14:46:17.993931 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fd48119-143a-4fa3-89ce-31c8a4ece5e9" path="/var/lib/kubelet/pods/8fd48119-143a-4fa3-89ce-31c8a4ece5e9/volumes" Feb 19 14:46:18 
crc kubenswrapper[4861]: I0219 14:46:18.170669 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"021ec6a5-260b-477c-93e4-34bfaf2fc552","Type":"ContainerStarted","Data":"f61f82d84740657f7504dc964d2bc873d6cfe36619885f4f394b61252e48278f"} Feb 19 14:46:18 crc kubenswrapper[4861]: I0219 14:46:18.170711 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"021ec6a5-260b-477c-93e4-34bfaf2fc552","Type":"ContainerStarted","Data":"7d406ee8a2f950e0caa564c493a07feb0cab600048f9f6095acf665618f224e2"} Feb 19 14:46:18 crc kubenswrapper[4861]: I0219 14:46:18.176682 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49","Type":"ContainerStarted","Data":"05ac8e3ded612160ff753b5e1d6f51b9868a7db4c6ce3e6132674a1bf7f7bb3f"} Feb 19 14:46:18 crc kubenswrapper[4861]: I0219 14:46:18.176711 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49","Type":"ContainerStarted","Data":"2c190b311a0933bc1aca58c5989b54fb9723a1b92167a26e7816f63a5be4f28a"} Feb 19 14:46:19 crc kubenswrapper[4861]: I0219 14:46:19.190196 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49","Type":"ContainerStarted","Data":"3abef79a76ef88b984f328555284e34f3b9e552293c9b1184be99ecfdd4fc45f"} Feb 19 14:46:19 crc kubenswrapper[4861]: I0219 14:46:19.194569 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"021ec6a5-260b-477c-93e4-34bfaf2fc552","Type":"ContainerStarted","Data":"26750336b1400e71f39efa747560ad7e2d3a1a6aee23d9c2d75663821e9b5571"} Feb 19 14:46:19 crc kubenswrapper[4861]: I0219 14:46:19.225399 4861 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.225371666 podStartE2EDuration="3.225371666s" podCreationTimestamp="2026-02-19 14:46:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:46:19.209063909 +0000 UTC m=+5793.870167137" watchObservedRunningTime="2026-02-19 14:46:19.225371666 +0000 UTC m=+5793.886474894" Feb 19 14:46:19 crc kubenswrapper[4861]: I0219 14:46:19.256781 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.256755158 podStartE2EDuration="3.256755158s" podCreationTimestamp="2026-02-19 14:46:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:46:19.246526363 +0000 UTC m=+5793.907629591" watchObservedRunningTime="2026-02-19 14:46:19.256755158 +0000 UTC m=+5793.917858396" Feb 19 14:46:22 crc kubenswrapper[4861]: I0219 14:46:22.745750 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-54cfc9f5fc-d6nbn" Feb 19 14:46:22 crc kubenswrapper[4861]: I0219 14:46:22.859737 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c6dfd666f-qdfhh"] Feb 19 14:46:22 crc kubenswrapper[4861]: I0219 14:46:22.860131 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-c6dfd666f-qdfhh" podUID="29d3d5f0-231b-4430-856f-569740c64481" containerName="dnsmasq-dns" containerID="cri-o://e1e0fe80cdcfa0c0b4daf679883d67cc4ce186f1a250fb2626530e525320409b" gracePeriod=10 Feb 19 14:46:23 crc kubenswrapper[4861]: I0219 14:46:23.247013 4861 generic.go:334] "Generic (PLEG): container finished" podID="29d3d5f0-231b-4430-856f-569740c64481" containerID="e1e0fe80cdcfa0c0b4daf679883d67cc4ce186f1a250fb2626530e525320409b" exitCode=0 Feb 19 14:46:23 
crc kubenswrapper[4861]: I0219 14:46:23.247075 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c6dfd666f-qdfhh" event={"ID":"29d3d5f0-231b-4430-856f-569740c64481","Type":"ContainerDied","Data":"e1e0fe80cdcfa0c0b4daf679883d67cc4ce186f1a250fb2626530e525320409b"} Feb 19 14:46:23 crc kubenswrapper[4861]: I0219 14:46:23.374550 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c6dfd666f-qdfhh" Feb 19 14:46:23 crc kubenswrapper[4861]: I0219 14:46:23.514520 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29d3d5f0-231b-4430-856f-569740c64481-ovsdbserver-nb\") pod \"29d3d5f0-231b-4430-856f-569740c64481\" (UID: \"29d3d5f0-231b-4430-856f-569740c64481\") " Feb 19 14:46:23 crc kubenswrapper[4861]: I0219 14:46:23.514584 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29d3d5f0-231b-4430-856f-569740c64481-config\") pod \"29d3d5f0-231b-4430-856f-569740c64481\" (UID: \"29d3d5f0-231b-4430-856f-569740c64481\") " Feb 19 14:46:23 crc kubenswrapper[4861]: I0219 14:46:23.514734 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqvg5\" (UniqueName: \"kubernetes.io/projected/29d3d5f0-231b-4430-856f-569740c64481-kube-api-access-sqvg5\") pod \"29d3d5f0-231b-4430-856f-569740c64481\" (UID: \"29d3d5f0-231b-4430-856f-569740c64481\") " Feb 19 14:46:23 crc kubenswrapper[4861]: I0219 14:46:23.514831 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29d3d5f0-231b-4430-856f-569740c64481-dns-svc\") pod \"29d3d5f0-231b-4430-856f-569740c64481\" (UID: \"29d3d5f0-231b-4430-856f-569740c64481\") " Feb 19 14:46:23 crc kubenswrapper[4861]: I0219 14:46:23.514972 4861 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29d3d5f0-231b-4430-856f-569740c64481-ovsdbserver-sb\") pod \"29d3d5f0-231b-4430-856f-569740c64481\" (UID: \"29d3d5f0-231b-4430-856f-569740c64481\") " Feb 19 14:46:23 crc kubenswrapper[4861]: I0219 14:46:23.521631 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29d3d5f0-231b-4430-856f-569740c64481-kube-api-access-sqvg5" (OuterVolumeSpecName: "kube-api-access-sqvg5") pod "29d3d5f0-231b-4430-856f-569740c64481" (UID: "29d3d5f0-231b-4430-856f-569740c64481"). InnerVolumeSpecName "kube-api-access-sqvg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:46:23 crc kubenswrapper[4861]: I0219 14:46:23.564460 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29d3d5f0-231b-4430-856f-569740c64481-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "29d3d5f0-231b-4430-856f-569740c64481" (UID: "29d3d5f0-231b-4430-856f-569740c64481"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:46:23 crc kubenswrapper[4861]: I0219 14:46:23.570549 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29d3d5f0-231b-4430-856f-569740c64481-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "29d3d5f0-231b-4430-856f-569740c64481" (UID: "29d3d5f0-231b-4430-856f-569740c64481"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:46:23 crc kubenswrapper[4861]: I0219 14:46:23.581253 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29d3d5f0-231b-4430-856f-569740c64481-config" (OuterVolumeSpecName: "config") pod "29d3d5f0-231b-4430-856f-569740c64481" (UID: "29d3d5f0-231b-4430-856f-569740c64481"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:46:23 crc kubenswrapper[4861]: I0219 14:46:23.585407 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29d3d5f0-231b-4430-856f-569740c64481-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "29d3d5f0-231b-4430-856f-569740c64481" (UID: "29d3d5f0-231b-4430-856f-569740c64481"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:46:23 crc kubenswrapper[4861]: I0219 14:46:23.617130 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqvg5\" (UniqueName: \"kubernetes.io/projected/29d3d5f0-231b-4430-856f-569740c64481-kube-api-access-sqvg5\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:23 crc kubenswrapper[4861]: I0219 14:46:23.617164 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29d3d5f0-231b-4430-856f-569740c64481-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:23 crc kubenswrapper[4861]: I0219 14:46:23.617175 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29d3d5f0-231b-4430-856f-569740c64481-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:23 crc kubenswrapper[4861]: I0219 14:46:23.617184 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29d3d5f0-231b-4430-856f-569740c64481-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:23 crc kubenswrapper[4861]: I0219 14:46:23.617193 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29d3d5f0-231b-4430-856f-569740c64481-config\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:24 crc kubenswrapper[4861]: I0219 14:46:24.268400 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c6dfd666f-qdfhh" 
event={"ID":"29d3d5f0-231b-4430-856f-569740c64481","Type":"ContainerDied","Data":"805229783d986ed375d04f13ef9113c42097c0d7a1e07789d1f20e6fe42f379b"} Feb 19 14:46:24 crc kubenswrapper[4861]: I0219 14:46:24.268500 4861 scope.go:117] "RemoveContainer" containerID="e1e0fe80cdcfa0c0b4daf679883d67cc4ce186f1a250fb2626530e525320409b" Feb 19 14:46:24 crc kubenswrapper[4861]: I0219 14:46:24.268674 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c6dfd666f-qdfhh" Feb 19 14:46:24 crc kubenswrapper[4861]: I0219 14:46:24.312669 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c6dfd666f-qdfhh"] Feb 19 14:46:24 crc kubenswrapper[4861]: I0219 14:46:24.321000 4861 scope.go:117] "RemoveContainer" containerID="5acb5779a78067664b8cd17e1eb481e23907da7df95ca405e248cdfd7de9aa45" Feb 19 14:46:24 crc kubenswrapper[4861]: I0219 14:46:24.324246 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c6dfd666f-qdfhh"] Feb 19 14:46:25 crc kubenswrapper[4861]: I0219 14:46:25.996963 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29d3d5f0-231b-4430-856f-569740c64481" path="/var/lib/kubelet/pods/29d3d5f0-231b-4430-856f-569740c64481/volumes" Feb 19 14:46:26 crc kubenswrapper[4861]: I0219 14:46:26.555240 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 14:46:26 crc kubenswrapper[4861]: I0219 14:46:26.555321 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 14:46:26 crc kubenswrapper[4861]: I0219 14:46:26.570367 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 14:46:26 crc kubenswrapper[4861]: I0219 14:46:26.570487 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/glance-default-external-api-0" Feb 19 14:46:26 crc kubenswrapper[4861]: I0219 14:46:26.608893 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 14:46:26 crc kubenswrapper[4861]: I0219 14:46:26.629172 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 14:46:26 crc kubenswrapper[4861]: I0219 14:46:26.634643 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 14:46:26 crc kubenswrapper[4861]: I0219 14:46:26.660297 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 14:46:27 crc kubenswrapper[4861]: I0219 14:46:27.300345 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 14:46:27 crc kubenswrapper[4861]: I0219 14:46:27.300767 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 14:46:27 crc kubenswrapper[4861]: I0219 14:46:27.300787 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 14:46:27 crc kubenswrapper[4861]: I0219 14:46:27.300806 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 14:46:29 crc kubenswrapper[4861]: I0219 14:46:29.066236 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 14:46:29 crc kubenswrapper[4861]: I0219 14:46:29.068664 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 14:46:29 crc kubenswrapper[4861]: I0219 14:46:29.122853 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-internal-api-0" Feb 19 14:46:29 crc kubenswrapper[4861]: I0219 14:46:29.126885 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 14:46:33 crc kubenswrapper[4861]: I0219 14:46:33.835944 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 14:46:33 crc kubenswrapper[4861]: I0219 14:46:33.836762 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 14:46:33 crc kubenswrapper[4861]: I0219 14:46:33.836836 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 14:46:33 crc kubenswrapper[4861]: I0219 14:46:33.838156 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"857acc905a3020c924da1c5bc09451d3afdf4f6b0afc35920779725d181fa1fb"} pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 14:46:33 crc kubenswrapper[4861]: I0219 14:46:33.838268 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" containerID="cri-o://857acc905a3020c924da1c5bc09451d3afdf4f6b0afc35920779725d181fa1fb" 
gracePeriod=600 Feb 19 14:46:34 crc kubenswrapper[4861]: I0219 14:46:34.380670 4861 generic.go:334] "Generic (PLEG): container finished" podID="478e6971-05ac-43f2-99a2-cd93644c6227" containerID="857acc905a3020c924da1c5bc09451d3afdf4f6b0afc35920779725d181fa1fb" exitCode=0 Feb 19 14:46:34 crc kubenswrapper[4861]: I0219 14:46:34.380721 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerDied","Data":"857acc905a3020c924da1c5bc09451d3afdf4f6b0afc35920779725d181fa1fb"} Feb 19 14:46:34 crc kubenswrapper[4861]: I0219 14:46:34.380753 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"502a4549a2baa2b1572925937a0eb20e1bd1517278b2629c4572ef8e5d2f113d"} Feb 19 14:46:34 crc kubenswrapper[4861]: I0219 14:46:34.380774 4861 scope.go:117] "RemoveContainer" containerID="55684c363f793a2ee4b1d80659dbf54faf32701883e18291ec4e32c294ea8bad" Feb 19 14:46:35 crc kubenswrapper[4861]: I0219 14:46:35.532300 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-q8b9z"] Feb 19 14:46:35 crc kubenswrapper[4861]: E0219 14:46:35.533347 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d3d5f0-231b-4430-856f-569740c64481" containerName="dnsmasq-dns" Feb 19 14:46:35 crc kubenswrapper[4861]: I0219 14:46:35.533382 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d3d5f0-231b-4430-856f-569740c64481" containerName="dnsmasq-dns" Feb 19 14:46:35 crc kubenswrapper[4861]: E0219 14:46:35.533464 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d3d5f0-231b-4430-856f-569740c64481" containerName="init" Feb 19 14:46:35 crc kubenswrapper[4861]: I0219 14:46:35.533475 4861 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="29d3d5f0-231b-4430-856f-569740c64481" containerName="init" Feb 19 14:46:35 crc kubenswrapper[4861]: I0219 14:46:35.533726 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="29d3d5f0-231b-4430-856f-569740c64481" containerName="dnsmasq-dns" Feb 19 14:46:35 crc kubenswrapper[4861]: I0219 14:46:35.534647 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-q8b9z" Feb 19 14:46:35 crc kubenswrapper[4861]: I0219 14:46:35.543777 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-q8b9z"] Feb 19 14:46:35 crc kubenswrapper[4861]: I0219 14:46:35.648923 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-d2ba-account-create-update-xsv6m"] Feb 19 14:46:35 crc kubenswrapper[4861]: I0219 14:46:35.650089 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d2ba-account-create-update-xsv6m" Feb 19 14:46:35 crc kubenswrapper[4861]: I0219 14:46:35.652510 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 19 14:46:35 crc kubenswrapper[4861]: I0219 14:46:35.664344 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d2ba-account-create-update-xsv6m"] Feb 19 14:46:35 crc kubenswrapper[4861]: I0219 14:46:35.703904 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrprg\" (UniqueName: \"kubernetes.io/projected/66617a12-1744-44b8-bc56-702ecc53122c-kube-api-access-lrprg\") pod \"placement-db-create-q8b9z\" (UID: \"66617a12-1744-44b8-bc56-702ecc53122c\") " pod="openstack/placement-db-create-q8b9z" Feb 19 14:46:35 crc kubenswrapper[4861]: I0219 14:46:35.703968 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/66617a12-1744-44b8-bc56-702ecc53122c-operator-scripts\") pod \"placement-db-create-q8b9z\" (UID: \"66617a12-1744-44b8-bc56-702ecc53122c\") " pod="openstack/placement-db-create-q8b9z" Feb 19 14:46:35 crc kubenswrapper[4861]: I0219 14:46:35.805298 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66617a12-1744-44b8-bc56-702ecc53122c-operator-scripts\") pod \"placement-db-create-q8b9z\" (UID: \"66617a12-1744-44b8-bc56-702ecc53122c\") " pod="openstack/placement-db-create-q8b9z" Feb 19 14:46:35 crc kubenswrapper[4861]: I0219 14:46:35.805453 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxhbz\" (UniqueName: \"kubernetes.io/projected/1abc723a-e0b6-4eb5-b944-cc167338e911-kube-api-access-lxhbz\") pod \"placement-d2ba-account-create-update-xsv6m\" (UID: \"1abc723a-e0b6-4eb5-b944-cc167338e911\") " pod="openstack/placement-d2ba-account-create-update-xsv6m" Feb 19 14:46:35 crc kubenswrapper[4861]: I0219 14:46:35.805486 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1abc723a-e0b6-4eb5-b944-cc167338e911-operator-scripts\") pod \"placement-d2ba-account-create-update-xsv6m\" (UID: \"1abc723a-e0b6-4eb5-b944-cc167338e911\") " pod="openstack/placement-d2ba-account-create-update-xsv6m" Feb 19 14:46:35 crc kubenswrapper[4861]: I0219 14:46:35.805513 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrprg\" (UniqueName: \"kubernetes.io/projected/66617a12-1744-44b8-bc56-702ecc53122c-kube-api-access-lrprg\") pod \"placement-db-create-q8b9z\" (UID: \"66617a12-1744-44b8-bc56-702ecc53122c\") " pod="openstack/placement-db-create-q8b9z" Feb 19 14:46:35 crc kubenswrapper[4861]: I0219 14:46:35.806242 4861 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66617a12-1744-44b8-bc56-702ecc53122c-operator-scripts\") pod \"placement-db-create-q8b9z\" (UID: \"66617a12-1744-44b8-bc56-702ecc53122c\") " pod="openstack/placement-db-create-q8b9z" Feb 19 14:46:35 crc kubenswrapper[4861]: I0219 14:46:35.831406 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrprg\" (UniqueName: \"kubernetes.io/projected/66617a12-1744-44b8-bc56-702ecc53122c-kube-api-access-lrprg\") pod \"placement-db-create-q8b9z\" (UID: \"66617a12-1744-44b8-bc56-702ecc53122c\") " pod="openstack/placement-db-create-q8b9z" Feb 19 14:46:35 crc kubenswrapper[4861]: I0219 14:46:35.862313 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-q8b9z" Feb 19 14:46:35 crc kubenswrapper[4861]: I0219 14:46:35.907240 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxhbz\" (UniqueName: \"kubernetes.io/projected/1abc723a-e0b6-4eb5-b944-cc167338e911-kube-api-access-lxhbz\") pod \"placement-d2ba-account-create-update-xsv6m\" (UID: \"1abc723a-e0b6-4eb5-b944-cc167338e911\") " pod="openstack/placement-d2ba-account-create-update-xsv6m" Feb 19 14:46:35 crc kubenswrapper[4861]: I0219 14:46:35.907294 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1abc723a-e0b6-4eb5-b944-cc167338e911-operator-scripts\") pod \"placement-d2ba-account-create-update-xsv6m\" (UID: \"1abc723a-e0b6-4eb5-b944-cc167338e911\") " pod="openstack/placement-d2ba-account-create-update-xsv6m" Feb 19 14:46:35 crc kubenswrapper[4861]: I0219 14:46:35.908693 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1abc723a-e0b6-4eb5-b944-cc167338e911-operator-scripts\") pod \"placement-d2ba-account-create-update-xsv6m\" (UID: 
\"1abc723a-e0b6-4eb5-b944-cc167338e911\") " pod="openstack/placement-d2ba-account-create-update-xsv6m" Feb 19 14:46:35 crc kubenswrapper[4861]: I0219 14:46:35.926856 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxhbz\" (UniqueName: \"kubernetes.io/projected/1abc723a-e0b6-4eb5-b944-cc167338e911-kube-api-access-lxhbz\") pod \"placement-d2ba-account-create-update-xsv6m\" (UID: \"1abc723a-e0b6-4eb5-b944-cc167338e911\") " pod="openstack/placement-d2ba-account-create-update-xsv6m" Feb 19 14:46:35 crc kubenswrapper[4861]: I0219 14:46:35.969345 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d2ba-account-create-update-xsv6m" Feb 19 14:46:36 crc kubenswrapper[4861]: I0219 14:46:36.319051 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-q8b9z"] Feb 19 14:46:36 crc kubenswrapper[4861]: I0219 14:46:36.405340 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-q8b9z" event={"ID":"66617a12-1744-44b8-bc56-702ecc53122c","Type":"ContainerStarted","Data":"4be644ce68468b5155881d3f5c69b525cc09915da704b5c4aa9e20427ab15dda"} Feb 19 14:46:36 crc kubenswrapper[4861]: I0219 14:46:36.453284 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d2ba-account-create-update-xsv6m"] Feb 19 14:46:36 crc kubenswrapper[4861]: W0219 14:46:36.456433 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1abc723a_e0b6_4eb5_b944_cc167338e911.slice/crio-018e01413b401e8eaa7d94c3d3c9afd319d34a65d1e609b065a01a1c00866869 WatchSource:0}: Error finding container 018e01413b401e8eaa7d94c3d3c9afd319d34a65d1e609b065a01a1c00866869: Status 404 returned error can't find the container with id 018e01413b401e8eaa7d94c3d3c9afd319d34a65d1e609b065a01a1c00866869 Feb 19 14:46:37 crc kubenswrapper[4861]: I0219 14:46:37.420214 4861 generic.go:334] 
"Generic (PLEG): container finished" podID="66617a12-1744-44b8-bc56-702ecc53122c" containerID="f0bc0f24589325250008866d3017a725b3dc3a72ac3f243e57485b921fa15fa8" exitCode=0 Feb 19 14:46:37 crc kubenswrapper[4861]: I0219 14:46:37.420866 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-q8b9z" event={"ID":"66617a12-1744-44b8-bc56-702ecc53122c","Type":"ContainerDied","Data":"f0bc0f24589325250008866d3017a725b3dc3a72ac3f243e57485b921fa15fa8"} Feb 19 14:46:37 crc kubenswrapper[4861]: I0219 14:46:37.423211 4861 generic.go:334] "Generic (PLEG): container finished" podID="1abc723a-e0b6-4eb5-b944-cc167338e911" containerID="6c0806e58528f39d4fc64c9309b66d305bdefb0834913bc114b50d599df1a02c" exitCode=0 Feb 19 14:46:37 crc kubenswrapper[4861]: I0219 14:46:37.423291 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d2ba-account-create-update-xsv6m" event={"ID":"1abc723a-e0b6-4eb5-b944-cc167338e911","Type":"ContainerDied","Data":"6c0806e58528f39d4fc64c9309b66d305bdefb0834913bc114b50d599df1a02c"} Feb 19 14:46:37 crc kubenswrapper[4861]: I0219 14:46:37.423335 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d2ba-account-create-update-xsv6m" event={"ID":"1abc723a-e0b6-4eb5-b944-cc167338e911","Type":"ContainerStarted","Data":"018e01413b401e8eaa7d94c3d3c9afd319d34a65d1e609b065a01a1c00866869"} Feb 19 14:46:38 crc kubenswrapper[4861]: I0219 14:46:38.894762 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-q8b9z" Feb 19 14:46:38 crc kubenswrapper[4861]: I0219 14:46:38.899249 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d2ba-account-create-update-xsv6m" Feb 19 14:46:38 crc kubenswrapper[4861]: I0219 14:46:38.984157 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxhbz\" (UniqueName: \"kubernetes.io/projected/1abc723a-e0b6-4eb5-b944-cc167338e911-kube-api-access-lxhbz\") pod \"1abc723a-e0b6-4eb5-b944-cc167338e911\" (UID: \"1abc723a-e0b6-4eb5-b944-cc167338e911\") " Feb 19 14:46:38 crc kubenswrapper[4861]: I0219 14:46:38.984198 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrprg\" (UniqueName: \"kubernetes.io/projected/66617a12-1744-44b8-bc56-702ecc53122c-kube-api-access-lrprg\") pod \"66617a12-1744-44b8-bc56-702ecc53122c\" (UID: \"66617a12-1744-44b8-bc56-702ecc53122c\") " Feb 19 14:46:38 crc kubenswrapper[4861]: I0219 14:46:38.984283 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66617a12-1744-44b8-bc56-702ecc53122c-operator-scripts\") pod \"66617a12-1744-44b8-bc56-702ecc53122c\" (UID: \"66617a12-1744-44b8-bc56-702ecc53122c\") " Feb 19 14:46:38 crc kubenswrapper[4861]: I0219 14:46:38.984342 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1abc723a-e0b6-4eb5-b944-cc167338e911-operator-scripts\") pod \"1abc723a-e0b6-4eb5-b944-cc167338e911\" (UID: \"1abc723a-e0b6-4eb5-b944-cc167338e911\") " Feb 19 14:46:38 crc kubenswrapper[4861]: I0219 14:46:38.985055 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66617a12-1744-44b8-bc56-702ecc53122c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "66617a12-1744-44b8-bc56-702ecc53122c" (UID: "66617a12-1744-44b8-bc56-702ecc53122c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:46:38 crc kubenswrapper[4861]: I0219 14:46:38.985105 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1abc723a-e0b6-4eb5-b944-cc167338e911-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1abc723a-e0b6-4eb5-b944-cc167338e911" (UID: "1abc723a-e0b6-4eb5-b944-cc167338e911"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:46:39 crc kubenswrapper[4861]: I0219 14:46:39.027300 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66617a12-1744-44b8-bc56-702ecc53122c-kube-api-access-lrprg" (OuterVolumeSpecName: "kube-api-access-lrprg") pod "66617a12-1744-44b8-bc56-702ecc53122c" (UID: "66617a12-1744-44b8-bc56-702ecc53122c"). InnerVolumeSpecName "kube-api-access-lrprg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:46:39 crc kubenswrapper[4861]: I0219 14:46:39.032376 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1abc723a-e0b6-4eb5-b944-cc167338e911-kube-api-access-lxhbz" (OuterVolumeSpecName: "kube-api-access-lxhbz") pod "1abc723a-e0b6-4eb5-b944-cc167338e911" (UID: "1abc723a-e0b6-4eb5-b944-cc167338e911"). InnerVolumeSpecName "kube-api-access-lxhbz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:46:39 crc kubenswrapper[4861]: I0219 14:46:39.086881 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxhbz\" (UniqueName: \"kubernetes.io/projected/1abc723a-e0b6-4eb5-b944-cc167338e911-kube-api-access-lxhbz\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:39 crc kubenswrapper[4861]: I0219 14:46:39.086951 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrprg\" (UniqueName: \"kubernetes.io/projected/66617a12-1744-44b8-bc56-702ecc53122c-kube-api-access-lrprg\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:39 crc kubenswrapper[4861]: I0219 14:46:39.086960 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66617a12-1744-44b8-bc56-702ecc53122c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:39 crc kubenswrapper[4861]: I0219 14:46:39.086968 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1abc723a-e0b6-4eb5-b944-cc167338e911-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:39 crc kubenswrapper[4861]: I0219 14:46:39.450435 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d2ba-account-create-update-xsv6m" Feb 19 14:46:39 crc kubenswrapper[4861]: I0219 14:46:39.450436 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d2ba-account-create-update-xsv6m" event={"ID":"1abc723a-e0b6-4eb5-b944-cc167338e911","Type":"ContainerDied","Data":"018e01413b401e8eaa7d94c3d3c9afd319d34a65d1e609b065a01a1c00866869"} Feb 19 14:46:39 crc kubenswrapper[4861]: I0219 14:46:39.450629 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="018e01413b401e8eaa7d94c3d3c9afd319d34a65d1e609b065a01a1c00866869" Feb 19 14:46:39 crc kubenswrapper[4861]: I0219 14:46:39.454547 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-q8b9z" event={"ID":"66617a12-1744-44b8-bc56-702ecc53122c","Type":"ContainerDied","Data":"4be644ce68468b5155881d3f5c69b525cc09915da704b5c4aa9e20427ab15dda"} Feb 19 14:46:39 crc kubenswrapper[4861]: I0219 14:46:39.454605 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4be644ce68468b5155881d3f5c69b525cc09915da704b5c4aa9e20427ab15dda" Feb 19 14:46:39 crc kubenswrapper[4861]: I0219 14:46:39.454607 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-q8b9z" Feb 19 14:46:40 crc kubenswrapper[4861]: I0219 14:46:40.875905 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-hjwnf"] Feb 19 14:46:40 crc kubenswrapper[4861]: E0219 14:46:40.876348 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66617a12-1744-44b8-bc56-702ecc53122c" containerName="mariadb-database-create" Feb 19 14:46:40 crc kubenswrapper[4861]: I0219 14:46:40.876363 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="66617a12-1744-44b8-bc56-702ecc53122c" containerName="mariadb-database-create" Feb 19 14:46:40 crc kubenswrapper[4861]: E0219 14:46:40.876387 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1abc723a-e0b6-4eb5-b944-cc167338e911" containerName="mariadb-account-create-update" Feb 19 14:46:40 crc kubenswrapper[4861]: I0219 14:46:40.876395 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abc723a-e0b6-4eb5-b944-cc167338e911" containerName="mariadb-account-create-update" Feb 19 14:46:40 crc kubenswrapper[4861]: I0219 14:46:40.876628 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="1abc723a-e0b6-4eb5-b944-cc167338e911" containerName="mariadb-account-create-update" Feb 19 14:46:40 crc kubenswrapper[4861]: I0219 14:46:40.876646 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="66617a12-1744-44b8-bc56-702ecc53122c" containerName="mariadb-database-create" Feb 19 14:46:40 crc kubenswrapper[4861]: I0219 14:46:40.877334 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-hjwnf" Feb 19 14:46:40 crc kubenswrapper[4861]: I0219 14:46:40.882755 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 19 14:46:40 crc kubenswrapper[4861]: I0219 14:46:40.882864 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-lj4kj" Feb 19 14:46:40 crc kubenswrapper[4861]: I0219 14:46:40.883406 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 19 14:46:40 crc kubenswrapper[4861]: I0219 14:46:40.893788 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-hjwnf"] Feb 19 14:46:40 crc kubenswrapper[4861]: I0219 14:46:40.916530 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dc66d6689-67tls"] Feb 19 14:46:40 crc kubenswrapper[4861]: I0219 14:46:40.918038 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dc66d6689-67tls" Feb 19 14:46:40 crc kubenswrapper[4861]: I0219 14:46:40.937589 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56d3f883-2c40-4600-9836-d134daae7daf-scripts\") pod \"placement-db-sync-hjwnf\" (UID: \"56d3f883-2c40-4600-9836-d134daae7daf\") " pod="openstack/placement-db-sync-hjwnf" Feb 19 14:46:40 crc kubenswrapper[4861]: I0219 14:46:40.938070 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56d3f883-2c40-4600-9836-d134daae7daf-logs\") pod \"placement-db-sync-hjwnf\" (UID: \"56d3f883-2c40-4600-9836-d134daae7daf\") " pod="openstack/placement-db-sync-hjwnf" Feb 19 14:46:40 crc kubenswrapper[4861]: I0219 14:46:40.938129 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtp6x\" (UniqueName: \"kubernetes.io/projected/56d3f883-2c40-4600-9836-d134daae7daf-kube-api-access-mtp6x\") pod \"placement-db-sync-hjwnf\" (UID: \"56d3f883-2c40-4600-9836-d134daae7daf\") " pod="openstack/placement-db-sync-hjwnf" Feb 19 14:46:40 crc kubenswrapper[4861]: I0219 14:46:40.938200 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d3f883-2c40-4600-9836-d134daae7daf-combined-ca-bundle\") pod \"placement-db-sync-hjwnf\" (UID: \"56d3f883-2c40-4600-9836-d134daae7daf\") " pod="openstack/placement-db-sync-hjwnf" Feb 19 14:46:40 crc kubenswrapper[4861]: I0219 14:46:40.938277 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56d3f883-2c40-4600-9836-d134daae7daf-config-data\") pod \"placement-db-sync-hjwnf\" (UID: \"56d3f883-2c40-4600-9836-d134daae7daf\") " 
pod="openstack/placement-db-sync-hjwnf" Feb 19 14:46:40 crc kubenswrapper[4861]: I0219 14:46:40.948416 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dc66d6689-67tls"] Feb 19 14:46:41 crc kubenswrapper[4861]: I0219 14:46:41.040253 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtp6x\" (UniqueName: \"kubernetes.io/projected/56d3f883-2c40-4600-9836-d134daae7daf-kube-api-access-mtp6x\") pod \"placement-db-sync-hjwnf\" (UID: \"56d3f883-2c40-4600-9836-d134daae7daf\") " pod="openstack/placement-db-sync-hjwnf" Feb 19 14:46:41 crc kubenswrapper[4861]: I0219 14:46:41.040303 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f4bbebd-540a-4a3e-a458-e17b5d0b0bec-ovsdbserver-nb\") pod \"dnsmasq-dns-dc66d6689-67tls\" (UID: \"2f4bbebd-540a-4a3e-a458-e17b5d0b0bec\") " pod="openstack/dnsmasq-dns-dc66d6689-67tls" Feb 19 14:46:41 crc kubenswrapper[4861]: I0219 14:46:41.040334 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f4bbebd-540a-4a3e-a458-e17b5d0b0bec-ovsdbserver-sb\") pod \"dnsmasq-dns-dc66d6689-67tls\" (UID: \"2f4bbebd-540a-4a3e-a458-e17b5d0b0bec\") " pod="openstack/dnsmasq-dns-dc66d6689-67tls" Feb 19 14:46:41 crc kubenswrapper[4861]: I0219 14:46:41.040371 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d3f883-2c40-4600-9836-d134daae7daf-combined-ca-bundle\") pod \"placement-db-sync-hjwnf\" (UID: \"56d3f883-2c40-4600-9836-d134daae7daf\") " pod="openstack/placement-db-sync-hjwnf" Feb 19 14:46:41 crc kubenswrapper[4861]: I0219 14:46:41.040415 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2f4bbebd-540a-4a3e-a458-e17b5d0b0bec-config\") pod \"dnsmasq-dns-dc66d6689-67tls\" (UID: \"2f4bbebd-540a-4a3e-a458-e17b5d0b0bec\") " pod="openstack/dnsmasq-dns-dc66d6689-67tls" Feb 19 14:46:41 crc kubenswrapper[4861]: I0219 14:46:41.040456 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56d3f883-2c40-4600-9836-d134daae7daf-config-data\") pod \"placement-db-sync-hjwnf\" (UID: \"56d3f883-2c40-4600-9836-d134daae7daf\") " pod="openstack/placement-db-sync-hjwnf" Feb 19 14:46:41 crc kubenswrapper[4861]: I0219 14:46:41.040485 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f4bbebd-540a-4a3e-a458-e17b5d0b0bec-dns-svc\") pod \"dnsmasq-dns-dc66d6689-67tls\" (UID: \"2f4bbebd-540a-4a3e-a458-e17b5d0b0bec\") " pod="openstack/dnsmasq-dns-dc66d6689-67tls" Feb 19 14:46:41 crc kubenswrapper[4861]: I0219 14:46:41.040564 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56d3f883-2c40-4600-9836-d134daae7daf-scripts\") pod \"placement-db-sync-hjwnf\" (UID: \"56d3f883-2c40-4600-9836-d134daae7daf\") " pod="openstack/placement-db-sync-hjwnf" Feb 19 14:46:41 crc kubenswrapper[4861]: I0219 14:46:41.040595 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddc6n\" (UniqueName: \"kubernetes.io/projected/2f4bbebd-540a-4a3e-a458-e17b5d0b0bec-kube-api-access-ddc6n\") pod \"dnsmasq-dns-dc66d6689-67tls\" (UID: \"2f4bbebd-540a-4a3e-a458-e17b5d0b0bec\") " pod="openstack/dnsmasq-dns-dc66d6689-67tls" Feb 19 14:46:41 crc kubenswrapper[4861]: I0219 14:46:41.040612 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56d3f883-2c40-4600-9836-d134daae7daf-logs\") pod 
\"placement-db-sync-hjwnf\" (UID: \"56d3f883-2c40-4600-9836-d134daae7daf\") " pod="openstack/placement-db-sync-hjwnf" Feb 19 14:46:41 crc kubenswrapper[4861]: I0219 14:46:41.041009 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56d3f883-2c40-4600-9836-d134daae7daf-logs\") pod \"placement-db-sync-hjwnf\" (UID: \"56d3f883-2c40-4600-9836-d134daae7daf\") " pod="openstack/placement-db-sync-hjwnf" Feb 19 14:46:41 crc kubenswrapper[4861]: I0219 14:46:41.052113 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56d3f883-2c40-4600-9836-d134daae7daf-config-data\") pod \"placement-db-sync-hjwnf\" (UID: \"56d3f883-2c40-4600-9836-d134daae7daf\") " pod="openstack/placement-db-sync-hjwnf" Feb 19 14:46:41 crc kubenswrapper[4861]: I0219 14:46:41.052317 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d3f883-2c40-4600-9836-d134daae7daf-combined-ca-bundle\") pod \"placement-db-sync-hjwnf\" (UID: \"56d3f883-2c40-4600-9836-d134daae7daf\") " pod="openstack/placement-db-sync-hjwnf" Feb 19 14:46:41 crc kubenswrapper[4861]: I0219 14:46:41.062150 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtp6x\" (UniqueName: \"kubernetes.io/projected/56d3f883-2c40-4600-9836-d134daae7daf-kube-api-access-mtp6x\") pod \"placement-db-sync-hjwnf\" (UID: \"56d3f883-2c40-4600-9836-d134daae7daf\") " pod="openstack/placement-db-sync-hjwnf" Feb 19 14:46:41 crc kubenswrapper[4861]: I0219 14:46:41.063386 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56d3f883-2c40-4600-9836-d134daae7daf-scripts\") pod \"placement-db-sync-hjwnf\" (UID: \"56d3f883-2c40-4600-9836-d134daae7daf\") " pod="openstack/placement-db-sync-hjwnf" Feb 19 14:46:41 crc kubenswrapper[4861]: I0219 
14:46:41.142118 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddc6n\" (UniqueName: \"kubernetes.io/projected/2f4bbebd-540a-4a3e-a458-e17b5d0b0bec-kube-api-access-ddc6n\") pod \"dnsmasq-dns-dc66d6689-67tls\" (UID: \"2f4bbebd-540a-4a3e-a458-e17b5d0b0bec\") " pod="openstack/dnsmasq-dns-dc66d6689-67tls" Feb 19 14:46:41 crc kubenswrapper[4861]: I0219 14:46:41.142378 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f4bbebd-540a-4a3e-a458-e17b5d0b0bec-ovsdbserver-nb\") pod \"dnsmasq-dns-dc66d6689-67tls\" (UID: \"2f4bbebd-540a-4a3e-a458-e17b5d0b0bec\") " pod="openstack/dnsmasq-dns-dc66d6689-67tls" Feb 19 14:46:41 crc kubenswrapper[4861]: I0219 14:46:41.142408 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f4bbebd-540a-4a3e-a458-e17b5d0b0bec-ovsdbserver-sb\") pod \"dnsmasq-dns-dc66d6689-67tls\" (UID: \"2f4bbebd-540a-4a3e-a458-e17b5d0b0bec\") " pod="openstack/dnsmasq-dns-dc66d6689-67tls" Feb 19 14:46:41 crc kubenswrapper[4861]: I0219 14:46:41.142477 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f4bbebd-540a-4a3e-a458-e17b5d0b0bec-config\") pod \"dnsmasq-dns-dc66d6689-67tls\" (UID: \"2f4bbebd-540a-4a3e-a458-e17b5d0b0bec\") " pod="openstack/dnsmasq-dns-dc66d6689-67tls" Feb 19 14:46:41 crc kubenswrapper[4861]: I0219 14:46:41.142514 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f4bbebd-540a-4a3e-a458-e17b5d0b0bec-dns-svc\") pod \"dnsmasq-dns-dc66d6689-67tls\" (UID: \"2f4bbebd-540a-4a3e-a458-e17b5d0b0bec\") " pod="openstack/dnsmasq-dns-dc66d6689-67tls" Feb 19 14:46:41 crc kubenswrapper[4861]: I0219 14:46:41.143373 4861 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f4bbebd-540a-4a3e-a458-e17b5d0b0bec-dns-svc\") pod \"dnsmasq-dns-dc66d6689-67tls\" (UID: \"2f4bbebd-540a-4a3e-a458-e17b5d0b0bec\") " pod="openstack/dnsmasq-dns-dc66d6689-67tls" Feb 19 14:46:41 crc kubenswrapper[4861]: I0219 14:46:41.144096 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f4bbebd-540a-4a3e-a458-e17b5d0b0bec-ovsdbserver-nb\") pod \"dnsmasq-dns-dc66d6689-67tls\" (UID: \"2f4bbebd-540a-4a3e-a458-e17b5d0b0bec\") " pod="openstack/dnsmasq-dns-dc66d6689-67tls" Feb 19 14:46:41 crc kubenswrapper[4861]: I0219 14:46:41.144634 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f4bbebd-540a-4a3e-a458-e17b5d0b0bec-ovsdbserver-sb\") pod \"dnsmasq-dns-dc66d6689-67tls\" (UID: \"2f4bbebd-540a-4a3e-a458-e17b5d0b0bec\") " pod="openstack/dnsmasq-dns-dc66d6689-67tls" Feb 19 14:46:41 crc kubenswrapper[4861]: I0219 14:46:41.145112 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f4bbebd-540a-4a3e-a458-e17b5d0b0bec-config\") pod \"dnsmasq-dns-dc66d6689-67tls\" (UID: \"2f4bbebd-540a-4a3e-a458-e17b5d0b0bec\") " pod="openstack/dnsmasq-dns-dc66d6689-67tls" Feb 19 14:46:41 crc kubenswrapper[4861]: I0219 14:46:41.159315 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddc6n\" (UniqueName: \"kubernetes.io/projected/2f4bbebd-540a-4a3e-a458-e17b5d0b0bec-kube-api-access-ddc6n\") pod \"dnsmasq-dns-dc66d6689-67tls\" (UID: \"2f4bbebd-540a-4a3e-a458-e17b5d0b0bec\") " pod="openstack/dnsmasq-dns-dc66d6689-67tls" Feb 19 14:46:41 crc kubenswrapper[4861]: I0219 14:46:41.205075 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-hjwnf" Feb 19 14:46:41 crc kubenswrapper[4861]: I0219 14:46:41.238997 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc66d6689-67tls" Feb 19 14:46:41 crc kubenswrapper[4861]: I0219 14:46:41.748189 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-hjwnf"] Feb 19 14:46:41 crc kubenswrapper[4861]: I0219 14:46:41.758452 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dc66d6689-67tls"] Feb 19 14:46:41 crc kubenswrapper[4861]: W0219 14:46:41.774165 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f4bbebd_540a_4a3e_a458_e17b5d0b0bec.slice/crio-e53ebdbec7db38a7c786e0395dead5b66bb82a9ecd6cb4edb94578c59c6e6ecd WatchSource:0}: Error finding container e53ebdbec7db38a7c786e0395dead5b66bb82a9ecd6cb4edb94578c59c6e6ecd: Status 404 returned error can't find the container with id e53ebdbec7db38a7c786e0395dead5b66bb82a9ecd6cb4edb94578c59c6e6ecd Feb 19 14:46:42 crc kubenswrapper[4861]: I0219 14:46:42.485746 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hjwnf" event={"ID":"56d3f883-2c40-4600-9836-d134daae7daf","Type":"ContainerStarted","Data":"e49a31cf95250e7adf69edcfabe0d99a7b41230eb3f13934ba4aeb0b17aee2e2"} Feb 19 14:46:42 crc kubenswrapper[4861]: I0219 14:46:42.486036 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hjwnf" event={"ID":"56d3f883-2c40-4600-9836-d134daae7daf","Type":"ContainerStarted","Data":"894f4f1f3cbe6b80726ecc9f58bde9888cd1d3b24e1bb68eaf6d44113db44b00"} Feb 19 14:46:42 crc kubenswrapper[4861]: I0219 14:46:42.490201 4861 generic.go:334] "Generic (PLEG): container finished" podID="2f4bbebd-540a-4a3e-a458-e17b5d0b0bec" containerID="d662038d3fedec512c873d8ebe2255ff2c6ade8d2110bbee2ec46e5a0a2c9de7" exitCode=0 Feb 19 
14:46:42 crc kubenswrapper[4861]: I0219 14:46:42.490247 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc66d6689-67tls" event={"ID":"2f4bbebd-540a-4a3e-a458-e17b5d0b0bec","Type":"ContainerDied","Data":"d662038d3fedec512c873d8ebe2255ff2c6ade8d2110bbee2ec46e5a0a2c9de7"} Feb 19 14:46:42 crc kubenswrapper[4861]: I0219 14:46:42.490277 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc66d6689-67tls" event={"ID":"2f4bbebd-540a-4a3e-a458-e17b5d0b0bec","Type":"ContainerStarted","Data":"e53ebdbec7db38a7c786e0395dead5b66bb82a9ecd6cb4edb94578c59c6e6ecd"} Feb 19 14:46:42 crc kubenswrapper[4861]: I0219 14:46:42.542603 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-hjwnf" podStartSLOduration=2.542588078 podStartE2EDuration="2.542588078s" podCreationTimestamp="2026-02-19 14:46:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:46:42.518984896 +0000 UTC m=+5817.180088124" watchObservedRunningTime="2026-02-19 14:46:42.542588078 +0000 UTC m=+5817.203691306" Feb 19 14:46:43 crc kubenswrapper[4861]: I0219 14:46:43.504255 4861 generic.go:334] "Generic (PLEG): container finished" podID="56d3f883-2c40-4600-9836-d134daae7daf" containerID="e49a31cf95250e7adf69edcfabe0d99a7b41230eb3f13934ba4aeb0b17aee2e2" exitCode=0 Feb 19 14:46:43 crc kubenswrapper[4861]: I0219 14:46:43.504389 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hjwnf" event={"ID":"56d3f883-2c40-4600-9836-d134daae7daf","Type":"ContainerDied","Data":"e49a31cf95250e7adf69edcfabe0d99a7b41230eb3f13934ba4aeb0b17aee2e2"} Feb 19 14:46:43 crc kubenswrapper[4861]: I0219 14:46:43.509772 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc66d6689-67tls" 
event={"ID":"2f4bbebd-540a-4a3e-a458-e17b5d0b0bec","Type":"ContainerStarted","Data":"3f972677c2660422c565c4d3d477f420f0c2a18dcec7c112e6b1322bf5878d0d"} Feb 19 14:46:43 crc kubenswrapper[4861]: I0219 14:46:43.510163 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-dc66d6689-67tls" Feb 19 14:46:43 crc kubenswrapper[4861]: I0219 14:46:43.553964 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-dc66d6689-67tls" podStartSLOduration=3.553935588 podStartE2EDuration="3.553935588s" podCreationTimestamp="2026-02-19 14:46:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:46:43.545183213 +0000 UTC m=+5818.206286451" watchObservedRunningTime="2026-02-19 14:46:43.553935588 +0000 UTC m=+5818.215038846" Feb 19 14:46:44 crc kubenswrapper[4861]: I0219 14:46:44.909968 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-hjwnf" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.070691 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56d3f883-2c40-4600-9836-d134daae7daf-logs\") pod \"56d3f883-2c40-4600-9836-d134daae7daf\" (UID: \"56d3f883-2c40-4600-9836-d134daae7daf\") " Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.070798 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtp6x\" (UniqueName: \"kubernetes.io/projected/56d3f883-2c40-4600-9836-d134daae7daf-kube-api-access-mtp6x\") pod \"56d3f883-2c40-4600-9836-d134daae7daf\" (UID: \"56d3f883-2c40-4600-9836-d134daae7daf\") " Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.070827 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d3f883-2c40-4600-9836-d134daae7daf-combined-ca-bundle\") pod \"56d3f883-2c40-4600-9836-d134daae7daf\" (UID: \"56d3f883-2c40-4600-9836-d134daae7daf\") " Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.070862 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56d3f883-2c40-4600-9836-d134daae7daf-scripts\") pod \"56d3f883-2c40-4600-9836-d134daae7daf\" (UID: \"56d3f883-2c40-4600-9836-d134daae7daf\") " Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.070882 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56d3f883-2c40-4600-9836-d134daae7daf-config-data\") pod \"56d3f883-2c40-4600-9836-d134daae7daf\" (UID: \"56d3f883-2c40-4600-9836-d134daae7daf\") " Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.070963 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/56d3f883-2c40-4600-9836-d134daae7daf-logs" (OuterVolumeSpecName: "logs") pod "56d3f883-2c40-4600-9836-d134daae7daf" (UID: "56d3f883-2c40-4600-9836-d134daae7daf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.071912 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56d3f883-2c40-4600-9836-d134daae7daf-logs\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.075874 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56d3f883-2c40-4600-9836-d134daae7daf-kube-api-access-mtp6x" (OuterVolumeSpecName: "kube-api-access-mtp6x") pod "56d3f883-2c40-4600-9836-d134daae7daf" (UID: "56d3f883-2c40-4600-9836-d134daae7daf"). InnerVolumeSpecName "kube-api-access-mtp6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.076606 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56d3f883-2c40-4600-9836-d134daae7daf-scripts" (OuterVolumeSpecName: "scripts") pod "56d3f883-2c40-4600-9836-d134daae7daf" (UID: "56d3f883-2c40-4600-9836-d134daae7daf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.094350 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56d3f883-2c40-4600-9836-d134daae7daf-config-data" (OuterVolumeSpecName: "config-data") pod "56d3f883-2c40-4600-9836-d134daae7daf" (UID: "56d3f883-2c40-4600-9836-d134daae7daf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.101914 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56d3f883-2c40-4600-9836-d134daae7daf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56d3f883-2c40-4600-9836-d134daae7daf" (UID: "56d3f883-2c40-4600-9836-d134daae7daf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.174592 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtp6x\" (UniqueName: \"kubernetes.io/projected/56d3f883-2c40-4600-9836-d134daae7daf-kube-api-access-mtp6x\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.174623 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d3f883-2c40-4600-9836-d134daae7daf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.174635 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56d3f883-2c40-4600-9836-d134daae7daf-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.174646 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56d3f883-2c40-4600-9836-d134daae7daf-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.532346 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hjwnf" event={"ID":"56d3f883-2c40-4600-9836-d134daae7daf","Type":"ContainerDied","Data":"894f4f1f3cbe6b80726ecc9f58bde9888cd1d3b24e1bb68eaf6d44113db44b00"} Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.532759 4861 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="894f4f1f3cbe6b80726ecc9f58bde9888cd1d3b24e1bb68eaf6d44113db44b00" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.532436 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hjwnf" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.630046 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-56bf55ff4b-kwr2d"] Feb 19 14:46:45 crc kubenswrapper[4861]: E0219 14:46:45.630815 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d3f883-2c40-4600-9836-d134daae7daf" containerName="placement-db-sync" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.630847 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d3f883-2c40-4600-9836-d134daae7daf" containerName="placement-db-sync" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.632201 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d3f883-2c40-4600-9836-d134daae7daf" containerName="placement-db-sync" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.637737 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-56bf55ff4b-kwr2d" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.640877 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.655672 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-56bf55ff4b-kwr2d"] Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.665686 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.666035 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.666199 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.666417 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-lj4kj" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.783354 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59856e90-5b0e-49e1-acf6-882fee38a7ab-config-data\") pod \"placement-56bf55ff4b-kwr2d\" (UID: \"59856e90-5b0e-49e1-acf6-882fee38a7ab\") " pod="openstack/placement-56bf55ff4b-kwr2d" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.783437 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59856e90-5b0e-49e1-acf6-882fee38a7ab-combined-ca-bundle\") pod \"placement-56bf55ff4b-kwr2d\" (UID: \"59856e90-5b0e-49e1-acf6-882fee38a7ab\") " pod="openstack/placement-56bf55ff4b-kwr2d" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.783479 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59856e90-5b0e-49e1-acf6-882fee38a7ab-internal-tls-certs\") pod \"placement-56bf55ff4b-kwr2d\" (UID: \"59856e90-5b0e-49e1-acf6-882fee38a7ab\") " pod="openstack/placement-56bf55ff4b-kwr2d" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.783508 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l22c9\" (UniqueName: \"kubernetes.io/projected/59856e90-5b0e-49e1-acf6-882fee38a7ab-kube-api-access-l22c9\") pod \"placement-56bf55ff4b-kwr2d\" (UID: \"59856e90-5b0e-49e1-acf6-882fee38a7ab\") " pod="openstack/placement-56bf55ff4b-kwr2d" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.783574 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59856e90-5b0e-49e1-acf6-882fee38a7ab-public-tls-certs\") pod \"placement-56bf55ff4b-kwr2d\" (UID: \"59856e90-5b0e-49e1-acf6-882fee38a7ab\") " pod="openstack/placement-56bf55ff4b-kwr2d" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.783623 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59856e90-5b0e-49e1-acf6-882fee38a7ab-logs\") pod \"placement-56bf55ff4b-kwr2d\" (UID: \"59856e90-5b0e-49e1-acf6-882fee38a7ab\") " pod="openstack/placement-56bf55ff4b-kwr2d" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.783650 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59856e90-5b0e-49e1-acf6-882fee38a7ab-scripts\") pod \"placement-56bf55ff4b-kwr2d\" (UID: \"59856e90-5b0e-49e1-acf6-882fee38a7ab\") " pod="openstack/placement-56bf55ff4b-kwr2d" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.885285 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59856e90-5b0e-49e1-acf6-882fee38a7ab-scripts\") pod \"placement-56bf55ff4b-kwr2d\" (UID: \"59856e90-5b0e-49e1-acf6-882fee38a7ab\") " pod="openstack/placement-56bf55ff4b-kwr2d" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.885490 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59856e90-5b0e-49e1-acf6-882fee38a7ab-config-data\") pod \"placement-56bf55ff4b-kwr2d\" (UID: \"59856e90-5b0e-49e1-acf6-882fee38a7ab\") " pod="openstack/placement-56bf55ff4b-kwr2d" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.885567 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59856e90-5b0e-49e1-acf6-882fee38a7ab-combined-ca-bundle\") pod \"placement-56bf55ff4b-kwr2d\" (UID: \"59856e90-5b0e-49e1-acf6-882fee38a7ab\") " pod="openstack/placement-56bf55ff4b-kwr2d" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.885622 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59856e90-5b0e-49e1-acf6-882fee38a7ab-internal-tls-certs\") pod \"placement-56bf55ff4b-kwr2d\" (UID: \"59856e90-5b0e-49e1-acf6-882fee38a7ab\") " pod="openstack/placement-56bf55ff4b-kwr2d" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.885668 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l22c9\" (UniqueName: \"kubernetes.io/projected/59856e90-5b0e-49e1-acf6-882fee38a7ab-kube-api-access-l22c9\") pod \"placement-56bf55ff4b-kwr2d\" (UID: \"59856e90-5b0e-49e1-acf6-882fee38a7ab\") " pod="openstack/placement-56bf55ff4b-kwr2d" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.885771 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59856e90-5b0e-49e1-acf6-882fee38a7ab-public-tls-certs\") pod \"placement-56bf55ff4b-kwr2d\" (UID: \"59856e90-5b0e-49e1-acf6-882fee38a7ab\") " pod="openstack/placement-56bf55ff4b-kwr2d" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.885855 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59856e90-5b0e-49e1-acf6-882fee38a7ab-logs\") pod \"placement-56bf55ff4b-kwr2d\" (UID: \"59856e90-5b0e-49e1-acf6-882fee38a7ab\") " pod="openstack/placement-56bf55ff4b-kwr2d" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.886544 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59856e90-5b0e-49e1-acf6-882fee38a7ab-logs\") pod \"placement-56bf55ff4b-kwr2d\" (UID: \"59856e90-5b0e-49e1-acf6-882fee38a7ab\") " pod="openstack/placement-56bf55ff4b-kwr2d" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.891732 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59856e90-5b0e-49e1-acf6-882fee38a7ab-public-tls-certs\") pod \"placement-56bf55ff4b-kwr2d\" (UID: \"59856e90-5b0e-49e1-acf6-882fee38a7ab\") " pod="openstack/placement-56bf55ff4b-kwr2d" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.892088 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59856e90-5b0e-49e1-acf6-882fee38a7ab-config-data\") pod \"placement-56bf55ff4b-kwr2d\" (UID: \"59856e90-5b0e-49e1-acf6-882fee38a7ab\") " pod="openstack/placement-56bf55ff4b-kwr2d" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.893042 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59856e90-5b0e-49e1-acf6-882fee38a7ab-combined-ca-bundle\") pod \"placement-56bf55ff4b-kwr2d\" 
(UID: \"59856e90-5b0e-49e1-acf6-882fee38a7ab\") " pod="openstack/placement-56bf55ff4b-kwr2d" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.893552 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59856e90-5b0e-49e1-acf6-882fee38a7ab-internal-tls-certs\") pod \"placement-56bf55ff4b-kwr2d\" (UID: \"59856e90-5b0e-49e1-acf6-882fee38a7ab\") " pod="openstack/placement-56bf55ff4b-kwr2d" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.894000 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59856e90-5b0e-49e1-acf6-882fee38a7ab-scripts\") pod \"placement-56bf55ff4b-kwr2d\" (UID: \"59856e90-5b0e-49e1-acf6-882fee38a7ab\") " pod="openstack/placement-56bf55ff4b-kwr2d" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.908208 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l22c9\" (UniqueName: \"kubernetes.io/projected/59856e90-5b0e-49e1-acf6-882fee38a7ab-kube-api-access-l22c9\") pod \"placement-56bf55ff4b-kwr2d\" (UID: \"59856e90-5b0e-49e1-acf6-882fee38a7ab\") " pod="openstack/placement-56bf55ff4b-kwr2d" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.987495 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-lj4kj" Feb 19 14:46:45 crc kubenswrapper[4861]: I0219 14:46:45.995750 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-56bf55ff4b-kwr2d" Feb 19 14:46:46 crc kubenswrapper[4861]: I0219 14:46:46.645335 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-56bf55ff4b-kwr2d"] Feb 19 14:46:46 crc kubenswrapper[4861]: W0219 14:46:46.648066 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59856e90_5b0e_49e1_acf6_882fee38a7ab.slice/crio-ec08292847192e26693ce217fcc2aa20a97b195be78b75d0ab89a880c227922f WatchSource:0}: Error finding container ec08292847192e26693ce217fcc2aa20a97b195be78b75d0ab89a880c227922f: Status 404 returned error can't find the container with id ec08292847192e26693ce217fcc2aa20a97b195be78b75d0ab89a880c227922f Feb 19 14:46:47 crc kubenswrapper[4861]: I0219 14:46:47.551702 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-56bf55ff4b-kwr2d" event={"ID":"59856e90-5b0e-49e1-acf6-882fee38a7ab","Type":"ContainerStarted","Data":"73dc365dd159a79073fbb5aed8747086fd4c555b7f7899c287da731e8f4bc341"} Feb 19 14:46:47 crc kubenswrapper[4861]: I0219 14:46:47.552078 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-56bf55ff4b-kwr2d" event={"ID":"59856e90-5b0e-49e1-acf6-882fee38a7ab","Type":"ContainerStarted","Data":"5530ef3ece13d7249831126ae2d95d70fe1668d9b4600a61ee3926125292acdc"} Feb 19 14:46:47 crc kubenswrapper[4861]: I0219 14:46:47.552544 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-56bf55ff4b-kwr2d" Feb 19 14:46:47 crc kubenswrapper[4861]: I0219 14:46:47.552603 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-56bf55ff4b-kwr2d" event={"ID":"59856e90-5b0e-49e1-acf6-882fee38a7ab","Type":"ContainerStarted","Data":"ec08292847192e26693ce217fcc2aa20a97b195be78b75d0ab89a880c227922f"} Feb 19 14:46:47 crc kubenswrapper[4861]: I0219 14:46:47.552643 4861 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/placement-56bf55ff4b-kwr2d" Feb 19 14:46:51 crc kubenswrapper[4861]: I0219 14:46:51.242393 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-dc66d6689-67tls" Feb 19 14:46:51 crc kubenswrapper[4861]: I0219 14:46:51.279915 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-56bf55ff4b-kwr2d" podStartSLOduration=6.279886479 podStartE2EDuration="6.279886479s" podCreationTimestamp="2026-02-19 14:46:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:46:47.582211256 +0000 UTC m=+5822.243314484" watchObservedRunningTime="2026-02-19 14:46:51.279886479 +0000 UTC m=+5825.940989737" Feb 19 14:46:51 crc kubenswrapper[4861]: I0219 14:46:51.353993 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54cfc9f5fc-d6nbn"] Feb 19 14:46:51 crc kubenswrapper[4861]: I0219 14:46:51.354362 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54cfc9f5fc-d6nbn" podUID="3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3" containerName="dnsmasq-dns" containerID="cri-o://1a65cf4711d9589efd701aaf40bbad8372a2d68d401d56d3a9a543aeec0609ef" gracePeriod=10 Feb 19 14:46:51 crc kubenswrapper[4861]: I0219 14:46:51.592671 4861 generic.go:334] "Generic (PLEG): container finished" podID="3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3" containerID="1a65cf4711d9589efd701aaf40bbad8372a2d68d401d56d3a9a543aeec0609ef" exitCode=0 Feb 19 14:46:51 crc kubenswrapper[4861]: I0219 14:46:51.592758 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54cfc9f5fc-d6nbn" event={"ID":"3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3","Type":"ContainerDied","Data":"1a65cf4711d9589efd701aaf40bbad8372a2d68d401d56d3a9a543aeec0609ef"} Feb 19 14:46:51 crc kubenswrapper[4861]: I0219 14:46:51.873072 4861 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54cfc9f5fc-d6nbn" Feb 19 14:46:52 crc kubenswrapper[4861]: I0219 14:46:52.019742 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3-ovsdbserver-nb\") pod \"3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3\" (UID: \"3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3\") " Feb 19 14:46:52 crc kubenswrapper[4861]: I0219 14:46:52.019870 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3-ovsdbserver-sb\") pod \"3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3\" (UID: \"3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3\") " Feb 19 14:46:52 crc kubenswrapper[4861]: I0219 14:46:52.019912 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh4nr\" (UniqueName: \"kubernetes.io/projected/3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3-kube-api-access-wh4nr\") pod \"3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3\" (UID: \"3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3\") " Feb 19 14:46:52 crc kubenswrapper[4861]: I0219 14:46:52.019975 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3-config\") pod \"3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3\" (UID: \"3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3\") " Feb 19 14:46:52 crc kubenswrapper[4861]: I0219 14:46:52.020041 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3-dns-svc\") pod \"3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3\" (UID: \"3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3\") " Feb 19 14:46:52 crc kubenswrapper[4861]: I0219 14:46:52.027456 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3-kube-api-access-wh4nr" (OuterVolumeSpecName: "kube-api-access-wh4nr") pod "3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3" (UID: "3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3"). InnerVolumeSpecName "kube-api-access-wh4nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:46:52 crc kubenswrapper[4861]: I0219 14:46:52.067580 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3" (UID: "3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:46:52 crc kubenswrapper[4861]: I0219 14:46:52.071427 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3" (UID: "3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:46:52 crc kubenswrapper[4861]: I0219 14:46:52.072649 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3" (UID: "3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:46:52 crc kubenswrapper[4861]: I0219 14:46:52.073211 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3-config" (OuterVolumeSpecName: "config") pod "3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3" (UID: "3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:46:52 crc kubenswrapper[4861]: I0219 14:46:52.122030 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:52 crc kubenswrapper[4861]: I0219 14:46:52.122064 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:52 crc kubenswrapper[4861]: I0219 14:46:52.122074 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh4nr\" (UniqueName: \"kubernetes.io/projected/3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3-kube-api-access-wh4nr\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:52 crc kubenswrapper[4861]: I0219 14:46:52.122090 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3-config\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:52 crc kubenswrapper[4861]: I0219 14:46:52.122099 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 14:46:52 crc kubenswrapper[4861]: I0219 14:46:52.610051 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54cfc9f5fc-d6nbn" event={"ID":"3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3","Type":"ContainerDied","Data":"9da8baeb312e95b526c6ac26f2e3d70752f12fa8553fdf0bb09454f3b077a1e8"} Feb 19 14:46:52 crc kubenswrapper[4861]: I0219 14:46:52.610149 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54cfc9f5fc-d6nbn" Feb 19 14:46:52 crc kubenswrapper[4861]: I0219 14:46:52.610152 4861 scope.go:117] "RemoveContainer" containerID="1a65cf4711d9589efd701aaf40bbad8372a2d68d401d56d3a9a543aeec0609ef" Feb 19 14:46:52 crc kubenswrapper[4861]: I0219 14:46:52.659146 4861 scope.go:117] "RemoveContainer" containerID="e0fa03fcfd189e60661eb3a0dadecdbf3bbaa8b8d2457af8804710563faf36e8" Feb 19 14:46:52 crc kubenswrapper[4861]: I0219 14:46:52.683158 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54cfc9f5fc-d6nbn"] Feb 19 14:46:52 crc kubenswrapper[4861]: I0219 14:46:52.695545 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54cfc9f5fc-d6nbn"] Feb 19 14:46:53 crc kubenswrapper[4861]: I0219 14:46:53.993946 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3" path="/var/lib/kubelet/pods/3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3/volumes" Feb 19 14:47:16 crc kubenswrapper[4861]: I0219 14:47:16.979969 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-56bf55ff4b-kwr2d" Feb 19 14:47:16 crc kubenswrapper[4861]: I0219 14:47:16.994314 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-56bf55ff4b-kwr2d" Feb 19 14:47:32 crc kubenswrapper[4861]: E0219 14:47:32.947904 4861 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.177:54308->38.102.83.177:44601: write tcp 38.102.83.177:54308->38.102.83.177:44601: write: broken pipe Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.375109 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-zlhgv"] Feb 19 14:47:38 crc kubenswrapper[4861]: E0219 14:47:38.376045 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3" containerName="dnsmasq-dns" Feb 19 14:47:38 
crc kubenswrapper[4861]: I0219 14:47:38.376060 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3" containerName="dnsmasq-dns" Feb 19 14:47:38 crc kubenswrapper[4861]: E0219 14:47:38.376079 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3" containerName="init" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.376085 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3" containerName="init" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.376385 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1f013a-c5f2-46c9-acdd-73f3cb7f61b3" containerName="dnsmasq-dns" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.377233 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-zlhgv" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.390730 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-zlhgv"] Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.448835 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-wzrkr"] Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.450541 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wzrkr" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.462757 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-04b6-account-create-update-pw2nv"] Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.463988 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-04b6-account-create-update-pw2nv" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.473654 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-wzrkr"] Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.474053 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.482587 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2868caf1-998c-44de-b9c6-c3ad464c4f7f-operator-scripts\") pod \"nova-api-db-create-zlhgv\" (UID: \"2868caf1-998c-44de-b9c6-c3ad464c4f7f\") " pod="openstack/nova-api-db-create-zlhgv" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.482664 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmn8k\" (UniqueName: \"kubernetes.io/projected/eb92ada7-9b41-44b6-a299-e9a9a2b5f257-kube-api-access-mmn8k\") pod \"nova-cell0-db-create-wzrkr\" (UID: \"eb92ada7-9b41-44b6-a299-e9a9a2b5f257\") " pod="openstack/nova-cell0-db-create-wzrkr" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.482691 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bklx\" (UniqueName: \"kubernetes.io/projected/c831148c-6e67-4340-95f8-4017bc3de758-kube-api-access-4bklx\") pod \"nova-api-04b6-account-create-update-pw2nv\" (UID: \"c831148c-6e67-4340-95f8-4017bc3de758\") " pod="openstack/nova-api-04b6-account-create-update-pw2nv" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.482735 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4v5z\" (UniqueName: \"kubernetes.io/projected/2868caf1-998c-44de-b9c6-c3ad464c4f7f-kube-api-access-p4v5z\") pod 
\"nova-api-db-create-zlhgv\" (UID: \"2868caf1-998c-44de-b9c6-c3ad464c4f7f\") " pod="openstack/nova-api-db-create-zlhgv" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.482786 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb92ada7-9b41-44b6-a299-e9a9a2b5f257-operator-scripts\") pod \"nova-cell0-db-create-wzrkr\" (UID: \"eb92ada7-9b41-44b6-a299-e9a9a2b5f257\") " pod="openstack/nova-cell0-db-create-wzrkr" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.482805 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c831148c-6e67-4340-95f8-4017bc3de758-operator-scripts\") pod \"nova-api-04b6-account-create-update-pw2nv\" (UID: \"c831148c-6e67-4340-95f8-4017bc3de758\") " pod="openstack/nova-api-04b6-account-create-update-pw2nv" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.482890 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-04b6-account-create-update-pw2nv"] Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.549512 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-b22hw"] Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.551385 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-b22hw" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.560789 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-b22hw"] Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.584759 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bklx\" (UniqueName: \"kubernetes.io/projected/c831148c-6e67-4340-95f8-4017bc3de758-kube-api-access-4bklx\") pod \"nova-api-04b6-account-create-update-pw2nv\" (UID: \"c831148c-6e67-4340-95f8-4017bc3de758\") " pod="openstack/nova-api-04b6-account-create-update-pw2nv" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.584880 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4v5z\" (UniqueName: \"kubernetes.io/projected/2868caf1-998c-44de-b9c6-c3ad464c4f7f-kube-api-access-p4v5z\") pod \"nova-api-db-create-zlhgv\" (UID: \"2868caf1-998c-44de-b9c6-c3ad464c4f7f\") " pod="openstack/nova-api-db-create-zlhgv" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.584981 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb92ada7-9b41-44b6-a299-e9a9a2b5f257-operator-scripts\") pod \"nova-cell0-db-create-wzrkr\" (UID: \"eb92ada7-9b41-44b6-a299-e9a9a2b5f257\") " pod="openstack/nova-cell0-db-create-wzrkr" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.585021 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c831148c-6e67-4340-95f8-4017bc3de758-operator-scripts\") pod \"nova-api-04b6-account-create-update-pw2nv\" (UID: \"c831148c-6e67-4340-95f8-4017bc3de758\") " pod="openstack/nova-api-04b6-account-create-update-pw2nv" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.585062 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp5mj\" (UniqueName: \"kubernetes.io/projected/4c622f47-8fca-46f0-a55e-665ff2e9525b-kube-api-access-bp5mj\") pod \"nova-cell1-db-create-b22hw\" (UID: \"4c622f47-8fca-46f0-a55e-665ff2e9525b\") " pod="openstack/nova-cell1-db-create-b22hw" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.585132 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2868caf1-998c-44de-b9c6-c3ad464c4f7f-operator-scripts\") pod \"nova-api-db-create-zlhgv\" (UID: \"2868caf1-998c-44de-b9c6-c3ad464c4f7f\") " pod="openstack/nova-api-db-create-zlhgv" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.585185 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c622f47-8fca-46f0-a55e-665ff2e9525b-operator-scripts\") pod \"nova-cell1-db-create-b22hw\" (UID: \"4c622f47-8fca-46f0-a55e-665ff2e9525b\") " pod="openstack/nova-cell1-db-create-b22hw" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.585251 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmn8k\" (UniqueName: \"kubernetes.io/projected/eb92ada7-9b41-44b6-a299-e9a9a2b5f257-kube-api-access-mmn8k\") pod \"nova-cell0-db-create-wzrkr\" (UID: \"eb92ada7-9b41-44b6-a299-e9a9a2b5f257\") " pod="openstack/nova-cell0-db-create-wzrkr" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.586118 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c831148c-6e67-4340-95f8-4017bc3de758-operator-scripts\") pod \"nova-api-04b6-account-create-update-pw2nv\" (UID: \"c831148c-6e67-4340-95f8-4017bc3de758\") " pod="openstack/nova-api-04b6-account-create-update-pw2nv" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.586129 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2868caf1-998c-44de-b9c6-c3ad464c4f7f-operator-scripts\") pod \"nova-api-db-create-zlhgv\" (UID: \"2868caf1-998c-44de-b9c6-c3ad464c4f7f\") " pod="openstack/nova-api-db-create-zlhgv" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.586588 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb92ada7-9b41-44b6-a299-e9a9a2b5f257-operator-scripts\") pod \"nova-cell0-db-create-wzrkr\" (UID: \"eb92ada7-9b41-44b6-a299-e9a9a2b5f257\") " pod="openstack/nova-cell0-db-create-wzrkr" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.605495 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bklx\" (UniqueName: \"kubernetes.io/projected/c831148c-6e67-4340-95f8-4017bc3de758-kube-api-access-4bklx\") pod \"nova-api-04b6-account-create-update-pw2nv\" (UID: \"c831148c-6e67-4340-95f8-4017bc3de758\") " pod="openstack/nova-api-04b6-account-create-update-pw2nv" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.605506 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmn8k\" (UniqueName: \"kubernetes.io/projected/eb92ada7-9b41-44b6-a299-e9a9a2b5f257-kube-api-access-mmn8k\") pod \"nova-cell0-db-create-wzrkr\" (UID: \"eb92ada7-9b41-44b6-a299-e9a9a2b5f257\") " pod="openstack/nova-cell0-db-create-wzrkr" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.606988 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4v5z\" (UniqueName: \"kubernetes.io/projected/2868caf1-998c-44de-b9c6-c3ad464c4f7f-kube-api-access-p4v5z\") pod \"nova-api-db-create-zlhgv\" (UID: \"2868caf1-998c-44de-b9c6-c3ad464c4f7f\") " pod="openstack/nova-api-db-create-zlhgv" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.656892 4861 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/nova-cell0-9410-account-create-update-hvc27"] Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.658319 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9410-account-create-update-hvc27" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.659804 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.667171 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9410-account-create-update-hvc27"] Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.686879 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp5mj\" (UniqueName: \"kubernetes.io/projected/4c622f47-8fca-46f0-a55e-665ff2e9525b-kube-api-access-bp5mj\") pod \"nova-cell1-db-create-b22hw\" (UID: \"4c622f47-8fca-46f0-a55e-665ff2e9525b\") " pod="openstack/nova-cell1-db-create-b22hw" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.686942 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55e0f064-3c37-4b96-abc3-46a9bcc19c5a-operator-scripts\") pod \"nova-cell0-9410-account-create-update-hvc27\" (UID: \"55e0f064-3c37-4b96-abc3-46a9bcc19c5a\") " pod="openstack/nova-cell0-9410-account-create-update-hvc27" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.687009 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c622f47-8fca-46f0-a55e-665ff2e9525b-operator-scripts\") pod \"nova-cell1-db-create-b22hw\" (UID: \"4c622f47-8fca-46f0-a55e-665ff2e9525b\") " pod="openstack/nova-cell1-db-create-b22hw" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.687063 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-4pjk9\" (UniqueName: \"kubernetes.io/projected/55e0f064-3c37-4b96-abc3-46a9bcc19c5a-kube-api-access-4pjk9\") pod \"nova-cell0-9410-account-create-update-hvc27\" (UID: \"55e0f064-3c37-4b96-abc3-46a9bcc19c5a\") " pod="openstack/nova-cell0-9410-account-create-update-hvc27" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.688353 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c622f47-8fca-46f0-a55e-665ff2e9525b-operator-scripts\") pod \"nova-cell1-db-create-b22hw\" (UID: \"4c622f47-8fca-46f0-a55e-665ff2e9525b\") " pod="openstack/nova-cell1-db-create-b22hw" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.707849 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-zlhgv" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.709649 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp5mj\" (UniqueName: \"kubernetes.io/projected/4c622f47-8fca-46f0-a55e-665ff2e9525b-kube-api-access-bp5mj\") pod \"nova-cell1-db-create-b22hw\" (UID: \"4c622f47-8fca-46f0-a55e-665ff2e9525b\") " pod="openstack/nova-cell1-db-create-b22hw" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.774698 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-wzrkr" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.787983 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pjk9\" (UniqueName: \"kubernetes.io/projected/55e0f064-3c37-4b96-abc3-46a9bcc19c5a-kube-api-access-4pjk9\") pod \"nova-cell0-9410-account-create-update-hvc27\" (UID: \"55e0f064-3c37-4b96-abc3-46a9bcc19c5a\") " pod="openstack/nova-cell0-9410-account-create-update-hvc27" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.788098 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55e0f064-3c37-4b96-abc3-46a9bcc19c5a-operator-scripts\") pod \"nova-cell0-9410-account-create-update-hvc27\" (UID: \"55e0f064-3c37-4b96-abc3-46a9bcc19c5a\") " pod="openstack/nova-cell0-9410-account-create-update-hvc27" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.788765 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55e0f064-3c37-4b96-abc3-46a9bcc19c5a-operator-scripts\") pod \"nova-cell0-9410-account-create-update-hvc27\" (UID: \"55e0f064-3c37-4b96-abc3-46a9bcc19c5a\") " pod="openstack/nova-cell0-9410-account-create-update-hvc27" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.789384 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-04b6-account-create-update-pw2nv" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.809549 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pjk9\" (UniqueName: \"kubernetes.io/projected/55e0f064-3c37-4b96-abc3-46a9bcc19c5a-kube-api-access-4pjk9\") pod \"nova-cell0-9410-account-create-update-hvc27\" (UID: \"55e0f064-3c37-4b96-abc3-46a9bcc19c5a\") " pod="openstack/nova-cell0-9410-account-create-update-hvc27" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.864920 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-b22hw" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.872690 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-5629-account-create-update-zdltx"] Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.878405 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5629-account-create-update-zdltx" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.881413 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.893506 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-5629-account-create-update-zdltx"] Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.991646 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3599e87c-f3bd-48ad-9d62-7cd4bc6d8037-operator-scripts\") pod \"nova-cell1-5629-account-create-update-zdltx\" (UID: \"3599e87c-f3bd-48ad-9d62-7cd4bc6d8037\") " pod="openstack/nova-cell1-5629-account-create-update-zdltx" Feb 19 14:47:38 crc kubenswrapper[4861]: I0219 14:47:38.992029 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-rrtmr\" (UniqueName: \"kubernetes.io/projected/3599e87c-f3bd-48ad-9d62-7cd4bc6d8037-kube-api-access-rrtmr\") pod \"nova-cell1-5629-account-create-update-zdltx\" (UID: \"3599e87c-f3bd-48ad-9d62-7cd4bc6d8037\") " pod="openstack/nova-cell1-5629-account-create-update-zdltx" Feb 19 14:47:39 crc kubenswrapper[4861]: I0219 14:47:39.023947 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9410-account-create-update-hvc27" Feb 19 14:47:39 crc kubenswrapper[4861]: I0219 14:47:39.094133 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrtmr\" (UniqueName: \"kubernetes.io/projected/3599e87c-f3bd-48ad-9d62-7cd4bc6d8037-kube-api-access-rrtmr\") pod \"nova-cell1-5629-account-create-update-zdltx\" (UID: \"3599e87c-f3bd-48ad-9d62-7cd4bc6d8037\") " pod="openstack/nova-cell1-5629-account-create-update-zdltx" Feb 19 14:47:39 crc kubenswrapper[4861]: I0219 14:47:39.094902 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3599e87c-f3bd-48ad-9d62-7cd4bc6d8037-operator-scripts\") pod \"nova-cell1-5629-account-create-update-zdltx\" (UID: \"3599e87c-f3bd-48ad-9d62-7cd4bc6d8037\") " pod="openstack/nova-cell1-5629-account-create-update-zdltx" Feb 19 14:47:39 crc kubenswrapper[4861]: I0219 14:47:39.095790 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3599e87c-f3bd-48ad-9d62-7cd4bc6d8037-operator-scripts\") pod \"nova-cell1-5629-account-create-update-zdltx\" (UID: \"3599e87c-f3bd-48ad-9d62-7cd4bc6d8037\") " pod="openstack/nova-cell1-5629-account-create-update-zdltx" Feb 19 14:47:39 crc kubenswrapper[4861]: I0219 14:47:39.119305 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrtmr\" (UniqueName: 
\"kubernetes.io/projected/3599e87c-f3bd-48ad-9d62-7cd4bc6d8037-kube-api-access-rrtmr\") pod \"nova-cell1-5629-account-create-update-zdltx\" (UID: \"3599e87c-f3bd-48ad-9d62-7cd4bc6d8037\") " pod="openstack/nova-cell1-5629-account-create-update-zdltx" Feb 19 14:47:39 crc kubenswrapper[4861]: I0219 14:47:39.159104 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-zlhgv"] Feb 19 14:47:39 crc kubenswrapper[4861]: W0219 14:47:39.164082 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2868caf1_998c_44de_b9c6_c3ad464c4f7f.slice/crio-1594091e4eecb5f0696573ee678e087f3e7064af5c3c8353159d3d2855acd82c WatchSource:0}: Error finding container 1594091e4eecb5f0696573ee678e087f3e7064af5c3c8353159d3d2855acd82c: Status 404 returned error can't find the container with id 1594091e4eecb5f0696573ee678e087f3e7064af5c3c8353159d3d2855acd82c Feb 19 14:47:39 crc kubenswrapper[4861]: I0219 14:47:39.202784 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-5629-account-create-update-zdltx" Feb 19 14:47:39 crc kubenswrapper[4861]: I0219 14:47:39.321660 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-wzrkr"] Feb 19 14:47:39 crc kubenswrapper[4861]: W0219 14:47:39.330900 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb92ada7_9b41_44b6_a299_e9a9a2b5f257.slice/crio-03ce85f366a0872d47aca9da8b500f65ebd0614ab4337a7815bdb013dee7e182 WatchSource:0}: Error finding container 03ce85f366a0872d47aca9da8b500f65ebd0614ab4337a7815bdb013dee7e182: Status 404 returned error can't find the container with id 03ce85f366a0872d47aca9da8b500f65ebd0614ab4337a7815bdb013dee7e182 Feb 19 14:47:39 crc kubenswrapper[4861]: I0219 14:47:39.420457 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-b22hw"] Feb 19 14:47:39 crc kubenswrapper[4861]: I0219 14:47:39.430997 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-04b6-account-create-update-pw2nv"] Feb 19 14:47:39 crc kubenswrapper[4861]: W0219 14:47:39.470557 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c622f47_8fca_46f0_a55e_665ff2e9525b.slice/crio-f8247efcd434c0b42d3fb947589b50d1f2b685855a484a9f5f6e2b3d0c7dba53 WatchSource:0}: Error finding container f8247efcd434c0b42d3fb947589b50d1f2b685855a484a9f5f6e2b3d0c7dba53: Status 404 returned error can't find the container with id f8247efcd434c0b42d3fb947589b50d1f2b685855a484a9f5f6e2b3d0c7dba53 Feb 19 14:47:39 crc kubenswrapper[4861]: I0219 14:47:39.522954 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-5629-account-create-update-zdltx"] Feb 19 14:47:39 crc kubenswrapper[4861]: W0219 14:47:39.533905 4861 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55e0f064_3c37_4b96_abc3_46a9bcc19c5a.slice/crio-fdd731a443a02916fe8be5dbebce7e67f464ab97185f5a284b7c4d20e420e77d WatchSource:0}: Error finding container fdd731a443a02916fe8be5dbebce7e67f464ab97185f5a284b7c4d20e420e77d: Status 404 returned error can't find the container with id fdd731a443a02916fe8be5dbebce7e67f464ab97185f5a284b7c4d20e420e77d Feb 19 14:47:39 crc kubenswrapper[4861]: I0219 14:47:39.535064 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9410-account-create-update-hvc27"] Feb 19 14:47:40 crc kubenswrapper[4861]: I0219 14:47:40.179540 4861 generic.go:334] "Generic (PLEG): container finished" podID="2868caf1-998c-44de-b9c6-c3ad464c4f7f" containerID="53f6a4cd431133f720f5a925bb9cfc15f0062624c0a2ea06377924cb031b525f" exitCode=0 Feb 19 14:47:40 crc kubenswrapper[4861]: I0219 14:47:40.179691 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-zlhgv" event={"ID":"2868caf1-998c-44de-b9c6-c3ad464c4f7f","Type":"ContainerDied","Data":"53f6a4cd431133f720f5a925bb9cfc15f0062624c0a2ea06377924cb031b525f"} Feb 19 14:47:40 crc kubenswrapper[4861]: I0219 14:47:40.180010 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-zlhgv" event={"ID":"2868caf1-998c-44de-b9c6-c3ad464c4f7f","Type":"ContainerStarted","Data":"1594091e4eecb5f0696573ee678e087f3e7064af5c3c8353159d3d2855acd82c"} Feb 19 14:47:40 crc kubenswrapper[4861]: I0219 14:47:40.183980 4861 generic.go:334] "Generic (PLEG): container finished" podID="eb92ada7-9b41-44b6-a299-e9a9a2b5f257" containerID="25a21e514de4c59f309ee4e71cf3f5b079807df8c823ef1c7005e75af7676730" exitCode=0 Feb 19 14:47:40 crc kubenswrapper[4861]: I0219 14:47:40.184064 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wzrkr" 
event={"ID":"eb92ada7-9b41-44b6-a299-e9a9a2b5f257","Type":"ContainerDied","Data":"25a21e514de4c59f309ee4e71cf3f5b079807df8c823ef1c7005e75af7676730"} Feb 19 14:47:40 crc kubenswrapper[4861]: I0219 14:47:40.184094 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wzrkr" event={"ID":"eb92ada7-9b41-44b6-a299-e9a9a2b5f257","Type":"ContainerStarted","Data":"03ce85f366a0872d47aca9da8b500f65ebd0614ab4337a7815bdb013dee7e182"} Feb 19 14:47:40 crc kubenswrapper[4861]: I0219 14:47:40.186898 4861 generic.go:334] "Generic (PLEG): container finished" podID="55e0f064-3c37-4b96-abc3-46a9bcc19c5a" containerID="be405e319f9a142fbf13af7e8b162f99857acccd524dd831fe3406dfcc98b031" exitCode=0 Feb 19 14:47:40 crc kubenswrapper[4861]: I0219 14:47:40.187004 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9410-account-create-update-hvc27" event={"ID":"55e0f064-3c37-4b96-abc3-46a9bcc19c5a","Type":"ContainerDied","Data":"be405e319f9a142fbf13af7e8b162f99857acccd524dd831fe3406dfcc98b031"} Feb 19 14:47:40 crc kubenswrapper[4861]: I0219 14:47:40.187054 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9410-account-create-update-hvc27" event={"ID":"55e0f064-3c37-4b96-abc3-46a9bcc19c5a","Type":"ContainerStarted","Data":"fdd731a443a02916fe8be5dbebce7e67f464ab97185f5a284b7c4d20e420e77d"} Feb 19 14:47:40 crc kubenswrapper[4861]: I0219 14:47:40.190070 4861 generic.go:334] "Generic (PLEG): container finished" podID="4c622f47-8fca-46f0-a55e-665ff2e9525b" containerID="6f9618ef7c6ed44c353388f03201a1a8ea4d929648f01eced70499d618e67fe7" exitCode=0 Feb 19 14:47:40 crc kubenswrapper[4861]: I0219 14:47:40.190160 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-b22hw" event={"ID":"4c622f47-8fca-46f0-a55e-665ff2e9525b","Type":"ContainerDied","Data":"6f9618ef7c6ed44c353388f03201a1a8ea4d929648f01eced70499d618e67fe7"} Feb 19 14:47:40 crc kubenswrapper[4861]: I0219 
14:47:40.190191 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-b22hw" event={"ID":"4c622f47-8fca-46f0-a55e-665ff2e9525b","Type":"ContainerStarted","Data":"f8247efcd434c0b42d3fb947589b50d1f2b685855a484a9f5f6e2b3d0c7dba53"} Feb 19 14:47:40 crc kubenswrapper[4861]: I0219 14:47:40.193337 4861 generic.go:334] "Generic (PLEG): container finished" podID="c831148c-6e67-4340-95f8-4017bc3de758" containerID="1d90921feac41dfae182ea825d511dbb5e0875a4f1ec3297a5b3adaf3c41138c" exitCode=0 Feb 19 14:47:40 crc kubenswrapper[4861]: I0219 14:47:40.193454 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-04b6-account-create-update-pw2nv" event={"ID":"c831148c-6e67-4340-95f8-4017bc3de758","Type":"ContainerDied","Data":"1d90921feac41dfae182ea825d511dbb5e0875a4f1ec3297a5b3adaf3c41138c"} Feb 19 14:47:40 crc kubenswrapper[4861]: I0219 14:47:40.193499 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-04b6-account-create-update-pw2nv" event={"ID":"c831148c-6e67-4340-95f8-4017bc3de758","Type":"ContainerStarted","Data":"c3637cf88c4682696ee17d3474f28c1ee33c37c26e640f7ed0f2c5133d9e1165"} Feb 19 14:47:40 crc kubenswrapper[4861]: I0219 14:47:40.196375 4861 generic.go:334] "Generic (PLEG): container finished" podID="3599e87c-f3bd-48ad-9d62-7cd4bc6d8037" containerID="8539c5cee392cf73f961402d608c37234a247147d6905d577616eb54bd55db08" exitCode=0 Feb 19 14:47:40 crc kubenswrapper[4861]: I0219 14:47:40.196489 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5629-account-create-update-zdltx" event={"ID":"3599e87c-f3bd-48ad-9d62-7cd4bc6d8037","Type":"ContainerDied","Data":"8539c5cee392cf73f961402d608c37234a247147d6905d577616eb54bd55db08"} Feb 19 14:47:40 crc kubenswrapper[4861]: I0219 14:47:40.196529 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5629-account-create-update-zdltx" 
event={"ID":"3599e87c-f3bd-48ad-9d62-7cd4bc6d8037","Type":"ContainerStarted","Data":"69f9b10dae875fe8b283b950abc2ee7c7cd218b17a22061e52939ff99b1861d7"} Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.555218 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9410-account-create-update-hvc27" Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.647054 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pjk9\" (UniqueName: \"kubernetes.io/projected/55e0f064-3c37-4b96-abc3-46a9bcc19c5a-kube-api-access-4pjk9\") pod \"55e0f064-3c37-4b96-abc3-46a9bcc19c5a\" (UID: \"55e0f064-3c37-4b96-abc3-46a9bcc19c5a\") " Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.647199 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55e0f064-3c37-4b96-abc3-46a9bcc19c5a-operator-scripts\") pod \"55e0f064-3c37-4b96-abc3-46a9bcc19c5a\" (UID: \"55e0f064-3c37-4b96-abc3-46a9bcc19c5a\") " Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.647727 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55e0f064-3c37-4b96-abc3-46a9bcc19c5a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "55e0f064-3c37-4b96-abc3-46a9bcc19c5a" (UID: "55e0f064-3c37-4b96-abc3-46a9bcc19c5a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.647873 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55e0f064-3c37-4b96-abc3-46a9bcc19c5a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.652680 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55e0f064-3c37-4b96-abc3-46a9bcc19c5a-kube-api-access-4pjk9" (OuterVolumeSpecName: "kube-api-access-4pjk9") pod "55e0f064-3c37-4b96-abc3-46a9bcc19c5a" (UID: "55e0f064-3c37-4b96-abc3-46a9bcc19c5a"). InnerVolumeSpecName "kube-api-access-4pjk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.751692 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pjk9\" (UniqueName: \"kubernetes.io/projected/55e0f064-3c37-4b96-abc3-46a9bcc19c5a-kube-api-access-4pjk9\") on node \"crc\" DevicePath \"\"" Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.828835 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-zlhgv" Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.833175 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-wzrkr" Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.852915 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb92ada7-9b41-44b6-a299-e9a9a2b5f257-operator-scripts\") pod \"eb92ada7-9b41-44b6-a299-e9a9a2b5f257\" (UID: \"eb92ada7-9b41-44b6-a299-e9a9a2b5f257\") " Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.853565 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb92ada7-9b41-44b6-a299-e9a9a2b5f257-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eb92ada7-9b41-44b6-a299-e9a9a2b5f257" (UID: "eb92ada7-9b41-44b6-a299-e9a9a2b5f257"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.853823 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmn8k\" (UniqueName: \"kubernetes.io/projected/eb92ada7-9b41-44b6-a299-e9a9a2b5f257-kube-api-access-mmn8k\") pod \"eb92ada7-9b41-44b6-a299-e9a9a2b5f257\" (UID: \"eb92ada7-9b41-44b6-a299-e9a9a2b5f257\") " Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.854032 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4v5z\" (UniqueName: \"kubernetes.io/projected/2868caf1-998c-44de-b9c6-c3ad464c4f7f-kube-api-access-p4v5z\") pod \"2868caf1-998c-44de-b9c6-c3ad464c4f7f\" (UID: \"2868caf1-998c-44de-b9c6-c3ad464c4f7f\") " Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.854184 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2868caf1-998c-44de-b9c6-c3ad464c4f7f-operator-scripts\") pod \"2868caf1-998c-44de-b9c6-c3ad464c4f7f\" (UID: \"2868caf1-998c-44de-b9c6-c3ad464c4f7f\") " Feb 19 14:47:41 crc 
kubenswrapper[4861]: I0219 14:47:41.854608 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2868caf1-998c-44de-b9c6-c3ad464c4f7f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2868caf1-998c-44de-b9c6-c3ad464c4f7f" (UID: "2868caf1-998c-44de-b9c6-c3ad464c4f7f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.854651 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-04b6-account-create-update-pw2nv"
Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.855149 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2868caf1-998c-44de-b9c6-c3ad464c4f7f-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.855343 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb92ada7-9b41-44b6-a299-e9a9a2b5f257-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.856670 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb92ada7-9b41-44b6-a299-e9a9a2b5f257-kube-api-access-mmn8k" (OuterVolumeSpecName: "kube-api-access-mmn8k") pod "eb92ada7-9b41-44b6-a299-e9a9a2b5f257" (UID: "eb92ada7-9b41-44b6-a299-e9a9a2b5f257"). InnerVolumeSpecName "kube-api-access-mmn8k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.859383 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2868caf1-998c-44de-b9c6-c3ad464c4f7f-kube-api-access-p4v5z" (OuterVolumeSpecName: "kube-api-access-p4v5z") pod "2868caf1-998c-44de-b9c6-c3ad464c4f7f" (UID: "2868caf1-998c-44de-b9c6-c3ad464c4f7f"). InnerVolumeSpecName "kube-api-access-p4v5z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.868007 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5629-account-create-update-zdltx"
Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.870639 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-b22hw"
Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.955809 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bp5mj\" (UniqueName: \"kubernetes.io/projected/4c622f47-8fca-46f0-a55e-665ff2e9525b-kube-api-access-bp5mj\") pod \"4c622f47-8fca-46f0-a55e-665ff2e9525b\" (UID: \"4c622f47-8fca-46f0-a55e-665ff2e9525b\") "
Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.955859 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c622f47-8fca-46f0-a55e-665ff2e9525b-operator-scripts\") pod \"4c622f47-8fca-46f0-a55e-665ff2e9525b\" (UID: \"4c622f47-8fca-46f0-a55e-665ff2e9525b\") "
Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.955881 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3599e87c-f3bd-48ad-9d62-7cd4bc6d8037-operator-scripts\") pod \"3599e87c-f3bd-48ad-9d62-7cd4bc6d8037\" (UID: \"3599e87c-f3bd-48ad-9d62-7cd4bc6d8037\") "
Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.955902 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrtmr\" (UniqueName: \"kubernetes.io/projected/3599e87c-f3bd-48ad-9d62-7cd4bc6d8037-kube-api-access-rrtmr\") pod \"3599e87c-f3bd-48ad-9d62-7cd4bc6d8037\" (UID: \"3599e87c-f3bd-48ad-9d62-7cd4bc6d8037\") "
Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.955952 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bklx\" (UniqueName: \"kubernetes.io/projected/c831148c-6e67-4340-95f8-4017bc3de758-kube-api-access-4bklx\") pod \"c831148c-6e67-4340-95f8-4017bc3de758\" (UID: \"c831148c-6e67-4340-95f8-4017bc3de758\") "
Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.955977 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c831148c-6e67-4340-95f8-4017bc3de758-operator-scripts\") pod \"c831148c-6e67-4340-95f8-4017bc3de758\" (UID: \"c831148c-6e67-4340-95f8-4017bc3de758\") "
Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.956303 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmn8k\" (UniqueName: \"kubernetes.io/projected/eb92ada7-9b41-44b6-a299-e9a9a2b5f257-kube-api-access-mmn8k\") on node \"crc\" DevicePath \"\""
Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.956318 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4v5z\" (UniqueName: \"kubernetes.io/projected/2868caf1-998c-44de-b9c6-c3ad464c4f7f-kube-api-access-p4v5z\") on node \"crc\" DevicePath \"\""
Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.956416 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c622f47-8fca-46f0-a55e-665ff2e9525b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c622f47-8fca-46f0-a55e-665ff2e9525b" (UID: "4c622f47-8fca-46f0-a55e-665ff2e9525b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.956623 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3599e87c-f3bd-48ad-9d62-7cd4bc6d8037-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3599e87c-f3bd-48ad-9d62-7cd4bc6d8037" (UID: "3599e87c-f3bd-48ad-9d62-7cd4bc6d8037"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.956643 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c831148c-6e67-4340-95f8-4017bc3de758-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c831148c-6e67-4340-95f8-4017bc3de758" (UID: "c831148c-6e67-4340-95f8-4017bc3de758"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.959591 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c622f47-8fca-46f0-a55e-665ff2e9525b-kube-api-access-bp5mj" (OuterVolumeSpecName: "kube-api-access-bp5mj") pod "4c622f47-8fca-46f0-a55e-665ff2e9525b" (UID: "4c622f47-8fca-46f0-a55e-665ff2e9525b"). InnerVolumeSpecName "kube-api-access-bp5mj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.960207 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c831148c-6e67-4340-95f8-4017bc3de758-kube-api-access-4bklx" (OuterVolumeSpecName: "kube-api-access-4bklx") pod "c831148c-6e67-4340-95f8-4017bc3de758" (UID: "c831148c-6e67-4340-95f8-4017bc3de758"). InnerVolumeSpecName "kube-api-access-4bklx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 14:47:41 crc kubenswrapper[4861]: I0219 14:47:41.961542 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3599e87c-f3bd-48ad-9d62-7cd4bc6d8037-kube-api-access-rrtmr" (OuterVolumeSpecName: "kube-api-access-rrtmr") pod "3599e87c-f3bd-48ad-9d62-7cd4bc6d8037" (UID: "3599e87c-f3bd-48ad-9d62-7cd4bc6d8037"). InnerVolumeSpecName "kube-api-access-rrtmr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 14:47:42 crc kubenswrapper[4861]: I0219 14:47:42.058834 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bp5mj\" (UniqueName: \"kubernetes.io/projected/4c622f47-8fca-46f0-a55e-665ff2e9525b-kube-api-access-bp5mj\") on node \"crc\" DevicePath \"\""
Feb 19 14:47:42 crc kubenswrapper[4861]: I0219 14:47:42.058883 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c622f47-8fca-46f0-a55e-665ff2e9525b-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 14:47:42 crc kubenswrapper[4861]: I0219 14:47:42.058903 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3599e87c-f3bd-48ad-9d62-7cd4bc6d8037-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 14:47:42 crc kubenswrapper[4861]: I0219 14:47:42.058924 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrtmr\" (UniqueName: \"kubernetes.io/projected/3599e87c-f3bd-48ad-9d62-7cd4bc6d8037-kube-api-access-rrtmr\") on node \"crc\" DevicePath \"\""
Feb 19 14:47:42 crc kubenswrapper[4861]: I0219 14:47:42.058941 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bklx\" (UniqueName: \"kubernetes.io/projected/c831148c-6e67-4340-95f8-4017bc3de758-kube-api-access-4bklx\") on node \"crc\" DevicePath \"\""
Feb 19 14:47:42 crc kubenswrapper[4861]: I0219 14:47:42.058955 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c831148c-6e67-4340-95f8-4017bc3de758-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 14:47:42 crc kubenswrapper[4861]: I0219 14:47:42.219627 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-b22hw" event={"ID":"4c622f47-8fca-46f0-a55e-665ff2e9525b","Type":"ContainerDied","Data":"f8247efcd434c0b42d3fb947589b50d1f2b685855a484a9f5f6e2b3d0c7dba53"}
Feb 19 14:47:42 crc kubenswrapper[4861]: I0219 14:47:42.219685 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8247efcd434c0b42d3fb947589b50d1f2b685855a484a9f5f6e2b3d0c7dba53"
Feb 19 14:47:42 crc kubenswrapper[4861]: I0219 14:47:42.219643 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-b22hw"
Feb 19 14:47:42 crc kubenswrapper[4861]: I0219 14:47:42.221871 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-04b6-account-create-update-pw2nv" event={"ID":"c831148c-6e67-4340-95f8-4017bc3de758","Type":"ContainerDied","Data":"c3637cf88c4682696ee17d3474f28c1ee33c37c26e640f7ed0f2c5133d9e1165"}
Feb 19 14:47:42 crc kubenswrapper[4861]: I0219 14:47:42.221910 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3637cf88c4682696ee17d3474f28c1ee33c37c26e640f7ed0f2c5133d9e1165"
Feb 19 14:47:42 crc kubenswrapper[4861]: I0219 14:47:42.222398 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-04b6-account-create-update-pw2nv"
Feb 19 14:47:42 crc kubenswrapper[4861]: I0219 14:47:42.223939 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5629-account-create-update-zdltx"
Feb 19 14:47:42 crc kubenswrapper[4861]: I0219 14:47:42.223969 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5629-account-create-update-zdltx" event={"ID":"3599e87c-f3bd-48ad-9d62-7cd4bc6d8037","Type":"ContainerDied","Data":"69f9b10dae875fe8b283b950abc2ee7c7cd218b17a22061e52939ff99b1861d7"}
Feb 19 14:47:42 crc kubenswrapper[4861]: I0219 14:47:42.224130 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69f9b10dae875fe8b283b950abc2ee7c7cd218b17a22061e52939ff99b1861d7"
Feb 19 14:47:42 crc kubenswrapper[4861]: I0219 14:47:42.225646 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-zlhgv" event={"ID":"2868caf1-998c-44de-b9c6-c3ad464c4f7f","Type":"ContainerDied","Data":"1594091e4eecb5f0696573ee678e087f3e7064af5c3c8353159d3d2855acd82c"}
Feb 19 14:47:42 crc kubenswrapper[4861]: I0219 14:47:42.225677 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1594091e4eecb5f0696573ee678e087f3e7064af5c3c8353159d3d2855acd82c"
Feb 19 14:47:42 crc kubenswrapper[4861]: I0219 14:47:42.225730 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-zlhgv"
Feb 19 14:47:42 crc kubenswrapper[4861]: I0219 14:47:42.230461 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wzrkr" event={"ID":"eb92ada7-9b41-44b6-a299-e9a9a2b5f257","Type":"ContainerDied","Data":"03ce85f366a0872d47aca9da8b500f65ebd0614ab4337a7815bdb013dee7e182"}
Feb 19 14:47:42 crc kubenswrapper[4861]: I0219 14:47:42.230491 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03ce85f366a0872d47aca9da8b500f65ebd0614ab4337a7815bdb013dee7e182"
Feb 19 14:47:42 crc kubenswrapper[4861]: I0219 14:47:42.230564 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wzrkr"
Feb 19 14:47:42 crc kubenswrapper[4861]: I0219 14:47:42.234127 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9410-account-create-update-hvc27" event={"ID":"55e0f064-3c37-4b96-abc3-46a9bcc19c5a","Type":"ContainerDied","Data":"fdd731a443a02916fe8be5dbebce7e67f464ab97185f5a284b7c4d20e420e77d"}
Feb 19 14:47:42 crc kubenswrapper[4861]: I0219 14:47:42.234199 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdd731a443a02916fe8be5dbebce7e67f464ab97185f5a284b7c4d20e420e77d"
Feb 19 14:47:42 crc kubenswrapper[4861]: I0219 14:47:42.234347 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9410-account-create-update-hvc27"
Feb 19 14:47:43 crc kubenswrapper[4861]: I0219 14:47:43.872471 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-27wrk"]
Feb 19 14:47:43 crc kubenswrapper[4861]: E0219 14:47:43.873245 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c831148c-6e67-4340-95f8-4017bc3de758" containerName="mariadb-account-create-update"
Feb 19 14:47:43 crc kubenswrapper[4861]: I0219 14:47:43.873261 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c831148c-6e67-4340-95f8-4017bc3de758" containerName="mariadb-account-create-update"
Feb 19 14:47:43 crc kubenswrapper[4861]: E0219 14:47:43.873283 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c622f47-8fca-46f0-a55e-665ff2e9525b" containerName="mariadb-database-create"
Feb 19 14:47:43 crc kubenswrapper[4861]: I0219 14:47:43.873292 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c622f47-8fca-46f0-a55e-665ff2e9525b" containerName="mariadb-database-create"
Feb 19 14:47:43 crc kubenswrapper[4861]: E0219 14:47:43.873305 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55e0f064-3c37-4b96-abc3-46a9bcc19c5a" containerName="mariadb-account-create-update"
Feb 19 14:47:43 crc kubenswrapper[4861]: I0219 14:47:43.873313 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="55e0f064-3c37-4b96-abc3-46a9bcc19c5a" containerName="mariadb-account-create-update"
Feb 19 14:47:43 crc kubenswrapper[4861]: E0219 14:47:43.873330 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2868caf1-998c-44de-b9c6-c3ad464c4f7f" containerName="mariadb-database-create"
Feb 19 14:47:43 crc kubenswrapper[4861]: I0219 14:47:43.873338 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2868caf1-998c-44de-b9c6-c3ad464c4f7f" containerName="mariadb-database-create"
Feb 19 14:47:43 crc kubenswrapper[4861]: E0219 14:47:43.873359 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb92ada7-9b41-44b6-a299-e9a9a2b5f257" containerName="mariadb-database-create"
Feb 19 14:47:43 crc kubenswrapper[4861]: I0219 14:47:43.873366 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb92ada7-9b41-44b6-a299-e9a9a2b5f257" containerName="mariadb-database-create"
Feb 19 14:47:43 crc kubenswrapper[4861]: E0219 14:47:43.873382 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3599e87c-f3bd-48ad-9d62-7cd4bc6d8037" containerName="mariadb-account-create-update"
Feb 19 14:47:43 crc kubenswrapper[4861]: I0219 14:47:43.873389 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3599e87c-f3bd-48ad-9d62-7cd4bc6d8037" containerName="mariadb-account-create-update"
Feb 19 14:47:43 crc kubenswrapper[4861]: I0219 14:47:43.873601 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="55e0f064-3c37-4b96-abc3-46a9bcc19c5a" containerName="mariadb-account-create-update"
Feb 19 14:47:43 crc kubenswrapper[4861]: I0219 14:47:43.873619 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb92ada7-9b41-44b6-a299-e9a9a2b5f257" containerName="mariadb-database-create"
Feb 19 14:47:43 crc kubenswrapper[4861]: I0219 14:47:43.873634 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="c831148c-6e67-4340-95f8-4017bc3de758" containerName="mariadb-account-create-update"
Feb 19 14:47:43 crc kubenswrapper[4861]: I0219 14:47:43.873646 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c622f47-8fca-46f0-a55e-665ff2e9525b" containerName="mariadb-database-create"
Feb 19 14:47:43 crc kubenswrapper[4861]: I0219 14:47:43.873666 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="3599e87c-f3bd-48ad-9d62-7cd4bc6d8037" containerName="mariadb-account-create-update"
Feb 19 14:47:43 crc kubenswrapper[4861]: I0219 14:47:43.873679 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2868caf1-998c-44de-b9c6-c3ad464c4f7f" containerName="mariadb-database-create"
Feb 19 14:47:43 crc kubenswrapper[4861]: I0219 14:47:43.874376 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-27wrk"
Feb 19 14:47:43 crc kubenswrapper[4861]: I0219 14:47:43.875941 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-88hss"
Feb 19 14:47:43 crc kubenswrapper[4861]: I0219 14:47:43.876402 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 19 14:47:43 crc kubenswrapper[4861]: I0219 14:47:43.876605 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Feb 19 14:47:43 crc kubenswrapper[4861]: I0219 14:47:43.888938 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-27wrk"]
Feb 19 14:47:43 crc kubenswrapper[4861]: I0219 14:47:43.919211 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7ftr\" (UniqueName: \"kubernetes.io/projected/b9064be2-3d5c-4d2d-88f0-4873a276ebd6-kube-api-access-g7ftr\") pod \"nova-cell0-conductor-db-sync-27wrk\" (UID: \"b9064be2-3d5c-4d2d-88f0-4873a276ebd6\") " pod="openstack/nova-cell0-conductor-db-sync-27wrk"
Feb 19 14:47:43 crc kubenswrapper[4861]: I0219 14:47:43.919281 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9064be2-3d5c-4d2d-88f0-4873a276ebd6-scripts\") pod \"nova-cell0-conductor-db-sync-27wrk\" (UID: \"b9064be2-3d5c-4d2d-88f0-4873a276ebd6\") " pod="openstack/nova-cell0-conductor-db-sync-27wrk"
Feb 19 14:47:43 crc kubenswrapper[4861]: I0219 14:47:43.919404 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9064be2-3d5c-4d2d-88f0-4873a276ebd6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-27wrk\" (UID: \"b9064be2-3d5c-4d2d-88f0-4873a276ebd6\") " pod="openstack/nova-cell0-conductor-db-sync-27wrk"
Feb 19 14:47:43 crc kubenswrapper[4861]: I0219 14:47:43.919536 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9064be2-3d5c-4d2d-88f0-4873a276ebd6-config-data\") pod \"nova-cell0-conductor-db-sync-27wrk\" (UID: \"b9064be2-3d5c-4d2d-88f0-4873a276ebd6\") " pod="openstack/nova-cell0-conductor-db-sync-27wrk"
Feb 19 14:47:44 crc kubenswrapper[4861]: I0219 14:47:44.021119 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7ftr\" (UniqueName: \"kubernetes.io/projected/b9064be2-3d5c-4d2d-88f0-4873a276ebd6-kube-api-access-g7ftr\") pod \"nova-cell0-conductor-db-sync-27wrk\" (UID: \"b9064be2-3d5c-4d2d-88f0-4873a276ebd6\") " pod="openstack/nova-cell0-conductor-db-sync-27wrk"
Feb 19 14:47:44 crc kubenswrapper[4861]: I0219 14:47:44.021172 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9064be2-3d5c-4d2d-88f0-4873a276ebd6-scripts\") pod \"nova-cell0-conductor-db-sync-27wrk\" (UID: \"b9064be2-3d5c-4d2d-88f0-4873a276ebd6\") " pod="openstack/nova-cell0-conductor-db-sync-27wrk"
Feb 19 14:47:44 crc kubenswrapper[4861]: I0219 14:47:44.021235 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9064be2-3d5c-4d2d-88f0-4873a276ebd6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-27wrk\" (UID: \"b9064be2-3d5c-4d2d-88f0-4873a276ebd6\") " pod="openstack/nova-cell0-conductor-db-sync-27wrk"
Feb 19 14:47:44 crc kubenswrapper[4861]: I0219 14:47:44.021294 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9064be2-3d5c-4d2d-88f0-4873a276ebd6-config-data\") pod \"nova-cell0-conductor-db-sync-27wrk\" (UID: \"b9064be2-3d5c-4d2d-88f0-4873a276ebd6\") " pod="openstack/nova-cell0-conductor-db-sync-27wrk"
Feb 19 14:47:44 crc kubenswrapper[4861]: I0219 14:47:44.044004 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9064be2-3d5c-4d2d-88f0-4873a276ebd6-scripts\") pod \"nova-cell0-conductor-db-sync-27wrk\" (UID: \"b9064be2-3d5c-4d2d-88f0-4873a276ebd6\") " pod="openstack/nova-cell0-conductor-db-sync-27wrk"
Feb 19 14:47:44 crc kubenswrapper[4861]: I0219 14:47:44.044104 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9064be2-3d5c-4d2d-88f0-4873a276ebd6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-27wrk\" (UID: \"b9064be2-3d5c-4d2d-88f0-4873a276ebd6\") " pod="openstack/nova-cell0-conductor-db-sync-27wrk"
Feb 19 14:47:44 crc kubenswrapper[4861]: I0219 14:47:44.044360 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9064be2-3d5c-4d2d-88f0-4873a276ebd6-config-data\") pod \"nova-cell0-conductor-db-sync-27wrk\" (UID: \"b9064be2-3d5c-4d2d-88f0-4873a276ebd6\") " pod="openstack/nova-cell0-conductor-db-sync-27wrk"
Feb 19 14:47:44 crc kubenswrapper[4861]: I0219 14:47:44.052220 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7ftr\" (UniqueName: \"kubernetes.io/projected/b9064be2-3d5c-4d2d-88f0-4873a276ebd6-kube-api-access-g7ftr\") pod \"nova-cell0-conductor-db-sync-27wrk\" (UID: \"b9064be2-3d5c-4d2d-88f0-4873a276ebd6\") " pod="openstack/nova-cell0-conductor-db-sync-27wrk"
Feb 19 14:47:44 crc kubenswrapper[4861]: I0219 14:47:44.229927 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-27wrk"
Feb 19 14:47:44 crc kubenswrapper[4861]: I0219 14:47:44.688130 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-27wrk"]
Feb 19 14:47:45 crc kubenswrapper[4861]: I0219 14:47:45.270762 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-27wrk" event={"ID":"b9064be2-3d5c-4d2d-88f0-4873a276ebd6","Type":"ContainerStarted","Data":"211bd0f97186a58151749062a4af4ff2be57d07e7bd3f5eda9d106f9045b6876"}
Feb 19 14:47:45 crc kubenswrapper[4861]: I0219 14:47:45.271065 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-27wrk" event={"ID":"b9064be2-3d5c-4d2d-88f0-4873a276ebd6","Type":"ContainerStarted","Data":"93b252b180cf6472f56a16f835b4d60f65f7cf8c08e7bdd60c3ea77a9af849fc"}
Feb 19 14:47:45 crc kubenswrapper[4861]: I0219 14:47:45.290652 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-27wrk" podStartSLOduration=2.290632709 podStartE2EDuration="2.290632709s" podCreationTimestamp="2026-02-19 14:47:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:47:45.286639112 +0000 UTC m=+5879.947742370" watchObservedRunningTime="2026-02-19 14:47:45.290632709 +0000 UTC m=+5879.951735947"
Feb 19 14:47:50 crc kubenswrapper[4861]: I0219 14:47:50.360772 4861 generic.go:334] "Generic (PLEG): container finished" podID="b9064be2-3d5c-4d2d-88f0-4873a276ebd6" containerID="211bd0f97186a58151749062a4af4ff2be57d07e7bd3f5eda9d106f9045b6876" exitCode=0
Feb 19 14:47:50 crc kubenswrapper[4861]: I0219 14:47:50.360861 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-27wrk" event={"ID":"b9064be2-3d5c-4d2d-88f0-4873a276ebd6","Type":"ContainerDied","Data":"211bd0f97186a58151749062a4af4ff2be57d07e7bd3f5eda9d106f9045b6876"}
Feb 19 14:47:51 crc kubenswrapper[4861]: I0219 14:47:51.789016 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-27wrk"
Feb 19 14:47:51 crc kubenswrapper[4861]: I0219 14:47:51.877522 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7ftr\" (UniqueName: \"kubernetes.io/projected/b9064be2-3d5c-4d2d-88f0-4873a276ebd6-kube-api-access-g7ftr\") pod \"b9064be2-3d5c-4d2d-88f0-4873a276ebd6\" (UID: \"b9064be2-3d5c-4d2d-88f0-4873a276ebd6\") "
Feb 19 14:47:51 crc kubenswrapper[4861]: I0219 14:47:51.877620 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9064be2-3d5c-4d2d-88f0-4873a276ebd6-config-data\") pod \"b9064be2-3d5c-4d2d-88f0-4873a276ebd6\" (UID: \"b9064be2-3d5c-4d2d-88f0-4873a276ebd6\") "
Feb 19 14:47:51 crc kubenswrapper[4861]: I0219 14:47:51.877688 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9064be2-3d5c-4d2d-88f0-4873a276ebd6-combined-ca-bundle\") pod \"b9064be2-3d5c-4d2d-88f0-4873a276ebd6\" (UID: \"b9064be2-3d5c-4d2d-88f0-4873a276ebd6\") "
Feb 19 14:47:51 crc kubenswrapper[4861]: I0219 14:47:51.877803 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9064be2-3d5c-4d2d-88f0-4873a276ebd6-scripts\") pod \"b9064be2-3d5c-4d2d-88f0-4873a276ebd6\" (UID: \"b9064be2-3d5c-4d2d-88f0-4873a276ebd6\") "
Feb 19 14:47:51 crc kubenswrapper[4861]: I0219 14:47:51.883032 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9064be2-3d5c-4d2d-88f0-4873a276ebd6-scripts" (OuterVolumeSpecName: "scripts") pod "b9064be2-3d5c-4d2d-88f0-4873a276ebd6" (UID: "b9064be2-3d5c-4d2d-88f0-4873a276ebd6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 14:47:51 crc kubenswrapper[4861]: I0219 14:47:51.883451 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9064be2-3d5c-4d2d-88f0-4873a276ebd6-kube-api-access-g7ftr" (OuterVolumeSpecName: "kube-api-access-g7ftr") pod "b9064be2-3d5c-4d2d-88f0-4873a276ebd6" (UID: "b9064be2-3d5c-4d2d-88f0-4873a276ebd6"). InnerVolumeSpecName "kube-api-access-g7ftr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 14:47:51 crc kubenswrapper[4861]: I0219 14:47:51.905748 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9064be2-3d5c-4d2d-88f0-4873a276ebd6-config-data" (OuterVolumeSpecName: "config-data") pod "b9064be2-3d5c-4d2d-88f0-4873a276ebd6" (UID: "b9064be2-3d5c-4d2d-88f0-4873a276ebd6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 14:47:51 crc kubenswrapper[4861]: I0219 14:47:51.912330 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9064be2-3d5c-4d2d-88f0-4873a276ebd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9064be2-3d5c-4d2d-88f0-4873a276ebd6" (UID: "b9064be2-3d5c-4d2d-88f0-4873a276ebd6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 14:47:51 crc kubenswrapper[4861]: I0219 14:47:51.979510 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9064be2-3d5c-4d2d-88f0-4873a276ebd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 14:47:51 crc kubenswrapper[4861]: I0219 14:47:51.979560 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9064be2-3d5c-4d2d-88f0-4873a276ebd6-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 14:47:51 crc kubenswrapper[4861]: I0219 14:47:51.979579 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7ftr\" (UniqueName: \"kubernetes.io/projected/b9064be2-3d5c-4d2d-88f0-4873a276ebd6-kube-api-access-g7ftr\") on node \"crc\" DevicePath \"\""
Feb 19 14:47:51 crc kubenswrapper[4861]: I0219 14:47:51.979601 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9064be2-3d5c-4d2d-88f0-4873a276ebd6-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 14:47:52 crc kubenswrapper[4861]: I0219 14:47:52.381380 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-27wrk" event={"ID":"b9064be2-3d5c-4d2d-88f0-4873a276ebd6","Type":"ContainerDied","Data":"93b252b180cf6472f56a16f835b4d60f65f7cf8c08e7bdd60c3ea77a9af849fc"}
Feb 19 14:47:52 crc kubenswrapper[4861]: I0219 14:47:52.381452 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93b252b180cf6472f56a16f835b4d60f65f7cf8c08e7bdd60c3ea77a9af849fc"
Feb 19 14:47:52 crc kubenswrapper[4861]: I0219 14:47:52.381515 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-27wrk"
Feb 19 14:47:52 crc kubenswrapper[4861]: I0219 14:47:52.487339 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 19 14:47:52 crc kubenswrapper[4861]: E0219 14:47:52.487841 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9064be2-3d5c-4d2d-88f0-4873a276ebd6" containerName="nova-cell0-conductor-db-sync"
Feb 19 14:47:52 crc kubenswrapper[4861]: I0219 14:47:52.487866 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9064be2-3d5c-4d2d-88f0-4873a276ebd6" containerName="nova-cell0-conductor-db-sync"
Feb 19 14:47:52 crc kubenswrapper[4861]: I0219 14:47:52.488101 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9064be2-3d5c-4d2d-88f0-4873a276ebd6" containerName="nova-cell0-conductor-db-sync"
Feb 19 14:47:52 crc kubenswrapper[4861]: I0219 14:47:52.488926 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 19 14:47:52 crc kubenswrapper[4861]: I0219 14:47:52.492152 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 19 14:47:52 crc kubenswrapper[4861]: I0219 14:47:52.495011 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-88hss"
Feb 19 14:47:52 crc kubenswrapper[4861]: I0219 14:47:52.514602 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 19 14:47:52 crc kubenswrapper[4861]: I0219 14:47:52.591961 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwbq5\" (UniqueName: \"kubernetes.io/projected/31da7774-969b-45e2-ba07-b6dbdd3a97d7-kube-api-access-hwbq5\") pod \"nova-cell0-conductor-0\" (UID: \"31da7774-969b-45e2-ba07-b6dbdd3a97d7\") " pod="openstack/nova-cell0-conductor-0"
Feb 19 14:47:52 crc kubenswrapper[4861]: I0219 14:47:52.592024 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31da7774-969b-45e2-ba07-b6dbdd3a97d7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"31da7774-969b-45e2-ba07-b6dbdd3a97d7\") " pod="openstack/nova-cell0-conductor-0"
Feb 19 14:47:52 crc kubenswrapper[4861]: I0219 14:47:52.592361 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31da7774-969b-45e2-ba07-b6dbdd3a97d7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"31da7774-969b-45e2-ba07-b6dbdd3a97d7\") " pod="openstack/nova-cell0-conductor-0"
Feb 19 14:47:52 crc kubenswrapper[4861]: I0219 14:47:52.694729 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwbq5\" (UniqueName: \"kubernetes.io/projected/31da7774-969b-45e2-ba07-b6dbdd3a97d7-kube-api-access-hwbq5\") pod \"nova-cell0-conductor-0\" (UID: \"31da7774-969b-45e2-ba07-b6dbdd3a97d7\") " pod="openstack/nova-cell0-conductor-0"
Feb 19 14:47:52 crc kubenswrapper[4861]: I0219 14:47:52.694834 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31da7774-969b-45e2-ba07-b6dbdd3a97d7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"31da7774-969b-45e2-ba07-b6dbdd3a97d7\") " pod="openstack/nova-cell0-conductor-0"
Feb 19 14:47:52 crc kubenswrapper[4861]: I0219 14:47:52.695144 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31da7774-969b-45e2-ba07-b6dbdd3a97d7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"31da7774-969b-45e2-ba07-b6dbdd3a97d7\") " pod="openstack/nova-cell0-conductor-0"
Feb 19 14:47:52 crc kubenswrapper[4861]: I0219 14:47:52.702486 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31da7774-969b-45e2-ba07-b6dbdd3a97d7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"31da7774-969b-45e2-ba07-b6dbdd3a97d7\") " pod="openstack/nova-cell0-conductor-0"
Feb 19 14:47:52 crc kubenswrapper[4861]: I0219 14:47:52.707905 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31da7774-969b-45e2-ba07-b6dbdd3a97d7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"31da7774-969b-45e2-ba07-b6dbdd3a97d7\") " pod="openstack/nova-cell0-conductor-0"
Feb 19 14:47:52 crc kubenswrapper[4861]: I0219 14:47:52.716478 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwbq5\" (UniqueName: \"kubernetes.io/projected/31da7774-969b-45e2-ba07-b6dbdd3a97d7-kube-api-access-hwbq5\") pod \"nova-cell0-conductor-0\" (UID: \"31da7774-969b-45e2-ba07-b6dbdd3a97d7\") " pod="openstack/nova-cell0-conductor-0"
Feb 19 14:47:52 crc kubenswrapper[4861]: I0219 14:47:52.817154 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 19 14:47:53 crc kubenswrapper[4861]: I0219 14:47:53.282225 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 19 14:47:53 crc kubenswrapper[4861]: I0219 14:47:53.411010 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"31da7774-969b-45e2-ba07-b6dbdd3a97d7","Type":"ContainerStarted","Data":"a68c548aae866ae7afd3c3a5f0f0f90298c2538df99e94de3da5a3fc46c35cd3"}
Feb 19 14:47:54 crc kubenswrapper[4861]: I0219 14:47:54.434270 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"31da7774-969b-45e2-ba07-b6dbdd3a97d7","Type":"ContainerStarted","Data":"5faea9d8129b450ddc3f1ba1a103f983d654d8f56a3e5746dedabffb98051355"}
Feb 19 14:47:54 crc kubenswrapper[4861]: I0219 14:47:54.435610 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Feb 19 14:47:54 crc kubenswrapper[4861]: I0219 14:47:54.465390 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.46536967 podStartE2EDuration="2.46536967s" podCreationTimestamp="2026-02-19 14:47:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:47:54.455641779 +0000 UTC m=+5889.116745057" watchObservedRunningTime="2026-02-19 14:47:54.46536967 +0000 UTC m=+5889.126472918"
Feb 19 14:48:02 crc kubenswrapper[4861]: I0219 14:48:02.854879 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.416692 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-wmlzl"]
Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.418249 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-wmlzl"
Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.421277 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.422363 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.426340 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-wmlzl"]
Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.550255 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa46daf6-db14-4383-8df7-79bcbc7e8cac-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-wmlzl\" (UID: \"fa46daf6-db14-4383-8df7-79bcbc7e8cac\") " pod="openstack/nova-cell0-cell-mapping-wmlzl"
Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.550346 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa46daf6-db14-4383-8df7-79bcbc7e8cac-scripts\") pod \"nova-cell0-cell-mapping-wmlzl\" (UID: \"fa46daf6-db14-4383-8df7-79bcbc7e8cac\") " pod="openstack/nova-cell0-cell-mapping-wmlzl"
Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.550441 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7npzv\" (UniqueName: \"kubernetes.io/projected/fa46daf6-db14-4383-8df7-79bcbc7e8cac-kube-api-access-7npzv\") pod \"nova-cell0-cell-mapping-wmlzl\" (UID: \"fa46daf6-db14-4383-8df7-79bcbc7e8cac\") " pod="openstack/nova-cell0-cell-mapping-wmlzl"
Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.550537 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa46daf6-db14-4383-8df7-79bcbc7e8cac-config-data\") pod \"nova-cell0-cell-mapping-wmlzl\" (UID: \"fa46daf6-db14-4383-8df7-79bcbc7e8cac\") " pod="openstack/nova-cell0-cell-mapping-wmlzl"
Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.620060 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.621172 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.629137 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.649464 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.655443 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa46daf6-db14-4383-8df7-79bcbc7e8cac-config-data\") pod \"nova-cell0-cell-mapping-wmlzl\" (UID: \"fa46daf6-db14-4383-8df7-79bcbc7e8cac\") " pod="openstack/nova-cell0-cell-mapping-wmlzl"
Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.655579 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa46daf6-db14-4383-8df7-79bcbc7e8cac-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-wmlzl\" (UID: \"fa46daf6-db14-4383-8df7-79bcbc7e8cac\") " pod="openstack/nova-cell0-cell-mapping-wmlzl"
Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.655610 4861 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa46daf6-db14-4383-8df7-79bcbc7e8cac-scripts\") pod \"nova-cell0-cell-mapping-wmlzl\" (UID: \"fa46daf6-db14-4383-8df7-79bcbc7e8cac\") " pod="openstack/nova-cell0-cell-mapping-wmlzl" Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.655646 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7npzv\" (UniqueName: \"kubernetes.io/projected/fa46daf6-db14-4383-8df7-79bcbc7e8cac-kube-api-access-7npzv\") pod \"nova-cell0-cell-mapping-wmlzl\" (UID: \"fa46daf6-db14-4383-8df7-79bcbc7e8cac\") " pod="openstack/nova-cell0-cell-mapping-wmlzl" Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.661670 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa46daf6-db14-4383-8df7-79bcbc7e8cac-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-wmlzl\" (UID: \"fa46daf6-db14-4383-8df7-79bcbc7e8cac\") " pod="openstack/nova-cell0-cell-mapping-wmlzl" Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.661872 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa46daf6-db14-4383-8df7-79bcbc7e8cac-config-data\") pod \"nova-cell0-cell-mapping-wmlzl\" (UID: \"fa46daf6-db14-4383-8df7-79bcbc7e8cac\") " pod="openstack/nova-cell0-cell-mapping-wmlzl" Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.684713 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.685825 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.689574 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.690465 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa46daf6-db14-4383-8df7-79bcbc7e8cac-scripts\") pod \"nova-cell0-cell-mapping-wmlzl\" (UID: \"fa46daf6-db14-4383-8df7-79bcbc7e8cac\") " pod="openstack/nova-cell0-cell-mapping-wmlzl" Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.696872 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7npzv\" (UniqueName: \"kubernetes.io/projected/fa46daf6-db14-4383-8df7-79bcbc7e8cac-kube-api-access-7npzv\") pod \"nova-cell0-cell-mapping-wmlzl\" (UID: \"fa46daf6-db14-4383-8df7-79bcbc7e8cac\") " pod="openstack/nova-cell0-cell-mapping-wmlzl" Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.713604 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.751822 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-wmlzl" Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.787396 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5m78\" (UniqueName: \"kubernetes.io/projected/c24c234d-91f1-49c0-84a2-1dc1dc08486e-kube-api-access-f5m78\") pod \"nova-scheduler-0\" (UID: \"c24c234d-91f1-49c0-84a2-1dc1dc08486e\") " pod="openstack/nova-scheduler-0" Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.787713 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c24c234d-91f1-49c0-84a2-1dc1dc08486e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c24c234d-91f1-49c0-84a2-1dc1dc08486e\") " pod="openstack/nova-scheduler-0" Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.790446 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c24c234d-91f1-49c0-84a2-1dc1dc08486e-config-data\") pod \"nova-scheduler-0\" (UID: \"c24c234d-91f1-49c0-84a2-1dc1dc08486e\") " pod="openstack/nova-scheduler-0" Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.790479 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ntvx\" (UniqueName: \"kubernetes.io/projected/6960b5bc-d1ea-44c3-9c64-df99e679db45-kube-api-access-2ntvx\") pod \"nova-cell1-novncproxy-0\" (UID: \"6960b5bc-d1ea-44c3-9c64-df99e679db45\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.790524 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6960b5bc-d1ea-44c3-9c64-df99e679db45-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6960b5bc-d1ea-44c3-9c64-df99e679db45\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.790544 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6960b5bc-d1ea-44c3-9c64-df99e679db45-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6960b5bc-d1ea-44c3-9c64-df99e679db45\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.854486 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.856016 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.858812 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.880017 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.899656 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5m78\" (UniqueName: \"kubernetes.io/projected/c24c234d-91f1-49c0-84a2-1dc1dc08486e-kube-api-access-f5m78\") pod \"nova-scheduler-0\" (UID: \"c24c234d-91f1-49c0-84a2-1dc1dc08486e\") " pod="openstack/nova-scheduler-0" Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.899782 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c24c234d-91f1-49c0-84a2-1dc1dc08486e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c24c234d-91f1-49c0-84a2-1dc1dc08486e\") " pod="openstack/nova-scheduler-0" Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.899803 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/c24c234d-91f1-49c0-84a2-1dc1dc08486e-config-data\") pod \"nova-scheduler-0\" (UID: \"c24c234d-91f1-49c0-84a2-1dc1dc08486e\") " pod="openstack/nova-scheduler-0" Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.899821 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ntvx\" (UniqueName: \"kubernetes.io/projected/6960b5bc-d1ea-44c3-9c64-df99e679db45-kube-api-access-2ntvx\") pod \"nova-cell1-novncproxy-0\" (UID: \"6960b5bc-d1ea-44c3-9c64-df99e679db45\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.899856 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6960b5bc-d1ea-44c3-9c64-df99e679db45-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6960b5bc-d1ea-44c3-9c64-df99e679db45\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.899873 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6960b5bc-d1ea-44c3-9c64-df99e679db45-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6960b5bc-d1ea-44c3-9c64-df99e679db45\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.906688 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c24c234d-91f1-49c0-84a2-1dc1dc08486e-config-data\") pod \"nova-scheduler-0\" (UID: \"c24c234d-91f1-49c0-84a2-1dc1dc08486e\") " pod="openstack/nova-scheduler-0" Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.915455 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c24c234d-91f1-49c0-84a2-1dc1dc08486e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"c24c234d-91f1-49c0-84a2-1dc1dc08486e\") " pod="openstack/nova-scheduler-0" Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.915968 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6960b5bc-d1ea-44c3-9c64-df99e679db45-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6960b5bc-d1ea-44c3-9c64-df99e679db45\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.916127 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6960b5bc-d1ea-44c3-9c64-df99e679db45-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6960b5bc-d1ea-44c3-9c64-df99e679db45\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.936104 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.937760 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.938974 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5m78\" (UniqueName: \"kubernetes.io/projected/c24c234d-91f1-49c0-84a2-1dc1dc08486e-kube-api-access-f5m78\") pod \"nova-scheduler-0\" (UID: \"c24c234d-91f1-49c0-84a2-1dc1dc08486e\") " pod="openstack/nova-scheduler-0" Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.946055 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.953474 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ntvx\" (UniqueName: \"kubernetes.io/projected/6960b5bc-d1ea-44c3-9c64-df99e679db45-kube-api-access-2ntvx\") pod \"nova-cell1-novncproxy-0\" (UID: \"6960b5bc-d1ea-44c3-9c64-df99e679db45\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.963312 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:03 crc kubenswrapper[4861]: I0219 14:48:03.997207 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.002461 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.012076 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c4bc8756f-nmf6c"] Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.023100 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dd806ed-3e7f-4dda-9746-cfe23435a328-config-data\") pod \"nova-api-0\" (UID: \"6dd806ed-3e7f-4dda-9746-cfe23435a328\") " pod="openstack/nova-api-0" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.023246 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3859847c-c005-432e-bb70-93463fa68585-logs\") pod \"nova-metadata-0\" (UID: \"3859847c-c005-432e-bb70-93463fa68585\") " pod="openstack/nova-metadata-0" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.023331 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25hmq\" (UniqueName: \"kubernetes.io/projected/6dd806ed-3e7f-4dda-9746-cfe23435a328-kube-api-access-25hmq\") pod \"nova-api-0\" (UID: \"6dd806ed-3e7f-4dda-9746-cfe23435a328\") " pod="openstack/nova-api-0" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.023464 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3859847c-c005-432e-bb70-93463fa68585-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3859847c-c005-432e-bb70-93463fa68585\") " pod="openstack/nova-metadata-0" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.023491 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr57h\" (UniqueName: \"kubernetes.io/projected/3859847c-c005-432e-bb70-93463fa68585-kube-api-access-dr57h\") pod \"nova-metadata-0\" (UID: \"3859847c-c005-432e-bb70-93463fa68585\") " pod="openstack/nova-metadata-0" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.023513 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3859847c-c005-432e-bb70-93463fa68585-config-data\") pod \"nova-metadata-0\" (UID: \"3859847c-c005-432e-bb70-93463fa68585\") " pod="openstack/nova-metadata-0" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.023528 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dd806ed-3e7f-4dda-9746-cfe23435a328-logs\") pod \"nova-api-0\" (UID: \"6dd806ed-3e7f-4dda-9746-cfe23435a328\") " pod="openstack/nova-api-0" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.023626 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dd806ed-3e7f-4dda-9746-cfe23435a328-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6dd806ed-3e7f-4dda-9746-cfe23435a328\") " pod="openstack/nova-api-0" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.031892 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c4bc8756f-nmf6c"] Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.032100 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c4bc8756f-nmf6c" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.124823 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dd806ed-3e7f-4dda-9746-cfe23435a328-logs\") pod \"nova-api-0\" (UID: \"6dd806ed-3e7f-4dda-9746-cfe23435a328\") " pod="openstack/nova-api-0" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.125251 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24443d9b-b1c9-4647-8e75-918f50110f68-config\") pod \"dnsmasq-dns-c4bc8756f-nmf6c\" (UID: \"24443d9b-b1c9-4647-8e75-918f50110f68\") " pod="openstack/dnsmasq-dns-c4bc8756f-nmf6c" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.125281 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dd806ed-3e7f-4dda-9746-cfe23435a328-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6dd806ed-3e7f-4dda-9746-cfe23435a328\") " pod="openstack/nova-api-0" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.125313 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24443d9b-b1c9-4647-8e75-918f50110f68-ovsdbserver-sb\") pod \"dnsmasq-dns-c4bc8756f-nmf6c\" (UID: \"24443d9b-b1c9-4647-8e75-918f50110f68\") " pod="openstack/dnsmasq-dns-c4bc8756f-nmf6c" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.125399 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dd806ed-3e7f-4dda-9746-cfe23435a328-config-data\") pod \"nova-api-0\" (UID: \"6dd806ed-3e7f-4dda-9746-cfe23435a328\") " pod="openstack/nova-api-0" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.125493 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3859847c-c005-432e-bb70-93463fa68585-logs\") pod \"nova-metadata-0\" (UID: \"3859847c-c005-432e-bb70-93463fa68585\") " pod="openstack/nova-metadata-0" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.125521 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24443d9b-b1c9-4647-8e75-918f50110f68-dns-svc\") pod \"dnsmasq-dns-c4bc8756f-nmf6c\" (UID: \"24443d9b-b1c9-4647-8e75-918f50110f68\") " pod="openstack/dnsmasq-dns-c4bc8756f-nmf6c" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.125539 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2jdj\" (UniqueName: \"kubernetes.io/projected/24443d9b-b1c9-4647-8e75-918f50110f68-kube-api-access-q2jdj\") pod \"dnsmasq-dns-c4bc8756f-nmf6c\" (UID: \"24443d9b-b1c9-4647-8e75-918f50110f68\") " pod="openstack/dnsmasq-dns-c4bc8756f-nmf6c" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.125564 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24443d9b-b1c9-4647-8e75-918f50110f68-ovsdbserver-nb\") pod \"dnsmasq-dns-c4bc8756f-nmf6c\" (UID: \"24443d9b-b1c9-4647-8e75-918f50110f68\") " pod="openstack/dnsmasq-dns-c4bc8756f-nmf6c" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.125584 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25hmq\" (UniqueName: \"kubernetes.io/projected/6dd806ed-3e7f-4dda-9746-cfe23435a328-kube-api-access-25hmq\") pod \"nova-api-0\" (UID: \"6dd806ed-3e7f-4dda-9746-cfe23435a328\") " pod="openstack/nova-api-0" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.125622 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3859847c-c005-432e-bb70-93463fa68585-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3859847c-c005-432e-bb70-93463fa68585\") " pod="openstack/nova-metadata-0" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.125640 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr57h\" (UniqueName: \"kubernetes.io/projected/3859847c-c005-432e-bb70-93463fa68585-kube-api-access-dr57h\") pod \"nova-metadata-0\" (UID: \"3859847c-c005-432e-bb70-93463fa68585\") " pod="openstack/nova-metadata-0" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.125657 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3859847c-c005-432e-bb70-93463fa68585-config-data\") pod \"nova-metadata-0\" (UID: \"3859847c-c005-432e-bb70-93463fa68585\") " pod="openstack/nova-metadata-0" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.126749 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3859847c-c005-432e-bb70-93463fa68585-logs\") pod \"nova-metadata-0\" (UID: \"3859847c-c005-432e-bb70-93463fa68585\") " pod="openstack/nova-metadata-0" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.127023 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dd806ed-3e7f-4dda-9746-cfe23435a328-logs\") pod \"nova-api-0\" (UID: \"6dd806ed-3e7f-4dda-9746-cfe23435a328\") " pod="openstack/nova-api-0" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.157620 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25hmq\" (UniqueName: \"kubernetes.io/projected/6dd806ed-3e7f-4dda-9746-cfe23435a328-kube-api-access-25hmq\") pod \"nova-api-0\" (UID: \"6dd806ed-3e7f-4dda-9746-cfe23435a328\") " pod="openstack/nova-api-0" Feb 19 14:48:04 crc 
kubenswrapper[4861]: I0219 14:48:04.157615 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dd806ed-3e7f-4dda-9746-cfe23435a328-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6dd806ed-3e7f-4dda-9746-cfe23435a328\") " pod="openstack/nova-api-0" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.158200 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3859847c-c005-432e-bb70-93463fa68585-config-data\") pod \"nova-metadata-0\" (UID: \"3859847c-c005-432e-bb70-93463fa68585\") " pod="openstack/nova-metadata-0" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.159133 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr57h\" (UniqueName: \"kubernetes.io/projected/3859847c-c005-432e-bb70-93463fa68585-kube-api-access-dr57h\") pod \"nova-metadata-0\" (UID: \"3859847c-c005-432e-bb70-93463fa68585\") " pod="openstack/nova-metadata-0" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.158749 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dd806ed-3e7f-4dda-9746-cfe23435a328-config-data\") pod \"nova-api-0\" (UID: \"6dd806ed-3e7f-4dda-9746-cfe23435a328\") " pod="openstack/nova-api-0" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.161909 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3859847c-c005-432e-bb70-93463fa68585-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3859847c-c005-432e-bb70-93463fa68585\") " pod="openstack/nova-metadata-0" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.228402 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24443d9b-b1c9-4647-8e75-918f50110f68-dns-svc\") pod 
\"dnsmasq-dns-c4bc8756f-nmf6c\" (UID: \"24443d9b-b1c9-4647-8e75-918f50110f68\") " pod="openstack/dnsmasq-dns-c4bc8756f-nmf6c" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.228465 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2jdj\" (UniqueName: \"kubernetes.io/projected/24443d9b-b1c9-4647-8e75-918f50110f68-kube-api-access-q2jdj\") pod \"dnsmasq-dns-c4bc8756f-nmf6c\" (UID: \"24443d9b-b1c9-4647-8e75-918f50110f68\") " pod="openstack/dnsmasq-dns-c4bc8756f-nmf6c" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.228491 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24443d9b-b1c9-4647-8e75-918f50110f68-ovsdbserver-nb\") pod \"dnsmasq-dns-c4bc8756f-nmf6c\" (UID: \"24443d9b-b1c9-4647-8e75-918f50110f68\") " pod="openstack/dnsmasq-dns-c4bc8756f-nmf6c" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.228548 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24443d9b-b1c9-4647-8e75-918f50110f68-config\") pod \"dnsmasq-dns-c4bc8756f-nmf6c\" (UID: \"24443d9b-b1c9-4647-8e75-918f50110f68\") " pod="openstack/dnsmasq-dns-c4bc8756f-nmf6c" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.228572 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24443d9b-b1c9-4647-8e75-918f50110f68-ovsdbserver-sb\") pod \"dnsmasq-dns-c4bc8756f-nmf6c\" (UID: \"24443d9b-b1c9-4647-8e75-918f50110f68\") " pod="openstack/dnsmasq-dns-c4bc8756f-nmf6c" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.230189 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24443d9b-b1c9-4647-8e75-918f50110f68-ovsdbserver-sb\") pod \"dnsmasq-dns-c4bc8756f-nmf6c\" (UID: 
\"24443d9b-b1c9-4647-8e75-918f50110f68\") " pod="openstack/dnsmasq-dns-c4bc8756f-nmf6c" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.230751 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24443d9b-b1c9-4647-8e75-918f50110f68-dns-svc\") pod \"dnsmasq-dns-c4bc8756f-nmf6c\" (UID: \"24443d9b-b1c9-4647-8e75-918f50110f68\") " pod="openstack/dnsmasq-dns-c4bc8756f-nmf6c" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.236452 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24443d9b-b1c9-4647-8e75-918f50110f68-config\") pod \"dnsmasq-dns-c4bc8756f-nmf6c\" (UID: \"24443d9b-b1c9-4647-8e75-918f50110f68\") " pod="openstack/dnsmasq-dns-c4bc8756f-nmf6c" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.236693 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24443d9b-b1c9-4647-8e75-918f50110f68-ovsdbserver-nb\") pod \"dnsmasq-dns-c4bc8756f-nmf6c\" (UID: \"24443d9b-b1c9-4647-8e75-918f50110f68\") " pod="openstack/dnsmasq-dns-c4bc8756f-nmf6c" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.271685 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2jdj\" (UniqueName: \"kubernetes.io/projected/24443d9b-b1c9-4647-8e75-918f50110f68-kube-api-access-q2jdj\") pod \"dnsmasq-dns-c4bc8756f-nmf6c\" (UID: \"24443d9b-b1c9-4647-8e75-918f50110f68\") " pod="openstack/dnsmasq-dns-c4bc8756f-nmf6c" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.336698 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.360521 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.373758 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c4bc8756f-nmf6c" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.557329 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.743799 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xdc8j"] Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.745012 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xdc8j" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.747231 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.747408 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.759556 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-wmlzl"] Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.775301 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xdc8j"] Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.819462 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6960b5bc-d1ea-44c3-9c64-df99e679db45","Type":"ContainerStarted","Data":"89b134358e398b05c4ec16da9cef286a1d1f11ffea187e36fe7135f5db1d2cac"} Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.830941 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-wmlzl" 
event={"ID":"fa46daf6-db14-4383-8df7-79bcbc7e8cac","Type":"ContainerStarted","Data":"8cb5b014fd2b622a97ab5a3fe1f6918d407394dc7fb691b795d9e46def90520b"} Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.839432 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb8387f-bbdc-4001-af0d-c9aa517e829a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xdc8j\" (UID: \"5cb8387f-bbdc-4001-af0d-c9aa517e829a\") " pod="openstack/nova-cell1-conductor-db-sync-xdc8j" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.839474 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cb8387f-bbdc-4001-af0d-c9aa517e829a-scripts\") pod \"nova-cell1-conductor-db-sync-xdc8j\" (UID: \"5cb8387f-bbdc-4001-af0d-c9aa517e829a\") " pod="openstack/nova-cell1-conductor-db-sync-xdc8j" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.839543 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkkzp\" (UniqueName: \"kubernetes.io/projected/5cb8387f-bbdc-4001-af0d-c9aa517e829a-kube-api-access-hkkzp\") pod \"nova-cell1-conductor-db-sync-xdc8j\" (UID: \"5cb8387f-bbdc-4001-af0d-c9aa517e829a\") " pod="openstack/nova-cell1-conductor-db-sync-xdc8j" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.839569 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cb8387f-bbdc-4001-af0d-c9aa517e829a-config-data\") pod \"nova-cell1-conductor-db-sync-xdc8j\" (UID: \"5cb8387f-bbdc-4001-af0d-c9aa517e829a\") " pod="openstack/nova-cell1-conductor-db-sync-xdc8j" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.903920 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 14:48:04 crc 
kubenswrapper[4861]: I0219 14:48:04.924971 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.941356 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb8387f-bbdc-4001-af0d-c9aa517e829a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xdc8j\" (UID: \"5cb8387f-bbdc-4001-af0d-c9aa517e829a\") " pod="openstack/nova-cell1-conductor-db-sync-xdc8j" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.941399 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cb8387f-bbdc-4001-af0d-c9aa517e829a-scripts\") pod \"nova-cell1-conductor-db-sync-xdc8j\" (UID: \"5cb8387f-bbdc-4001-af0d-c9aa517e829a\") " pod="openstack/nova-cell1-conductor-db-sync-xdc8j" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.941493 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkkzp\" (UniqueName: \"kubernetes.io/projected/5cb8387f-bbdc-4001-af0d-c9aa517e829a-kube-api-access-hkkzp\") pod \"nova-cell1-conductor-db-sync-xdc8j\" (UID: \"5cb8387f-bbdc-4001-af0d-c9aa517e829a\") " pod="openstack/nova-cell1-conductor-db-sync-xdc8j" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.941513 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cb8387f-bbdc-4001-af0d-c9aa517e829a-config-data\") pod \"nova-cell1-conductor-db-sync-xdc8j\" (UID: \"5cb8387f-bbdc-4001-af0d-c9aa517e829a\") " pod="openstack/nova-cell1-conductor-db-sync-xdc8j" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.945180 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cb8387f-bbdc-4001-af0d-c9aa517e829a-scripts\") pod \"nova-cell1-conductor-db-sync-xdc8j\" 
(UID: \"5cb8387f-bbdc-4001-af0d-c9aa517e829a\") " pod="openstack/nova-cell1-conductor-db-sync-xdc8j" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.946770 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cb8387f-bbdc-4001-af0d-c9aa517e829a-config-data\") pod \"nova-cell1-conductor-db-sync-xdc8j\" (UID: \"5cb8387f-bbdc-4001-af0d-c9aa517e829a\") " pod="openstack/nova-cell1-conductor-db-sync-xdc8j" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.948550 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb8387f-bbdc-4001-af0d-c9aa517e829a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xdc8j\" (UID: \"5cb8387f-bbdc-4001-af0d-c9aa517e829a\") " pod="openstack/nova-cell1-conductor-db-sync-xdc8j" Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.954035 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c4bc8756f-nmf6c"] Feb 19 14:48:04 crc kubenswrapper[4861]: I0219 14:48:04.959492 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkkzp\" (UniqueName: \"kubernetes.io/projected/5cb8387f-bbdc-4001-af0d-c9aa517e829a-kube-api-access-hkkzp\") pod \"nova-cell1-conductor-db-sync-xdc8j\" (UID: \"5cb8387f-bbdc-4001-af0d-c9aa517e829a\") " pod="openstack/nova-cell1-conductor-db-sync-xdc8j" Feb 19 14:48:05 crc kubenswrapper[4861]: I0219 14:48:05.035338 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xdc8j" Feb 19 14:48:05 crc kubenswrapper[4861]: I0219 14:48:05.113955 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 14:48:05 crc kubenswrapper[4861]: W0219 14:48:05.119347 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3859847c_c005_432e_bb70_93463fa68585.slice/crio-a3ba98513a3ae9abd5bb6c61662576e793c37b0d8867b6c3ed48d610dcdba1c1 WatchSource:0}: Error finding container a3ba98513a3ae9abd5bb6c61662576e793c37b0d8867b6c3ed48d610dcdba1c1: Status 404 returned error can't find the container with id a3ba98513a3ae9abd5bb6c61662576e793c37b0d8867b6c3ed48d610dcdba1c1 Feb 19 14:48:05 crc kubenswrapper[4861]: I0219 14:48:05.577218 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xdc8j"] Feb 19 14:48:05 crc kubenswrapper[4861]: I0219 14:48:05.848237 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xdc8j" event={"ID":"5cb8387f-bbdc-4001-af0d-c9aa517e829a","Type":"ContainerStarted","Data":"193c83764bf942ade53aec265d703d3481a196a8905cd90c7cadfc6b28235bb7"} Feb 19 14:48:05 crc kubenswrapper[4861]: I0219 14:48:05.850123 4861 generic.go:334] "Generic (PLEG): container finished" podID="24443d9b-b1c9-4647-8e75-918f50110f68" containerID="3053c0c1cd5c1709ea3a705131d1c53648ddb9638fcc96e2095fcdaa32bc05bb" exitCode=0 Feb 19 14:48:05 crc kubenswrapper[4861]: I0219 14:48:05.850178 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4bc8756f-nmf6c" event={"ID":"24443d9b-b1c9-4647-8e75-918f50110f68","Type":"ContainerDied","Data":"3053c0c1cd5c1709ea3a705131d1c53648ddb9638fcc96e2095fcdaa32bc05bb"} Feb 19 14:48:05 crc kubenswrapper[4861]: I0219 14:48:05.850197 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4bc8756f-nmf6c" 
event={"ID":"24443d9b-b1c9-4647-8e75-918f50110f68","Type":"ContainerStarted","Data":"ebd64785ce3a8e89fd5dfacdd41fef889548fee1825000a47e7e051b1f2305c5"} Feb 19 14:48:05 crc kubenswrapper[4861]: I0219 14:48:05.854999 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6dd806ed-3e7f-4dda-9746-cfe23435a328","Type":"ContainerStarted","Data":"5663dde23cb61c47118659939929834ad12209b425dcf2fce1973dc510762348"} Feb 19 14:48:05 crc kubenswrapper[4861]: I0219 14:48:05.855038 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6dd806ed-3e7f-4dda-9746-cfe23435a328","Type":"ContainerStarted","Data":"a27376829787b31cbcad7eced047f8a2243ec070a245d9263934c2c1df28188e"} Feb 19 14:48:05 crc kubenswrapper[4861]: I0219 14:48:05.855050 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6dd806ed-3e7f-4dda-9746-cfe23435a328","Type":"ContainerStarted","Data":"b91173680c5396d407cab281f90f58422e01ce07b3879ad772d258adb2edcea6"} Feb 19 14:48:05 crc kubenswrapper[4861]: I0219 14:48:05.886446 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-wmlzl" event={"ID":"fa46daf6-db14-4383-8df7-79bcbc7e8cac","Type":"ContainerStarted","Data":"eea00f8a95bd2e50aebc8f8bacd0f8d43e3b775eb31231554a291504efd96a47"} Feb 19 14:48:05 crc kubenswrapper[4861]: I0219 14:48:05.893173 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c24c234d-91f1-49c0-84a2-1dc1dc08486e","Type":"ContainerStarted","Data":"9c7481924a221096998113353687850376d0e229375e64ceb566fe73aef681b5"} Feb 19 14:48:05 crc kubenswrapper[4861]: I0219 14:48:05.893238 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c24c234d-91f1-49c0-84a2-1dc1dc08486e","Type":"ContainerStarted","Data":"4d373f8d3134556173f090d0d58f49ec79cb77b7cc677e40771d57883bc021d4"} Feb 19 14:48:05 crc 
kubenswrapper[4861]: I0219 14:48:05.895204 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3859847c-c005-432e-bb70-93463fa68585","Type":"ContainerStarted","Data":"0cf01e0f3d70417e509f801b80590071bd600bf9c0f315a535bf1c7c0e56e739"} Feb 19 14:48:05 crc kubenswrapper[4861]: I0219 14:48:05.895225 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3859847c-c005-432e-bb70-93463fa68585","Type":"ContainerStarted","Data":"b297df9d420b80235b145efc9ca172cfc9489f9b1a0297673cc9c18145e69178"} Feb 19 14:48:05 crc kubenswrapper[4861]: I0219 14:48:05.895235 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3859847c-c005-432e-bb70-93463fa68585","Type":"ContainerStarted","Data":"a3ba98513a3ae9abd5bb6c61662576e793c37b0d8867b6c3ed48d610dcdba1c1"} Feb 19 14:48:05 crc kubenswrapper[4861]: I0219 14:48:05.903888 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6960b5bc-d1ea-44c3-9c64-df99e679db45","Type":"ContainerStarted","Data":"b1d12bab143094df8764ce5639b76ef753f556939b2a3dfc5116905254f165ae"} Feb 19 14:48:05 crc kubenswrapper[4861]: I0219 14:48:05.904261 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.904243574 podStartE2EDuration="2.904243574s" podCreationTimestamp="2026-02-19 14:48:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:48:05.902204289 +0000 UTC m=+5900.563307517" watchObservedRunningTime="2026-02-19 14:48:05.904243574 +0000 UTC m=+5900.565346802" Feb 19 14:48:05 crc kubenswrapper[4861]: I0219 14:48:05.924052 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.924034025 podStartE2EDuration="2.924034025s" 
podCreationTimestamp="2026-02-19 14:48:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:48:05.919877653 +0000 UTC m=+5900.580980881" watchObservedRunningTime="2026-02-19 14:48:05.924034025 +0000 UTC m=+5900.585137253" Feb 19 14:48:05 crc kubenswrapper[4861]: I0219 14:48:05.944247 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.944229966 podStartE2EDuration="2.944229966s" podCreationTimestamp="2026-02-19 14:48:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:48:05.940799354 +0000 UTC m=+5900.601902582" watchObservedRunningTime="2026-02-19 14:48:05.944229966 +0000 UTC m=+5900.605333194" Feb 19 14:48:05 crc kubenswrapper[4861]: I0219 14:48:05.970825 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-wmlzl" podStartSLOduration=2.970806358 podStartE2EDuration="2.970806358s" podCreationTimestamp="2026-02-19 14:48:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:48:05.953243318 +0000 UTC m=+5900.614346546" watchObservedRunningTime="2026-02-19 14:48:05.970806358 +0000 UTC m=+5900.631909586" Feb 19 14:48:05 crc kubenswrapper[4861]: I0219 14:48:05.977621 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.977606701 podStartE2EDuration="2.977606701s" podCreationTimestamp="2026-02-19 14:48:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:48:05.97683262 +0000 UTC m=+5900.637935848" watchObservedRunningTime="2026-02-19 14:48:05.977606701 +0000 UTC 
m=+5900.638709929" Feb 19 14:48:06 crc kubenswrapper[4861]: I0219 14:48:06.912464 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xdc8j" event={"ID":"5cb8387f-bbdc-4001-af0d-c9aa517e829a","Type":"ContainerStarted","Data":"96539a8b84e39cc7e392a1f33a0decc9c5af559d058701e46eaf6d886a719170"} Feb 19 14:48:06 crc kubenswrapper[4861]: I0219 14:48:06.914797 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4bc8756f-nmf6c" event={"ID":"24443d9b-b1c9-4647-8e75-918f50110f68","Type":"ContainerStarted","Data":"e64312c81fe4a0f2b62a73a06597e4cea8613eb134251b9df8b2b2746e4a2a48"} Feb 19 14:48:06 crc kubenswrapper[4861]: I0219 14:48:06.937795 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-xdc8j" podStartSLOduration=2.937772938 podStartE2EDuration="2.937772938s" podCreationTimestamp="2026-02-19 14:48:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:48:06.932157497 +0000 UTC m=+5901.593260725" watchObservedRunningTime="2026-02-19 14:48:06.937772938 +0000 UTC m=+5901.598876186" Feb 19 14:48:06 crc kubenswrapper[4861]: I0219 14:48:06.956099 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c4bc8756f-nmf6c" podStartSLOduration=3.956081449 podStartE2EDuration="3.956081449s" podCreationTimestamp="2026-02-19 14:48:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:48:06.950895089 +0000 UTC m=+5901.611998307" watchObservedRunningTime="2026-02-19 14:48:06.956081449 +0000 UTC m=+5901.617184677" Feb 19 14:48:07 crc kubenswrapper[4861]: I0219 14:48:07.924274 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c4bc8756f-nmf6c" Feb 19 14:48:08 crc 
kubenswrapper[4861]: I0219 14:48:08.601705 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 14:48:08 crc kubenswrapper[4861]: I0219 14:48:08.602318 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3859847c-c005-432e-bb70-93463fa68585" containerName="nova-metadata-metadata" containerID="cri-o://0cf01e0f3d70417e509f801b80590071bd600bf9c0f315a535bf1c7c0e56e739" gracePeriod=30 Feb 19 14:48:08 crc kubenswrapper[4861]: I0219 14:48:08.602338 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3859847c-c005-432e-bb70-93463fa68585" containerName="nova-metadata-log" containerID="cri-o://b297df9d420b80235b145efc9ca172cfc9489f9b1a0297673cc9c18145e69178" gracePeriod=30 Feb 19 14:48:08 crc kubenswrapper[4861]: I0219 14:48:08.633189 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 14:48:08 crc kubenswrapper[4861]: I0219 14:48:08.633590 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="6960b5bc-d1ea-44c3-9c64-df99e679db45" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://b1d12bab143094df8764ce5639b76ef753f556939b2a3dfc5116905254f165ae" gracePeriod=30 Feb 19 14:48:08 crc kubenswrapper[4861]: I0219 14:48:08.938843 4861 generic.go:334] "Generic (PLEG): container finished" podID="5cb8387f-bbdc-4001-af0d-c9aa517e829a" containerID="96539a8b84e39cc7e392a1f33a0decc9c5af559d058701e46eaf6d886a719170" exitCode=0 Feb 19 14:48:08 crc kubenswrapper[4861]: I0219 14:48:08.938905 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xdc8j" event={"ID":"5cb8387f-bbdc-4001-af0d-c9aa517e829a","Type":"ContainerDied","Data":"96539a8b84e39cc7e392a1f33a0decc9c5af559d058701e46eaf6d886a719170"} Feb 19 14:48:08 crc kubenswrapper[4861]: I0219 
14:48:08.963777 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:08 crc kubenswrapper[4861]: I0219 14:48:08.964406 4861 generic.go:334] "Generic (PLEG): container finished" podID="3859847c-c005-432e-bb70-93463fa68585" containerID="0cf01e0f3d70417e509f801b80590071bd600bf9c0f315a535bf1c7c0e56e739" exitCode=0 Feb 19 14:48:08 crc kubenswrapper[4861]: I0219 14:48:08.964441 4861 generic.go:334] "Generic (PLEG): container finished" podID="3859847c-c005-432e-bb70-93463fa68585" containerID="b297df9d420b80235b145efc9ca172cfc9489f9b1a0297673cc9c18145e69178" exitCode=143 Feb 19 14:48:08 crc kubenswrapper[4861]: I0219 14:48:08.964468 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3859847c-c005-432e-bb70-93463fa68585","Type":"ContainerDied","Data":"0cf01e0f3d70417e509f801b80590071bd600bf9c0f315a535bf1c7c0e56e739"} Feb 19 14:48:08 crc kubenswrapper[4861]: I0219 14:48:08.964517 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3859847c-c005-432e-bb70-93463fa68585","Type":"ContainerDied","Data":"b297df9d420b80235b145efc9ca172cfc9489f9b1a0297673cc9c18145e69178"} Feb 19 14:48:08 crc kubenswrapper[4861]: I0219 14:48:08.997868 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 14:48:09 crc kubenswrapper[4861]: I0219 14:48:09.203478 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 14:48:09 crc kubenswrapper[4861]: I0219 14:48:09.235641 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dr57h\" (UniqueName: \"kubernetes.io/projected/3859847c-c005-432e-bb70-93463fa68585-kube-api-access-dr57h\") pod \"3859847c-c005-432e-bb70-93463fa68585\" (UID: \"3859847c-c005-432e-bb70-93463fa68585\") " Feb 19 14:48:09 crc kubenswrapper[4861]: I0219 14:48:09.235747 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3859847c-c005-432e-bb70-93463fa68585-config-data\") pod \"3859847c-c005-432e-bb70-93463fa68585\" (UID: \"3859847c-c005-432e-bb70-93463fa68585\") " Feb 19 14:48:09 crc kubenswrapper[4861]: I0219 14:48:09.235769 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3859847c-c005-432e-bb70-93463fa68585-combined-ca-bundle\") pod \"3859847c-c005-432e-bb70-93463fa68585\" (UID: \"3859847c-c005-432e-bb70-93463fa68585\") " Feb 19 14:48:09 crc kubenswrapper[4861]: I0219 14:48:09.235791 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3859847c-c005-432e-bb70-93463fa68585-logs\") pod \"3859847c-c005-432e-bb70-93463fa68585\" (UID: \"3859847c-c005-432e-bb70-93463fa68585\") " Feb 19 14:48:09 crc kubenswrapper[4861]: I0219 14:48:09.236781 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3859847c-c005-432e-bb70-93463fa68585-logs" (OuterVolumeSpecName: "logs") pod "3859847c-c005-432e-bb70-93463fa68585" (UID: "3859847c-c005-432e-bb70-93463fa68585"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:48:09 crc kubenswrapper[4861]: I0219 14:48:09.250119 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3859847c-c005-432e-bb70-93463fa68585-kube-api-access-dr57h" (OuterVolumeSpecName: "kube-api-access-dr57h") pod "3859847c-c005-432e-bb70-93463fa68585" (UID: "3859847c-c005-432e-bb70-93463fa68585"). InnerVolumeSpecName "kube-api-access-dr57h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:48:09 crc kubenswrapper[4861]: I0219 14:48:09.252462 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:09 crc kubenswrapper[4861]: I0219 14:48:09.277732 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3859847c-c005-432e-bb70-93463fa68585-config-data" (OuterVolumeSpecName: "config-data") pod "3859847c-c005-432e-bb70-93463fa68585" (UID: "3859847c-c005-432e-bb70-93463fa68585"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:48:09 crc kubenswrapper[4861]: I0219 14:48:09.282408 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3859847c-c005-432e-bb70-93463fa68585-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3859847c-c005-432e-bb70-93463fa68585" (UID: "3859847c-c005-432e-bb70-93463fa68585"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:48:09 crc kubenswrapper[4861]: I0219 14:48:09.336675 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6960b5bc-d1ea-44c3-9c64-df99e679db45-config-data\") pod \"6960b5bc-d1ea-44c3-9c64-df99e679db45\" (UID: \"6960b5bc-d1ea-44c3-9c64-df99e679db45\") " Feb 19 14:48:09 crc kubenswrapper[4861]: I0219 14:48:09.336892 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ntvx\" (UniqueName: \"kubernetes.io/projected/6960b5bc-d1ea-44c3-9c64-df99e679db45-kube-api-access-2ntvx\") pod \"6960b5bc-d1ea-44c3-9c64-df99e679db45\" (UID: \"6960b5bc-d1ea-44c3-9c64-df99e679db45\") " Feb 19 14:48:09 crc kubenswrapper[4861]: I0219 14:48:09.337010 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6960b5bc-d1ea-44c3-9c64-df99e679db45-combined-ca-bundle\") pod \"6960b5bc-d1ea-44c3-9c64-df99e679db45\" (UID: \"6960b5bc-d1ea-44c3-9c64-df99e679db45\") " Feb 19 14:48:09 crc kubenswrapper[4861]: I0219 14:48:09.337369 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3859847c-c005-432e-bb70-93463fa68585-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:09 crc kubenswrapper[4861]: I0219 14:48:09.337445 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3859847c-c005-432e-bb70-93463fa68585-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:09 crc kubenswrapper[4861]: I0219 14:48:09.337508 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3859847c-c005-432e-bb70-93463fa68585-logs\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:09 crc kubenswrapper[4861]: I0219 14:48:09.337567 4861 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-dr57h\" (UniqueName: \"kubernetes.io/projected/3859847c-c005-432e-bb70-93463fa68585-kube-api-access-dr57h\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:09 crc kubenswrapper[4861]: I0219 14:48:09.339396 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6960b5bc-d1ea-44c3-9c64-df99e679db45-kube-api-access-2ntvx" (OuterVolumeSpecName: "kube-api-access-2ntvx") pod "6960b5bc-d1ea-44c3-9c64-df99e679db45" (UID: "6960b5bc-d1ea-44c3-9c64-df99e679db45"). InnerVolumeSpecName "kube-api-access-2ntvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:48:09 crc kubenswrapper[4861]: I0219 14:48:09.363326 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6960b5bc-d1ea-44c3-9c64-df99e679db45-config-data" (OuterVolumeSpecName: "config-data") pod "6960b5bc-d1ea-44c3-9c64-df99e679db45" (UID: "6960b5bc-d1ea-44c3-9c64-df99e679db45"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:48:09 crc kubenswrapper[4861]: I0219 14:48:09.363813 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6960b5bc-d1ea-44c3-9c64-df99e679db45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6960b5bc-d1ea-44c3-9c64-df99e679db45" (UID: "6960b5bc-d1ea-44c3-9c64-df99e679db45"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:48:09 crc kubenswrapper[4861]: I0219 14:48:09.439950 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ntvx\" (UniqueName: \"kubernetes.io/projected/6960b5bc-d1ea-44c3-9c64-df99e679db45-kube-api-access-2ntvx\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:09 crc kubenswrapper[4861]: I0219 14:48:09.440253 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6960b5bc-d1ea-44c3-9c64-df99e679db45-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:09 crc kubenswrapper[4861]: I0219 14:48:09.440378 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6960b5bc-d1ea-44c3-9c64-df99e679db45-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:09 crc kubenswrapper[4861]: I0219 14:48:09.975443 4861 generic.go:334] "Generic (PLEG): container finished" podID="fa46daf6-db14-4383-8df7-79bcbc7e8cac" containerID="eea00f8a95bd2e50aebc8f8bacd0f8d43e3b775eb31231554a291504efd96a47" exitCode=0 Feb 19 14:48:09 crc kubenswrapper[4861]: I0219 14:48:09.975509 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-wmlzl" event={"ID":"fa46daf6-db14-4383-8df7-79bcbc7e8cac","Type":"ContainerDied","Data":"eea00f8a95bd2e50aebc8f8bacd0f8d43e3b775eb31231554a291504efd96a47"} Feb 19 14:48:09 crc kubenswrapper[4861]: I0219 14:48:09.984636 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 14:48:09 crc kubenswrapper[4861]: I0219 14:48:09.989960 4861 generic.go:334] "Generic (PLEG): container finished" podID="6960b5bc-d1ea-44c3-9c64-df99e679db45" containerID="b1d12bab143094df8764ce5639b76ef753f556939b2a3dfc5116905254f165ae" exitCode=0 Feb 19 14:48:09 crc kubenswrapper[4861]: I0219 14:48:09.990258 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.011075 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3859847c-c005-432e-bb70-93463fa68585","Type":"ContainerDied","Data":"a3ba98513a3ae9abd5bb6c61662576e793c37b0d8867b6c3ed48d610dcdba1c1"} Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.011157 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6960b5bc-d1ea-44c3-9c64-df99e679db45","Type":"ContainerDied","Data":"b1d12bab143094df8764ce5639b76ef753f556939b2a3dfc5116905254f165ae"} Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.011176 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6960b5bc-d1ea-44c3-9c64-df99e679db45","Type":"ContainerDied","Data":"89b134358e398b05c4ec16da9cef286a1d1f11ffea187e36fe7135f5db1d2cac"} Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.011195 4861 scope.go:117] "RemoveContainer" containerID="0cf01e0f3d70417e509f801b80590071bd600bf9c0f315a535bf1c7c0e56e739" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.033529 4861 scope.go:117] "RemoveContainer" containerID="b297df9d420b80235b145efc9ca172cfc9489f9b1a0297673cc9c18145e69178" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.061578 4861 scope.go:117] "RemoveContainer" containerID="b1d12bab143094df8764ce5639b76ef753f556939b2a3dfc5116905254f165ae" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.082180 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.093546 4861 scope.go:117] "RemoveContainer" containerID="b1d12bab143094df8764ce5639b76ef753f556939b2a3dfc5116905254f165ae" Feb 19 14:48:10 crc kubenswrapper[4861]: E0219 14:48:10.096883 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"b1d12bab143094df8764ce5639b76ef753f556939b2a3dfc5116905254f165ae\": container with ID starting with b1d12bab143094df8764ce5639b76ef753f556939b2a3dfc5116905254f165ae not found: ID does not exist" containerID="b1d12bab143094df8764ce5639b76ef753f556939b2a3dfc5116905254f165ae" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.096940 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1d12bab143094df8764ce5639b76ef753f556939b2a3dfc5116905254f165ae"} err="failed to get container status \"b1d12bab143094df8764ce5639b76ef753f556939b2a3dfc5116905254f165ae\": rpc error: code = NotFound desc = could not find container \"b1d12bab143094df8764ce5639b76ef753f556939b2a3dfc5116905254f165ae\": container with ID starting with b1d12bab143094df8764ce5639b76ef753f556939b2a3dfc5116905254f165ae not found: ID does not exist" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.098727 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.112686 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.125334 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.146611 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 14:48:10 crc kubenswrapper[4861]: E0219 14:48:10.147039 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3859847c-c005-432e-bb70-93463fa68585" containerName="nova-metadata-metadata" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.147056 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3859847c-c005-432e-bb70-93463fa68585" containerName="nova-metadata-metadata" Feb 19 14:48:10 crc kubenswrapper[4861]: E0219 
14:48:10.147071 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3859847c-c005-432e-bb70-93463fa68585" containerName="nova-metadata-log" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.147078 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3859847c-c005-432e-bb70-93463fa68585" containerName="nova-metadata-log" Feb 19 14:48:10 crc kubenswrapper[4861]: E0219 14:48:10.147110 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6960b5bc-d1ea-44c3-9c64-df99e679db45" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.147117 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="6960b5bc-d1ea-44c3-9c64-df99e679db45" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.147274 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="3859847c-c005-432e-bb70-93463fa68585" containerName="nova-metadata-log" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.147296 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="3859847c-c005-432e-bb70-93463fa68585" containerName="nova-metadata-metadata" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.147316 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="6960b5bc-d1ea-44c3-9c64-df99e679db45" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.147950 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.150446 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.150598 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.150706 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.154309 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.155893 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.157397 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7da7a051-88c9-40df-b46c-7c0e1cf651a4-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7da7a051-88c9-40df-b46c-7c0e1cf651a4\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.157517 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7da7a051-88c9-40df-b46c-7c0e1cf651a4-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7da7a051-88c9-40df-b46c-7c0e1cf651a4\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.157572 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7da7a051-88c9-40df-b46c-7c0e1cf651a4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7da7a051-88c9-40df-b46c-7c0e1cf651a4\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.157638 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-542ll\" (UniqueName: \"kubernetes.io/projected/7da7a051-88c9-40df-b46c-7c0e1cf651a4-kube-api-access-542ll\") pod \"nova-cell1-novncproxy-0\" (UID: \"7da7a051-88c9-40df-b46c-7c0e1cf651a4\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.157663 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7da7a051-88c9-40df-b46c-7c0e1cf651a4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7da7a051-88c9-40df-b46c-7c0e1cf651a4\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.158212 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.158415 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.174663 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.188215 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.259263 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d47eb27-5c6f-49f1-95b5-74d12f384c6f-logs\") pod \"nova-metadata-0\" (UID: \"1d47eb27-5c6f-49f1-95b5-74d12f384c6f\") " 
pod="openstack/nova-metadata-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.259363 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7da7a051-88c9-40df-b46c-7c0e1cf651a4-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7da7a051-88c9-40df-b46c-7c0e1cf651a4\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.259455 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7da7a051-88c9-40df-b46c-7c0e1cf651a4-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7da7a051-88c9-40df-b46c-7c0e1cf651a4\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.259484 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d47eb27-5c6f-49f1-95b5-74d12f384c6f-config-data\") pod \"nova-metadata-0\" (UID: \"1d47eb27-5c6f-49f1-95b5-74d12f384c6f\") " pod="openstack/nova-metadata-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.259537 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7da7a051-88c9-40df-b46c-7c0e1cf651a4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7da7a051-88c9-40df-b46c-7c0e1cf651a4\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.259586 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d47eb27-5c6f-49f1-95b5-74d12f384c6f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1d47eb27-5c6f-49f1-95b5-74d12f384c6f\") " pod="openstack/nova-metadata-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 
14:48:10.259621 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d47eb27-5c6f-49f1-95b5-74d12f384c6f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1d47eb27-5c6f-49f1-95b5-74d12f384c6f\") " pod="openstack/nova-metadata-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.259656 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ttpb\" (UniqueName: \"kubernetes.io/projected/1d47eb27-5c6f-49f1-95b5-74d12f384c6f-kube-api-access-9ttpb\") pod \"nova-metadata-0\" (UID: \"1d47eb27-5c6f-49f1-95b5-74d12f384c6f\") " pod="openstack/nova-metadata-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.259696 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-542ll\" (UniqueName: \"kubernetes.io/projected/7da7a051-88c9-40df-b46c-7c0e1cf651a4-kube-api-access-542ll\") pod \"nova-cell1-novncproxy-0\" (UID: \"7da7a051-88c9-40df-b46c-7c0e1cf651a4\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.259747 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7da7a051-88c9-40df-b46c-7c0e1cf651a4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7da7a051-88c9-40df-b46c-7c0e1cf651a4\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.275408 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7da7a051-88c9-40df-b46c-7c0e1cf651a4-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7da7a051-88c9-40df-b46c-7c0e1cf651a4\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.275594 4861 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7da7a051-88c9-40df-b46c-7c0e1cf651a4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7da7a051-88c9-40df-b46c-7c0e1cf651a4\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.275607 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7da7a051-88c9-40df-b46c-7c0e1cf651a4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7da7a051-88c9-40df-b46c-7c0e1cf651a4\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.275849 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7da7a051-88c9-40df-b46c-7c0e1cf651a4-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7da7a051-88c9-40df-b46c-7c0e1cf651a4\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.278357 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-542ll\" (UniqueName: \"kubernetes.io/projected/7da7a051-88c9-40df-b46c-7c0e1cf651a4-kube-api-access-542ll\") pod \"nova-cell1-novncproxy-0\" (UID: \"7da7a051-88c9-40df-b46c-7c0e1cf651a4\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.360742 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d47eb27-5c6f-49f1-95b5-74d12f384c6f-config-data\") pod \"nova-metadata-0\" (UID: \"1d47eb27-5c6f-49f1-95b5-74d12f384c6f\") " pod="openstack/nova-metadata-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.360857 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1d47eb27-5c6f-49f1-95b5-74d12f384c6f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1d47eb27-5c6f-49f1-95b5-74d12f384c6f\") " pod="openstack/nova-metadata-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.360917 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d47eb27-5c6f-49f1-95b5-74d12f384c6f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1d47eb27-5c6f-49f1-95b5-74d12f384c6f\") " pod="openstack/nova-metadata-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.361085 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ttpb\" (UniqueName: \"kubernetes.io/projected/1d47eb27-5c6f-49f1-95b5-74d12f384c6f-kube-api-access-9ttpb\") pod \"nova-metadata-0\" (UID: \"1d47eb27-5c6f-49f1-95b5-74d12f384c6f\") " pod="openstack/nova-metadata-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.362050 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d47eb27-5c6f-49f1-95b5-74d12f384c6f-logs\") pod \"nova-metadata-0\" (UID: \"1d47eb27-5c6f-49f1-95b5-74d12f384c6f\") " pod="openstack/nova-metadata-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.362642 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d47eb27-5c6f-49f1-95b5-74d12f384c6f-logs\") pod \"nova-metadata-0\" (UID: \"1d47eb27-5c6f-49f1-95b5-74d12f384c6f\") " pod="openstack/nova-metadata-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.365140 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d47eb27-5c6f-49f1-95b5-74d12f384c6f-config-data\") pod \"nova-metadata-0\" (UID: \"1d47eb27-5c6f-49f1-95b5-74d12f384c6f\") " pod="openstack/nova-metadata-0" Feb 19 14:48:10 crc 
kubenswrapper[4861]: I0219 14:48:10.365409 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d47eb27-5c6f-49f1-95b5-74d12f384c6f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1d47eb27-5c6f-49f1-95b5-74d12f384c6f\") " pod="openstack/nova-metadata-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.371458 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d47eb27-5c6f-49f1-95b5-74d12f384c6f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1d47eb27-5c6f-49f1-95b5-74d12f384c6f\") " pod="openstack/nova-metadata-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.377822 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ttpb\" (UniqueName: \"kubernetes.io/projected/1d47eb27-5c6f-49f1-95b5-74d12f384c6f-kube-api-access-9ttpb\") pod \"nova-metadata-0\" (UID: \"1d47eb27-5c6f-49f1-95b5-74d12f384c6f\") " pod="openstack/nova-metadata-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.431377 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xdc8j" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.463209 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cb8387f-bbdc-4001-af0d-c9aa517e829a-config-data\") pod \"5cb8387f-bbdc-4001-af0d-c9aa517e829a\" (UID: \"5cb8387f-bbdc-4001-af0d-c9aa517e829a\") " Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.463290 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkkzp\" (UniqueName: \"kubernetes.io/projected/5cb8387f-bbdc-4001-af0d-c9aa517e829a-kube-api-access-hkkzp\") pod \"5cb8387f-bbdc-4001-af0d-c9aa517e829a\" (UID: \"5cb8387f-bbdc-4001-af0d-c9aa517e829a\") " Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.463355 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb8387f-bbdc-4001-af0d-c9aa517e829a-combined-ca-bundle\") pod \"5cb8387f-bbdc-4001-af0d-c9aa517e829a\" (UID: \"5cb8387f-bbdc-4001-af0d-c9aa517e829a\") " Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.463496 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cb8387f-bbdc-4001-af0d-c9aa517e829a-scripts\") pod \"5cb8387f-bbdc-4001-af0d-c9aa517e829a\" (UID: \"5cb8387f-bbdc-4001-af0d-c9aa517e829a\") " Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.467892 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cb8387f-bbdc-4001-af0d-c9aa517e829a-scripts" (OuterVolumeSpecName: "scripts") pod "5cb8387f-bbdc-4001-af0d-c9aa517e829a" (UID: "5cb8387f-bbdc-4001-af0d-c9aa517e829a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.469438 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cb8387f-bbdc-4001-af0d-c9aa517e829a-kube-api-access-hkkzp" (OuterVolumeSpecName: "kube-api-access-hkkzp") pod "5cb8387f-bbdc-4001-af0d-c9aa517e829a" (UID: "5cb8387f-bbdc-4001-af0d-c9aa517e829a"). InnerVolumeSpecName "kube-api-access-hkkzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.472959 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.485080 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.500998 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cb8387f-bbdc-4001-af0d-c9aa517e829a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5cb8387f-bbdc-4001-af0d-c9aa517e829a" (UID: "5cb8387f-bbdc-4001-af0d-c9aa517e829a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.501647 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cb8387f-bbdc-4001-af0d-c9aa517e829a-config-data" (OuterVolumeSpecName: "config-data") pod "5cb8387f-bbdc-4001-af0d-c9aa517e829a" (UID: "5cb8387f-bbdc-4001-af0d-c9aa517e829a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.565897 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cb8387f-bbdc-4001-af0d-c9aa517e829a-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.566128 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cb8387f-bbdc-4001-af0d-c9aa517e829a-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.566156 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkkzp\" (UniqueName: \"kubernetes.io/projected/5cb8387f-bbdc-4001-af0d-c9aa517e829a-kube-api-access-hkkzp\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:10 crc kubenswrapper[4861]: I0219 14:48:10.566174 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb8387f-bbdc-4001-af0d-c9aa517e829a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:10 crc kubenswrapper[4861]: W0219 14:48:10.997735 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7da7a051_88c9_40df_b46c_7c0e1cf651a4.slice/crio-1467b54af092471c0409d377ff37bf9e5f9b6d99696ad215dce9ba54b055333e WatchSource:0}: Error finding container 1467b54af092471c0409d377ff37bf9e5f9b6d99696ad215dce9ba54b055333e: Status 404 returned error can't find the container with id 1467b54af092471c0409d377ff37bf9e5f9b6d99696ad215dce9ba54b055333e Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.010708 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.013579 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xdc8j" Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.014784 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xdc8j" event={"ID":"5cb8387f-bbdc-4001-af0d-c9aa517e829a","Type":"ContainerDied","Data":"193c83764bf942ade53aec265d703d3481a196a8905cd90c7cadfc6b28235bb7"} Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.014824 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="193c83764bf942ade53aec265d703d3481a196a8905cd90c7cadfc6b28235bb7" Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.070412 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 14:48:11 crc kubenswrapper[4861]: E0219 14:48:11.070854 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cb8387f-bbdc-4001-af0d-c9aa517e829a" containerName="nova-cell1-conductor-db-sync" Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.070875 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cb8387f-bbdc-4001-af0d-c9aa517e829a" containerName="nova-cell1-conductor-db-sync" Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.071058 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cb8387f-bbdc-4001-af0d-c9aa517e829a" containerName="nova-cell1-conductor-db-sync" Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.071840 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.074875 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.086767 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.099648 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.176859 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31cf5b71-d287-40a5-80a6-95e490e99f1b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"31cf5b71-d287-40a5-80a6-95e490e99f1b\") " pod="openstack/nova-cell1-conductor-0" Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.176910 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31cf5b71-d287-40a5-80a6-95e490e99f1b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"31cf5b71-d287-40a5-80a6-95e490e99f1b\") " pod="openstack/nova-cell1-conductor-0" Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.176999 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljsct\" (UniqueName: \"kubernetes.io/projected/31cf5b71-d287-40a5-80a6-95e490e99f1b-kube-api-access-ljsct\") pod \"nova-cell1-conductor-0\" (UID: \"31cf5b71-d287-40a5-80a6-95e490e99f1b\") " pod="openstack/nova-cell1-conductor-0" Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.279211 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/31cf5b71-d287-40a5-80a6-95e490e99f1b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"31cf5b71-d287-40a5-80a6-95e490e99f1b\") " pod="openstack/nova-cell1-conductor-0" Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.279655 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljsct\" (UniqueName: \"kubernetes.io/projected/31cf5b71-d287-40a5-80a6-95e490e99f1b-kube-api-access-ljsct\") pod \"nova-cell1-conductor-0\" (UID: \"31cf5b71-d287-40a5-80a6-95e490e99f1b\") " pod="openstack/nova-cell1-conductor-0" Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.279744 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31cf5b71-d287-40a5-80a6-95e490e99f1b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"31cf5b71-d287-40a5-80a6-95e490e99f1b\") " pod="openstack/nova-cell1-conductor-0" Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.295071 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31cf5b71-d287-40a5-80a6-95e490e99f1b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"31cf5b71-d287-40a5-80a6-95e490e99f1b\") " pod="openstack/nova-cell1-conductor-0" Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.298669 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31cf5b71-d287-40a5-80a6-95e490e99f1b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"31cf5b71-d287-40a5-80a6-95e490e99f1b\") " pod="openstack/nova-cell1-conductor-0" Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.302096 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljsct\" (UniqueName: \"kubernetes.io/projected/31cf5b71-d287-40a5-80a6-95e490e99f1b-kube-api-access-ljsct\") pod \"nova-cell1-conductor-0\" (UID: 
\"31cf5b71-d287-40a5-80a6-95e490e99f1b\") " pod="openstack/nova-cell1-conductor-0" Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.330986 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-wmlzl" Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.381589 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7npzv\" (UniqueName: \"kubernetes.io/projected/fa46daf6-db14-4383-8df7-79bcbc7e8cac-kube-api-access-7npzv\") pod \"fa46daf6-db14-4383-8df7-79bcbc7e8cac\" (UID: \"fa46daf6-db14-4383-8df7-79bcbc7e8cac\") " Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.381664 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa46daf6-db14-4383-8df7-79bcbc7e8cac-combined-ca-bundle\") pod \"fa46daf6-db14-4383-8df7-79bcbc7e8cac\" (UID: \"fa46daf6-db14-4383-8df7-79bcbc7e8cac\") " Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.381759 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa46daf6-db14-4383-8df7-79bcbc7e8cac-scripts\") pod \"fa46daf6-db14-4383-8df7-79bcbc7e8cac\" (UID: \"fa46daf6-db14-4383-8df7-79bcbc7e8cac\") " Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.381790 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa46daf6-db14-4383-8df7-79bcbc7e8cac-config-data\") pod \"fa46daf6-db14-4383-8df7-79bcbc7e8cac\" (UID: \"fa46daf6-db14-4383-8df7-79bcbc7e8cac\") " Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.385586 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa46daf6-db14-4383-8df7-79bcbc7e8cac-scripts" (OuterVolumeSpecName: "scripts") pod "fa46daf6-db14-4383-8df7-79bcbc7e8cac" (UID: 
"fa46daf6-db14-4383-8df7-79bcbc7e8cac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.391814 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa46daf6-db14-4383-8df7-79bcbc7e8cac-kube-api-access-7npzv" (OuterVolumeSpecName: "kube-api-access-7npzv") pod "fa46daf6-db14-4383-8df7-79bcbc7e8cac" (UID: "fa46daf6-db14-4383-8df7-79bcbc7e8cac"). InnerVolumeSpecName "kube-api-access-7npzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.409718 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa46daf6-db14-4383-8df7-79bcbc7e8cac-config-data" (OuterVolumeSpecName: "config-data") pod "fa46daf6-db14-4383-8df7-79bcbc7e8cac" (UID: "fa46daf6-db14-4383-8df7-79bcbc7e8cac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.425784 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa46daf6-db14-4383-8df7-79bcbc7e8cac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa46daf6-db14-4383-8df7-79bcbc7e8cac" (UID: "fa46daf6-db14-4383-8df7-79bcbc7e8cac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.483362 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa46daf6-db14-4383-8df7-79bcbc7e8cac-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.483446 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa46daf6-db14-4383-8df7-79bcbc7e8cac-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.483461 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7npzv\" (UniqueName: \"kubernetes.io/projected/fa46daf6-db14-4383-8df7-79bcbc7e8cac-kube-api-access-7npzv\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.483485 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa46daf6-db14-4383-8df7-79bcbc7e8cac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.501958 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.990212 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3859847c-c005-432e-bb70-93463fa68585" path="/var/lib/kubelet/pods/3859847c-c005-432e-bb70-93463fa68585/volumes" Feb 19 14:48:11 crc kubenswrapper[4861]: I0219 14:48:11.992462 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6960b5bc-d1ea-44c3-9c64-df99e679db45" path="/var/lib/kubelet/pods/6960b5bc-d1ea-44c3-9c64-df99e679db45/volumes" Feb 19 14:48:12 crc kubenswrapper[4861]: I0219 14:48:12.004004 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 14:48:12 crc kubenswrapper[4861]: I0219 14:48:12.048651 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1d47eb27-5c6f-49f1-95b5-74d12f384c6f","Type":"ContainerStarted","Data":"13d387e4336e4e7f6ba47de5fe4d9bb888e144d70571ae0f2ec3e29e08a228b7"} Feb 19 14:48:12 crc kubenswrapper[4861]: I0219 14:48:12.048697 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1d47eb27-5c6f-49f1-95b5-74d12f384c6f","Type":"ContainerStarted","Data":"15dc0e0540ebb8f73567a83b4805c96bb52bc0ee217324b01977adfc77387d81"} Feb 19 14:48:12 crc kubenswrapper[4861]: I0219 14:48:12.048707 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1d47eb27-5c6f-49f1-95b5-74d12f384c6f","Type":"ContainerStarted","Data":"9d17ba431507c4a20adf3d87c0a05bb3116f20ac07c54c271aa4fbfabbecfeeb"} Feb 19 14:48:12 crc kubenswrapper[4861]: I0219 14:48:12.051738 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-wmlzl" Feb 19 14:48:12 crc kubenswrapper[4861]: I0219 14:48:12.052159 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-wmlzl" event={"ID":"fa46daf6-db14-4383-8df7-79bcbc7e8cac","Type":"ContainerDied","Data":"8cb5b014fd2b622a97ab5a3fe1f6918d407394dc7fb691b795d9e46def90520b"} Feb 19 14:48:12 crc kubenswrapper[4861]: I0219 14:48:12.052179 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cb5b014fd2b622a97ab5a3fe1f6918d407394dc7fb691b795d9e46def90520b" Feb 19 14:48:12 crc kubenswrapper[4861]: I0219 14:48:12.053144 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"31cf5b71-d287-40a5-80a6-95e490e99f1b","Type":"ContainerStarted","Data":"f4172e2aea184a620ea51a4a42c322c91fbe7e3360b1212066cbd35f093759a8"} Feb 19 14:48:12 crc kubenswrapper[4861]: I0219 14:48:12.054718 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7da7a051-88c9-40df-b46c-7c0e1cf651a4","Type":"ContainerStarted","Data":"8306a87d5d213935af6e5e8f5c057d5c3e88d95f22a28f34fce14226b64d9966"} Feb 19 14:48:12 crc kubenswrapper[4861]: I0219 14:48:12.054741 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7da7a051-88c9-40df-b46c-7c0e1cf651a4","Type":"ContainerStarted","Data":"1467b54af092471c0409d377ff37bf9e5f9b6d99696ad215dce9ba54b055333e"} Feb 19 14:48:12 crc kubenswrapper[4861]: I0219 14:48:12.084388 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.084367963 podStartE2EDuration="2.084367963s" podCreationTimestamp="2026-02-19 14:48:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:48:12.072903056 +0000 UTC 
m=+5906.734006274" watchObservedRunningTime="2026-02-19 14:48:12.084367963 +0000 UTC m=+5906.745471191" Feb 19 14:48:12 crc kubenswrapper[4861]: I0219 14:48:12.194071 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.194045295 podStartE2EDuration="2.194045295s" podCreationTimestamp="2026-02-19 14:48:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:48:12.103959829 +0000 UTC m=+5906.765063057" watchObservedRunningTime="2026-02-19 14:48:12.194045295 +0000 UTC m=+5906.855148533" Feb 19 14:48:12 crc kubenswrapper[4861]: I0219 14:48:12.195517 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 14:48:12 crc kubenswrapper[4861]: I0219 14:48:12.195799 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6dd806ed-3e7f-4dda-9746-cfe23435a328" containerName="nova-api-log" containerID="cri-o://a27376829787b31cbcad7eced047f8a2243ec070a245d9263934c2c1df28188e" gracePeriod=30 Feb 19 14:48:12 crc kubenswrapper[4861]: I0219 14:48:12.196316 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6dd806ed-3e7f-4dda-9746-cfe23435a328" containerName="nova-api-api" containerID="cri-o://5663dde23cb61c47118659939929834ad12209b425dcf2fce1973dc510762348" gracePeriod=30 Feb 19 14:48:12 crc kubenswrapper[4861]: I0219 14:48:12.215159 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 14:48:12 crc kubenswrapper[4861]: I0219 14:48:12.215467 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c24c234d-91f1-49c0-84a2-1dc1dc08486e" containerName="nova-scheduler-scheduler" containerID="cri-o://9c7481924a221096998113353687850376d0e229375e64ceb566fe73aef681b5" 
gracePeriod=30 Feb 19 14:48:12 crc kubenswrapper[4861]: I0219 14:48:12.236208 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 14:48:12 crc kubenswrapper[4861]: I0219 14:48:12.692975 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 14:48:12 crc kubenswrapper[4861]: I0219 14:48:12.724610 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dd806ed-3e7f-4dda-9746-cfe23435a328-config-data\") pod \"6dd806ed-3e7f-4dda-9746-cfe23435a328\" (UID: \"6dd806ed-3e7f-4dda-9746-cfe23435a328\") " Feb 19 14:48:12 crc kubenswrapper[4861]: I0219 14:48:12.724733 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25hmq\" (UniqueName: \"kubernetes.io/projected/6dd806ed-3e7f-4dda-9746-cfe23435a328-kube-api-access-25hmq\") pod \"6dd806ed-3e7f-4dda-9746-cfe23435a328\" (UID: \"6dd806ed-3e7f-4dda-9746-cfe23435a328\") " Feb 19 14:48:12 crc kubenswrapper[4861]: I0219 14:48:12.724825 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dd806ed-3e7f-4dda-9746-cfe23435a328-combined-ca-bundle\") pod \"6dd806ed-3e7f-4dda-9746-cfe23435a328\" (UID: \"6dd806ed-3e7f-4dda-9746-cfe23435a328\") " Feb 19 14:48:12 crc kubenswrapper[4861]: I0219 14:48:12.739091 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dd806ed-3e7f-4dda-9746-cfe23435a328-kube-api-access-25hmq" (OuterVolumeSpecName: "kube-api-access-25hmq") pod "6dd806ed-3e7f-4dda-9746-cfe23435a328" (UID: "6dd806ed-3e7f-4dda-9746-cfe23435a328"). InnerVolumeSpecName "kube-api-access-25hmq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:48:12 crc kubenswrapper[4861]: I0219 14:48:12.751948 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dd806ed-3e7f-4dda-9746-cfe23435a328-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6dd806ed-3e7f-4dda-9746-cfe23435a328" (UID: "6dd806ed-3e7f-4dda-9746-cfe23435a328"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:48:12 crc kubenswrapper[4861]: I0219 14:48:12.755737 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dd806ed-3e7f-4dda-9746-cfe23435a328-config-data" (OuterVolumeSpecName: "config-data") pod "6dd806ed-3e7f-4dda-9746-cfe23435a328" (UID: "6dd806ed-3e7f-4dda-9746-cfe23435a328"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:48:12 crc kubenswrapper[4861]: I0219 14:48:12.828240 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dd806ed-3e7f-4dda-9746-cfe23435a328-logs\") pod \"6dd806ed-3e7f-4dda-9746-cfe23435a328\" (UID: \"6dd806ed-3e7f-4dda-9746-cfe23435a328\") " Feb 19 14:48:12 crc kubenswrapper[4861]: I0219 14:48:12.829244 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dd806ed-3e7f-4dda-9746-cfe23435a328-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:12 crc kubenswrapper[4861]: I0219 14:48:12.829262 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25hmq\" (UniqueName: \"kubernetes.io/projected/6dd806ed-3e7f-4dda-9746-cfe23435a328-kube-api-access-25hmq\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:12 crc kubenswrapper[4861]: I0219 14:48:12.829273 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6dd806ed-3e7f-4dda-9746-cfe23435a328-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:12 crc kubenswrapper[4861]: I0219 14:48:12.829593 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dd806ed-3e7f-4dda-9746-cfe23435a328-logs" (OuterVolumeSpecName: "logs") pod "6dd806ed-3e7f-4dda-9746-cfe23435a328" (UID: "6dd806ed-3e7f-4dda-9746-cfe23435a328"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:48:12 crc kubenswrapper[4861]: I0219 14:48:12.930988 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dd806ed-3e7f-4dda-9746-cfe23435a328-logs\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.063788 4861 generic.go:334] "Generic (PLEG): container finished" podID="6dd806ed-3e7f-4dda-9746-cfe23435a328" containerID="5663dde23cb61c47118659939929834ad12209b425dcf2fce1973dc510762348" exitCode=0 Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.063820 4861 generic.go:334] "Generic (PLEG): container finished" podID="6dd806ed-3e7f-4dda-9746-cfe23435a328" containerID="a27376829787b31cbcad7eced047f8a2243ec070a245d9263934c2c1df28188e" exitCode=143 Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.063850 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6dd806ed-3e7f-4dda-9746-cfe23435a328","Type":"ContainerDied","Data":"5663dde23cb61c47118659939929834ad12209b425dcf2fce1973dc510762348"} Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.063876 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6dd806ed-3e7f-4dda-9746-cfe23435a328","Type":"ContainerDied","Data":"a27376829787b31cbcad7eced047f8a2243ec070a245d9263934c2c1df28188e"} Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.063885 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"6dd806ed-3e7f-4dda-9746-cfe23435a328","Type":"ContainerDied","Data":"b91173680c5396d407cab281f90f58422e01ce07b3879ad772d258adb2edcea6"} Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.063899 4861 scope.go:117] "RemoveContainer" containerID="5663dde23cb61c47118659939929834ad12209b425dcf2fce1973dc510762348" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.064003 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.066748 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"31cf5b71-d287-40a5-80a6-95e490e99f1b","Type":"ContainerStarted","Data":"a973163f15bc1003fe88add383a038bf20ba52e31b04c09698206dc99a541ef2"} Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.096781 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.096764832 podStartE2EDuration="2.096764832s" podCreationTimestamp="2026-02-19 14:48:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:48:13.088724636 +0000 UTC m=+5907.749827864" watchObservedRunningTime="2026-02-19 14:48:13.096764832 +0000 UTC m=+5907.757868060" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.109031 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.110742 4861 scope.go:117] "RemoveContainer" containerID="a27376829787b31cbcad7eced047f8a2243ec070a245d9263934c2c1df28188e" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.127488 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.137031 4861 scope.go:117] "RemoveContainer" 
containerID="5663dde23cb61c47118659939929834ad12209b425dcf2fce1973dc510762348" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.143331 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 14:48:13 crc kubenswrapper[4861]: E0219 14:48:13.143788 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dd806ed-3e7f-4dda-9746-cfe23435a328" containerName="nova-api-api" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.143811 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dd806ed-3e7f-4dda-9746-cfe23435a328" containerName="nova-api-api" Feb 19 14:48:13 crc kubenswrapper[4861]: E0219 14:48:13.143871 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa46daf6-db14-4383-8df7-79bcbc7e8cac" containerName="nova-manage" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.143883 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa46daf6-db14-4383-8df7-79bcbc7e8cac" containerName="nova-manage" Feb 19 14:48:13 crc kubenswrapper[4861]: E0219 14:48:13.143906 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dd806ed-3e7f-4dda-9746-cfe23435a328" containerName="nova-api-log" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.143937 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dd806ed-3e7f-4dda-9746-cfe23435a328" containerName="nova-api-log" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.144175 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa46daf6-db14-4383-8df7-79bcbc7e8cac" containerName="nova-manage" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.144202 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dd806ed-3e7f-4dda-9746-cfe23435a328" containerName="nova-api-api" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.144217 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dd806ed-3e7f-4dda-9746-cfe23435a328" containerName="nova-api-log" Feb 19 
14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.145494 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.157272 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 14:48:13 crc kubenswrapper[4861]: E0219 14:48:13.169201 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5663dde23cb61c47118659939929834ad12209b425dcf2fce1973dc510762348\": container with ID starting with 5663dde23cb61c47118659939929834ad12209b425dcf2fce1973dc510762348 not found: ID does not exist" containerID="5663dde23cb61c47118659939929834ad12209b425dcf2fce1973dc510762348" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.169250 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5663dde23cb61c47118659939929834ad12209b425dcf2fce1973dc510762348"} err="failed to get container status \"5663dde23cb61c47118659939929834ad12209b425dcf2fce1973dc510762348\": rpc error: code = NotFound desc = could not find container \"5663dde23cb61c47118659939929834ad12209b425dcf2fce1973dc510762348\": container with ID starting with 5663dde23cb61c47118659939929834ad12209b425dcf2fce1973dc510762348 not found: ID does not exist" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.169281 4861 scope.go:117] "RemoveContainer" containerID="a27376829787b31cbcad7eced047f8a2243ec070a245d9263934c2c1df28188e" Feb 19 14:48:13 crc kubenswrapper[4861]: E0219 14:48:13.169751 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a27376829787b31cbcad7eced047f8a2243ec070a245d9263934c2c1df28188e\": container with ID starting with a27376829787b31cbcad7eced047f8a2243ec070a245d9263934c2c1df28188e not found: ID does not exist" 
containerID="a27376829787b31cbcad7eced047f8a2243ec070a245d9263934c2c1df28188e" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.169787 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a27376829787b31cbcad7eced047f8a2243ec070a245d9263934c2c1df28188e"} err="failed to get container status \"a27376829787b31cbcad7eced047f8a2243ec070a245d9263934c2c1df28188e\": rpc error: code = NotFound desc = could not find container \"a27376829787b31cbcad7eced047f8a2243ec070a245d9263934c2c1df28188e\": container with ID starting with a27376829787b31cbcad7eced047f8a2243ec070a245d9263934c2c1df28188e not found: ID does not exist" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.169819 4861 scope.go:117] "RemoveContainer" containerID="5663dde23cb61c47118659939929834ad12209b425dcf2fce1973dc510762348" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.171959 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5663dde23cb61c47118659939929834ad12209b425dcf2fce1973dc510762348"} err="failed to get container status \"5663dde23cb61c47118659939929834ad12209b425dcf2fce1973dc510762348\": rpc error: code = NotFound desc = could not find container \"5663dde23cb61c47118659939929834ad12209b425dcf2fce1973dc510762348\": container with ID starting with 5663dde23cb61c47118659939929834ad12209b425dcf2fce1973dc510762348 not found: ID does not exist" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.171983 4861 scope.go:117] "RemoveContainer" containerID="a27376829787b31cbcad7eced047f8a2243ec070a245d9263934c2c1df28188e" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.172081 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.173827 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a27376829787b31cbcad7eced047f8a2243ec070a245d9263934c2c1df28188e"} err="failed to get container status \"a27376829787b31cbcad7eced047f8a2243ec070a245d9263934c2c1df28188e\": rpc error: code = NotFound desc = could not find container \"a27376829787b31cbcad7eced047f8a2243ec070a245d9263934c2c1df28188e\": container with ID starting with a27376829787b31cbcad7eced047f8a2243ec070a245d9263934c2c1df28188e not found: ID does not exist" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.337940 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc5e14f3-1dad-4d5c-b664-517021316094-config-data\") pod \"nova-api-0\" (UID: \"dc5e14f3-1dad-4d5c-b664-517021316094\") " pod="openstack/nova-api-0" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.337988 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc5e14f3-1dad-4d5c-b664-517021316094-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dc5e14f3-1dad-4d5c-b664-517021316094\") " pod="openstack/nova-api-0" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.338050 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc5e14f3-1dad-4d5c-b664-517021316094-logs\") pod \"nova-api-0\" (UID: \"dc5e14f3-1dad-4d5c-b664-517021316094\") " pod="openstack/nova-api-0" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.338123 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktrfs\" (UniqueName: \"kubernetes.io/projected/dc5e14f3-1dad-4d5c-b664-517021316094-kube-api-access-ktrfs\") pod \"nova-api-0\" (UID: \"dc5e14f3-1dad-4d5c-b664-517021316094\") " pod="openstack/nova-api-0" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 
14:48:13.439746 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc5e14f3-1dad-4d5c-b664-517021316094-config-data\") pod \"nova-api-0\" (UID: \"dc5e14f3-1dad-4d5c-b664-517021316094\") " pod="openstack/nova-api-0" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.439803 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc5e14f3-1dad-4d5c-b664-517021316094-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dc5e14f3-1dad-4d5c-b664-517021316094\") " pod="openstack/nova-api-0" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.439875 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc5e14f3-1dad-4d5c-b664-517021316094-logs\") pod \"nova-api-0\" (UID: \"dc5e14f3-1dad-4d5c-b664-517021316094\") " pod="openstack/nova-api-0" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.439978 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktrfs\" (UniqueName: \"kubernetes.io/projected/dc5e14f3-1dad-4d5c-b664-517021316094-kube-api-access-ktrfs\") pod \"nova-api-0\" (UID: \"dc5e14f3-1dad-4d5c-b664-517021316094\") " pod="openstack/nova-api-0" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.441519 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc5e14f3-1dad-4d5c-b664-517021316094-logs\") pod \"nova-api-0\" (UID: \"dc5e14f3-1dad-4d5c-b664-517021316094\") " pod="openstack/nova-api-0" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.446948 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc5e14f3-1dad-4d5c-b664-517021316094-config-data\") pod \"nova-api-0\" (UID: \"dc5e14f3-1dad-4d5c-b664-517021316094\") " 
pod="openstack/nova-api-0" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.460147 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc5e14f3-1dad-4d5c-b664-517021316094-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dc5e14f3-1dad-4d5c-b664-517021316094\") " pod="openstack/nova-api-0" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.472656 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktrfs\" (UniqueName: \"kubernetes.io/projected/dc5e14f3-1dad-4d5c-b664-517021316094-kube-api-access-ktrfs\") pod \"nova-api-0\" (UID: \"dc5e14f3-1dad-4d5c-b664-517021316094\") " pod="openstack/nova-api-0" Feb 19 14:48:13 crc kubenswrapper[4861]: I0219 14:48:13.486200 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 14:48:14 crc kubenswrapper[4861]: I0219 14:48:14.001026 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dd806ed-3e7f-4dda-9746-cfe23435a328" path="/var/lib/kubelet/pods/6dd806ed-3e7f-4dda-9746-cfe23435a328/volumes" Feb 19 14:48:14 crc kubenswrapper[4861]: I0219 14:48:14.010618 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 14:48:14 crc kubenswrapper[4861]: I0219 14:48:14.077575 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dc5e14f3-1dad-4d5c-b664-517021316094","Type":"ContainerStarted","Data":"8efb963d9049990815af76a4e36b41bd946e14ef1007209654e92df6dfbadd7b"} Feb 19 14:48:14 crc kubenswrapper[4861]: I0219 14:48:14.079993 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1d47eb27-5c6f-49f1-95b5-74d12f384c6f" containerName="nova-metadata-log" containerID="cri-o://15dc0e0540ebb8f73567a83b4805c96bb52bc0ee217324b01977adfc77387d81" gracePeriod=30 Feb 19 14:48:14 crc kubenswrapper[4861]: I0219 
14:48:14.080278 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1d47eb27-5c6f-49f1-95b5-74d12f384c6f" containerName="nova-metadata-metadata" containerID="cri-o://13d387e4336e4e7f6ba47de5fe4d9bb888e144d70571ae0f2ec3e29e08a228b7" gracePeriod=30 Feb 19 14:48:14 crc kubenswrapper[4861]: I0219 14:48:14.080581 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 19 14:48:14 crc kubenswrapper[4861]: I0219 14:48:14.376683 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-c4bc8756f-nmf6c" Feb 19 14:48:14 crc kubenswrapper[4861]: I0219 14:48:14.466794 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc66d6689-67tls"] Feb 19 14:48:14 crc kubenswrapper[4861]: I0219 14:48:14.467198 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-dc66d6689-67tls" podUID="2f4bbebd-540a-4a3e-a458-e17b5d0b0bec" containerName="dnsmasq-dns" containerID="cri-o://3f972677c2660422c565c4d3d477f420f0c2a18dcec7c112e6b1322bf5878d0d" gracePeriod=10 Feb 19 14:48:14 crc kubenswrapper[4861]: I0219 14:48:14.706928 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 14:48:14 crc kubenswrapper[4861]: I0219 14:48:14.873796 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d47eb27-5c6f-49f1-95b5-74d12f384c6f-nova-metadata-tls-certs\") pod \"1d47eb27-5c6f-49f1-95b5-74d12f384c6f\" (UID: \"1d47eb27-5c6f-49f1-95b5-74d12f384c6f\") " Feb 19 14:48:14 crc kubenswrapper[4861]: I0219 14:48:14.873959 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d47eb27-5c6f-49f1-95b5-74d12f384c6f-combined-ca-bundle\") pod \"1d47eb27-5c6f-49f1-95b5-74d12f384c6f\" (UID: \"1d47eb27-5c6f-49f1-95b5-74d12f384c6f\") " Feb 19 14:48:14 crc kubenswrapper[4861]: I0219 14:48:14.874047 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d47eb27-5c6f-49f1-95b5-74d12f384c6f-logs\") pod \"1d47eb27-5c6f-49f1-95b5-74d12f384c6f\" (UID: \"1d47eb27-5c6f-49f1-95b5-74d12f384c6f\") " Feb 19 14:48:14 crc kubenswrapper[4861]: I0219 14:48:14.874076 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ttpb\" (UniqueName: \"kubernetes.io/projected/1d47eb27-5c6f-49f1-95b5-74d12f384c6f-kube-api-access-9ttpb\") pod \"1d47eb27-5c6f-49f1-95b5-74d12f384c6f\" (UID: \"1d47eb27-5c6f-49f1-95b5-74d12f384c6f\") " Feb 19 14:48:14 crc kubenswrapper[4861]: I0219 14:48:14.874104 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d47eb27-5c6f-49f1-95b5-74d12f384c6f-config-data\") pod \"1d47eb27-5c6f-49f1-95b5-74d12f384c6f\" (UID: \"1d47eb27-5c6f-49f1-95b5-74d12f384c6f\") " Feb 19 14:48:14 crc kubenswrapper[4861]: I0219 14:48:14.874593 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/1d47eb27-5c6f-49f1-95b5-74d12f384c6f-logs" (OuterVolumeSpecName: "logs") pod "1d47eb27-5c6f-49f1-95b5-74d12f384c6f" (UID: "1d47eb27-5c6f-49f1-95b5-74d12f384c6f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:48:14 crc kubenswrapper[4861]: I0219 14:48:14.889242 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d47eb27-5c6f-49f1-95b5-74d12f384c6f-kube-api-access-9ttpb" (OuterVolumeSpecName: "kube-api-access-9ttpb") pod "1d47eb27-5c6f-49f1-95b5-74d12f384c6f" (UID: "1d47eb27-5c6f-49f1-95b5-74d12f384c6f"). InnerVolumeSpecName "kube-api-access-9ttpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:48:14 crc kubenswrapper[4861]: I0219 14:48:14.922709 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d47eb27-5c6f-49f1-95b5-74d12f384c6f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d47eb27-5c6f-49f1-95b5-74d12f384c6f" (UID: "1d47eb27-5c6f-49f1-95b5-74d12f384c6f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:48:14 crc kubenswrapper[4861]: I0219 14:48:14.931158 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d47eb27-5c6f-49f1-95b5-74d12f384c6f-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "1d47eb27-5c6f-49f1-95b5-74d12f384c6f" (UID: "1d47eb27-5c6f-49f1-95b5-74d12f384c6f"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:48:14 crc kubenswrapper[4861]: I0219 14:48:14.935111 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d47eb27-5c6f-49f1-95b5-74d12f384c6f-config-data" (OuterVolumeSpecName: "config-data") pod "1d47eb27-5c6f-49f1-95b5-74d12f384c6f" (UID: "1d47eb27-5c6f-49f1-95b5-74d12f384c6f"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:48:14 crc kubenswrapper[4861]: I0219 14:48:14.975685 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d47eb27-5c6f-49f1-95b5-74d12f384c6f-logs\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:14 crc kubenswrapper[4861]: I0219 14:48:14.975939 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ttpb\" (UniqueName: \"kubernetes.io/projected/1d47eb27-5c6f-49f1-95b5-74d12f384c6f-kube-api-access-9ttpb\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:14 crc kubenswrapper[4861]: I0219 14:48:14.975949 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d47eb27-5c6f-49f1-95b5-74d12f384c6f-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:14 crc kubenswrapper[4861]: I0219 14:48:14.975960 4861 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d47eb27-5c6f-49f1-95b5-74d12f384c6f-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:14 crc kubenswrapper[4861]: I0219 14:48:14.975969 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d47eb27-5c6f-49f1-95b5-74d12f384c6f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.034200 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dc66d6689-67tls" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.089780 4861 generic.go:334] "Generic (PLEG): container finished" podID="1d47eb27-5c6f-49f1-95b5-74d12f384c6f" containerID="13d387e4336e4e7f6ba47de5fe4d9bb888e144d70571ae0f2ec3e29e08a228b7" exitCode=0 Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.089807 4861 generic.go:334] "Generic (PLEG): container finished" podID="1d47eb27-5c6f-49f1-95b5-74d12f384c6f" containerID="15dc0e0540ebb8f73567a83b4805c96bb52bc0ee217324b01977adfc77387d81" exitCode=143 Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.089845 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.089857 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1d47eb27-5c6f-49f1-95b5-74d12f384c6f","Type":"ContainerDied","Data":"13d387e4336e4e7f6ba47de5fe4d9bb888e144d70571ae0f2ec3e29e08a228b7"} Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.089905 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1d47eb27-5c6f-49f1-95b5-74d12f384c6f","Type":"ContainerDied","Data":"15dc0e0540ebb8f73567a83b4805c96bb52bc0ee217324b01977adfc77387d81"} Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.089920 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1d47eb27-5c6f-49f1-95b5-74d12f384c6f","Type":"ContainerDied","Data":"9d17ba431507c4a20adf3d87c0a05bb3116f20ac07c54c271aa4fbfabbecfeeb"} Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.089936 4861 scope.go:117] "RemoveContainer" containerID="13d387e4336e4e7f6ba47de5fe4d9bb888e144d70571ae0f2ec3e29e08a228b7" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.096683 4861 generic.go:334] "Generic (PLEG): container finished" 
podID="2f4bbebd-540a-4a3e-a458-e17b5d0b0bec" containerID="3f972677c2660422c565c4d3d477f420f0c2a18dcec7c112e6b1322bf5878d0d" exitCode=0 Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.096744 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc66d6689-67tls" event={"ID":"2f4bbebd-540a-4a3e-a458-e17b5d0b0bec","Type":"ContainerDied","Data":"3f972677c2660422c565c4d3d477f420f0c2a18dcec7c112e6b1322bf5878d0d"} Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.096751 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc66d6689-67tls" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.096762 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc66d6689-67tls" event={"ID":"2f4bbebd-540a-4a3e-a458-e17b5d0b0bec","Type":"ContainerDied","Data":"e53ebdbec7db38a7c786e0395dead5b66bb82a9ecd6cb4edb94578c59c6e6ecd"} Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.099379 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dc5e14f3-1dad-4d5c-b664-517021316094","Type":"ContainerStarted","Data":"3ee6b4299a5b7b3c654e792d81d39c8d8188368a39b1b421fa05469018c5c5b5"} Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.099397 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dc5e14f3-1dad-4d5c-b664-517021316094","Type":"ContainerStarted","Data":"901a4a31a5168e1fde27284a3f543cb03cd1bb3e4f93203a467464301db9bad3"} Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.121714 4861 scope.go:117] "RemoveContainer" containerID="15dc0e0540ebb8f73567a83b4805c96bb52bc0ee217324b01977adfc77387d81" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.123518 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.123508839 podStartE2EDuration="2.123508839s" podCreationTimestamp="2026-02-19 14:48:13 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:48:15.120781845 +0000 UTC m=+5909.781885073" watchObservedRunningTime="2026-02-19 14:48:15.123508839 +0000 UTC m=+5909.784612067" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.140457 4861 scope.go:117] "RemoveContainer" containerID="13d387e4336e4e7f6ba47de5fe4d9bb888e144d70571ae0f2ec3e29e08a228b7" Feb 19 14:48:15 crc kubenswrapper[4861]: E0219 14:48:15.140802 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13d387e4336e4e7f6ba47de5fe4d9bb888e144d70571ae0f2ec3e29e08a228b7\": container with ID starting with 13d387e4336e4e7f6ba47de5fe4d9bb888e144d70571ae0f2ec3e29e08a228b7 not found: ID does not exist" containerID="13d387e4336e4e7f6ba47de5fe4d9bb888e144d70571ae0f2ec3e29e08a228b7" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.140835 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13d387e4336e4e7f6ba47de5fe4d9bb888e144d70571ae0f2ec3e29e08a228b7"} err="failed to get container status \"13d387e4336e4e7f6ba47de5fe4d9bb888e144d70571ae0f2ec3e29e08a228b7\": rpc error: code = NotFound desc = could not find container \"13d387e4336e4e7f6ba47de5fe4d9bb888e144d70571ae0f2ec3e29e08a228b7\": container with ID starting with 13d387e4336e4e7f6ba47de5fe4d9bb888e144d70571ae0f2ec3e29e08a228b7 not found: ID does not exist" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.140855 4861 scope.go:117] "RemoveContainer" containerID="15dc0e0540ebb8f73567a83b4805c96bb52bc0ee217324b01977adfc77387d81" Feb 19 14:48:15 crc kubenswrapper[4861]: E0219 14:48:15.141174 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15dc0e0540ebb8f73567a83b4805c96bb52bc0ee217324b01977adfc77387d81\": container with ID starting with 
15dc0e0540ebb8f73567a83b4805c96bb52bc0ee217324b01977adfc77387d81 not found: ID does not exist" containerID="15dc0e0540ebb8f73567a83b4805c96bb52bc0ee217324b01977adfc77387d81" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.141197 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15dc0e0540ebb8f73567a83b4805c96bb52bc0ee217324b01977adfc77387d81"} err="failed to get container status \"15dc0e0540ebb8f73567a83b4805c96bb52bc0ee217324b01977adfc77387d81\": rpc error: code = NotFound desc = could not find container \"15dc0e0540ebb8f73567a83b4805c96bb52bc0ee217324b01977adfc77387d81\": container with ID starting with 15dc0e0540ebb8f73567a83b4805c96bb52bc0ee217324b01977adfc77387d81 not found: ID does not exist" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.141209 4861 scope.go:117] "RemoveContainer" containerID="13d387e4336e4e7f6ba47de5fe4d9bb888e144d70571ae0f2ec3e29e08a228b7" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.141371 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13d387e4336e4e7f6ba47de5fe4d9bb888e144d70571ae0f2ec3e29e08a228b7"} err="failed to get container status \"13d387e4336e4e7f6ba47de5fe4d9bb888e144d70571ae0f2ec3e29e08a228b7\": rpc error: code = NotFound desc = could not find container \"13d387e4336e4e7f6ba47de5fe4d9bb888e144d70571ae0f2ec3e29e08a228b7\": container with ID starting with 13d387e4336e4e7f6ba47de5fe4d9bb888e144d70571ae0f2ec3e29e08a228b7 not found: ID does not exist" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.141387 4861 scope.go:117] "RemoveContainer" containerID="15dc0e0540ebb8f73567a83b4805c96bb52bc0ee217324b01977adfc77387d81" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.141589 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15dc0e0540ebb8f73567a83b4805c96bb52bc0ee217324b01977adfc77387d81"} err="failed to get container status 
\"15dc0e0540ebb8f73567a83b4805c96bb52bc0ee217324b01977adfc77387d81\": rpc error: code = NotFound desc = could not find container \"15dc0e0540ebb8f73567a83b4805c96bb52bc0ee217324b01977adfc77387d81\": container with ID starting with 15dc0e0540ebb8f73567a83b4805c96bb52bc0ee217324b01977adfc77387d81 not found: ID does not exist" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.141616 4861 scope.go:117] "RemoveContainer" containerID="3f972677c2660422c565c4d3d477f420f0c2a18dcec7c112e6b1322bf5878d0d" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.162258 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.162690 4861 scope.go:117] "RemoveContainer" containerID="d662038d3fedec512c873d8ebe2255ff2c6ade8d2110bbee2ec46e5a0a2c9de7" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.174265 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.180921 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f4bbebd-540a-4a3e-a458-e17b5d0b0bec-dns-svc\") pod \"2f4bbebd-540a-4a3e-a458-e17b5d0b0bec\" (UID: \"2f4bbebd-540a-4a3e-a458-e17b5d0b0bec\") " Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.180990 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f4bbebd-540a-4a3e-a458-e17b5d0b0bec-config\") pod \"2f4bbebd-540a-4a3e-a458-e17b5d0b0bec\" (UID: \"2f4bbebd-540a-4a3e-a458-e17b5d0b0bec\") " Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.181033 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddc6n\" (UniqueName: \"kubernetes.io/projected/2f4bbebd-540a-4a3e-a458-e17b5d0b0bec-kube-api-access-ddc6n\") pod \"2f4bbebd-540a-4a3e-a458-e17b5d0b0bec\" (UID: 
\"2f4bbebd-540a-4a3e-a458-e17b5d0b0bec\") " Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.181055 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f4bbebd-540a-4a3e-a458-e17b5d0b0bec-ovsdbserver-nb\") pod \"2f4bbebd-540a-4a3e-a458-e17b5d0b0bec\" (UID: \"2f4bbebd-540a-4a3e-a458-e17b5d0b0bec\") " Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.181091 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f4bbebd-540a-4a3e-a458-e17b5d0b0bec-ovsdbserver-sb\") pod \"2f4bbebd-540a-4a3e-a458-e17b5d0b0bec\" (UID: \"2f4bbebd-540a-4a3e-a458-e17b5d0b0bec\") " Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.190116 4861 scope.go:117] "RemoveContainer" containerID="3f972677c2660422c565c4d3d477f420f0c2a18dcec7c112e6b1322bf5878d0d" Feb 19 14:48:15 crc kubenswrapper[4861]: E0219 14:48:15.191964 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f972677c2660422c565c4d3d477f420f0c2a18dcec7c112e6b1322bf5878d0d\": container with ID starting with 3f972677c2660422c565c4d3d477f420f0c2a18dcec7c112e6b1322bf5878d0d not found: ID does not exist" containerID="3f972677c2660422c565c4d3d477f420f0c2a18dcec7c112e6b1322bf5878d0d" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.192027 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f972677c2660422c565c4d3d477f420f0c2a18dcec7c112e6b1322bf5878d0d"} err="failed to get container status \"3f972677c2660422c565c4d3d477f420f0c2a18dcec7c112e6b1322bf5878d0d\": rpc error: code = NotFound desc = could not find container \"3f972677c2660422c565c4d3d477f420f0c2a18dcec7c112e6b1322bf5878d0d\": container with ID starting with 3f972677c2660422c565c4d3d477f420f0c2a18dcec7c112e6b1322bf5878d0d not found: ID does not exist" Feb 19 
14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.192055 4861 scope.go:117] "RemoveContainer" containerID="d662038d3fedec512c873d8ebe2255ff2c6ade8d2110bbee2ec46e5a0a2c9de7" Feb 19 14:48:15 crc kubenswrapper[4861]: E0219 14:48:15.192474 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d662038d3fedec512c873d8ebe2255ff2c6ade8d2110bbee2ec46e5a0a2c9de7\": container with ID starting with d662038d3fedec512c873d8ebe2255ff2c6ade8d2110bbee2ec46e5a0a2c9de7 not found: ID does not exist" containerID="d662038d3fedec512c873d8ebe2255ff2c6ade8d2110bbee2ec46e5a0a2c9de7" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.192497 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d662038d3fedec512c873d8ebe2255ff2c6ade8d2110bbee2ec46e5a0a2c9de7"} err="failed to get container status \"d662038d3fedec512c873d8ebe2255ff2c6ade8d2110bbee2ec46e5a0a2c9de7\": rpc error: code = NotFound desc = could not find container \"d662038d3fedec512c873d8ebe2255ff2c6ade8d2110bbee2ec46e5a0a2c9de7\": container with ID starting with d662038d3fedec512c873d8ebe2255ff2c6ade8d2110bbee2ec46e5a0a2c9de7 not found: ID does not exist" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.204642 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 14:48:15 crc kubenswrapper[4861]: E0219 14:48:15.205505 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d47eb27-5c6f-49f1-95b5-74d12f384c6f" containerName="nova-metadata-log" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.205530 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d47eb27-5c6f-49f1-95b5-74d12f384c6f" containerName="nova-metadata-log" Feb 19 14:48:15 crc kubenswrapper[4861]: E0219 14:48:15.205623 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f4bbebd-540a-4a3e-a458-e17b5d0b0bec" containerName="init" Feb 19 14:48:15 crc 
kubenswrapper[4861]: I0219 14:48:15.205635 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f4bbebd-540a-4a3e-a458-e17b5d0b0bec" containerName="init" Feb 19 14:48:15 crc kubenswrapper[4861]: E0219 14:48:15.205682 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d47eb27-5c6f-49f1-95b5-74d12f384c6f" containerName="nova-metadata-metadata" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.205695 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d47eb27-5c6f-49f1-95b5-74d12f384c6f" containerName="nova-metadata-metadata" Feb 19 14:48:15 crc kubenswrapper[4861]: E0219 14:48:15.205934 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f4bbebd-540a-4a3e-a458-e17b5d0b0bec" containerName="dnsmasq-dns" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.205952 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f4bbebd-540a-4a3e-a458-e17b5d0b0bec" containerName="dnsmasq-dns" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.206679 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d47eb27-5c6f-49f1-95b5-74d12f384c6f" containerName="nova-metadata-log" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.206721 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d47eb27-5c6f-49f1-95b5-74d12f384c6f" containerName="nova-metadata-metadata" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.206743 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f4bbebd-540a-4a3e-a458-e17b5d0b0bec" containerName="dnsmasq-dns" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.213384 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f4bbebd-540a-4a3e-a458-e17b5d0b0bec-kube-api-access-ddc6n" (OuterVolumeSpecName: "kube-api-access-ddc6n") pod "2f4bbebd-540a-4a3e-a458-e17b5d0b0bec" (UID: "2f4bbebd-540a-4a3e-a458-e17b5d0b0bec"). InnerVolumeSpecName "kube-api-access-ddc6n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.217183 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.223596 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.225928 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.237330 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.249393 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f4bbebd-540a-4a3e-a458-e17b5d0b0bec-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2f4bbebd-540a-4a3e-a458-e17b5d0b0bec" (UID: "2f4bbebd-540a-4a3e-a458-e17b5d0b0bec"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.259519 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f4bbebd-540a-4a3e-a458-e17b5d0b0bec-config" (OuterVolumeSpecName: "config") pod "2f4bbebd-540a-4a3e-a458-e17b5d0b0bec" (UID: "2f4bbebd-540a-4a3e-a458-e17b5d0b0bec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.263457 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f4bbebd-540a-4a3e-a458-e17b5d0b0bec-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2f4bbebd-540a-4a3e-a458-e17b5d0b0bec" (UID: "2f4bbebd-540a-4a3e-a458-e17b5d0b0bec"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.269518 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f4bbebd-540a-4a3e-a458-e17b5d0b0bec-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2f4bbebd-540a-4a3e-a458-e17b5d0b0bec" (UID: "2f4bbebd-540a-4a3e-a458-e17b5d0b0bec"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.282816 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/149b49eb-7f49-4bc2-b173-70ac5e737d4b-logs\") pod \"nova-metadata-0\" (UID: \"149b49eb-7f49-4bc2-b173-70ac5e737d4b\") " pod="openstack/nova-metadata-0" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.282895 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/149b49eb-7f49-4bc2-b173-70ac5e737d4b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"149b49eb-7f49-4bc2-b173-70ac5e737d4b\") " pod="openstack/nova-metadata-0" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.283005 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/149b49eb-7f49-4bc2-b173-70ac5e737d4b-config-data\") pod \"nova-metadata-0\" (UID: \"149b49eb-7f49-4bc2-b173-70ac5e737d4b\") " pod="openstack/nova-metadata-0" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.283069 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zptw5\" (UniqueName: \"kubernetes.io/projected/149b49eb-7f49-4bc2-b173-70ac5e737d4b-kube-api-access-zptw5\") pod \"nova-metadata-0\" (UID: \"149b49eb-7f49-4bc2-b173-70ac5e737d4b\") " 
pod="openstack/nova-metadata-0" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.283098 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/149b49eb-7f49-4bc2-b173-70ac5e737d4b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"149b49eb-7f49-4bc2-b173-70ac5e737d4b\") " pod="openstack/nova-metadata-0" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.283221 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f4bbebd-540a-4a3e-a458-e17b5d0b0bec-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.283245 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f4bbebd-540a-4a3e-a458-e17b5d0b0bec-config\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.283258 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddc6n\" (UniqueName: \"kubernetes.io/projected/2f4bbebd-540a-4a3e-a458-e17b5d0b0bec-kube-api-access-ddc6n\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.283272 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f4bbebd-540a-4a3e-a458-e17b5d0b0bec-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.283284 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f4bbebd-540a-4a3e-a458-e17b5d0b0bec-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.385472 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/149b49eb-7f49-4bc2-b173-70ac5e737d4b-config-data\") pod 
\"nova-metadata-0\" (UID: \"149b49eb-7f49-4bc2-b173-70ac5e737d4b\") " pod="openstack/nova-metadata-0" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.385848 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zptw5\" (UniqueName: \"kubernetes.io/projected/149b49eb-7f49-4bc2-b173-70ac5e737d4b-kube-api-access-zptw5\") pod \"nova-metadata-0\" (UID: \"149b49eb-7f49-4bc2-b173-70ac5e737d4b\") " pod="openstack/nova-metadata-0" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.386012 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/149b49eb-7f49-4bc2-b173-70ac5e737d4b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"149b49eb-7f49-4bc2-b173-70ac5e737d4b\") " pod="openstack/nova-metadata-0" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.386286 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/149b49eb-7f49-4bc2-b173-70ac5e737d4b-logs\") pod \"nova-metadata-0\" (UID: \"149b49eb-7f49-4bc2-b173-70ac5e737d4b\") " pod="openstack/nova-metadata-0" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.386536 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/149b49eb-7f49-4bc2-b173-70ac5e737d4b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"149b49eb-7f49-4bc2-b173-70ac5e737d4b\") " pod="openstack/nova-metadata-0" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.386734 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/149b49eb-7f49-4bc2-b173-70ac5e737d4b-logs\") pod \"nova-metadata-0\" (UID: \"149b49eb-7f49-4bc2-b173-70ac5e737d4b\") " pod="openstack/nova-metadata-0" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.391527 4861 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/149b49eb-7f49-4bc2-b173-70ac5e737d4b-config-data\") pod \"nova-metadata-0\" (UID: \"149b49eb-7f49-4bc2-b173-70ac5e737d4b\") " pod="openstack/nova-metadata-0" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.391547 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/149b49eb-7f49-4bc2-b173-70ac5e737d4b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"149b49eb-7f49-4bc2-b173-70ac5e737d4b\") " pod="openstack/nova-metadata-0" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.396262 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/149b49eb-7f49-4bc2-b173-70ac5e737d4b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"149b49eb-7f49-4bc2-b173-70ac5e737d4b\") " pod="openstack/nova-metadata-0" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.408148 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zptw5\" (UniqueName: \"kubernetes.io/projected/149b49eb-7f49-4bc2-b173-70ac5e737d4b-kube-api-access-zptw5\") pod \"nova-metadata-0\" (UID: \"149b49eb-7f49-4bc2-b173-70ac5e737d4b\") " pod="openstack/nova-metadata-0" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.433926 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc66d6689-67tls"] Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.443939 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dc66d6689-67tls"] Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.474624 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.536139 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.989010 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d47eb27-5c6f-49f1-95b5-74d12f384c6f" path="/var/lib/kubelet/pods/1d47eb27-5c6f-49f1-95b5-74d12f384c6f/volumes" Feb 19 14:48:15 crc kubenswrapper[4861]: I0219 14:48:15.990624 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f4bbebd-540a-4a3e-a458-e17b5d0b0bec" path="/var/lib/kubelet/pods/2f4bbebd-540a-4a3e-a458-e17b5d0b0bec/volumes" Feb 19 14:48:16 crc kubenswrapper[4861]: W0219 14:48:16.064579 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod149b49eb_7f49_4bc2_b173_70ac5e737d4b.slice/crio-7f1fabda63fef6fe9482e1740b1c598d468106ed56c6b2322af5f892f3c041c6 WatchSource:0}: Error finding container 7f1fabda63fef6fe9482e1740b1c598d468106ed56c6b2322af5f892f3c041c6: Status 404 returned error can't find the container with id 7f1fabda63fef6fe9482e1740b1c598d468106ed56c6b2322af5f892f3c041c6 Feb 19 14:48:16 crc kubenswrapper[4861]: I0219 14:48:16.069021 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 14:48:16 crc kubenswrapper[4861]: I0219 14:48:16.152311 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"149b49eb-7f49-4bc2-b173-70ac5e737d4b","Type":"ContainerStarted","Data":"7f1fabda63fef6fe9482e1740b1c598d468106ed56c6b2322af5f892f3c041c6"} Feb 19 14:48:17 crc kubenswrapper[4861]: I0219 14:48:17.199530 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"149b49eb-7f49-4bc2-b173-70ac5e737d4b","Type":"ContainerStarted","Data":"6b1b029f1ec79195a0129e3318983431f1e005f735c7504352d8c72cae673595"} Feb 19 14:48:17 crc kubenswrapper[4861]: I0219 14:48:17.199856 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"149b49eb-7f49-4bc2-b173-70ac5e737d4b","Type":"ContainerStarted","Data":"2109bce4b28c843a3b3c45239eb93c1af2ddb14d7c314c1a79319d35ede34438"} Feb 19 14:48:17 crc kubenswrapper[4861]: I0219 14:48:17.228161 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.228131634 podStartE2EDuration="2.228131634s" podCreationTimestamp="2026-02-19 14:48:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:48:17.227139958 +0000 UTC m=+5911.888243226" watchObservedRunningTime="2026-02-19 14:48:17.228131634 +0000 UTC m=+5911.889234892" Feb 19 14:48:20 crc kubenswrapper[4861]: I0219 14:48:20.473602 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:20 crc kubenswrapper[4861]: I0219 14:48:20.498839 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:20 crc kubenswrapper[4861]: I0219 14:48:20.536824 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 14:48:20 crc kubenswrapper[4861]: I0219 14:48:20.536941 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 14:48:21 crc kubenswrapper[4861]: I0219 14:48:21.282522 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 19 14:48:21 crc kubenswrapper[4861]: I0219 14:48:21.552678 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 19 14:48:22 crc kubenswrapper[4861]: I0219 14:48:22.164947 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-28jg8"] Feb 19 14:48:22 crc kubenswrapper[4861]: I0219 
14:48:22.166360 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-28jg8" Feb 19 14:48:22 crc kubenswrapper[4861]: I0219 14:48:22.169005 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 19 14:48:22 crc kubenswrapper[4861]: I0219 14:48:22.169141 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 19 14:48:22 crc kubenswrapper[4861]: I0219 14:48:22.178359 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-28jg8"] Feb 19 14:48:22 crc kubenswrapper[4861]: I0219 14:48:22.237659 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5ghb\" (UniqueName: \"kubernetes.io/projected/a657122f-164e-4978-96c1-b7e9007f76ad-kube-api-access-q5ghb\") pod \"nova-cell1-cell-mapping-28jg8\" (UID: \"a657122f-164e-4978-96c1-b7e9007f76ad\") " pod="openstack/nova-cell1-cell-mapping-28jg8" Feb 19 14:48:22 crc kubenswrapper[4861]: I0219 14:48:22.237768 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a657122f-164e-4978-96c1-b7e9007f76ad-scripts\") pod \"nova-cell1-cell-mapping-28jg8\" (UID: \"a657122f-164e-4978-96c1-b7e9007f76ad\") " pod="openstack/nova-cell1-cell-mapping-28jg8" Feb 19 14:48:22 crc kubenswrapper[4861]: I0219 14:48:22.237821 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a657122f-164e-4978-96c1-b7e9007f76ad-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-28jg8\" (UID: \"a657122f-164e-4978-96c1-b7e9007f76ad\") " pod="openstack/nova-cell1-cell-mapping-28jg8" Feb 19 14:48:22 crc kubenswrapper[4861]: I0219 14:48:22.237900 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a657122f-164e-4978-96c1-b7e9007f76ad-config-data\") pod \"nova-cell1-cell-mapping-28jg8\" (UID: \"a657122f-164e-4978-96c1-b7e9007f76ad\") " pod="openstack/nova-cell1-cell-mapping-28jg8" Feb 19 14:48:22 crc kubenswrapper[4861]: I0219 14:48:22.340080 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a657122f-164e-4978-96c1-b7e9007f76ad-scripts\") pod \"nova-cell1-cell-mapping-28jg8\" (UID: \"a657122f-164e-4978-96c1-b7e9007f76ad\") " pod="openstack/nova-cell1-cell-mapping-28jg8" Feb 19 14:48:22 crc kubenswrapper[4861]: I0219 14:48:22.340362 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a657122f-164e-4978-96c1-b7e9007f76ad-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-28jg8\" (UID: \"a657122f-164e-4978-96c1-b7e9007f76ad\") " pod="openstack/nova-cell1-cell-mapping-28jg8" Feb 19 14:48:22 crc kubenswrapper[4861]: I0219 14:48:22.340539 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a657122f-164e-4978-96c1-b7e9007f76ad-config-data\") pod \"nova-cell1-cell-mapping-28jg8\" (UID: \"a657122f-164e-4978-96c1-b7e9007f76ad\") " pod="openstack/nova-cell1-cell-mapping-28jg8" Feb 19 14:48:22 crc kubenswrapper[4861]: I0219 14:48:22.340692 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5ghb\" (UniqueName: \"kubernetes.io/projected/a657122f-164e-4978-96c1-b7e9007f76ad-kube-api-access-q5ghb\") pod \"nova-cell1-cell-mapping-28jg8\" (UID: \"a657122f-164e-4978-96c1-b7e9007f76ad\") " pod="openstack/nova-cell1-cell-mapping-28jg8" Feb 19 14:48:22 crc kubenswrapper[4861]: I0219 14:48:22.346725 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a657122f-164e-4978-96c1-b7e9007f76ad-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-28jg8\" (UID: \"a657122f-164e-4978-96c1-b7e9007f76ad\") " pod="openstack/nova-cell1-cell-mapping-28jg8"
Feb 19 14:48:22 crc kubenswrapper[4861]: I0219 14:48:22.349671 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a657122f-164e-4978-96c1-b7e9007f76ad-scripts\") pod \"nova-cell1-cell-mapping-28jg8\" (UID: \"a657122f-164e-4978-96c1-b7e9007f76ad\") " pod="openstack/nova-cell1-cell-mapping-28jg8"
Feb 19 14:48:22 crc kubenswrapper[4861]: I0219 14:48:22.365808 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a657122f-164e-4978-96c1-b7e9007f76ad-config-data\") pod \"nova-cell1-cell-mapping-28jg8\" (UID: \"a657122f-164e-4978-96c1-b7e9007f76ad\") " pod="openstack/nova-cell1-cell-mapping-28jg8"
Feb 19 14:48:22 crc kubenswrapper[4861]: I0219 14:48:22.381577 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5ghb\" (UniqueName: \"kubernetes.io/projected/a657122f-164e-4978-96c1-b7e9007f76ad-kube-api-access-q5ghb\") pod \"nova-cell1-cell-mapping-28jg8\" (UID: \"a657122f-164e-4978-96c1-b7e9007f76ad\") " pod="openstack/nova-cell1-cell-mapping-28jg8"
Feb 19 14:48:22 crc kubenswrapper[4861]: I0219 14:48:22.513197 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-28jg8"
Feb 19 14:48:23 crc kubenswrapper[4861]: I0219 14:48:23.042911 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-28jg8"]
Feb 19 14:48:23 crc kubenswrapper[4861]: I0219 14:48:23.278634 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-28jg8" event={"ID":"a657122f-164e-4978-96c1-b7e9007f76ad","Type":"ContainerStarted","Data":"48d346ae6aa9141034d2505c31f4ffe2e8e6141ee509925eedbef977de667241"}
Feb 19 14:48:23 crc kubenswrapper[4861]: I0219 14:48:23.278691 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-28jg8" event={"ID":"a657122f-164e-4978-96c1-b7e9007f76ad","Type":"ContainerStarted","Data":"a504d4291a6f857498bf8c34c643fc03740d86d97a383cf0729c1e554975789d"}
Feb 19 14:48:23 crc kubenswrapper[4861]: I0219 14:48:23.311887 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-28jg8" podStartSLOduration=1.311862559 podStartE2EDuration="1.311862559s" podCreationTimestamp="2026-02-19 14:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:48:23.300253399 +0000 UTC m=+5917.961356647" watchObservedRunningTime="2026-02-19 14:48:23.311862559 +0000 UTC m=+5917.972965797"
Feb 19 14:48:23 crc kubenswrapper[4861]: I0219 14:48:23.487724 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 19 14:48:23 crc kubenswrapper[4861]: I0219 14:48:23.488772 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 19 14:48:24 crc kubenswrapper[4861]: I0219 14:48:24.569780 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dc5e14f3-1dad-4d5c-b664-517021316094" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.95:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 14:48:24 crc kubenswrapper[4861]: I0219 14:48:24.570319 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dc5e14f3-1dad-4d5c-b664-517021316094" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.95:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 14:48:25 crc kubenswrapper[4861]: I0219 14:48:25.536838 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 19 14:48:25 crc kubenswrapper[4861]: I0219 14:48:25.537214 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 19 14:48:26 crc kubenswrapper[4861]: I0219 14:48:26.550768 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="149b49eb-7f49-4bc2-b173-70ac5e737d4b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.96:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 19 14:48:26 crc kubenswrapper[4861]: I0219 14:48:26.550771 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="149b49eb-7f49-4bc2-b173-70ac5e737d4b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.96:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 19 14:48:28 crc kubenswrapper[4861]: I0219 14:48:28.350482 4861 generic.go:334] "Generic (PLEG): container finished" podID="a657122f-164e-4978-96c1-b7e9007f76ad" containerID="48d346ae6aa9141034d2505c31f4ffe2e8e6141ee509925eedbef977de667241" exitCode=0
Feb 19 14:48:28 crc kubenswrapper[4861]: I0219 14:48:28.350622 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-28jg8" event={"ID":"a657122f-164e-4978-96c1-b7e9007f76ad","Type":"ContainerDied","Data":"48d346ae6aa9141034d2505c31f4ffe2e8e6141ee509925eedbef977de667241"}
Feb 19 14:48:29 crc kubenswrapper[4861]: I0219 14:48:29.845466 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-28jg8"
Feb 19 14:48:29 crc kubenswrapper[4861]: I0219 14:48:29.946258 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a657122f-164e-4978-96c1-b7e9007f76ad-combined-ca-bundle\") pod \"a657122f-164e-4978-96c1-b7e9007f76ad\" (UID: \"a657122f-164e-4978-96c1-b7e9007f76ad\") "
Feb 19 14:48:29 crc kubenswrapper[4861]: I0219 14:48:29.946341 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a657122f-164e-4978-96c1-b7e9007f76ad-scripts\") pod \"a657122f-164e-4978-96c1-b7e9007f76ad\" (UID: \"a657122f-164e-4978-96c1-b7e9007f76ad\") "
Feb 19 14:48:29 crc kubenswrapper[4861]: I0219 14:48:29.946572 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5ghb\" (UniqueName: \"kubernetes.io/projected/a657122f-164e-4978-96c1-b7e9007f76ad-kube-api-access-q5ghb\") pod \"a657122f-164e-4978-96c1-b7e9007f76ad\" (UID: \"a657122f-164e-4978-96c1-b7e9007f76ad\") "
Feb 19 14:48:29 crc kubenswrapper[4861]: I0219 14:48:29.946617 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a657122f-164e-4978-96c1-b7e9007f76ad-config-data\") pod \"a657122f-164e-4978-96c1-b7e9007f76ad\" (UID: \"a657122f-164e-4978-96c1-b7e9007f76ad\") "
Feb 19 14:48:29 crc kubenswrapper[4861]: I0219 14:48:29.952913 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a657122f-164e-4978-96c1-b7e9007f76ad-scripts" (OuterVolumeSpecName: "scripts") pod "a657122f-164e-4978-96c1-b7e9007f76ad" (UID: "a657122f-164e-4978-96c1-b7e9007f76ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 14:48:29 crc kubenswrapper[4861]: I0219 14:48:29.954651 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a657122f-164e-4978-96c1-b7e9007f76ad-kube-api-access-q5ghb" (OuterVolumeSpecName: "kube-api-access-q5ghb") pod "a657122f-164e-4978-96c1-b7e9007f76ad" (UID: "a657122f-164e-4978-96c1-b7e9007f76ad"). InnerVolumeSpecName "kube-api-access-q5ghb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 14:48:29 crc kubenswrapper[4861]: I0219 14:48:29.977006 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a657122f-164e-4978-96c1-b7e9007f76ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a657122f-164e-4978-96c1-b7e9007f76ad" (UID: "a657122f-164e-4978-96c1-b7e9007f76ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 14:48:29 crc kubenswrapper[4861]: I0219 14:48:29.998130 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a657122f-164e-4978-96c1-b7e9007f76ad-config-data" (OuterVolumeSpecName: "config-data") pod "a657122f-164e-4978-96c1-b7e9007f76ad" (UID: "a657122f-164e-4978-96c1-b7e9007f76ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 14:48:30 crc kubenswrapper[4861]: I0219 14:48:30.049763 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a657122f-164e-4978-96c1-b7e9007f76ad-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 14:48:30 crc kubenswrapper[4861]: I0219 14:48:30.049812 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a657122f-164e-4978-96c1-b7e9007f76ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 14:48:30 crc kubenswrapper[4861]: I0219 14:48:30.049835 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a657122f-164e-4978-96c1-b7e9007f76ad-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 14:48:30 crc kubenswrapper[4861]: I0219 14:48:30.049855 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5ghb\" (UniqueName: \"kubernetes.io/projected/a657122f-164e-4978-96c1-b7e9007f76ad-kube-api-access-q5ghb\") on node \"crc\" DevicePath \"\""
Feb 19 14:48:30 crc kubenswrapper[4861]: I0219 14:48:30.375147 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-28jg8" event={"ID":"a657122f-164e-4978-96c1-b7e9007f76ad","Type":"ContainerDied","Data":"a504d4291a6f857498bf8c34c643fc03740d86d97a383cf0729c1e554975789d"}
Feb 19 14:48:30 crc kubenswrapper[4861]: I0219 14:48:30.375536 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a504d4291a6f857498bf8c34c643fc03740d86d97a383cf0729c1e554975789d"
Feb 19 14:48:30 crc kubenswrapper[4861]: I0219 14:48:30.375178 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-28jg8"
Feb 19 14:48:30 crc kubenswrapper[4861]: I0219 14:48:30.652860 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 19 14:48:30 crc kubenswrapper[4861]: I0219 14:48:30.653436 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dc5e14f3-1dad-4d5c-b664-517021316094" containerName="nova-api-log" containerID="cri-o://901a4a31a5168e1fde27284a3f543cb03cd1bb3e4f93203a467464301db9bad3" gracePeriod=30
Feb 19 14:48:30 crc kubenswrapper[4861]: I0219 14:48:30.653449 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dc5e14f3-1dad-4d5c-b664-517021316094" containerName="nova-api-api" containerID="cri-o://3ee6b4299a5b7b3c654e792d81d39c8d8188368a39b1b421fa05469018c5c5b5" gracePeriod=30
Feb 19 14:48:30 crc kubenswrapper[4861]: I0219 14:48:30.669161 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 14:48:30 crc kubenswrapper[4861]: I0219 14:48:30.669560 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="149b49eb-7f49-4bc2-b173-70ac5e737d4b" containerName="nova-metadata-log" containerID="cri-o://2109bce4b28c843a3b3c45239eb93c1af2ddb14d7c314c1a79319d35ede34438" gracePeriod=30
Feb 19 14:48:30 crc kubenswrapper[4861]: I0219 14:48:30.669667 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="149b49eb-7f49-4bc2-b173-70ac5e737d4b" containerName="nova-metadata-metadata" containerID="cri-o://6b1b029f1ec79195a0129e3318983431f1e005f735c7504352d8c72cae673595" gracePeriod=30
Feb 19 14:48:31 crc kubenswrapper[4861]: I0219 14:48:31.388351 4861 generic.go:334] "Generic (PLEG): container finished" podID="dc5e14f3-1dad-4d5c-b664-517021316094" containerID="901a4a31a5168e1fde27284a3f543cb03cd1bb3e4f93203a467464301db9bad3" exitCode=143
Feb 19 14:48:31 crc kubenswrapper[4861]: I0219 14:48:31.388489 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dc5e14f3-1dad-4d5c-b664-517021316094","Type":"ContainerDied","Data":"901a4a31a5168e1fde27284a3f543cb03cd1bb3e4f93203a467464301db9bad3"}
Feb 19 14:48:31 crc kubenswrapper[4861]: I0219 14:48:31.391640 4861 generic.go:334] "Generic (PLEG): container finished" podID="149b49eb-7f49-4bc2-b173-70ac5e737d4b" containerID="2109bce4b28c843a3b3c45239eb93c1af2ddb14d7c314c1a79319d35ede34438" exitCode=143
Feb 19 14:48:31 crc kubenswrapper[4861]: I0219 14:48:31.391670 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"149b49eb-7f49-4bc2-b173-70ac5e737d4b","Type":"ContainerDied","Data":"2109bce4b28c843a3b3c45239eb93c1af2ddb14d7c314c1a79319d35ede34438"}
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.247235 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.421649 4861 generic.go:334] "Generic (PLEG): container finished" podID="149b49eb-7f49-4bc2-b173-70ac5e737d4b" containerID="6b1b029f1ec79195a0129e3318983431f1e005f735c7504352d8c72cae673595" exitCode=0
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.421691 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"149b49eb-7f49-4bc2-b173-70ac5e737d4b","Type":"ContainerDied","Data":"6b1b029f1ec79195a0129e3318983431f1e005f735c7504352d8c72cae673595"}
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.421695 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.421725 4861 scope.go:117] "RemoveContainer" containerID="6b1b029f1ec79195a0129e3318983431f1e005f735c7504352d8c72cae673595"
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.421715 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"149b49eb-7f49-4bc2-b173-70ac5e737d4b","Type":"ContainerDied","Data":"7f1fabda63fef6fe9482e1740b1c598d468106ed56c6b2322af5f892f3c041c6"}
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.441985 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/149b49eb-7f49-4bc2-b173-70ac5e737d4b-logs\") pod \"149b49eb-7f49-4bc2-b173-70ac5e737d4b\" (UID: \"149b49eb-7f49-4bc2-b173-70ac5e737d4b\") "
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.442166 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zptw5\" (UniqueName: \"kubernetes.io/projected/149b49eb-7f49-4bc2-b173-70ac5e737d4b-kube-api-access-zptw5\") pod \"149b49eb-7f49-4bc2-b173-70ac5e737d4b\" (UID: \"149b49eb-7f49-4bc2-b173-70ac5e737d4b\") "
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.442201 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/149b49eb-7f49-4bc2-b173-70ac5e737d4b-config-data\") pod \"149b49eb-7f49-4bc2-b173-70ac5e737d4b\" (UID: \"149b49eb-7f49-4bc2-b173-70ac5e737d4b\") "
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.442244 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/149b49eb-7f49-4bc2-b173-70ac5e737d4b-nova-metadata-tls-certs\") pod \"149b49eb-7f49-4bc2-b173-70ac5e737d4b\" (UID: \"149b49eb-7f49-4bc2-b173-70ac5e737d4b\") "
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.442293 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/149b49eb-7f49-4bc2-b173-70ac5e737d4b-combined-ca-bundle\") pod \"149b49eb-7f49-4bc2-b173-70ac5e737d4b\" (UID: \"149b49eb-7f49-4bc2-b173-70ac5e737d4b\") "
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.442451 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b49eb-7f49-4bc2-b173-70ac5e737d4b-logs" (OuterVolumeSpecName: "logs") pod "149b49eb-7f49-4bc2-b173-70ac5e737d4b" (UID: "149b49eb-7f49-4bc2-b173-70ac5e737d4b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.442706 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/149b49eb-7f49-4bc2-b173-70ac5e737d4b-logs\") on node \"crc\" DevicePath \"\""
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.455589 4861 scope.go:117] "RemoveContainer" containerID="2109bce4b28c843a3b3c45239eb93c1af2ddb14d7c314c1a79319d35ede34438"
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.455848 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b49eb-7f49-4bc2-b173-70ac5e737d4b-kube-api-access-zptw5" (OuterVolumeSpecName: "kube-api-access-zptw5") pod "149b49eb-7f49-4bc2-b173-70ac5e737d4b" (UID: "149b49eb-7f49-4bc2-b173-70ac5e737d4b"). InnerVolumeSpecName "kube-api-access-zptw5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.506605 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/149b49eb-7f49-4bc2-b173-70ac5e737d4b-config-data" (OuterVolumeSpecName: "config-data") pod "149b49eb-7f49-4bc2-b173-70ac5e737d4b" (UID: "149b49eb-7f49-4bc2-b173-70ac5e737d4b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.531596 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/149b49eb-7f49-4bc2-b173-70ac5e737d4b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "149b49eb-7f49-4bc2-b173-70ac5e737d4b" (UID: "149b49eb-7f49-4bc2-b173-70ac5e737d4b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.544963 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zptw5\" (UniqueName: \"kubernetes.io/projected/149b49eb-7f49-4bc2-b173-70ac5e737d4b-kube-api-access-zptw5\") on node \"crc\" DevicePath \"\""
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.545011 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/149b49eb-7f49-4bc2-b173-70ac5e737d4b-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.545024 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/149b49eb-7f49-4bc2-b173-70ac5e737d4b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.555584 4861 scope.go:117] "RemoveContainer" containerID="6b1b029f1ec79195a0129e3318983431f1e005f735c7504352d8c72cae673595"
Feb 19 14:48:34 crc kubenswrapper[4861]: E0219 14:48:34.555962 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b1b029f1ec79195a0129e3318983431f1e005f735c7504352d8c72cae673595\": container with ID starting with 6b1b029f1ec79195a0129e3318983431f1e005f735c7504352d8c72cae673595 not found: ID does not exist" containerID="6b1b029f1ec79195a0129e3318983431f1e005f735c7504352d8c72cae673595"
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.556009 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b1b029f1ec79195a0129e3318983431f1e005f735c7504352d8c72cae673595"} err="failed to get container status \"6b1b029f1ec79195a0129e3318983431f1e005f735c7504352d8c72cae673595\": rpc error: code = NotFound desc = could not find container \"6b1b029f1ec79195a0129e3318983431f1e005f735c7504352d8c72cae673595\": container with ID starting with 6b1b029f1ec79195a0129e3318983431f1e005f735c7504352d8c72cae673595 not found: ID does not exist"
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.556038 4861 scope.go:117] "RemoveContainer" containerID="2109bce4b28c843a3b3c45239eb93c1af2ddb14d7c314c1a79319d35ede34438"
Feb 19 14:48:34 crc kubenswrapper[4861]: E0219 14:48:34.556294 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2109bce4b28c843a3b3c45239eb93c1af2ddb14d7c314c1a79319d35ede34438\": container with ID starting with 2109bce4b28c843a3b3c45239eb93c1af2ddb14d7c314c1a79319d35ede34438 not found: ID does not exist" containerID="2109bce4b28c843a3b3c45239eb93c1af2ddb14d7c314c1a79319d35ede34438"
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.556338 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2109bce4b28c843a3b3c45239eb93c1af2ddb14d7c314c1a79319d35ede34438"} err="failed to get container status \"2109bce4b28c843a3b3c45239eb93c1af2ddb14d7c314c1a79319d35ede34438\": rpc error: code = NotFound desc = could not find container \"2109bce4b28c843a3b3c45239eb93c1af2ddb14d7c314c1a79319d35ede34438\": container with ID starting with 2109bce4b28c843a3b3c45239eb93c1af2ddb14d7c314c1a79319d35ede34438 not found: ID does not exist"
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.575585 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/149b49eb-7f49-4bc2-b173-70ac5e737d4b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "149b49eb-7f49-4bc2-b173-70ac5e737d4b" (UID: "149b49eb-7f49-4bc2-b173-70ac5e737d4b"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.646946 4861 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/149b49eb-7f49-4bc2-b173-70ac5e737d4b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.808143 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.824615 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.834637 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 14:48:34 crc kubenswrapper[4861]: E0219 14:48:34.835066 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="149b49eb-7f49-4bc2-b173-70ac5e737d4b" containerName="nova-metadata-log"
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.835083 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="149b49eb-7f49-4bc2-b173-70ac5e737d4b" containerName="nova-metadata-log"
Feb 19 14:48:34 crc kubenswrapper[4861]: E0219 14:48:34.835107 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="149b49eb-7f49-4bc2-b173-70ac5e737d4b" containerName="nova-metadata-metadata"
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.835114 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="149b49eb-7f49-4bc2-b173-70ac5e737d4b" containerName="nova-metadata-metadata"
Feb 19 14:48:34 crc kubenswrapper[4861]: E0219 14:48:34.835142 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a657122f-164e-4978-96c1-b7e9007f76ad" containerName="nova-manage"
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.835149 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a657122f-164e-4978-96c1-b7e9007f76ad" containerName="nova-manage"
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.835319 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a657122f-164e-4978-96c1-b7e9007f76ad" containerName="nova-manage"
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.835337 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="149b49eb-7f49-4bc2-b173-70ac5e737d4b" containerName="nova-metadata-metadata"
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.835349 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="149b49eb-7f49-4bc2-b173-70ac5e737d4b" containerName="nova-metadata-log"
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.836321 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.841542 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.841710 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.843805 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.952142 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb9f00e8-70f3-49da-a7f9-46fecf68a76d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fb9f00e8-70f3-49da-a7f9-46fecf68a76d\") " pod="openstack/nova-metadata-0"
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.952210 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb9f00e8-70f3-49da-a7f9-46fecf68a76d-config-data\") pod \"nova-metadata-0\" (UID: \"fb9f00e8-70f3-49da-a7f9-46fecf68a76d\") " pod="openstack/nova-metadata-0"
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.952332 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb9f00e8-70f3-49da-a7f9-46fecf68a76d-logs\") pod \"nova-metadata-0\" (UID: \"fb9f00e8-70f3-49da-a7f9-46fecf68a76d\") " pod="openstack/nova-metadata-0"
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.952350 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb9f00e8-70f3-49da-a7f9-46fecf68a76d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fb9f00e8-70f3-49da-a7f9-46fecf68a76d\") " pod="openstack/nova-metadata-0"
Feb 19 14:48:34 crc kubenswrapper[4861]: I0219 14:48:34.952400 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhxj9\" (UniqueName: \"kubernetes.io/projected/fb9f00e8-70f3-49da-a7f9-46fecf68a76d-kube-api-access-zhxj9\") pod \"nova-metadata-0\" (UID: \"fb9f00e8-70f3-49da-a7f9-46fecf68a76d\") " pod="openstack/nova-metadata-0"
Feb 19 14:48:35 crc kubenswrapper[4861]: I0219 14:48:35.054145 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb9f00e8-70f3-49da-a7f9-46fecf68a76d-config-data\") pod \"nova-metadata-0\" (UID: \"fb9f00e8-70f3-49da-a7f9-46fecf68a76d\") " pod="openstack/nova-metadata-0"
Feb 19 14:48:35 crc kubenswrapper[4861]: I0219 14:48:35.054225 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb9f00e8-70f3-49da-a7f9-46fecf68a76d-logs\") pod \"nova-metadata-0\" (UID: \"fb9f00e8-70f3-49da-a7f9-46fecf68a76d\") " pod="openstack/nova-metadata-0"
Feb 19 14:48:35 crc kubenswrapper[4861]: I0219 14:48:35.054252 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb9f00e8-70f3-49da-a7f9-46fecf68a76d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fb9f00e8-70f3-49da-a7f9-46fecf68a76d\") " pod="openstack/nova-metadata-0"
Feb 19 14:48:35 crc kubenswrapper[4861]: I0219 14:48:35.054285 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhxj9\" (UniqueName: \"kubernetes.io/projected/fb9f00e8-70f3-49da-a7f9-46fecf68a76d-kube-api-access-zhxj9\") pod \"nova-metadata-0\" (UID: \"fb9f00e8-70f3-49da-a7f9-46fecf68a76d\") " pod="openstack/nova-metadata-0"
Feb 19 14:48:35 crc kubenswrapper[4861]: I0219 14:48:35.054357 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb9f00e8-70f3-49da-a7f9-46fecf68a76d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fb9f00e8-70f3-49da-a7f9-46fecf68a76d\") " pod="openstack/nova-metadata-0"
Feb 19 14:48:35 crc kubenswrapper[4861]: I0219 14:48:35.055587 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb9f00e8-70f3-49da-a7f9-46fecf68a76d-logs\") pod \"nova-metadata-0\" (UID: \"fb9f00e8-70f3-49da-a7f9-46fecf68a76d\") " pod="openstack/nova-metadata-0"
Feb 19 14:48:35 crc kubenswrapper[4861]: I0219 14:48:35.067379 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb9f00e8-70f3-49da-a7f9-46fecf68a76d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fb9f00e8-70f3-49da-a7f9-46fecf68a76d\") " pod="openstack/nova-metadata-0"
Feb 19 14:48:35 crc kubenswrapper[4861]: I0219 14:48:35.067800 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb9f00e8-70f3-49da-a7f9-46fecf68a76d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fb9f00e8-70f3-49da-a7f9-46fecf68a76d\") " pod="openstack/nova-metadata-0"
Feb 19 14:48:35 crc kubenswrapper[4861]: I0219 14:48:35.070397 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb9f00e8-70f3-49da-a7f9-46fecf68a76d-config-data\") pod \"nova-metadata-0\" (UID: \"fb9f00e8-70f3-49da-a7f9-46fecf68a76d\") " pod="openstack/nova-metadata-0"
Feb 19 14:48:35 crc kubenswrapper[4861]: I0219 14:48:35.080822 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhxj9\" (UniqueName: \"kubernetes.io/projected/fb9f00e8-70f3-49da-a7f9-46fecf68a76d-kube-api-access-zhxj9\") pod \"nova-metadata-0\" (UID: \"fb9f00e8-70f3-49da-a7f9-46fecf68a76d\") " pod="openstack/nova-metadata-0"
Feb 19 14:48:35 crc kubenswrapper[4861]: I0219 14:48:35.166164 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 19 14:48:35 crc kubenswrapper[4861]: I0219 14:48:35.685315 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 14:48:35 crc kubenswrapper[4861]: W0219 14:48:35.695626 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb9f00e8_70f3_49da_a7f9_46fecf68a76d.slice/crio-6836e1f469b0cc2c7dfa913b241354c5d53862492093ad939100a34b5675f714 WatchSource:0}: Error finding container 6836e1f469b0cc2c7dfa913b241354c5d53862492093ad939100a34b5675f714: Status 404 returned error can't find the container with id 6836e1f469b0cc2c7dfa913b241354c5d53862492093ad939100a34b5675f714
Feb 19 14:48:35 crc kubenswrapper[4861]: I0219 14:48:35.994989 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b49eb-7f49-4bc2-b173-70ac5e737d4b" path="/var/lib/kubelet/pods/149b49eb-7f49-4bc2-b173-70ac5e737d4b/volumes"
Feb 19 14:48:36 crc kubenswrapper[4861]: I0219 14:48:36.451784 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fb9f00e8-70f3-49da-a7f9-46fecf68a76d","Type":"ContainerStarted","Data":"2ee7872414ad102175614a53fce2d14b1bad996ed101f0adbb7c26850904e7e8"}
Feb 19 14:48:36 crc kubenswrapper[4861]: I0219 14:48:36.451841 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fb9f00e8-70f3-49da-a7f9-46fecf68a76d","Type":"ContainerStarted","Data":"0ae430aca406b076ff240e8e321d34e9f036978501f0873df65e18a181e69a0d"}
Feb 19 14:48:36 crc kubenswrapper[4861]: I0219 14:48:36.451859 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fb9f00e8-70f3-49da-a7f9-46fecf68a76d","Type":"ContainerStarted","Data":"6836e1f469b0cc2c7dfa913b241354c5d53862492093ad939100a34b5675f714"}
Feb 19 14:48:36 crc kubenswrapper[4861]: I0219 14:48:36.485158 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.485133792 podStartE2EDuration="2.485133792s" podCreationTimestamp="2026-02-19 14:48:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:48:36.473347345 +0000 UTC m=+5931.134450573" watchObservedRunningTime="2026-02-19 14:48:36.485133792 +0000 UTC m=+5931.146237050"
Feb 19 14:48:40 crc kubenswrapper[4861]: I0219 14:48:40.167097 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 19 14:48:40 crc kubenswrapper[4861]: I0219 14:48:40.167635 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 19 14:48:42 crc kubenswrapper[4861]: I0219 14:48:42.172685 4861 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podfa46daf6-db14-4383-8df7-79bcbc7e8cac"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podfa46daf6-db14-4383-8df7-79bcbc7e8cac] : Timed out while waiting for systemd to remove kubepods-besteffort-podfa46daf6_db14_4383_8df7_79bcbc7e8cac.slice"
Feb 19 14:48:42 crc kubenswrapper[4861]: I0219 14:48:42.514682 4861 generic.go:334] "Generic (PLEG): container finished" podID="c24c234d-91f1-49c0-84a2-1dc1dc08486e" containerID="9c7481924a221096998113353687850376d0e229375e64ceb566fe73aef681b5" exitCode=137
Feb 19 14:48:42 crc kubenswrapper[4861]: I0219 14:48:42.515060 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c24c234d-91f1-49c0-84a2-1dc1dc08486e","Type":"ContainerDied","Data":"9c7481924a221096998113353687850376d0e229375e64ceb566fe73aef681b5"}
Feb 19 14:48:42 crc kubenswrapper[4861]: I0219 14:48:42.656140 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 19 14:48:42 crc kubenswrapper[4861]: I0219 14:48:42.810230 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5m78\" (UniqueName: \"kubernetes.io/projected/c24c234d-91f1-49c0-84a2-1dc1dc08486e-kube-api-access-f5m78\") pod \"c24c234d-91f1-49c0-84a2-1dc1dc08486e\" (UID: \"c24c234d-91f1-49c0-84a2-1dc1dc08486e\") "
Feb 19 14:48:42 crc kubenswrapper[4861]: I0219 14:48:42.810328 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c24c234d-91f1-49c0-84a2-1dc1dc08486e-combined-ca-bundle\") pod \"c24c234d-91f1-49c0-84a2-1dc1dc08486e\" (UID: \"c24c234d-91f1-49c0-84a2-1dc1dc08486e\") "
Feb 19 14:48:42 crc kubenswrapper[4861]: I0219 14:48:42.810356 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c24c234d-91f1-49c0-84a2-1dc1dc08486e-config-data\") pod \"c24c234d-91f1-49c0-84a2-1dc1dc08486e\" (UID: \"c24c234d-91f1-49c0-84a2-1dc1dc08486e\") "
Feb 19 14:48:42 crc kubenswrapper[4861]: I0219 14:48:42.816366 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c24c234d-91f1-49c0-84a2-1dc1dc08486e-kube-api-access-f5m78" (OuterVolumeSpecName: "kube-api-access-f5m78") pod "c24c234d-91f1-49c0-84a2-1dc1dc08486e" (UID: "c24c234d-91f1-49c0-84a2-1dc1dc08486e"). InnerVolumeSpecName "kube-api-access-f5m78". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 14:48:42 crc kubenswrapper[4861]: I0219 14:48:42.840154 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c24c234d-91f1-49c0-84a2-1dc1dc08486e-config-data" (OuterVolumeSpecName: "config-data") pod "c24c234d-91f1-49c0-84a2-1dc1dc08486e" (UID: "c24c234d-91f1-49c0-84a2-1dc1dc08486e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 14:48:42 crc kubenswrapper[4861]: I0219 14:48:42.854543 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c24c234d-91f1-49c0-84a2-1dc1dc08486e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c24c234d-91f1-49c0-84a2-1dc1dc08486e" (UID: "c24c234d-91f1-49c0-84a2-1dc1dc08486e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 14:48:42 crc kubenswrapper[4861]: I0219 14:48:42.913416 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c24c234d-91f1-49c0-84a2-1dc1dc08486e-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 14:48:42 crc kubenswrapper[4861]: I0219 14:48:42.913497 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5m78\" (UniqueName: \"kubernetes.io/projected/c24c234d-91f1-49c0-84a2-1dc1dc08486e-kube-api-access-f5m78\") on node \"crc\" DevicePath \"\""
Feb 19 14:48:42 crc kubenswrapper[4861]: I0219 14:48:42.913517 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c24c234d-91f1-49c0-84a2-1dc1dc08486e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 14:48:43 crc kubenswrapper[4861]: I0219 14:48:43.487381 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 19 14:48:43 crc kubenswrapper[4861]: I0219 14:48:43.487824 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 19 14:48:43 crc kubenswrapper[4861]: I0219 14:48:43.530341 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c24c234d-91f1-49c0-84a2-1dc1dc08486e","Type":"ContainerDied","Data":"4d373f8d3134556173f090d0d58f49ec79cb77b7cc677e40771d57883bc021d4"}
Feb 19 14:48:43 crc kubenswrapper[4861]: I0219 14:48:43.530507 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 19 14:48:43 crc kubenswrapper[4861]: I0219 14:48:43.530804 4861 scope.go:117] "RemoveContainer" containerID="9c7481924a221096998113353687850376d0e229375e64ceb566fe73aef681b5"
Feb 19 14:48:43 crc kubenswrapper[4861]: I0219 14:48:43.616025 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 19 14:48:43 crc kubenswrapper[4861]: I0219 14:48:43.634793 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 19 14:48:43 crc kubenswrapper[4861]: I0219 14:48:43.652599 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 19 14:48:43 crc kubenswrapper[4861]: E0219 14:48:43.653165 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c24c234d-91f1-49c0-84a2-1dc1dc08486e" containerName="nova-scheduler-scheduler"
Feb 19 14:48:43 crc kubenswrapper[4861]: I0219 14:48:43.653186 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c24c234d-91f1-49c0-84a2-1dc1dc08486e" containerName="nova-scheduler-scheduler"
Feb 19 14:48:43 crc kubenswrapper[4861]: I0219 14:48:43.653447 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="c24c234d-91f1-49c0-84a2-1dc1dc08486e" containerName="nova-scheduler-scheduler"
Feb 19 14:48:43 crc kubenswrapper[4861]: I0219 14:48:43.654254 4861 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 14:48:43 crc kubenswrapper[4861]: I0219 14:48:43.658163 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 14:48:43 crc kubenswrapper[4861]: I0219 14:48:43.664887 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 14:48:43 crc kubenswrapper[4861]: I0219 14:48:43.833782 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5r6d\" (UniqueName: \"kubernetes.io/projected/4fd2c176-d104-4058-9b92-db8937b2fa68-kube-api-access-b5r6d\") pod \"nova-scheduler-0\" (UID: \"4fd2c176-d104-4058-9b92-db8937b2fa68\") " pod="openstack/nova-scheduler-0" Feb 19 14:48:43 crc kubenswrapper[4861]: I0219 14:48:43.833855 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fd2c176-d104-4058-9b92-db8937b2fa68-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4fd2c176-d104-4058-9b92-db8937b2fa68\") " pod="openstack/nova-scheduler-0" Feb 19 14:48:43 crc kubenswrapper[4861]: I0219 14:48:43.833891 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fd2c176-d104-4058-9b92-db8937b2fa68-config-data\") pod \"nova-scheduler-0\" (UID: \"4fd2c176-d104-4058-9b92-db8937b2fa68\") " pod="openstack/nova-scheduler-0" Feb 19 14:48:43 crc kubenswrapper[4861]: I0219 14:48:43.936108 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5r6d\" (UniqueName: \"kubernetes.io/projected/4fd2c176-d104-4058-9b92-db8937b2fa68-kube-api-access-b5r6d\") pod \"nova-scheduler-0\" (UID: \"4fd2c176-d104-4058-9b92-db8937b2fa68\") " pod="openstack/nova-scheduler-0" Feb 19 14:48:43 crc kubenswrapper[4861]: I0219 14:48:43.936176 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fd2c176-d104-4058-9b92-db8937b2fa68-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4fd2c176-d104-4058-9b92-db8937b2fa68\") " pod="openstack/nova-scheduler-0" Feb 19 14:48:43 crc kubenswrapper[4861]: I0219 14:48:43.936208 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fd2c176-d104-4058-9b92-db8937b2fa68-config-data\") pod \"nova-scheduler-0\" (UID: \"4fd2c176-d104-4058-9b92-db8937b2fa68\") " pod="openstack/nova-scheduler-0" Feb 19 14:48:43 crc kubenswrapper[4861]: I0219 14:48:43.941224 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fd2c176-d104-4058-9b92-db8937b2fa68-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4fd2c176-d104-4058-9b92-db8937b2fa68\") " pod="openstack/nova-scheduler-0" Feb 19 14:48:43 crc kubenswrapper[4861]: I0219 14:48:43.944172 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fd2c176-d104-4058-9b92-db8937b2fa68-config-data\") pod \"nova-scheduler-0\" (UID: \"4fd2c176-d104-4058-9b92-db8937b2fa68\") " pod="openstack/nova-scheduler-0" Feb 19 14:48:43 crc kubenswrapper[4861]: I0219 14:48:43.957198 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5r6d\" (UniqueName: \"kubernetes.io/projected/4fd2c176-d104-4058-9b92-db8937b2fa68-kube-api-access-b5r6d\") pod \"nova-scheduler-0\" (UID: \"4fd2c176-d104-4058-9b92-db8937b2fa68\") " pod="openstack/nova-scheduler-0" Feb 19 14:48:43 crc kubenswrapper[4861]: I0219 14:48:43.980973 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 14:48:43 crc kubenswrapper[4861]: I0219 14:48:43.992486 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c24c234d-91f1-49c0-84a2-1dc1dc08486e" path="/var/lib/kubelet/pods/c24c234d-91f1-49c0-84a2-1dc1dc08486e/volumes" Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.514909 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.518945 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.550709 4861 generic.go:334] "Generic (PLEG): container finished" podID="dc5e14f3-1dad-4d5c-b664-517021316094" containerID="3ee6b4299a5b7b3c654e792d81d39c8d8188368a39b1b421fa05469018c5c5b5" exitCode=0 Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.551127 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dc5e14f3-1dad-4d5c-b664-517021316094","Type":"ContainerDied","Data":"3ee6b4299a5b7b3c654e792d81d39c8d8188368a39b1b421fa05469018c5c5b5"} Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.551161 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dc5e14f3-1dad-4d5c-b664-517021316094","Type":"ContainerDied","Data":"8efb963d9049990815af76a4e36b41bd946e14ef1007209654e92df6dfbadd7b"} Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.551181 4861 scope.go:117] "RemoveContainer" containerID="3ee6b4299a5b7b3c654e792d81d39c8d8188368a39b1b421fa05469018c5c5b5" Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.551312 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.562833 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4fd2c176-d104-4058-9b92-db8937b2fa68","Type":"ContainerStarted","Data":"725cbd0e4def07ed88a7337b1b741da9c0059d9a1b46457635898ef61c518e02"} Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.610170 4861 scope.go:117] "RemoveContainer" containerID="901a4a31a5168e1fde27284a3f543cb03cd1bb3e4f93203a467464301db9bad3" Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.651812 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc5e14f3-1dad-4d5c-b664-517021316094-logs\") pod \"dc5e14f3-1dad-4d5c-b664-517021316094\" (UID: \"dc5e14f3-1dad-4d5c-b664-517021316094\") " Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.651900 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc5e14f3-1dad-4d5c-b664-517021316094-combined-ca-bundle\") pod \"dc5e14f3-1dad-4d5c-b664-517021316094\" (UID: \"dc5e14f3-1dad-4d5c-b664-517021316094\") " Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.652084 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktrfs\" (UniqueName: \"kubernetes.io/projected/dc5e14f3-1dad-4d5c-b664-517021316094-kube-api-access-ktrfs\") pod \"dc5e14f3-1dad-4d5c-b664-517021316094\" (UID: \"dc5e14f3-1dad-4d5c-b664-517021316094\") " Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.652134 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc5e14f3-1dad-4d5c-b664-517021316094-config-data\") pod \"dc5e14f3-1dad-4d5c-b664-517021316094\" (UID: \"dc5e14f3-1dad-4d5c-b664-517021316094\") " Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 
14:48:44.652347 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc5e14f3-1dad-4d5c-b664-517021316094-logs" (OuterVolumeSpecName: "logs") pod "dc5e14f3-1dad-4d5c-b664-517021316094" (UID: "dc5e14f3-1dad-4d5c-b664-517021316094"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.652766 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc5e14f3-1dad-4d5c-b664-517021316094-logs\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.653334 4861 scope.go:117] "RemoveContainer" containerID="3ee6b4299a5b7b3c654e792d81d39c8d8188368a39b1b421fa05469018c5c5b5" Feb 19 14:48:44 crc kubenswrapper[4861]: E0219 14:48:44.654620 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ee6b4299a5b7b3c654e792d81d39c8d8188368a39b1b421fa05469018c5c5b5\": container with ID starting with 3ee6b4299a5b7b3c654e792d81d39c8d8188368a39b1b421fa05469018c5c5b5 not found: ID does not exist" containerID="3ee6b4299a5b7b3c654e792d81d39c8d8188368a39b1b421fa05469018c5c5b5" Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.654679 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ee6b4299a5b7b3c654e792d81d39c8d8188368a39b1b421fa05469018c5c5b5"} err="failed to get container status \"3ee6b4299a5b7b3c654e792d81d39c8d8188368a39b1b421fa05469018c5c5b5\": rpc error: code = NotFound desc = could not find container \"3ee6b4299a5b7b3c654e792d81d39c8d8188368a39b1b421fa05469018c5c5b5\": container with ID starting with 3ee6b4299a5b7b3c654e792d81d39c8d8188368a39b1b421fa05469018c5c5b5 not found: ID does not exist" Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.654713 4861 scope.go:117] "RemoveContainer" 
containerID="901a4a31a5168e1fde27284a3f543cb03cd1bb3e4f93203a467464301db9bad3" Feb 19 14:48:44 crc kubenswrapper[4861]: E0219 14:48:44.655054 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"901a4a31a5168e1fde27284a3f543cb03cd1bb3e4f93203a467464301db9bad3\": container with ID starting with 901a4a31a5168e1fde27284a3f543cb03cd1bb3e4f93203a467464301db9bad3 not found: ID does not exist" containerID="901a4a31a5168e1fde27284a3f543cb03cd1bb3e4f93203a467464301db9bad3" Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.655158 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"901a4a31a5168e1fde27284a3f543cb03cd1bb3e4f93203a467464301db9bad3"} err="failed to get container status \"901a4a31a5168e1fde27284a3f543cb03cd1bb3e4f93203a467464301db9bad3\": rpc error: code = NotFound desc = could not find container \"901a4a31a5168e1fde27284a3f543cb03cd1bb3e4f93203a467464301db9bad3\": container with ID starting with 901a4a31a5168e1fde27284a3f543cb03cd1bb3e4f93203a467464301db9bad3 not found: ID does not exist" Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.656841 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc5e14f3-1dad-4d5c-b664-517021316094-kube-api-access-ktrfs" (OuterVolumeSpecName: "kube-api-access-ktrfs") pod "dc5e14f3-1dad-4d5c-b664-517021316094" (UID: "dc5e14f3-1dad-4d5c-b664-517021316094"). InnerVolumeSpecName "kube-api-access-ktrfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.688259 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc5e14f3-1dad-4d5c-b664-517021316094-config-data" (OuterVolumeSpecName: "config-data") pod "dc5e14f3-1dad-4d5c-b664-517021316094" (UID: "dc5e14f3-1dad-4d5c-b664-517021316094"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.715617 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc5e14f3-1dad-4d5c-b664-517021316094-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc5e14f3-1dad-4d5c-b664-517021316094" (UID: "dc5e14f3-1dad-4d5c-b664-517021316094"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.754523 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktrfs\" (UniqueName: \"kubernetes.io/projected/dc5e14f3-1dad-4d5c-b664-517021316094-kube-api-access-ktrfs\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.754566 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc5e14f3-1dad-4d5c-b664-517021316094-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.754578 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc5e14f3-1dad-4d5c-b664-517021316094-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.896220 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.907036 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.926557 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 14:48:44 crc kubenswrapper[4861]: E0219 14:48:44.927156 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc5e14f3-1dad-4d5c-b664-517021316094" containerName="nova-api-api" Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 
14:48:44.927188 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc5e14f3-1dad-4d5c-b664-517021316094" containerName="nova-api-api" Feb 19 14:48:44 crc kubenswrapper[4861]: E0219 14:48:44.927236 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc5e14f3-1dad-4d5c-b664-517021316094" containerName="nova-api-log" Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.927249 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc5e14f3-1dad-4d5c-b664-517021316094" containerName="nova-api-log" Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.927596 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc5e14f3-1dad-4d5c-b664-517021316094" containerName="nova-api-api" Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.927637 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc5e14f3-1dad-4d5c-b664-517021316094" containerName="nova-api-log" Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.929292 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.932847 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 14:48:44 crc kubenswrapper[4861]: I0219 14:48:44.941949 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 14:48:45 crc kubenswrapper[4861]: I0219 14:48:45.060255 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbbdd\" (UniqueName: \"kubernetes.io/projected/8002e2b6-32e4-444f-aa40-bb2bcef9dc12-kube-api-access-pbbdd\") pod \"nova-api-0\" (UID: \"8002e2b6-32e4-444f-aa40-bb2bcef9dc12\") " pod="openstack/nova-api-0" Feb 19 14:48:45 crc kubenswrapper[4861]: I0219 14:48:45.060672 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8002e2b6-32e4-444f-aa40-bb2bcef9dc12-config-data\") pod \"nova-api-0\" (UID: \"8002e2b6-32e4-444f-aa40-bb2bcef9dc12\") " pod="openstack/nova-api-0" Feb 19 14:48:45 crc kubenswrapper[4861]: I0219 14:48:45.060695 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8002e2b6-32e4-444f-aa40-bb2bcef9dc12-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8002e2b6-32e4-444f-aa40-bb2bcef9dc12\") " pod="openstack/nova-api-0" Feb 19 14:48:45 crc kubenswrapper[4861]: I0219 14:48:45.060739 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8002e2b6-32e4-444f-aa40-bb2bcef9dc12-logs\") pod \"nova-api-0\" (UID: \"8002e2b6-32e4-444f-aa40-bb2bcef9dc12\") " pod="openstack/nova-api-0" Feb 19 14:48:45 crc kubenswrapper[4861]: I0219 14:48:45.162404 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-pbbdd\" (UniqueName: \"kubernetes.io/projected/8002e2b6-32e4-444f-aa40-bb2bcef9dc12-kube-api-access-pbbdd\") pod \"nova-api-0\" (UID: \"8002e2b6-32e4-444f-aa40-bb2bcef9dc12\") " pod="openstack/nova-api-0" Feb 19 14:48:45 crc kubenswrapper[4861]: I0219 14:48:45.162916 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8002e2b6-32e4-444f-aa40-bb2bcef9dc12-config-data\") pod \"nova-api-0\" (UID: \"8002e2b6-32e4-444f-aa40-bb2bcef9dc12\") " pod="openstack/nova-api-0" Feb 19 14:48:45 crc kubenswrapper[4861]: I0219 14:48:45.163051 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8002e2b6-32e4-444f-aa40-bb2bcef9dc12-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8002e2b6-32e4-444f-aa40-bb2bcef9dc12\") " pod="openstack/nova-api-0" Feb 19 14:48:45 crc kubenswrapper[4861]: I0219 14:48:45.163222 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8002e2b6-32e4-444f-aa40-bb2bcef9dc12-logs\") pod \"nova-api-0\" (UID: \"8002e2b6-32e4-444f-aa40-bb2bcef9dc12\") " pod="openstack/nova-api-0" Feb 19 14:48:45 crc kubenswrapper[4861]: I0219 14:48:45.164961 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8002e2b6-32e4-444f-aa40-bb2bcef9dc12-logs\") pod \"nova-api-0\" (UID: \"8002e2b6-32e4-444f-aa40-bb2bcef9dc12\") " pod="openstack/nova-api-0" Feb 19 14:48:45 crc kubenswrapper[4861]: I0219 14:48:45.166967 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 14:48:45 crc kubenswrapper[4861]: I0219 14:48:45.167147 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 14:48:45 crc kubenswrapper[4861]: I0219 14:48:45.179389 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8002e2b6-32e4-444f-aa40-bb2bcef9dc12-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8002e2b6-32e4-444f-aa40-bb2bcef9dc12\") " pod="openstack/nova-api-0" Feb 19 14:48:45 crc kubenswrapper[4861]: I0219 14:48:45.195052 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8002e2b6-32e4-444f-aa40-bb2bcef9dc12-config-data\") pod \"nova-api-0\" (UID: \"8002e2b6-32e4-444f-aa40-bb2bcef9dc12\") " pod="openstack/nova-api-0" Feb 19 14:48:45 crc kubenswrapper[4861]: I0219 14:48:45.201897 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbbdd\" (UniqueName: \"kubernetes.io/projected/8002e2b6-32e4-444f-aa40-bb2bcef9dc12-kube-api-access-pbbdd\") pod \"nova-api-0\" (UID: \"8002e2b6-32e4-444f-aa40-bb2bcef9dc12\") " pod="openstack/nova-api-0" Feb 19 14:48:45 crc kubenswrapper[4861]: I0219 14:48:45.320231 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 14:48:45 crc kubenswrapper[4861]: I0219 14:48:45.577096 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4fd2c176-d104-4058-9b92-db8937b2fa68","Type":"ContainerStarted","Data":"a2177b031787bc98b3bc35f9f66446d8152e849662832fd08586343692499599"} Feb 19 14:48:45 crc kubenswrapper[4861]: I0219 14:48:45.610930 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.61089746 podStartE2EDuration="2.61089746s" podCreationTimestamp="2026-02-19 14:48:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:48:45.600341596 +0000 UTC m=+5940.261444824" watchObservedRunningTime="2026-02-19 14:48:45.61089746 +0000 UTC m=+5940.272000698" Feb 19 14:48:45 crc kubenswrapper[4861]: I0219 14:48:45.819255 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 14:48:46 crc kubenswrapper[4861]: I0219 14:48:46.008815 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc5e14f3-1dad-4d5c-b664-517021316094" path="/var/lib/kubelet/pods/dc5e14f3-1dad-4d5c-b664-517021316094/volumes" Feb 19 14:48:46 crc kubenswrapper[4861]: I0219 14:48:46.184856 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fb9f00e8-70f3-49da-a7f9-46fecf68a76d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.98:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 14:48:46 crc kubenswrapper[4861]: I0219 14:48:46.184874 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fb9f00e8-70f3-49da-a7f9-46fecf68a76d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.98:8775/\": net/http: request 
canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 14:48:46 crc kubenswrapper[4861]: I0219 14:48:46.598065 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8002e2b6-32e4-444f-aa40-bb2bcef9dc12","Type":"ContainerStarted","Data":"dd393c52fffdc5d45c1d21b9397a157d1a621bcde232ab79baa291d40c166730"} Feb 19 14:48:46 crc kubenswrapper[4861]: I0219 14:48:46.598127 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8002e2b6-32e4-444f-aa40-bb2bcef9dc12","Type":"ContainerStarted","Data":"88eaad6662ce5f1eb021be562dbb17f4ed1618e7924f1f00c5152f219dba8a60"} Feb 19 14:48:46 crc kubenswrapper[4861]: I0219 14:48:46.598148 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8002e2b6-32e4-444f-aa40-bb2bcef9dc12","Type":"ContainerStarted","Data":"899bd85921d1d38597ab539763017f4a1d4294fb14da353e303939f0b600d967"} Feb 19 14:48:46 crc kubenswrapper[4861]: I0219 14:48:46.621000 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.620978725 podStartE2EDuration="2.620978725s" podCreationTimestamp="2026-02-19 14:48:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:48:46.617944384 +0000 UTC m=+5941.279047612" watchObservedRunningTime="2026-02-19 14:48:46.620978725 +0000 UTC m=+5941.282081973" Feb 19 14:48:48 crc kubenswrapper[4861]: I0219 14:48:48.981532 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 14:48:53 crc kubenswrapper[4861]: I0219 14:48:53.996846 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 14:48:54 crc kubenswrapper[4861]: I0219 14:48:54.038576 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-scheduler-0" Feb 19 14:48:54 crc kubenswrapper[4861]: I0219 14:48:54.741665 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 14:48:55 crc kubenswrapper[4861]: I0219 14:48:55.175137 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 14:48:55 crc kubenswrapper[4861]: I0219 14:48:55.180217 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 14:48:55 crc kubenswrapper[4861]: I0219 14:48:55.186279 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 14:48:55 crc kubenswrapper[4861]: I0219 14:48:55.321963 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 14:48:55 crc kubenswrapper[4861]: I0219 14:48:55.322499 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 14:48:55 crc kubenswrapper[4861]: I0219 14:48:55.706665 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 14:48:56 crc kubenswrapper[4861]: I0219 14:48:56.404644 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8002e2b6-32e4-444f-aa40-bb2bcef9dc12" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.100:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 14:48:56 crc kubenswrapper[4861]: I0219 14:48:56.404639 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8002e2b6-32e4-444f-aa40-bb2bcef9dc12" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.100:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 14:49:03 crc kubenswrapper[4861]: I0219 14:49:03.834501 
4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 14:49:03 crc kubenswrapper[4861]: I0219 14:49:03.835168 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 14:49:05 crc kubenswrapper[4861]: I0219 14:49:05.326434 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 14:49:05 crc kubenswrapper[4861]: I0219 14:49:05.327009 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 14:49:05 crc kubenswrapper[4861]: I0219 14:49:05.330204 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 14:49:05 crc kubenswrapper[4861]: I0219 14:49:05.345216 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 14:49:05 crc kubenswrapper[4861]: I0219 14:49:05.831026 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 14:49:05 crc kubenswrapper[4861]: I0219 14:49:05.833995 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 14:49:06 crc kubenswrapper[4861]: I0219 14:49:06.021128 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79985d99f7-gzt8f"] Feb 19 14:49:06 crc kubenswrapper[4861]: I0219 14:49:06.022843 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79985d99f7-gzt8f" Feb 19 14:49:06 crc kubenswrapper[4861]: I0219 14:49:06.057165 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79985d99f7-gzt8f"] Feb 19 14:49:06 crc kubenswrapper[4861]: I0219 14:49:06.176094 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9dea5b00-6936-48f1-a8aa-aea402e7a2ce-dns-svc\") pod \"dnsmasq-dns-79985d99f7-gzt8f\" (UID: \"9dea5b00-6936-48f1-a8aa-aea402e7a2ce\") " pod="openstack/dnsmasq-dns-79985d99f7-gzt8f" Feb 19 14:49:06 crc kubenswrapper[4861]: I0219 14:49:06.176195 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9dea5b00-6936-48f1-a8aa-aea402e7a2ce-ovsdbserver-nb\") pod \"dnsmasq-dns-79985d99f7-gzt8f\" (UID: \"9dea5b00-6936-48f1-a8aa-aea402e7a2ce\") " pod="openstack/dnsmasq-dns-79985d99f7-gzt8f" Feb 19 14:49:06 crc kubenswrapper[4861]: I0219 14:49:06.176228 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j67wj\" (UniqueName: \"kubernetes.io/projected/9dea5b00-6936-48f1-a8aa-aea402e7a2ce-kube-api-access-j67wj\") pod \"dnsmasq-dns-79985d99f7-gzt8f\" (UID: \"9dea5b00-6936-48f1-a8aa-aea402e7a2ce\") " pod="openstack/dnsmasq-dns-79985d99f7-gzt8f" Feb 19 14:49:06 crc kubenswrapper[4861]: I0219 14:49:06.176279 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dea5b00-6936-48f1-a8aa-aea402e7a2ce-config\") pod \"dnsmasq-dns-79985d99f7-gzt8f\" (UID: \"9dea5b00-6936-48f1-a8aa-aea402e7a2ce\") " pod="openstack/dnsmasq-dns-79985d99f7-gzt8f" Feb 19 14:49:06 crc kubenswrapper[4861]: I0219 14:49:06.176302 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9dea5b00-6936-48f1-a8aa-aea402e7a2ce-ovsdbserver-sb\") pod \"dnsmasq-dns-79985d99f7-gzt8f\" (UID: \"9dea5b00-6936-48f1-a8aa-aea402e7a2ce\") " pod="openstack/dnsmasq-dns-79985d99f7-gzt8f" Feb 19 14:49:06 crc kubenswrapper[4861]: I0219 14:49:06.278237 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dea5b00-6936-48f1-a8aa-aea402e7a2ce-config\") pod \"dnsmasq-dns-79985d99f7-gzt8f\" (UID: \"9dea5b00-6936-48f1-a8aa-aea402e7a2ce\") " pod="openstack/dnsmasq-dns-79985d99f7-gzt8f" Feb 19 14:49:06 crc kubenswrapper[4861]: I0219 14:49:06.278306 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9dea5b00-6936-48f1-a8aa-aea402e7a2ce-ovsdbserver-sb\") pod \"dnsmasq-dns-79985d99f7-gzt8f\" (UID: \"9dea5b00-6936-48f1-a8aa-aea402e7a2ce\") " pod="openstack/dnsmasq-dns-79985d99f7-gzt8f" Feb 19 14:49:06 crc kubenswrapper[4861]: I0219 14:49:06.278408 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9dea5b00-6936-48f1-a8aa-aea402e7a2ce-dns-svc\") pod \"dnsmasq-dns-79985d99f7-gzt8f\" (UID: \"9dea5b00-6936-48f1-a8aa-aea402e7a2ce\") " pod="openstack/dnsmasq-dns-79985d99f7-gzt8f" Feb 19 14:49:06 crc kubenswrapper[4861]: I0219 14:49:06.278504 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9dea5b00-6936-48f1-a8aa-aea402e7a2ce-ovsdbserver-nb\") pod \"dnsmasq-dns-79985d99f7-gzt8f\" (UID: \"9dea5b00-6936-48f1-a8aa-aea402e7a2ce\") " pod="openstack/dnsmasq-dns-79985d99f7-gzt8f" Feb 19 14:49:06 crc kubenswrapper[4861]: I0219 14:49:06.278543 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j67wj\" (UniqueName: 
\"kubernetes.io/projected/9dea5b00-6936-48f1-a8aa-aea402e7a2ce-kube-api-access-j67wj\") pod \"dnsmasq-dns-79985d99f7-gzt8f\" (UID: \"9dea5b00-6936-48f1-a8aa-aea402e7a2ce\") " pod="openstack/dnsmasq-dns-79985d99f7-gzt8f" Feb 19 14:49:06 crc kubenswrapper[4861]: I0219 14:49:06.279893 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dea5b00-6936-48f1-a8aa-aea402e7a2ce-config\") pod \"dnsmasq-dns-79985d99f7-gzt8f\" (UID: \"9dea5b00-6936-48f1-a8aa-aea402e7a2ce\") " pod="openstack/dnsmasq-dns-79985d99f7-gzt8f" Feb 19 14:49:06 crc kubenswrapper[4861]: I0219 14:49:06.281134 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9dea5b00-6936-48f1-a8aa-aea402e7a2ce-dns-svc\") pod \"dnsmasq-dns-79985d99f7-gzt8f\" (UID: \"9dea5b00-6936-48f1-a8aa-aea402e7a2ce\") " pod="openstack/dnsmasq-dns-79985d99f7-gzt8f" Feb 19 14:49:06 crc kubenswrapper[4861]: I0219 14:49:06.281356 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9dea5b00-6936-48f1-a8aa-aea402e7a2ce-ovsdbserver-sb\") pod \"dnsmasq-dns-79985d99f7-gzt8f\" (UID: \"9dea5b00-6936-48f1-a8aa-aea402e7a2ce\") " pod="openstack/dnsmasq-dns-79985d99f7-gzt8f" Feb 19 14:49:06 crc kubenswrapper[4861]: I0219 14:49:06.281395 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9dea5b00-6936-48f1-a8aa-aea402e7a2ce-ovsdbserver-nb\") pod \"dnsmasq-dns-79985d99f7-gzt8f\" (UID: \"9dea5b00-6936-48f1-a8aa-aea402e7a2ce\") " pod="openstack/dnsmasq-dns-79985d99f7-gzt8f" Feb 19 14:49:06 crc kubenswrapper[4861]: I0219 14:49:06.303894 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j67wj\" (UniqueName: \"kubernetes.io/projected/9dea5b00-6936-48f1-a8aa-aea402e7a2ce-kube-api-access-j67wj\") pod 
\"dnsmasq-dns-79985d99f7-gzt8f\" (UID: \"9dea5b00-6936-48f1-a8aa-aea402e7a2ce\") " pod="openstack/dnsmasq-dns-79985d99f7-gzt8f" Feb 19 14:49:06 crc kubenswrapper[4861]: I0219 14:49:06.365940 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79985d99f7-gzt8f" Feb 19 14:49:06 crc kubenswrapper[4861]: W0219 14:49:06.921732 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9dea5b00_6936_48f1_a8aa_aea402e7a2ce.slice/crio-dbf413750f026f5426032f6410fb5515d65140f42a5f3e2b337b7c177143cece WatchSource:0}: Error finding container dbf413750f026f5426032f6410fb5515d65140f42a5f3e2b337b7c177143cece: Status 404 returned error can't find the container with id dbf413750f026f5426032f6410fb5515d65140f42a5f3e2b337b7c177143cece Feb 19 14:49:06 crc kubenswrapper[4861]: I0219 14:49:06.922324 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79985d99f7-gzt8f"] Feb 19 14:49:07 crc kubenswrapper[4861]: I0219 14:49:07.848292 4861 generic.go:334] "Generic (PLEG): container finished" podID="9dea5b00-6936-48f1-a8aa-aea402e7a2ce" containerID="1f7deaefc3dbefd71372fcfe106d7844e7551a3175824504064e614a65d6df66" exitCode=0 Feb 19 14:49:07 crc kubenswrapper[4861]: I0219 14:49:07.850651 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79985d99f7-gzt8f" event={"ID":"9dea5b00-6936-48f1-a8aa-aea402e7a2ce","Type":"ContainerDied","Data":"1f7deaefc3dbefd71372fcfe106d7844e7551a3175824504064e614a65d6df66"} Feb 19 14:49:07 crc kubenswrapper[4861]: I0219 14:49:07.850691 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79985d99f7-gzt8f" event={"ID":"9dea5b00-6936-48f1-a8aa-aea402e7a2ce","Type":"ContainerStarted","Data":"dbf413750f026f5426032f6410fb5515d65140f42a5f3e2b337b7c177143cece"} Feb 19 14:49:08 crc kubenswrapper[4861]: I0219 14:49:08.864044 4861 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-79985d99f7-gzt8f" event={"ID":"9dea5b00-6936-48f1-a8aa-aea402e7a2ce","Type":"ContainerStarted","Data":"c1e95a226e0b3a44ae9147da639ea26164321a982554cb366f5cfafde6131be7"} Feb 19 14:49:08 crc kubenswrapper[4861]: I0219 14:49:08.864677 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79985d99f7-gzt8f" Feb 19 14:49:08 crc kubenswrapper[4861]: I0219 14:49:08.903394 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79985d99f7-gzt8f" podStartSLOduration=3.903367958 podStartE2EDuration="3.903367958s" podCreationTimestamp="2026-02-19 14:49:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:49:08.893110213 +0000 UTC m=+5963.554213441" watchObservedRunningTime="2026-02-19 14:49:08.903367958 +0000 UTC m=+5963.564471196" Feb 19 14:49:09 crc kubenswrapper[4861]: I0219 14:49:09.372343 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 14:49:09 crc kubenswrapper[4861]: I0219 14:49:09.373022 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8002e2b6-32e4-444f-aa40-bb2bcef9dc12" containerName="nova-api-log" containerID="cri-o://88eaad6662ce5f1eb021be562dbb17f4ed1618e7924f1f00c5152f219dba8a60" gracePeriod=30 Feb 19 14:49:09 crc kubenswrapper[4861]: I0219 14:49:09.373129 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8002e2b6-32e4-444f-aa40-bb2bcef9dc12" containerName="nova-api-api" containerID="cri-o://dd393c52fffdc5d45c1d21b9397a157d1a621bcde232ab79baa291d40c166730" gracePeriod=30 Feb 19 14:49:09 crc kubenswrapper[4861]: I0219 14:49:09.875168 4861 generic.go:334] "Generic (PLEG): container finished" podID="8002e2b6-32e4-444f-aa40-bb2bcef9dc12" 
containerID="88eaad6662ce5f1eb021be562dbb17f4ed1618e7924f1f00c5152f219dba8a60" exitCode=143 Feb 19 14:49:09 crc kubenswrapper[4861]: I0219 14:49:09.876322 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8002e2b6-32e4-444f-aa40-bb2bcef9dc12","Type":"ContainerDied","Data":"88eaad6662ce5f1eb021be562dbb17f4ed1618e7924f1f00c5152f219dba8a60"} Feb 19 14:49:12 crc kubenswrapper[4861]: I0219 14:49:12.928738 4861 generic.go:334] "Generic (PLEG): container finished" podID="8002e2b6-32e4-444f-aa40-bb2bcef9dc12" containerID="dd393c52fffdc5d45c1d21b9397a157d1a621bcde232ab79baa291d40c166730" exitCode=0 Feb 19 14:49:12 crc kubenswrapper[4861]: I0219 14:49:12.928816 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8002e2b6-32e4-444f-aa40-bb2bcef9dc12","Type":"ContainerDied","Data":"dd393c52fffdc5d45c1d21b9397a157d1a621bcde232ab79baa291d40c166730"} Feb 19 14:49:13 crc kubenswrapper[4861]: I0219 14:49:13.124943 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 14:49:13 crc kubenswrapper[4861]: I0219 14:49:13.216951 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbbdd\" (UniqueName: \"kubernetes.io/projected/8002e2b6-32e4-444f-aa40-bb2bcef9dc12-kube-api-access-pbbdd\") pod \"8002e2b6-32e4-444f-aa40-bb2bcef9dc12\" (UID: \"8002e2b6-32e4-444f-aa40-bb2bcef9dc12\") " Feb 19 14:49:13 crc kubenswrapper[4861]: I0219 14:49:13.217089 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8002e2b6-32e4-444f-aa40-bb2bcef9dc12-combined-ca-bundle\") pod \"8002e2b6-32e4-444f-aa40-bb2bcef9dc12\" (UID: \"8002e2b6-32e4-444f-aa40-bb2bcef9dc12\") " Feb 19 14:49:13 crc kubenswrapper[4861]: I0219 14:49:13.217127 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8002e2b6-32e4-444f-aa40-bb2bcef9dc12-logs\") pod \"8002e2b6-32e4-444f-aa40-bb2bcef9dc12\" (UID: \"8002e2b6-32e4-444f-aa40-bb2bcef9dc12\") " Feb 19 14:49:13 crc kubenswrapper[4861]: I0219 14:49:13.217183 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8002e2b6-32e4-444f-aa40-bb2bcef9dc12-config-data\") pod \"8002e2b6-32e4-444f-aa40-bb2bcef9dc12\" (UID: \"8002e2b6-32e4-444f-aa40-bb2bcef9dc12\") " Feb 19 14:49:13 crc kubenswrapper[4861]: I0219 14:49:13.218005 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8002e2b6-32e4-444f-aa40-bb2bcef9dc12-logs" (OuterVolumeSpecName: "logs") pod "8002e2b6-32e4-444f-aa40-bb2bcef9dc12" (UID: "8002e2b6-32e4-444f-aa40-bb2bcef9dc12"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:49:13 crc kubenswrapper[4861]: I0219 14:49:13.228938 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8002e2b6-32e4-444f-aa40-bb2bcef9dc12-kube-api-access-pbbdd" (OuterVolumeSpecName: "kube-api-access-pbbdd") pod "8002e2b6-32e4-444f-aa40-bb2bcef9dc12" (UID: "8002e2b6-32e4-444f-aa40-bb2bcef9dc12"). InnerVolumeSpecName "kube-api-access-pbbdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:49:13 crc kubenswrapper[4861]: I0219 14:49:13.262836 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8002e2b6-32e4-444f-aa40-bb2bcef9dc12-config-data" (OuterVolumeSpecName: "config-data") pod "8002e2b6-32e4-444f-aa40-bb2bcef9dc12" (UID: "8002e2b6-32e4-444f-aa40-bb2bcef9dc12"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:49:13 crc kubenswrapper[4861]: I0219 14:49:13.267566 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8002e2b6-32e4-444f-aa40-bb2bcef9dc12-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8002e2b6-32e4-444f-aa40-bb2bcef9dc12" (UID: "8002e2b6-32e4-444f-aa40-bb2bcef9dc12"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:49:13 crc kubenswrapper[4861]: I0219 14:49:13.319035 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbbdd\" (UniqueName: \"kubernetes.io/projected/8002e2b6-32e4-444f-aa40-bb2bcef9dc12-kube-api-access-pbbdd\") on node \"crc\" DevicePath \"\"" Feb 19 14:49:13 crc kubenswrapper[4861]: I0219 14:49:13.319068 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8002e2b6-32e4-444f-aa40-bb2bcef9dc12-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:49:13 crc kubenswrapper[4861]: I0219 14:49:13.319078 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8002e2b6-32e4-444f-aa40-bb2bcef9dc12-logs\") on node \"crc\" DevicePath \"\"" Feb 19 14:49:13 crc kubenswrapper[4861]: I0219 14:49:13.319087 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8002e2b6-32e4-444f-aa40-bb2bcef9dc12-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:49:13 crc kubenswrapper[4861]: I0219 14:49:13.940966 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8002e2b6-32e4-444f-aa40-bb2bcef9dc12","Type":"ContainerDied","Data":"899bd85921d1d38597ab539763017f4a1d4294fb14da353e303939f0b600d967"} Feb 19 14:49:13 crc kubenswrapper[4861]: I0219 14:49:13.941059 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 14:49:13 crc kubenswrapper[4861]: I0219 14:49:13.941293 4861 scope.go:117] "RemoveContainer" containerID="dd393c52fffdc5d45c1d21b9397a157d1a621bcde232ab79baa291d40c166730" Feb 19 14:49:13 crc kubenswrapper[4861]: I0219 14:49:13.971368 4861 scope.go:117] "RemoveContainer" containerID="88eaad6662ce5f1eb021be562dbb17f4ed1618e7924f1f00c5152f219dba8a60" Feb 19 14:49:13 crc kubenswrapper[4861]: I0219 14:49:13.998950 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 14:49:14 crc kubenswrapper[4861]: I0219 14:49:14.006564 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 14:49:14 crc kubenswrapper[4861]: I0219 14:49:14.025359 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 14:49:14 crc kubenswrapper[4861]: E0219 14:49:14.026068 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8002e2b6-32e4-444f-aa40-bb2bcef9dc12" containerName="nova-api-log" Feb 19 14:49:14 crc kubenswrapper[4861]: I0219 14:49:14.026101 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8002e2b6-32e4-444f-aa40-bb2bcef9dc12" containerName="nova-api-log" Feb 19 14:49:14 crc kubenswrapper[4861]: E0219 14:49:14.026144 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8002e2b6-32e4-444f-aa40-bb2bcef9dc12" containerName="nova-api-api" Feb 19 14:49:14 crc kubenswrapper[4861]: I0219 14:49:14.026157 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8002e2b6-32e4-444f-aa40-bb2bcef9dc12" containerName="nova-api-api" Feb 19 14:49:14 crc kubenswrapper[4861]: I0219 14:49:14.026471 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8002e2b6-32e4-444f-aa40-bb2bcef9dc12" containerName="nova-api-log" Feb 19 14:49:14 crc kubenswrapper[4861]: I0219 14:49:14.026507 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8002e2b6-32e4-444f-aa40-bb2bcef9dc12" 
containerName="nova-api-api" Feb 19 14:49:14 crc kubenswrapper[4861]: I0219 14:49:14.028354 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 14:49:14 crc kubenswrapper[4861]: I0219 14:49:14.031374 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 14:49:14 crc kubenswrapper[4861]: I0219 14:49:14.031374 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 19 14:49:14 crc kubenswrapper[4861]: I0219 14:49:14.035029 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 19 14:49:14 crc kubenswrapper[4861]: I0219 14:49:14.039565 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 14:49:14 crc kubenswrapper[4861]: I0219 14:49:14.139230 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-public-tls-certs\") pod \"nova-api-0\" (UID: \"6a686dc0-b187-47d8-a90c-1db5eea1d4e7\") " pod="openstack/nova-api-0" Feb 19 14:49:14 crc kubenswrapper[4861]: I0219 14:49:14.139351 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6a686dc0-b187-47d8-a90c-1db5eea1d4e7\") " pod="openstack/nova-api-0" Feb 19 14:49:14 crc kubenswrapper[4861]: I0219 14:49:14.139398 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-config-data\") pod \"nova-api-0\" (UID: \"6a686dc0-b187-47d8-a90c-1db5eea1d4e7\") " pod="openstack/nova-api-0" Feb 19 14:49:14 crc kubenswrapper[4861]: I0219 
14:49:14.139508 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-logs\") pod \"nova-api-0\" (UID: \"6a686dc0-b187-47d8-a90c-1db5eea1d4e7\") " pod="openstack/nova-api-0" Feb 19 14:49:14 crc kubenswrapper[4861]: I0219 14:49:14.139542 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2wxb\" (UniqueName: \"kubernetes.io/projected/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-kube-api-access-s2wxb\") pod \"nova-api-0\" (UID: \"6a686dc0-b187-47d8-a90c-1db5eea1d4e7\") " pod="openstack/nova-api-0" Feb 19 14:49:14 crc kubenswrapper[4861]: I0219 14:49:14.139641 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6a686dc0-b187-47d8-a90c-1db5eea1d4e7\") " pod="openstack/nova-api-0" Feb 19 14:49:14 crc kubenswrapper[4861]: I0219 14:49:14.241203 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-logs\") pod \"nova-api-0\" (UID: \"6a686dc0-b187-47d8-a90c-1db5eea1d4e7\") " pod="openstack/nova-api-0" Feb 19 14:49:14 crc kubenswrapper[4861]: I0219 14:49:14.241297 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2wxb\" (UniqueName: \"kubernetes.io/projected/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-kube-api-access-s2wxb\") pod \"nova-api-0\" (UID: \"6a686dc0-b187-47d8-a90c-1db5eea1d4e7\") " pod="openstack/nova-api-0" Feb 19 14:49:14 crc kubenswrapper[4861]: I0219 14:49:14.241402 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6a686dc0-b187-47d8-a90c-1db5eea1d4e7\") " pod="openstack/nova-api-0" Feb 19 14:49:14 crc kubenswrapper[4861]: I0219 14:49:14.241532 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-public-tls-certs\") pod \"nova-api-0\" (UID: \"6a686dc0-b187-47d8-a90c-1db5eea1d4e7\") " pod="openstack/nova-api-0" Feb 19 14:49:14 crc kubenswrapper[4861]: I0219 14:49:14.241659 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6a686dc0-b187-47d8-a90c-1db5eea1d4e7\") " pod="openstack/nova-api-0" Feb 19 14:49:14 crc kubenswrapper[4861]: I0219 14:49:14.241719 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-config-data\") pod \"nova-api-0\" (UID: \"6a686dc0-b187-47d8-a90c-1db5eea1d4e7\") " pod="openstack/nova-api-0" Feb 19 14:49:14 crc kubenswrapper[4861]: I0219 14:49:14.241995 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-logs\") pod \"nova-api-0\" (UID: \"6a686dc0-b187-47d8-a90c-1db5eea1d4e7\") " pod="openstack/nova-api-0" Feb 19 14:49:14 crc kubenswrapper[4861]: I0219 14:49:14.246726 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6a686dc0-b187-47d8-a90c-1db5eea1d4e7\") " pod="openstack/nova-api-0" Feb 19 14:49:14 crc kubenswrapper[4861]: I0219 14:49:14.248935 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-config-data\") pod \"nova-api-0\" (UID: \"6a686dc0-b187-47d8-a90c-1db5eea1d4e7\") " pod="openstack/nova-api-0" Feb 19 14:49:14 crc kubenswrapper[4861]: I0219 14:49:14.249620 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6a686dc0-b187-47d8-a90c-1db5eea1d4e7\") " pod="openstack/nova-api-0" Feb 19 14:49:14 crc kubenswrapper[4861]: I0219 14:49:14.251507 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-public-tls-certs\") pod \"nova-api-0\" (UID: \"6a686dc0-b187-47d8-a90c-1db5eea1d4e7\") " pod="openstack/nova-api-0" Feb 19 14:49:14 crc kubenswrapper[4861]: I0219 14:49:14.264283 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2wxb\" (UniqueName: \"kubernetes.io/projected/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-kube-api-access-s2wxb\") pod \"nova-api-0\" (UID: \"6a686dc0-b187-47d8-a90c-1db5eea1d4e7\") " pod="openstack/nova-api-0" Feb 19 14:49:14 crc kubenswrapper[4861]: I0219 14:49:14.361660 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 14:49:14 crc kubenswrapper[4861]: I0219 14:49:14.851900 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 14:49:14 crc kubenswrapper[4861]: W0219 14:49:14.856772 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a686dc0_b187_47d8_a90c_1db5eea1d4e7.slice/crio-109dc8ec52d99792d59e09e3a65ebe5fab6776f1142f83d181e0c814b2511280 WatchSource:0}: Error finding container 109dc8ec52d99792d59e09e3a65ebe5fab6776f1142f83d181e0c814b2511280: Status 404 returned error can't find the container with id 109dc8ec52d99792d59e09e3a65ebe5fab6776f1142f83d181e0c814b2511280 Feb 19 14:49:14 crc kubenswrapper[4861]: I0219 14:49:14.949542 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6a686dc0-b187-47d8-a90c-1db5eea1d4e7","Type":"ContainerStarted","Data":"109dc8ec52d99792d59e09e3a65ebe5fab6776f1142f83d181e0c814b2511280"} Feb 19 14:49:15 crc kubenswrapper[4861]: I0219 14:49:15.961482 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6a686dc0-b187-47d8-a90c-1db5eea1d4e7","Type":"ContainerStarted","Data":"1e39482c95424a0d7e351847d2c48e042d30656bed1da08875c082851593b506"} Feb 19 14:49:15 crc kubenswrapper[4861]: I0219 14:49:15.961933 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6a686dc0-b187-47d8-a90c-1db5eea1d4e7","Type":"ContainerStarted","Data":"c0f2b523daf180bf1973cf992a132f05c495761dfd661d1df189c080addb70a5"} Feb 19 14:49:15 crc kubenswrapper[4861]: I0219 14:49:15.982376 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.982354161 podStartE2EDuration="2.982354161s" podCreationTimestamp="2026-02-19 14:49:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:49:15.981457478 +0000 UTC m=+5970.642560716" watchObservedRunningTime="2026-02-19 14:49:15.982354161 +0000 UTC m=+5970.643457399" Feb 19 14:49:16 crc kubenswrapper[4861]: I0219 14:49:16.004394 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8002e2b6-32e4-444f-aa40-bb2bcef9dc12" path="/var/lib/kubelet/pods/8002e2b6-32e4-444f-aa40-bb2bcef9dc12/volumes" Feb 19 14:49:16 crc kubenswrapper[4861]: I0219 14:49:16.367621 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79985d99f7-gzt8f" Feb 19 14:49:16 crc kubenswrapper[4861]: I0219 14:49:16.419797 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c4bc8756f-nmf6c"] Feb 19 14:49:16 crc kubenswrapper[4861]: I0219 14:49:16.420048 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-c4bc8756f-nmf6c" podUID="24443d9b-b1c9-4647-8e75-918f50110f68" containerName="dnsmasq-dns" containerID="cri-o://e64312c81fe4a0f2b62a73a06597e4cea8613eb134251b9df8b2b2746e4a2a48" gracePeriod=10 Feb 19 14:49:16 crc kubenswrapper[4861]: I0219 14:49:16.965017 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c4bc8756f-nmf6c" Feb 19 14:49:16 crc kubenswrapper[4861]: I0219 14:49:16.970556 4861 generic.go:334] "Generic (PLEG): container finished" podID="24443d9b-b1c9-4647-8e75-918f50110f68" containerID="e64312c81fe4a0f2b62a73a06597e4cea8613eb134251b9df8b2b2746e4a2a48" exitCode=0 Feb 19 14:49:16 crc kubenswrapper[4861]: I0219 14:49:16.970631 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c4bc8756f-nmf6c" Feb 19 14:49:16 crc kubenswrapper[4861]: I0219 14:49:16.970727 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4bc8756f-nmf6c" event={"ID":"24443d9b-b1c9-4647-8e75-918f50110f68","Type":"ContainerDied","Data":"e64312c81fe4a0f2b62a73a06597e4cea8613eb134251b9df8b2b2746e4a2a48"} Feb 19 14:49:16 crc kubenswrapper[4861]: I0219 14:49:16.970772 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4bc8756f-nmf6c" event={"ID":"24443d9b-b1c9-4647-8e75-918f50110f68","Type":"ContainerDied","Data":"ebd64785ce3a8e89fd5dfacdd41fef889548fee1825000a47e7e051b1f2305c5"} Feb 19 14:49:16 crc kubenswrapper[4861]: I0219 14:49:16.970797 4861 scope.go:117] "RemoveContainer" containerID="e64312c81fe4a0f2b62a73a06597e4cea8613eb134251b9df8b2b2746e4a2a48" Feb 19 14:49:16 crc kubenswrapper[4861]: I0219 14:49:16.999657 4861 scope.go:117] "RemoveContainer" containerID="3053c0c1cd5c1709ea3a705131d1c53648ddb9638fcc96e2095fcdaa32bc05bb" Feb 19 14:49:17 crc kubenswrapper[4861]: I0219 14:49:17.027388 4861 scope.go:117] "RemoveContainer" containerID="e64312c81fe4a0f2b62a73a06597e4cea8613eb134251b9df8b2b2746e4a2a48" Feb 19 14:49:17 crc kubenswrapper[4861]: E0219 14:49:17.027938 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e64312c81fe4a0f2b62a73a06597e4cea8613eb134251b9df8b2b2746e4a2a48\": container with ID starting with e64312c81fe4a0f2b62a73a06597e4cea8613eb134251b9df8b2b2746e4a2a48 not found: ID does not exist" containerID="e64312c81fe4a0f2b62a73a06597e4cea8613eb134251b9df8b2b2746e4a2a48" Feb 19 14:49:17 crc kubenswrapper[4861]: I0219 14:49:17.027970 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e64312c81fe4a0f2b62a73a06597e4cea8613eb134251b9df8b2b2746e4a2a48"} err="failed to get container status 
\"e64312c81fe4a0f2b62a73a06597e4cea8613eb134251b9df8b2b2746e4a2a48\": rpc error: code = NotFound desc = could not find container \"e64312c81fe4a0f2b62a73a06597e4cea8613eb134251b9df8b2b2746e4a2a48\": container with ID starting with e64312c81fe4a0f2b62a73a06597e4cea8613eb134251b9df8b2b2746e4a2a48 not found: ID does not exist" Feb 19 14:49:17 crc kubenswrapper[4861]: I0219 14:49:17.027991 4861 scope.go:117] "RemoveContainer" containerID="3053c0c1cd5c1709ea3a705131d1c53648ddb9638fcc96e2095fcdaa32bc05bb" Feb 19 14:49:17 crc kubenswrapper[4861]: E0219 14:49:17.028184 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3053c0c1cd5c1709ea3a705131d1c53648ddb9638fcc96e2095fcdaa32bc05bb\": container with ID starting with 3053c0c1cd5c1709ea3a705131d1c53648ddb9638fcc96e2095fcdaa32bc05bb not found: ID does not exist" containerID="3053c0c1cd5c1709ea3a705131d1c53648ddb9638fcc96e2095fcdaa32bc05bb" Feb 19 14:49:17 crc kubenswrapper[4861]: I0219 14:49:17.028202 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3053c0c1cd5c1709ea3a705131d1c53648ddb9638fcc96e2095fcdaa32bc05bb"} err="failed to get container status \"3053c0c1cd5c1709ea3a705131d1c53648ddb9638fcc96e2095fcdaa32bc05bb\": rpc error: code = NotFound desc = could not find container \"3053c0c1cd5c1709ea3a705131d1c53648ddb9638fcc96e2095fcdaa32bc05bb\": container with ID starting with 3053c0c1cd5c1709ea3a705131d1c53648ddb9638fcc96e2095fcdaa32bc05bb not found: ID does not exist" Feb 19 14:49:17 crc kubenswrapper[4861]: I0219 14:49:17.102897 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24443d9b-b1c9-4647-8e75-918f50110f68-ovsdbserver-nb\") pod \"24443d9b-b1c9-4647-8e75-918f50110f68\" (UID: \"24443d9b-b1c9-4647-8e75-918f50110f68\") " Feb 19 14:49:17 crc kubenswrapper[4861]: I0219 14:49:17.102991 4861 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24443d9b-b1c9-4647-8e75-918f50110f68-ovsdbserver-sb\") pod \"24443d9b-b1c9-4647-8e75-918f50110f68\" (UID: \"24443d9b-b1c9-4647-8e75-918f50110f68\") " Feb 19 14:49:17 crc kubenswrapper[4861]: I0219 14:49:17.103065 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24443d9b-b1c9-4647-8e75-918f50110f68-config\") pod \"24443d9b-b1c9-4647-8e75-918f50110f68\" (UID: \"24443d9b-b1c9-4647-8e75-918f50110f68\") " Feb 19 14:49:17 crc kubenswrapper[4861]: I0219 14:49:17.103226 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24443d9b-b1c9-4647-8e75-918f50110f68-dns-svc\") pod \"24443d9b-b1c9-4647-8e75-918f50110f68\" (UID: \"24443d9b-b1c9-4647-8e75-918f50110f68\") " Feb 19 14:49:17 crc kubenswrapper[4861]: I0219 14:49:17.103320 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2jdj\" (UniqueName: \"kubernetes.io/projected/24443d9b-b1c9-4647-8e75-918f50110f68-kube-api-access-q2jdj\") pod \"24443d9b-b1c9-4647-8e75-918f50110f68\" (UID: \"24443d9b-b1c9-4647-8e75-918f50110f68\") " Feb 19 14:49:17 crc kubenswrapper[4861]: I0219 14:49:17.113257 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24443d9b-b1c9-4647-8e75-918f50110f68-kube-api-access-q2jdj" (OuterVolumeSpecName: "kube-api-access-q2jdj") pod "24443d9b-b1c9-4647-8e75-918f50110f68" (UID: "24443d9b-b1c9-4647-8e75-918f50110f68"). InnerVolumeSpecName "kube-api-access-q2jdj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:49:17 crc kubenswrapper[4861]: I0219 14:49:17.153912 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24443d9b-b1c9-4647-8e75-918f50110f68-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "24443d9b-b1c9-4647-8e75-918f50110f68" (UID: "24443d9b-b1c9-4647-8e75-918f50110f68"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:49:17 crc kubenswrapper[4861]: I0219 14:49:17.169353 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24443d9b-b1c9-4647-8e75-918f50110f68-config" (OuterVolumeSpecName: "config") pod "24443d9b-b1c9-4647-8e75-918f50110f68" (UID: "24443d9b-b1c9-4647-8e75-918f50110f68"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:49:17 crc kubenswrapper[4861]: I0219 14:49:17.171512 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24443d9b-b1c9-4647-8e75-918f50110f68-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "24443d9b-b1c9-4647-8e75-918f50110f68" (UID: "24443d9b-b1c9-4647-8e75-918f50110f68"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:49:17 crc kubenswrapper[4861]: I0219 14:49:17.175109 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24443d9b-b1c9-4647-8e75-918f50110f68-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "24443d9b-b1c9-4647-8e75-918f50110f68" (UID: "24443d9b-b1c9-4647-8e75-918f50110f68"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:49:17 crc kubenswrapper[4861]: I0219 14:49:17.205846 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24443d9b-b1c9-4647-8e75-918f50110f68-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 14:49:17 crc kubenswrapper[4861]: I0219 14:49:17.205882 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24443d9b-b1c9-4647-8e75-918f50110f68-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 14:49:17 crc kubenswrapper[4861]: I0219 14:49:17.205892 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24443d9b-b1c9-4647-8e75-918f50110f68-config\") on node \"crc\" DevicePath \"\"" Feb 19 14:49:17 crc kubenswrapper[4861]: I0219 14:49:17.205900 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24443d9b-b1c9-4647-8e75-918f50110f68-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 14:49:17 crc kubenswrapper[4861]: I0219 14:49:17.205910 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2jdj\" (UniqueName: \"kubernetes.io/projected/24443d9b-b1c9-4647-8e75-918f50110f68-kube-api-access-q2jdj\") on node \"crc\" DevicePath \"\"" Feb 19 14:49:17 crc kubenswrapper[4861]: I0219 14:49:17.298136 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c4bc8756f-nmf6c"] Feb 19 14:49:17 crc kubenswrapper[4861]: I0219 14:49:17.305019 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c4bc8756f-nmf6c"] Feb 19 14:49:17 crc kubenswrapper[4861]: I0219 14:49:17.998921 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24443d9b-b1c9-4647-8e75-918f50110f68" path="/var/lib/kubelet/pods/24443d9b-b1c9-4647-8e75-918f50110f68/volumes" Feb 19 14:49:24 crc kubenswrapper[4861]: 
I0219 14:49:24.362895 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 14:49:24 crc kubenswrapper[4861]: I0219 14:49:24.363891 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 14:49:25 crc kubenswrapper[4861]: I0219 14:49:25.375627 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6a686dc0-b187-47d8-a90c-1db5eea1d4e7" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.102:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 14:49:25 crc kubenswrapper[4861]: I0219 14:49:25.375640 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6a686dc0-b187-47d8-a90c-1db5eea1d4e7" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.102:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 14:49:26 crc kubenswrapper[4861]: I0219 14:49:26.093215 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e543-account-create-update-9xtc6"] Feb 19 14:49:26 crc kubenswrapper[4861]: I0219 14:49:26.106286 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-pcpkc"] Feb 19 14:49:26 crc kubenswrapper[4861]: I0219 14:49:26.115639 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-e543-account-create-update-9xtc6"] Feb 19 14:49:26 crc kubenswrapper[4861]: I0219 14:49:26.125306 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-pcpkc"] Feb 19 14:49:27 crc kubenswrapper[4861]: I0219 14:49:27.992494 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88155c43-9d6e-4602-a77e-43a05fd5f3a3" path="/var/lib/kubelet/pods/88155c43-9d6e-4602-a77e-43a05fd5f3a3/volumes" Feb 19 14:49:27 crc kubenswrapper[4861]: I0219 
14:49:27.993503 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1b96a9f-15a7-43be-b323-7784fae3b57f" path="/var/lib/kubelet/pods/d1b96a9f-15a7-43be-b323-7784fae3b57f/volumes" Feb 19 14:49:33 crc kubenswrapper[4861]: I0219 14:49:33.051678 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-8rddw"] Feb 19 14:49:33 crc kubenswrapper[4861]: I0219 14:49:33.070126 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-8rddw"] Feb 19 14:49:33 crc kubenswrapper[4861]: I0219 14:49:33.834861 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 14:49:33 crc kubenswrapper[4861]: I0219 14:49:33.834979 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 14:49:33 crc kubenswrapper[4861]: I0219 14:49:33.997083 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e903ee6f-b6d4-4b6f-9487-56a87b1c444a" path="/var/lib/kubelet/pods/e903ee6f-b6d4-4b6f-9487-56a87b1c444a/volumes" Feb 19 14:49:34 crc kubenswrapper[4861]: I0219 14:49:34.374668 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 14:49:34 crc kubenswrapper[4861]: I0219 14:49:34.375162 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 14:49:34 crc kubenswrapper[4861]: I0219 14:49:34.379526 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-api-0" Feb 19 14:49:34 crc kubenswrapper[4861]: I0219 14:49:34.381984 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 14:49:35 crc kubenswrapper[4861]: I0219 14:49:35.189057 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 14:49:35 crc kubenswrapper[4861]: I0219 14:49:35.199493 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 14:49:46 crc kubenswrapper[4861]: I0219 14:49:46.046893 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-v799p"] Feb 19 14:49:46 crc kubenswrapper[4861]: I0219 14:49:46.063104 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-v799p"] Feb 19 14:49:47 crc kubenswrapper[4861]: I0219 14:49:47.992892 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2625ca96-572a-45aa-9d89-e04784f50306" path="/var/lib/kubelet/pods/2625ca96-572a-45aa-9d89-e04784f50306/volumes" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.673619 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qqdlk"] Feb 19 14:49:53 crc kubenswrapper[4861]: E0219 14:49:53.674770 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24443d9b-b1c9-4647-8e75-918f50110f68" containerName="init" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.674817 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="24443d9b-b1c9-4647-8e75-918f50110f68" containerName="init" Feb 19 14:49:53 crc kubenswrapper[4861]: E0219 14:49:53.674840 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24443d9b-b1c9-4647-8e75-918f50110f68" containerName="dnsmasq-dns" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.674850 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="24443d9b-b1c9-4647-8e75-918f50110f68" 
containerName="dnsmasq-dns" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.675164 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="24443d9b-b1c9-4647-8e75-918f50110f68" containerName="dnsmasq-dns" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.676155 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qqdlk" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.678090 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.679566 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.679915 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-6q7p8" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.685983 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qqdlk"] Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.769941 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-9h8bn"] Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.777781 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-9h8bn" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.795982 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-9h8bn"] Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.806879 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bfd0003e-b24e-49ad-ac09-5426edb96b7f-var-log-ovn\") pod \"ovn-controller-qqdlk\" (UID: \"bfd0003e-b24e-49ad-ac09-5426edb96b7f\") " pod="openstack/ovn-controller-qqdlk" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.806947 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bfd0003e-b24e-49ad-ac09-5426edb96b7f-var-run\") pod \"ovn-controller-qqdlk\" (UID: \"bfd0003e-b24e-49ad-ac09-5426edb96b7f\") " pod="openstack/ovn-controller-qqdlk" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.806987 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwrvk\" (UniqueName: \"kubernetes.io/projected/bfd0003e-b24e-49ad-ac09-5426edb96b7f-kube-api-access-fwrvk\") pod \"ovn-controller-qqdlk\" (UID: \"bfd0003e-b24e-49ad-ac09-5426edb96b7f\") " pod="openstack/ovn-controller-qqdlk" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.807104 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfd0003e-b24e-49ad-ac09-5426edb96b7f-scripts\") pod \"ovn-controller-qqdlk\" (UID: \"bfd0003e-b24e-49ad-ac09-5426edb96b7f\") " pod="openstack/ovn-controller-qqdlk" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.807318 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/bfd0003e-b24e-49ad-ac09-5426edb96b7f-var-run-ovn\") pod \"ovn-controller-qqdlk\" (UID: \"bfd0003e-b24e-49ad-ac09-5426edb96b7f\") " pod="openstack/ovn-controller-qqdlk" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.807381 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd0003e-b24e-49ad-ac09-5426edb96b7f-combined-ca-bundle\") pod \"ovn-controller-qqdlk\" (UID: \"bfd0003e-b24e-49ad-ac09-5426edb96b7f\") " pod="openstack/ovn-controller-qqdlk" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.807455 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfd0003e-b24e-49ad-ac09-5426edb96b7f-ovn-controller-tls-certs\") pod \"ovn-controller-qqdlk\" (UID: \"bfd0003e-b24e-49ad-ac09-5426edb96b7f\") " pod="openstack/ovn-controller-qqdlk" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.909303 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/164388df-68b4-442e-bb1a-a0f27173cc13-var-run\") pod \"ovn-controller-ovs-9h8bn\" (UID: \"164388df-68b4-442e-bb1a-a0f27173cc13\") " pod="openstack/ovn-controller-ovs-9h8bn" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.909361 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/164388df-68b4-442e-bb1a-a0f27173cc13-var-lib\") pod \"ovn-controller-ovs-9h8bn\" (UID: \"164388df-68b4-442e-bb1a-a0f27173cc13\") " pod="openstack/ovn-controller-ovs-9h8bn" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.909515 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/164388df-68b4-442e-bb1a-a0f27173cc13-etc-ovs\") pod \"ovn-controller-ovs-9h8bn\" (UID: \"164388df-68b4-442e-bb1a-a0f27173cc13\") " pod="openstack/ovn-controller-ovs-9h8bn" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.909646 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27g7r\" (UniqueName: \"kubernetes.io/projected/164388df-68b4-442e-bb1a-a0f27173cc13-kube-api-access-27g7r\") pod \"ovn-controller-ovs-9h8bn\" (UID: \"164388df-68b4-442e-bb1a-a0f27173cc13\") " pod="openstack/ovn-controller-ovs-9h8bn" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.909733 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bfd0003e-b24e-49ad-ac09-5426edb96b7f-var-log-ovn\") pod \"ovn-controller-qqdlk\" (UID: \"bfd0003e-b24e-49ad-ac09-5426edb96b7f\") " pod="openstack/ovn-controller-qqdlk" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.909777 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bfd0003e-b24e-49ad-ac09-5426edb96b7f-var-run\") pod \"ovn-controller-qqdlk\" (UID: \"bfd0003e-b24e-49ad-ac09-5426edb96b7f\") " pod="openstack/ovn-controller-qqdlk" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.909818 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwrvk\" (UniqueName: \"kubernetes.io/projected/bfd0003e-b24e-49ad-ac09-5426edb96b7f-kube-api-access-fwrvk\") pod \"ovn-controller-qqdlk\" (UID: \"bfd0003e-b24e-49ad-ac09-5426edb96b7f\") " pod="openstack/ovn-controller-qqdlk" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.909851 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/164388df-68b4-442e-bb1a-a0f27173cc13-var-log\") pod 
\"ovn-controller-ovs-9h8bn\" (UID: \"164388df-68b4-442e-bb1a-a0f27173cc13\") " pod="openstack/ovn-controller-ovs-9h8bn" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.909907 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfd0003e-b24e-49ad-ac09-5426edb96b7f-scripts\") pod \"ovn-controller-qqdlk\" (UID: \"bfd0003e-b24e-49ad-ac09-5426edb96b7f\") " pod="openstack/ovn-controller-qqdlk" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.910025 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bfd0003e-b24e-49ad-ac09-5426edb96b7f-var-log-ovn\") pod \"ovn-controller-qqdlk\" (UID: \"bfd0003e-b24e-49ad-ac09-5426edb96b7f\") " pod="openstack/ovn-controller-qqdlk" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.910055 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bfd0003e-b24e-49ad-ac09-5426edb96b7f-var-run-ovn\") pod \"ovn-controller-qqdlk\" (UID: \"bfd0003e-b24e-49ad-ac09-5426edb96b7f\") " pod="openstack/ovn-controller-qqdlk" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.910097 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bfd0003e-b24e-49ad-ac09-5426edb96b7f-var-run\") pod \"ovn-controller-qqdlk\" (UID: \"bfd0003e-b24e-49ad-ac09-5426edb96b7f\") " pod="openstack/ovn-controller-qqdlk" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.910108 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/164388df-68b4-442e-bb1a-a0f27173cc13-scripts\") pod \"ovn-controller-ovs-9h8bn\" (UID: \"164388df-68b4-442e-bb1a-a0f27173cc13\") " pod="openstack/ovn-controller-ovs-9h8bn" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 
14:49:53.910129 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd0003e-b24e-49ad-ac09-5426edb96b7f-combined-ca-bundle\") pod \"ovn-controller-qqdlk\" (UID: \"bfd0003e-b24e-49ad-ac09-5426edb96b7f\") " pod="openstack/ovn-controller-qqdlk" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.910154 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfd0003e-b24e-49ad-ac09-5426edb96b7f-ovn-controller-tls-certs\") pod \"ovn-controller-qqdlk\" (UID: \"bfd0003e-b24e-49ad-ac09-5426edb96b7f\") " pod="openstack/ovn-controller-qqdlk" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.910180 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bfd0003e-b24e-49ad-ac09-5426edb96b7f-var-run-ovn\") pod \"ovn-controller-qqdlk\" (UID: \"bfd0003e-b24e-49ad-ac09-5426edb96b7f\") " pod="openstack/ovn-controller-qqdlk" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.912077 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfd0003e-b24e-49ad-ac09-5426edb96b7f-scripts\") pod \"ovn-controller-qqdlk\" (UID: \"bfd0003e-b24e-49ad-ac09-5426edb96b7f\") " pod="openstack/ovn-controller-qqdlk" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.916438 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfd0003e-b24e-49ad-ac09-5426edb96b7f-ovn-controller-tls-certs\") pod \"ovn-controller-qqdlk\" (UID: \"bfd0003e-b24e-49ad-ac09-5426edb96b7f\") " pod="openstack/ovn-controller-qqdlk" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.918943 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bfd0003e-b24e-49ad-ac09-5426edb96b7f-combined-ca-bundle\") pod \"ovn-controller-qqdlk\" (UID: \"bfd0003e-b24e-49ad-ac09-5426edb96b7f\") " pod="openstack/ovn-controller-qqdlk" Feb 19 14:49:53 crc kubenswrapper[4861]: I0219 14:49:53.925327 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwrvk\" (UniqueName: \"kubernetes.io/projected/bfd0003e-b24e-49ad-ac09-5426edb96b7f-kube-api-access-fwrvk\") pod \"ovn-controller-qqdlk\" (UID: \"bfd0003e-b24e-49ad-ac09-5426edb96b7f\") " pod="openstack/ovn-controller-qqdlk" Feb 19 14:49:54 crc kubenswrapper[4861]: I0219 14:49:54.012226 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/164388df-68b4-442e-bb1a-a0f27173cc13-etc-ovs\") pod \"ovn-controller-ovs-9h8bn\" (UID: \"164388df-68b4-442e-bb1a-a0f27173cc13\") " pod="openstack/ovn-controller-ovs-9h8bn" Feb 19 14:49:54 crc kubenswrapper[4861]: I0219 14:49:54.012841 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27g7r\" (UniqueName: \"kubernetes.io/projected/164388df-68b4-442e-bb1a-a0f27173cc13-kube-api-access-27g7r\") pod \"ovn-controller-ovs-9h8bn\" (UID: \"164388df-68b4-442e-bb1a-a0f27173cc13\") " pod="openstack/ovn-controller-ovs-9h8bn" Feb 19 14:49:54 crc kubenswrapper[4861]: I0219 14:49:54.012384 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/164388df-68b4-442e-bb1a-a0f27173cc13-etc-ovs\") pod \"ovn-controller-ovs-9h8bn\" (UID: \"164388df-68b4-442e-bb1a-a0f27173cc13\") " pod="openstack/ovn-controller-ovs-9h8bn" Feb 19 14:49:54 crc kubenswrapper[4861]: I0219 14:49:54.012950 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/164388df-68b4-442e-bb1a-a0f27173cc13-var-log\") pod \"ovn-controller-ovs-9h8bn\" (UID: 
\"164388df-68b4-442e-bb1a-a0f27173cc13\") " pod="openstack/ovn-controller-ovs-9h8bn" Feb 19 14:49:54 crc kubenswrapper[4861]: I0219 14:49:54.013071 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/164388df-68b4-442e-bb1a-a0f27173cc13-scripts\") pod \"ovn-controller-ovs-9h8bn\" (UID: \"164388df-68b4-442e-bb1a-a0f27173cc13\") " pod="openstack/ovn-controller-ovs-9h8bn" Feb 19 14:49:54 crc kubenswrapper[4861]: I0219 14:49:54.013137 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/164388df-68b4-442e-bb1a-a0f27173cc13-var-run\") pod \"ovn-controller-ovs-9h8bn\" (UID: \"164388df-68b4-442e-bb1a-a0f27173cc13\") " pod="openstack/ovn-controller-ovs-9h8bn" Feb 19 14:49:54 crc kubenswrapper[4861]: I0219 14:49:54.013076 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/164388df-68b4-442e-bb1a-a0f27173cc13-var-log\") pod \"ovn-controller-ovs-9h8bn\" (UID: \"164388df-68b4-442e-bb1a-a0f27173cc13\") " pod="openstack/ovn-controller-ovs-9h8bn" Feb 19 14:49:54 crc kubenswrapper[4861]: I0219 14:49:54.013171 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/164388df-68b4-442e-bb1a-a0f27173cc13-var-lib\") pod \"ovn-controller-ovs-9h8bn\" (UID: \"164388df-68b4-442e-bb1a-a0f27173cc13\") " pod="openstack/ovn-controller-ovs-9h8bn" Feb 19 14:49:54 crc kubenswrapper[4861]: I0219 14:49:54.013307 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/164388df-68b4-442e-bb1a-a0f27173cc13-var-lib\") pod \"ovn-controller-ovs-9h8bn\" (UID: \"164388df-68b4-442e-bb1a-a0f27173cc13\") " pod="openstack/ovn-controller-ovs-9h8bn" Feb 19 14:49:54 crc kubenswrapper[4861]: I0219 14:49:54.013493 4861 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/164388df-68b4-442e-bb1a-a0f27173cc13-var-run\") pod \"ovn-controller-ovs-9h8bn\" (UID: \"164388df-68b4-442e-bb1a-a0f27173cc13\") " pod="openstack/ovn-controller-ovs-9h8bn" Feb 19 14:49:54 crc kubenswrapper[4861]: I0219 14:49:54.015378 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/164388df-68b4-442e-bb1a-a0f27173cc13-scripts\") pod \"ovn-controller-ovs-9h8bn\" (UID: \"164388df-68b4-442e-bb1a-a0f27173cc13\") " pod="openstack/ovn-controller-ovs-9h8bn" Feb 19 14:49:54 crc kubenswrapper[4861]: I0219 14:49:54.015953 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qqdlk" Feb 19 14:49:54 crc kubenswrapper[4861]: I0219 14:49:54.043277 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27g7r\" (UniqueName: \"kubernetes.io/projected/164388df-68b4-442e-bb1a-a0f27173cc13-kube-api-access-27g7r\") pod \"ovn-controller-ovs-9h8bn\" (UID: \"164388df-68b4-442e-bb1a-a0f27173cc13\") " pod="openstack/ovn-controller-ovs-9h8bn" Feb 19 14:49:54 crc kubenswrapper[4861]: I0219 14:49:54.100613 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-9h8bn" Feb 19 14:49:54 crc kubenswrapper[4861]: I0219 14:49:54.526572 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qqdlk"] Feb 19 14:49:55 crc kubenswrapper[4861]: I0219 14:49:55.026068 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-9h8bn"] Feb 19 14:49:55 crc kubenswrapper[4861]: W0219 14:49:55.044020 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod164388df_68b4_442e_bb1a_a0f27173cc13.slice/crio-deb4024265deb5cb52414e956750e143e2afc02440c09065ef763cae1538f896 WatchSource:0}: Error finding container deb4024265deb5cb52414e956750e143e2afc02440c09065ef763cae1538f896: Status 404 returned error can't find the container with id deb4024265deb5cb52414e956750e143e2afc02440c09065ef763cae1538f896 Feb 19 14:49:55 crc kubenswrapper[4861]: I0219 14:49:55.264795 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-868md"] Feb 19 14:49:55 crc kubenswrapper[4861]: I0219 14:49:55.269492 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-868md" Feb 19 14:49:55 crc kubenswrapper[4861]: I0219 14:49:55.273741 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 19 14:49:55 crc kubenswrapper[4861]: I0219 14:49:55.307569 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-868md"] Feb 19 14:49:55 crc kubenswrapper[4861]: I0219 14:49:55.347008 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9636860b-6fb2-481c-a56b-ce7b093cb8b7-combined-ca-bundle\") pod \"ovn-controller-metrics-868md\" (UID: \"9636860b-6fb2-481c-a56b-ce7b093cb8b7\") " pod="openstack/ovn-controller-metrics-868md" Feb 19 14:49:55 crc kubenswrapper[4861]: I0219 14:49:55.347095 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6j2n\" (UniqueName: \"kubernetes.io/projected/9636860b-6fb2-481c-a56b-ce7b093cb8b7-kube-api-access-t6j2n\") pod \"ovn-controller-metrics-868md\" (UID: \"9636860b-6fb2-481c-a56b-ce7b093cb8b7\") " pod="openstack/ovn-controller-metrics-868md" Feb 19 14:49:55 crc kubenswrapper[4861]: I0219 14:49:55.347144 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9636860b-6fb2-481c-a56b-ce7b093cb8b7-config\") pod \"ovn-controller-metrics-868md\" (UID: \"9636860b-6fb2-481c-a56b-ce7b093cb8b7\") " pod="openstack/ovn-controller-metrics-868md" Feb 19 14:49:55 crc kubenswrapper[4861]: I0219 14:49:55.347175 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9636860b-6fb2-481c-a56b-ce7b093cb8b7-ovs-rundir\") pod \"ovn-controller-metrics-868md\" (UID: \"9636860b-6fb2-481c-a56b-ce7b093cb8b7\") " 
pod="openstack/ovn-controller-metrics-868md" Feb 19 14:49:55 crc kubenswrapper[4861]: I0219 14:49:55.347301 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9636860b-6fb2-481c-a56b-ce7b093cb8b7-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-868md\" (UID: \"9636860b-6fb2-481c-a56b-ce7b093cb8b7\") " pod="openstack/ovn-controller-metrics-868md" Feb 19 14:49:55 crc kubenswrapper[4861]: I0219 14:49:55.347338 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9636860b-6fb2-481c-a56b-ce7b093cb8b7-ovn-rundir\") pod \"ovn-controller-metrics-868md\" (UID: \"9636860b-6fb2-481c-a56b-ce7b093cb8b7\") " pod="openstack/ovn-controller-metrics-868md" Feb 19 14:49:55 crc kubenswrapper[4861]: I0219 14:49:55.421677 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9h8bn" event={"ID":"164388df-68b4-442e-bb1a-a0f27173cc13","Type":"ContainerStarted","Data":"a192752535a187866062ad48059e18c2b71acfd0ccb81d4b4c9ac2c7cf6dd904"} Feb 19 14:49:55 crc kubenswrapper[4861]: I0219 14:49:55.421732 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9h8bn" event={"ID":"164388df-68b4-442e-bb1a-a0f27173cc13","Type":"ContainerStarted","Data":"deb4024265deb5cb52414e956750e143e2afc02440c09065ef763cae1538f896"} Feb 19 14:49:55 crc kubenswrapper[4861]: I0219 14:49:55.442127 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qqdlk" event={"ID":"bfd0003e-b24e-49ad-ac09-5426edb96b7f","Type":"ContainerStarted","Data":"efab9632334f1115d460ad3407032679e3c9681f20246def3f79dba5677f9b65"} Feb 19 14:49:55 crc kubenswrapper[4861]: I0219 14:49:55.442184 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qqdlk" 
event={"ID":"bfd0003e-b24e-49ad-ac09-5426edb96b7f","Type":"ContainerStarted","Data":"f6ad0ac570b9e498cc6e23e0c961163ee6259734c6045ba355540f85ef929d64"} Feb 19 14:49:55 crc kubenswrapper[4861]: I0219 14:49:55.442443 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-qqdlk" Feb 19 14:49:55 crc kubenswrapper[4861]: I0219 14:49:55.448575 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9636860b-6fb2-481c-a56b-ce7b093cb8b7-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-868md\" (UID: \"9636860b-6fb2-481c-a56b-ce7b093cb8b7\") " pod="openstack/ovn-controller-metrics-868md" Feb 19 14:49:55 crc kubenswrapper[4861]: I0219 14:49:55.448628 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9636860b-6fb2-481c-a56b-ce7b093cb8b7-ovn-rundir\") pod \"ovn-controller-metrics-868md\" (UID: \"9636860b-6fb2-481c-a56b-ce7b093cb8b7\") " pod="openstack/ovn-controller-metrics-868md" Feb 19 14:49:55 crc kubenswrapper[4861]: I0219 14:49:55.448691 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9636860b-6fb2-481c-a56b-ce7b093cb8b7-combined-ca-bundle\") pod \"ovn-controller-metrics-868md\" (UID: \"9636860b-6fb2-481c-a56b-ce7b093cb8b7\") " pod="openstack/ovn-controller-metrics-868md" Feb 19 14:49:55 crc kubenswrapper[4861]: I0219 14:49:55.448733 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6j2n\" (UniqueName: \"kubernetes.io/projected/9636860b-6fb2-481c-a56b-ce7b093cb8b7-kube-api-access-t6j2n\") pod \"ovn-controller-metrics-868md\" (UID: \"9636860b-6fb2-481c-a56b-ce7b093cb8b7\") " pod="openstack/ovn-controller-metrics-868md" Feb 19 14:49:55 crc kubenswrapper[4861]: I0219 14:49:55.448771 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9636860b-6fb2-481c-a56b-ce7b093cb8b7-config\") pod \"ovn-controller-metrics-868md\" (UID: \"9636860b-6fb2-481c-a56b-ce7b093cb8b7\") " pod="openstack/ovn-controller-metrics-868md" Feb 19 14:49:55 crc kubenswrapper[4861]: I0219 14:49:55.448791 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9636860b-6fb2-481c-a56b-ce7b093cb8b7-ovs-rundir\") pod \"ovn-controller-metrics-868md\" (UID: \"9636860b-6fb2-481c-a56b-ce7b093cb8b7\") " pod="openstack/ovn-controller-metrics-868md" Feb 19 14:49:55 crc kubenswrapper[4861]: I0219 14:49:55.449168 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9636860b-6fb2-481c-a56b-ce7b093cb8b7-ovs-rundir\") pod \"ovn-controller-metrics-868md\" (UID: \"9636860b-6fb2-481c-a56b-ce7b093cb8b7\") " pod="openstack/ovn-controller-metrics-868md" Feb 19 14:49:55 crc kubenswrapper[4861]: I0219 14:49:55.449304 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9636860b-6fb2-481c-a56b-ce7b093cb8b7-ovn-rundir\") pod \"ovn-controller-metrics-868md\" (UID: \"9636860b-6fb2-481c-a56b-ce7b093cb8b7\") " pod="openstack/ovn-controller-metrics-868md" Feb 19 14:49:55 crc kubenswrapper[4861]: I0219 14:49:55.450024 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9636860b-6fb2-481c-a56b-ce7b093cb8b7-config\") pod \"ovn-controller-metrics-868md\" (UID: \"9636860b-6fb2-481c-a56b-ce7b093cb8b7\") " pod="openstack/ovn-controller-metrics-868md" Feb 19 14:49:55 crc kubenswrapper[4861]: I0219 14:49:55.454394 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9636860b-6fb2-481c-a56b-ce7b093cb8b7-combined-ca-bundle\") pod \"ovn-controller-metrics-868md\" (UID: \"9636860b-6fb2-481c-a56b-ce7b093cb8b7\") " pod="openstack/ovn-controller-metrics-868md" Feb 19 14:49:55 crc kubenswrapper[4861]: I0219 14:49:55.465982 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9636860b-6fb2-481c-a56b-ce7b093cb8b7-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-868md\" (UID: \"9636860b-6fb2-481c-a56b-ce7b093cb8b7\") " pod="openstack/ovn-controller-metrics-868md" Feb 19 14:49:55 crc kubenswrapper[4861]: I0219 14:49:55.469143 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-qqdlk" podStartSLOduration=2.469128149 podStartE2EDuration="2.469128149s" podCreationTimestamp="2026-02-19 14:49:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:49:55.465535373 +0000 UTC m=+6010.126638621" watchObservedRunningTime="2026-02-19 14:49:55.469128149 +0000 UTC m=+6010.130231377" Feb 19 14:49:55 crc kubenswrapper[4861]: I0219 14:49:55.479823 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6j2n\" (UniqueName: \"kubernetes.io/projected/9636860b-6fb2-481c-a56b-ce7b093cb8b7-kube-api-access-t6j2n\") pod \"ovn-controller-metrics-868md\" (UID: \"9636860b-6fb2-481c-a56b-ce7b093cb8b7\") " pod="openstack/ovn-controller-metrics-868md" Feb 19 14:49:55 crc kubenswrapper[4861]: I0219 14:49:55.608934 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-868md" Feb 19 14:49:56 crc kubenswrapper[4861]: I0219 14:49:56.086025 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-868md"] Feb 19 14:49:56 crc kubenswrapper[4861]: I0219 14:49:56.450966 4861 generic.go:334] "Generic (PLEG): container finished" podID="164388df-68b4-442e-bb1a-a0f27173cc13" containerID="a192752535a187866062ad48059e18c2b71acfd0ccb81d4b4c9ac2c7cf6dd904" exitCode=0 Feb 19 14:49:56 crc kubenswrapper[4861]: I0219 14:49:56.451027 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9h8bn" event={"ID":"164388df-68b4-442e-bb1a-a0f27173cc13","Type":"ContainerDied","Data":"a192752535a187866062ad48059e18c2b71acfd0ccb81d4b4c9ac2c7cf6dd904"} Feb 19 14:49:56 crc kubenswrapper[4861]: I0219 14:49:56.452956 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-868md" event={"ID":"9636860b-6fb2-481c-a56b-ce7b093cb8b7","Type":"ContainerStarted","Data":"04fda5d1be410833d6fa5bce5d3fad1d51c916b97144343b9de5b65e80ff7c02"} Feb 19 14:49:56 crc kubenswrapper[4861]: I0219 14:49:56.453071 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-868md" event={"ID":"9636860b-6fb2-481c-a56b-ce7b093cb8b7","Type":"ContainerStarted","Data":"96ee57be3b740e8d80e1c82430fbe1fb1c2a83e6ffb52c8f952581cd5b803f40"} Feb 19 14:49:56 crc kubenswrapper[4861]: I0219 14:49:56.517908 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-868md" podStartSLOduration=1.5178864939999999 podStartE2EDuration="1.517886494s" podCreationTimestamp="2026-02-19 14:49:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:49:56.508884472 +0000 UTC m=+6011.169987700" watchObservedRunningTime="2026-02-19 14:49:56.517886494 +0000 UTC 
m=+6011.178989732" Feb 19 14:49:57 crc kubenswrapper[4861]: I0219 14:49:57.466884 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9h8bn" event={"ID":"164388df-68b4-442e-bb1a-a0f27173cc13","Type":"ContainerStarted","Data":"a5f1709d7ac5dc106c28e3211d6a470603641cb43430667ea11937e558612dcf"} Feb 19 14:49:57 crc kubenswrapper[4861]: I0219 14:49:57.467299 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9h8bn" event={"ID":"164388df-68b4-442e-bb1a-a0f27173cc13","Type":"ContainerStarted","Data":"4bde4c222ba82f8d295b515b7626f91331a83591aed231af5eaab89e1ae6cf94"} Feb 19 14:49:57 crc kubenswrapper[4861]: I0219 14:49:57.500371 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-9h8bn" podStartSLOduration=4.50035279 podStartE2EDuration="4.50035279s" podCreationTimestamp="2026-02-19 14:49:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:49:57.492334884 +0000 UTC m=+6012.153438122" watchObservedRunningTime="2026-02-19 14:49:57.50035279 +0000 UTC m=+6012.161456018" Feb 19 14:49:58 crc kubenswrapper[4861]: I0219 14:49:58.477129 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-9h8bn" Feb 19 14:49:58 crc kubenswrapper[4861]: I0219 14:49:58.478200 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-9h8bn" Feb 19 14:50:03 crc kubenswrapper[4861]: I0219 14:50:03.834603 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 14:50:03 crc kubenswrapper[4861]: I0219 14:50:03.835583 4861 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 14:50:03 crc kubenswrapper[4861]: I0219 14:50:03.835652 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 14:50:03 crc kubenswrapper[4861]: I0219 14:50:03.836785 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"502a4549a2baa2b1572925937a0eb20e1bd1517278b2629c4572ef8e5d2f113d"} pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 14:50:03 crc kubenswrapper[4861]: I0219 14:50:03.836887 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" containerID="cri-o://502a4549a2baa2b1572925937a0eb20e1bd1517278b2629c4572ef8e5d2f113d" gracePeriod=600 Feb 19 14:50:03 crc kubenswrapper[4861]: E0219 14:50:03.969436 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:50:04 crc kubenswrapper[4861]: I0219 14:50:04.274042 4861 scope.go:117] "RemoveContainer" 
containerID="a4828d12f64ebd2d63693b3998d910bbf679448da5a1d146e13f10bee22e5aa2" Feb 19 14:50:04 crc kubenswrapper[4861]: I0219 14:50:04.311236 4861 scope.go:117] "RemoveContainer" containerID="cf529279aec1041debc0751a79088911f3ef8439d5033064b85b194f57887ff9" Feb 19 14:50:04 crc kubenswrapper[4861]: I0219 14:50:04.384372 4861 scope.go:117] "RemoveContainer" containerID="9814a8456c638cbc3e80b966402a00f1f80442602b0da4e19b45f2cd1ef4f719" Feb 19 14:50:04 crc kubenswrapper[4861]: I0219 14:50:04.413673 4861 scope.go:117] "RemoveContainer" containerID="d3d208b9fd8c961f3ae6bc8dcfab32e93884363fed8b65ce4e16c984cc002cd9" Feb 19 14:50:04 crc kubenswrapper[4861]: I0219 14:50:04.563707 4861 generic.go:334] "Generic (PLEG): container finished" podID="478e6971-05ac-43f2-99a2-cd93644c6227" containerID="502a4549a2baa2b1572925937a0eb20e1bd1517278b2629c4572ef8e5d2f113d" exitCode=0 Feb 19 14:50:04 crc kubenswrapper[4861]: I0219 14:50:04.563942 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerDied","Data":"502a4549a2baa2b1572925937a0eb20e1bd1517278b2629c4572ef8e5d2f113d"} Feb 19 14:50:04 crc kubenswrapper[4861]: I0219 14:50:04.563999 4861 scope.go:117] "RemoveContainer" containerID="857acc905a3020c924da1c5bc09451d3afdf4f6b0afc35920779725d181fa1fb" Feb 19 14:50:04 crc kubenswrapper[4861]: I0219 14:50:04.565348 4861 scope.go:117] "RemoveContainer" containerID="502a4549a2baa2b1572925937a0eb20e1bd1517278b2629c4572ef8e5d2f113d" Feb 19 14:50:04 crc kubenswrapper[4861]: E0219 14:50:04.565939 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:50:12 crc kubenswrapper[4861]: I0219 14:50:12.632645 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-create-trfks"] Feb 19 14:50:12 crc kubenswrapper[4861]: I0219 14:50:12.636713 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-trfks" Feb 19 14:50:12 crc kubenswrapper[4861]: I0219 14:50:12.661177 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-trfks"] Feb 19 14:50:12 crc kubenswrapper[4861]: I0219 14:50:12.715034 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24702168-bec0-4c40-8c69-d813349d00df-operator-scripts\") pod \"octavia-db-create-trfks\" (UID: \"24702168-bec0-4c40-8c69-d813349d00df\") " pod="openstack/octavia-db-create-trfks" Feb 19 14:50:12 crc kubenswrapper[4861]: I0219 14:50:12.715162 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzbkd\" (UniqueName: \"kubernetes.io/projected/24702168-bec0-4c40-8c69-d813349d00df-kube-api-access-wzbkd\") pod \"octavia-db-create-trfks\" (UID: \"24702168-bec0-4c40-8c69-d813349d00df\") " pod="openstack/octavia-db-create-trfks" Feb 19 14:50:12 crc kubenswrapper[4861]: I0219 14:50:12.817931 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24702168-bec0-4c40-8c69-d813349d00df-operator-scripts\") pod \"octavia-db-create-trfks\" (UID: \"24702168-bec0-4c40-8c69-d813349d00df\") " pod="openstack/octavia-db-create-trfks" Feb 19 14:50:12 crc kubenswrapper[4861]: I0219 14:50:12.818119 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzbkd\" (UniqueName: 
\"kubernetes.io/projected/24702168-bec0-4c40-8c69-d813349d00df-kube-api-access-wzbkd\") pod \"octavia-db-create-trfks\" (UID: \"24702168-bec0-4c40-8c69-d813349d00df\") " pod="openstack/octavia-db-create-trfks" Feb 19 14:50:12 crc kubenswrapper[4861]: I0219 14:50:12.818943 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24702168-bec0-4c40-8c69-d813349d00df-operator-scripts\") pod \"octavia-db-create-trfks\" (UID: \"24702168-bec0-4c40-8c69-d813349d00df\") " pod="openstack/octavia-db-create-trfks" Feb 19 14:50:12 crc kubenswrapper[4861]: I0219 14:50:12.843719 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzbkd\" (UniqueName: \"kubernetes.io/projected/24702168-bec0-4c40-8c69-d813349d00df-kube-api-access-wzbkd\") pod \"octavia-db-create-trfks\" (UID: \"24702168-bec0-4c40-8c69-d813349d00df\") " pod="openstack/octavia-db-create-trfks" Feb 19 14:50:12 crc kubenswrapper[4861]: I0219 14:50:12.964700 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-trfks" Feb 19 14:50:13 crc kubenswrapper[4861]: I0219 14:50:13.554323 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-trfks"] Feb 19 14:50:13 crc kubenswrapper[4861]: I0219 14:50:13.671975 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-trfks" event={"ID":"24702168-bec0-4c40-8c69-d813349d00df","Type":"ContainerStarted","Data":"4978ab66b58f5447e0a457a46e9e97a0e458df9dac9cf9e79f8a3222a8b3f62f"} Feb 19 14:50:14 crc kubenswrapper[4861]: I0219 14:50:14.012450 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-f9ad-account-create-update-lcn8p"] Feb 19 14:50:14 crc kubenswrapper[4861]: I0219 14:50:14.013588 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-f9ad-account-create-update-lcn8p" Feb 19 14:50:14 crc kubenswrapper[4861]: I0219 14:50:14.015470 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-db-secret" Feb 19 14:50:14 crc kubenswrapper[4861]: I0219 14:50:14.025138 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-f9ad-account-create-update-lcn8p"] Feb 19 14:50:14 crc kubenswrapper[4861]: I0219 14:50:14.147403 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjncc\" (UniqueName: \"kubernetes.io/projected/66a13647-b213-48d9-b56e-c1d2f25623d3-kube-api-access-cjncc\") pod \"octavia-f9ad-account-create-update-lcn8p\" (UID: \"66a13647-b213-48d9-b56e-c1d2f25623d3\") " pod="openstack/octavia-f9ad-account-create-update-lcn8p" Feb 19 14:50:14 crc kubenswrapper[4861]: I0219 14:50:14.147489 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66a13647-b213-48d9-b56e-c1d2f25623d3-operator-scripts\") pod \"octavia-f9ad-account-create-update-lcn8p\" (UID: \"66a13647-b213-48d9-b56e-c1d2f25623d3\") " pod="openstack/octavia-f9ad-account-create-update-lcn8p" Feb 19 14:50:14 crc kubenswrapper[4861]: I0219 14:50:14.249768 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjncc\" (UniqueName: \"kubernetes.io/projected/66a13647-b213-48d9-b56e-c1d2f25623d3-kube-api-access-cjncc\") pod \"octavia-f9ad-account-create-update-lcn8p\" (UID: \"66a13647-b213-48d9-b56e-c1d2f25623d3\") " pod="openstack/octavia-f9ad-account-create-update-lcn8p" Feb 19 14:50:14 crc kubenswrapper[4861]: I0219 14:50:14.249829 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66a13647-b213-48d9-b56e-c1d2f25623d3-operator-scripts\") pod 
\"octavia-f9ad-account-create-update-lcn8p\" (UID: \"66a13647-b213-48d9-b56e-c1d2f25623d3\") " pod="openstack/octavia-f9ad-account-create-update-lcn8p" Feb 19 14:50:14 crc kubenswrapper[4861]: I0219 14:50:14.250556 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66a13647-b213-48d9-b56e-c1d2f25623d3-operator-scripts\") pod \"octavia-f9ad-account-create-update-lcn8p\" (UID: \"66a13647-b213-48d9-b56e-c1d2f25623d3\") " pod="openstack/octavia-f9ad-account-create-update-lcn8p" Feb 19 14:50:14 crc kubenswrapper[4861]: I0219 14:50:14.269024 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjncc\" (UniqueName: \"kubernetes.io/projected/66a13647-b213-48d9-b56e-c1d2f25623d3-kube-api-access-cjncc\") pod \"octavia-f9ad-account-create-update-lcn8p\" (UID: \"66a13647-b213-48d9-b56e-c1d2f25623d3\") " pod="openstack/octavia-f9ad-account-create-update-lcn8p" Feb 19 14:50:14 crc kubenswrapper[4861]: I0219 14:50:14.355391 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-f9ad-account-create-update-lcn8p" Feb 19 14:50:14 crc kubenswrapper[4861]: I0219 14:50:14.685251 4861 generic.go:334] "Generic (PLEG): container finished" podID="24702168-bec0-4c40-8c69-d813349d00df" containerID="0d0eac7a35cf47b714239245356d377006b57b914502db663b6a9d65705b8d0d" exitCode=0 Feb 19 14:50:14 crc kubenswrapper[4861]: I0219 14:50:14.685330 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-trfks" event={"ID":"24702168-bec0-4c40-8c69-d813349d00df","Type":"ContainerDied","Data":"0d0eac7a35cf47b714239245356d377006b57b914502db663b6a9d65705b8d0d"} Feb 19 14:50:14 crc kubenswrapper[4861]: I0219 14:50:14.840487 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-f9ad-account-create-update-lcn8p"] Feb 19 14:50:14 crc kubenswrapper[4861]: I0219 14:50:14.978142 4861 scope.go:117] "RemoveContainer" containerID="502a4549a2baa2b1572925937a0eb20e1bd1517278b2629c4572ef8e5d2f113d" Feb 19 14:50:14 crc kubenswrapper[4861]: E0219 14:50:14.978577 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:50:15 crc kubenswrapper[4861]: I0219 14:50:15.696830 4861 generic.go:334] "Generic (PLEG): container finished" podID="66a13647-b213-48d9-b56e-c1d2f25623d3" containerID="ff153f73e072e8c1cebaf34e0b42405f648d2ccdea5c449af038740cb85257a8" exitCode=0 Feb 19 14:50:15 crc kubenswrapper[4861]: I0219 14:50:15.696907 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-f9ad-account-create-update-lcn8p" 
event={"ID":"66a13647-b213-48d9-b56e-c1d2f25623d3","Type":"ContainerDied","Data":"ff153f73e072e8c1cebaf34e0b42405f648d2ccdea5c449af038740cb85257a8"} Feb 19 14:50:15 crc kubenswrapper[4861]: I0219 14:50:15.697184 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-f9ad-account-create-update-lcn8p" event={"ID":"66a13647-b213-48d9-b56e-c1d2f25623d3","Type":"ContainerStarted","Data":"2aa3743db5b2ac74467668ac34c60fded1fc2244c5f0c3b51e0a960b4bd5388c"} Feb 19 14:50:16 crc kubenswrapper[4861]: I0219 14:50:16.100752 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-trfks" Feb 19 14:50:16 crc kubenswrapper[4861]: I0219 14:50:16.192164 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24702168-bec0-4c40-8c69-d813349d00df-operator-scripts\") pod \"24702168-bec0-4c40-8c69-d813349d00df\" (UID: \"24702168-bec0-4c40-8c69-d813349d00df\") " Feb 19 14:50:16 crc kubenswrapper[4861]: I0219 14:50:16.192277 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzbkd\" (UniqueName: \"kubernetes.io/projected/24702168-bec0-4c40-8c69-d813349d00df-kube-api-access-wzbkd\") pod \"24702168-bec0-4c40-8c69-d813349d00df\" (UID: \"24702168-bec0-4c40-8c69-d813349d00df\") " Feb 19 14:50:16 crc kubenswrapper[4861]: I0219 14:50:16.192619 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24702168-bec0-4c40-8c69-d813349d00df-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "24702168-bec0-4c40-8c69-d813349d00df" (UID: "24702168-bec0-4c40-8c69-d813349d00df"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:50:16 crc kubenswrapper[4861]: I0219 14:50:16.200873 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24702168-bec0-4c40-8c69-d813349d00df-kube-api-access-wzbkd" (OuterVolumeSpecName: "kube-api-access-wzbkd") pod "24702168-bec0-4c40-8c69-d813349d00df" (UID: "24702168-bec0-4c40-8c69-d813349d00df"). InnerVolumeSpecName "kube-api-access-wzbkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:50:16 crc kubenswrapper[4861]: I0219 14:50:16.294782 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzbkd\" (UniqueName: \"kubernetes.io/projected/24702168-bec0-4c40-8c69-d813349d00df-kube-api-access-wzbkd\") on node \"crc\" DevicePath \"\"" Feb 19 14:50:16 crc kubenswrapper[4861]: I0219 14:50:16.294844 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24702168-bec0-4c40-8c69-d813349d00df-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:50:16 crc kubenswrapper[4861]: I0219 14:50:16.708741 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-trfks" event={"ID":"24702168-bec0-4c40-8c69-d813349d00df","Type":"ContainerDied","Data":"4978ab66b58f5447e0a457a46e9e97a0e458df9dac9cf9e79f8a3222a8b3f62f"} Feb 19 14:50:16 crc kubenswrapper[4861]: I0219 14:50:16.708771 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-trfks" Feb 19 14:50:16 crc kubenswrapper[4861]: I0219 14:50:16.708790 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4978ab66b58f5447e0a457a46e9e97a0e458df9dac9cf9e79f8a3222a8b3f62f" Feb 19 14:50:17 crc kubenswrapper[4861]: I0219 14:50:17.130161 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-f9ad-account-create-update-lcn8p" Feb 19 14:50:17 crc kubenswrapper[4861]: I0219 14:50:17.214674 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjncc\" (UniqueName: \"kubernetes.io/projected/66a13647-b213-48d9-b56e-c1d2f25623d3-kube-api-access-cjncc\") pod \"66a13647-b213-48d9-b56e-c1d2f25623d3\" (UID: \"66a13647-b213-48d9-b56e-c1d2f25623d3\") " Feb 19 14:50:17 crc kubenswrapper[4861]: I0219 14:50:17.214840 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66a13647-b213-48d9-b56e-c1d2f25623d3-operator-scripts\") pod \"66a13647-b213-48d9-b56e-c1d2f25623d3\" (UID: \"66a13647-b213-48d9-b56e-c1d2f25623d3\") " Feb 19 14:50:17 crc kubenswrapper[4861]: I0219 14:50:17.216159 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66a13647-b213-48d9-b56e-c1d2f25623d3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "66a13647-b213-48d9-b56e-c1d2f25623d3" (UID: "66a13647-b213-48d9-b56e-c1d2f25623d3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:50:17 crc kubenswrapper[4861]: I0219 14:50:17.222516 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66a13647-b213-48d9-b56e-c1d2f25623d3-kube-api-access-cjncc" (OuterVolumeSpecName: "kube-api-access-cjncc") pod "66a13647-b213-48d9-b56e-c1d2f25623d3" (UID: "66a13647-b213-48d9-b56e-c1d2f25623d3"). InnerVolumeSpecName "kube-api-access-cjncc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:50:17 crc kubenswrapper[4861]: I0219 14:50:17.317599 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66a13647-b213-48d9-b56e-c1d2f25623d3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:50:17 crc kubenswrapper[4861]: I0219 14:50:17.317642 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjncc\" (UniqueName: \"kubernetes.io/projected/66a13647-b213-48d9-b56e-c1d2f25623d3-kube-api-access-cjncc\") on node \"crc\" DevicePath \"\"" Feb 19 14:50:17 crc kubenswrapper[4861]: I0219 14:50:17.717111 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-f9ad-account-create-update-lcn8p" event={"ID":"66a13647-b213-48d9-b56e-c1d2f25623d3","Type":"ContainerDied","Data":"2aa3743db5b2ac74467668ac34c60fded1fc2244c5f0c3b51e0a960b4bd5388c"} Feb 19 14:50:17 crc kubenswrapper[4861]: I0219 14:50:17.717340 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aa3743db5b2ac74467668ac34c60fded1fc2244c5f0c3b51e0a960b4bd5388c" Feb 19 14:50:17 crc kubenswrapper[4861]: I0219 14:50:17.717389 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-f9ad-account-create-update-lcn8p" Feb 19 14:50:19 crc kubenswrapper[4861]: I0219 14:50:19.971992 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-persistence-db-create-vzj56"] Feb 19 14:50:19 crc kubenswrapper[4861]: E0219 14:50:19.972556 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a13647-b213-48d9-b56e-c1d2f25623d3" containerName="mariadb-account-create-update" Feb 19 14:50:19 crc kubenswrapper[4861]: I0219 14:50:19.972569 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a13647-b213-48d9-b56e-c1d2f25623d3" containerName="mariadb-account-create-update" Feb 19 14:50:19 crc kubenswrapper[4861]: E0219 14:50:19.972597 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24702168-bec0-4c40-8c69-d813349d00df" containerName="mariadb-database-create" Feb 19 14:50:19 crc kubenswrapper[4861]: I0219 14:50:19.972603 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="24702168-bec0-4c40-8c69-d813349d00df" containerName="mariadb-database-create" Feb 19 14:50:19 crc kubenswrapper[4861]: I0219 14:50:19.972770 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="66a13647-b213-48d9-b56e-c1d2f25623d3" containerName="mariadb-account-create-update" Feb 19 14:50:19 crc kubenswrapper[4861]: I0219 14:50:19.972787 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="24702168-bec0-4c40-8c69-d813349d00df" containerName="mariadb-database-create" Feb 19 14:50:19 crc kubenswrapper[4861]: I0219 14:50:19.973349 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-vzj56" Feb 19 14:50:19 crc kubenswrapper[4861]: I0219 14:50:19.991273 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-vzj56"] Feb 19 14:50:20 crc kubenswrapper[4861]: I0219 14:50:20.072952 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b075c9b9-2e89-4f64-aa2b-8abe896117e3-operator-scripts\") pod \"octavia-persistence-db-create-vzj56\" (UID: \"b075c9b9-2e89-4f64-aa2b-8abe896117e3\") " pod="openstack/octavia-persistence-db-create-vzj56" Feb 19 14:50:20 crc kubenswrapper[4861]: I0219 14:50:20.073041 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfjbp\" (UniqueName: \"kubernetes.io/projected/b075c9b9-2e89-4f64-aa2b-8abe896117e3-kube-api-access-cfjbp\") pod \"octavia-persistence-db-create-vzj56\" (UID: \"b075c9b9-2e89-4f64-aa2b-8abe896117e3\") " pod="openstack/octavia-persistence-db-create-vzj56" Feb 19 14:50:20 crc kubenswrapper[4861]: I0219 14:50:20.174908 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b075c9b9-2e89-4f64-aa2b-8abe896117e3-operator-scripts\") pod \"octavia-persistence-db-create-vzj56\" (UID: \"b075c9b9-2e89-4f64-aa2b-8abe896117e3\") " pod="openstack/octavia-persistence-db-create-vzj56" Feb 19 14:50:20 crc kubenswrapper[4861]: I0219 14:50:20.175038 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfjbp\" (UniqueName: \"kubernetes.io/projected/b075c9b9-2e89-4f64-aa2b-8abe896117e3-kube-api-access-cfjbp\") pod \"octavia-persistence-db-create-vzj56\" (UID: \"b075c9b9-2e89-4f64-aa2b-8abe896117e3\") " pod="openstack/octavia-persistence-db-create-vzj56" Feb 19 14:50:20 crc kubenswrapper[4861]: I0219 14:50:20.175878 
4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b075c9b9-2e89-4f64-aa2b-8abe896117e3-operator-scripts\") pod \"octavia-persistence-db-create-vzj56\" (UID: \"b075c9b9-2e89-4f64-aa2b-8abe896117e3\") " pod="openstack/octavia-persistence-db-create-vzj56" Feb 19 14:50:20 crc kubenswrapper[4861]: I0219 14:50:20.195215 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfjbp\" (UniqueName: \"kubernetes.io/projected/b075c9b9-2e89-4f64-aa2b-8abe896117e3-kube-api-access-cfjbp\") pod \"octavia-persistence-db-create-vzj56\" (UID: \"b075c9b9-2e89-4f64-aa2b-8abe896117e3\") " pod="openstack/octavia-persistence-db-create-vzj56" Feb 19 14:50:20 crc kubenswrapper[4861]: I0219 14:50:20.290582 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-vzj56" Feb 19 14:50:20 crc kubenswrapper[4861]: I0219 14:50:20.802956 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-vzj56"] Feb 19 14:50:21 crc kubenswrapper[4861]: I0219 14:50:21.009149 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-1022-account-create-update-zl65c"] Feb 19 14:50:21 crc kubenswrapper[4861]: I0219 14:50:21.010376 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-1022-account-create-update-zl65c" Feb 19 14:50:21 crc kubenswrapper[4861]: I0219 14:50:21.012693 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-persistence-db-secret" Feb 19 14:50:21 crc kubenswrapper[4861]: I0219 14:50:21.024557 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-1022-account-create-update-zl65c"] Feb 19 14:50:21 crc kubenswrapper[4861]: I0219 14:50:21.093864 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x62pj\" (UniqueName: \"kubernetes.io/projected/68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8-kube-api-access-x62pj\") pod \"octavia-1022-account-create-update-zl65c\" (UID: \"68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8\") " pod="openstack/octavia-1022-account-create-update-zl65c" Feb 19 14:50:21 crc kubenswrapper[4861]: I0219 14:50:21.094313 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8-operator-scripts\") pod \"octavia-1022-account-create-update-zl65c\" (UID: \"68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8\") " pod="openstack/octavia-1022-account-create-update-zl65c" Feb 19 14:50:21 crc kubenswrapper[4861]: I0219 14:50:21.197184 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x62pj\" (UniqueName: \"kubernetes.io/projected/68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8-kube-api-access-x62pj\") pod \"octavia-1022-account-create-update-zl65c\" (UID: \"68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8\") " pod="openstack/octavia-1022-account-create-update-zl65c" Feb 19 14:50:21 crc kubenswrapper[4861]: I0219 14:50:21.197345 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8-operator-scripts\") pod \"octavia-1022-account-create-update-zl65c\" (UID: \"68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8\") " pod="openstack/octavia-1022-account-create-update-zl65c" Feb 19 14:50:21 crc kubenswrapper[4861]: I0219 14:50:21.198198 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8-operator-scripts\") pod \"octavia-1022-account-create-update-zl65c\" (UID: \"68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8\") " pod="openstack/octavia-1022-account-create-update-zl65c" Feb 19 14:50:21 crc kubenswrapper[4861]: I0219 14:50:21.215799 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x62pj\" (UniqueName: \"kubernetes.io/projected/68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8-kube-api-access-x62pj\") pod \"octavia-1022-account-create-update-zl65c\" (UID: \"68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8\") " pod="openstack/octavia-1022-account-create-update-zl65c" Feb 19 14:50:21 crc kubenswrapper[4861]: I0219 14:50:21.332958 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-1022-account-create-update-zl65c" Feb 19 14:50:21 crc kubenswrapper[4861]: I0219 14:50:21.802020 4861 generic.go:334] "Generic (PLEG): container finished" podID="b075c9b9-2e89-4f64-aa2b-8abe896117e3" containerID="0b3e46ca5b776f284d4b5c95256396efbc0c44d4a0423a3949461288f7b376c3" exitCode=0 Feb 19 14:50:21 crc kubenswrapper[4861]: I0219 14:50:21.802103 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-vzj56" event={"ID":"b075c9b9-2e89-4f64-aa2b-8abe896117e3","Type":"ContainerDied","Data":"0b3e46ca5b776f284d4b5c95256396efbc0c44d4a0423a3949461288f7b376c3"} Feb 19 14:50:21 crc kubenswrapper[4861]: I0219 14:50:21.802347 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-vzj56" event={"ID":"b075c9b9-2e89-4f64-aa2b-8abe896117e3","Type":"ContainerStarted","Data":"3d7e8c9a08412c6be47d9b978155c28b92c6215204d4409f42bad23a568847ec"} Feb 19 14:50:21 crc kubenswrapper[4861]: I0219 14:50:21.826640 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-1022-account-create-update-zl65c"] Feb 19 14:50:22 crc kubenswrapper[4861]: I0219 14:50:22.816047 4861 generic.go:334] "Generic (PLEG): container finished" podID="68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8" containerID="49cc3c3522e503f24259c22aff88f284cd2259182e7eed8d4126c65f8b8902c9" exitCode=0 Feb 19 14:50:22 crc kubenswrapper[4861]: I0219 14:50:22.816153 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-1022-account-create-update-zl65c" event={"ID":"68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8","Type":"ContainerDied","Data":"49cc3c3522e503f24259c22aff88f284cd2259182e7eed8d4126c65f8b8902c9"} Feb 19 14:50:22 crc kubenswrapper[4861]: I0219 14:50:22.816319 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-1022-account-create-update-zl65c" 
event={"ID":"68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8","Type":"ContainerStarted","Data":"687610fa01ff163cada27149ae9bb203363647ba10d9597ee38c9241de93a8dd"} Feb 19 14:50:23 crc kubenswrapper[4861]: I0219 14:50:23.179588 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-vzj56" Feb 19 14:50:23 crc kubenswrapper[4861]: I0219 14:50:23.234003 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b075c9b9-2e89-4f64-aa2b-8abe896117e3-operator-scripts\") pod \"b075c9b9-2e89-4f64-aa2b-8abe896117e3\" (UID: \"b075c9b9-2e89-4f64-aa2b-8abe896117e3\") " Feb 19 14:50:23 crc kubenswrapper[4861]: I0219 14:50:23.234136 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfjbp\" (UniqueName: \"kubernetes.io/projected/b075c9b9-2e89-4f64-aa2b-8abe896117e3-kube-api-access-cfjbp\") pod \"b075c9b9-2e89-4f64-aa2b-8abe896117e3\" (UID: \"b075c9b9-2e89-4f64-aa2b-8abe896117e3\") " Feb 19 14:50:23 crc kubenswrapper[4861]: I0219 14:50:23.234892 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b075c9b9-2e89-4f64-aa2b-8abe896117e3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b075c9b9-2e89-4f64-aa2b-8abe896117e3" (UID: "b075c9b9-2e89-4f64-aa2b-8abe896117e3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:50:23 crc kubenswrapper[4861]: I0219 14:50:23.240920 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b075c9b9-2e89-4f64-aa2b-8abe896117e3-kube-api-access-cfjbp" (OuterVolumeSpecName: "kube-api-access-cfjbp") pod "b075c9b9-2e89-4f64-aa2b-8abe896117e3" (UID: "b075c9b9-2e89-4f64-aa2b-8abe896117e3"). InnerVolumeSpecName "kube-api-access-cfjbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:50:23 crc kubenswrapper[4861]: I0219 14:50:23.336660 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b075c9b9-2e89-4f64-aa2b-8abe896117e3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:50:23 crc kubenswrapper[4861]: I0219 14:50:23.336688 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfjbp\" (UniqueName: \"kubernetes.io/projected/b075c9b9-2e89-4f64-aa2b-8abe896117e3-kube-api-access-cfjbp\") on node \"crc\" DevicePath \"\"" Feb 19 14:50:23 crc kubenswrapper[4861]: I0219 14:50:23.831231 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-vzj56" Feb 19 14:50:23 crc kubenswrapper[4861]: I0219 14:50:23.831311 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-vzj56" event={"ID":"b075c9b9-2e89-4f64-aa2b-8abe896117e3","Type":"ContainerDied","Data":"3d7e8c9a08412c6be47d9b978155c28b92c6215204d4409f42bad23a568847ec"} Feb 19 14:50:23 crc kubenswrapper[4861]: I0219 14:50:23.831865 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d7e8c9a08412c6be47d9b978155c28b92c6215204d4409f42bad23a568847ec" Feb 19 14:50:24 crc kubenswrapper[4861]: I0219 14:50:24.096681 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-qqdlk" Feb 19 14:50:24 crc kubenswrapper[4861]: I0219 14:50:24.218476 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-1022-account-create-update-zl65c" Feb 19 14:50:24 crc kubenswrapper[4861]: I0219 14:50:24.356120 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8-operator-scripts\") pod \"68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8\" (UID: \"68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8\") " Feb 19 14:50:24 crc kubenswrapper[4861]: I0219 14:50:24.356543 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x62pj\" (UniqueName: \"kubernetes.io/projected/68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8-kube-api-access-x62pj\") pod \"68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8\" (UID: \"68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8\") " Feb 19 14:50:24 crc kubenswrapper[4861]: I0219 14:50:24.356911 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8" (UID: "68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:50:24 crc kubenswrapper[4861]: I0219 14:50:24.357504 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:50:24 crc kubenswrapper[4861]: I0219 14:50:24.365697 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8-kube-api-access-x62pj" (OuterVolumeSpecName: "kube-api-access-x62pj") pod "68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8" (UID: "68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8"). InnerVolumeSpecName "kube-api-access-x62pj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:50:24 crc kubenswrapper[4861]: I0219 14:50:24.459081 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x62pj\" (UniqueName: \"kubernetes.io/projected/68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8-kube-api-access-x62pj\") on node \"crc\" DevicePath \"\"" Feb 19 14:50:24 crc kubenswrapper[4861]: I0219 14:50:24.845088 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-1022-account-create-update-zl65c" event={"ID":"68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8","Type":"ContainerDied","Data":"687610fa01ff163cada27149ae9bb203363647ba10d9597ee38c9241de93a8dd"} Feb 19 14:50:24 crc kubenswrapper[4861]: I0219 14:50:24.845144 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-1022-account-create-update-zl65c" Feb 19 14:50:24 crc kubenswrapper[4861]: I0219 14:50:24.845160 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="687610fa01ff163cada27149ae9bb203363647ba10d9597ee38c9241de93a8dd" Feb 19 14:50:27 crc kubenswrapper[4861]: I0219 14:50:27.137753 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-776b749dcb-txl2h"] Feb 19 14:50:27 crc kubenswrapper[4861]: E0219 14:50:27.138565 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b075c9b9-2e89-4f64-aa2b-8abe896117e3" containerName="mariadb-database-create" Feb 19 14:50:27 crc kubenswrapper[4861]: I0219 14:50:27.138582 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b075c9b9-2e89-4f64-aa2b-8abe896117e3" containerName="mariadb-database-create" Feb 19 14:50:27 crc kubenswrapper[4861]: E0219 14:50:27.138658 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8" containerName="mariadb-account-create-update" Feb 19 14:50:27 crc kubenswrapper[4861]: I0219 14:50:27.138668 4861 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8" containerName="mariadb-account-create-update" Feb 19 14:50:27 crc kubenswrapper[4861]: I0219 14:50:27.138935 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b075c9b9-2e89-4f64-aa2b-8abe896117e3" containerName="mariadb-database-create" Feb 19 14:50:27 crc kubenswrapper[4861]: I0219 14:50:27.138963 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8" containerName="mariadb-account-create-update" Feb 19 14:50:27 crc kubenswrapper[4861]: I0219 14:50:27.140733 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-776b749dcb-txl2h" Feb 19 14:50:27 crc kubenswrapper[4861]: I0219 14:50:27.144039 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-scripts" Feb 19 14:50:27 crc kubenswrapper[4861]: I0219 14:50:27.144553 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-octavia-dockercfg-5j74l" Feb 19 14:50:27 crc kubenswrapper[4861]: I0219 14:50:27.144622 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-config-data" Feb 19 14:50:27 crc kubenswrapper[4861]: I0219 14:50:27.155699 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-octavia-ovndbs" Feb 19 14:50:27 crc kubenswrapper[4861]: I0219 14:50:27.156740 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-776b749dcb-txl2h"] Feb 19 14:50:27 crc kubenswrapper[4861]: I0219 14:50:27.242319 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/88850841-3500-4518-a1c7-167bb9afe85b-octavia-run\") pod \"octavia-api-776b749dcb-txl2h\" (UID: \"88850841-3500-4518-a1c7-167bb9afe85b\") " pod="openstack/octavia-api-776b749dcb-txl2h" Feb 19 14:50:27 crc kubenswrapper[4861]: I0219 
14:50:27.242547 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88850841-3500-4518-a1c7-167bb9afe85b-scripts\") pod \"octavia-api-776b749dcb-txl2h\" (UID: \"88850841-3500-4518-a1c7-167bb9afe85b\") " pod="openstack/octavia-api-776b749dcb-txl2h" Feb 19 14:50:27 crc kubenswrapper[4861]: I0219 14:50:27.242649 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88850841-3500-4518-a1c7-167bb9afe85b-config-data\") pod \"octavia-api-776b749dcb-txl2h\" (UID: \"88850841-3500-4518-a1c7-167bb9afe85b\") " pod="openstack/octavia-api-776b749dcb-txl2h" Feb 19 14:50:27 crc kubenswrapper[4861]: I0219 14:50:27.242758 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88850841-3500-4518-a1c7-167bb9afe85b-combined-ca-bundle\") pod \"octavia-api-776b749dcb-txl2h\" (UID: \"88850841-3500-4518-a1c7-167bb9afe85b\") " pod="openstack/octavia-api-776b749dcb-txl2h" Feb 19 14:50:27 crc kubenswrapper[4861]: I0219 14:50:27.243041 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/88850841-3500-4518-a1c7-167bb9afe85b-config-data-merged\") pod \"octavia-api-776b749dcb-txl2h\" (UID: \"88850841-3500-4518-a1c7-167bb9afe85b\") " pod="openstack/octavia-api-776b749dcb-txl2h" Feb 19 14:50:27 crc kubenswrapper[4861]: I0219 14:50:27.243120 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/88850841-3500-4518-a1c7-167bb9afe85b-ovndb-tls-certs\") pod \"octavia-api-776b749dcb-txl2h\" (UID: \"88850841-3500-4518-a1c7-167bb9afe85b\") " pod="openstack/octavia-api-776b749dcb-txl2h" Feb 19 14:50:27 crc 
kubenswrapper[4861]: I0219 14:50:27.345212 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/88850841-3500-4518-a1c7-167bb9afe85b-octavia-run\") pod \"octavia-api-776b749dcb-txl2h\" (UID: \"88850841-3500-4518-a1c7-167bb9afe85b\") " pod="openstack/octavia-api-776b749dcb-txl2h" Feb 19 14:50:27 crc kubenswrapper[4861]: I0219 14:50:27.345258 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88850841-3500-4518-a1c7-167bb9afe85b-scripts\") pod \"octavia-api-776b749dcb-txl2h\" (UID: \"88850841-3500-4518-a1c7-167bb9afe85b\") " pod="openstack/octavia-api-776b749dcb-txl2h" Feb 19 14:50:27 crc kubenswrapper[4861]: I0219 14:50:27.345285 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88850841-3500-4518-a1c7-167bb9afe85b-config-data\") pod \"octavia-api-776b749dcb-txl2h\" (UID: \"88850841-3500-4518-a1c7-167bb9afe85b\") " pod="openstack/octavia-api-776b749dcb-txl2h" Feb 19 14:50:27 crc kubenswrapper[4861]: I0219 14:50:27.345302 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88850841-3500-4518-a1c7-167bb9afe85b-combined-ca-bundle\") pod \"octavia-api-776b749dcb-txl2h\" (UID: \"88850841-3500-4518-a1c7-167bb9afe85b\") " pod="openstack/octavia-api-776b749dcb-txl2h" Feb 19 14:50:27 crc kubenswrapper[4861]: I0219 14:50:27.345357 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/88850841-3500-4518-a1c7-167bb9afe85b-config-data-merged\") pod \"octavia-api-776b749dcb-txl2h\" (UID: \"88850841-3500-4518-a1c7-167bb9afe85b\") " pod="openstack/octavia-api-776b749dcb-txl2h" Feb 19 14:50:27 crc kubenswrapper[4861]: I0219 14:50:27.345382 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/88850841-3500-4518-a1c7-167bb9afe85b-ovndb-tls-certs\") pod \"octavia-api-776b749dcb-txl2h\" (UID: \"88850841-3500-4518-a1c7-167bb9afe85b\") " pod="openstack/octavia-api-776b749dcb-txl2h" Feb 19 14:50:27 crc kubenswrapper[4861]: I0219 14:50:27.346327 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/88850841-3500-4518-a1c7-167bb9afe85b-config-data-merged\") pod \"octavia-api-776b749dcb-txl2h\" (UID: \"88850841-3500-4518-a1c7-167bb9afe85b\") " pod="openstack/octavia-api-776b749dcb-txl2h" Feb 19 14:50:27 crc kubenswrapper[4861]: I0219 14:50:27.346410 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/88850841-3500-4518-a1c7-167bb9afe85b-octavia-run\") pod \"octavia-api-776b749dcb-txl2h\" (UID: \"88850841-3500-4518-a1c7-167bb9afe85b\") " pod="openstack/octavia-api-776b749dcb-txl2h" Feb 19 14:50:27 crc kubenswrapper[4861]: I0219 14:50:27.350784 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88850841-3500-4518-a1c7-167bb9afe85b-config-data\") pod \"octavia-api-776b749dcb-txl2h\" (UID: \"88850841-3500-4518-a1c7-167bb9afe85b\") " pod="openstack/octavia-api-776b749dcb-txl2h" Feb 19 14:50:27 crc kubenswrapper[4861]: I0219 14:50:27.351238 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/88850841-3500-4518-a1c7-167bb9afe85b-ovndb-tls-certs\") pod \"octavia-api-776b749dcb-txl2h\" (UID: \"88850841-3500-4518-a1c7-167bb9afe85b\") " pod="openstack/octavia-api-776b749dcb-txl2h" Feb 19 14:50:27 crc kubenswrapper[4861]: I0219 14:50:27.351273 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/88850841-3500-4518-a1c7-167bb9afe85b-scripts\") pod \"octavia-api-776b749dcb-txl2h\" (UID: \"88850841-3500-4518-a1c7-167bb9afe85b\") " pod="openstack/octavia-api-776b749dcb-txl2h" Feb 19 14:50:27 crc kubenswrapper[4861]: I0219 14:50:27.356277 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88850841-3500-4518-a1c7-167bb9afe85b-combined-ca-bundle\") pod \"octavia-api-776b749dcb-txl2h\" (UID: \"88850841-3500-4518-a1c7-167bb9afe85b\") " pod="openstack/octavia-api-776b749dcb-txl2h" Feb 19 14:50:27 crc kubenswrapper[4861]: I0219 14:50:27.463835 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-776b749dcb-txl2h" Feb 19 14:50:28 crc kubenswrapper[4861]: W0219 14:50:28.082482 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88850841_3500_4518_a1c7_167bb9afe85b.slice/crio-85de0d7659f8b2a28f557a035a1484f3e3ee5c425d3465aa95ad29ec5e2697d0 WatchSource:0}: Error finding container 85de0d7659f8b2a28f557a035a1484f3e3ee5c425d3465aa95ad29ec5e2697d0: Status 404 returned error can't find the container with id 85de0d7659f8b2a28f557a035a1484f3e3ee5c425d3465aa95ad29ec5e2697d0 Feb 19 14:50:28 crc kubenswrapper[4861]: I0219 14:50:28.085528 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-776b749dcb-txl2h"] Feb 19 14:50:28 crc kubenswrapper[4861]: I0219 14:50:28.087406 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 14:50:28 crc kubenswrapper[4861]: I0219 14:50:28.887306 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-776b749dcb-txl2h" event={"ID":"88850841-3500-4518-a1c7-167bb9afe85b","Type":"ContainerStarted","Data":"85de0d7659f8b2a28f557a035a1484f3e3ee5c425d3465aa95ad29ec5e2697d0"} Feb 19 14:50:28 crc kubenswrapper[4861]: 
I0219 14:50:28.977627 4861 scope.go:117] "RemoveContainer" containerID="502a4549a2baa2b1572925937a0eb20e1bd1517278b2629c4572ef8e5d2f113d" Feb 19 14:50:28 crc kubenswrapper[4861]: E0219 14:50:28.978085 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:50:29 crc kubenswrapper[4861]: I0219 14:50:29.159865 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-9h8bn" Feb 19 14:50:29 crc kubenswrapper[4861]: I0219 14:50:29.164967 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-9h8bn" Feb 19 14:50:29 crc kubenswrapper[4861]: I0219 14:50:29.278621 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qqdlk-config-dct76"] Feb 19 14:50:29 crc kubenswrapper[4861]: I0219 14:50:29.279823 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qqdlk-config-dct76" Feb 19 14:50:29 crc kubenswrapper[4861]: I0219 14:50:29.283944 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 19 14:50:29 crc kubenswrapper[4861]: I0219 14:50:29.293181 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qqdlk-config-dct76"] Feb 19 14:50:29 crc kubenswrapper[4861]: I0219 14:50:29.386488 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e4932804-5683-4f2f-a8a0-caed895eeb6a-var-run\") pod \"ovn-controller-qqdlk-config-dct76\" (UID: \"e4932804-5683-4f2f-a8a0-caed895eeb6a\") " pod="openstack/ovn-controller-qqdlk-config-dct76" Feb 19 14:50:29 crc kubenswrapper[4861]: I0219 14:50:29.386896 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e4932804-5683-4f2f-a8a0-caed895eeb6a-var-log-ovn\") pod \"ovn-controller-qqdlk-config-dct76\" (UID: \"e4932804-5683-4f2f-a8a0-caed895eeb6a\") " pod="openstack/ovn-controller-qqdlk-config-dct76" Feb 19 14:50:29 crc kubenswrapper[4861]: I0219 14:50:29.387065 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4932804-5683-4f2f-a8a0-caed895eeb6a-scripts\") pod \"ovn-controller-qqdlk-config-dct76\" (UID: \"e4932804-5683-4f2f-a8a0-caed895eeb6a\") " pod="openstack/ovn-controller-qqdlk-config-dct76" Feb 19 14:50:29 crc kubenswrapper[4861]: I0219 14:50:29.387609 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e4932804-5683-4f2f-a8a0-caed895eeb6a-additional-scripts\") pod \"ovn-controller-qqdlk-config-dct76\" (UID: 
\"e4932804-5683-4f2f-a8a0-caed895eeb6a\") " pod="openstack/ovn-controller-qqdlk-config-dct76" Feb 19 14:50:29 crc kubenswrapper[4861]: I0219 14:50:29.387782 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e4932804-5683-4f2f-a8a0-caed895eeb6a-var-run-ovn\") pod \"ovn-controller-qqdlk-config-dct76\" (UID: \"e4932804-5683-4f2f-a8a0-caed895eeb6a\") " pod="openstack/ovn-controller-qqdlk-config-dct76" Feb 19 14:50:29 crc kubenswrapper[4861]: I0219 14:50:29.387891 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnb74\" (UniqueName: \"kubernetes.io/projected/e4932804-5683-4f2f-a8a0-caed895eeb6a-kube-api-access-fnb74\") pod \"ovn-controller-qqdlk-config-dct76\" (UID: \"e4932804-5683-4f2f-a8a0-caed895eeb6a\") " pod="openstack/ovn-controller-qqdlk-config-dct76" Feb 19 14:50:29 crc kubenswrapper[4861]: I0219 14:50:29.491500 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e4932804-5683-4f2f-a8a0-caed895eeb6a-var-run\") pod \"ovn-controller-qqdlk-config-dct76\" (UID: \"e4932804-5683-4f2f-a8a0-caed895eeb6a\") " pod="openstack/ovn-controller-qqdlk-config-dct76" Feb 19 14:50:29 crc kubenswrapper[4861]: I0219 14:50:29.491909 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e4932804-5683-4f2f-a8a0-caed895eeb6a-var-log-ovn\") pod \"ovn-controller-qqdlk-config-dct76\" (UID: \"e4932804-5683-4f2f-a8a0-caed895eeb6a\") " pod="openstack/ovn-controller-qqdlk-config-dct76" Feb 19 14:50:29 crc kubenswrapper[4861]: I0219 14:50:29.491931 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4932804-5683-4f2f-a8a0-caed895eeb6a-scripts\") pod 
\"ovn-controller-qqdlk-config-dct76\" (UID: \"e4932804-5683-4f2f-a8a0-caed895eeb6a\") " pod="openstack/ovn-controller-qqdlk-config-dct76" Feb 19 14:50:29 crc kubenswrapper[4861]: I0219 14:50:29.491961 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e4932804-5683-4f2f-a8a0-caed895eeb6a-additional-scripts\") pod \"ovn-controller-qqdlk-config-dct76\" (UID: \"e4932804-5683-4f2f-a8a0-caed895eeb6a\") " pod="openstack/ovn-controller-qqdlk-config-dct76" Feb 19 14:50:29 crc kubenswrapper[4861]: I0219 14:50:29.492002 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e4932804-5683-4f2f-a8a0-caed895eeb6a-var-run-ovn\") pod \"ovn-controller-qqdlk-config-dct76\" (UID: \"e4932804-5683-4f2f-a8a0-caed895eeb6a\") " pod="openstack/ovn-controller-qqdlk-config-dct76" Feb 19 14:50:29 crc kubenswrapper[4861]: I0219 14:50:29.492027 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnb74\" (UniqueName: \"kubernetes.io/projected/e4932804-5683-4f2f-a8a0-caed895eeb6a-kube-api-access-fnb74\") pod \"ovn-controller-qqdlk-config-dct76\" (UID: \"e4932804-5683-4f2f-a8a0-caed895eeb6a\") " pod="openstack/ovn-controller-qqdlk-config-dct76" Feb 19 14:50:29 crc kubenswrapper[4861]: I0219 14:50:29.492470 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e4932804-5683-4f2f-a8a0-caed895eeb6a-var-run\") pod \"ovn-controller-qqdlk-config-dct76\" (UID: \"e4932804-5683-4f2f-a8a0-caed895eeb6a\") " pod="openstack/ovn-controller-qqdlk-config-dct76" Feb 19 14:50:29 crc kubenswrapper[4861]: I0219 14:50:29.492476 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e4932804-5683-4f2f-a8a0-caed895eeb6a-var-run-ovn\") pod 
\"ovn-controller-qqdlk-config-dct76\" (UID: \"e4932804-5683-4f2f-a8a0-caed895eeb6a\") " pod="openstack/ovn-controller-qqdlk-config-dct76" Feb 19 14:50:29 crc kubenswrapper[4861]: I0219 14:50:29.492511 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e4932804-5683-4f2f-a8a0-caed895eeb6a-var-log-ovn\") pod \"ovn-controller-qqdlk-config-dct76\" (UID: \"e4932804-5683-4f2f-a8a0-caed895eeb6a\") " pod="openstack/ovn-controller-qqdlk-config-dct76" Feb 19 14:50:29 crc kubenswrapper[4861]: I0219 14:50:29.493238 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e4932804-5683-4f2f-a8a0-caed895eeb6a-additional-scripts\") pod \"ovn-controller-qqdlk-config-dct76\" (UID: \"e4932804-5683-4f2f-a8a0-caed895eeb6a\") " pod="openstack/ovn-controller-qqdlk-config-dct76" Feb 19 14:50:29 crc kubenswrapper[4861]: I0219 14:50:29.494893 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4932804-5683-4f2f-a8a0-caed895eeb6a-scripts\") pod \"ovn-controller-qqdlk-config-dct76\" (UID: \"e4932804-5683-4f2f-a8a0-caed895eeb6a\") " pod="openstack/ovn-controller-qqdlk-config-dct76" Feb 19 14:50:29 crc kubenswrapper[4861]: I0219 14:50:29.519248 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnb74\" (UniqueName: \"kubernetes.io/projected/e4932804-5683-4f2f-a8a0-caed895eeb6a-kube-api-access-fnb74\") pod \"ovn-controller-qqdlk-config-dct76\" (UID: \"e4932804-5683-4f2f-a8a0-caed895eeb6a\") " pod="openstack/ovn-controller-qqdlk-config-dct76" Feb 19 14:50:29 crc kubenswrapper[4861]: I0219 14:50:29.620745 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qqdlk-config-dct76" Feb 19 14:50:30 crc kubenswrapper[4861]: I0219 14:50:30.153730 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qqdlk-config-dct76"] Feb 19 14:50:30 crc kubenswrapper[4861]: I0219 14:50:30.912269 4861 generic.go:334] "Generic (PLEG): container finished" podID="e4932804-5683-4f2f-a8a0-caed895eeb6a" containerID="56717367569dd7d1cc9904b2239ad2c86719d2f8c5b6d7af1885658fa362a533" exitCode=0 Feb 19 14:50:30 crc kubenswrapper[4861]: I0219 14:50:30.912667 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qqdlk-config-dct76" event={"ID":"e4932804-5683-4f2f-a8a0-caed895eeb6a","Type":"ContainerDied","Data":"56717367569dd7d1cc9904b2239ad2c86719d2f8c5b6d7af1885658fa362a533"} Feb 19 14:50:30 crc kubenswrapper[4861]: I0219 14:50:30.913112 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qqdlk-config-dct76" event={"ID":"e4932804-5683-4f2f-a8a0-caed895eeb6a","Type":"ContainerStarted","Data":"d89bd9427a3ecc62aaed283cdd12d4f2398553dbba9b1c6844c2a22b09c56ef4"} Feb 19 14:50:31 crc kubenswrapper[4861]: E0219 14:50:31.026054 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4932804_5683_4f2f_a8a0_caed895eeb6a.slice/crio-56717367569dd7d1cc9904b2239ad2c86719d2f8c5b6d7af1885658fa362a533.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4932804_5683_4f2f_a8a0_caed895eeb6a.slice/crio-conmon-56717367569dd7d1cc9904b2239ad2c86719d2f8c5b6d7af1885658fa362a533.scope\": RecentStats: unable to find data in memory cache]" Feb 19 14:50:37 crc kubenswrapper[4861]: I0219 14:50:37.773392 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qqdlk-config-dct76" Feb 19 14:50:37 crc kubenswrapper[4861]: I0219 14:50:37.867718 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e4932804-5683-4f2f-a8a0-caed895eeb6a-additional-scripts\") pod \"e4932804-5683-4f2f-a8a0-caed895eeb6a\" (UID: \"e4932804-5683-4f2f-a8a0-caed895eeb6a\") " Feb 19 14:50:37 crc kubenswrapper[4861]: I0219 14:50:37.867772 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e4932804-5683-4f2f-a8a0-caed895eeb6a-var-run\") pod \"e4932804-5683-4f2f-a8a0-caed895eeb6a\" (UID: \"e4932804-5683-4f2f-a8a0-caed895eeb6a\") " Feb 19 14:50:37 crc kubenswrapper[4861]: I0219 14:50:37.867848 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnb74\" (UniqueName: \"kubernetes.io/projected/e4932804-5683-4f2f-a8a0-caed895eeb6a-kube-api-access-fnb74\") pod \"e4932804-5683-4f2f-a8a0-caed895eeb6a\" (UID: \"e4932804-5683-4f2f-a8a0-caed895eeb6a\") " Feb 19 14:50:37 crc kubenswrapper[4861]: I0219 14:50:37.867986 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e4932804-5683-4f2f-a8a0-caed895eeb6a-var-run-ovn\") pod \"e4932804-5683-4f2f-a8a0-caed895eeb6a\" (UID: \"e4932804-5683-4f2f-a8a0-caed895eeb6a\") " Feb 19 14:50:37 crc kubenswrapper[4861]: I0219 14:50:37.868073 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e4932804-5683-4f2f-a8a0-caed895eeb6a-var-log-ovn\") pod \"e4932804-5683-4f2f-a8a0-caed895eeb6a\" (UID: \"e4932804-5683-4f2f-a8a0-caed895eeb6a\") " Feb 19 14:50:37 crc kubenswrapper[4861]: I0219 14:50:37.868103 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/e4932804-5683-4f2f-a8a0-caed895eeb6a-scripts\") pod \"e4932804-5683-4f2f-a8a0-caed895eeb6a\" (UID: \"e4932804-5683-4f2f-a8a0-caed895eeb6a\") " Feb 19 14:50:37 crc kubenswrapper[4861]: I0219 14:50:37.868116 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4932804-5683-4f2f-a8a0-caed895eeb6a-var-run" (OuterVolumeSpecName: "var-run") pod "e4932804-5683-4f2f-a8a0-caed895eeb6a" (UID: "e4932804-5683-4f2f-a8a0-caed895eeb6a"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 14:50:37 crc kubenswrapper[4861]: I0219 14:50:37.868202 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4932804-5683-4f2f-a8a0-caed895eeb6a-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "e4932804-5683-4f2f-a8a0-caed895eeb6a" (UID: "e4932804-5683-4f2f-a8a0-caed895eeb6a"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 14:50:37 crc kubenswrapper[4861]: I0219 14:50:37.868505 4861 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e4932804-5683-4f2f-a8a0-caed895eeb6a-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 14:50:37 crc kubenswrapper[4861]: I0219 14:50:37.868525 4861 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e4932804-5683-4f2f-a8a0-caed895eeb6a-var-run\") on node \"crc\" DevicePath \"\"" Feb 19 14:50:37 crc kubenswrapper[4861]: I0219 14:50:37.868826 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4932804-5683-4f2f-a8a0-caed895eeb6a-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "e4932804-5683-4f2f-a8a0-caed895eeb6a" (UID: "e4932804-5683-4f2f-a8a0-caed895eeb6a"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 14:50:37 crc kubenswrapper[4861]: I0219 14:50:37.869175 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4932804-5683-4f2f-a8a0-caed895eeb6a-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "e4932804-5683-4f2f-a8a0-caed895eeb6a" (UID: "e4932804-5683-4f2f-a8a0-caed895eeb6a"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:50:37 crc kubenswrapper[4861]: I0219 14:50:37.869589 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4932804-5683-4f2f-a8a0-caed895eeb6a-scripts" (OuterVolumeSpecName: "scripts") pod "e4932804-5683-4f2f-a8a0-caed895eeb6a" (UID: "e4932804-5683-4f2f-a8a0-caed895eeb6a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:50:37 crc kubenswrapper[4861]: I0219 14:50:37.873841 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4932804-5683-4f2f-a8a0-caed895eeb6a-kube-api-access-fnb74" (OuterVolumeSpecName: "kube-api-access-fnb74") pod "e4932804-5683-4f2f-a8a0-caed895eeb6a" (UID: "e4932804-5683-4f2f-a8a0-caed895eeb6a"). InnerVolumeSpecName "kube-api-access-fnb74". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:50:37 crc kubenswrapper[4861]: I0219 14:50:37.970856 4861 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e4932804-5683-4f2f-a8a0-caed895eeb6a-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 14:50:37 crc kubenswrapper[4861]: I0219 14:50:37.970887 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4932804-5683-4f2f-a8a0-caed895eeb6a-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:50:37 crc kubenswrapper[4861]: I0219 14:50:37.970899 4861 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e4932804-5683-4f2f-a8a0-caed895eeb6a-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:50:37 crc kubenswrapper[4861]: I0219 14:50:37.970914 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnb74\" (UniqueName: \"kubernetes.io/projected/e4932804-5683-4f2f-a8a0-caed895eeb6a-kube-api-access-fnb74\") on node \"crc\" DevicePath \"\"" Feb 19 14:50:37 crc kubenswrapper[4861]: I0219 14:50:37.984551 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qqdlk-config-dct76" Feb 19 14:50:37 crc kubenswrapper[4861]: I0219 14:50:37.998681 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qqdlk-config-dct76" event={"ID":"e4932804-5683-4f2f-a8a0-caed895eeb6a","Type":"ContainerDied","Data":"d89bd9427a3ecc62aaed283cdd12d4f2398553dbba9b1c6844c2a22b09c56ef4"} Feb 19 14:50:37 crc kubenswrapper[4861]: I0219 14:50:37.998721 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d89bd9427a3ecc62aaed283cdd12d4f2398553dbba9b1c6844c2a22b09c56ef4" Feb 19 14:50:38 crc kubenswrapper[4861]: I0219 14:50:38.876921 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-qqdlk-config-dct76"] Feb 19 14:50:38 crc kubenswrapper[4861]: I0219 14:50:38.889288 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-qqdlk-config-dct76"] Feb 19 14:50:38 crc kubenswrapper[4861]: I0219 14:50:38.932197 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qqdlk-config-s8htt"] Feb 19 14:50:38 crc kubenswrapper[4861]: E0219 14:50:38.932857 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4932804-5683-4f2f-a8a0-caed895eeb6a" containerName="ovn-config" Feb 19 14:50:38 crc kubenswrapper[4861]: I0219 14:50:38.932884 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4932804-5683-4f2f-a8a0-caed895eeb6a" containerName="ovn-config" Feb 19 14:50:38 crc kubenswrapper[4861]: I0219 14:50:38.933370 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4932804-5683-4f2f-a8a0-caed895eeb6a" containerName="ovn-config" Feb 19 14:50:38 crc kubenswrapper[4861]: I0219 14:50:38.934218 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qqdlk-config-s8htt" Feb 19 14:50:38 crc kubenswrapper[4861]: I0219 14:50:38.936831 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 19 14:50:38 crc kubenswrapper[4861]: I0219 14:50:38.962697 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qqdlk-config-s8htt"] Feb 19 14:50:39 crc kubenswrapper[4861]: I0219 14:50:39.002536 4861 generic.go:334] "Generic (PLEG): container finished" podID="88850841-3500-4518-a1c7-167bb9afe85b" containerID="62f4e63ace08a2041255ec3736aba8de3dde0bdda7695a074eaaa6cca4ecd073" exitCode=0 Feb 19 14:50:39 crc kubenswrapper[4861]: I0219 14:50:39.002588 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-776b749dcb-txl2h" event={"ID":"88850841-3500-4518-a1c7-167bb9afe85b","Type":"ContainerDied","Data":"62f4e63ace08a2041255ec3736aba8de3dde0bdda7695a074eaaa6cca4ecd073"} Feb 19 14:50:39 crc kubenswrapper[4861]: I0219 14:50:39.095728 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-var-run\") pod \"ovn-controller-qqdlk-config-s8htt\" (UID: \"fa5c4d4e-cd06-485d-ad9b-e82e4a449214\") " pod="openstack/ovn-controller-qqdlk-config-s8htt" Feb 19 14:50:39 crc kubenswrapper[4861]: I0219 14:50:39.095768 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-var-run-ovn\") pod \"ovn-controller-qqdlk-config-s8htt\" (UID: \"fa5c4d4e-cd06-485d-ad9b-e82e4a449214\") " pod="openstack/ovn-controller-qqdlk-config-s8htt" Feb 19 14:50:39 crc kubenswrapper[4861]: I0219 14:50:39.095790 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-scripts\") pod \"ovn-controller-qqdlk-config-s8htt\" (UID: \"fa5c4d4e-cd06-485d-ad9b-e82e4a449214\") " pod="openstack/ovn-controller-qqdlk-config-s8htt" Feb 19 14:50:39 crc kubenswrapper[4861]: I0219 14:50:39.095835 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4cgp\" (UniqueName: \"kubernetes.io/projected/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-kube-api-access-k4cgp\") pod \"ovn-controller-qqdlk-config-s8htt\" (UID: \"fa5c4d4e-cd06-485d-ad9b-e82e4a449214\") " pod="openstack/ovn-controller-qqdlk-config-s8htt" Feb 19 14:50:39 crc kubenswrapper[4861]: I0219 14:50:39.095854 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-additional-scripts\") pod \"ovn-controller-qqdlk-config-s8htt\" (UID: \"fa5c4d4e-cd06-485d-ad9b-e82e4a449214\") " pod="openstack/ovn-controller-qqdlk-config-s8htt" Feb 19 14:50:39 crc kubenswrapper[4861]: I0219 14:50:39.095989 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-var-log-ovn\") pod \"ovn-controller-qqdlk-config-s8htt\" (UID: \"fa5c4d4e-cd06-485d-ad9b-e82e4a449214\") " pod="openstack/ovn-controller-qqdlk-config-s8htt" Feb 19 14:50:39 crc kubenswrapper[4861]: I0219 14:50:39.197569 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4cgp\" (UniqueName: \"kubernetes.io/projected/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-kube-api-access-k4cgp\") pod \"ovn-controller-qqdlk-config-s8htt\" (UID: \"fa5c4d4e-cd06-485d-ad9b-e82e4a449214\") " pod="openstack/ovn-controller-qqdlk-config-s8htt" Feb 19 14:50:39 crc kubenswrapper[4861]: I0219 14:50:39.197853 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-additional-scripts\") pod \"ovn-controller-qqdlk-config-s8htt\" (UID: \"fa5c4d4e-cd06-485d-ad9b-e82e4a449214\") " pod="openstack/ovn-controller-qqdlk-config-s8htt" Feb 19 14:50:39 crc kubenswrapper[4861]: I0219 14:50:39.198080 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-var-log-ovn\") pod \"ovn-controller-qqdlk-config-s8htt\" (UID: \"fa5c4d4e-cd06-485d-ad9b-e82e4a449214\") " pod="openstack/ovn-controller-qqdlk-config-s8htt" Feb 19 14:50:39 crc kubenswrapper[4861]: I0219 14:50:39.198190 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-var-run\") pod \"ovn-controller-qqdlk-config-s8htt\" (UID: \"fa5c4d4e-cd06-485d-ad9b-e82e4a449214\") " pod="openstack/ovn-controller-qqdlk-config-s8htt" Feb 19 14:50:39 crc kubenswrapper[4861]: I0219 14:50:39.198220 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-var-run-ovn\") pod \"ovn-controller-qqdlk-config-s8htt\" (UID: \"fa5c4d4e-cd06-485d-ad9b-e82e4a449214\") " pod="openstack/ovn-controller-qqdlk-config-s8htt" Feb 19 14:50:39 crc kubenswrapper[4861]: I0219 14:50:39.198255 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-scripts\") pod \"ovn-controller-qqdlk-config-s8htt\" (UID: \"fa5c4d4e-cd06-485d-ad9b-e82e4a449214\") " pod="openstack/ovn-controller-qqdlk-config-s8htt" Feb 19 14:50:39 crc kubenswrapper[4861]: I0219 14:50:39.198730 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-var-run\") pod \"ovn-controller-qqdlk-config-s8htt\" (UID: \"fa5c4d4e-cd06-485d-ad9b-e82e4a449214\") " pod="openstack/ovn-controller-qqdlk-config-s8htt" Feb 19 14:50:39 crc kubenswrapper[4861]: I0219 14:50:39.198747 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-var-log-ovn\") pod \"ovn-controller-qqdlk-config-s8htt\" (UID: \"fa5c4d4e-cd06-485d-ad9b-e82e4a449214\") " pod="openstack/ovn-controller-qqdlk-config-s8htt" Feb 19 14:50:39 crc kubenswrapper[4861]: I0219 14:50:39.198758 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-var-run-ovn\") pod \"ovn-controller-qqdlk-config-s8htt\" (UID: \"fa5c4d4e-cd06-485d-ad9b-e82e4a449214\") " pod="openstack/ovn-controller-qqdlk-config-s8htt" Feb 19 14:50:39 crc kubenswrapper[4861]: I0219 14:50:39.199035 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-additional-scripts\") pod \"ovn-controller-qqdlk-config-s8htt\" (UID: \"fa5c4d4e-cd06-485d-ad9b-e82e4a449214\") " pod="openstack/ovn-controller-qqdlk-config-s8htt" Feb 19 14:50:39 crc kubenswrapper[4861]: I0219 14:50:39.200834 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-scripts\") pod \"ovn-controller-qqdlk-config-s8htt\" (UID: \"fa5c4d4e-cd06-485d-ad9b-e82e4a449214\") " pod="openstack/ovn-controller-qqdlk-config-s8htt" Feb 19 14:50:39 crc kubenswrapper[4861]: I0219 14:50:39.219290 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4cgp\" (UniqueName: 
\"kubernetes.io/projected/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-kube-api-access-k4cgp\") pod \"ovn-controller-qqdlk-config-s8htt\" (UID: \"fa5c4d4e-cd06-485d-ad9b-e82e4a449214\") " pod="openstack/ovn-controller-qqdlk-config-s8htt" Feb 19 14:50:39 crc kubenswrapper[4861]: I0219 14:50:39.250892 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qqdlk-config-s8htt" Feb 19 14:50:39 crc kubenswrapper[4861]: I0219 14:50:39.735670 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qqdlk-config-s8htt"] Feb 19 14:50:39 crc kubenswrapper[4861]: W0219 14:50:39.742023 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa5c4d4e_cd06_485d_ad9b_e82e4a449214.slice/crio-3d2122772f158744422ec25b6cf17c5e999296fad6dd014978b7ba6e5bb7bbd1 WatchSource:0}: Error finding container 3d2122772f158744422ec25b6cf17c5e999296fad6dd014978b7ba6e5bb7bbd1: Status 404 returned error can't find the container with id 3d2122772f158744422ec25b6cf17c5e999296fad6dd014978b7ba6e5bb7bbd1 Feb 19 14:50:39 crc kubenswrapper[4861]: I0219 14:50:39.989854 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4932804-5683-4f2f-a8a0-caed895eeb6a" path="/var/lib/kubelet/pods/e4932804-5683-4f2f-a8a0-caed895eeb6a/volumes" Feb 19 14:50:40 crc kubenswrapper[4861]: I0219 14:50:40.011720 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qqdlk-config-s8htt" event={"ID":"fa5c4d4e-cd06-485d-ad9b-e82e4a449214","Type":"ContainerStarted","Data":"3d2122772f158744422ec25b6cf17c5e999296fad6dd014978b7ba6e5bb7bbd1"} Feb 19 14:50:40 crc kubenswrapper[4861]: I0219 14:50:40.015498 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-776b749dcb-txl2h" 
event={"ID":"88850841-3500-4518-a1c7-167bb9afe85b","Type":"ContainerStarted","Data":"4d2852c5c16cc7e3f66e1b8a05cd796ab2bb78784bda749d8e2132a040016458"} Feb 19 14:50:40 crc kubenswrapper[4861]: I0219 14:50:40.015665 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-776b749dcb-txl2h" event={"ID":"88850841-3500-4518-a1c7-167bb9afe85b","Type":"ContainerStarted","Data":"c2a842754d22d2dac2cc1ea3daf9dbba2f4a787a29811eb32098cb918ad44833"} Feb 19 14:50:40 crc kubenswrapper[4861]: I0219 14:50:40.015790 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-776b749dcb-txl2h" Feb 19 14:50:40 crc kubenswrapper[4861]: I0219 14:50:40.016039 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-776b749dcb-txl2h" Feb 19 14:50:40 crc kubenswrapper[4861]: I0219 14:50:40.049507 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-776b749dcb-txl2h" podStartSLOduration=3.37790733 podStartE2EDuration="13.049485467s" podCreationTimestamp="2026-02-19 14:50:27 +0000 UTC" firstStartedPulling="2026-02-19 14:50:28.086335432 +0000 UTC m=+6042.747438670" lastFinishedPulling="2026-02-19 14:50:37.757913589 +0000 UTC m=+6052.419016807" observedRunningTime="2026-02-19 14:50:40.039305622 +0000 UTC m=+6054.700408860" watchObservedRunningTime="2026-02-19 14:50:40.049485467 +0000 UTC m=+6054.710588695" Feb 19 14:50:41 crc kubenswrapper[4861]: I0219 14:50:41.067021 4861 generic.go:334] "Generic (PLEG): container finished" podID="fa5c4d4e-cd06-485d-ad9b-e82e4a449214" containerID="ecc8f3cd66a543efb8916835b9e8b235c75a45251e63a7204c3429b31d33bd00" exitCode=0 Feb 19 14:50:41 crc kubenswrapper[4861]: I0219 14:50:41.067766 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qqdlk-config-s8htt" 
event={"ID":"fa5c4d4e-cd06-485d-ad9b-e82e4a449214","Type":"ContainerDied","Data":"ecc8f3cd66a543efb8916835b9e8b235c75a45251e63a7204c3429b31d33bd00"} Feb 19 14:50:42 crc kubenswrapper[4861]: I0219 14:50:42.533535 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qqdlk-config-s8htt" Feb 19 14:50:42 crc kubenswrapper[4861]: I0219 14:50:42.708392 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-var-run\") pod \"fa5c4d4e-cd06-485d-ad9b-e82e4a449214\" (UID: \"fa5c4d4e-cd06-485d-ad9b-e82e4a449214\") " Feb 19 14:50:42 crc kubenswrapper[4861]: I0219 14:50:42.708916 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-var-run-ovn\") pod \"fa5c4d4e-cd06-485d-ad9b-e82e4a449214\" (UID: \"fa5c4d4e-cd06-485d-ad9b-e82e4a449214\") " Feb 19 14:50:42 crc kubenswrapper[4861]: I0219 14:50:42.709152 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-var-log-ovn\") pod \"fa5c4d4e-cd06-485d-ad9b-e82e4a449214\" (UID: \"fa5c4d4e-cd06-485d-ad9b-e82e4a449214\") " Feb 19 14:50:42 crc kubenswrapper[4861]: I0219 14:50:42.708633 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-var-run" (OuterVolumeSpecName: "var-run") pod "fa5c4d4e-cd06-485d-ad9b-e82e4a449214" (UID: "fa5c4d4e-cd06-485d-ad9b-e82e4a449214"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 14:50:42 crc kubenswrapper[4861]: I0219 14:50:42.708994 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "fa5c4d4e-cd06-485d-ad9b-e82e4a449214" (UID: "fa5c4d4e-cd06-485d-ad9b-e82e4a449214"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 14:50:42 crc kubenswrapper[4861]: I0219 14:50:42.709252 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "fa5c4d4e-cd06-485d-ad9b-e82e4a449214" (UID: "fa5c4d4e-cd06-485d-ad9b-e82e4a449214"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 14:50:42 crc kubenswrapper[4861]: I0219 14:50:42.709820 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4cgp\" (UniqueName: \"kubernetes.io/projected/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-kube-api-access-k4cgp\") pod \"fa5c4d4e-cd06-485d-ad9b-e82e4a449214\" (UID: \"fa5c4d4e-cd06-485d-ad9b-e82e4a449214\") " Feb 19 14:50:42 crc kubenswrapper[4861]: I0219 14:50:42.710062 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-scripts\") pod \"fa5c4d4e-cd06-485d-ad9b-e82e4a449214\" (UID: \"fa5c4d4e-cd06-485d-ad9b-e82e4a449214\") " Feb 19 14:50:42 crc kubenswrapper[4861]: I0219 14:50:42.710472 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-additional-scripts\") pod \"fa5c4d4e-cd06-485d-ad9b-e82e4a449214\" (UID: \"fa5c4d4e-cd06-485d-ad9b-e82e4a449214\") " Feb 19 
14:50:42 crc kubenswrapper[4861]: I0219 14:50:42.714542 4861 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 14:50:42 crc kubenswrapper[4861]: I0219 14:50:42.714847 4861 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-var-run\") on node \"crc\" DevicePath \"\"" Feb 19 14:50:42 crc kubenswrapper[4861]: I0219 14:50:42.715003 4861 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 14:50:42 crc kubenswrapper[4861]: I0219 14:50:42.711174 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "fa5c4d4e-cd06-485d-ad9b-e82e4a449214" (UID: "fa5c4d4e-cd06-485d-ad9b-e82e4a449214"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:50:42 crc kubenswrapper[4861]: I0219 14:50:42.711576 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-scripts" (OuterVolumeSpecName: "scripts") pod "fa5c4d4e-cd06-485d-ad9b-e82e4a449214" (UID: "fa5c4d4e-cd06-485d-ad9b-e82e4a449214"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:50:42 crc kubenswrapper[4861]: I0219 14:50:42.720823 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-kube-api-access-k4cgp" (OuterVolumeSpecName: "kube-api-access-k4cgp") pod "fa5c4d4e-cd06-485d-ad9b-e82e4a449214" (UID: "fa5c4d4e-cd06-485d-ad9b-e82e4a449214"). InnerVolumeSpecName "kube-api-access-k4cgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:50:42 crc kubenswrapper[4861]: I0219 14:50:42.817002 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4cgp\" (UniqueName: \"kubernetes.io/projected/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-kube-api-access-k4cgp\") on node \"crc\" DevicePath \"\"" Feb 19 14:50:42 crc kubenswrapper[4861]: I0219 14:50:42.817041 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:50:42 crc kubenswrapper[4861]: I0219 14:50:42.817051 4861 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fa5c4d4e-cd06-485d-ad9b-e82e4a449214-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:50:43 crc kubenswrapper[4861]: I0219 14:50:43.095833 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qqdlk-config-s8htt" event={"ID":"fa5c4d4e-cd06-485d-ad9b-e82e4a449214","Type":"ContainerDied","Data":"3d2122772f158744422ec25b6cf17c5e999296fad6dd014978b7ba6e5bb7bbd1"} Feb 19 14:50:43 crc kubenswrapper[4861]: I0219 14:50:43.095893 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d2122772f158744422ec25b6cf17c5e999296fad6dd014978b7ba6e5bb7bbd1" Feb 19 14:50:43 crc kubenswrapper[4861]: I0219 14:50:43.095936 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qqdlk-config-s8htt" Feb 19 14:50:43 crc kubenswrapper[4861]: I0219 14:50:43.641996 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-qqdlk-config-s8htt"] Feb 19 14:50:43 crc kubenswrapper[4861]: I0219 14:50:43.654631 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-qqdlk-config-s8htt"] Feb 19 14:50:43 crc kubenswrapper[4861]: I0219 14:50:43.977586 4861 scope.go:117] "RemoveContainer" containerID="502a4549a2baa2b1572925937a0eb20e1bd1517278b2629c4572ef8e5d2f113d" Feb 19 14:50:43 crc kubenswrapper[4861]: E0219 14:50:43.977934 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:50:43 crc kubenswrapper[4861]: I0219 14:50:43.998403 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa5c4d4e-cd06-485d-ad9b-e82e4a449214" path="/var/lib/kubelet/pods/fa5c4d4e-cd06-485d-ad9b-e82e4a449214/volumes" Feb 19 14:50:57 crc kubenswrapper[4861]: I0219 14:50:57.095520 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-tvz2m"] Feb 19 14:50:57 crc kubenswrapper[4861]: E0219 14:50:57.098298 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa5c4d4e-cd06-485d-ad9b-e82e4a449214" containerName="ovn-config" Feb 19 14:50:57 crc kubenswrapper[4861]: I0219 14:50:57.098501 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa5c4d4e-cd06-485d-ad9b-e82e4a449214" containerName="ovn-config" Feb 19 14:50:57 crc kubenswrapper[4861]: I0219 14:50:57.098840 4861 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="fa5c4d4e-cd06-485d-ad9b-e82e4a449214" containerName="ovn-config" Feb 19 14:50:57 crc kubenswrapper[4861]: I0219 14:50:57.100891 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-tvz2m" Feb 19 14:50:57 crc kubenswrapper[4861]: I0219 14:50:57.103251 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data" Feb 19 14:50:57 crc kubenswrapper[4861]: I0219 14:50:57.104487 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map" Feb 19 14:50:57 crc kubenswrapper[4861]: I0219 14:50:57.107648 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts" Feb 19 14:50:57 crc kubenswrapper[4861]: I0219 14:50:57.110356 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-tvz2m"] Feb 19 14:50:57 crc kubenswrapper[4861]: I0219 14:50:57.208036 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/43c139b2-075e-4659-af80-1a7e414a7d8c-hm-ports\") pod \"octavia-rsyslog-tvz2m\" (UID: \"43c139b2-075e-4659-af80-1a7e414a7d8c\") " pod="openstack/octavia-rsyslog-tvz2m" Feb 19 14:50:57 crc kubenswrapper[4861]: I0219 14:50:57.208117 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43c139b2-075e-4659-af80-1a7e414a7d8c-config-data\") pod \"octavia-rsyslog-tvz2m\" (UID: \"43c139b2-075e-4659-af80-1a7e414a7d8c\") " pod="openstack/octavia-rsyslog-tvz2m" Feb 19 14:50:57 crc kubenswrapper[4861]: I0219 14:50:57.208140 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/43c139b2-075e-4659-af80-1a7e414a7d8c-config-data-merged\") pod 
\"octavia-rsyslog-tvz2m\" (UID: \"43c139b2-075e-4659-af80-1a7e414a7d8c\") " pod="openstack/octavia-rsyslog-tvz2m" Feb 19 14:50:57 crc kubenswrapper[4861]: I0219 14:50:57.208460 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43c139b2-075e-4659-af80-1a7e414a7d8c-scripts\") pod \"octavia-rsyslog-tvz2m\" (UID: \"43c139b2-075e-4659-af80-1a7e414a7d8c\") " pod="openstack/octavia-rsyslog-tvz2m" Feb 19 14:50:57 crc kubenswrapper[4861]: I0219 14:50:57.310905 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43c139b2-075e-4659-af80-1a7e414a7d8c-scripts\") pod \"octavia-rsyslog-tvz2m\" (UID: \"43c139b2-075e-4659-af80-1a7e414a7d8c\") " pod="openstack/octavia-rsyslog-tvz2m" Feb 19 14:50:57 crc kubenswrapper[4861]: I0219 14:50:57.311110 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/43c139b2-075e-4659-af80-1a7e414a7d8c-hm-ports\") pod \"octavia-rsyslog-tvz2m\" (UID: \"43c139b2-075e-4659-af80-1a7e414a7d8c\") " pod="openstack/octavia-rsyslog-tvz2m" Feb 19 14:50:57 crc kubenswrapper[4861]: I0219 14:50:57.311230 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43c139b2-075e-4659-af80-1a7e414a7d8c-config-data\") pod \"octavia-rsyslog-tvz2m\" (UID: \"43c139b2-075e-4659-af80-1a7e414a7d8c\") " pod="openstack/octavia-rsyslog-tvz2m" Feb 19 14:50:57 crc kubenswrapper[4861]: I0219 14:50:57.311277 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/43c139b2-075e-4659-af80-1a7e414a7d8c-config-data-merged\") pod \"octavia-rsyslog-tvz2m\" (UID: \"43c139b2-075e-4659-af80-1a7e414a7d8c\") " pod="openstack/octavia-rsyslog-tvz2m" Feb 19 14:50:57 crc 
kubenswrapper[4861]: I0219 14:50:57.311912 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/43c139b2-075e-4659-af80-1a7e414a7d8c-config-data-merged\") pod \"octavia-rsyslog-tvz2m\" (UID: \"43c139b2-075e-4659-af80-1a7e414a7d8c\") " pod="openstack/octavia-rsyslog-tvz2m" Feb 19 14:50:57 crc kubenswrapper[4861]: I0219 14:50:57.311957 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/43c139b2-075e-4659-af80-1a7e414a7d8c-hm-ports\") pod \"octavia-rsyslog-tvz2m\" (UID: \"43c139b2-075e-4659-af80-1a7e414a7d8c\") " pod="openstack/octavia-rsyslog-tvz2m" Feb 19 14:50:57 crc kubenswrapper[4861]: I0219 14:50:57.316281 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43c139b2-075e-4659-af80-1a7e414a7d8c-scripts\") pod \"octavia-rsyslog-tvz2m\" (UID: \"43c139b2-075e-4659-af80-1a7e414a7d8c\") " pod="openstack/octavia-rsyslog-tvz2m" Feb 19 14:50:57 crc kubenswrapper[4861]: I0219 14:50:57.319765 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43c139b2-075e-4659-af80-1a7e414a7d8c-config-data\") pod \"octavia-rsyslog-tvz2m\" (UID: \"43c139b2-075e-4659-af80-1a7e414a7d8c\") " pod="openstack/octavia-rsyslog-tvz2m" Feb 19 14:50:57 crc kubenswrapper[4861]: I0219 14:50:57.428096 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-tvz2m" Feb 19 14:50:58 crc kubenswrapper[4861]: I0219 14:50:58.002685 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-8d4564f8f-dmhkw"] Feb 19 14:50:58 crc kubenswrapper[4861]: I0219 14:50:58.009345 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-8d4564f8f-dmhkw" Feb 19 14:50:58 crc kubenswrapper[4861]: I0219 14:50:58.018719 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Feb 19 14:50:58 crc kubenswrapper[4861]: I0219 14:50:58.020774 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-8d4564f8f-dmhkw"] Feb 19 14:50:58 crc kubenswrapper[4861]: I0219 14:50:58.105609 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-tvz2m"] Feb 19 14:50:58 crc kubenswrapper[4861]: W0219 14:50:58.108646 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43c139b2_075e_4659_af80_1a7e414a7d8c.slice/crio-0353e3d7bbb14a300993be2a79014ee083a80038d27e7aa34738fada067af04f WatchSource:0}: Error finding container 0353e3d7bbb14a300993be2a79014ee083a80038d27e7aa34738fada067af04f: Status 404 returned error can't find the container with id 0353e3d7bbb14a300993be2a79014ee083a80038d27e7aa34738fada067af04f Feb 19 14:50:58 crc kubenswrapper[4861]: I0219 14:50:58.127809 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/f8f07875-72e1-4916-8d74-acfdf2db27f6-amphora-image\") pod \"octavia-image-upload-8d4564f8f-dmhkw\" (UID: \"f8f07875-72e1-4916-8d74-acfdf2db27f6\") " pod="openstack/octavia-image-upload-8d4564f8f-dmhkw" Feb 19 14:50:58 crc kubenswrapper[4861]: I0219 14:50:58.128222 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f8f07875-72e1-4916-8d74-acfdf2db27f6-httpd-config\") pod \"octavia-image-upload-8d4564f8f-dmhkw\" (UID: \"f8f07875-72e1-4916-8d74-acfdf2db27f6\") " pod="openstack/octavia-image-upload-8d4564f8f-dmhkw" Feb 19 14:50:58 crc kubenswrapper[4861]: I0219 
14:50:58.183064 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-tvz2m"] Feb 19 14:50:58 crc kubenswrapper[4861]: I0219 14:50:58.229804 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f8f07875-72e1-4916-8d74-acfdf2db27f6-httpd-config\") pod \"octavia-image-upload-8d4564f8f-dmhkw\" (UID: \"f8f07875-72e1-4916-8d74-acfdf2db27f6\") " pod="openstack/octavia-image-upload-8d4564f8f-dmhkw" Feb 19 14:50:58 crc kubenswrapper[4861]: I0219 14:50:58.230001 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/f8f07875-72e1-4916-8d74-acfdf2db27f6-amphora-image\") pod \"octavia-image-upload-8d4564f8f-dmhkw\" (UID: \"f8f07875-72e1-4916-8d74-acfdf2db27f6\") " pod="openstack/octavia-image-upload-8d4564f8f-dmhkw" Feb 19 14:50:58 crc kubenswrapper[4861]: I0219 14:50:58.230549 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/f8f07875-72e1-4916-8d74-acfdf2db27f6-amphora-image\") pod \"octavia-image-upload-8d4564f8f-dmhkw\" (UID: \"f8f07875-72e1-4916-8d74-acfdf2db27f6\") " pod="openstack/octavia-image-upload-8d4564f8f-dmhkw" Feb 19 14:50:58 crc kubenswrapper[4861]: I0219 14:50:58.237021 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f8f07875-72e1-4916-8d74-acfdf2db27f6-httpd-config\") pod \"octavia-image-upload-8d4564f8f-dmhkw\" (UID: \"f8f07875-72e1-4916-8d74-acfdf2db27f6\") " pod="openstack/octavia-image-upload-8d4564f8f-dmhkw" Feb 19 14:50:58 crc kubenswrapper[4861]: I0219 14:50:58.296014 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-tvz2m" 
event={"ID":"43c139b2-075e-4659-af80-1a7e414a7d8c","Type":"ContainerStarted","Data":"0353e3d7bbb14a300993be2a79014ee083a80038d27e7aa34738fada067af04f"} Feb 19 14:50:58 crc kubenswrapper[4861]: I0219 14:50:58.345300 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-8d4564f8f-dmhkw" Feb 19 14:50:58 crc kubenswrapper[4861]: I0219 14:50:58.868240 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-8d4564f8f-dmhkw"] Feb 19 14:50:58 crc kubenswrapper[4861]: W0219 14:50:58.870347 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8f07875_72e1_4916_8d74_acfdf2db27f6.slice/crio-e30caa7ef39904c7941d06a7cf19862d1f577b65a9e0a6d9e891e29cdb4ac185 WatchSource:0}: Error finding container e30caa7ef39904c7941d06a7cf19862d1f577b65a9e0a6d9e891e29cdb4ac185: Status 404 returned error can't find the container with id e30caa7ef39904c7941d06a7cf19862d1f577b65a9e0a6d9e891e29cdb4ac185 Feb 19 14:50:58 crc kubenswrapper[4861]: I0219 14:50:58.982537 4861 scope.go:117] "RemoveContainer" containerID="502a4549a2baa2b1572925937a0eb20e1bd1517278b2629c4572ef8e5d2f113d" Feb 19 14:50:58 crc kubenswrapper[4861]: E0219 14:50:58.983192 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:50:59 crc kubenswrapper[4861]: I0219 14:50:59.047465 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-854dbc447d-kqb5w"] Feb 19 14:50:59 crc kubenswrapper[4861]: I0219 14:50:59.050184 4861 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/octavia-api-854dbc447d-kqb5w" Feb 19 14:50:59 crc kubenswrapper[4861]: I0219 14:50:59.050723 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-854dbc447d-kqb5w"] Feb 19 14:50:59 crc kubenswrapper[4861]: I0219 14:50:59.052913 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-octavia-internal-svc" Feb 19 14:50:59 crc kubenswrapper[4861]: I0219 14:50:59.059080 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-octavia-public-svc" Feb 19 14:50:59 crc kubenswrapper[4861]: I0219 14:50:59.152672 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d5b4349-480d-4409-a53a-b7a41ed25ea6-ovndb-tls-certs\") pod \"octavia-api-854dbc447d-kqb5w\" (UID: \"1d5b4349-480d-4409-a53a-b7a41ed25ea6\") " pod="openstack/octavia-api-854dbc447d-kqb5w" Feb 19 14:50:59 crc kubenswrapper[4861]: I0219 14:50:59.152978 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1d5b4349-480d-4409-a53a-b7a41ed25ea6-config-data-merged\") pod \"octavia-api-854dbc447d-kqb5w\" (UID: \"1d5b4349-480d-4409-a53a-b7a41ed25ea6\") " pod="openstack/octavia-api-854dbc447d-kqb5w" Feb 19 14:50:59 crc kubenswrapper[4861]: I0219 14:50:59.153022 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d5b4349-480d-4409-a53a-b7a41ed25ea6-scripts\") pod \"octavia-api-854dbc447d-kqb5w\" (UID: \"1d5b4349-480d-4409-a53a-b7a41ed25ea6\") " pod="openstack/octavia-api-854dbc447d-kqb5w" Feb 19 14:50:59 crc kubenswrapper[4861]: I0219 14:50:59.153075 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1d5b4349-480d-4409-a53a-b7a41ed25ea6-public-tls-certs\") pod \"octavia-api-854dbc447d-kqb5w\" (UID: \"1d5b4349-480d-4409-a53a-b7a41ed25ea6\") " pod="openstack/octavia-api-854dbc447d-kqb5w" Feb 19 14:50:59 crc kubenswrapper[4861]: I0219 14:50:59.153103 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/1d5b4349-480d-4409-a53a-b7a41ed25ea6-octavia-run\") pod \"octavia-api-854dbc447d-kqb5w\" (UID: \"1d5b4349-480d-4409-a53a-b7a41ed25ea6\") " pod="openstack/octavia-api-854dbc447d-kqb5w" Feb 19 14:50:59 crc kubenswrapper[4861]: I0219 14:50:59.153127 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d5b4349-480d-4409-a53a-b7a41ed25ea6-internal-tls-certs\") pod \"octavia-api-854dbc447d-kqb5w\" (UID: \"1d5b4349-480d-4409-a53a-b7a41ed25ea6\") " pod="openstack/octavia-api-854dbc447d-kqb5w" Feb 19 14:50:59 crc kubenswrapper[4861]: I0219 14:50:59.153153 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d5b4349-480d-4409-a53a-b7a41ed25ea6-config-data\") pod \"octavia-api-854dbc447d-kqb5w\" (UID: \"1d5b4349-480d-4409-a53a-b7a41ed25ea6\") " pod="openstack/octavia-api-854dbc447d-kqb5w" Feb 19 14:50:59 crc kubenswrapper[4861]: I0219 14:50:59.153181 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d5b4349-480d-4409-a53a-b7a41ed25ea6-combined-ca-bundle\") pod \"octavia-api-854dbc447d-kqb5w\" (UID: \"1d5b4349-480d-4409-a53a-b7a41ed25ea6\") " pod="openstack/octavia-api-854dbc447d-kqb5w" Feb 19 14:50:59 crc kubenswrapper[4861]: I0219 14:50:59.254512 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/1d5b4349-480d-4409-a53a-b7a41ed25ea6-scripts\") pod \"octavia-api-854dbc447d-kqb5w\" (UID: \"1d5b4349-480d-4409-a53a-b7a41ed25ea6\") " pod="openstack/octavia-api-854dbc447d-kqb5w" Feb 19 14:50:59 crc kubenswrapper[4861]: I0219 14:50:59.254601 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d5b4349-480d-4409-a53a-b7a41ed25ea6-public-tls-certs\") pod \"octavia-api-854dbc447d-kqb5w\" (UID: \"1d5b4349-480d-4409-a53a-b7a41ed25ea6\") " pod="openstack/octavia-api-854dbc447d-kqb5w" Feb 19 14:50:59 crc kubenswrapper[4861]: I0219 14:50:59.254631 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/1d5b4349-480d-4409-a53a-b7a41ed25ea6-octavia-run\") pod \"octavia-api-854dbc447d-kqb5w\" (UID: \"1d5b4349-480d-4409-a53a-b7a41ed25ea6\") " pod="openstack/octavia-api-854dbc447d-kqb5w" Feb 19 14:50:59 crc kubenswrapper[4861]: I0219 14:50:59.254653 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d5b4349-480d-4409-a53a-b7a41ed25ea6-internal-tls-certs\") pod \"octavia-api-854dbc447d-kqb5w\" (UID: \"1d5b4349-480d-4409-a53a-b7a41ed25ea6\") " pod="openstack/octavia-api-854dbc447d-kqb5w" Feb 19 14:50:59 crc kubenswrapper[4861]: I0219 14:50:59.254679 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d5b4349-480d-4409-a53a-b7a41ed25ea6-config-data\") pod \"octavia-api-854dbc447d-kqb5w\" (UID: \"1d5b4349-480d-4409-a53a-b7a41ed25ea6\") " pod="openstack/octavia-api-854dbc447d-kqb5w" Feb 19 14:50:59 crc kubenswrapper[4861]: I0219 14:50:59.254708 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1d5b4349-480d-4409-a53a-b7a41ed25ea6-combined-ca-bundle\") pod \"octavia-api-854dbc447d-kqb5w\" (UID: \"1d5b4349-480d-4409-a53a-b7a41ed25ea6\") " pod="openstack/octavia-api-854dbc447d-kqb5w" Feb 19 14:50:59 crc kubenswrapper[4861]: I0219 14:50:59.254735 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d5b4349-480d-4409-a53a-b7a41ed25ea6-ovndb-tls-certs\") pod \"octavia-api-854dbc447d-kqb5w\" (UID: \"1d5b4349-480d-4409-a53a-b7a41ed25ea6\") " pod="openstack/octavia-api-854dbc447d-kqb5w" Feb 19 14:50:59 crc kubenswrapper[4861]: I0219 14:50:59.254778 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1d5b4349-480d-4409-a53a-b7a41ed25ea6-config-data-merged\") pod \"octavia-api-854dbc447d-kqb5w\" (UID: \"1d5b4349-480d-4409-a53a-b7a41ed25ea6\") " pod="openstack/octavia-api-854dbc447d-kqb5w" Feb 19 14:50:59 crc kubenswrapper[4861]: I0219 14:50:59.255178 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1d5b4349-480d-4409-a53a-b7a41ed25ea6-config-data-merged\") pod \"octavia-api-854dbc447d-kqb5w\" (UID: \"1d5b4349-480d-4409-a53a-b7a41ed25ea6\") " pod="openstack/octavia-api-854dbc447d-kqb5w" Feb 19 14:50:59 crc kubenswrapper[4861]: I0219 14:50:59.255701 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/1d5b4349-480d-4409-a53a-b7a41ed25ea6-octavia-run\") pod \"octavia-api-854dbc447d-kqb5w\" (UID: \"1d5b4349-480d-4409-a53a-b7a41ed25ea6\") " pod="openstack/octavia-api-854dbc447d-kqb5w" Feb 19 14:50:59 crc kubenswrapper[4861]: I0219 14:50:59.260377 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d5b4349-480d-4409-a53a-b7a41ed25ea6-scripts\") pod 
\"octavia-api-854dbc447d-kqb5w\" (UID: \"1d5b4349-480d-4409-a53a-b7a41ed25ea6\") " pod="openstack/octavia-api-854dbc447d-kqb5w" Feb 19 14:50:59 crc kubenswrapper[4861]: I0219 14:50:59.260427 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d5b4349-480d-4409-a53a-b7a41ed25ea6-public-tls-certs\") pod \"octavia-api-854dbc447d-kqb5w\" (UID: \"1d5b4349-480d-4409-a53a-b7a41ed25ea6\") " pod="openstack/octavia-api-854dbc447d-kqb5w" Feb 19 14:50:59 crc kubenswrapper[4861]: I0219 14:50:59.260922 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d5b4349-480d-4409-a53a-b7a41ed25ea6-internal-tls-certs\") pod \"octavia-api-854dbc447d-kqb5w\" (UID: \"1d5b4349-480d-4409-a53a-b7a41ed25ea6\") " pod="openstack/octavia-api-854dbc447d-kqb5w" Feb 19 14:50:59 crc kubenswrapper[4861]: I0219 14:50:59.262105 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d5b4349-480d-4409-a53a-b7a41ed25ea6-ovndb-tls-certs\") pod \"octavia-api-854dbc447d-kqb5w\" (UID: \"1d5b4349-480d-4409-a53a-b7a41ed25ea6\") " pod="openstack/octavia-api-854dbc447d-kqb5w" Feb 19 14:50:59 crc kubenswrapper[4861]: I0219 14:50:59.264614 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d5b4349-480d-4409-a53a-b7a41ed25ea6-combined-ca-bundle\") pod \"octavia-api-854dbc447d-kqb5w\" (UID: \"1d5b4349-480d-4409-a53a-b7a41ed25ea6\") " pod="openstack/octavia-api-854dbc447d-kqb5w" Feb 19 14:50:59 crc kubenswrapper[4861]: I0219 14:50:59.265255 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d5b4349-480d-4409-a53a-b7a41ed25ea6-config-data\") pod \"octavia-api-854dbc447d-kqb5w\" (UID: \"1d5b4349-480d-4409-a53a-b7a41ed25ea6\") " 
pod="openstack/octavia-api-854dbc447d-kqb5w" Feb 19 14:50:59 crc kubenswrapper[4861]: I0219 14:50:59.303773 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-8d4564f8f-dmhkw" event={"ID":"f8f07875-72e1-4916-8d74-acfdf2db27f6","Type":"ContainerStarted","Data":"e30caa7ef39904c7941d06a7cf19862d1f577b65a9e0a6d9e891e29cdb4ac185"} Feb 19 14:50:59 crc kubenswrapper[4861]: I0219 14:50:59.377514 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-854dbc447d-kqb5w" Feb 19 14:51:00 crc kubenswrapper[4861]: I0219 14:51:00.369517 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-854dbc447d-kqb5w"] Feb 19 14:51:00 crc kubenswrapper[4861]: W0219 14:51:00.397065 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d5b4349_480d_4409_a53a_b7a41ed25ea6.slice/crio-396c9eb313a8e34da8011b9e46efe1acd0d89f34808f4bef2ffc9c6a2fb2ff45 WatchSource:0}: Error finding container 396c9eb313a8e34da8011b9e46efe1acd0d89f34808f4bef2ffc9c6a2fb2ff45: Status 404 returned error can't find the container with id 396c9eb313a8e34da8011b9e46efe1acd0d89f34808f4bef2ffc9c6a2fb2ff45 Feb 19 14:51:01 crc kubenswrapper[4861]: I0219 14:51:01.337652 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-tvz2m" event={"ID":"43c139b2-075e-4659-af80-1a7e414a7d8c","Type":"ContainerStarted","Data":"f415c2f5c7d35add0705e186966026afd1df29109267c9f543cdfc92d81346ab"} Feb 19 14:51:01 crc kubenswrapper[4861]: I0219 14:51:01.345680 4861 generic.go:334] "Generic (PLEG): container finished" podID="1d5b4349-480d-4409-a53a-b7a41ed25ea6" containerID="20fa77062b508880580bc539ee941f06d14ed5fef5b93be0e4409bd375884892" exitCode=0 Feb 19 14:51:01 crc kubenswrapper[4861]: I0219 14:51:01.345720 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-854dbc447d-kqb5w" 
event={"ID":"1d5b4349-480d-4409-a53a-b7a41ed25ea6","Type":"ContainerDied","Data":"20fa77062b508880580bc539ee941f06d14ed5fef5b93be0e4409bd375884892"} Feb 19 14:51:01 crc kubenswrapper[4861]: I0219 14:51:01.345741 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-854dbc447d-kqb5w" event={"ID":"1d5b4349-480d-4409-a53a-b7a41ed25ea6","Type":"ContainerStarted","Data":"396c9eb313a8e34da8011b9e46efe1acd0d89f34808f4bef2ffc9c6a2fb2ff45"} Feb 19 14:51:01 crc kubenswrapper[4861]: I0219 14:51:01.770234 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-776b749dcb-txl2h" Feb 19 14:51:02 crc kubenswrapper[4861]: I0219 14:51:02.363073 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-854dbc447d-kqb5w" event={"ID":"1d5b4349-480d-4409-a53a-b7a41ed25ea6","Type":"ContainerStarted","Data":"48a6bf448716c34da7959c69c93c9d00c006dba7f8ec9fdac3fb004b0ae27a1c"} Feb 19 14:51:02 crc kubenswrapper[4861]: I0219 14:51:02.363412 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-854dbc447d-kqb5w" event={"ID":"1d5b4349-480d-4409-a53a-b7a41ed25ea6","Type":"ContainerStarted","Data":"0e14cb0fd434f5f9cb32b5ad3ecdd9d80967318932a1610b1b85dc93457e8ad6"} Feb 19 14:51:02 crc kubenswrapper[4861]: I0219 14:51:02.405900 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-854dbc447d-kqb5w" podStartSLOduration=3.405880498 podStartE2EDuration="3.405880498s" podCreationTimestamp="2026-02-19 14:50:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:51:02.39816575 +0000 UTC m=+6077.059268978" watchObservedRunningTime="2026-02-19 14:51:02.405880498 +0000 UTC m=+6077.066983726" Feb 19 14:51:02 crc kubenswrapper[4861]: I0219 14:51:02.439180 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/octavia-api-776b749dcb-txl2h" Feb 19 14:51:03 crc kubenswrapper[4861]: I0219 14:51:03.386798 4861 generic.go:334] "Generic (PLEG): container finished" podID="43c139b2-075e-4659-af80-1a7e414a7d8c" containerID="f415c2f5c7d35add0705e186966026afd1df29109267c9f543cdfc92d81346ab" exitCode=0 Feb 19 14:51:03 crc kubenswrapper[4861]: I0219 14:51:03.386909 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-tvz2m" event={"ID":"43c139b2-075e-4659-af80-1a7e414a7d8c","Type":"ContainerDied","Data":"f415c2f5c7d35add0705e186966026afd1df29109267c9f543cdfc92d81346ab"} Feb 19 14:51:03 crc kubenswrapper[4861]: I0219 14:51:03.388323 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-854dbc447d-kqb5w" Feb 19 14:51:03 crc kubenswrapper[4861]: I0219 14:51:03.388358 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-854dbc447d-kqb5w" Feb 19 14:51:04 crc kubenswrapper[4861]: I0219 14:51:04.651080 4861 scope.go:117] "RemoveContainer" containerID="9a9b18f2e9b2cc8b6bc02079c3f112d9990fe7c96d3ae6637634cdb3b081a96a" Feb 19 14:51:04 crc kubenswrapper[4861]: I0219 14:51:04.684270 4861 scope.go:117] "RemoveContainer" containerID="a04930d74b6c70debda497083d7b3fdaef00f42573a6776006ac9193c67230f0" Feb 19 14:51:04 crc kubenswrapper[4861]: I0219 14:51:04.739715 4861 scope.go:117] "RemoveContainer" containerID="accdee46578cf84d1a22413a2517c1250dcb436efcfd89aa8e14fc77c9a58c8d" Feb 19 14:51:06 crc kubenswrapper[4861]: I0219 14:51:06.438816 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-tvz2m" event={"ID":"43c139b2-075e-4659-af80-1a7e414a7d8c","Type":"ContainerStarted","Data":"f2bd4eb191aac08b3397ba265d6f26553fad5f7d227634369dd2a2560d56d93d"} Feb 19 14:51:06 crc kubenswrapper[4861]: I0219 14:51:06.439672 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-tvz2m" Feb 19 14:51:06 crc 
kubenswrapper[4861]: I0219 14:51:06.465509 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-tvz2m" podStartSLOduration=1.994847778 podStartE2EDuration="9.465492994s" podCreationTimestamp="2026-02-19 14:50:57 +0000 UTC" firstStartedPulling="2026-02-19 14:50:58.113899531 +0000 UTC m=+6072.775002759" lastFinishedPulling="2026-02-19 14:51:05.584544747 +0000 UTC m=+6080.245647975" observedRunningTime="2026-02-19 14:51:06.462057031 +0000 UTC m=+6081.123160279" watchObservedRunningTime="2026-02-19 14:51:06.465492994 +0000 UTC m=+6081.126596212" Feb 19 14:51:11 crc kubenswrapper[4861]: I0219 14:51:11.095645 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-sync-wzzsh"] Feb 19 14:51:11 crc kubenswrapper[4861]: I0219 14:51:11.097642 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-wzzsh" Feb 19 14:51:11 crc kubenswrapper[4861]: I0219 14:51:11.100218 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-scripts" Feb 19 14:51:11 crc kubenswrapper[4861]: I0219 14:51:11.110811 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-wzzsh"] Feb 19 14:51:11 crc kubenswrapper[4861]: I0219 14:51:11.191068 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/b705df1a-4cae-4c89-af54-bbe87a267540-config-data-merged\") pod \"octavia-db-sync-wzzsh\" (UID: \"b705df1a-4cae-4c89-af54-bbe87a267540\") " pod="openstack/octavia-db-sync-wzzsh" Feb 19 14:51:11 crc kubenswrapper[4861]: I0219 14:51:11.191141 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b705df1a-4cae-4c89-af54-bbe87a267540-config-data\") pod \"octavia-db-sync-wzzsh\" (UID: \"b705df1a-4cae-4c89-af54-bbe87a267540\") " 
pod="openstack/octavia-db-sync-wzzsh" Feb 19 14:51:11 crc kubenswrapper[4861]: I0219 14:51:11.191282 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b705df1a-4cae-4c89-af54-bbe87a267540-combined-ca-bundle\") pod \"octavia-db-sync-wzzsh\" (UID: \"b705df1a-4cae-4c89-af54-bbe87a267540\") " pod="openstack/octavia-db-sync-wzzsh" Feb 19 14:51:11 crc kubenswrapper[4861]: I0219 14:51:11.191323 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b705df1a-4cae-4c89-af54-bbe87a267540-scripts\") pod \"octavia-db-sync-wzzsh\" (UID: \"b705df1a-4cae-4c89-af54-bbe87a267540\") " pod="openstack/octavia-db-sync-wzzsh" Feb 19 14:51:11 crc kubenswrapper[4861]: I0219 14:51:11.293750 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/b705df1a-4cae-4c89-af54-bbe87a267540-config-data-merged\") pod \"octavia-db-sync-wzzsh\" (UID: \"b705df1a-4cae-4c89-af54-bbe87a267540\") " pod="openstack/octavia-db-sync-wzzsh" Feb 19 14:51:11 crc kubenswrapper[4861]: I0219 14:51:11.294054 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b705df1a-4cae-4c89-af54-bbe87a267540-config-data\") pod \"octavia-db-sync-wzzsh\" (UID: \"b705df1a-4cae-4c89-af54-bbe87a267540\") " pod="openstack/octavia-db-sync-wzzsh" Feb 19 14:51:11 crc kubenswrapper[4861]: I0219 14:51:11.294290 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b705df1a-4cae-4c89-af54-bbe87a267540-combined-ca-bundle\") pod \"octavia-db-sync-wzzsh\" (UID: \"b705df1a-4cae-4c89-af54-bbe87a267540\") " pod="openstack/octavia-db-sync-wzzsh" Feb 19 14:51:11 crc kubenswrapper[4861]: I0219 
14:51:11.294410 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b705df1a-4cae-4c89-af54-bbe87a267540-scripts\") pod \"octavia-db-sync-wzzsh\" (UID: \"b705df1a-4cae-4c89-af54-bbe87a267540\") " pod="openstack/octavia-db-sync-wzzsh" Feb 19 14:51:11 crc kubenswrapper[4861]: I0219 14:51:11.294304 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/b705df1a-4cae-4c89-af54-bbe87a267540-config-data-merged\") pod \"octavia-db-sync-wzzsh\" (UID: \"b705df1a-4cae-4c89-af54-bbe87a267540\") " pod="openstack/octavia-db-sync-wzzsh" Feb 19 14:51:11 crc kubenswrapper[4861]: I0219 14:51:11.300688 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b705df1a-4cae-4c89-af54-bbe87a267540-combined-ca-bundle\") pod \"octavia-db-sync-wzzsh\" (UID: \"b705df1a-4cae-4c89-af54-bbe87a267540\") " pod="openstack/octavia-db-sync-wzzsh" Feb 19 14:51:11 crc kubenswrapper[4861]: I0219 14:51:11.301402 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b705df1a-4cae-4c89-af54-bbe87a267540-config-data\") pod \"octavia-db-sync-wzzsh\" (UID: \"b705df1a-4cae-4c89-af54-bbe87a267540\") " pod="openstack/octavia-db-sync-wzzsh" Feb 19 14:51:11 crc kubenswrapper[4861]: I0219 14:51:11.306038 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b705df1a-4cae-4c89-af54-bbe87a267540-scripts\") pod \"octavia-db-sync-wzzsh\" (UID: \"b705df1a-4cae-4c89-af54-bbe87a267540\") " pod="openstack/octavia-db-sync-wzzsh" Feb 19 14:51:11 crc kubenswrapper[4861]: I0219 14:51:11.411952 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-wzzsh" Feb 19 14:51:11 crc kubenswrapper[4861]: I0219 14:51:11.490237 4861 generic.go:334] "Generic (PLEG): container finished" podID="f8f07875-72e1-4916-8d74-acfdf2db27f6" containerID="989de08e628537c984db0a80f5a1b2b8dca3935a79e3ae5034b4d66e3b6c89ab" exitCode=0 Feb 19 14:51:11 crc kubenswrapper[4861]: I0219 14:51:11.490298 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-8d4564f8f-dmhkw" event={"ID":"f8f07875-72e1-4916-8d74-acfdf2db27f6","Type":"ContainerDied","Data":"989de08e628537c984db0a80f5a1b2b8dca3935a79e3ae5034b4d66e3b6c89ab"} Feb 19 14:51:11 crc kubenswrapper[4861]: W0219 14:51:11.890186 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb705df1a_4cae_4c89_af54_bbe87a267540.slice/crio-7bb4ef1ae9a585acca604a8b2a4af8ad2ecc8471f50b1bc88e7ed19c7a23383b WatchSource:0}: Error finding container 7bb4ef1ae9a585acca604a8b2a4af8ad2ecc8471f50b1bc88e7ed19c7a23383b: Status 404 returned error can't find the container with id 7bb4ef1ae9a585acca604a8b2a4af8ad2ecc8471f50b1bc88e7ed19c7a23383b Feb 19 14:51:11 crc kubenswrapper[4861]: I0219 14:51:11.893204 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-wzzsh"] Feb 19 14:51:11 crc kubenswrapper[4861]: I0219 14:51:11.978166 4861 scope.go:117] "RemoveContainer" containerID="502a4549a2baa2b1572925937a0eb20e1bd1517278b2629c4572ef8e5d2f113d" Feb 19 14:51:11 crc kubenswrapper[4861]: E0219 14:51:11.978436 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" 
podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:51:12 crc kubenswrapper[4861]: I0219 14:51:12.464715 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-tvz2m" Feb 19 14:51:12 crc kubenswrapper[4861]: I0219 14:51:12.502396 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-8d4564f8f-dmhkw" event={"ID":"f8f07875-72e1-4916-8d74-acfdf2db27f6","Type":"ContainerStarted","Data":"c2c148d28fbbba3b875454fd765ddddb61dd952076d9e7baf11f62951d10afd2"} Feb 19 14:51:12 crc kubenswrapper[4861]: I0219 14:51:12.505696 4861 generic.go:334] "Generic (PLEG): container finished" podID="b705df1a-4cae-4c89-af54-bbe87a267540" containerID="7f87fd4948c057b98d09513621d3f68e622f5994c440dc994fbbc20431cc84f8" exitCode=0 Feb 19 14:51:12 crc kubenswrapper[4861]: I0219 14:51:12.505759 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-wzzsh" event={"ID":"b705df1a-4cae-4c89-af54-bbe87a267540","Type":"ContainerDied","Data":"7f87fd4948c057b98d09513621d3f68e622f5994c440dc994fbbc20431cc84f8"} Feb 19 14:51:12 crc kubenswrapper[4861]: I0219 14:51:12.505830 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-wzzsh" event={"ID":"b705df1a-4cae-4c89-af54-bbe87a267540","Type":"ContainerStarted","Data":"7bb4ef1ae9a585acca604a8b2a4af8ad2ecc8471f50b1bc88e7ed19c7a23383b"} Feb 19 14:51:12 crc kubenswrapper[4861]: I0219 14:51:12.524243 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-8d4564f8f-dmhkw" podStartSLOduration=4.139240183 podStartE2EDuration="15.524223014s" podCreationTimestamp="2026-02-19 14:50:57 +0000 UTC" firstStartedPulling="2026-02-19 14:50:58.874629312 +0000 UTC m=+6073.535732580" lastFinishedPulling="2026-02-19 14:51:10.259612143 +0000 UTC m=+6084.920715411" observedRunningTime="2026-02-19 14:51:12.519108655 +0000 UTC m=+6087.180211883" watchObservedRunningTime="2026-02-19 
14:51:12.524223014 +0000 UTC m=+6087.185326242" Feb 19 14:51:13 crc kubenswrapper[4861]: I0219 14:51:13.519369 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-wzzsh" event={"ID":"b705df1a-4cae-4c89-af54-bbe87a267540","Type":"ContainerStarted","Data":"373327c0beeb948c919e1787a530692b4ac95115b80692d95934777df99a091d"} Feb 19 14:51:13 crc kubenswrapper[4861]: I0219 14:51:13.543335 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-sync-wzzsh" podStartSLOduration=2.543319509 podStartE2EDuration="2.543319509s" podCreationTimestamp="2026-02-19 14:51:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:51:13.541051387 +0000 UTC m=+6088.202154615" watchObservedRunningTime="2026-02-19 14:51:13.543319509 +0000 UTC m=+6088.204422737" Feb 19 14:51:17 crc kubenswrapper[4861]: I0219 14:51:17.567940 4861 generic.go:334] "Generic (PLEG): container finished" podID="b705df1a-4cae-4c89-af54-bbe87a267540" containerID="373327c0beeb948c919e1787a530692b4ac95115b80692d95934777df99a091d" exitCode=0 Feb 19 14:51:17 crc kubenswrapper[4861]: I0219 14:51:17.568032 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-wzzsh" event={"ID":"b705df1a-4cae-4c89-af54-bbe87a267540","Type":"ContainerDied","Data":"373327c0beeb948c919e1787a530692b4ac95115b80692d95934777df99a091d"} Feb 19 14:51:18 crc kubenswrapper[4861]: I0219 14:51:18.252095 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-854dbc447d-kqb5w" Feb 19 14:51:18 crc kubenswrapper[4861]: I0219 14:51:18.315238 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-854dbc447d-kqb5w" Feb 19 14:51:18 crc kubenswrapper[4861]: I0219 14:51:18.401363 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/octavia-api-776b749dcb-txl2h"] Feb 19 14:51:18 crc kubenswrapper[4861]: I0219 14:51:18.401654 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-api-776b749dcb-txl2h" podUID="88850841-3500-4518-a1c7-167bb9afe85b" containerName="octavia-api" containerID="cri-o://c2a842754d22d2dac2cc1ea3daf9dbba2f4a787a29811eb32098cb918ad44833" gracePeriod=30 Feb 19 14:51:18 crc kubenswrapper[4861]: I0219 14:51:18.402159 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-api-776b749dcb-txl2h" podUID="88850841-3500-4518-a1c7-167bb9afe85b" containerName="octavia-api-provider-agent" containerID="cri-o://4d2852c5c16cc7e3f66e1b8a05cd796ab2bb78784bda749d8e2132a040016458" gracePeriod=30 Feb 19 14:51:19 crc kubenswrapper[4861]: I0219 14:51:19.060283 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-wzzsh" Feb 19 14:51:19 crc kubenswrapper[4861]: I0219 14:51:19.162826 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/b705df1a-4cae-4c89-af54-bbe87a267540-config-data-merged\") pod \"b705df1a-4cae-4c89-af54-bbe87a267540\" (UID: \"b705df1a-4cae-4c89-af54-bbe87a267540\") " Feb 19 14:51:19 crc kubenswrapper[4861]: I0219 14:51:19.163144 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b705df1a-4cae-4c89-af54-bbe87a267540-config-data\") pod \"b705df1a-4cae-4c89-af54-bbe87a267540\" (UID: \"b705df1a-4cae-4c89-af54-bbe87a267540\") " Feb 19 14:51:19 crc kubenswrapper[4861]: I0219 14:51:19.163249 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b705df1a-4cae-4c89-af54-bbe87a267540-combined-ca-bundle\") pod \"b705df1a-4cae-4c89-af54-bbe87a267540\" (UID: 
\"b705df1a-4cae-4c89-af54-bbe87a267540\") " Feb 19 14:51:19 crc kubenswrapper[4861]: I0219 14:51:19.163496 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b705df1a-4cae-4c89-af54-bbe87a267540-scripts\") pod \"b705df1a-4cae-4c89-af54-bbe87a267540\" (UID: \"b705df1a-4cae-4c89-af54-bbe87a267540\") " Feb 19 14:51:19 crc kubenswrapper[4861]: I0219 14:51:19.169062 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b705df1a-4cae-4c89-af54-bbe87a267540-scripts" (OuterVolumeSpecName: "scripts") pod "b705df1a-4cae-4c89-af54-bbe87a267540" (UID: "b705df1a-4cae-4c89-af54-bbe87a267540"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:51:19 crc kubenswrapper[4861]: I0219 14:51:19.172194 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b705df1a-4cae-4c89-af54-bbe87a267540-config-data" (OuterVolumeSpecName: "config-data") pod "b705df1a-4cae-4c89-af54-bbe87a267540" (UID: "b705df1a-4cae-4c89-af54-bbe87a267540"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:51:19 crc kubenswrapper[4861]: I0219 14:51:19.188124 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b705df1a-4cae-4c89-af54-bbe87a267540-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "b705df1a-4cae-4c89-af54-bbe87a267540" (UID: "b705df1a-4cae-4c89-af54-bbe87a267540"). InnerVolumeSpecName "config-data-merged". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:51:19 crc kubenswrapper[4861]: I0219 14:51:19.206390 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b705df1a-4cae-4c89-af54-bbe87a267540-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b705df1a-4cae-4c89-af54-bbe87a267540" (UID: "b705df1a-4cae-4c89-af54-bbe87a267540"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:51:19 crc kubenswrapper[4861]: I0219 14:51:19.265661 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/b705df1a-4cae-4c89-af54-bbe87a267540-config-data-merged\") on node \"crc\" DevicePath \"\"" Feb 19 14:51:19 crc kubenswrapper[4861]: I0219 14:51:19.266242 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b705df1a-4cae-4c89-af54-bbe87a267540-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:51:19 crc kubenswrapper[4861]: I0219 14:51:19.266301 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b705df1a-4cae-4c89-af54-bbe87a267540-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:51:19 crc kubenswrapper[4861]: I0219 14:51:19.266661 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b705df1a-4cae-4c89-af54-bbe87a267540-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:51:19 crc kubenswrapper[4861]: I0219 14:51:19.589764 4861 generic.go:334] "Generic (PLEG): container finished" podID="88850841-3500-4518-a1c7-167bb9afe85b" containerID="4d2852c5c16cc7e3f66e1b8a05cd796ab2bb78784bda749d8e2132a040016458" exitCode=0 Feb 19 14:51:19 crc kubenswrapper[4861]: I0219 14:51:19.589813 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-776b749dcb-txl2h" 
event={"ID":"88850841-3500-4518-a1c7-167bb9afe85b","Type":"ContainerDied","Data":"4d2852c5c16cc7e3f66e1b8a05cd796ab2bb78784bda749d8e2132a040016458"} Feb 19 14:51:19 crc kubenswrapper[4861]: I0219 14:51:19.591669 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-wzzsh" event={"ID":"b705df1a-4cae-4c89-af54-bbe87a267540","Type":"ContainerDied","Data":"7bb4ef1ae9a585acca604a8b2a4af8ad2ecc8471f50b1bc88e7ed19c7a23383b"} Feb 19 14:51:19 crc kubenswrapper[4861]: I0219 14:51:19.591688 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bb4ef1ae9a585acca604a8b2a4af8ad2ecc8471f50b1bc88e7ed19c7a23383b" Feb 19 14:51:19 crc kubenswrapper[4861]: I0219 14:51:19.591774 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-wzzsh" Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.496244 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-776b749dcb-txl2h" Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.541108 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/88850841-3500-4518-a1c7-167bb9afe85b-config-data-merged\") pod \"88850841-3500-4518-a1c7-167bb9afe85b\" (UID: \"88850841-3500-4518-a1c7-167bb9afe85b\") " Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.541199 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/88850841-3500-4518-a1c7-167bb9afe85b-ovndb-tls-certs\") pod \"88850841-3500-4518-a1c7-167bb9afe85b\" (UID: \"88850841-3500-4518-a1c7-167bb9afe85b\") " Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.541253 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/88850841-3500-4518-a1c7-167bb9afe85b-config-data\") pod \"88850841-3500-4518-a1c7-167bb9afe85b\" (UID: \"88850841-3500-4518-a1c7-167bb9afe85b\") " Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.541282 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88850841-3500-4518-a1c7-167bb9afe85b-combined-ca-bundle\") pod \"88850841-3500-4518-a1c7-167bb9afe85b\" (UID: \"88850841-3500-4518-a1c7-167bb9afe85b\") " Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.541359 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/88850841-3500-4518-a1c7-167bb9afe85b-octavia-run\") pod \"88850841-3500-4518-a1c7-167bb9afe85b\" (UID: \"88850841-3500-4518-a1c7-167bb9afe85b\") " Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.541435 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88850841-3500-4518-a1c7-167bb9afe85b-scripts\") pod \"88850841-3500-4518-a1c7-167bb9afe85b\" (UID: \"88850841-3500-4518-a1c7-167bb9afe85b\") " Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.541751 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88850841-3500-4518-a1c7-167bb9afe85b-octavia-run" (OuterVolumeSpecName: "octavia-run") pod "88850841-3500-4518-a1c7-167bb9afe85b" (UID: "88850841-3500-4518-a1c7-167bb9afe85b"). InnerVolumeSpecName "octavia-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.542031 4861 reconciler_common.go:293] "Volume detached for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/88850841-3500-4518-a1c7-167bb9afe85b-octavia-run\") on node \"crc\" DevicePath \"\"" Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.549833 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88850841-3500-4518-a1c7-167bb9afe85b-config-data" (OuterVolumeSpecName: "config-data") pod "88850841-3500-4518-a1c7-167bb9afe85b" (UID: "88850841-3500-4518-a1c7-167bb9afe85b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.549870 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88850841-3500-4518-a1c7-167bb9afe85b-scripts" (OuterVolumeSpecName: "scripts") pod "88850841-3500-4518-a1c7-167bb9afe85b" (UID: "88850841-3500-4518-a1c7-167bb9afe85b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.610800 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88850841-3500-4518-a1c7-167bb9afe85b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88850841-3500-4518-a1c7-167bb9afe85b" (UID: "88850841-3500-4518-a1c7-167bb9afe85b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.642762 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88850841-3500-4518-a1c7-167bb9afe85b-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "88850841-3500-4518-a1c7-167bb9afe85b" (UID: "88850841-3500-4518-a1c7-167bb9afe85b"). 
InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.645966 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88850841-3500-4518-a1c7-167bb9afe85b-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.646018 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/88850841-3500-4518-a1c7-167bb9afe85b-config-data-merged\") on node \"crc\" DevicePath \"\"" Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.646040 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88850841-3500-4518-a1c7-167bb9afe85b-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.646072 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88850841-3500-4518-a1c7-167bb9afe85b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.646618 4861 generic.go:334] "Generic (PLEG): container finished" podID="88850841-3500-4518-a1c7-167bb9afe85b" containerID="c2a842754d22d2dac2cc1ea3daf9dbba2f4a787a29811eb32098cb918ad44833" exitCode=0 Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.646672 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-776b749dcb-txl2h" event={"ID":"88850841-3500-4518-a1c7-167bb9afe85b","Type":"ContainerDied","Data":"c2a842754d22d2dac2cc1ea3daf9dbba2f4a787a29811eb32098cb918ad44833"} Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.646706 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-776b749dcb-txl2h" 
event={"ID":"88850841-3500-4518-a1c7-167bb9afe85b","Type":"ContainerDied","Data":"85de0d7659f8b2a28f557a035a1484f3e3ee5c425d3465aa95ad29ec5e2697d0"} Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.646744 4861 scope.go:117] "RemoveContainer" containerID="4d2852c5c16cc7e3f66e1b8a05cd796ab2bb78784bda749d8e2132a040016458" Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.647073 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-776b749dcb-txl2h" Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.752585 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88850841-3500-4518-a1c7-167bb9afe85b-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "88850841-3500-4518-a1c7-167bb9afe85b" (UID: "88850841-3500-4518-a1c7-167bb9afe85b"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.845379 4861 scope.go:117] "RemoveContainer" containerID="c2a842754d22d2dac2cc1ea3daf9dbba2f4a787a29811eb32098cb918ad44833" Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.851198 4861 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/88850841-3500-4518-a1c7-167bb9afe85b-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.875030 4861 scope.go:117] "RemoveContainer" containerID="62f4e63ace08a2041255ec3736aba8de3dde0bdda7695a074eaaa6cca4ecd073" Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.907939 4861 scope.go:117] "RemoveContainer" containerID="4d2852c5c16cc7e3f66e1b8a05cd796ab2bb78784bda749d8e2132a040016458" Feb 19 14:51:22 crc kubenswrapper[4861]: E0219 14:51:22.908757 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4d2852c5c16cc7e3f66e1b8a05cd796ab2bb78784bda749d8e2132a040016458\": container with ID starting with 4d2852c5c16cc7e3f66e1b8a05cd796ab2bb78784bda749d8e2132a040016458 not found: ID does not exist" containerID="4d2852c5c16cc7e3f66e1b8a05cd796ab2bb78784bda749d8e2132a040016458" Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.908804 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d2852c5c16cc7e3f66e1b8a05cd796ab2bb78784bda749d8e2132a040016458"} err="failed to get container status \"4d2852c5c16cc7e3f66e1b8a05cd796ab2bb78784bda749d8e2132a040016458\": rpc error: code = NotFound desc = could not find container \"4d2852c5c16cc7e3f66e1b8a05cd796ab2bb78784bda749d8e2132a040016458\": container with ID starting with 4d2852c5c16cc7e3f66e1b8a05cd796ab2bb78784bda749d8e2132a040016458 not found: ID does not exist" Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.908833 4861 scope.go:117] "RemoveContainer" containerID="c2a842754d22d2dac2cc1ea3daf9dbba2f4a787a29811eb32098cb918ad44833" Feb 19 14:51:22 crc kubenswrapper[4861]: E0219 14:51:22.909281 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2a842754d22d2dac2cc1ea3daf9dbba2f4a787a29811eb32098cb918ad44833\": container with ID starting with c2a842754d22d2dac2cc1ea3daf9dbba2f4a787a29811eb32098cb918ad44833 not found: ID does not exist" containerID="c2a842754d22d2dac2cc1ea3daf9dbba2f4a787a29811eb32098cb918ad44833" Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.909344 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2a842754d22d2dac2cc1ea3daf9dbba2f4a787a29811eb32098cb918ad44833"} err="failed to get container status \"c2a842754d22d2dac2cc1ea3daf9dbba2f4a787a29811eb32098cb918ad44833\": rpc error: code = NotFound desc = could not find container \"c2a842754d22d2dac2cc1ea3daf9dbba2f4a787a29811eb32098cb918ad44833\": container with ID 
starting with c2a842754d22d2dac2cc1ea3daf9dbba2f4a787a29811eb32098cb918ad44833 not found: ID does not exist" Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.909384 4861 scope.go:117] "RemoveContainer" containerID="62f4e63ace08a2041255ec3736aba8de3dde0bdda7695a074eaaa6cca4ecd073" Feb 19 14:51:22 crc kubenswrapper[4861]: E0219 14:51:22.909787 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62f4e63ace08a2041255ec3736aba8de3dde0bdda7695a074eaaa6cca4ecd073\": container with ID starting with 62f4e63ace08a2041255ec3736aba8de3dde0bdda7695a074eaaa6cca4ecd073 not found: ID does not exist" containerID="62f4e63ace08a2041255ec3736aba8de3dde0bdda7695a074eaaa6cca4ecd073" Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.909821 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62f4e63ace08a2041255ec3736aba8de3dde0bdda7695a074eaaa6cca4ecd073"} err="failed to get container status \"62f4e63ace08a2041255ec3736aba8de3dde0bdda7695a074eaaa6cca4ecd073\": rpc error: code = NotFound desc = could not find container \"62f4e63ace08a2041255ec3736aba8de3dde0bdda7695a074eaaa6cca4ecd073\": container with ID starting with 62f4e63ace08a2041255ec3736aba8de3dde0bdda7695a074eaaa6cca4ecd073 not found: ID does not exist" Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.987854 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-api-776b749dcb-txl2h"] Feb 19 14:51:22 crc kubenswrapper[4861]: I0219 14:51:22.998241 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-api-776b749dcb-txl2h"] Feb 19 14:51:23 crc kubenswrapper[4861]: I0219 14:51:23.998697 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88850841-3500-4518-a1c7-167bb9afe85b" path="/var/lib/kubelet/pods/88850841-3500-4518-a1c7-167bb9afe85b/volumes" Feb 19 14:51:25 crc kubenswrapper[4861]: I0219 14:51:25.982120 4861 
scope.go:117] "RemoveContainer" containerID="502a4549a2baa2b1572925937a0eb20e1bd1517278b2629c4572ef8e5d2f113d" Feb 19 14:51:25 crc kubenswrapper[4861]: E0219 14:51:25.982621 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:51:34 crc kubenswrapper[4861]: I0219 14:51:34.583187 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jw4nd"] Feb 19 14:51:34 crc kubenswrapper[4861]: E0219 14:51:34.584411 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b705df1a-4cae-4c89-af54-bbe87a267540" containerName="octavia-db-sync" Feb 19 14:51:34 crc kubenswrapper[4861]: I0219 14:51:34.584456 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b705df1a-4cae-4c89-af54-bbe87a267540" containerName="octavia-db-sync" Feb 19 14:51:34 crc kubenswrapper[4861]: E0219 14:51:34.584486 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88850841-3500-4518-a1c7-167bb9afe85b" containerName="octavia-api-provider-agent" Feb 19 14:51:34 crc kubenswrapper[4861]: I0219 14:51:34.584500 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="88850841-3500-4518-a1c7-167bb9afe85b" containerName="octavia-api-provider-agent" Feb 19 14:51:34 crc kubenswrapper[4861]: E0219 14:51:34.584522 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88850841-3500-4518-a1c7-167bb9afe85b" containerName="init" Feb 19 14:51:34 crc kubenswrapper[4861]: I0219 14:51:34.584534 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="88850841-3500-4518-a1c7-167bb9afe85b" containerName="init" Feb 19 14:51:34 crc 
kubenswrapper[4861]: E0219 14:51:34.584562 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88850841-3500-4518-a1c7-167bb9afe85b" containerName="octavia-api" Feb 19 14:51:34 crc kubenswrapper[4861]: I0219 14:51:34.584575 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="88850841-3500-4518-a1c7-167bb9afe85b" containerName="octavia-api" Feb 19 14:51:34 crc kubenswrapper[4861]: E0219 14:51:34.584617 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b705df1a-4cae-4c89-af54-bbe87a267540" containerName="init" Feb 19 14:51:34 crc kubenswrapper[4861]: I0219 14:51:34.584629 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b705df1a-4cae-4c89-af54-bbe87a267540" containerName="init" Feb 19 14:51:34 crc kubenswrapper[4861]: I0219 14:51:34.584940 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="88850841-3500-4518-a1c7-167bb9afe85b" containerName="octavia-api" Feb 19 14:51:34 crc kubenswrapper[4861]: I0219 14:51:34.584963 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="88850841-3500-4518-a1c7-167bb9afe85b" containerName="octavia-api-provider-agent" Feb 19 14:51:34 crc kubenswrapper[4861]: I0219 14:51:34.584997 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b705df1a-4cae-4c89-af54-bbe87a267540" containerName="octavia-db-sync" Feb 19 14:51:34 crc kubenswrapper[4861]: I0219 14:51:34.587462 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jw4nd" Feb 19 14:51:34 crc kubenswrapper[4861]: I0219 14:51:34.606508 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jw4nd"] Feb 19 14:51:34 crc kubenswrapper[4861]: I0219 14:51:34.727807 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a38f761-6e79-42f4-ad59-99374389cf4d-catalog-content\") pod \"redhat-marketplace-jw4nd\" (UID: \"5a38f761-6e79-42f4-ad59-99374389cf4d\") " pod="openshift-marketplace/redhat-marketplace-jw4nd" Feb 19 14:51:34 crc kubenswrapper[4861]: I0219 14:51:34.727862 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k87m\" (UniqueName: \"kubernetes.io/projected/5a38f761-6e79-42f4-ad59-99374389cf4d-kube-api-access-4k87m\") pod \"redhat-marketplace-jw4nd\" (UID: \"5a38f761-6e79-42f4-ad59-99374389cf4d\") " pod="openshift-marketplace/redhat-marketplace-jw4nd" Feb 19 14:51:34 crc kubenswrapper[4861]: I0219 14:51:34.728217 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a38f761-6e79-42f4-ad59-99374389cf4d-utilities\") pod \"redhat-marketplace-jw4nd\" (UID: \"5a38f761-6e79-42f4-ad59-99374389cf4d\") " pod="openshift-marketplace/redhat-marketplace-jw4nd" Feb 19 14:51:34 crc kubenswrapper[4861]: I0219 14:51:34.830409 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a38f761-6e79-42f4-ad59-99374389cf4d-catalog-content\") pod \"redhat-marketplace-jw4nd\" (UID: \"5a38f761-6e79-42f4-ad59-99374389cf4d\") " pod="openshift-marketplace/redhat-marketplace-jw4nd" Feb 19 14:51:34 crc kubenswrapper[4861]: I0219 14:51:34.830486 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4k87m\" (UniqueName: \"kubernetes.io/projected/5a38f761-6e79-42f4-ad59-99374389cf4d-kube-api-access-4k87m\") pod \"redhat-marketplace-jw4nd\" (UID: \"5a38f761-6e79-42f4-ad59-99374389cf4d\") " pod="openshift-marketplace/redhat-marketplace-jw4nd" Feb 19 14:51:34 crc kubenswrapper[4861]: I0219 14:51:34.830579 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a38f761-6e79-42f4-ad59-99374389cf4d-utilities\") pod \"redhat-marketplace-jw4nd\" (UID: \"5a38f761-6e79-42f4-ad59-99374389cf4d\") " pod="openshift-marketplace/redhat-marketplace-jw4nd" Feb 19 14:51:34 crc kubenswrapper[4861]: I0219 14:51:34.830987 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a38f761-6e79-42f4-ad59-99374389cf4d-utilities\") pod \"redhat-marketplace-jw4nd\" (UID: \"5a38f761-6e79-42f4-ad59-99374389cf4d\") " pod="openshift-marketplace/redhat-marketplace-jw4nd" Feb 19 14:51:34 crc kubenswrapper[4861]: I0219 14:51:34.830993 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a38f761-6e79-42f4-ad59-99374389cf4d-catalog-content\") pod \"redhat-marketplace-jw4nd\" (UID: \"5a38f761-6e79-42f4-ad59-99374389cf4d\") " pod="openshift-marketplace/redhat-marketplace-jw4nd" Feb 19 14:51:34 crc kubenswrapper[4861]: I0219 14:51:34.860291 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k87m\" (UniqueName: \"kubernetes.io/projected/5a38f761-6e79-42f4-ad59-99374389cf4d-kube-api-access-4k87m\") pod \"redhat-marketplace-jw4nd\" (UID: \"5a38f761-6e79-42f4-ad59-99374389cf4d\") " pod="openshift-marketplace/redhat-marketplace-jw4nd" Feb 19 14:51:34 crc kubenswrapper[4861]: I0219 14:51:34.911355 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jw4nd" Feb 19 14:51:35 crc kubenswrapper[4861]: I0219 14:51:35.463109 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jw4nd"] Feb 19 14:51:35 crc kubenswrapper[4861]: I0219 14:51:35.803081 4861 generic.go:334] "Generic (PLEG): container finished" podID="5a38f761-6e79-42f4-ad59-99374389cf4d" containerID="b790b9286ff768f4264dbb441e182ad394e58f9b4a330e06fbac7fa1f46835ad" exitCode=0 Feb 19 14:51:35 crc kubenswrapper[4861]: I0219 14:51:35.803287 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jw4nd" event={"ID":"5a38f761-6e79-42f4-ad59-99374389cf4d","Type":"ContainerDied","Data":"b790b9286ff768f4264dbb441e182ad394e58f9b4a330e06fbac7fa1f46835ad"} Feb 19 14:51:35 crc kubenswrapper[4861]: I0219 14:51:35.803357 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jw4nd" event={"ID":"5a38f761-6e79-42f4-ad59-99374389cf4d","Type":"ContainerStarted","Data":"d24ef8f2c16d186b330e8368b805c20442f042385373ad954759a2a44c75d505"} Feb 19 14:51:36 crc kubenswrapper[4861]: I0219 14:51:36.816293 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jw4nd" event={"ID":"5a38f761-6e79-42f4-ad59-99374389cf4d","Type":"ContainerStarted","Data":"149da6927376340fe5dfe12232fa3676512bf977e51a350177ae1813d97d55e1"} Feb 19 14:51:37 crc kubenswrapper[4861]: I0219 14:51:37.839191 4861 generic.go:334] "Generic (PLEG): container finished" podID="5a38f761-6e79-42f4-ad59-99374389cf4d" containerID="149da6927376340fe5dfe12232fa3676512bf977e51a350177ae1813d97d55e1" exitCode=0 Feb 19 14:51:37 crc kubenswrapper[4861]: I0219 14:51:37.839296 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jw4nd" 
event={"ID":"5a38f761-6e79-42f4-ad59-99374389cf4d","Type":"ContainerDied","Data":"149da6927376340fe5dfe12232fa3676512bf977e51a350177ae1813d97d55e1"} Feb 19 14:51:37 crc kubenswrapper[4861]: I0219 14:51:37.977584 4861 scope.go:117] "RemoveContainer" containerID="502a4549a2baa2b1572925937a0eb20e1bd1517278b2629c4572ef8e5d2f113d" Feb 19 14:51:37 crc kubenswrapper[4861]: E0219 14:51:37.977942 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:51:38 crc kubenswrapper[4861]: I0219 14:51:38.856811 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jw4nd" event={"ID":"5a38f761-6e79-42f4-ad59-99374389cf4d","Type":"ContainerStarted","Data":"8867cf0dc45dfe60470a9f4b788bd9596331770bdf1e0263fc31e604e7451a35"} Feb 19 14:51:38 crc kubenswrapper[4861]: I0219 14:51:38.902195 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jw4nd" podStartSLOduration=2.383068956 podStartE2EDuration="4.902167964s" podCreationTimestamp="2026-02-19 14:51:34 +0000 UTC" firstStartedPulling="2026-02-19 14:51:35.804735507 +0000 UTC m=+6110.465838735" lastFinishedPulling="2026-02-19 14:51:38.323834485 +0000 UTC m=+6112.984937743" observedRunningTime="2026-02-19 14:51:38.881111185 +0000 UTC m=+6113.542214473" watchObservedRunningTime="2026-02-19 14:51:38.902167964 +0000 UTC m=+6113.563271232" Feb 19 14:51:44 crc kubenswrapper[4861]: I0219 14:51:44.459088 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-8d4564f8f-dmhkw"] Feb 19 14:51:44 crc kubenswrapper[4861]: I0219 
14:51:44.460146 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-image-upload-8d4564f8f-dmhkw" podUID="f8f07875-72e1-4916-8d74-acfdf2db27f6" containerName="octavia-amphora-httpd" containerID="cri-o://c2c148d28fbbba3b875454fd765ddddb61dd952076d9e7baf11f62951d10afd2" gracePeriod=30 Feb 19 14:51:44 crc kubenswrapper[4861]: I0219 14:51:44.914250 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jw4nd" Feb 19 14:51:44 crc kubenswrapper[4861]: I0219 14:51:44.915545 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jw4nd" Feb 19 14:51:44 crc kubenswrapper[4861]: I0219 14:51:44.932601 4861 generic.go:334] "Generic (PLEG): container finished" podID="f8f07875-72e1-4916-8d74-acfdf2db27f6" containerID="c2c148d28fbbba3b875454fd765ddddb61dd952076d9e7baf11f62951d10afd2" exitCode=0 Feb 19 14:51:44 crc kubenswrapper[4861]: I0219 14:51:44.932667 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-8d4564f8f-dmhkw" event={"ID":"f8f07875-72e1-4916-8d74-acfdf2db27f6","Type":"ContainerDied","Data":"c2c148d28fbbba3b875454fd765ddddb61dd952076d9e7baf11f62951d10afd2"} Feb 19 14:51:44 crc kubenswrapper[4861]: I0219 14:51:44.975343 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jw4nd" Feb 19 14:51:45 crc kubenswrapper[4861]: I0219 14:51:45.089139 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-8d4564f8f-dmhkw" Feb 19 14:51:45 crc kubenswrapper[4861]: I0219 14:51:45.261673 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f8f07875-72e1-4916-8d74-acfdf2db27f6-httpd-config\") pod \"f8f07875-72e1-4916-8d74-acfdf2db27f6\" (UID: \"f8f07875-72e1-4916-8d74-acfdf2db27f6\") " Feb 19 14:51:45 crc kubenswrapper[4861]: I0219 14:51:45.261754 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/f8f07875-72e1-4916-8d74-acfdf2db27f6-amphora-image\") pod \"f8f07875-72e1-4916-8d74-acfdf2db27f6\" (UID: \"f8f07875-72e1-4916-8d74-acfdf2db27f6\") " Feb 19 14:51:45 crc kubenswrapper[4861]: I0219 14:51:45.286566 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8f07875-72e1-4916-8d74-acfdf2db27f6-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "f8f07875-72e1-4916-8d74-acfdf2db27f6" (UID: "f8f07875-72e1-4916-8d74-acfdf2db27f6"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:51:45 crc kubenswrapper[4861]: I0219 14:51:45.346487 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8f07875-72e1-4916-8d74-acfdf2db27f6-amphora-image" (OuterVolumeSpecName: "amphora-image") pod "f8f07875-72e1-4916-8d74-acfdf2db27f6" (UID: "f8f07875-72e1-4916-8d74-acfdf2db27f6"). InnerVolumeSpecName "amphora-image". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:51:45 crc kubenswrapper[4861]: I0219 14:51:45.363915 4861 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f8f07875-72e1-4916-8d74-acfdf2db27f6-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 19 14:51:45 crc kubenswrapper[4861]: I0219 14:51:45.363951 4861 reconciler_common.go:293] "Volume detached for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/f8f07875-72e1-4916-8d74-acfdf2db27f6-amphora-image\") on node \"crc\" DevicePath \"\"" Feb 19 14:51:45 crc kubenswrapper[4861]: I0219 14:51:45.945020 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-8d4564f8f-dmhkw" event={"ID":"f8f07875-72e1-4916-8d74-acfdf2db27f6","Type":"ContainerDied","Data":"e30caa7ef39904c7941d06a7cf19862d1f577b65a9e0a6d9e891e29cdb4ac185"} Feb 19 14:51:45 crc kubenswrapper[4861]: I0219 14:51:45.945071 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-8d4564f8f-dmhkw" Feb 19 14:51:45 crc kubenswrapper[4861]: I0219 14:51:45.945686 4861 scope.go:117] "RemoveContainer" containerID="c2c148d28fbbba3b875454fd765ddddb61dd952076d9e7baf11f62951d10afd2" Feb 19 14:51:46 crc kubenswrapper[4861]: I0219 14:51:46.000613 4861 scope.go:117] "RemoveContainer" containerID="989de08e628537c984db0a80f5a1b2b8dca3935a79e3ae5034b4d66e3b6c89ab" Feb 19 14:51:46 crc kubenswrapper[4861]: I0219 14:51:46.003195 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-8d4564f8f-dmhkw"] Feb 19 14:51:46 crc kubenswrapper[4861]: I0219 14:51:46.014385 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-image-upload-8d4564f8f-dmhkw"] Feb 19 14:51:46 crc kubenswrapper[4861]: I0219 14:51:46.028065 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jw4nd" Feb 19 14:51:46 crc kubenswrapper[4861]: I0219 14:51:46.090246 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jw4nd"] Feb 19 14:51:47 crc kubenswrapper[4861]: I0219 14:51:47.974103 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jw4nd" podUID="5a38f761-6e79-42f4-ad59-99374389cf4d" containerName="registry-server" containerID="cri-o://8867cf0dc45dfe60470a9f4b788bd9596331770bdf1e0263fc31e604e7451a35" gracePeriod=2 Feb 19 14:51:48 crc kubenswrapper[4861]: I0219 14:51:48.001389 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8f07875-72e1-4916-8d74-acfdf2db27f6" path="/var/lib/kubelet/pods/f8f07875-72e1-4916-8d74-acfdf2db27f6/volumes" Feb 19 14:51:48 crc kubenswrapper[4861]: I0219 14:51:48.502160 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jw4nd" Feb 19 14:51:48 crc kubenswrapper[4861]: I0219 14:51:48.634317 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a38f761-6e79-42f4-ad59-99374389cf4d-utilities\") pod \"5a38f761-6e79-42f4-ad59-99374389cf4d\" (UID: \"5a38f761-6e79-42f4-ad59-99374389cf4d\") " Feb 19 14:51:48 crc kubenswrapper[4861]: I0219 14:51:48.634376 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k87m\" (UniqueName: \"kubernetes.io/projected/5a38f761-6e79-42f4-ad59-99374389cf4d-kube-api-access-4k87m\") pod \"5a38f761-6e79-42f4-ad59-99374389cf4d\" (UID: \"5a38f761-6e79-42f4-ad59-99374389cf4d\") " Feb 19 14:51:48 crc kubenswrapper[4861]: I0219 14:51:48.634524 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a38f761-6e79-42f4-ad59-99374389cf4d-catalog-content\") pod \"5a38f761-6e79-42f4-ad59-99374389cf4d\" (UID: \"5a38f761-6e79-42f4-ad59-99374389cf4d\") " Feb 19 14:51:48 crc kubenswrapper[4861]: I0219 14:51:48.640915 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a38f761-6e79-42f4-ad59-99374389cf4d-utilities" (OuterVolumeSpecName: "utilities") pod "5a38f761-6e79-42f4-ad59-99374389cf4d" (UID: "5a38f761-6e79-42f4-ad59-99374389cf4d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:51:48 crc kubenswrapper[4861]: I0219 14:51:48.647822 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a38f761-6e79-42f4-ad59-99374389cf4d-kube-api-access-4k87m" (OuterVolumeSpecName: "kube-api-access-4k87m") pod "5a38f761-6e79-42f4-ad59-99374389cf4d" (UID: "5a38f761-6e79-42f4-ad59-99374389cf4d"). InnerVolumeSpecName "kube-api-access-4k87m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:51:48 crc kubenswrapper[4861]: I0219 14:51:48.655950 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a38f761-6e79-42f4-ad59-99374389cf4d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a38f761-6e79-42f4-ad59-99374389cf4d" (UID: "5a38f761-6e79-42f4-ad59-99374389cf4d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:51:48 crc kubenswrapper[4861]: I0219 14:51:48.736388 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a38f761-6e79-42f4-ad59-99374389cf4d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 14:51:48 crc kubenswrapper[4861]: I0219 14:51:48.736440 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a38f761-6e79-42f4-ad59-99374389cf4d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 14:51:48 crc kubenswrapper[4861]: I0219 14:51:48.736452 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k87m\" (UniqueName: \"kubernetes.io/projected/5a38f761-6e79-42f4-ad59-99374389cf4d-kube-api-access-4k87m\") on node \"crc\" DevicePath \"\"" Feb 19 14:51:48 crc kubenswrapper[4861]: I0219 14:51:48.995445 4861 generic.go:334] "Generic (PLEG): container finished" podID="5a38f761-6e79-42f4-ad59-99374389cf4d" containerID="8867cf0dc45dfe60470a9f4b788bd9596331770bdf1e0263fc31e604e7451a35" exitCode=0 Feb 19 14:51:48 crc kubenswrapper[4861]: I0219 14:51:48.995498 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jw4nd" event={"ID":"5a38f761-6e79-42f4-ad59-99374389cf4d","Type":"ContainerDied","Data":"8867cf0dc45dfe60470a9f4b788bd9596331770bdf1e0263fc31e604e7451a35"} Feb 19 14:51:48 crc kubenswrapper[4861]: I0219 14:51:48.995529 4861 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-jw4nd" event={"ID":"5a38f761-6e79-42f4-ad59-99374389cf4d","Type":"ContainerDied","Data":"d24ef8f2c16d186b330e8368b805c20442f042385373ad954759a2a44c75d505"} Feb 19 14:51:48 crc kubenswrapper[4861]: I0219 14:51:48.995555 4861 scope.go:117] "RemoveContainer" containerID="8867cf0dc45dfe60470a9f4b788bd9596331770bdf1e0263fc31e604e7451a35" Feb 19 14:51:48 crc kubenswrapper[4861]: I0219 14:51:48.995850 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jw4nd" Feb 19 14:51:49 crc kubenswrapper[4861]: I0219 14:51:49.025567 4861 scope.go:117] "RemoveContainer" containerID="149da6927376340fe5dfe12232fa3676512bf977e51a350177ae1813d97d55e1" Feb 19 14:51:49 crc kubenswrapper[4861]: I0219 14:51:49.059484 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jw4nd"] Feb 19 14:51:49 crc kubenswrapper[4861]: I0219 14:51:49.068761 4861 scope.go:117] "RemoveContainer" containerID="b790b9286ff768f4264dbb441e182ad394e58f9b4a330e06fbac7fa1f46835ad" Feb 19 14:51:49 crc kubenswrapper[4861]: I0219 14:51:49.078056 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jw4nd"] Feb 19 14:51:49 crc kubenswrapper[4861]: I0219 14:51:49.107738 4861 scope.go:117] "RemoveContainer" containerID="8867cf0dc45dfe60470a9f4b788bd9596331770bdf1e0263fc31e604e7451a35" Feb 19 14:51:49 crc kubenswrapper[4861]: E0219 14:51:49.108090 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8867cf0dc45dfe60470a9f4b788bd9596331770bdf1e0263fc31e604e7451a35\": container with ID starting with 8867cf0dc45dfe60470a9f4b788bd9596331770bdf1e0263fc31e604e7451a35 not found: ID does not exist" containerID="8867cf0dc45dfe60470a9f4b788bd9596331770bdf1e0263fc31e604e7451a35" Feb 19 14:51:49 crc kubenswrapper[4861]: I0219 14:51:49.108119 4861 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8867cf0dc45dfe60470a9f4b788bd9596331770bdf1e0263fc31e604e7451a35"} err="failed to get container status \"8867cf0dc45dfe60470a9f4b788bd9596331770bdf1e0263fc31e604e7451a35\": rpc error: code = NotFound desc = could not find container \"8867cf0dc45dfe60470a9f4b788bd9596331770bdf1e0263fc31e604e7451a35\": container with ID starting with 8867cf0dc45dfe60470a9f4b788bd9596331770bdf1e0263fc31e604e7451a35 not found: ID does not exist" Feb 19 14:51:49 crc kubenswrapper[4861]: I0219 14:51:49.108140 4861 scope.go:117] "RemoveContainer" containerID="149da6927376340fe5dfe12232fa3676512bf977e51a350177ae1813d97d55e1" Feb 19 14:51:49 crc kubenswrapper[4861]: E0219 14:51:49.108379 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"149da6927376340fe5dfe12232fa3676512bf977e51a350177ae1813d97d55e1\": container with ID starting with 149da6927376340fe5dfe12232fa3676512bf977e51a350177ae1813d97d55e1 not found: ID does not exist" containerID="149da6927376340fe5dfe12232fa3676512bf977e51a350177ae1813d97d55e1" Feb 19 14:51:49 crc kubenswrapper[4861]: I0219 14:51:49.108401 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"149da6927376340fe5dfe12232fa3676512bf977e51a350177ae1813d97d55e1"} err="failed to get container status \"149da6927376340fe5dfe12232fa3676512bf977e51a350177ae1813d97d55e1\": rpc error: code = NotFound desc = could not find container \"149da6927376340fe5dfe12232fa3676512bf977e51a350177ae1813d97d55e1\": container with ID starting with 149da6927376340fe5dfe12232fa3676512bf977e51a350177ae1813d97d55e1 not found: ID does not exist" Feb 19 14:51:49 crc kubenswrapper[4861]: I0219 14:51:49.108431 4861 scope.go:117] "RemoveContainer" containerID="b790b9286ff768f4264dbb441e182ad394e58f9b4a330e06fbac7fa1f46835ad" Feb 19 14:51:49 crc kubenswrapper[4861]: E0219 
14:51:49.108689 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b790b9286ff768f4264dbb441e182ad394e58f9b4a330e06fbac7fa1f46835ad\": container with ID starting with b790b9286ff768f4264dbb441e182ad394e58f9b4a330e06fbac7fa1f46835ad not found: ID does not exist" containerID="b790b9286ff768f4264dbb441e182ad394e58f9b4a330e06fbac7fa1f46835ad" Feb 19 14:51:49 crc kubenswrapper[4861]: I0219 14:51:49.108717 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b790b9286ff768f4264dbb441e182ad394e58f9b4a330e06fbac7fa1f46835ad"} err="failed to get container status \"b790b9286ff768f4264dbb441e182ad394e58f9b4a330e06fbac7fa1f46835ad\": rpc error: code = NotFound desc = could not find container \"b790b9286ff768f4264dbb441e182ad394e58f9b4a330e06fbac7fa1f46835ad\": container with ID starting with b790b9286ff768f4264dbb441e182ad394e58f9b4a330e06fbac7fa1f46835ad not found: ID does not exist" Feb 19 14:51:49 crc kubenswrapper[4861]: I0219 14:51:49.986623 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a38f761-6e79-42f4-ad59-99374389cf4d" path="/var/lib/kubelet/pods/5a38f761-6e79-42f4-ad59-99374389cf4d/volumes" Feb 19 14:51:52 crc kubenswrapper[4861]: I0219 14:51:52.977142 4861 scope.go:117] "RemoveContainer" containerID="502a4549a2baa2b1572925937a0eb20e1bd1517278b2629c4572ef8e5d2f113d" Feb 19 14:51:52 crc kubenswrapper[4861]: E0219 14:51:52.977767 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:51:53 crc kubenswrapper[4861]: I0219 14:51:53.183765 
4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-8d4564f8f-lctzt"] Feb 19 14:51:53 crc kubenswrapper[4861]: E0219 14:51:53.184250 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8f07875-72e1-4916-8d74-acfdf2db27f6" containerName="octavia-amphora-httpd" Feb 19 14:51:53 crc kubenswrapper[4861]: I0219 14:51:53.184270 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8f07875-72e1-4916-8d74-acfdf2db27f6" containerName="octavia-amphora-httpd" Feb 19 14:51:53 crc kubenswrapper[4861]: E0219 14:51:53.184285 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a38f761-6e79-42f4-ad59-99374389cf4d" containerName="extract-utilities" Feb 19 14:51:53 crc kubenswrapper[4861]: I0219 14:51:53.184292 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a38f761-6e79-42f4-ad59-99374389cf4d" containerName="extract-utilities" Feb 19 14:51:53 crc kubenswrapper[4861]: E0219 14:51:53.184307 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8f07875-72e1-4916-8d74-acfdf2db27f6" containerName="init" Feb 19 14:51:53 crc kubenswrapper[4861]: I0219 14:51:53.184313 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8f07875-72e1-4916-8d74-acfdf2db27f6" containerName="init" Feb 19 14:51:53 crc kubenswrapper[4861]: E0219 14:51:53.184330 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a38f761-6e79-42f4-ad59-99374389cf4d" containerName="extract-content" Feb 19 14:51:53 crc kubenswrapper[4861]: I0219 14:51:53.184337 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a38f761-6e79-42f4-ad59-99374389cf4d" containerName="extract-content" Feb 19 14:51:53 crc kubenswrapper[4861]: E0219 14:51:53.184345 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a38f761-6e79-42f4-ad59-99374389cf4d" containerName="registry-server" Feb 19 14:51:53 crc kubenswrapper[4861]: I0219 14:51:53.184350 4861 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="5a38f761-6e79-42f4-ad59-99374389cf4d" containerName="registry-server" Feb 19 14:51:53 crc kubenswrapper[4861]: I0219 14:51:53.184712 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8f07875-72e1-4916-8d74-acfdf2db27f6" containerName="octavia-amphora-httpd" Feb 19 14:51:53 crc kubenswrapper[4861]: I0219 14:51:53.184736 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a38f761-6e79-42f4-ad59-99374389cf4d" containerName="registry-server" Feb 19 14:51:53 crc kubenswrapper[4861]: I0219 14:51:53.185927 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-8d4564f8f-lctzt" Feb 19 14:51:53 crc kubenswrapper[4861]: I0219 14:51:53.190218 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Feb 19 14:51:53 crc kubenswrapper[4861]: I0219 14:51:53.199443 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-8d4564f8f-lctzt"] Feb 19 14:51:53 crc kubenswrapper[4861]: I0219 14:51:53.356445 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/9836a85c-3b94-4737-97bd-8e16c62a23fa-amphora-image\") pod \"octavia-image-upload-8d4564f8f-lctzt\" (UID: \"9836a85c-3b94-4737-97bd-8e16c62a23fa\") " pod="openstack/octavia-image-upload-8d4564f8f-lctzt" Feb 19 14:51:53 crc kubenswrapper[4861]: I0219 14:51:53.356892 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9836a85c-3b94-4737-97bd-8e16c62a23fa-httpd-config\") pod \"octavia-image-upload-8d4564f8f-lctzt\" (UID: \"9836a85c-3b94-4737-97bd-8e16c62a23fa\") " pod="openstack/octavia-image-upload-8d4564f8f-lctzt" Feb 19 14:51:53 crc kubenswrapper[4861]: I0219 14:51:53.459020 4861 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9836a85c-3b94-4737-97bd-8e16c62a23fa-httpd-config\") pod \"octavia-image-upload-8d4564f8f-lctzt\" (UID: \"9836a85c-3b94-4737-97bd-8e16c62a23fa\") " pod="openstack/octavia-image-upload-8d4564f8f-lctzt" Feb 19 14:51:53 crc kubenswrapper[4861]: I0219 14:51:53.459389 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/9836a85c-3b94-4737-97bd-8e16c62a23fa-amphora-image\") pod \"octavia-image-upload-8d4564f8f-lctzt\" (UID: \"9836a85c-3b94-4737-97bd-8e16c62a23fa\") " pod="openstack/octavia-image-upload-8d4564f8f-lctzt" Feb 19 14:51:53 crc kubenswrapper[4861]: I0219 14:51:53.459884 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/9836a85c-3b94-4737-97bd-8e16c62a23fa-amphora-image\") pod \"octavia-image-upload-8d4564f8f-lctzt\" (UID: \"9836a85c-3b94-4737-97bd-8e16c62a23fa\") " pod="openstack/octavia-image-upload-8d4564f8f-lctzt" Feb 19 14:51:53 crc kubenswrapper[4861]: I0219 14:51:53.464723 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9836a85c-3b94-4737-97bd-8e16c62a23fa-httpd-config\") pod \"octavia-image-upload-8d4564f8f-lctzt\" (UID: \"9836a85c-3b94-4737-97bd-8e16c62a23fa\") " pod="openstack/octavia-image-upload-8d4564f8f-lctzt" Feb 19 14:51:53 crc kubenswrapper[4861]: I0219 14:51:53.523952 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-8d4564f8f-lctzt" Feb 19 14:51:54 crc kubenswrapper[4861]: I0219 14:51:54.001882 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-8d4564f8f-lctzt"] Feb 19 14:51:54 crc kubenswrapper[4861]: I0219 14:51:54.051530 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-8d4564f8f-lctzt" event={"ID":"9836a85c-3b94-4737-97bd-8e16c62a23fa","Type":"ContainerStarted","Data":"87bc6660ae69229ae4f2c32ce0fdf387fe1aa7514bae98644f8cd0c67634a7a5"} Feb 19 14:51:55 crc kubenswrapper[4861]: I0219 14:51:55.066691 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-8d4564f8f-lctzt" event={"ID":"9836a85c-3b94-4737-97bd-8e16c62a23fa","Type":"ContainerStarted","Data":"1200380efb4bf8b946f09d7a5fe640e0a630869436680d8669b7c2ad31c77b52"} Feb 19 14:51:56 crc kubenswrapper[4861]: I0219 14:51:56.079724 4861 generic.go:334] "Generic (PLEG): container finished" podID="9836a85c-3b94-4737-97bd-8e16c62a23fa" containerID="1200380efb4bf8b946f09d7a5fe640e0a630869436680d8669b7c2ad31c77b52" exitCode=0 Feb 19 14:51:56 crc kubenswrapper[4861]: I0219 14:51:56.079766 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-8d4564f8f-lctzt" event={"ID":"9836a85c-3b94-4737-97bd-8e16c62a23fa","Type":"ContainerDied","Data":"1200380efb4bf8b946f09d7a5fe640e0a630869436680d8669b7c2ad31c77b52"} Feb 19 14:51:57 crc kubenswrapper[4861]: I0219 14:51:57.093908 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-8d4564f8f-lctzt" event={"ID":"9836a85c-3b94-4737-97bd-8e16c62a23fa","Type":"ContainerStarted","Data":"824d7a5ab5c105ced84fb97fb4e878948e5a8aae69ceec7595285502d9ce5dea"} Feb 19 14:51:57 crc kubenswrapper[4861]: I0219 14:51:57.115661 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-8d4564f8f-lctzt" 
podStartSLOduration=3.6631822 podStartE2EDuration="4.115626821s" podCreationTimestamp="2026-02-19 14:51:53 +0000 UTC" firstStartedPulling="2026-02-19 14:51:54.031093622 +0000 UTC m=+6128.692196870" lastFinishedPulling="2026-02-19 14:51:54.483538253 +0000 UTC m=+6129.144641491" observedRunningTime="2026-02-19 14:51:57.111240722 +0000 UTC m=+6131.772343990" watchObservedRunningTime="2026-02-19 14:51:57.115626821 +0000 UTC m=+6131.776730089" Feb 19 14:52:02 crc kubenswrapper[4861]: I0219 14:52:02.017349 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-4sdp8"] Feb 19 14:52:02 crc kubenswrapper[4861]: I0219 14:52:02.020044 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-4sdp8" Feb 19 14:52:02 crc kubenswrapper[4861]: I0219 14:52:02.025127 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts" Feb 19 14:52:02 crc kubenswrapper[4861]: I0219 14:52:02.025811 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data" Feb 19 14:52:02 crc kubenswrapper[4861]: I0219 14:52:02.026544 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret" Feb 19 14:52:02 crc kubenswrapper[4861]: I0219 14:52:02.047090 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-4sdp8"] Feb 19 14:52:02 crc kubenswrapper[4861]: I0219 14:52:02.193310 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/1fe20438-68f6-481f-9699-d752ab537d28-amphora-certs\") pod \"octavia-healthmanager-4sdp8\" (UID: \"1fe20438-68f6-481f-9699-d752ab537d28\") " pod="openstack/octavia-healthmanager-4sdp8" Feb 19 14:52:02 crc kubenswrapper[4861]: I0219 14:52:02.193363 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fe20438-68f6-481f-9699-d752ab537d28-combined-ca-bundle\") pod \"octavia-healthmanager-4sdp8\" (UID: \"1fe20438-68f6-481f-9699-d752ab537d28\") " pod="openstack/octavia-healthmanager-4sdp8" Feb 19 14:52:02 crc kubenswrapper[4861]: I0219 14:52:02.195536 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/1fe20438-68f6-481f-9699-d752ab537d28-hm-ports\") pod \"octavia-healthmanager-4sdp8\" (UID: \"1fe20438-68f6-481f-9699-d752ab537d28\") " pod="openstack/octavia-healthmanager-4sdp8" Feb 19 14:52:02 crc kubenswrapper[4861]: I0219 14:52:02.195696 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1fe20438-68f6-481f-9699-d752ab537d28-config-data-merged\") pod \"octavia-healthmanager-4sdp8\" (UID: \"1fe20438-68f6-481f-9699-d752ab537d28\") " pod="openstack/octavia-healthmanager-4sdp8" Feb 19 14:52:02 crc kubenswrapper[4861]: I0219 14:52:02.195801 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fe20438-68f6-481f-9699-d752ab537d28-config-data\") pod \"octavia-healthmanager-4sdp8\" (UID: \"1fe20438-68f6-481f-9699-d752ab537d28\") " pod="openstack/octavia-healthmanager-4sdp8" Feb 19 14:52:02 crc kubenswrapper[4861]: I0219 14:52:02.195867 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fe20438-68f6-481f-9699-d752ab537d28-scripts\") pod \"octavia-healthmanager-4sdp8\" (UID: \"1fe20438-68f6-481f-9699-d752ab537d28\") " pod="openstack/octavia-healthmanager-4sdp8" Feb 19 14:52:02 crc kubenswrapper[4861]: I0219 14:52:02.297667 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/1fe20438-68f6-481f-9699-d752ab537d28-hm-ports\") pod \"octavia-healthmanager-4sdp8\" (UID: \"1fe20438-68f6-481f-9699-d752ab537d28\") " pod="openstack/octavia-healthmanager-4sdp8" Feb 19 14:52:02 crc kubenswrapper[4861]: I0219 14:52:02.297810 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1fe20438-68f6-481f-9699-d752ab537d28-config-data-merged\") pod \"octavia-healthmanager-4sdp8\" (UID: \"1fe20438-68f6-481f-9699-d752ab537d28\") " pod="openstack/octavia-healthmanager-4sdp8" Feb 19 14:52:02 crc kubenswrapper[4861]: I0219 14:52:02.297903 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fe20438-68f6-481f-9699-d752ab537d28-config-data\") pod \"octavia-healthmanager-4sdp8\" (UID: \"1fe20438-68f6-481f-9699-d752ab537d28\") " pod="openstack/octavia-healthmanager-4sdp8" Feb 19 14:52:02 crc kubenswrapper[4861]: I0219 14:52:02.297954 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fe20438-68f6-481f-9699-d752ab537d28-scripts\") pod \"octavia-healthmanager-4sdp8\" (UID: \"1fe20438-68f6-481f-9699-d752ab537d28\") " pod="openstack/octavia-healthmanager-4sdp8" Feb 19 14:52:02 crc kubenswrapper[4861]: I0219 14:52:02.298118 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/1fe20438-68f6-481f-9699-d752ab537d28-amphora-certs\") pod \"octavia-healthmanager-4sdp8\" (UID: \"1fe20438-68f6-481f-9699-d752ab537d28\") " pod="openstack/octavia-healthmanager-4sdp8" Feb 19 14:52:02 crc kubenswrapper[4861]: I0219 14:52:02.298153 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/1fe20438-68f6-481f-9699-d752ab537d28-combined-ca-bundle\") pod \"octavia-healthmanager-4sdp8\" (UID: \"1fe20438-68f6-481f-9699-d752ab537d28\") " pod="openstack/octavia-healthmanager-4sdp8" Feb 19 14:52:02 crc kubenswrapper[4861]: I0219 14:52:02.299223 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1fe20438-68f6-481f-9699-d752ab537d28-config-data-merged\") pod \"octavia-healthmanager-4sdp8\" (UID: \"1fe20438-68f6-481f-9699-d752ab537d28\") " pod="openstack/octavia-healthmanager-4sdp8" Feb 19 14:52:02 crc kubenswrapper[4861]: I0219 14:52:02.299760 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/1fe20438-68f6-481f-9699-d752ab537d28-hm-ports\") pod \"octavia-healthmanager-4sdp8\" (UID: \"1fe20438-68f6-481f-9699-d752ab537d28\") " pod="openstack/octavia-healthmanager-4sdp8" Feb 19 14:52:02 crc kubenswrapper[4861]: I0219 14:52:02.306124 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fe20438-68f6-481f-9699-d752ab537d28-scripts\") pod \"octavia-healthmanager-4sdp8\" (UID: \"1fe20438-68f6-481f-9699-d752ab537d28\") " pod="openstack/octavia-healthmanager-4sdp8" Feb 19 14:52:02 crc kubenswrapper[4861]: I0219 14:52:02.311007 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/1fe20438-68f6-481f-9699-d752ab537d28-amphora-certs\") pod \"octavia-healthmanager-4sdp8\" (UID: \"1fe20438-68f6-481f-9699-d752ab537d28\") " pod="openstack/octavia-healthmanager-4sdp8" Feb 19 14:52:02 crc kubenswrapper[4861]: I0219 14:52:02.312360 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fe20438-68f6-481f-9699-d752ab537d28-config-data\") pod \"octavia-healthmanager-4sdp8\" (UID: 
\"1fe20438-68f6-481f-9699-d752ab537d28\") " pod="openstack/octavia-healthmanager-4sdp8" Feb 19 14:52:02 crc kubenswrapper[4861]: I0219 14:52:02.313490 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fe20438-68f6-481f-9699-d752ab537d28-combined-ca-bundle\") pod \"octavia-healthmanager-4sdp8\" (UID: \"1fe20438-68f6-481f-9699-d752ab537d28\") " pod="openstack/octavia-healthmanager-4sdp8" Feb 19 14:52:02 crc kubenswrapper[4861]: I0219 14:52:02.349834 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-4sdp8" Feb 19 14:52:03 crc kubenswrapper[4861]: I0219 14:52:03.017289 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-4sdp8"] Feb 19 14:52:03 crc kubenswrapper[4861]: I0219 14:52:03.169668 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-4sdp8" event={"ID":"1fe20438-68f6-481f-9699-d752ab537d28","Type":"ContainerStarted","Data":"680562b4e4738f03e1f53c3ba94c9088ebb4389d42daa5ebddbc99e321834950"} Feb 19 14:52:03 crc kubenswrapper[4861]: I0219 14:52:03.746365 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-v98v6"] Feb 19 14:52:03 crc kubenswrapper[4861]: I0219 14:52:03.748971 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-housekeeping-v98v6" Feb 19 14:52:03 crc kubenswrapper[4861]: I0219 14:52:03.751793 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data" Feb 19 14:52:03 crc kubenswrapper[4861]: I0219 14:52:03.752641 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts" Feb 19 14:52:03 crc kubenswrapper[4861]: I0219 14:52:03.761535 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-v98v6"] Feb 19 14:52:03 crc kubenswrapper[4861]: I0219 14:52:03.942734 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6730e03-5e73-4026-865a-c2ca618f8cd4-combined-ca-bundle\") pod \"octavia-housekeeping-v98v6\" (UID: \"f6730e03-5e73-4026-865a-c2ca618f8cd4\") " pod="openstack/octavia-housekeeping-v98v6" Feb 19 14:52:03 crc kubenswrapper[4861]: I0219 14:52:03.943144 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/f6730e03-5e73-4026-865a-c2ca618f8cd4-hm-ports\") pod \"octavia-housekeeping-v98v6\" (UID: \"f6730e03-5e73-4026-865a-c2ca618f8cd4\") " pod="openstack/octavia-housekeeping-v98v6" Feb 19 14:52:03 crc kubenswrapper[4861]: I0219 14:52:03.943194 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/f6730e03-5e73-4026-865a-c2ca618f8cd4-amphora-certs\") pod \"octavia-housekeeping-v98v6\" (UID: \"f6730e03-5e73-4026-865a-c2ca618f8cd4\") " pod="openstack/octavia-housekeeping-v98v6" Feb 19 14:52:03 crc kubenswrapper[4861]: I0219 14:52:03.943458 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/f6730e03-5e73-4026-865a-c2ca618f8cd4-config-data-merged\") pod \"octavia-housekeeping-v98v6\" (UID: \"f6730e03-5e73-4026-865a-c2ca618f8cd4\") " pod="openstack/octavia-housekeeping-v98v6" Feb 19 14:52:03 crc kubenswrapper[4861]: I0219 14:52:03.943512 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6730e03-5e73-4026-865a-c2ca618f8cd4-config-data\") pod \"octavia-housekeeping-v98v6\" (UID: \"f6730e03-5e73-4026-865a-c2ca618f8cd4\") " pod="openstack/octavia-housekeeping-v98v6" Feb 19 14:52:03 crc kubenswrapper[4861]: I0219 14:52:03.943686 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6730e03-5e73-4026-865a-c2ca618f8cd4-scripts\") pod \"octavia-housekeeping-v98v6\" (UID: \"f6730e03-5e73-4026-865a-c2ca618f8cd4\") " pod="openstack/octavia-housekeeping-v98v6" Feb 19 14:52:04 crc kubenswrapper[4861]: I0219 14:52:04.046366 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f6730e03-5e73-4026-865a-c2ca618f8cd4-config-data-merged\") pod \"octavia-housekeeping-v98v6\" (UID: \"f6730e03-5e73-4026-865a-c2ca618f8cd4\") " pod="openstack/octavia-housekeeping-v98v6" Feb 19 14:52:04 crc kubenswrapper[4861]: I0219 14:52:04.046448 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6730e03-5e73-4026-865a-c2ca618f8cd4-config-data\") pod \"octavia-housekeeping-v98v6\" (UID: \"f6730e03-5e73-4026-865a-c2ca618f8cd4\") " pod="openstack/octavia-housekeeping-v98v6" Feb 19 14:52:04 crc kubenswrapper[4861]: I0219 14:52:04.046541 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f6730e03-5e73-4026-865a-c2ca618f8cd4-scripts\") pod \"octavia-housekeeping-v98v6\" (UID: \"f6730e03-5e73-4026-865a-c2ca618f8cd4\") " pod="openstack/octavia-housekeeping-v98v6" Feb 19 14:52:04 crc kubenswrapper[4861]: I0219 14:52:04.046703 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6730e03-5e73-4026-865a-c2ca618f8cd4-combined-ca-bundle\") pod \"octavia-housekeeping-v98v6\" (UID: \"f6730e03-5e73-4026-865a-c2ca618f8cd4\") " pod="openstack/octavia-housekeeping-v98v6" Feb 19 14:52:04 crc kubenswrapper[4861]: I0219 14:52:04.046861 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f6730e03-5e73-4026-865a-c2ca618f8cd4-config-data-merged\") pod \"octavia-housekeeping-v98v6\" (UID: \"f6730e03-5e73-4026-865a-c2ca618f8cd4\") " pod="openstack/octavia-housekeeping-v98v6" Feb 19 14:52:04 crc kubenswrapper[4861]: I0219 14:52:04.047080 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/f6730e03-5e73-4026-865a-c2ca618f8cd4-hm-ports\") pod \"octavia-housekeeping-v98v6\" (UID: \"f6730e03-5e73-4026-865a-c2ca618f8cd4\") " pod="openstack/octavia-housekeeping-v98v6" Feb 19 14:52:04 crc kubenswrapper[4861]: I0219 14:52:04.048576 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/f6730e03-5e73-4026-865a-c2ca618f8cd4-hm-ports\") pod \"octavia-housekeeping-v98v6\" (UID: \"f6730e03-5e73-4026-865a-c2ca618f8cd4\") " pod="openstack/octavia-housekeeping-v98v6" Feb 19 14:52:04 crc kubenswrapper[4861]: I0219 14:52:04.048693 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/f6730e03-5e73-4026-865a-c2ca618f8cd4-amphora-certs\") pod \"octavia-housekeeping-v98v6\" 
(UID: \"f6730e03-5e73-4026-865a-c2ca618f8cd4\") " pod="openstack/octavia-housekeeping-v98v6" Feb 19 14:52:04 crc kubenswrapper[4861]: I0219 14:52:04.053283 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6730e03-5e73-4026-865a-c2ca618f8cd4-combined-ca-bundle\") pod \"octavia-housekeeping-v98v6\" (UID: \"f6730e03-5e73-4026-865a-c2ca618f8cd4\") " pod="openstack/octavia-housekeeping-v98v6" Feb 19 14:52:04 crc kubenswrapper[4861]: I0219 14:52:04.053329 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6730e03-5e73-4026-865a-c2ca618f8cd4-scripts\") pod \"octavia-housekeeping-v98v6\" (UID: \"f6730e03-5e73-4026-865a-c2ca618f8cd4\") " pod="openstack/octavia-housekeeping-v98v6" Feb 19 14:52:04 crc kubenswrapper[4861]: I0219 14:52:04.054436 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6730e03-5e73-4026-865a-c2ca618f8cd4-config-data\") pod \"octavia-housekeeping-v98v6\" (UID: \"f6730e03-5e73-4026-865a-c2ca618f8cd4\") " pod="openstack/octavia-housekeeping-v98v6" Feb 19 14:52:04 crc kubenswrapper[4861]: I0219 14:52:04.056176 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/f6730e03-5e73-4026-865a-c2ca618f8cd4-amphora-certs\") pod \"octavia-housekeeping-v98v6\" (UID: \"f6730e03-5e73-4026-865a-c2ca618f8cd4\") " pod="openstack/octavia-housekeeping-v98v6" Feb 19 14:52:04 crc kubenswrapper[4861]: I0219 14:52:04.077840 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-housekeeping-v98v6" Feb 19 14:52:04 crc kubenswrapper[4861]: I0219 14:52:04.192238 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-4sdp8" event={"ID":"1fe20438-68f6-481f-9699-d752ab537d28","Type":"ContainerStarted","Data":"700da916cf6c5a1148469f0c8ee3b2c7a83ae31b17c52ef050d3a14c08c49b1a"} Feb 19 14:52:04 crc kubenswrapper[4861]: I0219 14:52:04.735098 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-v98v6"] Feb 19 14:52:04 crc kubenswrapper[4861]: I0219 14:52:04.976905 4861 scope.go:117] "RemoveContainer" containerID="502a4549a2baa2b1572925937a0eb20e1bd1517278b2629c4572ef8e5d2f113d" Feb 19 14:52:04 crc kubenswrapper[4861]: E0219 14:52:04.977144 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:52:05 crc kubenswrapper[4861]: I0219 14:52:05.208167 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-v98v6" event={"ID":"f6730e03-5e73-4026-865a-c2ca618f8cd4","Type":"ContainerStarted","Data":"7e82b7a77e166aa37d833cde0da89ece88ce8ea89ad1f239a80f18bf3fe75c4d"} Feb 19 14:52:05 crc kubenswrapper[4861]: I0219 14:52:05.211150 4861 generic.go:334] "Generic (PLEG): container finished" podID="1fe20438-68f6-481f-9699-d752ab537d28" containerID="700da916cf6c5a1148469f0c8ee3b2c7a83ae31b17c52ef050d3a14c08c49b1a" exitCode=0 Feb 19 14:52:05 crc kubenswrapper[4861]: I0219 14:52:05.211187 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-4sdp8" 
event={"ID":"1fe20438-68f6-481f-9699-d752ab537d28","Type":"ContainerDied","Data":"700da916cf6c5a1148469f0c8ee3b2c7a83ae31b17c52ef050d3a14c08c49b1a"} Feb 19 14:52:05 crc kubenswrapper[4861]: I0219 14:52:05.564075 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-vjbx2"] Feb 19 14:52:05 crc kubenswrapper[4861]: I0219 14:52:05.566239 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-vjbx2" Feb 19 14:52:05 crc kubenswrapper[4861]: I0219 14:52:05.573004 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts" Feb 19 14:52:05 crc kubenswrapper[4861]: I0219 14:52:05.573182 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data" Feb 19 14:52:05 crc kubenswrapper[4861]: I0219 14:52:05.591104 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-vjbx2"] Feb 19 14:52:05 crc kubenswrapper[4861]: I0219 14:52:05.690164 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/cdcf2056-7111-48d8-a8b7-e5901babe337-amphora-certs\") pod \"octavia-worker-vjbx2\" (UID: \"cdcf2056-7111-48d8-a8b7-e5901babe337\") " pod="openstack/octavia-worker-vjbx2" Feb 19 14:52:05 crc kubenswrapper[4861]: I0219 14:52:05.690213 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/cdcf2056-7111-48d8-a8b7-e5901babe337-hm-ports\") pod \"octavia-worker-vjbx2\" (UID: \"cdcf2056-7111-48d8-a8b7-e5901babe337\") " pod="openstack/octavia-worker-vjbx2" Feb 19 14:52:05 crc kubenswrapper[4861]: I0219 14:52:05.690242 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdcf2056-7111-48d8-a8b7-e5901babe337-scripts\") 
pod \"octavia-worker-vjbx2\" (UID: \"cdcf2056-7111-48d8-a8b7-e5901babe337\") " pod="openstack/octavia-worker-vjbx2" Feb 19 14:52:05 crc kubenswrapper[4861]: I0219 14:52:05.690292 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/cdcf2056-7111-48d8-a8b7-e5901babe337-config-data-merged\") pod \"octavia-worker-vjbx2\" (UID: \"cdcf2056-7111-48d8-a8b7-e5901babe337\") " pod="openstack/octavia-worker-vjbx2" Feb 19 14:52:05 crc kubenswrapper[4861]: I0219 14:52:05.690308 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdcf2056-7111-48d8-a8b7-e5901babe337-combined-ca-bundle\") pod \"octavia-worker-vjbx2\" (UID: \"cdcf2056-7111-48d8-a8b7-e5901babe337\") " pod="openstack/octavia-worker-vjbx2" Feb 19 14:52:05 crc kubenswrapper[4861]: I0219 14:52:05.690337 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdcf2056-7111-48d8-a8b7-e5901babe337-config-data\") pod \"octavia-worker-vjbx2\" (UID: \"cdcf2056-7111-48d8-a8b7-e5901babe337\") " pod="openstack/octavia-worker-vjbx2" Feb 19 14:52:05 crc kubenswrapper[4861]: I0219 14:52:05.792811 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/cdcf2056-7111-48d8-a8b7-e5901babe337-config-data-merged\") pod \"octavia-worker-vjbx2\" (UID: \"cdcf2056-7111-48d8-a8b7-e5901babe337\") " pod="openstack/octavia-worker-vjbx2" Feb 19 14:52:05 crc kubenswrapper[4861]: I0219 14:52:05.792856 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdcf2056-7111-48d8-a8b7-e5901babe337-combined-ca-bundle\") pod \"octavia-worker-vjbx2\" (UID: 
\"cdcf2056-7111-48d8-a8b7-e5901babe337\") " pod="openstack/octavia-worker-vjbx2" Feb 19 14:52:05 crc kubenswrapper[4861]: I0219 14:52:05.792902 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdcf2056-7111-48d8-a8b7-e5901babe337-config-data\") pod \"octavia-worker-vjbx2\" (UID: \"cdcf2056-7111-48d8-a8b7-e5901babe337\") " pod="openstack/octavia-worker-vjbx2" Feb 19 14:52:05 crc kubenswrapper[4861]: I0219 14:52:05.793513 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/cdcf2056-7111-48d8-a8b7-e5901babe337-config-data-merged\") pod \"octavia-worker-vjbx2\" (UID: \"cdcf2056-7111-48d8-a8b7-e5901babe337\") " pod="openstack/octavia-worker-vjbx2" Feb 19 14:52:05 crc kubenswrapper[4861]: I0219 14:52:05.793788 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/cdcf2056-7111-48d8-a8b7-e5901babe337-amphora-certs\") pod \"octavia-worker-vjbx2\" (UID: \"cdcf2056-7111-48d8-a8b7-e5901babe337\") " pod="openstack/octavia-worker-vjbx2" Feb 19 14:52:05 crc kubenswrapper[4861]: I0219 14:52:05.793832 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/cdcf2056-7111-48d8-a8b7-e5901babe337-hm-ports\") pod \"octavia-worker-vjbx2\" (UID: \"cdcf2056-7111-48d8-a8b7-e5901babe337\") " pod="openstack/octavia-worker-vjbx2" Feb 19 14:52:05 crc kubenswrapper[4861]: I0219 14:52:05.793866 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdcf2056-7111-48d8-a8b7-e5901babe337-scripts\") pod \"octavia-worker-vjbx2\" (UID: \"cdcf2056-7111-48d8-a8b7-e5901babe337\") " pod="openstack/octavia-worker-vjbx2" Feb 19 14:52:05 crc kubenswrapper[4861]: I0219 14:52:05.795947 4861 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/cdcf2056-7111-48d8-a8b7-e5901babe337-hm-ports\") pod \"octavia-worker-vjbx2\" (UID: \"cdcf2056-7111-48d8-a8b7-e5901babe337\") " pod="openstack/octavia-worker-vjbx2" Feb 19 14:52:05 crc kubenswrapper[4861]: I0219 14:52:05.800034 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdcf2056-7111-48d8-a8b7-e5901babe337-combined-ca-bundle\") pod \"octavia-worker-vjbx2\" (UID: \"cdcf2056-7111-48d8-a8b7-e5901babe337\") " pod="openstack/octavia-worker-vjbx2" Feb 19 14:52:05 crc kubenswrapper[4861]: I0219 14:52:05.800397 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdcf2056-7111-48d8-a8b7-e5901babe337-scripts\") pod \"octavia-worker-vjbx2\" (UID: \"cdcf2056-7111-48d8-a8b7-e5901babe337\") " pod="openstack/octavia-worker-vjbx2" Feb 19 14:52:05 crc kubenswrapper[4861]: I0219 14:52:05.802924 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/cdcf2056-7111-48d8-a8b7-e5901babe337-amphora-certs\") pod \"octavia-worker-vjbx2\" (UID: \"cdcf2056-7111-48d8-a8b7-e5901babe337\") " pod="openstack/octavia-worker-vjbx2" Feb 19 14:52:05 crc kubenswrapper[4861]: I0219 14:52:05.805257 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdcf2056-7111-48d8-a8b7-e5901babe337-config-data\") pod \"octavia-worker-vjbx2\" (UID: \"cdcf2056-7111-48d8-a8b7-e5901babe337\") " pod="openstack/octavia-worker-vjbx2" Feb 19 14:52:05 crc kubenswrapper[4861]: I0219 14:52:05.896448 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-worker-vjbx2" Feb 19 14:52:06 crc kubenswrapper[4861]: I0219 14:52:06.222810 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-4sdp8" event={"ID":"1fe20438-68f6-481f-9699-d752ab537d28","Type":"ContainerStarted","Data":"e4febc9500bd71070d8e279d37991e42886b99d666de5ffe64fe4a90febc13ea"} Feb 19 14:52:06 crc kubenswrapper[4861]: I0219 14:52:06.223198 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-4sdp8" Feb 19 14:52:06 crc kubenswrapper[4861]: I0219 14:52:06.246438 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-4sdp8" podStartSLOduration=5.246405063 podStartE2EDuration="5.246405063s" podCreationTimestamp="2026-02-19 14:52:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:52:06.243752522 +0000 UTC m=+6140.904855750" watchObservedRunningTime="2026-02-19 14:52:06.246405063 +0000 UTC m=+6140.907508291" Feb 19 14:52:06 crc kubenswrapper[4861]: I0219 14:52:06.492694 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-vjbx2"] Feb 19 14:52:07 crc kubenswrapper[4861]: I0219 14:52:07.156100 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-4sdp8"] Feb 19 14:52:07 crc kubenswrapper[4861]: I0219 14:52:07.235464 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-vjbx2" event={"ID":"cdcf2056-7111-48d8-a8b7-e5901babe337","Type":"ContainerStarted","Data":"c878f8f3a0d2eb162d1ba95343252e2a04bf72ff7c8603c67012879ad8487313"} Feb 19 14:52:08 crc kubenswrapper[4861]: I0219 14:52:08.259881 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-v98v6" 
event={"ID":"f6730e03-5e73-4026-865a-c2ca618f8cd4","Type":"ContainerStarted","Data":"08380e79e04e214a4ded0d4a399f79f42b95393ffb15085e33d9d7dbbc2cb98b"} Feb 19 14:52:09 crc kubenswrapper[4861]: I0219 14:52:09.057052 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-57b5-account-create-update-rg69w"] Feb 19 14:52:09 crc kubenswrapper[4861]: I0219 14:52:09.075495 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-qm9t5"] Feb 19 14:52:09 crc kubenswrapper[4861]: I0219 14:52:09.086539 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-57b5-account-create-update-rg69w"] Feb 19 14:52:09 crc kubenswrapper[4861]: I0219 14:52:09.092461 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-qm9t5"] Feb 19 14:52:09 crc kubenswrapper[4861]: I0219 14:52:09.270235 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-vjbx2" event={"ID":"cdcf2056-7111-48d8-a8b7-e5901babe337","Type":"ContainerStarted","Data":"85e806d165da0ac335760443afb165ef614e78ff53bf24f8fcc69d3d7fe29d90"} Feb 19 14:52:09 crc kubenswrapper[4861]: I0219 14:52:09.273580 4861 generic.go:334] "Generic (PLEG): container finished" podID="f6730e03-5e73-4026-865a-c2ca618f8cd4" containerID="08380e79e04e214a4ded0d4a399f79f42b95393ffb15085e33d9d7dbbc2cb98b" exitCode=0 Feb 19 14:52:09 crc kubenswrapper[4861]: I0219 14:52:09.273622 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-v98v6" event={"ID":"f6730e03-5e73-4026-865a-c2ca618f8cd4","Type":"ContainerDied","Data":"08380e79e04e214a4ded0d4a399f79f42b95393ffb15085e33d9d7dbbc2cb98b"} Feb 19 14:52:09 crc kubenswrapper[4861]: I0219 14:52:09.998185 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e02f475-200d-492f-8ca6-b8848de71272" path="/var/lib/kubelet/pods/4e02f475-200d-492f-8ca6-b8848de71272/volumes" Feb 19 14:52:09 crc kubenswrapper[4861]: I0219 
14:52:09.999788 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3dabfb1-c198-4e76-a1d4-fe8f895e48bf" path="/var/lib/kubelet/pods/d3dabfb1-c198-4e76-a1d4-fe8f895e48bf/volumes" Feb 19 14:52:10 crc kubenswrapper[4861]: I0219 14:52:10.284632 4861 generic.go:334] "Generic (PLEG): container finished" podID="cdcf2056-7111-48d8-a8b7-e5901babe337" containerID="85e806d165da0ac335760443afb165ef614e78ff53bf24f8fcc69d3d7fe29d90" exitCode=0 Feb 19 14:52:10 crc kubenswrapper[4861]: I0219 14:52:10.284711 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-vjbx2" event={"ID":"cdcf2056-7111-48d8-a8b7-e5901babe337","Type":"ContainerDied","Data":"85e806d165da0ac335760443afb165ef614e78ff53bf24f8fcc69d3d7fe29d90"} Feb 19 14:52:10 crc kubenswrapper[4861]: I0219 14:52:10.288815 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-v98v6" event={"ID":"f6730e03-5e73-4026-865a-c2ca618f8cd4","Type":"ContainerStarted","Data":"2fa5f4dd2597aee3e91a181e33842f8b4126f82e8ecb60bc283418eac4e2806b"} Feb 19 14:52:10 crc kubenswrapper[4861]: I0219 14:52:10.288995 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-v98v6" Feb 19 14:52:10 crc kubenswrapper[4861]: I0219 14:52:10.343107 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-v98v6" podStartSLOduration=5.507126838 podStartE2EDuration="7.343084139s" podCreationTimestamp="2026-02-19 14:52:03 +0000 UTC" firstStartedPulling="2026-02-19 14:52:04.708141506 +0000 UTC m=+6139.369244734" lastFinishedPulling="2026-02-19 14:52:06.544098767 +0000 UTC m=+6141.205202035" observedRunningTime="2026-02-19 14:52:10.335645078 +0000 UTC m=+6144.996748306" watchObservedRunningTime="2026-02-19 14:52:10.343084139 +0000 UTC m=+6145.004187377" Feb 19 14:52:11 crc kubenswrapper[4861]: I0219 14:52:11.306112 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/octavia-worker-vjbx2" event={"ID":"cdcf2056-7111-48d8-a8b7-e5901babe337","Type":"ContainerStarted","Data":"a91b91857834900cc3fcbcbd9be4ff1dc13094dda8d14d1800ac3e01bd193dc9"} Feb 19 14:52:11 crc kubenswrapper[4861]: I0219 14:52:11.342656 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-worker-vjbx2" podStartSLOduration=4.424177139 podStartE2EDuration="6.342629797s" podCreationTimestamp="2026-02-19 14:52:05 +0000 UTC" firstStartedPulling="2026-02-19 14:52:06.531368084 +0000 UTC m=+6141.192471312" lastFinishedPulling="2026-02-19 14:52:08.449820742 +0000 UTC m=+6143.110923970" observedRunningTime="2026-02-19 14:52:11.327410286 +0000 UTC m=+6145.988513514" watchObservedRunningTime="2026-02-19 14:52:11.342629797 +0000 UTC m=+6146.003733035" Feb 19 14:52:12 crc kubenswrapper[4861]: I0219 14:52:12.320857 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-vjbx2" Feb 19 14:52:15 crc kubenswrapper[4861]: I0219 14:52:15.990508 4861 scope.go:117] "RemoveContainer" containerID="502a4549a2baa2b1572925937a0eb20e1bd1517278b2629c4572ef8e5d2f113d" Feb 19 14:52:15 crc kubenswrapper[4861]: E0219 14:52:15.991915 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:52:16 crc kubenswrapper[4861]: I0219 14:52:16.046398 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-n6pq7"] Feb 19 14:52:16 crc kubenswrapper[4861]: I0219 14:52:16.056276 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-n6pq7"] Feb 19 14:52:17 crc 
kubenswrapper[4861]: I0219 14:52:17.397290 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-4sdp8" Feb 19 14:52:17 crc kubenswrapper[4861]: I0219 14:52:17.997900 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f" path="/var/lib/kubelet/pods/4be47d0f-8a92-4dc1-a7b5-f26fdb2e799f/volumes" Feb 19 14:52:19 crc kubenswrapper[4861]: I0219 14:52:19.132771 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-housekeeping-v98v6" Feb 19 14:52:20 crc kubenswrapper[4861]: I0219 14:52:20.927160 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-vjbx2" Feb 19 14:52:26 crc kubenswrapper[4861]: I0219 14:52:26.979065 4861 scope.go:117] "RemoveContainer" containerID="502a4549a2baa2b1572925937a0eb20e1bd1517278b2629c4572ef8e5d2f113d" Feb 19 14:52:26 crc kubenswrapper[4861]: E0219 14:52:26.979947 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:52:34 crc kubenswrapper[4861]: I0219 14:52:34.963690 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7454d999d9-8tqk5"] Feb 19 14:52:34 crc kubenswrapper[4861]: I0219 14:52:34.966676 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7454d999d9-8tqk5" Feb 19 14:52:34 crc kubenswrapper[4861]: I0219 14:52:34.972850 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 19 14:52:34 crc kubenswrapper[4861]: I0219 14:52:34.973270 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 19 14:52:34 crc kubenswrapper[4861]: I0219 14:52:34.973610 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-zqpbv" Feb 19 14:52:34 crc kubenswrapper[4861]: I0219 14:52:34.973900 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 19 14:52:34 crc kubenswrapper[4861]: I0219 14:52:34.980484 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7454d999d9-8tqk5"] Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.016058 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4165abed-a9b8-43c9-80c1-42ae474be304-config-data\") pod \"horizon-7454d999d9-8tqk5\" (UID: \"4165abed-a9b8-43c9-80c1-42ae474be304\") " pod="openstack/horizon-7454d999d9-8tqk5" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.016131 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4165abed-a9b8-43c9-80c1-42ae474be304-scripts\") pod \"horizon-7454d999d9-8tqk5\" (UID: \"4165abed-a9b8-43c9-80c1-42ae474be304\") " pod="openstack/horizon-7454d999d9-8tqk5" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.016153 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4165abed-a9b8-43c9-80c1-42ae474be304-horizon-secret-key\") pod \"horizon-7454d999d9-8tqk5\" (UID: 
\"4165abed-a9b8-43c9-80c1-42ae474be304\") " pod="openstack/horizon-7454d999d9-8tqk5" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.016182 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4165abed-a9b8-43c9-80c1-42ae474be304-logs\") pod \"horizon-7454d999d9-8tqk5\" (UID: \"4165abed-a9b8-43c9-80c1-42ae474be304\") " pod="openstack/horizon-7454d999d9-8tqk5" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.016235 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk4vk\" (UniqueName: \"kubernetes.io/projected/4165abed-a9b8-43c9-80c1-42ae474be304-kube-api-access-jk4vk\") pod \"horizon-7454d999d9-8tqk5\" (UID: \"4165abed-a9b8-43c9-80c1-42ae474be304\") " pod="openstack/horizon-7454d999d9-8tqk5" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.040679 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.041308 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0cf07c1e-8241-487d-99bb-6e4ae9d8cf49" containerName="glance-httpd" containerID="cri-o://3abef79a76ef88b984f328555284e34f3b9e552293c9b1184be99ecfdd4fc45f" gracePeriod=30 Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.040960 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0cf07c1e-8241-487d-99bb-6e4ae9d8cf49" containerName="glance-log" containerID="cri-o://05ac8e3ded612160ff753b5e1d6f51b9868a7db4c6ce3e6132674a1bf7f7bb3f" gracePeriod=30 Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.093925 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.094187 4861 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="021ec6a5-260b-477c-93e4-34bfaf2fc552" containerName="glance-log" containerID="cri-o://f61f82d84740657f7504dc964d2bc873d6cfe36619885f4f394b61252e48278f" gracePeriod=30 Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.094324 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="021ec6a5-260b-477c-93e4-34bfaf2fc552" containerName="glance-httpd" containerID="cri-o://26750336b1400e71f39efa747560ad7e2d3a1a6aee23d9c2d75663821e9b5571" gracePeriod=30 Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.120373 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk4vk\" (UniqueName: \"kubernetes.io/projected/4165abed-a9b8-43c9-80c1-42ae474be304-kube-api-access-jk4vk\") pod \"horizon-7454d999d9-8tqk5\" (UID: \"4165abed-a9b8-43c9-80c1-42ae474be304\") " pod="openstack/horizon-7454d999d9-8tqk5" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.120809 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4165abed-a9b8-43c9-80c1-42ae474be304-config-data\") pod \"horizon-7454d999d9-8tqk5\" (UID: \"4165abed-a9b8-43c9-80c1-42ae474be304\") " pod="openstack/horizon-7454d999d9-8tqk5" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.120874 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4165abed-a9b8-43c9-80c1-42ae474be304-scripts\") pod \"horizon-7454d999d9-8tqk5\" (UID: \"4165abed-a9b8-43c9-80c1-42ae474be304\") " pod="openstack/horizon-7454d999d9-8tqk5" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.120903 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/4165abed-a9b8-43c9-80c1-42ae474be304-horizon-secret-key\") pod \"horizon-7454d999d9-8tqk5\" (UID: \"4165abed-a9b8-43c9-80c1-42ae474be304\") " pod="openstack/horizon-7454d999d9-8tqk5" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.120926 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4165abed-a9b8-43c9-80c1-42ae474be304-logs\") pod \"horizon-7454d999d9-8tqk5\" (UID: \"4165abed-a9b8-43c9-80c1-42ae474be304\") " pod="openstack/horizon-7454d999d9-8tqk5" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.122271 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4165abed-a9b8-43c9-80c1-42ae474be304-scripts\") pod \"horizon-7454d999d9-8tqk5\" (UID: \"4165abed-a9b8-43c9-80c1-42ae474be304\") " pod="openstack/horizon-7454d999d9-8tqk5" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.122511 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4165abed-a9b8-43c9-80c1-42ae474be304-logs\") pod \"horizon-7454d999d9-8tqk5\" (UID: \"4165abed-a9b8-43c9-80c1-42ae474be304\") " pod="openstack/horizon-7454d999d9-8tqk5" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.123105 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4165abed-a9b8-43c9-80c1-42ae474be304-config-data\") pod \"horizon-7454d999d9-8tqk5\" (UID: \"4165abed-a9b8-43c9-80c1-42ae474be304\") " pod="openstack/horizon-7454d999d9-8tqk5" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.127080 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-c88bb756f-fq5dl"] Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.132064 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/4165abed-a9b8-43c9-80c1-42ae474be304-horizon-secret-key\") pod \"horizon-7454d999d9-8tqk5\" (UID: \"4165abed-a9b8-43c9-80c1-42ae474be304\") " pod="openstack/horizon-7454d999d9-8tqk5" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.136076 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c88bb756f-fq5dl" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.139690 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-c88bb756f-fq5dl"] Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.142527 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk4vk\" (UniqueName: \"kubernetes.io/projected/4165abed-a9b8-43c9-80c1-42ae474be304-kube-api-access-jk4vk\") pod \"horizon-7454d999d9-8tqk5\" (UID: \"4165abed-a9b8-43c9-80c1-42ae474be304\") " pod="openstack/horizon-7454d999d9-8tqk5" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.225718 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e705cc4-bb72-4663-b21c-5e1c39a5d6f4-scripts\") pod \"horizon-c88bb756f-fq5dl\" (UID: \"8e705cc4-bb72-4663-b21c-5e1c39a5d6f4\") " pod="openstack/horizon-c88bb756f-fq5dl" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.225783 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e705cc4-bb72-4663-b21c-5e1c39a5d6f4-config-data\") pod \"horizon-c88bb756f-fq5dl\" (UID: \"8e705cc4-bb72-4663-b21c-5e1c39a5d6f4\") " pod="openstack/horizon-c88bb756f-fq5dl" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.225820 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb2k8\" (UniqueName: \"kubernetes.io/projected/8e705cc4-bb72-4663-b21c-5e1c39a5d6f4-kube-api-access-jb2k8\") 
pod \"horizon-c88bb756f-fq5dl\" (UID: \"8e705cc4-bb72-4663-b21c-5e1c39a5d6f4\") " pod="openstack/horizon-c88bb756f-fq5dl" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.225889 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e705cc4-bb72-4663-b21c-5e1c39a5d6f4-logs\") pod \"horizon-c88bb756f-fq5dl\" (UID: \"8e705cc4-bb72-4663-b21c-5e1c39a5d6f4\") " pod="openstack/horizon-c88bb756f-fq5dl" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.225918 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8e705cc4-bb72-4663-b21c-5e1c39a5d6f4-horizon-secret-key\") pod \"horizon-c88bb756f-fq5dl\" (UID: \"8e705cc4-bb72-4663-b21c-5e1c39a5d6f4\") " pod="openstack/horizon-c88bb756f-fq5dl" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.292972 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7454d999d9-8tqk5" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.328120 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb2k8\" (UniqueName: \"kubernetes.io/projected/8e705cc4-bb72-4663-b21c-5e1c39a5d6f4-kube-api-access-jb2k8\") pod \"horizon-c88bb756f-fq5dl\" (UID: \"8e705cc4-bb72-4663-b21c-5e1c39a5d6f4\") " pod="openstack/horizon-c88bb756f-fq5dl" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.328233 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e705cc4-bb72-4663-b21c-5e1c39a5d6f4-logs\") pod \"horizon-c88bb756f-fq5dl\" (UID: \"8e705cc4-bb72-4663-b21c-5e1c39a5d6f4\") " pod="openstack/horizon-c88bb756f-fq5dl" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.328270 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8e705cc4-bb72-4663-b21c-5e1c39a5d6f4-horizon-secret-key\") pod \"horizon-c88bb756f-fq5dl\" (UID: \"8e705cc4-bb72-4663-b21c-5e1c39a5d6f4\") " pod="openstack/horizon-c88bb756f-fq5dl" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.328390 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e705cc4-bb72-4663-b21c-5e1c39a5d6f4-scripts\") pod \"horizon-c88bb756f-fq5dl\" (UID: \"8e705cc4-bb72-4663-b21c-5e1c39a5d6f4\") " pod="openstack/horizon-c88bb756f-fq5dl" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.328468 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e705cc4-bb72-4663-b21c-5e1c39a5d6f4-config-data\") pod \"horizon-c88bb756f-fq5dl\" (UID: \"8e705cc4-bb72-4663-b21c-5e1c39a5d6f4\") " pod="openstack/horizon-c88bb756f-fq5dl" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 
14:52:35.330358 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e705cc4-bb72-4663-b21c-5e1c39a5d6f4-config-data\") pod \"horizon-c88bb756f-fq5dl\" (UID: \"8e705cc4-bb72-4663-b21c-5e1c39a5d6f4\") " pod="openstack/horizon-c88bb756f-fq5dl" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.330489 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e705cc4-bb72-4663-b21c-5e1c39a5d6f4-logs\") pod \"horizon-c88bb756f-fq5dl\" (UID: \"8e705cc4-bb72-4663-b21c-5e1c39a5d6f4\") " pod="openstack/horizon-c88bb756f-fq5dl" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.330980 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e705cc4-bb72-4663-b21c-5e1c39a5d6f4-scripts\") pod \"horizon-c88bb756f-fq5dl\" (UID: \"8e705cc4-bb72-4663-b21c-5e1c39a5d6f4\") " pod="openstack/horizon-c88bb756f-fq5dl" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.338970 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8e705cc4-bb72-4663-b21c-5e1c39a5d6f4-horizon-secret-key\") pod \"horizon-c88bb756f-fq5dl\" (UID: \"8e705cc4-bb72-4663-b21c-5e1c39a5d6f4\") " pod="openstack/horizon-c88bb756f-fq5dl" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.346602 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb2k8\" (UniqueName: \"kubernetes.io/projected/8e705cc4-bb72-4663-b21c-5e1c39a5d6f4-kube-api-access-jb2k8\") pod \"horizon-c88bb756f-fq5dl\" (UID: \"8e705cc4-bb72-4663-b21c-5e1c39a5d6f4\") " pod="openstack/horizon-c88bb756f-fq5dl" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.579251 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-c88bb756f-fq5dl" Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.598770 4861 generic.go:334] "Generic (PLEG): container finished" podID="021ec6a5-260b-477c-93e4-34bfaf2fc552" containerID="f61f82d84740657f7504dc964d2bc873d6cfe36619885f4f394b61252e48278f" exitCode=143 Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.598839 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"021ec6a5-260b-477c-93e4-34bfaf2fc552","Type":"ContainerDied","Data":"f61f82d84740657f7504dc964d2bc873d6cfe36619885f4f394b61252e48278f"} Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.602554 4861 generic.go:334] "Generic (PLEG): container finished" podID="0cf07c1e-8241-487d-99bb-6e4ae9d8cf49" containerID="05ac8e3ded612160ff753b5e1d6f51b9868a7db4c6ce3e6132674a1bf7f7bb3f" exitCode=143 Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.602596 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49","Type":"ContainerDied","Data":"05ac8e3ded612160ff753b5e1d6f51b9868a7db4c6ce3e6132674a1bf7f7bb3f"} Feb 19 14:52:35 crc kubenswrapper[4861]: I0219 14:52:35.752135 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7454d999d9-8tqk5"] Feb 19 14:52:36 crc kubenswrapper[4861]: I0219 14:52:36.087325 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-c88bb756f-fq5dl"] Feb 19 14:52:36 crc kubenswrapper[4861]: W0219 14:52:36.092188 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e705cc4_bb72_4663_b21c_5e1c39a5d6f4.slice/crio-ae1fcfa773c5ca58a10c36d055482cd36f7b71090b7ea85d95369fdd2f9b7c87 WatchSource:0}: Error finding container ae1fcfa773c5ca58a10c36d055482cd36f7b71090b7ea85d95369fdd2f9b7c87: Status 404 returned error can't find the container with 
id ae1fcfa773c5ca58a10c36d055482cd36f7b71090b7ea85d95369fdd2f9b7c87 Feb 19 14:52:36 crc kubenswrapper[4861]: I0219 14:52:36.617974 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c88bb756f-fq5dl" event={"ID":"8e705cc4-bb72-4663-b21c-5e1c39a5d6f4","Type":"ContainerStarted","Data":"ae1fcfa773c5ca58a10c36d055482cd36f7b71090b7ea85d95369fdd2f9b7c87"} Feb 19 14:52:36 crc kubenswrapper[4861]: I0219 14:52:36.619308 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7454d999d9-8tqk5" event={"ID":"4165abed-a9b8-43c9-80c1-42ae474be304","Type":"ContainerStarted","Data":"cafbadd4f1d4958e0f373997955c13cb4e651a34b46220301ef9b1745c79eb69"} Feb 19 14:52:36 crc kubenswrapper[4861]: I0219 14:52:36.789302 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7454d999d9-8tqk5"] Feb 19 14:52:36 crc kubenswrapper[4861]: I0219 14:52:36.825215 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-65457d8474-5xm9j"] Feb 19 14:52:36 crc kubenswrapper[4861]: I0219 14:52:36.826946 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65457d8474-5xm9j" Feb 19 14:52:36 crc kubenswrapper[4861]: I0219 14:52:36.830558 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 19 14:52:36 crc kubenswrapper[4861]: I0219 14:52:36.835734 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65457d8474-5xm9j"] Feb 19 14:52:36 crc kubenswrapper[4861]: I0219 14:52:36.899929 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-c88bb756f-fq5dl"] Feb 19 14:52:36 crc kubenswrapper[4861]: I0219 14:52:36.925695 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6d645b4cf8-qqgr8"] Feb 19 14:52:36 crc kubenswrapper[4861]: I0219 14:52:36.927378 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6d645b4cf8-qqgr8" Feb 19 14:52:36 crc kubenswrapper[4861]: I0219 14:52:36.947220 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d645b4cf8-qqgr8"] Feb 19 14:52:36 crc kubenswrapper[4861]: I0219 14:52:36.958328 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cad67169-610b-4002-9697-39065316dbd6-horizon-secret-key\") pod \"horizon-65457d8474-5xm9j\" (UID: \"cad67169-610b-4002-9697-39065316dbd6\") " pod="openstack/horizon-65457d8474-5xm9j" Feb 19 14:52:36 crc kubenswrapper[4861]: I0219 14:52:36.958376 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpk52\" (UniqueName: \"kubernetes.io/projected/cad67169-610b-4002-9697-39065316dbd6-kube-api-access-wpk52\") pod \"horizon-65457d8474-5xm9j\" (UID: \"cad67169-610b-4002-9697-39065316dbd6\") " pod="openstack/horizon-65457d8474-5xm9j" Feb 19 14:52:36 crc kubenswrapper[4861]: I0219 14:52:36.958482 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/cad67169-610b-4002-9697-39065316dbd6-horizon-tls-certs\") pod \"horizon-65457d8474-5xm9j\" (UID: \"cad67169-610b-4002-9697-39065316dbd6\") " pod="openstack/horizon-65457d8474-5xm9j" Feb 19 14:52:36 crc kubenswrapper[4861]: I0219 14:52:36.958510 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cad67169-610b-4002-9697-39065316dbd6-scripts\") pod \"horizon-65457d8474-5xm9j\" (UID: \"cad67169-610b-4002-9697-39065316dbd6\") " pod="openstack/horizon-65457d8474-5xm9j" Feb 19 14:52:36 crc kubenswrapper[4861]: I0219 14:52:36.958546 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/cad67169-610b-4002-9697-39065316dbd6-config-data\") pod \"horizon-65457d8474-5xm9j\" (UID: \"cad67169-610b-4002-9697-39065316dbd6\") " pod="openstack/horizon-65457d8474-5xm9j" Feb 19 14:52:36 crc kubenswrapper[4861]: I0219 14:52:36.958594 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cad67169-610b-4002-9697-39065316dbd6-logs\") pod \"horizon-65457d8474-5xm9j\" (UID: \"cad67169-610b-4002-9697-39065316dbd6\") " pod="openstack/horizon-65457d8474-5xm9j" Feb 19 14:52:36 crc kubenswrapper[4861]: I0219 14:52:36.958634 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cad67169-610b-4002-9697-39065316dbd6-combined-ca-bundle\") pod \"horizon-65457d8474-5xm9j\" (UID: \"cad67169-610b-4002-9697-39065316dbd6\") " pod="openstack/horizon-65457d8474-5xm9j" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.060632 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d6a437-b768-4701-a69a-1b99fd4f2626-combined-ca-bundle\") pod \"horizon-6d645b4cf8-qqgr8\" (UID: \"c5d6a437-b768-4701-a69a-1b99fd4f2626\") " pod="openstack/horizon-6d645b4cf8-qqgr8" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.060692 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cad67169-610b-4002-9697-39065316dbd6-combined-ca-bundle\") pod \"horizon-65457d8474-5xm9j\" (UID: \"cad67169-610b-4002-9697-39065316dbd6\") " pod="openstack/horizon-65457d8474-5xm9j" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.060714 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" 
(UniqueName: \"kubernetes.io/secret/c5d6a437-b768-4701-a69a-1b99fd4f2626-horizon-secret-key\") pod \"horizon-6d645b4cf8-qqgr8\" (UID: \"c5d6a437-b768-4701-a69a-1b99fd4f2626\") " pod="openstack/horizon-6d645b4cf8-qqgr8" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.060754 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cad67169-610b-4002-9697-39065316dbd6-horizon-secret-key\") pod \"horizon-65457d8474-5xm9j\" (UID: \"cad67169-610b-4002-9697-39065316dbd6\") " pod="openstack/horizon-65457d8474-5xm9j" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.060861 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpk52\" (UniqueName: \"kubernetes.io/projected/cad67169-610b-4002-9697-39065316dbd6-kube-api-access-wpk52\") pod \"horizon-65457d8474-5xm9j\" (UID: \"cad67169-610b-4002-9697-39065316dbd6\") " pod="openstack/horizon-65457d8474-5xm9j" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.060945 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c5d6a437-b768-4701-a69a-1b99fd4f2626-config-data\") pod \"horizon-6d645b4cf8-qqgr8\" (UID: \"c5d6a437-b768-4701-a69a-1b99fd4f2626\") " pod="openstack/horizon-6d645b4cf8-qqgr8" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.061212 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/cad67169-610b-4002-9697-39065316dbd6-horizon-tls-certs\") pod \"horizon-65457d8474-5xm9j\" (UID: \"cad67169-610b-4002-9697-39065316dbd6\") " pod="openstack/horizon-65457d8474-5xm9j" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.061261 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/cad67169-610b-4002-9697-39065316dbd6-scripts\") pod \"horizon-65457d8474-5xm9j\" (UID: \"cad67169-610b-4002-9697-39065316dbd6\") " pod="openstack/horizon-65457d8474-5xm9j" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.061352 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cad67169-610b-4002-9697-39065316dbd6-config-data\") pod \"horizon-65457d8474-5xm9j\" (UID: \"cad67169-610b-4002-9697-39065316dbd6\") " pod="openstack/horizon-65457d8474-5xm9j" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.061395 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnbts\" (UniqueName: \"kubernetes.io/projected/c5d6a437-b768-4701-a69a-1b99fd4f2626-kube-api-access-nnbts\") pod \"horizon-6d645b4cf8-qqgr8\" (UID: \"c5d6a437-b768-4701-a69a-1b99fd4f2626\") " pod="openstack/horizon-6d645b4cf8-qqgr8" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.061436 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5d6a437-b768-4701-a69a-1b99fd4f2626-horizon-tls-certs\") pod \"horizon-6d645b4cf8-qqgr8\" (UID: \"c5d6a437-b768-4701-a69a-1b99fd4f2626\") " pod="openstack/horizon-6d645b4cf8-qqgr8" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.061462 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c5d6a437-b768-4701-a69a-1b99fd4f2626-scripts\") pod \"horizon-6d645b4cf8-qqgr8\" (UID: \"c5d6a437-b768-4701-a69a-1b99fd4f2626\") " pod="openstack/horizon-6d645b4cf8-qqgr8" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.061551 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/cad67169-610b-4002-9697-39065316dbd6-logs\") pod \"horizon-65457d8474-5xm9j\" (UID: \"cad67169-610b-4002-9697-39065316dbd6\") " pod="openstack/horizon-65457d8474-5xm9j" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.061633 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5d6a437-b768-4701-a69a-1b99fd4f2626-logs\") pod \"horizon-6d645b4cf8-qqgr8\" (UID: \"c5d6a437-b768-4701-a69a-1b99fd4f2626\") " pod="openstack/horizon-6d645b4cf8-qqgr8" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.062099 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cad67169-610b-4002-9697-39065316dbd6-logs\") pod \"horizon-65457d8474-5xm9j\" (UID: \"cad67169-610b-4002-9697-39065316dbd6\") " pod="openstack/horizon-65457d8474-5xm9j" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.062784 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cad67169-610b-4002-9697-39065316dbd6-scripts\") pod \"horizon-65457d8474-5xm9j\" (UID: \"cad67169-610b-4002-9697-39065316dbd6\") " pod="openstack/horizon-65457d8474-5xm9j" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.063244 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cad67169-610b-4002-9697-39065316dbd6-config-data\") pod \"horizon-65457d8474-5xm9j\" (UID: \"cad67169-610b-4002-9697-39065316dbd6\") " pod="openstack/horizon-65457d8474-5xm9j" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.066591 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/cad67169-610b-4002-9697-39065316dbd6-horizon-tls-certs\") pod \"horizon-65457d8474-5xm9j\" (UID: \"cad67169-610b-4002-9697-39065316dbd6\") " 
pod="openstack/horizon-65457d8474-5xm9j" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.070207 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cad67169-610b-4002-9697-39065316dbd6-horizon-secret-key\") pod \"horizon-65457d8474-5xm9j\" (UID: \"cad67169-610b-4002-9697-39065316dbd6\") " pod="openstack/horizon-65457d8474-5xm9j" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.078075 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cad67169-610b-4002-9697-39065316dbd6-combined-ca-bundle\") pod \"horizon-65457d8474-5xm9j\" (UID: \"cad67169-610b-4002-9697-39065316dbd6\") " pod="openstack/horizon-65457d8474-5xm9j" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.079825 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpk52\" (UniqueName: \"kubernetes.io/projected/cad67169-610b-4002-9697-39065316dbd6-kube-api-access-wpk52\") pod \"horizon-65457d8474-5xm9j\" (UID: \"cad67169-610b-4002-9697-39065316dbd6\") " pod="openstack/horizon-65457d8474-5xm9j" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.157970 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-65457d8474-5xm9j" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.164272 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnbts\" (UniqueName: \"kubernetes.io/projected/c5d6a437-b768-4701-a69a-1b99fd4f2626-kube-api-access-nnbts\") pod \"horizon-6d645b4cf8-qqgr8\" (UID: \"c5d6a437-b768-4701-a69a-1b99fd4f2626\") " pod="openstack/horizon-6d645b4cf8-qqgr8" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.164634 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5d6a437-b768-4701-a69a-1b99fd4f2626-horizon-tls-certs\") pod \"horizon-6d645b4cf8-qqgr8\" (UID: \"c5d6a437-b768-4701-a69a-1b99fd4f2626\") " pod="openstack/horizon-6d645b4cf8-qqgr8" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.164905 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c5d6a437-b768-4701-a69a-1b99fd4f2626-scripts\") pod \"horizon-6d645b4cf8-qqgr8\" (UID: \"c5d6a437-b768-4701-a69a-1b99fd4f2626\") " pod="openstack/horizon-6d645b4cf8-qqgr8" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.165178 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5d6a437-b768-4701-a69a-1b99fd4f2626-logs\") pod \"horizon-6d645b4cf8-qqgr8\" (UID: \"c5d6a437-b768-4701-a69a-1b99fd4f2626\") " pod="openstack/horizon-6d645b4cf8-qqgr8" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.165230 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d6a437-b768-4701-a69a-1b99fd4f2626-combined-ca-bundle\") pod \"horizon-6d645b4cf8-qqgr8\" (UID: \"c5d6a437-b768-4701-a69a-1b99fd4f2626\") " pod="openstack/horizon-6d645b4cf8-qqgr8" Feb 19 14:52:37 crc kubenswrapper[4861]: 
I0219 14:52:37.165275 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c5d6a437-b768-4701-a69a-1b99fd4f2626-horizon-secret-key\") pod \"horizon-6d645b4cf8-qqgr8\" (UID: \"c5d6a437-b768-4701-a69a-1b99fd4f2626\") " pod="openstack/horizon-6d645b4cf8-qqgr8" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.165342 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c5d6a437-b768-4701-a69a-1b99fd4f2626-config-data\") pod \"horizon-6d645b4cf8-qqgr8\" (UID: \"c5d6a437-b768-4701-a69a-1b99fd4f2626\") " pod="openstack/horizon-6d645b4cf8-qqgr8" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.165589 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5d6a437-b768-4701-a69a-1b99fd4f2626-logs\") pod \"horizon-6d645b4cf8-qqgr8\" (UID: \"c5d6a437-b768-4701-a69a-1b99fd4f2626\") " pod="openstack/horizon-6d645b4cf8-qqgr8" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.165692 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c5d6a437-b768-4701-a69a-1b99fd4f2626-scripts\") pod \"horizon-6d645b4cf8-qqgr8\" (UID: \"c5d6a437-b768-4701-a69a-1b99fd4f2626\") " pod="openstack/horizon-6d645b4cf8-qqgr8" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.168093 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c5d6a437-b768-4701-a69a-1b99fd4f2626-config-data\") pod \"horizon-6d645b4cf8-qqgr8\" (UID: \"c5d6a437-b768-4701-a69a-1b99fd4f2626\") " pod="openstack/horizon-6d645b4cf8-qqgr8" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.171910 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c5d6a437-b768-4701-a69a-1b99fd4f2626-combined-ca-bundle\") pod \"horizon-6d645b4cf8-qqgr8\" (UID: \"c5d6a437-b768-4701-a69a-1b99fd4f2626\") " pod="openstack/horizon-6d645b4cf8-qqgr8" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.172454 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c5d6a437-b768-4701-a69a-1b99fd4f2626-horizon-secret-key\") pod \"horizon-6d645b4cf8-qqgr8\" (UID: \"c5d6a437-b768-4701-a69a-1b99fd4f2626\") " pod="openstack/horizon-6d645b4cf8-qqgr8" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.172589 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5d6a437-b768-4701-a69a-1b99fd4f2626-horizon-tls-certs\") pod \"horizon-6d645b4cf8-qqgr8\" (UID: \"c5d6a437-b768-4701-a69a-1b99fd4f2626\") " pod="openstack/horizon-6d645b4cf8-qqgr8" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.189254 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnbts\" (UniqueName: \"kubernetes.io/projected/c5d6a437-b768-4701-a69a-1b99fd4f2626-kube-api-access-nnbts\") pod \"horizon-6d645b4cf8-qqgr8\" (UID: \"c5d6a437-b768-4701-a69a-1b99fd4f2626\") " pod="openstack/horizon-6d645b4cf8-qqgr8" Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.248903 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6d645b4cf8-qqgr8" Feb 19 14:52:37 crc kubenswrapper[4861]: W0219 14:52:37.749459 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcad67169_610b_4002_9697_39065316dbd6.slice/crio-b7e9026f0b74b3d668b22f9e8da5254f24a0e44b581507dc5146ac7ba938be7a WatchSource:0}: Error finding container b7e9026f0b74b3d668b22f9e8da5254f24a0e44b581507dc5146ac7ba938be7a: Status 404 returned error can't find the container with id b7e9026f0b74b3d668b22f9e8da5254f24a0e44b581507dc5146ac7ba938be7a Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.773936 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65457d8474-5xm9j"] Feb 19 14:52:37 crc kubenswrapper[4861]: I0219 14:52:37.796859 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d645b4cf8-qqgr8"] Feb 19 14:52:38 crc kubenswrapper[4861]: I0219 14:52:38.648269 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65457d8474-5xm9j" event={"ID":"cad67169-610b-4002-9697-39065316dbd6","Type":"ContainerStarted","Data":"b7e9026f0b74b3d668b22f9e8da5254f24a0e44b581507dc5146ac7ba938be7a"} Feb 19 14:52:38 crc kubenswrapper[4861]: I0219 14:52:38.649389 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d645b4cf8-qqgr8" event={"ID":"c5d6a437-b768-4701-a69a-1b99fd4f2626","Type":"ContainerStarted","Data":"810182c6d62c954ffe956920b4b5faff4f4469845dce68a6b231a1a3fc3d63db"} Feb 19 14:52:38 crc kubenswrapper[4861]: I0219 14:52:38.653446 4861 generic.go:334] "Generic (PLEG): container finished" podID="021ec6a5-260b-477c-93e4-34bfaf2fc552" containerID="26750336b1400e71f39efa747560ad7e2d3a1a6aee23d9c2d75663821e9b5571" exitCode=0 Feb 19 14:52:38 crc kubenswrapper[4861]: I0219 14:52:38.653503 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"021ec6a5-260b-477c-93e4-34bfaf2fc552","Type":"ContainerDied","Data":"26750336b1400e71f39efa747560ad7e2d3a1a6aee23d9c2d75663821e9b5571"} Feb 19 14:52:38 crc kubenswrapper[4861]: I0219 14:52:38.656563 4861 generic.go:334] "Generic (PLEG): container finished" podID="0cf07c1e-8241-487d-99bb-6e4ae9d8cf49" containerID="3abef79a76ef88b984f328555284e34f3b9e552293c9b1184be99ecfdd4fc45f" exitCode=0 Feb 19 14:52:38 crc kubenswrapper[4861]: I0219 14:52:38.656626 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49","Type":"ContainerDied","Data":"3abef79a76ef88b984f328555284e34f3b9e552293c9b1184be99ecfdd4fc45f"} Feb 19 14:52:40 crc kubenswrapper[4861]: I0219 14:52:40.976568 4861 scope.go:117] "RemoveContainer" containerID="502a4549a2baa2b1572925937a0eb20e1bd1517278b2629c4572ef8e5d2f113d" Feb 19 14:52:40 crc kubenswrapper[4861]: E0219 14:52:40.977128 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:52:43 crc kubenswrapper[4861]: I0219 14:52:43.700326 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"021ec6a5-260b-477c-93e4-34bfaf2fc552","Type":"ContainerDied","Data":"7d406ee8a2f950e0caa564c493a07feb0cab600048f9f6095acf665618f224e2"} Feb 19 14:52:43 crc kubenswrapper[4861]: I0219 14:52:43.700816 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d406ee8a2f950e0caa564c493a07feb0cab600048f9f6095acf665618f224e2" Feb 19 14:52:43 crc kubenswrapper[4861]: I0219 14:52:43.706453 4861 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49","Type":"ContainerDied","Data":"2c190b311a0933bc1aca58c5989b54fb9723a1b92167a26e7816f63a5be4f28a"} Feb 19 14:52:43 crc kubenswrapper[4861]: I0219 14:52:43.706491 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c190b311a0933bc1aca58c5989b54fb9723a1b92167a26e7816f63a5be4f28a" Feb 19 14:52:43 crc kubenswrapper[4861]: I0219 14:52:43.771412 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 14:52:43 crc kubenswrapper[4861]: I0219 14:52:43.776685 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 14:52:43 crc kubenswrapper[4861]: I0219 14:52:43.935683 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/021ec6a5-260b-477c-93e4-34bfaf2fc552-config-data\") pod \"021ec6a5-260b-477c-93e4-34bfaf2fc552\" (UID: \"021ec6a5-260b-477c-93e4-34bfaf2fc552\") " Feb 19 14:52:43 crc kubenswrapper[4861]: I0219 14:52:43.935945 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-combined-ca-bundle\") pod \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\" (UID: \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\") " Feb 19 14:52:43 crc kubenswrapper[4861]: I0219 14:52:43.936002 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/021ec6a5-260b-477c-93e4-34bfaf2fc552-combined-ca-bundle\") pod \"021ec6a5-260b-477c-93e4-34bfaf2fc552\" (UID: \"021ec6a5-260b-477c-93e4-34bfaf2fc552\") " Feb 19 14:52:43 crc kubenswrapper[4861]: I0219 14:52:43.936052 4861 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-scripts\") pod \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\" (UID: \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\") " Feb 19 14:52:43 crc kubenswrapper[4861]: I0219 14:52:43.936086 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/021ec6a5-260b-477c-93e4-34bfaf2fc552-httpd-run\") pod \"021ec6a5-260b-477c-93e4-34bfaf2fc552\" (UID: \"021ec6a5-260b-477c-93e4-34bfaf2fc552\") " Feb 19 14:52:43 crc kubenswrapper[4861]: I0219 14:52:43.936109 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-httpd-run\") pod \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\" (UID: \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\") " Feb 19 14:52:43 crc kubenswrapper[4861]: I0219 14:52:43.936130 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-logs\") pod \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\" (UID: \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\") " Feb 19 14:52:43 crc kubenswrapper[4861]: I0219 14:52:43.936157 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qnj7\" (UniqueName: \"kubernetes.io/projected/021ec6a5-260b-477c-93e4-34bfaf2fc552-kube-api-access-4qnj7\") pod \"021ec6a5-260b-477c-93e4-34bfaf2fc552\" (UID: \"021ec6a5-260b-477c-93e4-34bfaf2fc552\") " Feb 19 14:52:43 crc kubenswrapper[4861]: I0219 14:52:43.936179 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/021ec6a5-260b-477c-93e4-34bfaf2fc552-logs\") pod \"021ec6a5-260b-477c-93e4-34bfaf2fc552\" (UID: \"021ec6a5-260b-477c-93e4-34bfaf2fc552\") " Feb 19 
14:52:43 crc kubenswrapper[4861]: I0219 14:52:43.936202 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-public-tls-certs\") pod \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\" (UID: \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\") " Feb 19 14:52:43 crc kubenswrapper[4861]: I0219 14:52:43.936255 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-config-data\") pod \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\" (UID: \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\") " Feb 19 14:52:43 crc kubenswrapper[4861]: I0219 14:52:43.936272 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvxx2\" (UniqueName: \"kubernetes.io/projected/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-kube-api-access-fvxx2\") pod \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\" (UID: \"0cf07c1e-8241-487d-99bb-6e4ae9d8cf49\") " Feb 19 14:52:43 crc kubenswrapper[4861]: I0219 14:52:43.936288 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/021ec6a5-260b-477c-93e4-34bfaf2fc552-scripts\") pod \"021ec6a5-260b-477c-93e4-34bfaf2fc552\" (UID: \"021ec6a5-260b-477c-93e4-34bfaf2fc552\") " Feb 19 14:52:43 crc kubenswrapper[4861]: I0219 14:52:43.936330 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/021ec6a5-260b-477c-93e4-34bfaf2fc552-internal-tls-certs\") pod \"021ec6a5-260b-477c-93e4-34bfaf2fc552\" (UID: \"021ec6a5-260b-477c-93e4-34bfaf2fc552\") " Feb 19 14:52:43 crc kubenswrapper[4861]: I0219 14:52:43.937579 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/021ec6a5-260b-477c-93e4-34bfaf2fc552-logs" (OuterVolumeSpecName: "logs") pod 
"021ec6a5-260b-477c-93e4-34bfaf2fc552" (UID: "021ec6a5-260b-477c-93e4-34bfaf2fc552"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:52:43 crc kubenswrapper[4861]: I0219 14:52:43.938102 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/021ec6a5-260b-477c-93e4-34bfaf2fc552-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "021ec6a5-260b-477c-93e4-34bfaf2fc552" (UID: "021ec6a5-260b-477c-93e4-34bfaf2fc552"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:52:43 crc kubenswrapper[4861]: I0219 14:52:43.938665 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0cf07c1e-8241-487d-99bb-6e4ae9d8cf49" (UID: "0cf07c1e-8241-487d-99bb-6e4ae9d8cf49"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:52:43 crc kubenswrapper[4861]: I0219 14:52:43.940576 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-logs" (OuterVolumeSpecName: "logs") pod "0cf07c1e-8241-487d-99bb-6e4ae9d8cf49" (UID: "0cf07c1e-8241-487d-99bb-6e4ae9d8cf49"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:52:43 crc kubenswrapper[4861]: I0219 14:52:43.943054 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/021ec6a5-260b-477c-93e4-34bfaf2fc552-kube-api-access-4qnj7" (OuterVolumeSpecName: "kube-api-access-4qnj7") pod "021ec6a5-260b-477c-93e4-34bfaf2fc552" (UID: "021ec6a5-260b-477c-93e4-34bfaf2fc552"). InnerVolumeSpecName "kube-api-access-4qnj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:52:43 crc kubenswrapper[4861]: I0219 14:52:43.952662 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-kube-api-access-fvxx2" (OuterVolumeSpecName: "kube-api-access-fvxx2") pod "0cf07c1e-8241-487d-99bb-6e4ae9d8cf49" (UID: "0cf07c1e-8241-487d-99bb-6e4ae9d8cf49"). InnerVolumeSpecName "kube-api-access-fvxx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:52:43 crc kubenswrapper[4861]: I0219 14:52:43.969595 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/021ec6a5-260b-477c-93e4-34bfaf2fc552-scripts" (OuterVolumeSpecName: "scripts") pod "021ec6a5-260b-477c-93e4-34bfaf2fc552" (UID: "021ec6a5-260b-477c-93e4-34bfaf2fc552"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:52:43 crc kubenswrapper[4861]: I0219 14:52:43.975627 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-scripts" (OuterVolumeSpecName: "scripts") pod "0cf07c1e-8241-487d-99bb-6e4ae9d8cf49" (UID: "0cf07c1e-8241-487d-99bb-6e4ae9d8cf49"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:52:43 crc kubenswrapper[4861]: I0219 14:52:43.996310 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0cf07c1e-8241-487d-99bb-6e4ae9d8cf49" (UID: "0cf07c1e-8241-487d-99bb-6e4ae9d8cf49"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.015336 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0cf07c1e-8241-487d-99bb-6e4ae9d8cf49" (UID: "0cf07c1e-8241-487d-99bb-6e4ae9d8cf49"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.016688 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/021ec6a5-260b-477c-93e4-34bfaf2fc552-config-data" (OuterVolumeSpecName: "config-data") pod "021ec6a5-260b-477c-93e4-34bfaf2fc552" (UID: "021ec6a5-260b-477c-93e4-34bfaf2fc552"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.030610 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-config-data" (OuterVolumeSpecName: "config-data") pod "0cf07c1e-8241-487d-99bb-6e4ae9d8cf49" (UID: "0cf07c1e-8241-487d-99bb-6e4ae9d8cf49"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.034606 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/021ec6a5-260b-477c-93e4-34bfaf2fc552-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "021ec6a5-260b-477c-93e4-34bfaf2fc552" (UID: "021ec6a5-260b-477c-93e4-34bfaf2fc552"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.039159 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.039186 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/021ec6a5-260b-477c-93e4-34bfaf2fc552-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.039195 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/021ec6a5-260b-477c-93e4-34bfaf2fc552-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.039204 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.039215 4861 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/021ec6a5-260b-477c-93e4-34bfaf2fc552-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.039222 4861 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.039229 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-logs\") on node \"crc\" DevicePath \"\"" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.039237 4861 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-4qnj7\" (UniqueName: \"kubernetes.io/projected/021ec6a5-260b-477c-93e4-34bfaf2fc552-kube-api-access-4qnj7\") on node \"crc\" DevicePath \"\"" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.039246 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/021ec6a5-260b-477c-93e4-34bfaf2fc552-logs\") on node \"crc\" DevicePath \"\"" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.039254 4861 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.039263 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvxx2\" (UniqueName: \"kubernetes.io/projected/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-kube-api-access-fvxx2\") on node \"crc\" DevicePath \"\"" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.039270 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.039277 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/021ec6a5-260b-477c-93e4-34bfaf2fc552-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.047807 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/021ec6a5-260b-477c-93e4-34bfaf2fc552-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "021ec6a5-260b-477c-93e4-34bfaf2fc552" (UID: "021ec6a5-260b-477c-93e4-34bfaf2fc552"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.142705 4861 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/021ec6a5-260b-477c-93e4-34bfaf2fc552-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.723324 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c88bb756f-fq5dl" event={"ID":"8e705cc4-bb72-4663-b21c-5e1c39a5d6f4","Type":"ContainerStarted","Data":"9a0e77c939316376b9d47c5144adc41dcf1ed905319617b9b9e91460a8fad161"} Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.724145 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c88bb756f-fq5dl" event={"ID":"8e705cc4-bb72-4663-b21c-5e1c39a5d6f4","Type":"ContainerStarted","Data":"bdb56c84fa76f4aebb9ef2b661e5badfbf91304e2b2928ae7a2bd6316f99af71"} Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.724917 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-c88bb756f-fq5dl" podUID="8e705cc4-bb72-4663-b21c-5e1c39a5d6f4" containerName="horizon-log" containerID="cri-o://bdb56c84fa76f4aebb9ef2b661e5badfbf91304e2b2928ae7a2bd6316f99af71" gracePeriod=30 Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.725331 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-c88bb756f-fq5dl" podUID="8e705cc4-bb72-4663-b21c-5e1c39a5d6f4" containerName="horizon" containerID="cri-o://9a0e77c939316376b9d47c5144adc41dcf1ed905319617b9b9e91460a8fad161" gracePeriod=30 Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.732916 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65457d8474-5xm9j" event={"ID":"cad67169-610b-4002-9697-39065316dbd6","Type":"ContainerStarted","Data":"a2eeec93eb03992d501ad67f1a667caf676fee700cd8ef67af99d2ada6e40f92"} Feb 19 14:52:44 crc 
kubenswrapper[4861]: I0219 14:52:44.732961 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65457d8474-5xm9j" event={"ID":"cad67169-610b-4002-9697-39065316dbd6","Type":"ContainerStarted","Data":"eb13856d49ba7a69f1d212405e9c08f97bdf525d9a45c65013a80bda1c6217f5"} Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.739722 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d645b4cf8-qqgr8" event={"ID":"c5d6a437-b768-4701-a69a-1b99fd4f2626","Type":"ContainerStarted","Data":"7ca64dd523d4823b18b23e28010117f55bb377e32b186df029f27efdb4175ced"} Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.739773 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d645b4cf8-qqgr8" event={"ID":"c5d6a437-b768-4701-a69a-1b99fd4f2626","Type":"ContainerStarted","Data":"b31cfd2421fc72d50e0a9647eaa0599eb340c2dd7e67c6eae7b00f130d46564a"} Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.755776 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.755929 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7454d999d9-8tqk5" event={"ID":"4165abed-a9b8-43c9-80c1-42ae474be304","Type":"ContainerStarted","Data":"dcc0083bcfa074d09671fb58dea33512704a3d96209ea0199da3ba165fa52ee3"} Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.755964 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7454d999d9-8tqk5" event={"ID":"4165abed-a9b8-43c9-80c1-42ae474be304","Type":"ContainerStarted","Data":"a970e78e149e55c6f79753a5dca0a0727b1089a08df1a2f025aaeada8afdc49f"} Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.756067 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7454d999d9-8tqk5" podUID="4165abed-a9b8-43c9-80c1-42ae474be304" containerName="horizon-log" containerID="cri-o://a970e78e149e55c6f79753a5dca0a0727b1089a08df1a2f025aaeada8afdc49f" gracePeriod=30 Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.756186 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7454d999d9-8tqk5" podUID="4165abed-a9b8-43c9-80c1-42ae474be304" containerName="horizon" containerID="cri-o://dcc0083bcfa074d09671fb58dea33512704a3d96209ea0199da3ba165fa52ee3" gracePeriod=30 Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.756396 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.782403 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-65457d8474-5xm9j" podStartSLOduration=2.6476398469999998 podStartE2EDuration="8.782380299s" podCreationTimestamp="2026-02-19 14:52:36 +0000 UTC" firstStartedPulling="2026-02-19 14:52:37.758802778 +0000 UTC m=+6172.419905996" lastFinishedPulling="2026-02-19 14:52:43.89354322 +0000 UTC m=+6178.554646448" observedRunningTime="2026-02-19 14:52:44.769702857 +0000 UTC m=+6179.430806085" watchObservedRunningTime="2026-02-19 14:52:44.782380299 +0000 UTC m=+6179.443483547" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.782962 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-c88bb756f-fq5dl" podStartSLOduration=2.132439292 podStartE2EDuration="9.782956864s" podCreationTimestamp="2026-02-19 14:52:35 +0000 UTC" firstStartedPulling="2026-02-19 14:52:36.095194498 +0000 UTC m=+6170.756297736" lastFinishedPulling="2026-02-19 14:52:43.74571204 +0000 UTC m=+6178.406815308" observedRunningTime="2026-02-19 14:52:44.753104549 +0000 UTC m=+6179.414207777" watchObservedRunningTime="2026-02-19 14:52:44.782956864 +0000 UTC m=+6179.444060102" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.798406 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6d645b4cf8-qqgr8" podStartSLOduration=2.842799754 podStartE2EDuration="8.79838094s" podCreationTimestamp="2026-02-19 14:52:36 +0000 UTC" firstStartedPulling="2026-02-19 14:52:37.768696065 +0000 UTC m=+6172.429799293" lastFinishedPulling="2026-02-19 14:52:43.724277211 +0000 UTC m=+6178.385380479" observedRunningTime="2026-02-19 14:52:44.784402273 +0000 UTC m=+6179.445505501" watchObservedRunningTime="2026-02-19 14:52:44.79838094 +0000 UTC m=+6179.459484168" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 
14:52:44.822215 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.840212 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.863763 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7454d999d9-8tqk5" podStartSLOduration=2.843883136 podStartE2EDuration="10.863740565s" podCreationTimestamp="2026-02-19 14:52:34 +0000 UTC" firstStartedPulling="2026-02-19 14:52:35.764592016 +0000 UTC m=+6170.425695244" lastFinishedPulling="2026-02-19 14:52:43.784449425 +0000 UTC m=+6178.445552673" observedRunningTime="2026-02-19 14:52:44.814615949 +0000 UTC m=+6179.475719197" watchObservedRunningTime="2026-02-19 14:52:44.863740565 +0000 UTC m=+6179.524843793" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.883789 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 14:52:44 crc kubenswrapper[4861]: E0219 14:52:44.884437 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf07c1e-8241-487d-99bb-6e4ae9d8cf49" containerName="glance-httpd" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.884537 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf07c1e-8241-487d-99bb-6e4ae9d8cf49" containerName="glance-httpd" Feb 19 14:52:44 crc kubenswrapper[4861]: E0219 14:52:44.884603 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf07c1e-8241-487d-99bb-6e4ae9d8cf49" containerName="glance-log" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.884673 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf07c1e-8241-487d-99bb-6e4ae9d8cf49" containerName="glance-log" Feb 19 14:52:44 crc kubenswrapper[4861]: E0219 14:52:44.884735 4861 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="021ec6a5-260b-477c-93e4-34bfaf2fc552" containerName="glance-log" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.884790 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="021ec6a5-260b-477c-93e4-34bfaf2fc552" containerName="glance-log" Feb 19 14:52:44 crc kubenswrapper[4861]: E0219 14:52:44.884864 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="021ec6a5-260b-477c-93e4-34bfaf2fc552" containerName="glance-httpd" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.884928 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="021ec6a5-260b-477c-93e4-34bfaf2fc552" containerName="glance-httpd" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.885167 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf07c1e-8241-487d-99bb-6e4ae9d8cf49" containerName="glance-httpd" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.885241 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="021ec6a5-260b-477c-93e4-34bfaf2fc552" containerName="glance-log" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.885329 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="021ec6a5-260b-477c-93e4-34bfaf2fc552" containerName="glance-httpd" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.885399 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf07c1e-8241-487d-99bb-6e4ae9d8cf49" containerName="glance-log" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.886580 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.890856 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.891173 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-t7qfz" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.891909 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.892170 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.892410 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.908849 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.933102 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.953167 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.955085 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.957403 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.960277 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 14:52:44 crc kubenswrapper[4861]: I0219 14:52:44.967650 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.067979 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c41b1099-5c94-4231-8dad-7204d5078381-logs\") pod \"glance-default-internal-api-0\" (UID: \"c41b1099-5c94-4231-8dad-7204d5078381\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.068031 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8jch\" (UniqueName: \"kubernetes.io/projected/c41b1099-5c94-4231-8dad-7204d5078381-kube-api-access-l8jch\") pod \"glance-default-internal-api-0\" (UID: \"c41b1099-5c94-4231-8dad-7204d5078381\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.068086 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8237d625-40a8-4b4c-a789-7ca255e19437-logs\") pod \"glance-default-external-api-0\" (UID: \"8237d625-40a8-4b4c-a789-7ca255e19437\") " pod="openstack/glance-default-external-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.068120 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/c41b1099-5c94-4231-8dad-7204d5078381-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c41b1099-5c94-4231-8dad-7204d5078381\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.068142 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41b1099-5c94-4231-8dad-7204d5078381-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c41b1099-5c94-4231-8dad-7204d5078381\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.068172 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz444\" (UniqueName: \"kubernetes.io/projected/8237d625-40a8-4b4c-a789-7ca255e19437-kube-api-access-dz444\") pod \"glance-default-external-api-0\" (UID: \"8237d625-40a8-4b4c-a789-7ca255e19437\") " pod="openstack/glance-default-external-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.068196 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8237d625-40a8-4b4c-a789-7ca255e19437-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8237d625-40a8-4b4c-a789-7ca255e19437\") " pod="openstack/glance-default-external-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.068221 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8237d625-40a8-4b4c-a789-7ca255e19437-scripts\") pod \"glance-default-external-api-0\" (UID: \"8237d625-40a8-4b4c-a789-7ca255e19437\") " pod="openstack/glance-default-external-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.068244 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41b1099-5c94-4231-8dad-7204d5078381-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c41b1099-5c94-4231-8dad-7204d5078381\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.068283 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8237d625-40a8-4b4c-a789-7ca255e19437-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8237d625-40a8-4b4c-a789-7ca255e19437\") " pod="openstack/glance-default-external-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.068327 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c41b1099-5c94-4231-8dad-7204d5078381-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c41b1099-5c94-4231-8dad-7204d5078381\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.068361 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c41b1099-5c94-4231-8dad-7204d5078381-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c41b1099-5c94-4231-8dad-7204d5078381\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.068436 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8237d625-40a8-4b4c-a789-7ca255e19437-config-data\") pod \"glance-default-external-api-0\" (UID: \"8237d625-40a8-4b4c-a789-7ca255e19437\") " pod="openstack/glance-default-external-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.068464 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8237d625-40a8-4b4c-a789-7ca255e19437-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8237d625-40a8-4b4c-a789-7ca255e19437\") " pod="openstack/glance-default-external-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.170517 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz444\" (UniqueName: \"kubernetes.io/projected/8237d625-40a8-4b4c-a789-7ca255e19437-kube-api-access-dz444\") pod \"glance-default-external-api-0\" (UID: \"8237d625-40a8-4b4c-a789-7ca255e19437\") " pod="openstack/glance-default-external-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.170569 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8237d625-40a8-4b4c-a789-7ca255e19437-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8237d625-40a8-4b4c-a789-7ca255e19437\") " pod="openstack/glance-default-external-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.170610 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8237d625-40a8-4b4c-a789-7ca255e19437-scripts\") pod \"glance-default-external-api-0\" (UID: \"8237d625-40a8-4b4c-a789-7ca255e19437\") " pod="openstack/glance-default-external-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.170637 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41b1099-5c94-4231-8dad-7204d5078381-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c41b1099-5c94-4231-8dad-7204d5078381\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.170680 4861 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8237d625-40a8-4b4c-a789-7ca255e19437-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8237d625-40a8-4b4c-a789-7ca255e19437\") " pod="openstack/glance-default-external-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.170715 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c41b1099-5c94-4231-8dad-7204d5078381-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c41b1099-5c94-4231-8dad-7204d5078381\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.170737 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c41b1099-5c94-4231-8dad-7204d5078381-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c41b1099-5c94-4231-8dad-7204d5078381\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.170774 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8237d625-40a8-4b4c-a789-7ca255e19437-config-data\") pod \"glance-default-external-api-0\" (UID: \"8237d625-40a8-4b4c-a789-7ca255e19437\") " pod="openstack/glance-default-external-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.170796 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8237d625-40a8-4b4c-a789-7ca255e19437-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8237d625-40a8-4b4c-a789-7ca255e19437\") " pod="openstack/glance-default-external-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.170873 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c41b1099-5c94-4231-8dad-7204d5078381-logs\") pod \"glance-default-internal-api-0\" (UID: \"c41b1099-5c94-4231-8dad-7204d5078381\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.170895 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8jch\" (UniqueName: \"kubernetes.io/projected/c41b1099-5c94-4231-8dad-7204d5078381-kube-api-access-l8jch\") pod \"glance-default-internal-api-0\" (UID: \"c41b1099-5c94-4231-8dad-7204d5078381\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.170926 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8237d625-40a8-4b4c-a789-7ca255e19437-logs\") pod \"glance-default-external-api-0\" (UID: \"8237d625-40a8-4b4c-a789-7ca255e19437\") " pod="openstack/glance-default-external-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.170949 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c41b1099-5c94-4231-8dad-7204d5078381-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c41b1099-5c94-4231-8dad-7204d5078381\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.170965 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41b1099-5c94-4231-8dad-7204d5078381-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c41b1099-5c94-4231-8dad-7204d5078381\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.171771 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8237d625-40a8-4b4c-a789-7ca255e19437-httpd-run\") 
pod \"glance-default-external-api-0\" (UID: \"8237d625-40a8-4b4c-a789-7ca255e19437\") " pod="openstack/glance-default-external-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.172172 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8237d625-40a8-4b4c-a789-7ca255e19437-logs\") pod \"glance-default-external-api-0\" (UID: \"8237d625-40a8-4b4c-a789-7ca255e19437\") " pod="openstack/glance-default-external-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.172517 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c41b1099-5c94-4231-8dad-7204d5078381-logs\") pod \"glance-default-internal-api-0\" (UID: \"c41b1099-5c94-4231-8dad-7204d5078381\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.172739 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c41b1099-5c94-4231-8dad-7204d5078381-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c41b1099-5c94-4231-8dad-7204d5078381\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.181203 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8237d625-40a8-4b4c-a789-7ca255e19437-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8237d625-40a8-4b4c-a789-7ca255e19437\") " pod="openstack/glance-default-external-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.181740 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8237d625-40a8-4b4c-a789-7ca255e19437-config-data\") pod \"glance-default-external-api-0\" (UID: \"8237d625-40a8-4b4c-a789-7ca255e19437\") " pod="openstack/glance-default-external-api-0" Feb 
19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.183414 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41b1099-5c94-4231-8dad-7204d5078381-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c41b1099-5c94-4231-8dad-7204d5078381\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.183624 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c41b1099-5c94-4231-8dad-7204d5078381-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c41b1099-5c94-4231-8dad-7204d5078381\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.184229 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8237d625-40a8-4b4c-a789-7ca255e19437-scripts\") pod \"glance-default-external-api-0\" (UID: \"8237d625-40a8-4b4c-a789-7ca255e19437\") " pod="openstack/glance-default-external-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.184300 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8237d625-40a8-4b4c-a789-7ca255e19437-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8237d625-40a8-4b4c-a789-7ca255e19437\") " pod="openstack/glance-default-external-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.184517 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41b1099-5c94-4231-8dad-7204d5078381-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c41b1099-5c94-4231-8dad-7204d5078381\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.184803 4861 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c41b1099-5c94-4231-8dad-7204d5078381-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c41b1099-5c94-4231-8dad-7204d5078381\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.188549 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz444\" (UniqueName: \"kubernetes.io/projected/8237d625-40a8-4b4c-a789-7ca255e19437-kube-api-access-dz444\") pod \"glance-default-external-api-0\" (UID: \"8237d625-40a8-4b4c-a789-7ca255e19437\") " pod="openstack/glance-default-external-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.192994 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8jch\" (UniqueName: \"kubernetes.io/projected/c41b1099-5c94-4231-8dad-7204d5078381-kube-api-access-l8jch\") pod \"glance-default-internal-api-0\" (UID: \"c41b1099-5c94-4231-8dad-7204d5078381\") " pod="openstack/glance-default-internal-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.212597 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.271197 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.293527 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7454d999d9-8tqk5" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.580530 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-c88bb756f-fq5dl" Feb 19 14:52:45 crc kubenswrapper[4861]: I0219 14:52:45.793790 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 14:52:46 crc kubenswrapper[4861]: I0219 14:52:46.038302 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="021ec6a5-260b-477c-93e4-34bfaf2fc552" path="/var/lib/kubelet/pods/021ec6a5-260b-477c-93e4-34bfaf2fc552/volumes" Feb 19 14:52:46 crc kubenswrapper[4861]: I0219 14:52:46.039511 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cf07c1e-8241-487d-99bb-6e4ae9d8cf49" path="/var/lib/kubelet/pods/0cf07c1e-8241-487d-99bb-6e4ae9d8cf49/volumes" Feb 19 14:52:46 crc kubenswrapper[4861]: I0219 14:52:46.044011 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 14:52:46 crc kubenswrapper[4861]: I0219 14:52:46.782684 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c41b1099-5c94-4231-8dad-7204d5078381","Type":"ContainerStarted","Data":"60d2b4786ddf01968a21da898ba9c2a9be2a53f375d2a2161108351cef338200"} Feb 19 14:52:46 crc kubenswrapper[4861]: I0219 14:52:46.783086 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c41b1099-5c94-4231-8dad-7204d5078381","Type":"ContainerStarted","Data":"78474b94a74580b0add42547057509e511e395666e165a532199d82a050f90df"} Feb 19 14:52:46 crc kubenswrapper[4861]: I0219 14:52:46.784247 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"8237d625-40a8-4b4c-a789-7ca255e19437","Type":"ContainerStarted","Data":"ae6c3e8c60b26d86f9a4c08bfc108b1e4a059dd8876d563691dd0f1c2b81dd82"} Feb 19 14:52:46 crc kubenswrapper[4861]: I0219 14:52:46.784287 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8237d625-40a8-4b4c-a789-7ca255e19437","Type":"ContainerStarted","Data":"ac12fbec0cfdb79903545be2d6c23c8d5cf02d7833ca27fd33c3f1939a06d8c5"} Feb 19 14:52:47 crc kubenswrapper[4861]: I0219 14:52:47.158850 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-65457d8474-5xm9j" Feb 19 14:52:47 crc kubenswrapper[4861]: I0219 14:52:47.159870 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-65457d8474-5xm9j" Feb 19 14:52:47 crc kubenswrapper[4861]: I0219 14:52:47.249072 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6d645b4cf8-qqgr8" Feb 19 14:52:47 crc kubenswrapper[4861]: I0219 14:52:47.249458 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6d645b4cf8-qqgr8" Feb 19 14:52:47 crc kubenswrapper[4861]: I0219 14:52:47.798963 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8237d625-40a8-4b4c-a789-7ca255e19437","Type":"ContainerStarted","Data":"6ce24f5f38d77bc464096978fe2bcd88c83d144747d260e1f813462340e27ad6"} Feb 19 14:52:47 crc kubenswrapper[4861]: I0219 14:52:47.802053 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c41b1099-5c94-4231-8dad-7204d5078381","Type":"ContainerStarted","Data":"3269983b3733034d3ed2f21789e19fc90e95ac52aff774900dd71b366d369f20"} Feb 19 14:52:47 crc kubenswrapper[4861]: I0219 14:52:47.821536 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-default-external-api-0" podStartSLOduration=3.821516172 podStartE2EDuration="3.821516172s" podCreationTimestamp="2026-02-19 14:52:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:52:47.816642401 +0000 UTC m=+6182.477745629" watchObservedRunningTime="2026-02-19 14:52:47.821516172 +0000 UTC m=+6182.482619420" Feb 19 14:52:47 crc kubenswrapper[4861]: I0219 14:52:47.848600 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.848578943 podStartE2EDuration="3.848578943s" podCreationTimestamp="2026-02-19 14:52:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:52:47.837251498 +0000 UTC m=+6182.498354726" watchObservedRunningTime="2026-02-19 14:52:47.848578943 +0000 UTC m=+6182.509682181" Feb 19 14:52:55 crc kubenswrapper[4861]: I0219 14:52:55.213149 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 14:52:55 crc kubenswrapper[4861]: I0219 14:52:55.213779 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 14:52:55 crc kubenswrapper[4861]: I0219 14:52:55.266823 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 14:52:55 crc kubenswrapper[4861]: I0219 14:52:55.278807 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 14:52:55 crc kubenswrapper[4861]: I0219 14:52:55.278875 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 14:52:55 crc kubenswrapper[4861]: I0219 14:52:55.286776 4861 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 14:52:55 crc kubenswrapper[4861]: I0219 14:52:55.341157 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 14:52:55 crc kubenswrapper[4861]: I0219 14:52:55.346909 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 14:52:55 crc kubenswrapper[4861]: I0219 14:52:55.904206 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 14:52:55 crc kubenswrapper[4861]: I0219 14:52:55.904241 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 14:52:55 crc kubenswrapper[4861]: I0219 14:52:55.904251 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 14:52:55 crc kubenswrapper[4861]: I0219 14:52:55.904393 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 14:52:56 crc kubenswrapper[4861]: I0219 14:52:56.003084 4861 scope.go:117] "RemoveContainer" containerID="502a4549a2baa2b1572925937a0eb20e1bd1517278b2629c4572ef8e5d2f113d" Feb 19 14:52:56 crc kubenswrapper[4861]: E0219 14:52:56.003385 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:52:57 crc kubenswrapper[4861]: I0219 14:52:57.160868 4861 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/horizon-65457d8474-5xm9j" podUID="cad67169-610b-4002-9697-39065316dbd6" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.124:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.124:8443: connect: connection refused" Feb 19 14:52:57 crc kubenswrapper[4861]: I0219 14:52:57.251237 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6d645b4cf8-qqgr8" podUID="c5d6a437-b768-4701-a69a-1b99fd4f2626" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.125:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.125:8443: connect: connection refused" Feb 19 14:52:57 crc kubenswrapper[4861]: I0219 14:52:57.730946 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 14:52:57 crc kubenswrapper[4861]: I0219 14:52:57.921645 4861 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 14:52:58 crc kubenswrapper[4861]: I0219 14:52:58.011497 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 14:52:58 crc kubenswrapper[4861]: I0219 14:52:58.011589 4861 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 14:52:58 crc kubenswrapper[4861]: I0219 14:52:58.015863 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 14:52:58 crc kubenswrapper[4861]: I0219 14:52:58.060189 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 14:53:05 crc kubenswrapper[4861]: I0219 14:53:05.724772 4861 scope.go:117] "RemoveContainer" containerID="3abef79a76ef88b984f328555284e34f3b9e552293c9b1184be99ecfdd4fc45f" Feb 19 14:53:05 crc kubenswrapper[4861]: I0219 14:53:05.760025 4861 scope.go:117] "RemoveContainer" 
containerID="f61f82d84740657f7504dc964d2bc873d6cfe36619885f4f394b61252e48278f" Feb 19 14:53:05 crc kubenswrapper[4861]: I0219 14:53:05.788324 4861 scope.go:117] "RemoveContainer" containerID="05ac8e3ded612160ff753b5e1d6f51b9868a7db4c6ce3e6132674a1bf7f7bb3f" Feb 19 14:53:05 crc kubenswrapper[4861]: I0219 14:53:05.820457 4861 scope.go:117] "RemoveContainer" containerID="2a7c8d29dd409300d78efabce9b9a3aac8ae88905d33d01cc032e2eb1b0f3998" Feb 19 14:53:05 crc kubenswrapper[4861]: I0219 14:53:05.863622 4861 scope.go:117] "RemoveContainer" containerID="f2abd28c71a3dde201ea408a183b5d29c7ce3cff3a661bae175bb6817eddb0be" Feb 19 14:53:05 crc kubenswrapper[4861]: I0219 14:53:05.912666 4861 scope.go:117] "RemoveContainer" containerID="0cfc31d75d350cf6d4d224794857aecaadaa916a2b4482c792794e647e04fa99" Feb 19 14:53:05 crc kubenswrapper[4861]: I0219 14:53:05.949864 4861 scope.go:117] "RemoveContainer" containerID="26750336b1400e71f39efa747560ad7e2d3a1a6aee23d9c2d75663821e9b5571" Feb 19 14:53:08 crc kubenswrapper[4861]: I0219 14:53:08.073615 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-pn49r"] Feb 19 14:53:08 crc kubenswrapper[4861]: I0219 14:53:08.097335 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b5a5-account-create-update-5vbwj"] Feb 19 14:53:08 crc kubenswrapper[4861]: I0219 14:53:08.112904 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-pn49r"] Feb 19 14:53:08 crc kubenswrapper[4861]: I0219 14:53:08.123007 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-b5a5-account-create-update-5vbwj"] Feb 19 14:53:08 crc kubenswrapper[4861]: I0219 14:53:08.821350 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-65457d8474-5xm9j" Feb 19 14:53:09 crc kubenswrapper[4861]: I0219 14:53:09.072740 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6d645b4cf8-qqgr8" 
Feb 19 14:53:09 crc kubenswrapper[4861]: I0219 14:53:09.988413 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="050b6f04-ae24-4a20-9e72-e46486c55baf" path="/var/lib/kubelet/pods/050b6f04-ae24-4a20-9e72-e46486c55baf/volumes" Feb 19 14:53:09 crc kubenswrapper[4861]: I0219 14:53:09.989710 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7aa1b35-d0af-428e-9801-e73311fa6e9c" path="/var/lib/kubelet/pods/c7aa1b35-d0af-428e-9801-e73311fa6e9c/volumes" Feb 19 14:53:10 crc kubenswrapper[4861]: I0219 14:53:10.445053 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-65457d8474-5xm9j" Feb 19 14:53:10 crc kubenswrapper[4861]: I0219 14:53:10.823572 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6d645b4cf8-qqgr8" Feb 19 14:53:10 crc kubenswrapper[4861]: I0219 14:53:10.916169 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-65457d8474-5xm9j"] Feb 19 14:53:10 crc kubenswrapper[4861]: I0219 14:53:10.978431 4861 scope.go:117] "RemoveContainer" containerID="502a4549a2baa2b1572925937a0eb20e1bd1517278b2629c4572ef8e5d2f113d" Feb 19 14:53:10 crc kubenswrapper[4861]: E0219 14:53:10.978937 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:53:11 crc kubenswrapper[4861]: I0219 14:53:11.086912 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-65457d8474-5xm9j" podUID="cad67169-610b-4002-9697-39065316dbd6" containerName="horizon-log" 
containerID="cri-o://eb13856d49ba7a69f1d212405e9c08f97bdf525d9a45c65013a80bda1c6217f5" gracePeriod=30 Feb 19 14:53:11 crc kubenswrapper[4861]: I0219 14:53:11.087052 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-65457d8474-5xm9j" podUID="cad67169-610b-4002-9697-39065316dbd6" containerName="horizon" containerID="cri-o://a2eeec93eb03992d501ad67f1a667caf676fee700cd8ef67af99d2ada6e40f92" gracePeriod=30 Feb 19 14:53:15 crc kubenswrapper[4861]: I0219 14:53:15.185954 4861 generic.go:334] "Generic (PLEG): container finished" podID="8e705cc4-bb72-4663-b21c-5e1c39a5d6f4" containerID="9a0e77c939316376b9d47c5144adc41dcf1ed905319617b9b9e91460a8fad161" exitCode=137 Feb 19 14:53:15 crc kubenswrapper[4861]: I0219 14:53:15.186559 4861 generic.go:334] "Generic (PLEG): container finished" podID="8e705cc4-bb72-4663-b21c-5e1c39a5d6f4" containerID="bdb56c84fa76f4aebb9ef2b661e5badfbf91304e2b2928ae7a2bd6316f99af71" exitCode=137 Feb 19 14:53:15 crc kubenswrapper[4861]: I0219 14:53:15.186029 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c88bb756f-fq5dl" event={"ID":"8e705cc4-bb72-4663-b21c-5e1c39a5d6f4","Type":"ContainerDied","Data":"9a0e77c939316376b9d47c5144adc41dcf1ed905319617b9b9e91460a8fad161"} Feb 19 14:53:15 crc kubenswrapper[4861]: I0219 14:53:15.186678 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c88bb756f-fq5dl" event={"ID":"8e705cc4-bb72-4663-b21c-5e1c39a5d6f4","Type":"ContainerDied","Data":"bdb56c84fa76f4aebb9ef2b661e5badfbf91304e2b2928ae7a2bd6316f99af71"} Feb 19 14:53:15 crc kubenswrapper[4861]: I0219 14:53:15.189465 4861 generic.go:334] "Generic (PLEG): container finished" podID="cad67169-610b-4002-9697-39065316dbd6" containerID="a2eeec93eb03992d501ad67f1a667caf676fee700cd8ef67af99d2ada6e40f92" exitCode=0 Feb 19 14:53:15 crc kubenswrapper[4861]: I0219 14:53:15.189515 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65457d8474-5xm9j" 
event={"ID":"cad67169-610b-4002-9697-39065316dbd6","Type":"ContainerDied","Data":"a2eeec93eb03992d501ad67f1a667caf676fee700cd8ef67af99d2ada6e40f92"} Feb 19 14:53:15 crc kubenswrapper[4861]: I0219 14:53:15.191535 4861 generic.go:334] "Generic (PLEG): container finished" podID="4165abed-a9b8-43c9-80c1-42ae474be304" containerID="dcc0083bcfa074d09671fb58dea33512704a3d96209ea0199da3ba165fa52ee3" exitCode=137 Feb 19 14:53:15 crc kubenswrapper[4861]: I0219 14:53:15.191553 4861 generic.go:334] "Generic (PLEG): container finished" podID="4165abed-a9b8-43c9-80c1-42ae474be304" containerID="a970e78e149e55c6f79753a5dca0a0727b1089a08df1a2f025aaeada8afdc49f" exitCode=137 Feb 19 14:53:15 crc kubenswrapper[4861]: I0219 14:53:15.191568 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7454d999d9-8tqk5" event={"ID":"4165abed-a9b8-43c9-80c1-42ae474be304","Type":"ContainerDied","Data":"dcc0083bcfa074d09671fb58dea33512704a3d96209ea0199da3ba165fa52ee3"} Feb 19 14:53:15 crc kubenswrapper[4861]: I0219 14:53:15.191602 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7454d999d9-8tqk5" event={"ID":"4165abed-a9b8-43c9-80c1-42ae474be304","Type":"ContainerDied","Data":"a970e78e149e55c6f79753a5dca0a0727b1089a08df1a2f025aaeada8afdc49f"} Feb 19 14:53:15 crc kubenswrapper[4861]: I0219 14:53:15.391934 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c88bb756f-fq5dl" Feb 19 14:53:15 crc kubenswrapper[4861]: I0219 14:53:15.398129 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7454d999d9-8tqk5" Feb 19 14:53:15 crc kubenswrapper[4861]: I0219 14:53:15.505987 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4165abed-a9b8-43c9-80c1-42ae474be304-horizon-secret-key\") pod \"4165abed-a9b8-43c9-80c1-42ae474be304\" (UID: \"4165abed-a9b8-43c9-80c1-42ae474be304\") " Feb 19 14:53:15 crc kubenswrapper[4861]: I0219 14:53:15.506086 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4165abed-a9b8-43c9-80c1-42ae474be304-config-data\") pod \"4165abed-a9b8-43c9-80c1-42ae474be304\" (UID: \"4165abed-a9b8-43c9-80c1-42ae474be304\") " Feb 19 14:53:15 crc kubenswrapper[4861]: I0219 14:53:15.506169 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk4vk\" (UniqueName: \"kubernetes.io/projected/4165abed-a9b8-43c9-80c1-42ae474be304-kube-api-access-jk4vk\") pod \"4165abed-a9b8-43c9-80c1-42ae474be304\" (UID: \"4165abed-a9b8-43c9-80c1-42ae474be304\") " Feb 19 14:53:15 crc kubenswrapper[4861]: I0219 14:53:15.506283 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4165abed-a9b8-43c9-80c1-42ae474be304-scripts\") pod \"4165abed-a9b8-43c9-80c1-42ae474be304\" (UID: \"4165abed-a9b8-43c9-80c1-42ae474be304\") " Feb 19 14:53:15 crc kubenswrapper[4861]: I0219 14:53:15.506315 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e705cc4-bb72-4663-b21c-5e1c39a5d6f4-config-data\") pod \"8e705cc4-bb72-4663-b21c-5e1c39a5d6f4\" (UID: \"8e705cc4-bb72-4663-b21c-5e1c39a5d6f4\") " Feb 19 14:53:15 crc kubenswrapper[4861]: I0219 14:53:15.506340 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/8e705cc4-bb72-4663-b21c-5e1c39a5d6f4-scripts\") pod \"8e705cc4-bb72-4663-b21c-5e1c39a5d6f4\" (UID: \"8e705cc4-bb72-4663-b21c-5e1c39a5d6f4\") " Feb 19 14:53:15 crc kubenswrapper[4861]: I0219 14:53:15.506468 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4165abed-a9b8-43c9-80c1-42ae474be304-logs\") pod \"4165abed-a9b8-43c9-80c1-42ae474be304\" (UID: \"4165abed-a9b8-43c9-80c1-42ae474be304\") " Feb 19 14:53:15 crc kubenswrapper[4861]: I0219 14:53:15.506495 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e705cc4-bb72-4663-b21c-5e1c39a5d6f4-logs\") pod \"8e705cc4-bb72-4663-b21c-5e1c39a5d6f4\" (UID: \"8e705cc4-bb72-4663-b21c-5e1c39a5d6f4\") " Feb 19 14:53:15 crc kubenswrapper[4861]: I0219 14:53:15.506510 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb2k8\" (UniqueName: \"kubernetes.io/projected/8e705cc4-bb72-4663-b21c-5e1c39a5d6f4-kube-api-access-jb2k8\") pod \"8e705cc4-bb72-4663-b21c-5e1c39a5d6f4\" (UID: \"8e705cc4-bb72-4663-b21c-5e1c39a5d6f4\") " Feb 19 14:53:15 crc kubenswrapper[4861]: I0219 14:53:15.506531 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8e705cc4-bb72-4663-b21c-5e1c39a5d6f4-horizon-secret-key\") pod \"8e705cc4-bb72-4663-b21c-5e1c39a5d6f4\" (UID: \"8e705cc4-bb72-4663-b21c-5e1c39a5d6f4\") " Feb 19 14:53:15 crc kubenswrapper[4861]: I0219 14:53:15.507702 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e705cc4-bb72-4663-b21c-5e1c39a5d6f4-logs" (OuterVolumeSpecName: "logs") pod "8e705cc4-bb72-4663-b21c-5e1c39a5d6f4" (UID: "8e705cc4-bb72-4663-b21c-5e1c39a5d6f4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:53:15 crc kubenswrapper[4861]: I0219 14:53:15.512378 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4165abed-a9b8-43c9-80c1-42ae474be304-logs" (OuterVolumeSpecName: "logs") pod "4165abed-a9b8-43c9-80c1-42ae474be304" (UID: "4165abed-a9b8-43c9-80c1-42ae474be304"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:53:15 crc kubenswrapper[4861]: I0219 14:53:15.516994 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4165abed-a9b8-43c9-80c1-42ae474be304-kube-api-access-jk4vk" (OuterVolumeSpecName: "kube-api-access-jk4vk") pod "4165abed-a9b8-43c9-80c1-42ae474be304" (UID: "4165abed-a9b8-43c9-80c1-42ae474be304"). InnerVolumeSpecName "kube-api-access-jk4vk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:53:15 crc kubenswrapper[4861]: I0219 14:53:15.524889 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e705cc4-bb72-4663-b21c-5e1c39a5d6f4-kube-api-access-jb2k8" (OuterVolumeSpecName: "kube-api-access-jb2k8") pod "8e705cc4-bb72-4663-b21c-5e1c39a5d6f4" (UID: "8e705cc4-bb72-4663-b21c-5e1c39a5d6f4"). InnerVolumeSpecName "kube-api-access-jb2k8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:53:15 crc kubenswrapper[4861]: I0219 14:53:15.525634 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e705cc4-bb72-4663-b21c-5e1c39a5d6f4-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "8e705cc4-bb72-4663-b21c-5e1c39a5d6f4" (UID: "8e705cc4-bb72-4663-b21c-5e1c39a5d6f4"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:53:15 crc kubenswrapper[4861]: I0219 14:53:15.529886 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4165abed-a9b8-43c9-80c1-42ae474be304-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "4165abed-a9b8-43c9-80c1-42ae474be304" (UID: "4165abed-a9b8-43c9-80c1-42ae474be304"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:53:15 crc kubenswrapper[4861]: I0219 14:53:15.538452 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e705cc4-bb72-4663-b21c-5e1c39a5d6f4-scripts" (OuterVolumeSpecName: "scripts") pod "8e705cc4-bb72-4663-b21c-5e1c39a5d6f4" (UID: "8e705cc4-bb72-4663-b21c-5e1c39a5d6f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:53:15 crc kubenswrapper[4861]: I0219 14:53:15.540591 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4165abed-a9b8-43c9-80c1-42ae474be304-config-data" (OuterVolumeSpecName: "config-data") pod "4165abed-a9b8-43c9-80c1-42ae474be304" (UID: "4165abed-a9b8-43c9-80c1-42ae474be304"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:53:15 crc kubenswrapper[4861]: I0219 14:53:15.548466 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e705cc4-bb72-4663-b21c-5e1c39a5d6f4-config-data" (OuterVolumeSpecName: "config-data") pod "8e705cc4-bb72-4663-b21c-5e1c39a5d6f4" (UID: "8e705cc4-bb72-4663-b21c-5e1c39a5d6f4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:53:15 crc kubenswrapper[4861]: I0219 14:53:15.561990 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4165abed-a9b8-43c9-80c1-42ae474be304-scripts" (OuterVolumeSpecName: "scripts") pod "4165abed-a9b8-43c9-80c1-42ae474be304" (UID: "4165abed-a9b8-43c9-80c1-42ae474be304"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:53:16 crc kubenswrapper[4861]: I0219 14:53:15.609825 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4165abed-a9b8-43c9-80c1-42ae474be304-logs\") on node \"crc\" DevicePath \"\"" Feb 19 14:53:16 crc kubenswrapper[4861]: I0219 14:53:15.609872 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb2k8\" (UniqueName: \"kubernetes.io/projected/8e705cc4-bb72-4663-b21c-5e1c39a5d6f4-kube-api-access-jb2k8\") on node \"crc\" DevicePath \"\"" Feb 19 14:53:16 crc kubenswrapper[4861]: I0219 14:53:15.609897 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e705cc4-bb72-4663-b21c-5e1c39a5d6f4-logs\") on node \"crc\" DevicePath \"\"" Feb 19 14:53:16 crc kubenswrapper[4861]: I0219 14:53:15.609915 4861 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8e705cc4-bb72-4663-b21c-5e1c39a5d6f4-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 19 14:53:16 crc kubenswrapper[4861]: I0219 14:53:15.609932 4861 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4165abed-a9b8-43c9-80c1-42ae474be304-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 19 14:53:16 crc kubenswrapper[4861]: I0219 14:53:15.609947 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/4165abed-a9b8-43c9-80c1-42ae474be304-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:53:16 crc kubenswrapper[4861]: I0219 14:53:15.609969 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk4vk\" (UniqueName: \"kubernetes.io/projected/4165abed-a9b8-43c9-80c1-42ae474be304-kube-api-access-jk4vk\") on node \"crc\" DevicePath \"\"" Feb 19 14:53:16 crc kubenswrapper[4861]: I0219 14:53:15.609986 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4165abed-a9b8-43c9-80c1-42ae474be304-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:53:16 crc kubenswrapper[4861]: I0219 14:53:15.610004 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e705cc4-bb72-4663-b21c-5e1c39a5d6f4-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:53:16 crc kubenswrapper[4861]: I0219 14:53:15.610020 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e705cc4-bb72-4663-b21c-5e1c39a5d6f4-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:53:16 crc kubenswrapper[4861]: I0219 14:53:16.206185 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c88bb756f-fq5dl" event={"ID":"8e705cc4-bb72-4663-b21c-5e1c39a5d6f4","Type":"ContainerDied","Data":"ae1fcfa773c5ca58a10c36d055482cd36f7b71090b7ea85d95369fdd2f9b7c87"} Feb 19 14:53:16 crc kubenswrapper[4861]: I0219 14:53:16.206573 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-c88bb756f-fq5dl" Feb 19 14:53:16 crc kubenswrapper[4861]: I0219 14:53:16.206605 4861 scope.go:117] "RemoveContainer" containerID="9a0e77c939316376b9d47c5144adc41dcf1ed905319617b9b9e91460a8fad161" Feb 19 14:53:16 crc kubenswrapper[4861]: I0219 14:53:16.211654 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7454d999d9-8tqk5" event={"ID":"4165abed-a9b8-43c9-80c1-42ae474be304","Type":"ContainerDied","Data":"cafbadd4f1d4958e0f373997955c13cb4e651a34b46220301ef9b1745c79eb69"} Feb 19 14:53:16 crc kubenswrapper[4861]: I0219 14:53:16.212044 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7454d999d9-8tqk5" Feb 19 14:53:16 crc kubenswrapper[4861]: I0219 14:53:16.269749 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-c88bb756f-fq5dl"] Feb 19 14:53:16 crc kubenswrapper[4861]: I0219 14:53:16.281578 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-c88bb756f-fq5dl"] Feb 19 14:53:16 crc kubenswrapper[4861]: I0219 14:53:16.292020 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7454d999d9-8tqk5"] Feb 19 14:53:16 crc kubenswrapper[4861]: I0219 14:53:16.302370 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7454d999d9-8tqk5"] Feb 19 14:53:16 crc kubenswrapper[4861]: I0219 14:53:16.414212 4861 scope.go:117] "RemoveContainer" containerID="bdb56c84fa76f4aebb9ef2b661e5badfbf91304e2b2928ae7a2bd6316f99af71" Feb 19 14:53:16 crc kubenswrapper[4861]: I0219 14:53:16.450887 4861 scope.go:117] "RemoveContainer" containerID="dcc0083bcfa074d09671fb58dea33512704a3d96209ea0199da3ba165fa52ee3" Feb 19 14:53:16 crc kubenswrapper[4861]: I0219 14:53:16.665703 4861 scope.go:117] "RemoveContainer" containerID="a970e78e149e55c6f79753a5dca0a0727b1089a08df1a2f025aaeada8afdc49f" Feb 19 14:53:17 crc kubenswrapper[4861]: I0219 14:53:17.158711 4861 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openstack/horizon-65457d8474-5xm9j" podUID="cad67169-610b-4002-9697-39065316dbd6" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.124:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.124:8443: connect: connection refused" Feb 19 14:53:17 crc kubenswrapper[4861]: I0219 14:53:17.996255 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4165abed-a9b8-43c9-80c1-42ae474be304" path="/var/lib/kubelet/pods/4165abed-a9b8-43c9-80c1-42ae474be304/volumes" Feb 19 14:53:17 crc kubenswrapper[4861]: I0219 14:53:17.998274 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e705cc4-bb72-4663-b21c-5e1c39a5d6f4" path="/var/lib/kubelet/pods/8e705cc4-bb72-4663-b21c-5e1c39a5d6f4/volumes" Feb 19 14:53:18 crc kubenswrapper[4861]: I0219 14:53:18.070560 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-tp684"] Feb 19 14:53:18 crc kubenswrapper[4861]: I0219 14:53:18.084061 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-tp684"] Feb 19 14:53:20 crc kubenswrapper[4861]: I0219 14:53:20.003052 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cbdc791-bf33-404f-a1ef-179edd71b787" path="/var/lib/kubelet/pods/0cbdc791-bf33-404f-a1ef-179edd71b787/volumes" Feb 19 14:53:25 crc kubenswrapper[4861]: I0219 14:53:25.984553 4861 scope.go:117] "RemoveContainer" containerID="502a4549a2baa2b1572925937a0eb20e1bd1517278b2629c4572ef8e5d2f113d" Feb 19 14:53:25 crc kubenswrapper[4861]: E0219 14:53:25.986356 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" 
podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:53:27 crc kubenswrapper[4861]: I0219 14:53:27.159091 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-65457d8474-5xm9j" podUID="cad67169-610b-4002-9697-39065316dbd6" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.124:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.124:8443: connect: connection refused" Feb 19 14:53:37 crc kubenswrapper[4861]: I0219 14:53:37.160554 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-65457d8474-5xm9j" podUID="cad67169-610b-4002-9697-39065316dbd6" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.124:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.124:8443: connect: connection refused" Feb 19 14:53:37 crc kubenswrapper[4861]: I0219 14:53:37.161375 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-65457d8474-5xm9j" Feb 19 14:53:39 crc kubenswrapper[4861]: I0219 14:53:39.977394 4861 scope.go:117] "RemoveContainer" containerID="502a4549a2baa2b1572925937a0eb20e1bd1517278b2629c4572ef8e5d2f113d" Feb 19 14:53:39 crc kubenswrapper[4861]: E0219 14:53:39.978455 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:53:41 crc kubenswrapper[4861]: I0219 14:53:41.514787 4861 generic.go:334] "Generic (PLEG): container finished" podID="cad67169-610b-4002-9697-39065316dbd6" containerID="eb13856d49ba7a69f1d212405e9c08f97bdf525d9a45c65013a80bda1c6217f5" exitCode=137 Feb 19 14:53:41 crc kubenswrapper[4861]: I0219 14:53:41.514876 
4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65457d8474-5xm9j" event={"ID":"cad67169-610b-4002-9697-39065316dbd6","Type":"ContainerDied","Data":"eb13856d49ba7a69f1d212405e9c08f97bdf525d9a45c65013a80bda1c6217f5"} Feb 19 14:53:41 crc kubenswrapper[4861]: I0219 14:53:41.636952 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65457d8474-5xm9j" Feb 19 14:53:41 crc kubenswrapper[4861]: I0219 14:53:41.715500 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cad67169-610b-4002-9697-39065316dbd6-logs\") pod \"cad67169-610b-4002-9697-39065316dbd6\" (UID: \"cad67169-610b-4002-9697-39065316dbd6\") " Feb 19 14:53:41 crc kubenswrapper[4861]: I0219 14:53:41.715562 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cad67169-610b-4002-9697-39065316dbd6-horizon-secret-key\") pod \"cad67169-610b-4002-9697-39065316dbd6\" (UID: \"cad67169-610b-4002-9697-39065316dbd6\") " Feb 19 14:53:41 crc kubenswrapper[4861]: I0219 14:53:41.715674 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/cad67169-610b-4002-9697-39065316dbd6-horizon-tls-certs\") pod \"cad67169-610b-4002-9697-39065316dbd6\" (UID: \"cad67169-610b-4002-9697-39065316dbd6\") " Feb 19 14:53:41 crc kubenswrapper[4861]: I0219 14:53:41.715753 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cad67169-610b-4002-9697-39065316dbd6-scripts\") pod \"cad67169-610b-4002-9697-39065316dbd6\" (UID: \"cad67169-610b-4002-9697-39065316dbd6\") " Feb 19 14:53:41 crc kubenswrapper[4861]: I0219 14:53:41.715779 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpk52\" 
(UniqueName: \"kubernetes.io/projected/cad67169-610b-4002-9697-39065316dbd6-kube-api-access-wpk52\") pod \"cad67169-610b-4002-9697-39065316dbd6\" (UID: \"cad67169-610b-4002-9697-39065316dbd6\") " Feb 19 14:53:41 crc kubenswrapper[4861]: I0219 14:53:41.715844 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cad67169-610b-4002-9697-39065316dbd6-config-data\") pod \"cad67169-610b-4002-9697-39065316dbd6\" (UID: \"cad67169-610b-4002-9697-39065316dbd6\") " Feb 19 14:53:41 crc kubenswrapper[4861]: I0219 14:53:41.715916 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cad67169-610b-4002-9697-39065316dbd6-combined-ca-bundle\") pod \"cad67169-610b-4002-9697-39065316dbd6\" (UID: \"cad67169-610b-4002-9697-39065316dbd6\") " Feb 19 14:53:41 crc kubenswrapper[4861]: I0219 14:53:41.716407 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cad67169-610b-4002-9697-39065316dbd6-logs" (OuterVolumeSpecName: "logs") pod "cad67169-610b-4002-9697-39065316dbd6" (UID: "cad67169-610b-4002-9697-39065316dbd6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:53:41 crc kubenswrapper[4861]: I0219 14:53:41.716889 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cad67169-610b-4002-9697-39065316dbd6-logs\") on node \"crc\" DevicePath \"\"" Feb 19 14:53:41 crc kubenswrapper[4861]: I0219 14:53:41.723702 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cad67169-610b-4002-9697-39065316dbd6-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "cad67169-610b-4002-9697-39065316dbd6" (UID: "cad67169-610b-4002-9697-39065316dbd6"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:53:41 crc kubenswrapper[4861]: I0219 14:53:41.724757 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cad67169-610b-4002-9697-39065316dbd6-kube-api-access-wpk52" (OuterVolumeSpecName: "kube-api-access-wpk52") pod "cad67169-610b-4002-9697-39065316dbd6" (UID: "cad67169-610b-4002-9697-39065316dbd6"). InnerVolumeSpecName "kube-api-access-wpk52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:53:41 crc kubenswrapper[4861]: I0219 14:53:41.754593 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cad67169-610b-4002-9697-39065316dbd6-scripts" (OuterVolumeSpecName: "scripts") pod "cad67169-610b-4002-9697-39065316dbd6" (UID: "cad67169-610b-4002-9697-39065316dbd6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:53:41 crc kubenswrapper[4861]: I0219 14:53:41.764535 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cad67169-610b-4002-9697-39065316dbd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cad67169-610b-4002-9697-39065316dbd6" (UID: "cad67169-610b-4002-9697-39065316dbd6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:53:41 crc kubenswrapper[4861]: I0219 14:53:41.769811 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cad67169-610b-4002-9697-39065316dbd6-config-data" (OuterVolumeSpecName: "config-data") pod "cad67169-610b-4002-9697-39065316dbd6" (UID: "cad67169-610b-4002-9697-39065316dbd6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:53:41 crc kubenswrapper[4861]: I0219 14:53:41.795718 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cad67169-610b-4002-9697-39065316dbd6-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "cad67169-610b-4002-9697-39065316dbd6" (UID: "cad67169-610b-4002-9697-39065316dbd6"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:53:41 crc kubenswrapper[4861]: I0219 14:53:41.821253 4861 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/cad67169-610b-4002-9697-39065316dbd6-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 14:53:41 crc kubenswrapper[4861]: I0219 14:53:41.821319 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cad67169-610b-4002-9697-39065316dbd6-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:53:41 crc kubenswrapper[4861]: I0219 14:53:41.821332 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpk52\" (UniqueName: \"kubernetes.io/projected/cad67169-610b-4002-9697-39065316dbd6-kube-api-access-wpk52\") on node \"crc\" DevicePath \"\"" Feb 19 14:53:41 crc kubenswrapper[4861]: I0219 14:53:41.821344 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cad67169-610b-4002-9697-39065316dbd6-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:53:41 crc kubenswrapper[4861]: I0219 14:53:41.821354 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cad67169-610b-4002-9697-39065316dbd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:53:41 crc kubenswrapper[4861]: I0219 14:53:41.821368 4861 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" 
(UniqueName: \"kubernetes.io/secret/cad67169-610b-4002-9697-39065316dbd6-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 19 14:53:42 crc kubenswrapper[4861]: I0219 14:53:42.531360 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65457d8474-5xm9j" Feb 19 14:53:42 crc kubenswrapper[4861]: I0219 14:53:42.531148 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65457d8474-5xm9j" event={"ID":"cad67169-610b-4002-9697-39065316dbd6","Type":"ContainerDied","Data":"b7e9026f0b74b3d668b22f9e8da5254f24a0e44b581507dc5146ac7ba938be7a"} Feb 19 14:53:42 crc kubenswrapper[4861]: I0219 14:53:42.532128 4861 scope.go:117] "RemoveContainer" containerID="a2eeec93eb03992d501ad67f1a667caf676fee700cd8ef67af99d2ada6e40f92" Feb 19 14:53:42 crc kubenswrapper[4861]: I0219 14:53:42.583434 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-65457d8474-5xm9j"] Feb 19 14:53:42 crc kubenswrapper[4861]: I0219 14:53:42.598540 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-65457d8474-5xm9j"] Feb 19 14:53:42 crc kubenswrapper[4861]: I0219 14:53:42.814726 4861 scope.go:117] "RemoveContainer" containerID="eb13856d49ba7a69f1d212405e9c08f97bdf525d9a45c65013a80bda1c6217f5" Feb 19 14:53:44 crc kubenswrapper[4861]: I0219 14:53:44.001532 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cad67169-610b-4002-9697-39065316dbd6" path="/var/lib/kubelet/pods/cad67169-610b-4002-9697-39065316dbd6/volumes" Feb 19 14:53:50 crc kubenswrapper[4861]: I0219 14:53:50.977542 4861 scope.go:117] "RemoveContainer" containerID="502a4549a2baa2b1572925937a0eb20e1bd1517278b2629c4572ef8e5d2f113d" Feb 19 14:53:50 crc kubenswrapper[4861]: E0219 14:53:50.978372 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.292389 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-84f6b969fb-qdjxp"] Feb 19 14:53:52 crc kubenswrapper[4861]: E0219 14:53:52.293030 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4165abed-a9b8-43c9-80c1-42ae474be304" containerName="horizon" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.293043 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="4165abed-a9b8-43c9-80c1-42ae474be304" containerName="horizon" Feb 19 14:53:52 crc kubenswrapper[4861]: E0219 14:53:52.293052 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4165abed-a9b8-43c9-80c1-42ae474be304" containerName="horizon-log" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.293059 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="4165abed-a9b8-43c9-80c1-42ae474be304" containerName="horizon-log" Feb 19 14:53:52 crc kubenswrapper[4861]: E0219 14:53:52.293070 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cad67169-610b-4002-9697-39065316dbd6" containerName="horizon-log" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.293076 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="cad67169-610b-4002-9697-39065316dbd6" containerName="horizon-log" Feb 19 14:53:52 crc kubenswrapper[4861]: E0219 14:53:52.293090 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e705cc4-bb72-4663-b21c-5e1c39a5d6f4" containerName="horizon" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.293097 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e705cc4-bb72-4663-b21c-5e1c39a5d6f4" containerName="horizon" Feb 19 14:53:52 crc kubenswrapper[4861]: E0219 
14:53:52.293113 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cad67169-610b-4002-9697-39065316dbd6" containerName="horizon" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.293118 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="cad67169-610b-4002-9697-39065316dbd6" containerName="horizon" Feb 19 14:53:52 crc kubenswrapper[4861]: E0219 14:53:52.293135 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e705cc4-bb72-4663-b21c-5e1c39a5d6f4" containerName="horizon-log" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.293141 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e705cc4-bb72-4663-b21c-5e1c39a5d6f4" containerName="horizon-log" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.293322 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="4165abed-a9b8-43c9-80c1-42ae474be304" containerName="horizon-log" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.293339 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e705cc4-bb72-4663-b21c-5e1c39a5d6f4" containerName="horizon" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.293349 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e705cc4-bb72-4663-b21c-5e1c39a5d6f4" containerName="horizon-log" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.293358 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="4165abed-a9b8-43c9-80c1-42ae474be304" containerName="horizon" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.293370 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="cad67169-610b-4002-9697-39065316dbd6" containerName="horizon" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.293381 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="cad67169-610b-4002-9697-39065316dbd6" containerName="horizon-log" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.294306 4861 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/horizon-84f6b969fb-qdjxp" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.318390 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-84f6b969fb-qdjxp"] Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.386963 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/905a926a-0635-4bd4-8746-ecacd708ef8a-combined-ca-bundle\") pod \"horizon-84f6b969fb-qdjxp\" (UID: \"905a926a-0635-4bd4-8746-ecacd708ef8a\") " pod="openstack/horizon-84f6b969fb-qdjxp" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.387028 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/905a926a-0635-4bd4-8746-ecacd708ef8a-horizon-tls-certs\") pod \"horizon-84f6b969fb-qdjxp\" (UID: \"905a926a-0635-4bd4-8746-ecacd708ef8a\") " pod="openstack/horizon-84f6b969fb-qdjxp" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.387056 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/905a926a-0635-4bd4-8746-ecacd708ef8a-horizon-secret-key\") pod \"horizon-84f6b969fb-qdjxp\" (UID: \"905a926a-0635-4bd4-8746-ecacd708ef8a\") " pod="openstack/horizon-84f6b969fb-qdjxp" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.387084 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/905a926a-0635-4bd4-8746-ecacd708ef8a-scripts\") pod \"horizon-84f6b969fb-qdjxp\" (UID: \"905a926a-0635-4bd4-8746-ecacd708ef8a\") " pod="openstack/horizon-84f6b969fb-qdjxp" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.387165 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/905a926a-0635-4bd4-8746-ecacd708ef8a-logs\") pod \"horizon-84f6b969fb-qdjxp\" (UID: \"905a926a-0635-4bd4-8746-ecacd708ef8a\") " pod="openstack/horizon-84f6b969fb-qdjxp" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.387182 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/905a926a-0635-4bd4-8746-ecacd708ef8a-config-data\") pod \"horizon-84f6b969fb-qdjxp\" (UID: \"905a926a-0635-4bd4-8746-ecacd708ef8a\") " pod="openstack/horizon-84f6b969fb-qdjxp" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.387207 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jggf\" (UniqueName: \"kubernetes.io/projected/905a926a-0635-4bd4-8746-ecacd708ef8a-kube-api-access-6jggf\") pod \"horizon-84f6b969fb-qdjxp\" (UID: \"905a926a-0635-4bd4-8746-ecacd708ef8a\") " pod="openstack/horizon-84f6b969fb-qdjxp" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.488832 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/905a926a-0635-4bd4-8746-ecacd708ef8a-combined-ca-bundle\") pod \"horizon-84f6b969fb-qdjxp\" (UID: \"905a926a-0635-4bd4-8746-ecacd708ef8a\") " pod="openstack/horizon-84f6b969fb-qdjxp" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.488881 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/905a926a-0635-4bd4-8746-ecacd708ef8a-horizon-tls-certs\") pod \"horizon-84f6b969fb-qdjxp\" (UID: \"905a926a-0635-4bd4-8746-ecacd708ef8a\") " pod="openstack/horizon-84f6b969fb-qdjxp" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.489797 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/905a926a-0635-4bd4-8746-ecacd708ef8a-horizon-secret-key\") pod \"horizon-84f6b969fb-qdjxp\" (UID: \"905a926a-0635-4bd4-8746-ecacd708ef8a\") " pod="openstack/horizon-84f6b969fb-qdjxp" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.489845 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/905a926a-0635-4bd4-8746-ecacd708ef8a-scripts\") pod \"horizon-84f6b969fb-qdjxp\" (UID: \"905a926a-0635-4bd4-8746-ecacd708ef8a\") " pod="openstack/horizon-84f6b969fb-qdjxp" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.489911 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/905a926a-0635-4bd4-8746-ecacd708ef8a-logs\") pod \"horizon-84f6b969fb-qdjxp\" (UID: \"905a926a-0635-4bd4-8746-ecacd708ef8a\") " pod="openstack/horizon-84f6b969fb-qdjxp" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.489929 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/905a926a-0635-4bd4-8746-ecacd708ef8a-config-data\") pod \"horizon-84f6b969fb-qdjxp\" (UID: \"905a926a-0635-4bd4-8746-ecacd708ef8a\") " pod="openstack/horizon-84f6b969fb-qdjxp" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.489955 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jggf\" (UniqueName: \"kubernetes.io/projected/905a926a-0635-4bd4-8746-ecacd708ef8a-kube-api-access-6jggf\") pod \"horizon-84f6b969fb-qdjxp\" (UID: \"905a926a-0635-4bd4-8746-ecacd708ef8a\") " pod="openstack/horizon-84f6b969fb-qdjxp" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.490930 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/905a926a-0635-4bd4-8746-ecacd708ef8a-scripts\") pod \"horizon-84f6b969fb-qdjxp\" (UID: 
\"905a926a-0635-4bd4-8746-ecacd708ef8a\") " pod="openstack/horizon-84f6b969fb-qdjxp" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.491149 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/905a926a-0635-4bd4-8746-ecacd708ef8a-logs\") pod \"horizon-84f6b969fb-qdjxp\" (UID: \"905a926a-0635-4bd4-8746-ecacd708ef8a\") " pod="openstack/horizon-84f6b969fb-qdjxp" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.492018 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/905a926a-0635-4bd4-8746-ecacd708ef8a-config-data\") pod \"horizon-84f6b969fb-qdjxp\" (UID: \"905a926a-0635-4bd4-8746-ecacd708ef8a\") " pod="openstack/horizon-84f6b969fb-qdjxp" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.494984 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/905a926a-0635-4bd4-8746-ecacd708ef8a-horizon-secret-key\") pod \"horizon-84f6b969fb-qdjxp\" (UID: \"905a926a-0635-4bd4-8746-ecacd708ef8a\") " pod="openstack/horizon-84f6b969fb-qdjxp" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.497214 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/905a926a-0635-4bd4-8746-ecacd708ef8a-horizon-tls-certs\") pod \"horizon-84f6b969fb-qdjxp\" (UID: \"905a926a-0635-4bd4-8746-ecacd708ef8a\") " pod="openstack/horizon-84f6b969fb-qdjxp" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.497969 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/905a926a-0635-4bd4-8746-ecacd708ef8a-combined-ca-bundle\") pod \"horizon-84f6b969fb-qdjxp\" (UID: \"905a926a-0635-4bd4-8746-ecacd708ef8a\") " pod="openstack/horizon-84f6b969fb-qdjxp" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.522172 
4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jggf\" (UniqueName: \"kubernetes.io/projected/905a926a-0635-4bd4-8746-ecacd708ef8a-kube-api-access-6jggf\") pod \"horizon-84f6b969fb-qdjxp\" (UID: \"905a926a-0635-4bd4-8746-ecacd708ef8a\") " pod="openstack/horizon-84f6b969fb-qdjxp" Feb 19 14:53:52 crc kubenswrapper[4861]: I0219 14:53:52.620457 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84f6b969fb-qdjxp" Feb 19 14:53:53 crc kubenswrapper[4861]: I0219 14:53:53.101455 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-84f6b969fb-qdjxp"] Feb 19 14:53:53 crc kubenswrapper[4861]: I0219 14:53:53.619178 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-9q5rx"] Feb 19 14:53:53 crc kubenswrapper[4861]: I0219 14:53:53.620959 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-9q5rx" Feb 19 14:53:53 crc kubenswrapper[4861]: I0219 14:53:53.629694 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-9q5rx"] Feb 19 14:53:53 crc kubenswrapper[4861]: I0219 14:53:53.663455 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84f6b969fb-qdjxp" event={"ID":"905a926a-0635-4bd4-8746-ecacd708ef8a","Type":"ContainerStarted","Data":"13675858656c27514765fead8e53566996735403767eda1ccdb60df6fd44f651"} Feb 19 14:53:53 crc kubenswrapper[4861]: I0219 14:53:53.663495 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84f6b969fb-qdjxp" event={"ID":"905a926a-0635-4bd4-8746-ecacd708ef8a","Type":"ContainerStarted","Data":"f95005ef2a2ab299445560407e6d6254acb6e1dcae777e39db6421fe933b8d75"} Feb 19 14:53:53 crc kubenswrapper[4861]: I0219 14:53:53.663506 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84f6b969fb-qdjxp" 
event={"ID":"905a926a-0635-4bd4-8746-ecacd708ef8a","Type":"ContainerStarted","Data":"df69ad382320f0c9de5581288bc6cce38d7631905ca2bd7d01b672c5854737f0"} Feb 19 14:53:53 crc kubenswrapper[4861]: I0219 14:53:53.688726 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-84f6b969fb-qdjxp" podStartSLOduration=1.688707984 podStartE2EDuration="1.688707984s" podCreationTimestamp="2026-02-19 14:53:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:53:53.677403471 +0000 UTC m=+6248.338506699" watchObservedRunningTime="2026-02-19 14:53:53.688707984 +0000 UTC m=+6248.349811212" Feb 19 14:53:53 crc kubenswrapper[4861]: I0219 14:53:53.714998 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/617fd5bd-c1c7-438d-b89f-feeabe1449c0-operator-scripts\") pod \"heat-db-create-9q5rx\" (UID: \"617fd5bd-c1c7-438d-b89f-feeabe1449c0\") " pod="openstack/heat-db-create-9q5rx" Feb 19 14:53:53 crc kubenswrapper[4861]: I0219 14:53:53.715128 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqwpt\" (UniqueName: \"kubernetes.io/projected/617fd5bd-c1c7-438d-b89f-feeabe1449c0-kube-api-access-tqwpt\") pod \"heat-db-create-9q5rx\" (UID: \"617fd5bd-c1c7-438d-b89f-feeabe1449c0\") " pod="openstack/heat-db-create-9q5rx" Feb 19 14:53:53 crc kubenswrapper[4861]: I0219 14:53:53.716803 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-3d3b-account-create-update-2bdrs"] Feb 19 14:53:53 crc kubenswrapper[4861]: I0219 14:53:53.718253 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-3d3b-account-create-update-2bdrs" Feb 19 14:53:53 crc kubenswrapper[4861]: I0219 14:53:53.720704 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Feb 19 14:53:53 crc kubenswrapper[4861]: I0219 14:53:53.726398 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-3d3b-account-create-update-2bdrs"] Feb 19 14:53:53 crc kubenswrapper[4861]: I0219 14:53:53.817846 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqwpt\" (UniqueName: \"kubernetes.io/projected/617fd5bd-c1c7-438d-b89f-feeabe1449c0-kube-api-access-tqwpt\") pod \"heat-db-create-9q5rx\" (UID: \"617fd5bd-c1c7-438d-b89f-feeabe1449c0\") " pod="openstack/heat-db-create-9q5rx" Feb 19 14:53:53 crc kubenswrapper[4861]: I0219 14:53:53.818128 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e84ea320-1dee-4473-a4f7-e4c879e76f0d-operator-scripts\") pod \"heat-3d3b-account-create-update-2bdrs\" (UID: \"e84ea320-1dee-4473-a4f7-e4c879e76f0d\") " pod="openstack/heat-3d3b-account-create-update-2bdrs" Feb 19 14:53:53 crc kubenswrapper[4861]: I0219 14:53:53.818627 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8fjg\" (UniqueName: \"kubernetes.io/projected/e84ea320-1dee-4473-a4f7-e4c879e76f0d-kube-api-access-x8fjg\") pod \"heat-3d3b-account-create-update-2bdrs\" (UID: \"e84ea320-1dee-4473-a4f7-e4c879e76f0d\") " pod="openstack/heat-3d3b-account-create-update-2bdrs" Feb 19 14:53:53 crc kubenswrapper[4861]: I0219 14:53:53.818869 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/617fd5bd-c1c7-438d-b89f-feeabe1449c0-operator-scripts\") pod \"heat-db-create-9q5rx\" (UID: 
\"617fd5bd-c1c7-438d-b89f-feeabe1449c0\") " pod="openstack/heat-db-create-9q5rx" Feb 19 14:53:53 crc kubenswrapper[4861]: I0219 14:53:53.837923 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/617fd5bd-c1c7-438d-b89f-feeabe1449c0-operator-scripts\") pod \"heat-db-create-9q5rx\" (UID: \"617fd5bd-c1c7-438d-b89f-feeabe1449c0\") " pod="openstack/heat-db-create-9q5rx" Feb 19 14:53:53 crc kubenswrapper[4861]: I0219 14:53:53.842298 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqwpt\" (UniqueName: \"kubernetes.io/projected/617fd5bd-c1c7-438d-b89f-feeabe1449c0-kube-api-access-tqwpt\") pod \"heat-db-create-9q5rx\" (UID: \"617fd5bd-c1c7-438d-b89f-feeabe1449c0\") " pod="openstack/heat-db-create-9q5rx" Feb 19 14:53:53 crc kubenswrapper[4861]: I0219 14:53:53.926743 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e84ea320-1dee-4473-a4f7-e4c879e76f0d-operator-scripts\") pod \"heat-3d3b-account-create-update-2bdrs\" (UID: \"e84ea320-1dee-4473-a4f7-e4c879e76f0d\") " pod="openstack/heat-3d3b-account-create-update-2bdrs" Feb 19 14:53:53 crc kubenswrapper[4861]: I0219 14:53:53.926858 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8fjg\" (UniqueName: \"kubernetes.io/projected/e84ea320-1dee-4473-a4f7-e4c879e76f0d-kube-api-access-x8fjg\") pod \"heat-3d3b-account-create-update-2bdrs\" (UID: \"e84ea320-1dee-4473-a4f7-e4c879e76f0d\") " pod="openstack/heat-3d3b-account-create-update-2bdrs" Feb 19 14:53:53 crc kubenswrapper[4861]: I0219 14:53:53.927954 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e84ea320-1dee-4473-a4f7-e4c879e76f0d-operator-scripts\") pod \"heat-3d3b-account-create-update-2bdrs\" (UID: 
\"e84ea320-1dee-4473-a4f7-e4c879e76f0d\") " pod="openstack/heat-3d3b-account-create-update-2bdrs" Feb 19 14:53:53 crc kubenswrapper[4861]: I0219 14:53:53.944421 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-9q5rx" Feb 19 14:53:53 crc kubenswrapper[4861]: I0219 14:53:53.963596 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8fjg\" (UniqueName: \"kubernetes.io/projected/e84ea320-1dee-4473-a4f7-e4c879e76f0d-kube-api-access-x8fjg\") pod \"heat-3d3b-account-create-update-2bdrs\" (UID: \"e84ea320-1dee-4473-a4f7-e4c879e76f0d\") " pod="openstack/heat-3d3b-account-create-update-2bdrs" Feb 19 14:53:54 crc kubenswrapper[4861]: I0219 14:53:54.042973 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3d3b-account-create-update-2bdrs" Feb 19 14:53:54 crc kubenswrapper[4861]: I0219 14:53:54.660930 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-9q5rx"] Feb 19 14:53:54 crc kubenswrapper[4861]: I0219 14:53:54.674817 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-9q5rx" event={"ID":"617fd5bd-c1c7-438d-b89f-feeabe1449c0","Type":"ContainerStarted","Data":"67ef2bb0aa6872f4a849ce109b29a765ed625eac734fe279a216e264cfd25552"} Feb 19 14:53:54 crc kubenswrapper[4861]: I0219 14:53:54.737336 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-3d3b-account-create-update-2bdrs"] Feb 19 14:53:55 crc kubenswrapper[4861]: I0219 14:53:55.689604 4861 generic.go:334] "Generic (PLEG): container finished" podID="617fd5bd-c1c7-438d-b89f-feeabe1449c0" containerID="624fe9f6d5c60206ba585267e7412f88cf73bf40bd6e666f4771a9acb6dcdf12" exitCode=0 Feb 19 14:53:55 crc kubenswrapper[4861]: I0219 14:53:55.691398 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-9q5rx" 
event={"ID":"617fd5bd-c1c7-438d-b89f-feeabe1449c0","Type":"ContainerDied","Data":"624fe9f6d5c60206ba585267e7412f88cf73bf40bd6e666f4771a9acb6dcdf12"} Feb 19 14:53:55 crc kubenswrapper[4861]: I0219 14:53:55.694343 4861 generic.go:334] "Generic (PLEG): container finished" podID="e84ea320-1dee-4473-a4f7-e4c879e76f0d" containerID="fbe8a71debf10c1954d02b3422fee354ee1ce29f9ca66436882d019ecf1f0846" exitCode=0 Feb 19 14:53:55 crc kubenswrapper[4861]: I0219 14:53:55.694643 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3d3b-account-create-update-2bdrs" event={"ID":"e84ea320-1dee-4473-a4f7-e4c879e76f0d","Type":"ContainerDied","Data":"fbe8a71debf10c1954d02b3422fee354ee1ce29f9ca66436882d019ecf1f0846"} Feb 19 14:53:55 crc kubenswrapper[4861]: I0219 14:53:55.694801 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3d3b-account-create-update-2bdrs" event={"ID":"e84ea320-1dee-4473-a4f7-e4c879e76f0d","Type":"ContainerStarted","Data":"35b335eca05d794226514385847d0fdb1b75b4b247252b66984e401e54539640"} Feb 19 14:53:57 crc kubenswrapper[4861]: I0219 14:53:57.161914 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3d3b-account-create-update-2bdrs" Feb 19 14:53:57 crc kubenswrapper[4861]: I0219 14:53:57.168347 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-9q5rx" Feb 19 14:53:57 crc kubenswrapper[4861]: I0219 14:53:57.203115 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqwpt\" (UniqueName: \"kubernetes.io/projected/617fd5bd-c1c7-438d-b89f-feeabe1449c0-kube-api-access-tqwpt\") pod \"617fd5bd-c1c7-438d-b89f-feeabe1449c0\" (UID: \"617fd5bd-c1c7-438d-b89f-feeabe1449c0\") " Feb 19 14:53:57 crc kubenswrapper[4861]: I0219 14:53:57.203172 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8fjg\" (UniqueName: \"kubernetes.io/projected/e84ea320-1dee-4473-a4f7-e4c879e76f0d-kube-api-access-x8fjg\") pod \"e84ea320-1dee-4473-a4f7-e4c879e76f0d\" (UID: \"e84ea320-1dee-4473-a4f7-e4c879e76f0d\") " Feb 19 14:53:57 crc kubenswrapper[4861]: I0219 14:53:57.203219 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/617fd5bd-c1c7-438d-b89f-feeabe1449c0-operator-scripts\") pod \"617fd5bd-c1c7-438d-b89f-feeabe1449c0\" (UID: \"617fd5bd-c1c7-438d-b89f-feeabe1449c0\") " Feb 19 14:53:57 crc kubenswrapper[4861]: I0219 14:53:57.203378 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e84ea320-1dee-4473-a4f7-e4c879e76f0d-operator-scripts\") pod \"e84ea320-1dee-4473-a4f7-e4c879e76f0d\" (UID: \"e84ea320-1dee-4473-a4f7-e4c879e76f0d\") " Feb 19 14:53:57 crc kubenswrapper[4861]: I0219 14:53:57.204274 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e84ea320-1dee-4473-a4f7-e4c879e76f0d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e84ea320-1dee-4473-a4f7-e4c879e76f0d" (UID: "e84ea320-1dee-4473-a4f7-e4c879e76f0d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:53:57 crc kubenswrapper[4861]: I0219 14:53:57.208120 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/617fd5bd-c1c7-438d-b89f-feeabe1449c0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "617fd5bd-c1c7-438d-b89f-feeabe1449c0" (UID: "617fd5bd-c1c7-438d-b89f-feeabe1449c0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:53:57 crc kubenswrapper[4861]: I0219 14:53:57.222610 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e84ea320-1dee-4473-a4f7-e4c879e76f0d-kube-api-access-x8fjg" (OuterVolumeSpecName: "kube-api-access-x8fjg") pod "e84ea320-1dee-4473-a4f7-e4c879e76f0d" (UID: "e84ea320-1dee-4473-a4f7-e4c879e76f0d"). InnerVolumeSpecName "kube-api-access-x8fjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:53:57 crc kubenswrapper[4861]: I0219 14:53:57.226635 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/617fd5bd-c1c7-438d-b89f-feeabe1449c0-kube-api-access-tqwpt" (OuterVolumeSpecName: "kube-api-access-tqwpt") pod "617fd5bd-c1c7-438d-b89f-feeabe1449c0" (UID: "617fd5bd-c1c7-438d-b89f-feeabe1449c0"). InnerVolumeSpecName "kube-api-access-tqwpt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:53:57 crc kubenswrapper[4861]: I0219 14:53:57.306046 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqwpt\" (UniqueName: \"kubernetes.io/projected/617fd5bd-c1c7-438d-b89f-feeabe1449c0-kube-api-access-tqwpt\") on node \"crc\" DevicePath \"\"" Feb 19 14:53:57 crc kubenswrapper[4861]: I0219 14:53:57.306092 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8fjg\" (UniqueName: \"kubernetes.io/projected/e84ea320-1dee-4473-a4f7-e4c879e76f0d-kube-api-access-x8fjg\") on node \"crc\" DevicePath \"\"" Feb 19 14:53:57 crc kubenswrapper[4861]: I0219 14:53:57.306104 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/617fd5bd-c1c7-438d-b89f-feeabe1449c0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:53:57 crc kubenswrapper[4861]: I0219 14:53:57.306113 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e84ea320-1dee-4473-a4f7-e4c879e76f0d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:53:57 crc kubenswrapper[4861]: I0219 14:53:57.712901 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-3d3b-account-create-update-2bdrs" Feb 19 14:53:57 crc kubenswrapper[4861]: I0219 14:53:57.712898 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3d3b-account-create-update-2bdrs" event={"ID":"e84ea320-1dee-4473-a4f7-e4c879e76f0d","Type":"ContainerDied","Data":"35b335eca05d794226514385847d0fdb1b75b4b247252b66984e401e54539640"} Feb 19 14:53:57 crc kubenswrapper[4861]: I0219 14:53:57.713771 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35b335eca05d794226514385847d0fdb1b75b4b247252b66984e401e54539640" Feb 19 14:53:57 crc kubenswrapper[4861]: I0219 14:53:57.717814 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-9q5rx" event={"ID":"617fd5bd-c1c7-438d-b89f-feeabe1449c0","Type":"ContainerDied","Data":"67ef2bb0aa6872f4a849ce109b29a765ed625eac734fe279a216e264cfd25552"} Feb 19 14:53:57 crc kubenswrapper[4861]: I0219 14:53:57.717855 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67ef2bb0aa6872f4a849ce109b29a765ed625eac734fe279a216e264cfd25552" Feb 19 14:53:57 crc kubenswrapper[4861]: I0219 14:53:57.717909 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-9q5rx" Feb 19 14:53:58 crc kubenswrapper[4861]: I0219 14:53:58.886693 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-gp2tt"] Feb 19 14:53:58 crc kubenswrapper[4861]: E0219 14:53:58.887386 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="617fd5bd-c1c7-438d-b89f-feeabe1449c0" containerName="mariadb-database-create" Feb 19 14:53:58 crc kubenswrapper[4861]: I0219 14:53:58.887401 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="617fd5bd-c1c7-438d-b89f-feeabe1449c0" containerName="mariadb-database-create" Feb 19 14:53:58 crc kubenswrapper[4861]: E0219 14:53:58.887443 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e84ea320-1dee-4473-a4f7-e4c879e76f0d" containerName="mariadb-account-create-update" Feb 19 14:53:58 crc kubenswrapper[4861]: I0219 14:53:58.887451 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="e84ea320-1dee-4473-a4f7-e4c879e76f0d" containerName="mariadb-account-create-update" Feb 19 14:53:58 crc kubenswrapper[4861]: I0219 14:53:58.887656 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="617fd5bd-c1c7-438d-b89f-feeabe1449c0" containerName="mariadb-database-create" Feb 19 14:53:58 crc kubenswrapper[4861]: I0219 14:53:58.887684 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="e84ea320-1dee-4473-a4f7-e4c879e76f0d" containerName="mariadb-account-create-update" Feb 19 14:53:58 crc kubenswrapper[4861]: I0219 14:53:58.888283 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-gp2tt" Feb 19 14:53:58 crc kubenswrapper[4861]: I0219 14:53:58.892221 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 19 14:53:58 crc kubenswrapper[4861]: I0219 14:53:58.899075 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-rzbpw" Feb 19 14:53:58 crc kubenswrapper[4861]: I0219 14:53:58.913371 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-gp2tt"] Feb 19 14:53:58 crc kubenswrapper[4861]: I0219 14:53:58.938464 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d66448e4-088d-4aa6-98cc-ada0cda02ea6-config-data\") pod \"heat-db-sync-gp2tt\" (UID: \"d66448e4-088d-4aa6-98cc-ada0cda02ea6\") " pod="openstack/heat-db-sync-gp2tt" Feb 19 14:53:58 crc kubenswrapper[4861]: I0219 14:53:58.938602 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjmb7\" (UniqueName: \"kubernetes.io/projected/d66448e4-088d-4aa6-98cc-ada0cda02ea6-kube-api-access-vjmb7\") pod \"heat-db-sync-gp2tt\" (UID: \"d66448e4-088d-4aa6-98cc-ada0cda02ea6\") " pod="openstack/heat-db-sync-gp2tt" Feb 19 14:53:58 crc kubenswrapper[4861]: I0219 14:53:58.938712 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d66448e4-088d-4aa6-98cc-ada0cda02ea6-combined-ca-bundle\") pod \"heat-db-sync-gp2tt\" (UID: \"d66448e4-088d-4aa6-98cc-ada0cda02ea6\") " pod="openstack/heat-db-sync-gp2tt" Feb 19 14:53:59 crc kubenswrapper[4861]: I0219 14:53:59.041908 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjmb7\" (UniqueName: \"kubernetes.io/projected/d66448e4-088d-4aa6-98cc-ada0cda02ea6-kube-api-access-vjmb7\") pod 
\"heat-db-sync-gp2tt\" (UID: \"d66448e4-088d-4aa6-98cc-ada0cda02ea6\") " pod="openstack/heat-db-sync-gp2tt" Feb 19 14:53:59 crc kubenswrapper[4861]: I0219 14:53:59.043259 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d66448e4-088d-4aa6-98cc-ada0cda02ea6-combined-ca-bundle\") pod \"heat-db-sync-gp2tt\" (UID: \"d66448e4-088d-4aa6-98cc-ada0cda02ea6\") " pod="openstack/heat-db-sync-gp2tt" Feb 19 14:53:59 crc kubenswrapper[4861]: I0219 14:53:59.043544 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d66448e4-088d-4aa6-98cc-ada0cda02ea6-config-data\") pod \"heat-db-sync-gp2tt\" (UID: \"d66448e4-088d-4aa6-98cc-ada0cda02ea6\") " pod="openstack/heat-db-sync-gp2tt" Feb 19 14:53:59 crc kubenswrapper[4861]: I0219 14:53:59.050927 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d66448e4-088d-4aa6-98cc-ada0cda02ea6-combined-ca-bundle\") pod \"heat-db-sync-gp2tt\" (UID: \"d66448e4-088d-4aa6-98cc-ada0cda02ea6\") " pod="openstack/heat-db-sync-gp2tt" Feb 19 14:53:59 crc kubenswrapper[4861]: I0219 14:53:59.063801 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d66448e4-088d-4aa6-98cc-ada0cda02ea6-config-data\") pod \"heat-db-sync-gp2tt\" (UID: \"d66448e4-088d-4aa6-98cc-ada0cda02ea6\") " pod="openstack/heat-db-sync-gp2tt" Feb 19 14:53:59 crc kubenswrapper[4861]: I0219 14:53:59.069195 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjmb7\" (UniqueName: \"kubernetes.io/projected/d66448e4-088d-4aa6-98cc-ada0cda02ea6-kube-api-access-vjmb7\") pod \"heat-db-sync-gp2tt\" (UID: \"d66448e4-088d-4aa6-98cc-ada0cda02ea6\") " pod="openstack/heat-db-sync-gp2tt" Feb 19 14:53:59 crc kubenswrapper[4861]: I0219 14:53:59.210956 
4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-gp2tt" Feb 19 14:54:00 crc kubenswrapper[4861]: I0219 14:54:00.132265 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-gp2tt"] Feb 19 14:54:00 crc kubenswrapper[4861]: I0219 14:54:00.783214 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-gp2tt" event={"ID":"d66448e4-088d-4aa6-98cc-ada0cda02ea6","Type":"ContainerStarted","Data":"9cafd9a3777981ca5906fa2b96547943a0cba2c0b35000b03225c0eaab8c238a"} Feb 19 14:54:02 crc kubenswrapper[4861]: I0219 14:54:02.621819 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-84f6b969fb-qdjxp" Feb 19 14:54:02 crc kubenswrapper[4861]: I0219 14:54:02.622259 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-84f6b969fb-qdjxp" Feb 19 14:54:03 crc kubenswrapper[4861]: I0219 14:54:03.977854 4861 scope.go:117] "RemoveContainer" containerID="502a4549a2baa2b1572925937a0eb20e1bd1517278b2629c4572ef8e5d2f113d" Feb 19 14:54:03 crc kubenswrapper[4861]: E0219 14:54:03.978679 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:54:06 crc kubenswrapper[4861]: I0219 14:54:06.132872 4861 scope.go:117] "RemoveContainer" containerID="0ba5cc269e53ef935c359889eb10350c2acca5bb017c645e15db3985152ea7cf" Feb 19 14:54:07 crc kubenswrapper[4861]: I0219 14:54:07.232188 4861 scope.go:117] "RemoveContainer" containerID="3c80d0ddac8a0ffc2131c21c56cfc8f85641ddbe72ae4b9755d56dec5a72c1f7" Feb 19 14:54:07 crc kubenswrapper[4861]: I0219 
14:54:07.639584 4861 scope.go:117] "RemoveContainer" containerID="78691913cbec1ce0ed55c6a435fddd8520c983a4f81cfa73d094d402a7840043" Feb 19 14:54:08 crc kubenswrapper[4861]: I0219 14:54:08.879839 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-gp2tt" event={"ID":"d66448e4-088d-4aa6-98cc-ada0cda02ea6","Type":"ContainerStarted","Data":"1e5b4767143a260830797f4fc9029ccdd70c67939c5d6234fd1e5304a50a4612"} Feb 19 14:54:08 crc kubenswrapper[4861]: I0219 14:54:08.923629 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-gp2tt" podStartSLOduration=3.367550865 podStartE2EDuration="10.923600947s" podCreationTimestamp="2026-02-19 14:53:58 +0000 UTC" firstStartedPulling="2026-02-19 14:54:00.140833629 +0000 UTC m=+6254.801936857" lastFinishedPulling="2026-02-19 14:54:07.696883711 +0000 UTC m=+6262.357986939" observedRunningTime="2026-02-19 14:54:08.902623734 +0000 UTC m=+6263.563726992" watchObservedRunningTime="2026-02-19 14:54:08.923600947 +0000 UTC m=+6263.584704205" Feb 19 14:54:10 crc kubenswrapper[4861]: I0219 14:54:10.919864 4861 generic.go:334] "Generic (PLEG): container finished" podID="d66448e4-088d-4aa6-98cc-ada0cda02ea6" containerID="1e5b4767143a260830797f4fc9029ccdd70c67939c5d6234fd1e5304a50a4612" exitCode=0 Feb 19 14:54:10 crc kubenswrapper[4861]: I0219 14:54:10.919969 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-gp2tt" event={"ID":"d66448e4-088d-4aa6-98cc-ada0cda02ea6","Type":"ContainerDied","Data":"1e5b4767143a260830797f4fc9029ccdd70c67939c5d6234fd1e5304a50a4612"} Feb 19 14:54:12 crc kubenswrapper[4861]: I0219 14:54:12.373073 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-gp2tt" Feb 19 14:54:12 crc kubenswrapper[4861]: I0219 14:54:12.453405 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d66448e4-088d-4aa6-98cc-ada0cda02ea6-config-data\") pod \"d66448e4-088d-4aa6-98cc-ada0cda02ea6\" (UID: \"d66448e4-088d-4aa6-98cc-ada0cda02ea6\") " Feb 19 14:54:12 crc kubenswrapper[4861]: I0219 14:54:12.453652 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d66448e4-088d-4aa6-98cc-ada0cda02ea6-combined-ca-bundle\") pod \"d66448e4-088d-4aa6-98cc-ada0cda02ea6\" (UID: \"d66448e4-088d-4aa6-98cc-ada0cda02ea6\") " Feb 19 14:54:12 crc kubenswrapper[4861]: I0219 14:54:12.453765 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjmb7\" (UniqueName: \"kubernetes.io/projected/d66448e4-088d-4aa6-98cc-ada0cda02ea6-kube-api-access-vjmb7\") pod \"d66448e4-088d-4aa6-98cc-ada0cda02ea6\" (UID: \"d66448e4-088d-4aa6-98cc-ada0cda02ea6\") " Feb 19 14:54:12 crc kubenswrapper[4861]: I0219 14:54:12.459329 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d66448e4-088d-4aa6-98cc-ada0cda02ea6-kube-api-access-vjmb7" (OuterVolumeSpecName: "kube-api-access-vjmb7") pod "d66448e4-088d-4aa6-98cc-ada0cda02ea6" (UID: "d66448e4-088d-4aa6-98cc-ada0cda02ea6"). InnerVolumeSpecName "kube-api-access-vjmb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:54:12 crc kubenswrapper[4861]: I0219 14:54:12.489846 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d66448e4-088d-4aa6-98cc-ada0cda02ea6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d66448e4-088d-4aa6-98cc-ada0cda02ea6" (UID: "d66448e4-088d-4aa6-98cc-ada0cda02ea6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:54:12 crc kubenswrapper[4861]: I0219 14:54:12.556675 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d66448e4-088d-4aa6-98cc-ada0cda02ea6-config-data" (OuterVolumeSpecName: "config-data") pod "d66448e4-088d-4aa6-98cc-ada0cda02ea6" (UID: "d66448e4-088d-4aa6-98cc-ada0cda02ea6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:54:12 crc kubenswrapper[4861]: I0219 14:54:12.556743 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d66448e4-088d-4aa6-98cc-ada0cda02ea6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:12 crc kubenswrapper[4861]: I0219 14:54:12.556775 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjmb7\" (UniqueName: \"kubernetes.io/projected/d66448e4-088d-4aa6-98cc-ada0cda02ea6-kube-api-access-vjmb7\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:12 crc kubenswrapper[4861]: I0219 14:54:12.659531 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d66448e4-088d-4aa6-98cc-ada0cda02ea6-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:12 crc kubenswrapper[4861]: I0219 14:54:12.950366 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-gp2tt" event={"ID":"d66448e4-088d-4aa6-98cc-ada0cda02ea6","Type":"ContainerDied","Data":"9cafd9a3777981ca5906fa2b96547943a0cba2c0b35000b03225c0eaab8c238a"} Feb 19 14:54:12 crc kubenswrapper[4861]: I0219 14:54:12.950614 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cafd9a3777981ca5906fa2b96547943a0cba2c0b35000b03225c0eaab8c238a" Feb 19 14:54:12 crc kubenswrapper[4861]: I0219 14:54:12.950505 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-gp2tt" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.498163 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-84f6b969fb-qdjxp" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.693256 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-79677cdc5d-w6ptg"] Feb 19 14:54:14 crc kubenswrapper[4861]: E0219 14:54:14.693719 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d66448e4-088d-4aa6-98cc-ada0cda02ea6" containerName="heat-db-sync" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.693737 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d66448e4-088d-4aa6-98cc-ada0cda02ea6" containerName="heat-db-sync" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.693929 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="d66448e4-088d-4aa6-98cc-ada0cda02ea6" containerName="heat-db-sync" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.694598 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-79677cdc5d-w6ptg" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.702971 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.703231 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-rzbpw" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.703377 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.712683 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-79677cdc5d-w6ptg"] Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.768495 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-66bb748dfb-7kk8p"] Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.769667 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-66bb748dfb-7kk8p" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.771267 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.795078 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-66bb748dfb-7kk8p"] Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.812622 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgz5s\" (UniqueName: \"kubernetes.io/projected/d28e5052-f88a-4ac6-bc82-65be89a0d4ce-kube-api-access-dgz5s\") pod \"heat-engine-79677cdc5d-w6ptg\" (UID: \"d28e5052-f88a-4ac6-bc82-65be89a0d4ce\") " pod="openstack/heat-engine-79677cdc5d-w6ptg" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.812851 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a51e57bb-6c52-46e5-80f2-10548e50cec2-config-data\") pod \"heat-cfnapi-66bb748dfb-7kk8p\" (UID: \"a51e57bb-6c52-46e5-80f2-10548e50cec2\") " pod="openstack/heat-cfnapi-66bb748dfb-7kk8p" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.812888 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a51e57bb-6c52-46e5-80f2-10548e50cec2-config-data-custom\") pod \"heat-cfnapi-66bb748dfb-7kk8p\" (UID: \"a51e57bb-6c52-46e5-80f2-10548e50cec2\") " pod="openstack/heat-cfnapi-66bb748dfb-7kk8p" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.813125 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d28e5052-f88a-4ac6-bc82-65be89a0d4ce-combined-ca-bundle\") pod \"heat-engine-79677cdc5d-w6ptg\" (UID: 
\"d28e5052-f88a-4ac6-bc82-65be89a0d4ce\") " pod="openstack/heat-engine-79677cdc5d-w6ptg" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.813305 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a51e57bb-6c52-46e5-80f2-10548e50cec2-combined-ca-bundle\") pod \"heat-cfnapi-66bb748dfb-7kk8p\" (UID: \"a51e57bb-6c52-46e5-80f2-10548e50cec2\") " pod="openstack/heat-cfnapi-66bb748dfb-7kk8p" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.813481 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d28e5052-f88a-4ac6-bc82-65be89a0d4ce-config-data\") pod \"heat-engine-79677cdc5d-w6ptg\" (UID: \"d28e5052-f88a-4ac6-bc82-65be89a0d4ce\") " pod="openstack/heat-engine-79677cdc5d-w6ptg" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.832713 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6kdq\" (UniqueName: \"kubernetes.io/projected/a51e57bb-6c52-46e5-80f2-10548e50cec2-kube-api-access-s6kdq\") pod \"heat-cfnapi-66bb748dfb-7kk8p\" (UID: \"a51e57bb-6c52-46e5-80f2-10548e50cec2\") " pod="openstack/heat-cfnapi-66bb748dfb-7kk8p" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.832885 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d28e5052-f88a-4ac6-bc82-65be89a0d4ce-config-data-custom\") pod \"heat-engine-79677cdc5d-w6ptg\" (UID: \"d28e5052-f88a-4ac6-bc82-65be89a0d4ce\") " pod="openstack/heat-engine-79677cdc5d-w6ptg" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.936525 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6kdq\" (UniqueName: 
\"kubernetes.io/projected/a51e57bb-6c52-46e5-80f2-10548e50cec2-kube-api-access-s6kdq\") pod \"heat-cfnapi-66bb748dfb-7kk8p\" (UID: \"a51e57bb-6c52-46e5-80f2-10548e50cec2\") " pod="openstack/heat-cfnapi-66bb748dfb-7kk8p" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.936578 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d28e5052-f88a-4ac6-bc82-65be89a0d4ce-config-data-custom\") pod \"heat-engine-79677cdc5d-w6ptg\" (UID: \"d28e5052-f88a-4ac6-bc82-65be89a0d4ce\") " pod="openstack/heat-engine-79677cdc5d-w6ptg" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.936654 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgz5s\" (UniqueName: \"kubernetes.io/projected/d28e5052-f88a-4ac6-bc82-65be89a0d4ce-kube-api-access-dgz5s\") pod \"heat-engine-79677cdc5d-w6ptg\" (UID: \"d28e5052-f88a-4ac6-bc82-65be89a0d4ce\") " pod="openstack/heat-engine-79677cdc5d-w6ptg" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.936678 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a51e57bb-6c52-46e5-80f2-10548e50cec2-config-data\") pod \"heat-cfnapi-66bb748dfb-7kk8p\" (UID: \"a51e57bb-6c52-46e5-80f2-10548e50cec2\") " pod="openstack/heat-cfnapi-66bb748dfb-7kk8p" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.936696 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a51e57bb-6c52-46e5-80f2-10548e50cec2-config-data-custom\") pod \"heat-cfnapi-66bb748dfb-7kk8p\" (UID: \"a51e57bb-6c52-46e5-80f2-10548e50cec2\") " pod="openstack/heat-cfnapi-66bb748dfb-7kk8p" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.936726 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d28e5052-f88a-4ac6-bc82-65be89a0d4ce-combined-ca-bundle\") pod \"heat-engine-79677cdc5d-w6ptg\" (UID: \"d28e5052-f88a-4ac6-bc82-65be89a0d4ce\") " pod="openstack/heat-engine-79677cdc5d-w6ptg" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.936752 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a51e57bb-6c52-46e5-80f2-10548e50cec2-combined-ca-bundle\") pod \"heat-cfnapi-66bb748dfb-7kk8p\" (UID: \"a51e57bb-6c52-46e5-80f2-10548e50cec2\") " pod="openstack/heat-cfnapi-66bb748dfb-7kk8p" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.936788 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d28e5052-f88a-4ac6-bc82-65be89a0d4ce-config-data\") pod \"heat-engine-79677cdc5d-w6ptg\" (UID: \"d28e5052-f88a-4ac6-bc82-65be89a0d4ce\") " pod="openstack/heat-engine-79677cdc5d-w6ptg" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.943501 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7c65b7c586-8wrjg"] Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.945170 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7c65b7c586-8wrjg" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.951388 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a51e57bb-6c52-46e5-80f2-10548e50cec2-combined-ca-bundle\") pod \"heat-cfnapi-66bb748dfb-7kk8p\" (UID: \"a51e57bb-6c52-46e5-80f2-10548e50cec2\") " pod="openstack/heat-cfnapi-66bb748dfb-7kk8p" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.952073 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a51e57bb-6c52-46e5-80f2-10548e50cec2-config-data-custom\") pod \"heat-cfnapi-66bb748dfb-7kk8p\" (UID: \"a51e57bb-6c52-46e5-80f2-10548e50cec2\") " pod="openstack/heat-cfnapi-66bb748dfb-7kk8p" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.958552 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a51e57bb-6c52-46e5-80f2-10548e50cec2-config-data\") pod \"heat-cfnapi-66bb748dfb-7kk8p\" (UID: \"a51e57bb-6c52-46e5-80f2-10548e50cec2\") " pod="openstack/heat-cfnapi-66bb748dfb-7kk8p" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.961708 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.963287 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d28e5052-f88a-4ac6-bc82-65be89a0d4ce-config-data-custom\") pod \"heat-engine-79677cdc5d-w6ptg\" (UID: \"d28e5052-f88a-4ac6-bc82-65be89a0d4ce\") " pod="openstack/heat-engine-79677cdc5d-w6ptg" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.964370 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7c65b7c586-8wrjg"] Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.964783 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d28e5052-f88a-4ac6-bc82-65be89a0d4ce-config-data\") pod \"heat-engine-79677cdc5d-w6ptg\" (UID: \"d28e5052-f88a-4ac6-bc82-65be89a0d4ce\") " pod="openstack/heat-engine-79677cdc5d-w6ptg" Feb 19 14:54:14 crc kubenswrapper[4861]: I0219 14:54:14.968936 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d28e5052-f88a-4ac6-bc82-65be89a0d4ce-combined-ca-bundle\") pod \"heat-engine-79677cdc5d-w6ptg\" (UID: \"d28e5052-f88a-4ac6-bc82-65be89a0d4ce\") " pod="openstack/heat-engine-79677cdc5d-w6ptg" Feb 19 14:54:15 crc kubenswrapper[4861]: I0219 14:54:14.999439 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgz5s\" (UniqueName: \"kubernetes.io/projected/d28e5052-f88a-4ac6-bc82-65be89a0d4ce-kube-api-access-dgz5s\") pod \"heat-engine-79677cdc5d-w6ptg\" (UID: \"d28e5052-f88a-4ac6-bc82-65be89a0d4ce\") " pod="openstack/heat-engine-79677cdc5d-w6ptg" Feb 19 14:54:15 crc kubenswrapper[4861]: I0219 14:54:15.030636 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6kdq\" (UniqueName: \"kubernetes.io/projected/a51e57bb-6c52-46e5-80f2-10548e50cec2-kube-api-access-s6kdq\") pod \"heat-cfnapi-66bb748dfb-7kk8p\" (UID: \"a51e57bb-6c52-46e5-80f2-10548e50cec2\") " pod="openstack/heat-cfnapi-66bb748dfb-7kk8p" Feb 19 14:54:15 crc kubenswrapper[4861]: I0219 14:54:15.041792 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5xct\" (UniqueName: \"kubernetes.io/projected/377e5788-d435-4c1e-905f-64fc6e473af7-kube-api-access-z5xct\") pod \"heat-api-7c65b7c586-8wrjg\" (UID: \"377e5788-d435-4c1e-905f-64fc6e473af7\") " pod="openstack/heat-api-7c65b7c586-8wrjg" Feb 19 14:54:15 crc kubenswrapper[4861]: I0219 14:54:15.041938 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/377e5788-d435-4c1e-905f-64fc6e473af7-combined-ca-bundle\") pod \"heat-api-7c65b7c586-8wrjg\" (UID: \"377e5788-d435-4c1e-905f-64fc6e473af7\") " pod="openstack/heat-api-7c65b7c586-8wrjg" Feb 19 14:54:15 crc kubenswrapper[4861]: I0219 14:54:15.041961 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/377e5788-d435-4c1e-905f-64fc6e473af7-config-data\") pod \"heat-api-7c65b7c586-8wrjg\" (UID: \"377e5788-d435-4c1e-905f-64fc6e473af7\") " pod="openstack/heat-api-7c65b7c586-8wrjg" Feb 19 14:54:15 crc kubenswrapper[4861]: I0219 14:54:15.042070 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/377e5788-d435-4c1e-905f-64fc6e473af7-config-data-custom\") pod \"heat-api-7c65b7c586-8wrjg\" (UID: \"377e5788-d435-4c1e-905f-64fc6e473af7\") " pod="openstack/heat-api-7c65b7c586-8wrjg" Feb 19 14:54:15 crc kubenswrapper[4861]: I0219 14:54:15.048777 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-79677cdc5d-w6ptg" Feb 19 14:54:15 crc kubenswrapper[4861]: I0219 14:54:15.094891 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-66bb748dfb-7kk8p" Feb 19 14:54:15 crc kubenswrapper[4861]: I0219 14:54:15.144652 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5xct\" (UniqueName: \"kubernetes.io/projected/377e5788-d435-4c1e-905f-64fc6e473af7-kube-api-access-z5xct\") pod \"heat-api-7c65b7c586-8wrjg\" (UID: \"377e5788-d435-4c1e-905f-64fc6e473af7\") " pod="openstack/heat-api-7c65b7c586-8wrjg" Feb 19 14:54:15 crc kubenswrapper[4861]: I0219 14:54:15.145020 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/377e5788-d435-4c1e-905f-64fc6e473af7-combined-ca-bundle\") pod \"heat-api-7c65b7c586-8wrjg\" (UID: \"377e5788-d435-4c1e-905f-64fc6e473af7\") " pod="openstack/heat-api-7c65b7c586-8wrjg" Feb 19 14:54:15 crc kubenswrapper[4861]: I0219 14:54:15.145042 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/377e5788-d435-4c1e-905f-64fc6e473af7-config-data\") pod \"heat-api-7c65b7c586-8wrjg\" (UID: \"377e5788-d435-4c1e-905f-64fc6e473af7\") " pod="openstack/heat-api-7c65b7c586-8wrjg" Feb 19 14:54:15 crc kubenswrapper[4861]: I0219 14:54:15.145104 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/377e5788-d435-4c1e-905f-64fc6e473af7-config-data-custom\") pod \"heat-api-7c65b7c586-8wrjg\" (UID: \"377e5788-d435-4c1e-905f-64fc6e473af7\") " pod="openstack/heat-api-7c65b7c586-8wrjg" Feb 19 14:54:15 crc kubenswrapper[4861]: I0219 14:54:15.163162 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/377e5788-d435-4c1e-905f-64fc6e473af7-config-data-custom\") pod \"heat-api-7c65b7c586-8wrjg\" (UID: \"377e5788-d435-4c1e-905f-64fc6e473af7\") " pod="openstack/heat-api-7c65b7c586-8wrjg" 
Feb 19 14:54:15 crc kubenswrapper[4861]: I0219 14:54:15.163759 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/377e5788-d435-4c1e-905f-64fc6e473af7-combined-ca-bundle\") pod \"heat-api-7c65b7c586-8wrjg\" (UID: \"377e5788-d435-4c1e-905f-64fc6e473af7\") " pod="openstack/heat-api-7c65b7c586-8wrjg" Feb 19 14:54:15 crc kubenswrapper[4861]: I0219 14:54:15.164545 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/377e5788-d435-4c1e-905f-64fc6e473af7-config-data\") pod \"heat-api-7c65b7c586-8wrjg\" (UID: \"377e5788-d435-4c1e-905f-64fc6e473af7\") " pod="openstack/heat-api-7c65b7c586-8wrjg" Feb 19 14:54:15 crc kubenswrapper[4861]: I0219 14:54:15.178594 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5xct\" (UniqueName: \"kubernetes.io/projected/377e5788-d435-4c1e-905f-64fc6e473af7-kube-api-access-z5xct\") pod \"heat-api-7c65b7c586-8wrjg\" (UID: \"377e5788-d435-4c1e-905f-64fc6e473af7\") " pod="openstack/heat-api-7c65b7c586-8wrjg" Feb 19 14:54:15 crc kubenswrapper[4861]: I0219 14:54:15.244321 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7c65b7c586-8wrjg" Feb 19 14:54:15 crc kubenswrapper[4861]: I0219 14:54:15.645237 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-66bb748dfb-7kk8p"] Feb 19 14:54:15 crc kubenswrapper[4861]: W0219 14:54:15.845587 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd28e5052_f88a_4ac6_bc82_65be89a0d4ce.slice/crio-448c7266f694489f494b72768e58cb9bd53b36ce0172dc7b6b92dbc08ab35323 WatchSource:0}: Error finding container 448c7266f694489f494b72768e58cb9bd53b36ce0172dc7b6b92dbc08ab35323: Status 404 returned error can't find the container with id 448c7266f694489f494b72768e58cb9bd53b36ce0172dc7b6b92dbc08ab35323 Feb 19 14:54:15 crc kubenswrapper[4861]: I0219 14:54:15.849587 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-79677cdc5d-w6ptg"] Feb 19 14:54:15 crc kubenswrapper[4861]: I0219 14:54:15.963437 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7c65b7c586-8wrjg"] Feb 19 14:54:15 crc kubenswrapper[4861]: I0219 14:54:15.991546 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7c65b7c586-8wrjg" event={"ID":"377e5788-d435-4c1e-905f-64fc6e473af7","Type":"ContainerStarted","Data":"5def166f745bc793c878825f51f30f8d0b6d60eee0131487de0ec9546a9ebfb1"} Feb 19 14:54:15 crc kubenswrapper[4861]: I0219 14:54:15.992964 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66bb748dfb-7kk8p" event={"ID":"a51e57bb-6c52-46e5-80f2-10548e50cec2","Type":"ContainerStarted","Data":"821b28453a88483ef3b645d94c28ce834915416bca1e7ae3888366cbb3e9f873"} Feb 19 14:54:15 crc kubenswrapper[4861]: I0219 14:54:15.993952 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-79677cdc5d-w6ptg" 
event={"ID":"d28e5052-f88a-4ac6-bc82-65be89a0d4ce","Type":"ContainerStarted","Data":"448c7266f694489f494b72768e58cb9bd53b36ce0172dc7b6b92dbc08ab35323"} Feb 19 14:54:16 crc kubenswrapper[4861]: I0219 14:54:16.585925 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-84f6b969fb-qdjxp" Feb 19 14:54:16 crc kubenswrapper[4861]: I0219 14:54:16.644385 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d645b4cf8-qqgr8"] Feb 19 14:54:16 crc kubenswrapper[4861]: I0219 14:54:16.644624 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6d645b4cf8-qqgr8" podUID="c5d6a437-b768-4701-a69a-1b99fd4f2626" containerName="horizon-log" containerID="cri-o://b31cfd2421fc72d50e0a9647eaa0599eb340c2dd7e67c6eae7b00f130d46564a" gracePeriod=30 Feb 19 14:54:16 crc kubenswrapper[4861]: I0219 14:54:16.645218 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6d645b4cf8-qqgr8" podUID="c5d6a437-b768-4701-a69a-1b99fd4f2626" containerName="horizon" containerID="cri-o://7ca64dd523d4823b18b23e28010117f55bb377e32b186df029f27efdb4175ced" gracePeriod=30 Feb 19 14:54:16 crc kubenswrapper[4861]: I0219 14:54:16.978702 4861 scope.go:117] "RemoveContainer" containerID="502a4549a2baa2b1572925937a0eb20e1bd1517278b2629c4572ef8e5d2f113d" Feb 19 14:54:16 crc kubenswrapper[4861]: E0219 14:54:16.979225 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:54:17 crc kubenswrapper[4861]: I0219 14:54:17.007035 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-engine-79677cdc5d-w6ptg" event={"ID":"d28e5052-f88a-4ac6-bc82-65be89a0d4ce","Type":"ContainerStarted","Data":"363c85e07aaf4b48089bc786b6ea3192bc3836c36c8161ffaf7144de15c7a094"} Feb 19 14:54:17 crc kubenswrapper[4861]: I0219 14:54:17.007170 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-79677cdc5d-w6ptg" Feb 19 14:54:17 crc kubenswrapper[4861]: I0219 14:54:17.025823 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-79677cdc5d-w6ptg" podStartSLOduration=3.025803782 podStartE2EDuration="3.025803782s" podCreationTimestamp="2026-02-19 14:54:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:54:17.021854206 +0000 UTC m=+6271.682957434" watchObservedRunningTime="2026-02-19 14:54:17.025803782 +0000 UTC m=+6271.686907010" Feb 19 14:54:18 crc kubenswrapper[4861]: I0219 14:54:18.590231 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jch5b"] Feb 19 14:54:18 crc kubenswrapper[4861]: I0219 14:54:18.592804 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jch5b" Feb 19 14:54:18 crc kubenswrapper[4861]: I0219 14:54:18.601302 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jch5b"] Feb 19 14:54:18 crc kubenswrapper[4861]: I0219 14:54:18.654500 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjj7m\" (UniqueName: \"kubernetes.io/projected/da61b4f6-3fcc-4b75-8d31-3db35c816789-kube-api-access-jjj7m\") pod \"certified-operators-jch5b\" (UID: \"da61b4f6-3fcc-4b75-8d31-3db35c816789\") " pod="openshift-marketplace/certified-operators-jch5b" Feb 19 14:54:18 crc kubenswrapper[4861]: I0219 14:54:18.654593 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da61b4f6-3fcc-4b75-8d31-3db35c816789-utilities\") pod \"certified-operators-jch5b\" (UID: \"da61b4f6-3fcc-4b75-8d31-3db35c816789\") " pod="openshift-marketplace/certified-operators-jch5b" Feb 19 14:54:18 crc kubenswrapper[4861]: I0219 14:54:18.654664 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da61b4f6-3fcc-4b75-8d31-3db35c816789-catalog-content\") pod \"certified-operators-jch5b\" (UID: \"da61b4f6-3fcc-4b75-8d31-3db35c816789\") " pod="openshift-marketplace/certified-operators-jch5b" Feb 19 14:54:18 crc kubenswrapper[4861]: I0219 14:54:18.755899 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da61b4f6-3fcc-4b75-8d31-3db35c816789-utilities\") pod \"certified-operators-jch5b\" (UID: \"da61b4f6-3fcc-4b75-8d31-3db35c816789\") " pod="openshift-marketplace/certified-operators-jch5b" Feb 19 14:54:18 crc kubenswrapper[4861]: I0219 14:54:18.755992 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da61b4f6-3fcc-4b75-8d31-3db35c816789-catalog-content\") pod \"certified-operators-jch5b\" (UID: \"da61b4f6-3fcc-4b75-8d31-3db35c816789\") " pod="openshift-marketplace/certified-operators-jch5b" Feb 19 14:54:18 crc kubenswrapper[4861]: I0219 14:54:18.756075 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjj7m\" (UniqueName: \"kubernetes.io/projected/da61b4f6-3fcc-4b75-8d31-3db35c816789-kube-api-access-jjj7m\") pod \"certified-operators-jch5b\" (UID: \"da61b4f6-3fcc-4b75-8d31-3db35c816789\") " pod="openshift-marketplace/certified-operators-jch5b" Feb 19 14:54:18 crc kubenswrapper[4861]: I0219 14:54:18.756869 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da61b4f6-3fcc-4b75-8d31-3db35c816789-utilities\") pod \"certified-operators-jch5b\" (UID: \"da61b4f6-3fcc-4b75-8d31-3db35c816789\") " pod="openshift-marketplace/certified-operators-jch5b" Feb 19 14:54:18 crc kubenswrapper[4861]: I0219 14:54:18.757212 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da61b4f6-3fcc-4b75-8d31-3db35c816789-catalog-content\") pod \"certified-operators-jch5b\" (UID: \"da61b4f6-3fcc-4b75-8d31-3db35c816789\") " pod="openshift-marketplace/certified-operators-jch5b" Feb 19 14:54:18 crc kubenswrapper[4861]: I0219 14:54:18.781958 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjj7m\" (UniqueName: \"kubernetes.io/projected/da61b4f6-3fcc-4b75-8d31-3db35c816789-kube-api-access-jjj7m\") pod \"certified-operators-jch5b\" (UID: \"da61b4f6-3fcc-4b75-8d31-3db35c816789\") " pod="openshift-marketplace/certified-operators-jch5b" Feb 19 14:54:18 crc kubenswrapper[4861]: I0219 14:54:18.911629 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jch5b" Feb 19 14:54:19 crc kubenswrapper[4861]: I0219 14:54:19.161316 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7c65b7c586-8wrjg" event={"ID":"377e5788-d435-4c1e-905f-64fc6e473af7","Type":"ContainerStarted","Data":"69c350ee8b26a532049788e2339a54d7c9c6258fd5288c9124aeb29a93953e21"} Feb 19 14:54:19 crc kubenswrapper[4861]: I0219 14:54:19.163206 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7c65b7c586-8wrjg" Feb 19 14:54:19 crc kubenswrapper[4861]: I0219 14:54:19.165208 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66bb748dfb-7kk8p" event={"ID":"a51e57bb-6c52-46e5-80f2-10548e50cec2","Type":"ContainerStarted","Data":"2bf842866715f1df77c320db3952c2328981756853ade3f9e8aa6ab85a45a94e"} Feb 19 14:54:19 crc kubenswrapper[4861]: I0219 14:54:19.165974 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-66bb748dfb-7kk8p" Feb 19 14:54:19 crc kubenswrapper[4861]: I0219 14:54:19.186997 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7c65b7c586-8wrjg" podStartSLOduration=3.279504109 podStartE2EDuration="5.186977915s" podCreationTimestamp="2026-02-19 14:54:14 +0000 UTC" firstStartedPulling="2026-02-19 14:54:15.977943731 +0000 UTC m=+6270.639046959" lastFinishedPulling="2026-02-19 14:54:17.885417537 +0000 UTC m=+6272.546520765" observedRunningTime="2026-02-19 14:54:19.185289039 +0000 UTC m=+6273.846392267" watchObservedRunningTime="2026-02-19 14:54:19.186977915 +0000 UTC m=+6273.848081143" Feb 19 14:54:19 crc kubenswrapper[4861]: I0219 14:54:19.236191 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-66bb748dfb-7kk8p" podStartSLOduration=3.01787593 podStartE2EDuration="5.236176397s" podCreationTimestamp="2026-02-19 14:54:14 +0000 UTC" 
firstStartedPulling="2026-02-19 14:54:15.671479187 +0000 UTC m=+6270.332582415" lastFinishedPulling="2026-02-19 14:54:17.889779654 +0000 UTC m=+6272.550882882" observedRunningTime="2026-02-19 14:54:19.226685882 +0000 UTC m=+6273.887789110" watchObservedRunningTime="2026-02-19 14:54:19.236176397 +0000 UTC m=+6273.897279625" Feb 19 14:54:19 crc kubenswrapper[4861]: I0219 14:54:19.536100 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jch5b"] Feb 19 14:54:19 crc kubenswrapper[4861]: I0219 14:54:19.931306 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6d645b4cf8-qqgr8" podUID="c5d6a437-b768-4701-a69a-1b99fd4f2626" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.125:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:53256->10.217.1.125:8443: read: connection reset by peer" Feb 19 14:54:20 crc kubenswrapper[4861]: I0219 14:54:20.174292 4861 generic.go:334] "Generic (PLEG): container finished" podID="c5d6a437-b768-4701-a69a-1b99fd4f2626" containerID="7ca64dd523d4823b18b23e28010117f55bb377e32b186df029f27efdb4175ced" exitCode=0 Feb 19 14:54:20 crc kubenswrapper[4861]: I0219 14:54:20.174359 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d645b4cf8-qqgr8" event={"ID":"c5d6a437-b768-4701-a69a-1b99fd4f2626","Type":"ContainerDied","Data":"7ca64dd523d4823b18b23e28010117f55bb377e32b186df029f27efdb4175ced"} Feb 19 14:54:20 crc kubenswrapper[4861]: I0219 14:54:20.176309 4861 generic.go:334] "Generic (PLEG): container finished" podID="da61b4f6-3fcc-4b75-8d31-3db35c816789" containerID="8df0f7de4587ed318d4ecbc670086e0d8d1f6a56b9db99fe2abbf232fece99cd" exitCode=0 Feb 19 14:54:20 crc kubenswrapper[4861]: I0219 14:54:20.176381 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jch5b" 
event={"ID":"da61b4f6-3fcc-4b75-8d31-3db35c816789","Type":"ContainerDied","Data":"8df0f7de4587ed318d4ecbc670086e0d8d1f6a56b9db99fe2abbf232fece99cd"} Feb 19 14:54:20 crc kubenswrapper[4861]: I0219 14:54:20.176400 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jch5b" event={"ID":"da61b4f6-3fcc-4b75-8d31-3db35c816789","Type":"ContainerStarted","Data":"dd252293615c54c4d9469ad30229ccae2df836c0bd98a31050f261877fa0506b"} Feb 19 14:54:21 crc kubenswrapper[4861]: I0219 14:54:21.185488 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jch5b" event={"ID":"da61b4f6-3fcc-4b75-8d31-3db35c816789","Type":"ContainerStarted","Data":"b583b6ac31669181e3d79d80f87e93f3feb2fc48451d684974c3652b6b7336cc"} Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.168161 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-59b7698fb8-spssh"] Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.172727 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-59b7698fb8-spssh" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.199213 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-554f48c65c-dvwd7"] Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.212947 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-554f48c65c-dvwd7" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.253052 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c931adc9-ac11-474c-ad7a-136c87e409d8-config-data-custom\") pod \"heat-api-554f48c65c-dvwd7\" (UID: \"c931adc9-ac11-474c-ad7a-136c87e409d8\") " pod="openstack/heat-api-554f48c65c-dvwd7" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.253690 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-59b7698fb8-spssh"] Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.253874 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e838a32a-b5f5-4ecf-9d2a-7280761b6ee8-combined-ca-bundle\") pod \"heat-engine-59b7698fb8-spssh\" (UID: \"e838a32a-b5f5-4ecf-9d2a-7280761b6ee8\") " pod="openstack/heat-engine-59b7698fb8-spssh" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.253927 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c931adc9-ac11-474c-ad7a-136c87e409d8-config-data\") pod \"heat-api-554f48c65c-dvwd7\" (UID: \"c931adc9-ac11-474c-ad7a-136c87e409d8\") " pod="openstack/heat-api-554f48c65c-dvwd7" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.253953 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e838a32a-b5f5-4ecf-9d2a-7280761b6ee8-config-data-custom\") pod \"heat-engine-59b7698fb8-spssh\" (UID: \"e838a32a-b5f5-4ecf-9d2a-7280761b6ee8\") " pod="openstack/heat-engine-59b7698fb8-spssh" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.254097 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htm8t\" (UniqueName: \"kubernetes.io/projected/e838a32a-b5f5-4ecf-9d2a-7280761b6ee8-kube-api-access-htm8t\") pod \"heat-engine-59b7698fb8-spssh\" (UID: \"e838a32a-b5f5-4ecf-9d2a-7280761b6ee8\") " pod="openstack/heat-engine-59b7698fb8-spssh" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.254144 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e838a32a-b5f5-4ecf-9d2a-7280761b6ee8-config-data\") pod \"heat-engine-59b7698fb8-spssh\" (UID: \"e838a32a-b5f5-4ecf-9d2a-7280761b6ee8\") " pod="openstack/heat-engine-59b7698fb8-spssh" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.256857 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c931adc9-ac11-474c-ad7a-136c87e409d8-combined-ca-bundle\") pod \"heat-api-554f48c65c-dvwd7\" (UID: \"c931adc9-ac11-474c-ad7a-136c87e409d8\") " pod="openstack/heat-api-554f48c65c-dvwd7" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.256896 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb6jd\" (UniqueName: \"kubernetes.io/projected/c931adc9-ac11-474c-ad7a-136c87e409d8-kube-api-access-sb6jd\") pod \"heat-api-554f48c65c-dvwd7\" (UID: \"c931adc9-ac11-474c-ad7a-136c87e409d8\") " pod="openstack/heat-api-554f48c65c-dvwd7" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.271123 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6bfd5f9d84-gzhps"] Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.278944 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6bfd5f9d84-gzhps" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.323994 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6bfd5f9d84-gzhps"] Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.334777 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-554f48c65c-dvwd7"] Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.358495 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e838a32a-b5f5-4ecf-9d2a-7280761b6ee8-combined-ca-bundle\") pod \"heat-engine-59b7698fb8-spssh\" (UID: \"e838a32a-b5f5-4ecf-9d2a-7280761b6ee8\") " pod="openstack/heat-engine-59b7698fb8-spssh" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.358549 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c931adc9-ac11-474c-ad7a-136c87e409d8-config-data\") pod \"heat-api-554f48c65c-dvwd7\" (UID: \"c931adc9-ac11-474c-ad7a-136c87e409d8\") " pod="openstack/heat-api-554f48c65c-dvwd7" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.358571 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e838a32a-b5f5-4ecf-9d2a-7280761b6ee8-config-data-custom\") pod \"heat-engine-59b7698fb8-spssh\" (UID: \"e838a32a-b5f5-4ecf-9d2a-7280761b6ee8\") " pod="openstack/heat-engine-59b7698fb8-spssh" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.358618 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htm8t\" (UniqueName: \"kubernetes.io/projected/e838a32a-b5f5-4ecf-9d2a-7280761b6ee8-kube-api-access-htm8t\") pod \"heat-engine-59b7698fb8-spssh\" (UID: \"e838a32a-b5f5-4ecf-9d2a-7280761b6ee8\") " pod="openstack/heat-engine-59b7698fb8-spssh" Feb 19 14:54:22 crc 
kubenswrapper[4861]: I0219 14:54:22.358643 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ed637d9-18f3-4aea-86fe-b4071981fd44-config-data\") pod \"heat-cfnapi-6bfd5f9d84-gzhps\" (UID: \"1ed637d9-18f3-4aea-86fe-b4071981fd44\") " pod="openstack/heat-cfnapi-6bfd5f9d84-gzhps" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.358664 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ed637d9-18f3-4aea-86fe-b4071981fd44-combined-ca-bundle\") pod \"heat-cfnapi-6bfd5f9d84-gzhps\" (UID: \"1ed637d9-18f3-4aea-86fe-b4071981fd44\") " pod="openstack/heat-cfnapi-6bfd5f9d84-gzhps" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.358685 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e838a32a-b5f5-4ecf-9d2a-7280761b6ee8-config-data\") pod \"heat-engine-59b7698fb8-spssh\" (UID: \"e838a32a-b5f5-4ecf-9d2a-7280761b6ee8\") " pod="openstack/heat-engine-59b7698fb8-spssh" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.358710 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ed637d9-18f3-4aea-86fe-b4071981fd44-config-data-custom\") pod \"heat-cfnapi-6bfd5f9d84-gzhps\" (UID: \"1ed637d9-18f3-4aea-86fe-b4071981fd44\") " pod="openstack/heat-cfnapi-6bfd5f9d84-gzhps" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.358737 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c931adc9-ac11-474c-ad7a-136c87e409d8-combined-ca-bundle\") pod \"heat-api-554f48c65c-dvwd7\" (UID: \"c931adc9-ac11-474c-ad7a-136c87e409d8\") " pod="openstack/heat-api-554f48c65c-dvwd7" Feb 19 14:54:22 crc 
kubenswrapper[4861]: I0219 14:54:22.358766 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmbbc\" (UniqueName: \"kubernetes.io/projected/1ed637d9-18f3-4aea-86fe-b4071981fd44-kube-api-access-mmbbc\") pod \"heat-cfnapi-6bfd5f9d84-gzhps\" (UID: \"1ed637d9-18f3-4aea-86fe-b4071981fd44\") " pod="openstack/heat-cfnapi-6bfd5f9d84-gzhps" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.358783 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb6jd\" (UniqueName: \"kubernetes.io/projected/c931adc9-ac11-474c-ad7a-136c87e409d8-kube-api-access-sb6jd\") pod \"heat-api-554f48c65c-dvwd7\" (UID: \"c931adc9-ac11-474c-ad7a-136c87e409d8\") " pod="openstack/heat-api-554f48c65c-dvwd7" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.358818 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c931adc9-ac11-474c-ad7a-136c87e409d8-config-data-custom\") pod \"heat-api-554f48c65c-dvwd7\" (UID: \"c931adc9-ac11-474c-ad7a-136c87e409d8\") " pod="openstack/heat-api-554f48c65c-dvwd7" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.364110 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c931adc9-ac11-474c-ad7a-136c87e409d8-config-data-custom\") pod \"heat-api-554f48c65c-dvwd7\" (UID: \"c931adc9-ac11-474c-ad7a-136c87e409d8\") " pod="openstack/heat-api-554f48c65c-dvwd7" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.364805 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e838a32a-b5f5-4ecf-9d2a-7280761b6ee8-config-data\") pod \"heat-engine-59b7698fb8-spssh\" (UID: \"e838a32a-b5f5-4ecf-9d2a-7280761b6ee8\") " pod="openstack/heat-engine-59b7698fb8-spssh" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 
14:54:22.365188 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e838a32a-b5f5-4ecf-9d2a-7280761b6ee8-combined-ca-bundle\") pod \"heat-engine-59b7698fb8-spssh\" (UID: \"e838a32a-b5f5-4ecf-9d2a-7280761b6ee8\") " pod="openstack/heat-engine-59b7698fb8-spssh" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.365562 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c931adc9-ac11-474c-ad7a-136c87e409d8-config-data\") pod \"heat-api-554f48c65c-dvwd7\" (UID: \"c931adc9-ac11-474c-ad7a-136c87e409d8\") " pod="openstack/heat-api-554f48c65c-dvwd7" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.366074 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e838a32a-b5f5-4ecf-9d2a-7280761b6ee8-config-data-custom\") pod \"heat-engine-59b7698fb8-spssh\" (UID: \"e838a32a-b5f5-4ecf-9d2a-7280761b6ee8\") " pod="openstack/heat-engine-59b7698fb8-spssh" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.368211 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c931adc9-ac11-474c-ad7a-136c87e409d8-combined-ca-bundle\") pod \"heat-api-554f48c65c-dvwd7\" (UID: \"c931adc9-ac11-474c-ad7a-136c87e409d8\") " pod="openstack/heat-api-554f48c65c-dvwd7" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.378321 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb6jd\" (UniqueName: \"kubernetes.io/projected/c931adc9-ac11-474c-ad7a-136c87e409d8-kube-api-access-sb6jd\") pod \"heat-api-554f48c65c-dvwd7\" (UID: \"c931adc9-ac11-474c-ad7a-136c87e409d8\") " pod="openstack/heat-api-554f48c65c-dvwd7" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.378327 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-htm8t\" (UniqueName: \"kubernetes.io/projected/e838a32a-b5f5-4ecf-9d2a-7280761b6ee8-kube-api-access-htm8t\") pod \"heat-engine-59b7698fb8-spssh\" (UID: \"e838a32a-b5f5-4ecf-9d2a-7280761b6ee8\") " pod="openstack/heat-engine-59b7698fb8-spssh" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.461262 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ed637d9-18f3-4aea-86fe-b4071981fd44-config-data\") pod \"heat-cfnapi-6bfd5f9d84-gzhps\" (UID: \"1ed637d9-18f3-4aea-86fe-b4071981fd44\") " pod="openstack/heat-cfnapi-6bfd5f9d84-gzhps" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.461327 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ed637d9-18f3-4aea-86fe-b4071981fd44-combined-ca-bundle\") pod \"heat-cfnapi-6bfd5f9d84-gzhps\" (UID: \"1ed637d9-18f3-4aea-86fe-b4071981fd44\") " pod="openstack/heat-cfnapi-6bfd5f9d84-gzhps" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.461394 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ed637d9-18f3-4aea-86fe-b4071981fd44-config-data-custom\") pod \"heat-cfnapi-6bfd5f9d84-gzhps\" (UID: \"1ed637d9-18f3-4aea-86fe-b4071981fd44\") " pod="openstack/heat-cfnapi-6bfd5f9d84-gzhps" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.461461 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmbbc\" (UniqueName: \"kubernetes.io/projected/1ed637d9-18f3-4aea-86fe-b4071981fd44-kube-api-access-mmbbc\") pod \"heat-cfnapi-6bfd5f9d84-gzhps\" (UID: \"1ed637d9-18f3-4aea-86fe-b4071981fd44\") " pod="openstack/heat-cfnapi-6bfd5f9d84-gzhps" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.464538 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1ed637d9-18f3-4aea-86fe-b4071981fd44-config-data\") pod \"heat-cfnapi-6bfd5f9d84-gzhps\" (UID: \"1ed637d9-18f3-4aea-86fe-b4071981fd44\") " pod="openstack/heat-cfnapi-6bfd5f9d84-gzhps" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.465561 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ed637d9-18f3-4aea-86fe-b4071981fd44-combined-ca-bundle\") pod \"heat-cfnapi-6bfd5f9d84-gzhps\" (UID: \"1ed637d9-18f3-4aea-86fe-b4071981fd44\") " pod="openstack/heat-cfnapi-6bfd5f9d84-gzhps" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.466224 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ed637d9-18f3-4aea-86fe-b4071981fd44-config-data-custom\") pod \"heat-cfnapi-6bfd5f9d84-gzhps\" (UID: \"1ed637d9-18f3-4aea-86fe-b4071981fd44\") " pod="openstack/heat-cfnapi-6bfd5f9d84-gzhps" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.480314 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmbbc\" (UniqueName: \"kubernetes.io/projected/1ed637d9-18f3-4aea-86fe-b4071981fd44-kube-api-access-mmbbc\") pod \"heat-cfnapi-6bfd5f9d84-gzhps\" (UID: \"1ed637d9-18f3-4aea-86fe-b4071981fd44\") " pod="openstack/heat-cfnapi-6bfd5f9d84-gzhps" Feb 19 14:54:22 crc kubenswrapper[4861]: E0219 14:54:22.508890 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda61b4f6_3fcc_4b75_8d31_3db35c816789.slice/crio-b583b6ac31669181e3d79d80f87e93f3feb2fc48451d684974c3652b6b7336cc.scope\": RecentStats: unable to find data in memory cache]" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.511263 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-59b7698fb8-spssh" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.563042 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-554f48c65c-dvwd7" Feb 19 14:54:22 crc kubenswrapper[4861]: I0219 14:54:22.603845 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6bfd5f9d84-gzhps" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.013971 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-59b7698fb8-spssh"] Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.131718 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-554f48c65c-dvwd7"] Feb 19 14:54:23 crc kubenswrapper[4861]: W0219 14:54:23.134428 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc931adc9_ac11_474c_ad7a_136c87e409d8.slice/crio-186a8bdc81cd16187783cce7311642cf10fba72bce191c4b2f9889143281097c WatchSource:0}: Error finding container 186a8bdc81cd16187783cce7311642cf10fba72bce191c4b2f9889143281097c: Status 404 returned error can't find the container with id 186a8bdc81cd16187783cce7311642cf10fba72bce191c4b2f9889143281097c Feb 19 14:54:23 crc kubenswrapper[4861]: W0219 14:54:23.240490 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ed637d9_18f3_4aea_86fe_b4071981fd44.slice/crio-fdc5f4d55371bd0f59e8bed5dc9c4c70ddff06735cc13d10c73caa283935355f WatchSource:0}: Error finding container fdc5f4d55371bd0f59e8bed5dc9c4c70ddff06735cc13d10c73caa283935355f: Status 404 returned error can't find the container with id fdc5f4d55371bd0f59e8bed5dc9c4c70ddff06735cc13d10c73caa283935355f Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.242173 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/heat-cfnapi-6bfd5f9d84-gzhps"] Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.250449 4861 generic.go:334] "Generic (PLEG): container finished" podID="da61b4f6-3fcc-4b75-8d31-3db35c816789" containerID="b583b6ac31669181e3d79d80f87e93f3feb2fc48451d684974c3652b6b7336cc" exitCode=0 Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.250565 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jch5b" event={"ID":"da61b4f6-3fcc-4b75-8d31-3db35c816789","Type":"ContainerDied","Data":"b583b6ac31669181e3d79d80f87e93f3feb2fc48451d684974c3652b6b7336cc"} Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.255944 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-554f48c65c-dvwd7" event={"ID":"c931adc9-ac11-474c-ad7a-136c87e409d8","Type":"ContainerStarted","Data":"186a8bdc81cd16187783cce7311642cf10fba72bce191c4b2f9889143281097c"} Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.262018 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-59b7698fb8-spssh" event={"ID":"e838a32a-b5f5-4ecf-9d2a-7280761b6ee8","Type":"ContainerStarted","Data":"31bcdb8079dd92cef69d7115ae2db2fe0f4638b72aa09fa0cf176e2ec75d9ccc"} Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.262061 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-59b7698fb8-spssh" event={"ID":"e838a32a-b5f5-4ecf-9d2a-7280761b6ee8","Type":"ContainerStarted","Data":"aeef22a7ff7edbf49f0f8b3dd92d4fb5652131672b6acb81c03f922b7dcc71dd"} Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.262309 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-59b7698fb8-spssh" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.311358 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-59b7698fb8-spssh" podStartSLOduration=1.31134192 podStartE2EDuration="1.31134192s" 
podCreationTimestamp="2026-02-19 14:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:54:23.298121746 +0000 UTC m=+6277.959225014" watchObservedRunningTime="2026-02-19 14:54:23.31134192 +0000 UTC m=+6277.972445138" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.401714 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-66bb748dfb-7kk8p"] Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.401915 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-66bb748dfb-7kk8p" podUID="a51e57bb-6c52-46e5-80f2-10548e50cec2" containerName="heat-cfnapi" containerID="cri-o://2bf842866715f1df77c320db3952c2328981756853ade3f9e8aa6ab85a45a94e" gracePeriod=60 Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.421956 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7c65b7c586-8wrjg"] Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.422139 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-7c65b7c586-8wrjg" podUID="377e5788-d435-4c1e-905f-64fc6e473af7" containerName="heat-api" containerID="cri-o://69c350ee8b26a532049788e2339a54d7c9c6258fd5288c9124aeb29a93953e21" gracePeriod=60 Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.440633 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-7c65b7c586-8wrjg" podUID="377e5788-d435-4c1e-905f-64fc6e473af7" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.1.134:8004/healthcheck\": EOF" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.444514 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-8689fd4cf7-qjh56"] Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.446259 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-8689fd4cf7-qjh56" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.446745 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-66bb748dfb-7kk8p" podUID="a51e57bb-6c52-46e5-80f2-10548e50cec2" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.1.133:8000/healthcheck\": EOF" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.451170 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.451402 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.466935 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-578855784b-hjvjg"] Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.468130 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-578855784b-hjvjg" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.472085 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.472278 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.491406 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-8689fd4cf7-qjh56"] Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.494835 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6353b71b-766e-410d-bd79-bc9e820919ae-public-tls-certs\") pod \"heat-api-578855784b-hjvjg\" (UID: \"6353b71b-766e-410d-bd79-bc9e820919ae\") " pod="openstack/heat-api-578855784b-hjvjg" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.494955 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e9b28ad-f37a-4283-a2ed-3f04683ffc4f-config-data\") pod \"heat-cfnapi-8689fd4cf7-qjh56\" (UID: \"5e9b28ad-f37a-4283-a2ed-3f04683ffc4f\") " pod="openstack/heat-cfnapi-8689fd4cf7-qjh56" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.495008 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j94m2\" (UniqueName: \"kubernetes.io/projected/5e9b28ad-f37a-4283-a2ed-3f04683ffc4f-kube-api-access-j94m2\") pod \"heat-cfnapi-8689fd4cf7-qjh56\" (UID: \"5e9b28ad-f37a-4283-a2ed-3f04683ffc4f\") " pod="openstack/heat-cfnapi-8689fd4cf7-qjh56" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.495068 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx4bc\" (UniqueName: 
\"kubernetes.io/projected/6353b71b-766e-410d-bd79-bc9e820919ae-kube-api-access-cx4bc\") pod \"heat-api-578855784b-hjvjg\" (UID: \"6353b71b-766e-410d-bd79-bc9e820919ae\") " pod="openstack/heat-api-578855784b-hjvjg" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.495115 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e9b28ad-f37a-4283-a2ed-3f04683ffc4f-combined-ca-bundle\") pod \"heat-cfnapi-8689fd4cf7-qjh56\" (UID: \"5e9b28ad-f37a-4283-a2ed-3f04683ffc4f\") " pod="openstack/heat-cfnapi-8689fd4cf7-qjh56" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.495188 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6353b71b-766e-410d-bd79-bc9e820919ae-internal-tls-certs\") pod \"heat-api-578855784b-hjvjg\" (UID: \"6353b71b-766e-410d-bd79-bc9e820919ae\") " pod="openstack/heat-api-578855784b-hjvjg" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.495248 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6353b71b-766e-410d-bd79-bc9e820919ae-config-data-custom\") pod \"heat-api-578855784b-hjvjg\" (UID: \"6353b71b-766e-410d-bd79-bc9e820919ae\") " pod="openstack/heat-api-578855784b-hjvjg" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.495312 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6353b71b-766e-410d-bd79-bc9e820919ae-combined-ca-bundle\") pod \"heat-api-578855784b-hjvjg\" (UID: \"6353b71b-766e-410d-bd79-bc9e820919ae\") " pod="openstack/heat-api-578855784b-hjvjg" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.495354 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e9b28ad-f37a-4283-a2ed-3f04683ffc4f-internal-tls-certs\") pod \"heat-cfnapi-8689fd4cf7-qjh56\" (UID: \"5e9b28ad-f37a-4283-a2ed-3f04683ffc4f\") " pod="openstack/heat-cfnapi-8689fd4cf7-qjh56" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.495391 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e9b28ad-f37a-4283-a2ed-3f04683ffc4f-public-tls-certs\") pod \"heat-cfnapi-8689fd4cf7-qjh56\" (UID: \"5e9b28ad-f37a-4283-a2ed-3f04683ffc4f\") " pod="openstack/heat-cfnapi-8689fd4cf7-qjh56" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.495433 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6353b71b-766e-410d-bd79-bc9e820919ae-config-data\") pod \"heat-api-578855784b-hjvjg\" (UID: \"6353b71b-766e-410d-bd79-bc9e820919ae\") " pod="openstack/heat-api-578855784b-hjvjg" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.495457 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e9b28ad-f37a-4283-a2ed-3f04683ffc4f-config-data-custom\") pod \"heat-cfnapi-8689fd4cf7-qjh56\" (UID: \"5e9b28ad-f37a-4283-a2ed-3f04683ffc4f\") " pod="openstack/heat-cfnapi-8689fd4cf7-qjh56" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.512615 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-578855784b-hjvjg"] Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.597708 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e9b28ad-f37a-4283-a2ed-3f04683ffc4f-internal-tls-certs\") pod \"heat-cfnapi-8689fd4cf7-qjh56\" (UID: \"5e9b28ad-f37a-4283-a2ed-3f04683ffc4f\") " 
pod="openstack/heat-cfnapi-8689fd4cf7-qjh56" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.597753 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e9b28ad-f37a-4283-a2ed-3f04683ffc4f-public-tls-certs\") pod \"heat-cfnapi-8689fd4cf7-qjh56\" (UID: \"5e9b28ad-f37a-4283-a2ed-3f04683ffc4f\") " pod="openstack/heat-cfnapi-8689fd4cf7-qjh56" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.597776 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6353b71b-766e-410d-bd79-bc9e820919ae-config-data\") pod \"heat-api-578855784b-hjvjg\" (UID: \"6353b71b-766e-410d-bd79-bc9e820919ae\") " pod="openstack/heat-api-578855784b-hjvjg" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.597794 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e9b28ad-f37a-4283-a2ed-3f04683ffc4f-config-data-custom\") pod \"heat-cfnapi-8689fd4cf7-qjh56\" (UID: \"5e9b28ad-f37a-4283-a2ed-3f04683ffc4f\") " pod="openstack/heat-cfnapi-8689fd4cf7-qjh56" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.597816 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6353b71b-766e-410d-bd79-bc9e820919ae-public-tls-certs\") pod \"heat-api-578855784b-hjvjg\" (UID: \"6353b71b-766e-410d-bd79-bc9e820919ae\") " pod="openstack/heat-api-578855784b-hjvjg" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.597875 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e9b28ad-f37a-4283-a2ed-3f04683ffc4f-config-data\") pod \"heat-cfnapi-8689fd4cf7-qjh56\" (UID: \"5e9b28ad-f37a-4283-a2ed-3f04683ffc4f\") " pod="openstack/heat-cfnapi-8689fd4cf7-qjh56" Feb 19 14:54:23 crc 
kubenswrapper[4861]: I0219 14:54:23.597910 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j94m2\" (UniqueName: \"kubernetes.io/projected/5e9b28ad-f37a-4283-a2ed-3f04683ffc4f-kube-api-access-j94m2\") pod \"heat-cfnapi-8689fd4cf7-qjh56\" (UID: \"5e9b28ad-f37a-4283-a2ed-3f04683ffc4f\") " pod="openstack/heat-cfnapi-8689fd4cf7-qjh56" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.597948 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx4bc\" (UniqueName: \"kubernetes.io/projected/6353b71b-766e-410d-bd79-bc9e820919ae-kube-api-access-cx4bc\") pod \"heat-api-578855784b-hjvjg\" (UID: \"6353b71b-766e-410d-bd79-bc9e820919ae\") " pod="openstack/heat-api-578855784b-hjvjg" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.597974 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e9b28ad-f37a-4283-a2ed-3f04683ffc4f-combined-ca-bundle\") pod \"heat-cfnapi-8689fd4cf7-qjh56\" (UID: \"5e9b28ad-f37a-4283-a2ed-3f04683ffc4f\") " pod="openstack/heat-cfnapi-8689fd4cf7-qjh56" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.598012 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6353b71b-766e-410d-bd79-bc9e820919ae-internal-tls-certs\") pod \"heat-api-578855784b-hjvjg\" (UID: \"6353b71b-766e-410d-bd79-bc9e820919ae\") " pod="openstack/heat-api-578855784b-hjvjg" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.598048 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6353b71b-766e-410d-bd79-bc9e820919ae-config-data-custom\") pod \"heat-api-578855784b-hjvjg\" (UID: \"6353b71b-766e-410d-bd79-bc9e820919ae\") " pod="openstack/heat-api-578855784b-hjvjg" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 
14:54:23.598083 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6353b71b-766e-410d-bd79-bc9e820919ae-combined-ca-bundle\") pod \"heat-api-578855784b-hjvjg\" (UID: \"6353b71b-766e-410d-bd79-bc9e820919ae\") " pod="openstack/heat-api-578855784b-hjvjg" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.610770 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e9b28ad-f37a-4283-a2ed-3f04683ffc4f-public-tls-certs\") pod \"heat-cfnapi-8689fd4cf7-qjh56\" (UID: \"5e9b28ad-f37a-4283-a2ed-3f04683ffc4f\") " pod="openstack/heat-cfnapi-8689fd4cf7-qjh56" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.614559 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e9b28ad-f37a-4283-a2ed-3f04683ffc4f-internal-tls-certs\") pod \"heat-cfnapi-8689fd4cf7-qjh56\" (UID: \"5e9b28ad-f37a-4283-a2ed-3f04683ffc4f\") " pod="openstack/heat-cfnapi-8689fd4cf7-qjh56" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.618238 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j94m2\" (UniqueName: \"kubernetes.io/projected/5e9b28ad-f37a-4283-a2ed-3f04683ffc4f-kube-api-access-j94m2\") pod \"heat-cfnapi-8689fd4cf7-qjh56\" (UID: \"5e9b28ad-f37a-4283-a2ed-3f04683ffc4f\") " pod="openstack/heat-cfnapi-8689fd4cf7-qjh56" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.621007 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6353b71b-766e-410d-bd79-bc9e820919ae-internal-tls-certs\") pod \"heat-api-578855784b-hjvjg\" (UID: \"6353b71b-766e-410d-bd79-bc9e820919ae\") " pod="openstack/heat-api-578855784b-hjvjg" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.622051 4861 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e9b28ad-f37a-4283-a2ed-3f04683ffc4f-combined-ca-bundle\") pod \"heat-cfnapi-8689fd4cf7-qjh56\" (UID: \"5e9b28ad-f37a-4283-a2ed-3f04683ffc4f\") " pod="openstack/heat-cfnapi-8689fd4cf7-qjh56" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.622600 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e9b28ad-f37a-4283-a2ed-3f04683ffc4f-config-data-custom\") pod \"heat-cfnapi-8689fd4cf7-qjh56\" (UID: \"5e9b28ad-f37a-4283-a2ed-3f04683ffc4f\") " pod="openstack/heat-cfnapi-8689fd4cf7-qjh56" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.623529 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx4bc\" (UniqueName: \"kubernetes.io/projected/6353b71b-766e-410d-bd79-bc9e820919ae-kube-api-access-cx4bc\") pod \"heat-api-578855784b-hjvjg\" (UID: \"6353b71b-766e-410d-bd79-bc9e820919ae\") " pod="openstack/heat-api-578855784b-hjvjg" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.624018 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6353b71b-766e-410d-bd79-bc9e820919ae-public-tls-certs\") pod \"heat-api-578855784b-hjvjg\" (UID: \"6353b71b-766e-410d-bd79-bc9e820919ae\") " pod="openstack/heat-api-578855784b-hjvjg" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.632251 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6353b71b-766e-410d-bd79-bc9e820919ae-combined-ca-bundle\") pod \"heat-api-578855784b-hjvjg\" (UID: \"6353b71b-766e-410d-bd79-bc9e820919ae\") " pod="openstack/heat-api-578855784b-hjvjg" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.632621 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6353b71b-766e-410d-bd79-bc9e820919ae-config-data\") pod \"heat-api-578855784b-hjvjg\" (UID: \"6353b71b-766e-410d-bd79-bc9e820919ae\") " pod="openstack/heat-api-578855784b-hjvjg" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.633592 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e9b28ad-f37a-4283-a2ed-3f04683ffc4f-config-data\") pod \"heat-cfnapi-8689fd4cf7-qjh56\" (UID: \"5e9b28ad-f37a-4283-a2ed-3f04683ffc4f\") " pod="openstack/heat-cfnapi-8689fd4cf7-qjh56" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.634041 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6353b71b-766e-410d-bd79-bc9e820919ae-config-data-custom\") pod \"heat-api-578855784b-hjvjg\" (UID: \"6353b71b-766e-410d-bd79-bc9e820919ae\") " pod="openstack/heat-api-578855784b-hjvjg" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.832117 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-8689fd4cf7-qjh56" Feb 19 14:54:23 crc kubenswrapper[4861]: I0219 14:54:23.851329 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-578855784b-hjvjg" Feb 19 14:54:24 crc kubenswrapper[4861]: I0219 14:54:24.289022 4861 generic.go:334] "Generic (PLEG): container finished" podID="1ed637d9-18f3-4aea-86fe-b4071981fd44" containerID="8d526656fb7d6157594c8760d85d5adc79e8ba77119d6e2bd451782e586afd43" exitCode=1 Feb 19 14:54:24 crc kubenswrapper[4861]: I0219 14:54:24.289279 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6bfd5f9d84-gzhps" event={"ID":"1ed637d9-18f3-4aea-86fe-b4071981fd44","Type":"ContainerDied","Data":"8d526656fb7d6157594c8760d85d5adc79e8ba77119d6e2bd451782e586afd43"} Feb 19 14:54:24 crc kubenswrapper[4861]: I0219 14:54:24.289305 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6bfd5f9d84-gzhps" event={"ID":"1ed637d9-18f3-4aea-86fe-b4071981fd44","Type":"ContainerStarted","Data":"fdc5f4d55371bd0f59e8bed5dc9c4c70ddff06735cc13d10c73caa283935355f"} Feb 19 14:54:24 crc kubenswrapper[4861]: I0219 14:54:24.289975 4861 scope.go:117] "RemoveContainer" containerID="8d526656fb7d6157594c8760d85d5adc79e8ba77119d6e2bd451782e586afd43" Feb 19 14:54:24 crc kubenswrapper[4861]: I0219 14:54:24.294864 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jch5b" event={"ID":"da61b4f6-3fcc-4b75-8d31-3db35c816789","Type":"ContainerStarted","Data":"bcd6306b7e2931f03558731fee4a7f1dc92f8b7d9aeaa43ba1cc0d3a012bd159"} Feb 19 14:54:24 crc kubenswrapper[4861]: I0219 14:54:24.300452 4861 generic.go:334] "Generic (PLEG): container finished" podID="c931adc9-ac11-474c-ad7a-136c87e409d8" containerID="08dae1b8940c4d20ae2367cab0382780981f1e48f64fe24125f855e414f98099" exitCode=1 Feb 19 14:54:24 crc kubenswrapper[4861]: I0219 14:54:24.304809 4861 scope.go:117] "RemoveContainer" containerID="08dae1b8940c4d20ae2367cab0382780981f1e48f64fe24125f855e414f98099" Feb 19 14:54:24 crc kubenswrapper[4861]: I0219 14:54:24.305291 4861 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/heat-api-554f48c65c-dvwd7" event={"ID":"c931adc9-ac11-474c-ad7a-136c87e409d8","Type":"ContainerDied","Data":"08dae1b8940c4d20ae2367cab0382780981f1e48f64fe24125f855e414f98099"} Feb 19 14:54:24 crc kubenswrapper[4861]: I0219 14:54:24.333226 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jch5b" podStartSLOduration=2.819042651 podStartE2EDuration="6.333208174s" podCreationTimestamp="2026-02-19 14:54:18 +0000 UTC" firstStartedPulling="2026-02-19 14:54:20.178074721 +0000 UTC m=+6274.839177949" lastFinishedPulling="2026-02-19 14:54:23.692240244 +0000 UTC m=+6278.353343472" observedRunningTime="2026-02-19 14:54:24.328918079 +0000 UTC m=+6278.990021307" watchObservedRunningTime="2026-02-19 14:54:24.333208174 +0000 UTC m=+6278.994311402" Feb 19 14:54:24 crc kubenswrapper[4861]: I0219 14:54:24.406108 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-578855784b-hjvjg"] Feb 19 14:54:24 crc kubenswrapper[4861]: I0219 14:54:24.463700 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-8689fd4cf7-qjh56"] Feb 19 14:54:24 crc kubenswrapper[4861]: W0219 14:54:24.469965 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e9b28ad_f37a_4283_a2ed_3f04683ffc4f.slice/crio-8e292cdd87981bc15269ed7afd9350d29c8b98321687ec4b84a7f9aa25e9ab40 WatchSource:0}: Error finding container 8e292cdd87981bc15269ed7afd9350d29c8b98321687ec4b84a7f9aa25e9ab40: Status 404 returned error can't find the container with id 8e292cdd87981bc15269ed7afd9350d29c8b98321687ec4b84a7f9aa25e9ab40 Feb 19 14:54:25 crc kubenswrapper[4861]: I0219 14:54:25.310579 4861 generic.go:334] "Generic (PLEG): container finished" podID="c931adc9-ac11-474c-ad7a-136c87e409d8" containerID="65273289282ee0b7beef789e19fab2be5554acc116eb1a1c6723594c52caf085" exitCode=1 Feb 19 14:54:25 crc kubenswrapper[4861]: 
I0219 14:54:25.310684 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-554f48c65c-dvwd7" event={"ID":"c931adc9-ac11-474c-ad7a-136c87e409d8","Type":"ContainerDied","Data":"65273289282ee0b7beef789e19fab2be5554acc116eb1a1c6723594c52caf085"} Feb 19 14:54:25 crc kubenswrapper[4861]: I0219 14:54:25.310984 4861 scope.go:117] "RemoveContainer" containerID="08dae1b8940c4d20ae2367cab0382780981f1e48f64fe24125f855e414f98099" Feb 19 14:54:25 crc kubenswrapper[4861]: I0219 14:54:25.311360 4861 scope.go:117] "RemoveContainer" containerID="65273289282ee0b7beef789e19fab2be5554acc116eb1a1c6723594c52caf085" Feb 19 14:54:25 crc kubenswrapper[4861]: E0219 14:54:25.312108 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-554f48c65c-dvwd7_openstack(c931adc9-ac11-474c-ad7a-136c87e409d8)\"" pod="openstack/heat-api-554f48c65c-dvwd7" podUID="c931adc9-ac11-474c-ad7a-136c87e409d8" Feb 19 14:54:25 crc kubenswrapper[4861]: I0219 14:54:25.313149 4861 generic.go:334] "Generic (PLEG): container finished" podID="1ed637d9-18f3-4aea-86fe-b4071981fd44" containerID="7ee5a91af2b6d50456b1b9765a165df1fdbc2d6b88bf9a899d51e06ceece391a" exitCode=1 Feb 19 14:54:25 crc kubenswrapper[4861]: I0219 14:54:25.313191 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6bfd5f9d84-gzhps" event={"ID":"1ed637d9-18f3-4aea-86fe-b4071981fd44","Type":"ContainerDied","Data":"7ee5a91af2b6d50456b1b9765a165df1fdbc2d6b88bf9a899d51e06ceece391a"} Feb 19 14:54:25 crc kubenswrapper[4861]: I0219 14:54:25.313838 4861 scope.go:117] "RemoveContainer" containerID="7ee5a91af2b6d50456b1b9765a165df1fdbc2d6b88bf9a899d51e06ceece391a" Feb 19 14:54:25 crc kubenswrapper[4861]: E0219 14:54:25.314086 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s 
restarting failed container=heat-cfnapi pod=heat-cfnapi-6bfd5f9d84-gzhps_openstack(1ed637d9-18f3-4aea-86fe-b4071981fd44)\"" pod="openstack/heat-cfnapi-6bfd5f9d84-gzhps" podUID="1ed637d9-18f3-4aea-86fe-b4071981fd44" Feb 19 14:54:25 crc kubenswrapper[4861]: I0219 14:54:25.314477 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8689fd4cf7-qjh56" event={"ID":"5e9b28ad-f37a-4283-a2ed-3f04683ffc4f","Type":"ContainerStarted","Data":"669c80ede1b944cc8b2327527ce62367943caf58f7077cf35c66c26cf20767cb"} Feb 19 14:54:25 crc kubenswrapper[4861]: I0219 14:54:25.314502 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8689fd4cf7-qjh56" event={"ID":"5e9b28ad-f37a-4283-a2ed-3f04683ffc4f","Type":"ContainerStarted","Data":"8e292cdd87981bc15269ed7afd9350d29c8b98321687ec4b84a7f9aa25e9ab40"} Feb 19 14:54:25 crc kubenswrapper[4861]: I0219 14:54:25.314904 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-8689fd4cf7-qjh56" Feb 19 14:54:25 crc kubenswrapper[4861]: I0219 14:54:25.316921 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-578855784b-hjvjg" event={"ID":"6353b71b-766e-410d-bd79-bc9e820919ae","Type":"ContainerStarted","Data":"1cca8261feff9d6806219f2c9e355fd07ced3dbe053349ae5e890d89294b20a4"} Feb 19 14:54:25 crc kubenswrapper[4861]: I0219 14:54:25.316951 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-578855784b-hjvjg" event={"ID":"6353b71b-766e-410d-bd79-bc9e820919ae","Type":"ContainerStarted","Data":"44b7ccada3e58fdd69476419ce0d4ebe812735dffe00fc390c2c7448d26f0986"} Feb 19 14:54:25 crc kubenswrapper[4861]: I0219 14:54:25.317040 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-578855784b-hjvjg" Feb 19 14:54:25 crc kubenswrapper[4861]: I0219 14:54:25.352872 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-578855784b-hjvjg" 
podStartSLOduration=2.352849198 podStartE2EDuration="2.352849198s" podCreationTimestamp="2026-02-19 14:54:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:54:25.342285874 +0000 UTC m=+6280.003389102" watchObservedRunningTime="2026-02-19 14:54:25.352849198 +0000 UTC m=+6280.013952426" Feb 19 14:54:25 crc kubenswrapper[4861]: I0219 14:54:25.391470 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-8689fd4cf7-qjh56" podStartSLOduration=2.391405224 podStartE2EDuration="2.391405224s" podCreationTimestamp="2026-02-19 14:54:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:54:25.390946552 +0000 UTC m=+6280.052049800" watchObservedRunningTime="2026-02-19 14:54:25.391405224 +0000 UTC m=+6280.052508452" Feb 19 14:54:25 crc kubenswrapper[4861]: I0219 14:54:25.456633 4861 scope.go:117] "RemoveContainer" containerID="8d526656fb7d6157594c8760d85d5adc79e8ba77119d6e2bd451782e586afd43" Feb 19 14:54:26 crc kubenswrapper[4861]: I0219 14:54:26.327633 4861 scope.go:117] "RemoveContainer" containerID="65273289282ee0b7beef789e19fab2be5554acc116eb1a1c6723594c52caf085" Feb 19 14:54:26 crc kubenswrapper[4861]: E0219 14:54:26.328036 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-554f48c65c-dvwd7_openstack(c931adc9-ac11-474c-ad7a-136c87e409d8)\"" pod="openstack/heat-api-554f48c65c-dvwd7" podUID="c931adc9-ac11-474c-ad7a-136c87e409d8" Feb 19 14:54:26 crc kubenswrapper[4861]: I0219 14:54:26.329713 4861 scope.go:117] "RemoveContainer" containerID="7ee5a91af2b6d50456b1b9765a165df1fdbc2d6b88bf9a899d51e06ceece391a" Feb 19 14:54:26 crc kubenswrapper[4861]: E0219 14:54:26.329960 4861 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6bfd5f9d84-gzhps_openstack(1ed637d9-18f3-4aea-86fe-b4071981fd44)\"" pod="openstack/heat-cfnapi-6bfd5f9d84-gzhps" podUID="1ed637d9-18f3-4aea-86fe-b4071981fd44" Feb 19 14:54:27 crc kubenswrapper[4861]: I0219 14:54:27.250084 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6d645b4cf8-qqgr8" podUID="c5d6a437-b768-4701-a69a-1b99fd4f2626" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.125:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.125:8443: connect: connection refused" Feb 19 14:54:27 crc kubenswrapper[4861]: I0219 14:54:27.563517 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-554f48c65c-dvwd7" Feb 19 14:54:27 crc kubenswrapper[4861]: I0219 14:54:27.564620 4861 scope.go:117] "RemoveContainer" containerID="65273289282ee0b7beef789e19fab2be5554acc116eb1a1c6723594c52caf085" Feb 19 14:54:27 crc kubenswrapper[4861]: E0219 14:54:27.565011 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-554f48c65c-dvwd7_openstack(c931adc9-ac11-474c-ad7a-136c87e409d8)\"" pod="openstack/heat-api-554f48c65c-dvwd7" podUID="c931adc9-ac11-474c-ad7a-136c87e409d8" Feb 19 14:54:27 crc kubenswrapper[4861]: I0219 14:54:27.565518 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-554f48c65c-dvwd7" Feb 19 14:54:27 crc kubenswrapper[4861]: I0219 14:54:27.604363 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6bfd5f9d84-gzhps" Feb 19 14:54:27 crc kubenswrapper[4861]: I0219 14:54:27.604448 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openstack/heat-cfnapi-6bfd5f9d84-gzhps" Feb 19 14:54:27 crc kubenswrapper[4861]: I0219 14:54:27.605325 4861 scope.go:117] "RemoveContainer" containerID="7ee5a91af2b6d50456b1b9765a165df1fdbc2d6b88bf9a899d51e06ceece391a" Feb 19 14:54:27 crc kubenswrapper[4861]: E0219 14:54:27.605845 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6bfd5f9d84-gzhps_openstack(1ed637d9-18f3-4aea-86fe-b4071981fd44)\"" pod="openstack/heat-cfnapi-6bfd5f9d84-gzhps" podUID="1ed637d9-18f3-4aea-86fe-b4071981fd44" Feb 19 14:54:28 crc kubenswrapper[4861]: I0219 14:54:28.357013 4861 scope.go:117] "RemoveContainer" containerID="65273289282ee0b7beef789e19fab2be5554acc116eb1a1c6723594c52caf085" Feb 19 14:54:28 crc kubenswrapper[4861]: E0219 14:54:28.357815 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-554f48c65c-dvwd7_openstack(c931adc9-ac11-474c-ad7a-136c87e409d8)\"" pod="openstack/heat-api-554f48c65c-dvwd7" podUID="c931adc9-ac11-474c-ad7a-136c87e409d8" Feb 19 14:54:28 crc kubenswrapper[4861]: I0219 14:54:28.806737 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-66bb748dfb-7kk8p" podUID="a51e57bb-6c52-46e5-80f2-10548e50cec2" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.1.133:8000/healthcheck\": read tcp 10.217.0.2:55702->10.217.1.133:8000: read: connection reset by peer" Feb 19 14:54:28 crc kubenswrapper[4861]: I0219 14:54:28.828788 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-7c65b7c586-8wrjg" podUID="377e5788-d435-4c1e-905f-64fc6e473af7" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.1.134:8004/healthcheck\": read tcp 10.217.0.2:41416->10.217.1.134:8004: read: 
connection reset by peer" Feb 19 14:54:28 crc kubenswrapper[4861]: I0219 14:54:28.911964 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jch5b" Feb 19 14:54:28 crc kubenswrapper[4861]: I0219 14:54:28.912025 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jch5b" Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.363786 4861 generic.go:334] "Generic (PLEG): container finished" podID="377e5788-d435-4c1e-905f-64fc6e473af7" containerID="69c350ee8b26a532049788e2339a54d7c9c6258fd5288c9124aeb29a93953e21" exitCode=0 Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.363879 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7c65b7c586-8wrjg" event={"ID":"377e5788-d435-4c1e-905f-64fc6e473af7","Type":"ContainerDied","Data":"69c350ee8b26a532049788e2339a54d7c9c6258fd5288c9124aeb29a93953e21"} Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.364151 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7c65b7c586-8wrjg" event={"ID":"377e5788-d435-4c1e-905f-64fc6e473af7","Type":"ContainerDied","Data":"5def166f745bc793c878825f51f30f8d0b6d60eee0131487de0ec9546a9ebfb1"} Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.364168 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5def166f745bc793c878825f51f30f8d0b6d60eee0131487de0ec9546a9ebfb1" Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.367759 4861 generic.go:334] "Generic (PLEG): container finished" podID="a51e57bb-6c52-46e5-80f2-10548e50cec2" containerID="2bf842866715f1df77c320db3952c2328981756853ade3f9e8aa6ab85a45a94e" exitCode=0 Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.367790 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66bb748dfb-7kk8p" 
event={"ID":"a51e57bb-6c52-46e5-80f2-10548e50cec2","Type":"ContainerDied","Data":"2bf842866715f1df77c320db3952c2328981756853ade3f9e8aa6ab85a45a94e"} Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.367808 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66bb748dfb-7kk8p" event={"ID":"a51e57bb-6c52-46e5-80f2-10548e50cec2","Type":"ContainerDied","Data":"821b28453a88483ef3b645d94c28ce834915416bca1e7ae3888366cbb3e9f873"} Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.367818 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="821b28453a88483ef3b645d94c28ce834915416bca1e7ae3888366cbb3e9f873" Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.441049 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-66bb748dfb-7kk8p" Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.449002 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7c65b7c586-8wrjg" Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.545055 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a51e57bb-6c52-46e5-80f2-10548e50cec2-config-data-custom\") pod \"a51e57bb-6c52-46e5-80f2-10548e50cec2\" (UID: \"a51e57bb-6c52-46e5-80f2-10548e50cec2\") " Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.545119 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/377e5788-d435-4c1e-905f-64fc6e473af7-combined-ca-bundle\") pod \"377e5788-d435-4c1e-905f-64fc6e473af7\" (UID: \"377e5788-d435-4c1e-905f-64fc6e473af7\") " Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.545162 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a51e57bb-6c52-46e5-80f2-10548e50cec2-config-data\") pod \"a51e57bb-6c52-46e5-80f2-10548e50cec2\" (UID: \"a51e57bb-6c52-46e5-80f2-10548e50cec2\") " Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.545228 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a51e57bb-6c52-46e5-80f2-10548e50cec2-combined-ca-bundle\") pod \"a51e57bb-6c52-46e5-80f2-10548e50cec2\" (UID: \"a51e57bb-6c52-46e5-80f2-10548e50cec2\") " Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.545358 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/377e5788-d435-4c1e-905f-64fc6e473af7-config-data\") pod \"377e5788-d435-4c1e-905f-64fc6e473af7\" (UID: \"377e5788-d435-4c1e-905f-64fc6e473af7\") " Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.545459 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6kdq\" (UniqueName: \"kubernetes.io/projected/a51e57bb-6c52-46e5-80f2-10548e50cec2-kube-api-access-s6kdq\") pod \"a51e57bb-6c52-46e5-80f2-10548e50cec2\" (UID: \"a51e57bb-6c52-46e5-80f2-10548e50cec2\") " Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.545510 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/377e5788-d435-4c1e-905f-64fc6e473af7-config-data-custom\") pod \"377e5788-d435-4c1e-905f-64fc6e473af7\" (UID: \"377e5788-d435-4c1e-905f-64fc6e473af7\") " Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.545553 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5xct\" (UniqueName: \"kubernetes.io/projected/377e5788-d435-4c1e-905f-64fc6e473af7-kube-api-access-z5xct\") pod \"377e5788-d435-4c1e-905f-64fc6e473af7\" (UID: \"377e5788-d435-4c1e-905f-64fc6e473af7\") " Feb 19 14:54:29 crc 
kubenswrapper[4861]: I0219 14:54:29.551985 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a51e57bb-6c52-46e5-80f2-10548e50cec2-kube-api-access-s6kdq" (OuterVolumeSpecName: "kube-api-access-s6kdq") pod "a51e57bb-6c52-46e5-80f2-10548e50cec2" (UID: "a51e57bb-6c52-46e5-80f2-10548e50cec2"). InnerVolumeSpecName "kube-api-access-s6kdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.553287 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a51e57bb-6c52-46e5-80f2-10548e50cec2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a51e57bb-6c52-46e5-80f2-10548e50cec2" (UID: "a51e57bb-6c52-46e5-80f2-10548e50cec2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.554149 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/377e5788-d435-4c1e-905f-64fc6e473af7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "377e5788-d435-4c1e-905f-64fc6e473af7" (UID: "377e5788-d435-4c1e-905f-64fc6e473af7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.557336 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/377e5788-d435-4c1e-905f-64fc6e473af7-kube-api-access-z5xct" (OuterVolumeSpecName: "kube-api-access-z5xct") pod "377e5788-d435-4c1e-905f-64fc6e473af7" (UID: "377e5788-d435-4c1e-905f-64fc6e473af7"). InnerVolumeSpecName "kube-api-access-z5xct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.575361 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a51e57bb-6c52-46e5-80f2-10548e50cec2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a51e57bb-6c52-46e5-80f2-10548e50cec2" (UID: "a51e57bb-6c52-46e5-80f2-10548e50cec2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.599235 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/377e5788-d435-4c1e-905f-64fc6e473af7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "377e5788-d435-4c1e-905f-64fc6e473af7" (UID: "377e5788-d435-4c1e-905f-64fc6e473af7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.608886 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/377e5788-d435-4c1e-905f-64fc6e473af7-config-data" (OuterVolumeSpecName: "config-data") pod "377e5788-d435-4c1e-905f-64fc6e473af7" (UID: "377e5788-d435-4c1e-905f-64fc6e473af7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.612063 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a51e57bb-6c52-46e5-80f2-10548e50cec2-config-data" (OuterVolumeSpecName: "config-data") pod "a51e57bb-6c52-46e5-80f2-10548e50cec2" (UID: "a51e57bb-6c52-46e5-80f2-10548e50cec2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.648692 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a51e57bb-6c52-46e5-80f2-10548e50cec2-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.648757 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/377e5788-d435-4c1e-905f-64fc6e473af7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.648771 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a51e57bb-6c52-46e5-80f2-10548e50cec2-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.648782 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a51e57bb-6c52-46e5-80f2-10548e50cec2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.648792 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/377e5788-d435-4c1e-905f-64fc6e473af7-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.648805 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6kdq\" (UniqueName: \"kubernetes.io/projected/a51e57bb-6c52-46e5-80f2-10548e50cec2-kube-api-access-s6kdq\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.648819 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/377e5788-d435-4c1e-905f-64fc6e473af7-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 
14:54:29.648831 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5xct\" (UniqueName: \"kubernetes.io/projected/377e5788-d435-4c1e-905f-64fc6e473af7-kube-api-access-z5xct\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:29 crc kubenswrapper[4861]: I0219 14:54:29.974000 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-jch5b" podUID="da61b4f6-3fcc-4b75-8d31-3db35c816789" containerName="registry-server" probeResult="failure" output=< Feb 19 14:54:29 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Feb 19 14:54:29 crc kubenswrapper[4861]: > Feb 19 14:54:30 crc kubenswrapper[4861]: I0219 14:54:30.383609 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7c65b7c586-8wrjg" Feb 19 14:54:30 crc kubenswrapper[4861]: I0219 14:54:30.384575 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-66bb748dfb-7kk8p" Feb 19 14:54:30 crc kubenswrapper[4861]: I0219 14:54:30.439869 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-66bb748dfb-7kk8p"] Feb 19 14:54:30 crc kubenswrapper[4861]: I0219 14:54:30.459656 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-66bb748dfb-7kk8p"] Feb 19 14:54:30 crc kubenswrapper[4861]: I0219 14:54:30.481790 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7c65b7c586-8wrjg"] Feb 19 14:54:30 crc kubenswrapper[4861]: I0219 14:54:30.501059 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-7c65b7c586-8wrjg"] Feb 19 14:54:31 crc kubenswrapper[4861]: I0219 14:54:31.977134 4861 scope.go:117] "RemoveContainer" containerID="502a4549a2baa2b1572925937a0eb20e1bd1517278b2629c4572ef8e5d2f113d" Feb 19 14:54:31 crc kubenswrapper[4861]: E0219 14:54:31.978407 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:54:31 crc kubenswrapper[4861]: I0219 14:54:31.999663 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="377e5788-d435-4c1e-905f-64fc6e473af7" path="/var/lib/kubelet/pods/377e5788-d435-4c1e-905f-64fc6e473af7/volumes" Feb 19 14:54:32 crc kubenswrapper[4861]: I0219 14:54:32.001213 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a51e57bb-6c52-46e5-80f2-10548e50cec2" path="/var/lib/kubelet/pods/a51e57bb-6c52-46e5-80f2-10548e50cec2/volumes" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.087284 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-79677cdc5d-w6ptg" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.095742 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-8689fd4cf7-qjh56" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.152181 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-578855784b-hjvjg" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.184554 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6bfd5f9d84-gzhps"] Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.261461 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-554f48c65c-dvwd7"] Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.690558 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6bfd5f9d84-gzhps" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.700541 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-554f48c65c-dvwd7" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.705652 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c931adc9-ac11-474c-ad7a-136c87e409d8-config-data\") pod \"c931adc9-ac11-474c-ad7a-136c87e409d8\" (UID: \"c931adc9-ac11-474c-ad7a-136c87e409d8\") " Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.705810 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6jd\" (UniqueName: \"kubernetes.io/projected/c931adc9-ac11-474c-ad7a-136c87e409d8-kube-api-access-sb6jd\") pod \"c931adc9-ac11-474c-ad7a-136c87e409d8\" (UID: \"c931adc9-ac11-474c-ad7a-136c87e409d8\") " Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.705927 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmbbc\" (UniqueName: \"kubernetes.io/projected/1ed637d9-18f3-4aea-86fe-b4071981fd44-kube-api-access-mmbbc\") pod \"1ed637d9-18f3-4aea-86fe-b4071981fd44\" (UID: \"1ed637d9-18f3-4aea-86fe-b4071981fd44\") " Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.706027 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ed637d9-18f3-4aea-86fe-b4071981fd44-config-data-custom\") pod \"1ed637d9-18f3-4aea-86fe-b4071981fd44\" (UID: \"1ed637d9-18f3-4aea-86fe-b4071981fd44\") " Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.706078 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c931adc9-ac11-474c-ad7a-136c87e409d8-config-data-custom\") pod 
\"c931adc9-ac11-474c-ad7a-136c87e409d8\" (UID: \"c931adc9-ac11-474c-ad7a-136c87e409d8\") " Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.706488 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ed637d9-18f3-4aea-86fe-b4071981fd44-config-data\") pod \"1ed637d9-18f3-4aea-86fe-b4071981fd44\" (UID: \"1ed637d9-18f3-4aea-86fe-b4071981fd44\") " Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.706538 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ed637d9-18f3-4aea-86fe-b4071981fd44-combined-ca-bundle\") pod \"1ed637d9-18f3-4aea-86fe-b4071981fd44\" (UID: \"1ed637d9-18f3-4aea-86fe-b4071981fd44\") " Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.706607 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c931adc9-ac11-474c-ad7a-136c87e409d8-combined-ca-bundle\") pod \"c931adc9-ac11-474c-ad7a-136c87e409d8\" (UID: \"c931adc9-ac11-474c-ad7a-136c87e409d8\") " Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.715511 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ed637d9-18f3-4aea-86fe-b4071981fd44-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1ed637d9-18f3-4aea-86fe-b4071981fd44" (UID: "1ed637d9-18f3-4aea-86fe-b4071981fd44"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.717031 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ed637d9-18f3-4aea-86fe-b4071981fd44-kube-api-access-mmbbc" (OuterVolumeSpecName: "kube-api-access-mmbbc") pod "1ed637d9-18f3-4aea-86fe-b4071981fd44" (UID: "1ed637d9-18f3-4aea-86fe-b4071981fd44"). 
InnerVolumeSpecName "kube-api-access-mmbbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.727935 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c931adc9-ac11-474c-ad7a-136c87e409d8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c931adc9-ac11-474c-ad7a-136c87e409d8" (UID: "c931adc9-ac11-474c-ad7a-136c87e409d8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.740913 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c931adc9-ac11-474c-ad7a-136c87e409d8-kube-api-access-sb6jd" (OuterVolumeSpecName: "kube-api-access-sb6jd") pod "c931adc9-ac11-474c-ad7a-136c87e409d8" (UID: "c931adc9-ac11-474c-ad7a-136c87e409d8"). InnerVolumeSpecName "kube-api-access-sb6jd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.758551 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c931adc9-ac11-474c-ad7a-136c87e409d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c931adc9-ac11-474c-ad7a-136c87e409d8" (UID: "c931adc9-ac11-474c-ad7a-136c87e409d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.766933 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ed637d9-18f3-4aea-86fe-b4071981fd44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ed637d9-18f3-4aea-86fe-b4071981fd44" (UID: "1ed637d9-18f3-4aea-86fe-b4071981fd44"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.800083 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c931adc9-ac11-474c-ad7a-136c87e409d8-config-data" (OuterVolumeSpecName: "config-data") pod "c931adc9-ac11-474c-ad7a-136c87e409d8" (UID: "c931adc9-ac11-474c-ad7a-136c87e409d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.809170 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ed637d9-18f3-4aea-86fe-b4071981fd44-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.809202 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c931adc9-ac11-474c-ad7a-136c87e409d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.809215 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c931adc9-ac11-474c-ad7a-136c87e409d8-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.809228 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6jd\" (UniqueName: \"kubernetes.io/projected/c931adc9-ac11-474c-ad7a-136c87e409d8-kube-api-access-sb6jd\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.809240 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmbbc\" (UniqueName: \"kubernetes.io/projected/1ed637d9-18f3-4aea-86fe-b4071981fd44-kube-api-access-mmbbc\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.809253 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/1ed637d9-18f3-4aea-86fe-b4071981fd44-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.809266 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c931adc9-ac11-474c-ad7a-136c87e409d8-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.811598 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ed637d9-18f3-4aea-86fe-b4071981fd44-config-data" (OuterVolumeSpecName: "config-data") pod "1ed637d9-18f3-4aea-86fe-b4071981fd44" (UID: "1ed637d9-18f3-4aea-86fe-b4071981fd44"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.834388 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p6vzs"] Feb 19 14:54:35 crc kubenswrapper[4861]: E0219 14:54:35.834983 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="377e5788-d435-4c1e-905f-64fc6e473af7" containerName="heat-api" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.835045 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="377e5788-d435-4c1e-905f-64fc6e473af7" containerName="heat-api" Feb 19 14:54:35 crc kubenswrapper[4861]: E0219 14:54:35.835114 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ed637d9-18f3-4aea-86fe-b4071981fd44" containerName="heat-cfnapi" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.835188 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ed637d9-18f3-4aea-86fe-b4071981fd44" containerName="heat-cfnapi" Feb 19 14:54:35 crc kubenswrapper[4861]: E0219 14:54:35.835255 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a51e57bb-6c52-46e5-80f2-10548e50cec2" containerName="heat-cfnapi" Feb 19 14:54:35 crc 
kubenswrapper[4861]: I0219 14:54:35.835319 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a51e57bb-6c52-46e5-80f2-10548e50cec2" containerName="heat-cfnapi" Feb 19 14:54:35 crc kubenswrapper[4861]: E0219 14:54:35.835385 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c931adc9-ac11-474c-ad7a-136c87e409d8" containerName="heat-api" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.835453 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c931adc9-ac11-474c-ad7a-136c87e409d8" containerName="heat-api" Feb 19 14:54:35 crc kubenswrapper[4861]: E0219 14:54:35.835530 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c931adc9-ac11-474c-ad7a-136c87e409d8" containerName="heat-api" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.835593 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c931adc9-ac11-474c-ad7a-136c87e409d8" containerName="heat-api" Feb 19 14:54:35 crc kubenswrapper[4861]: E0219 14:54:35.835673 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ed637d9-18f3-4aea-86fe-b4071981fd44" containerName="heat-cfnapi" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.835740 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ed637d9-18f3-4aea-86fe-b4071981fd44" containerName="heat-cfnapi" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.836046 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="c931adc9-ac11-474c-ad7a-136c87e409d8" containerName="heat-api" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.836127 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="c931adc9-ac11-474c-ad7a-136c87e409d8" containerName="heat-api" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.836203 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="377e5788-d435-4c1e-905f-64fc6e473af7" containerName="heat-api" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.836270 4861 
memory_manager.go:354] "RemoveStaleState removing state" podUID="1ed637d9-18f3-4aea-86fe-b4071981fd44" containerName="heat-cfnapi" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.836373 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a51e57bb-6c52-46e5-80f2-10548e50cec2" containerName="heat-cfnapi" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.836915 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ed637d9-18f3-4aea-86fe-b4071981fd44" containerName="heat-cfnapi" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.838078 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p6vzs" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.845385 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p6vzs"] Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.910134 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d577b17-c986-4d40-8471-8fcc3048ecb5-catalog-content\") pod \"community-operators-p6vzs\" (UID: \"6d577b17-c986-4d40-8471-8fcc3048ecb5\") " pod="openshift-marketplace/community-operators-p6vzs" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.910172 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd76t\" (UniqueName: \"kubernetes.io/projected/6d577b17-c986-4d40-8471-8fcc3048ecb5-kube-api-access-cd76t\") pod \"community-operators-p6vzs\" (UID: \"6d577b17-c986-4d40-8471-8fcc3048ecb5\") " pod="openshift-marketplace/community-operators-p6vzs" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.910358 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6d577b17-c986-4d40-8471-8fcc3048ecb5-utilities\") pod \"community-operators-p6vzs\" (UID: \"6d577b17-c986-4d40-8471-8fcc3048ecb5\") " pod="openshift-marketplace/community-operators-p6vzs" Feb 19 14:54:35 crc kubenswrapper[4861]: I0219 14:54:35.910877 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ed637d9-18f3-4aea-86fe-b4071981fd44-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:36 crc kubenswrapper[4861]: I0219 14:54:36.013587 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d577b17-c986-4d40-8471-8fcc3048ecb5-utilities\") pod \"community-operators-p6vzs\" (UID: \"6d577b17-c986-4d40-8471-8fcc3048ecb5\") " pod="openshift-marketplace/community-operators-p6vzs" Feb 19 14:54:36 crc kubenswrapper[4861]: I0219 14:54:36.014099 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d577b17-c986-4d40-8471-8fcc3048ecb5-catalog-content\") pod \"community-operators-p6vzs\" (UID: \"6d577b17-c986-4d40-8471-8fcc3048ecb5\") " pod="openshift-marketplace/community-operators-p6vzs" Feb 19 14:54:36 crc kubenswrapper[4861]: I0219 14:54:36.014216 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd76t\" (UniqueName: \"kubernetes.io/projected/6d577b17-c986-4d40-8471-8fcc3048ecb5-kube-api-access-cd76t\") pod \"community-operators-p6vzs\" (UID: \"6d577b17-c986-4d40-8471-8fcc3048ecb5\") " pod="openshift-marketplace/community-operators-p6vzs" Feb 19 14:54:36 crc kubenswrapper[4861]: I0219 14:54:36.015121 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d577b17-c986-4d40-8471-8fcc3048ecb5-utilities\") pod \"community-operators-p6vzs\" (UID: \"6d577b17-c986-4d40-8471-8fcc3048ecb5\") " 
pod="openshift-marketplace/community-operators-p6vzs" Feb 19 14:54:36 crc kubenswrapper[4861]: I0219 14:54:36.016848 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d577b17-c986-4d40-8471-8fcc3048ecb5-catalog-content\") pod \"community-operators-p6vzs\" (UID: \"6d577b17-c986-4d40-8471-8fcc3048ecb5\") " pod="openshift-marketplace/community-operators-p6vzs" Feb 19 14:54:36 crc kubenswrapper[4861]: I0219 14:54:36.032067 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd76t\" (UniqueName: \"kubernetes.io/projected/6d577b17-c986-4d40-8471-8fcc3048ecb5-kube-api-access-cd76t\") pod \"community-operators-p6vzs\" (UID: \"6d577b17-c986-4d40-8471-8fcc3048ecb5\") " pod="openshift-marketplace/community-operators-p6vzs" Feb 19 14:54:36 crc kubenswrapper[4861]: I0219 14:54:36.200977 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p6vzs" Feb 19 14:54:36 crc kubenswrapper[4861]: I0219 14:54:36.447985 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6bfd5f9d84-gzhps" event={"ID":"1ed637d9-18f3-4aea-86fe-b4071981fd44","Type":"ContainerDied","Data":"fdc5f4d55371bd0f59e8bed5dc9c4c70ddff06735cc13d10c73caa283935355f"} Feb 19 14:54:36 crc kubenswrapper[4861]: I0219 14:54:36.448336 4861 scope.go:117] "RemoveContainer" containerID="7ee5a91af2b6d50456b1b9765a165df1fdbc2d6b88bf9a899d51e06ceece391a" Feb 19 14:54:36 crc kubenswrapper[4861]: I0219 14:54:36.448012 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6bfd5f9d84-gzhps" Feb 19 14:54:36 crc kubenswrapper[4861]: I0219 14:54:36.452672 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-554f48c65c-dvwd7" event={"ID":"c931adc9-ac11-474c-ad7a-136c87e409d8","Type":"ContainerDied","Data":"186a8bdc81cd16187783cce7311642cf10fba72bce191c4b2f9889143281097c"} Feb 19 14:54:36 crc kubenswrapper[4861]: I0219 14:54:36.452757 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-554f48c65c-dvwd7" Feb 19 14:54:36 crc kubenswrapper[4861]: I0219 14:54:36.502831 4861 scope.go:117] "RemoveContainer" containerID="65273289282ee0b7beef789e19fab2be5554acc116eb1a1c6723594c52caf085" Feb 19 14:54:36 crc kubenswrapper[4861]: I0219 14:54:36.504408 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-554f48c65c-dvwd7"] Feb 19 14:54:36 crc kubenswrapper[4861]: I0219 14:54:36.515641 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-554f48c65c-dvwd7"] Feb 19 14:54:36 crc kubenswrapper[4861]: I0219 14:54:36.525373 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6bfd5f9d84-gzhps"] Feb 19 14:54:36 crc kubenswrapper[4861]: I0219 14:54:36.535669 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6bfd5f9d84-gzhps"] Feb 19 14:54:36 crc kubenswrapper[4861]: W0219 14:54:36.665540 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d577b17_c986_4d40_8471_8fcc3048ecb5.slice/crio-a479d0ec16ddae19ce4e8dfd7aa25c6a03ca6fe6432cdb262b7081a048b3f0c3 WatchSource:0}: Error finding container a479d0ec16ddae19ce4e8dfd7aa25c6a03ca6fe6432cdb262b7081a048b3f0c3: Status 404 returned error can't find the container with id a479d0ec16ddae19ce4e8dfd7aa25c6a03ca6fe6432cdb262b7081a048b3f0c3 Feb 19 14:54:36 crc kubenswrapper[4861]: I0219 14:54:36.666360 
4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p6vzs"] Feb 19 14:54:37 crc kubenswrapper[4861]: I0219 14:54:37.249833 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6d645b4cf8-qqgr8" podUID="c5d6a437-b768-4701-a69a-1b99fd4f2626" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.125:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.125:8443: connect: connection refused" Feb 19 14:54:37 crc kubenswrapper[4861]: I0219 14:54:37.250152 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6d645b4cf8-qqgr8" Feb 19 14:54:37 crc kubenswrapper[4861]: I0219 14:54:37.479623 4861 generic.go:334] "Generic (PLEG): container finished" podID="6d577b17-c986-4d40-8471-8fcc3048ecb5" containerID="257d002dfc16997424ba48c38251c357594885439cdf85bba72a5fba57f52ecd" exitCode=0 Feb 19 14:54:37 crc kubenswrapper[4861]: I0219 14:54:37.479705 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6vzs" event={"ID":"6d577b17-c986-4d40-8471-8fcc3048ecb5","Type":"ContainerDied","Data":"257d002dfc16997424ba48c38251c357594885439cdf85bba72a5fba57f52ecd"} Feb 19 14:54:37 crc kubenswrapper[4861]: I0219 14:54:37.480245 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6vzs" event={"ID":"6d577b17-c986-4d40-8471-8fcc3048ecb5","Type":"ContainerStarted","Data":"a479d0ec16ddae19ce4e8dfd7aa25c6a03ca6fe6432cdb262b7081a048b3f0c3"} Feb 19 14:54:37 crc kubenswrapper[4861]: I0219 14:54:37.998399 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ed637d9-18f3-4aea-86fe-b4071981fd44" path="/var/lib/kubelet/pods/1ed637d9-18f3-4aea-86fe-b4071981fd44/volumes" Feb 19 14:54:38 crc kubenswrapper[4861]: I0219 14:54:38.000001 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c931adc9-ac11-474c-ad7a-136c87e409d8" path="/var/lib/kubelet/pods/c931adc9-ac11-474c-ad7a-136c87e409d8/volumes" Feb 19 14:54:38 crc kubenswrapper[4861]: I0219 14:54:38.496004 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6vzs" event={"ID":"6d577b17-c986-4d40-8471-8fcc3048ecb5","Type":"ContainerStarted","Data":"e37e6841be2563c3e555c1b6f9fa807926adf353477056aa75974dce3a38cb6c"} Feb 19 14:54:38 crc kubenswrapper[4861]: I0219 14:54:38.975353 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jch5b" Feb 19 14:54:39 crc kubenswrapper[4861]: I0219 14:54:39.055148 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jch5b" Feb 19 14:54:39 crc kubenswrapper[4861]: I0219 14:54:39.509348 4861 generic.go:334] "Generic (PLEG): container finished" podID="6d577b17-c986-4d40-8471-8fcc3048ecb5" containerID="e37e6841be2563c3e555c1b6f9fa807926adf353477056aa75974dce3a38cb6c" exitCode=0 Feb 19 14:54:39 crc kubenswrapper[4861]: I0219 14:54:39.509396 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6vzs" event={"ID":"6d577b17-c986-4d40-8471-8fcc3048ecb5","Type":"ContainerDied","Data":"e37e6841be2563c3e555c1b6f9fa807926adf353477056aa75974dce3a38cb6c"} Feb 19 14:54:40 crc kubenswrapper[4861]: I0219 14:54:40.552134 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6vzs" event={"ID":"6d577b17-c986-4d40-8471-8fcc3048ecb5","Type":"ContainerStarted","Data":"f91ecc58a58593f3535b3c0bb6198d23d91e88e73904a1903e631deced2b35bc"} Feb 19 14:54:40 crc kubenswrapper[4861]: I0219 14:54:40.571953 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p6vzs" podStartSLOduration=3.143913154 podStartE2EDuration="5.571930155s" 
podCreationTimestamp="2026-02-19 14:54:35 +0000 UTC" firstStartedPulling="2026-02-19 14:54:37.483645225 +0000 UTC m=+6292.144748493" lastFinishedPulling="2026-02-19 14:54:39.911662266 +0000 UTC m=+6294.572765494" observedRunningTime="2026-02-19 14:54:40.568993986 +0000 UTC m=+6295.230097224" watchObservedRunningTime="2026-02-19 14:54:40.571930155 +0000 UTC m=+6295.233033403" Feb 19 14:54:41 crc kubenswrapper[4861]: I0219 14:54:41.409925 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jch5b"] Feb 19 14:54:41 crc kubenswrapper[4861]: I0219 14:54:41.410206 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jch5b" podUID="da61b4f6-3fcc-4b75-8d31-3db35c816789" containerName="registry-server" containerID="cri-o://bcd6306b7e2931f03558731fee4a7f1dc92f8b7d9aeaa43ba1cc0d3a012bd159" gracePeriod=2 Feb 19 14:54:41 crc kubenswrapper[4861]: I0219 14:54:41.570145 4861 generic.go:334] "Generic (PLEG): container finished" podID="da61b4f6-3fcc-4b75-8d31-3db35c816789" containerID="bcd6306b7e2931f03558731fee4a7f1dc92f8b7d9aeaa43ba1cc0d3a012bd159" exitCode=0 Feb 19 14:54:41 crc kubenswrapper[4861]: I0219 14:54:41.570229 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jch5b" event={"ID":"da61b4f6-3fcc-4b75-8d31-3db35c816789","Type":"ContainerDied","Data":"bcd6306b7e2931f03558731fee4a7f1dc92f8b7d9aeaa43ba1cc0d3a012bd159"} Feb 19 14:54:41 crc kubenswrapper[4861]: I0219 14:54:41.982502 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jch5b" Feb 19 14:54:42 crc kubenswrapper[4861]: I0219 14:54:42.070737 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da61b4f6-3fcc-4b75-8d31-3db35c816789-catalog-content\") pod \"da61b4f6-3fcc-4b75-8d31-3db35c816789\" (UID: \"da61b4f6-3fcc-4b75-8d31-3db35c816789\") " Feb 19 14:54:42 crc kubenswrapper[4861]: I0219 14:54:42.070804 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da61b4f6-3fcc-4b75-8d31-3db35c816789-utilities\") pod \"da61b4f6-3fcc-4b75-8d31-3db35c816789\" (UID: \"da61b4f6-3fcc-4b75-8d31-3db35c816789\") " Feb 19 14:54:42 crc kubenswrapper[4861]: I0219 14:54:42.070929 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjj7m\" (UniqueName: \"kubernetes.io/projected/da61b4f6-3fcc-4b75-8d31-3db35c816789-kube-api-access-jjj7m\") pod \"da61b4f6-3fcc-4b75-8d31-3db35c816789\" (UID: \"da61b4f6-3fcc-4b75-8d31-3db35c816789\") " Feb 19 14:54:42 crc kubenswrapper[4861]: I0219 14:54:42.071335 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da61b4f6-3fcc-4b75-8d31-3db35c816789-utilities" (OuterVolumeSpecName: "utilities") pod "da61b4f6-3fcc-4b75-8d31-3db35c816789" (UID: "da61b4f6-3fcc-4b75-8d31-3db35c816789"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:54:42 crc kubenswrapper[4861]: I0219 14:54:42.071943 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da61b4f6-3fcc-4b75-8d31-3db35c816789-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:42 crc kubenswrapper[4861]: I0219 14:54:42.078111 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da61b4f6-3fcc-4b75-8d31-3db35c816789-kube-api-access-jjj7m" (OuterVolumeSpecName: "kube-api-access-jjj7m") pod "da61b4f6-3fcc-4b75-8d31-3db35c816789" (UID: "da61b4f6-3fcc-4b75-8d31-3db35c816789"). InnerVolumeSpecName "kube-api-access-jjj7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:54:42 crc kubenswrapper[4861]: I0219 14:54:42.125234 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da61b4f6-3fcc-4b75-8d31-3db35c816789-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da61b4f6-3fcc-4b75-8d31-3db35c816789" (UID: "da61b4f6-3fcc-4b75-8d31-3db35c816789"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:54:42 crc kubenswrapper[4861]: I0219 14:54:42.174200 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjj7m\" (UniqueName: \"kubernetes.io/projected/da61b4f6-3fcc-4b75-8d31-3db35c816789-kube-api-access-jjj7m\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:42 crc kubenswrapper[4861]: I0219 14:54:42.174234 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da61b4f6-3fcc-4b75-8d31-3db35c816789-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:42 crc kubenswrapper[4861]: I0219 14:54:42.561042 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-59b7698fb8-spssh" Feb 19 14:54:42 crc kubenswrapper[4861]: I0219 14:54:42.584938 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jch5b" event={"ID":"da61b4f6-3fcc-4b75-8d31-3db35c816789","Type":"ContainerDied","Data":"dd252293615c54c4d9469ad30229ccae2df836c0bd98a31050f261877fa0506b"} Feb 19 14:54:42 crc kubenswrapper[4861]: I0219 14:54:42.585016 4861 scope.go:117] "RemoveContainer" containerID="bcd6306b7e2931f03558731fee4a7f1dc92f8b7d9aeaa43ba1cc0d3a012bd159" Feb 19 14:54:42 crc kubenswrapper[4861]: I0219 14:54:42.585204 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jch5b" Feb 19 14:54:42 crc kubenswrapper[4861]: I0219 14:54:42.631200 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jch5b"] Feb 19 14:54:42 crc kubenswrapper[4861]: I0219 14:54:42.637395 4861 scope.go:117] "RemoveContainer" containerID="b583b6ac31669181e3d79d80f87e93f3feb2fc48451d684974c3652b6b7336cc" Feb 19 14:54:42 crc kubenswrapper[4861]: I0219 14:54:42.655386 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-79677cdc5d-w6ptg"] Feb 19 14:54:42 crc kubenswrapper[4861]: I0219 14:54:42.655696 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-79677cdc5d-w6ptg" podUID="d28e5052-f88a-4ac6-bc82-65be89a0d4ce" containerName="heat-engine" containerID="cri-o://363c85e07aaf4b48089bc786b6ea3192bc3836c36c8161ffaf7144de15c7a094" gracePeriod=60 Feb 19 14:54:42 crc kubenswrapper[4861]: I0219 14:54:42.681742 4861 scope.go:117] "RemoveContainer" containerID="8df0f7de4587ed318d4ecbc670086e0d8d1f6a56b9db99fe2abbf232fece99cd" Feb 19 14:54:42 crc kubenswrapper[4861]: I0219 14:54:42.683133 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jch5b"] Feb 19 14:54:43 crc kubenswrapper[4861]: I0219 14:54:43.977395 4861 scope.go:117] "RemoveContainer" containerID="502a4549a2baa2b1572925937a0eb20e1bd1517278b2629c4572ef8e5d2f113d" Feb 19 14:54:43 crc kubenswrapper[4861]: E0219 14:54:43.977894 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:54:43 crc 
kubenswrapper[4861]: I0219 14:54:43.991115 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da61b4f6-3fcc-4b75-8d31-3db35c816789" path="/var/lib/kubelet/pods/da61b4f6-3fcc-4b75-8d31-3db35c816789/volumes" Feb 19 14:54:45 crc kubenswrapper[4861]: E0219 14:54:45.053712 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="363c85e07aaf4b48089bc786b6ea3192bc3836c36c8161ffaf7144de15c7a094" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 19 14:54:45 crc kubenswrapper[4861]: E0219 14:54:45.055658 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="363c85e07aaf4b48089bc786b6ea3192bc3836c36c8161ffaf7144de15c7a094" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 19 14:54:45 crc kubenswrapper[4861]: E0219 14:54:45.060719 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="363c85e07aaf4b48089bc786b6ea3192bc3836c36c8161ffaf7144de15c7a094" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 19 14:54:45 crc kubenswrapper[4861]: E0219 14:54:45.060812 4861 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-79677cdc5d-w6ptg" podUID="d28e5052-f88a-4ac6-bc82-65be89a0d4ce" containerName="heat-engine" Feb 19 14:54:46 crc kubenswrapper[4861]: I0219 14:54:46.202118 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p6vzs" Feb 19 14:54:46 crc 
kubenswrapper[4861]: I0219 14:54:46.202564 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p6vzs" Feb 19 14:54:46 crc kubenswrapper[4861]: I0219 14:54:46.272129 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p6vzs" Feb 19 14:54:46 crc kubenswrapper[4861]: I0219 14:54:46.715257 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p6vzs" Feb 19 14:54:46 crc kubenswrapper[4861]: I0219 14:54:46.784441 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p6vzs"] Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.142956 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d645b4cf8-qqgr8" Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.282205 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnbts\" (UniqueName: \"kubernetes.io/projected/c5d6a437-b768-4701-a69a-1b99fd4f2626-kube-api-access-nnbts\") pod \"c5d6a437-b768-4701-a69a-1b99fd4f2626\" (UID: \"c5d6a437-b768-4701-a69a-1b99fd4f2626\") " Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.282761 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5d6a437-b768-4701-a69a-1b99fd4f2626-horizon-tls-certs\") pod \"c5d6a437-b768-4701-a69a-1b99fd4f2626\" (UID: \"c5d6a437-b768-4701-a69a-1b99fd4f2626\") " Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.283009 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d6a437-b768-4701-a69a-1b99fd4f2626-combined-ca-bundle\") pod \"c5d6a437-b768-4701-a69a-1b99fd4f2626\" (UID: \"c5d6a437-b768-4701-a69a-1b99fd4f2626\") " Feb 19 
14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.283063 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5d6a437-b768-4701-a69a-1b99fd4f2626-logs\") pod \"c5d6a437-b768-4701-a69a-1b99fd4f2626\" (UID: \"c5d6a437-b768-4701-a69a-1b99fd4f2626\") " Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.283118 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c5d6a437-b768-4701-a69a-1b99fd4f2626-scripts\") pod \"c5d6a437-b768-4701-a69a-1b99fd4f2626\" (UID: \"c5d6a437-b768-4701-a69a-1b99fd4f2626\") " Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.283225 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c5d6a437-b768-4701-a69a-1b99fd4f2626-config-data\") pod \"c5d6a437-b768-4701-a69a-1b99fd4f2626\" (UID: \"c5d6a437-b768-4701-a69a-1b99fd4f2626\") " Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.283299 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c5d6a437-b768-4701-a69a-1b99fd4f2626-horizon-secret-key\") pod \"c5d6a437-b768-4701-a69a-1b99fd4f2626\" (UID: \"c5d6a437-b768-4701-a69a-1b99fd4f2626\") " Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.283606 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5d6a437-b768-4701-a69a-1b99fd4f2626-logs" (OuterVolumeSpecName: "logs") pod "c5d6a437-b768-4701-a69a-1b99fd4f2626" (UID: "c5d6a437-b768-4701-a69a-1b99fd4f2626"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.284146 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5d6a437-b768-4701-a69a-1b99fd4f2626-logs\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.288032 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5d6a437-b768-4701-a69a-1b99fd4f2626-kube-api-access-nnbts" (OuterVolumeSpecName: "kube-api-access-nnbts") pod "c5d6a437-b768-4701-a69a-1b99fd4f2626" (UID: "c5d6a437-b768-4701-a69a-1b99fd4f2626"). InnerVolumeSpecName "kube-api-access-nnbts". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.291120 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5d6a437-b768-4701-a69a-1b99fd4f2626-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c5d6a437-b768-4701-a69a-1b99fd4f2626" (UID: "c5d6a437-b768-4701-a69a-1b99fd4f2626"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.330934 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5d6a437-b768-4701-a69a-1b99fd4f2626-scripts" (OuterVolumeSpecName: "scripts") pod "c5d6a437-b768-4701-a69a-1b99fd4f2626" (UID: "c5d6a437-b768-4701-a69a-1b99fd4f2626"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.331010 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5d6a437-b768-4701-a69a-1b99fd4f2626-config-data" (OuterVolumeSpecName: "config-data") pod "c5d6a437-b768-4701-a69a-1b99fd4f2626" (UID: "c5d6a437-b768-4701-a69a-1b99fd4f2626"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.339991 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5d6a437-b768-4701-a69a-1b99fd4f2626-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5d6a437-b768-4701-a69a-1b99fd4f2626" (UID: "c5d6a437-b768-4701-a69a-1b99fd4f2626"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.381989 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5d6a437-b768-4701-a69a-1b99fd4f2626-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "c5d6a437-b768-4701-a69a-1b99fd4f2626" (UID: "c5d6a437-b768-4701-a69a-1b99fd4f2626"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.385210 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnbts\" (UniqueName: \"kubernetes.io/projected/c5d6a437-b768-4701-a69a-1b99fd4f2626-kube-api-access-nnbts\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.385259 4861 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5d6a437-b768-4701-a69a-1b99fd4f2626-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.385272 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d6a437-b768-4701-a69a-1b99fd4f2626-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.385287 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/c5d6a437-b768-4701-a69a-1b99fd4f2626-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.385299 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c5d6a437-b768-4701-a69a-1b99fd4f2626-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.385313 4861 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c5d6a437-b768-4701-a69a-1b99fd4f2626-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.639671 4861 generic.go:334] "Generic (PLEG): container finished" podID="c5d6a437-b768-4701-a69a-1b99fd4f2626" containerID="b31cfd2421fc72d50e0a9647eaa0599eb340c2dd7e67c6eae7b00f130d46564a" exitCode=137 Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.639777 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d645b4cf8-qqgr8" event={"ID":"c5d6a437-b768-4701-a69a-1b99fd4f2626","Type":"ContainerDied","Data":"b31cfd2421fc72d50e0a9647eaa0599eb340c2dd7e67c6eae7b00f130d46564a"} Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.639795 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6d645b4cf8-qqgr8" Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.639863 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d645b4cf8-qqgr8" event={"ID":"c5d6a437-b768-4701-a69a-1b99fd4f2626","Type":"ContainerDied","Data":"810182c6d62c954ffe956920b4b5faff4f4469845dce68a6b231a1a3fc3d63db"} Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.639924 4861 scope.go:117] "RemoveContainer" containerID="7ca64dd523d4823b18b23e28010117f55bb377e32b186df029f27efdb4175ced" Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.701081 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d645b4cf8-qqgr8"] Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.712245 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6d645b4cf8-qqgr8"] Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.844911 4861 scope.go:117] "RemoveContainer" containerID="b31cfd2421fc72d50e0a9647eaa0599eb340c2dd7e67c6eae7b00f130d46564a" Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.872742 4861 scope.go:117] "RemoveContainer" containerID="7ca64dd523d4823b18b23e28010117f55bb377e32b186df029f27efdb4175ced" Feb 19 14:54:47 crc kubenswrapper[4861]: E0219 14:54:47.873246 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ca64dd523d4823b18b23e28010117f55bb377e32b186df029f27efdb4175ced\": container with ID starting with 7ca64dd523d4823b18b23e28010117f55bb377e32b186df029f27efdb4175ced not found: ID does not exist" containerID="7ca64dd523d4823b18b23e28010117f55bb377e32b186df029f27efdb4175ced" Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.873290 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ca64dd523d4823b18b23e28010117f55bb377e32b186df029f27efdb4175ced"} err="failed to get container status 
\"7ca64dd523d4823b18b23e28010117f55bb377e32b186df029f27efdb4175ced\": rpc error: code = NotFound desc = could not find container \"7ca64dd523d4823b18b23e28010117f55bb377e32b186df029f27efdb4175ced\": container with ID starting with 7ca64dd523d4823b18b23e28010117f55bb377e32b186df029f27efdb4175ced not found: ID does not exist" Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.873322 4861 scope.go:117] "RemoveContainer" containerID="b31cfd2421fc72d50e0a9647eaa0599eb340c2dd7e67c6eae7b00f130d46564a" Feb 19 14:54:47 crc kubenswrapper[4861]: E0219 14:54:47.873904 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b31cfd2421fc72d50e0a9647eaa0599eb340c2dd7e67c6eae7b00f130d46564a\": container with ID starting with b31cfd2421fc72d50e0a9647eaa0599eb340c2dd7e67c6eae7b00f130d46564a not found: ID does not exist" containerID="b31cfd2421fc72d50e0a9647eaa0599eb340c2dd7e67c6eae7b00f130d46564a" Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.873938 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b31cfd2421fc72d50e0a9647eaa0599eb340c2dd7e67c6eae7b00f130d46564a"} err="failed to get container status \"b31cfd2421fc72d50e0a9647eaa0599eb340c2dd7e67c6eae7b00f130d46564a\": rpc error: code = NotFound desc = could not find container \"b31cfd2421fc72d50e0a9647eaa0599eb340c2dd7e67c6eae7b00f130d46564a\": container with ID starting with b31cfd2421fc72d50e0a9647eaa0599eb340c2dd7e67c6eae7b00f130d46564a not found: ID does not exist" Feb 19 14:54:47 crc kubenswrapper[4861]: I0219 14:54:47.990697 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5d6a437-b768-4701-a69a-1b99fd4f2626" path="/var/lib/kubelet/pods/c5d6a437-b768-4701-a69a-1b99fd4f2626/volumes" Feb 19 14:54:48 crc kubenswrapper[4861]: I0219 14:54:48.669479 4861 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-p6vzs" podUID="6d577b17-c986-4d40-8471-8fcc3048ecb5" containerName="registry-server" containerID="cri-o://f91ecc58a58593f3535b3c0bb6198d23d91e88e73904a1903e631deced2b35bc" gracePeriod=2 Feb 19 14:54:49 crc kubenswrapper[4861]: I0219 14:54:49.224170 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p6vzs" Feb 19 14:54:49 crc kubenswrapper[4861]: I0219 14:54:49.334741 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cd76t\" (UniqueName: \"kubernetes.io/projected/6d577b17-c986-4d40-8471-8fcc3048ecb5-kube-api-access-cd76t\") pod \"6d577b17-c986-4d40-8471-8fcc3048ecb5\" (UID: \"6d577b17-c986-4d40-8471-8fcc3048ecb5\") " Feb 19 14:54:49 crc kubenswrapper[4861]: I0219 14:54:49.334876 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d577b17-c986-4d40-8471-8fcc3048ecb5-utilities\") pod \"6d577b17-c986-4d40-8471-8fcc3048ecb5\" (UID: \"6d577b17-c986-4d40-8471-8fcc3048ecb5\") " Feb 19 14:54:49 crc kubenswrapper[4861]: I0219 14:54:49.334950 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d577b17-c986-4d40-8471-8fcc3048ecb5-catalog-content\") pod \"6d577b17-c986-4d40-8471-8fcc3048ecb5\" (UID: \"6d577b17-c986-4d40-8471-8fcc3048ecb5\") " Feb 19 14:54:49 crc kubenswrapper[4861]: I0219 14:54:49.335766 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d577b17-c986-4d40-8471-8fcc3048ecb5-utilities" (OuterVolumeSpecName: "utilities") pod "6d577b17-c986-4d40-8471-8fcc3048ecb5" (UID: "6d577b17-c986-4d40-8471-8fcc3048ecb5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:54:49 crc kubenswrapper[4861]: I0219 14:54:49.353676 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d577b17-c986-4d40-8471-8fcc3048ecb5-kube-api-access-cd76t" (OuterVolumeSpecName: "kube-api-access-cd76t") pod "6d577b17-c986-4d40-8471-8fcc3048ecb5" (UID: "6d577b17-c986-4d40-8471-8fcc3048ecb5"). InnerVolumeSpecName "kube-api-access-cd76t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:54:49 crc kubenswrapper[4861]: I0219 14:54:49.421149 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d577b17-c986-4d40-8471-8fcc3048ecb5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d577b17-c986-4d40-8471-8fcc3048ecb5" (UID: "6d577b17-c986-4d40-8471-8fcc3048ecb5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:54:49 crc kubenswrapper[4861]: I0219 14:54:49.436743 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d577b17-c986-4d40-8471-8fcc3048ecb5-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:49 crc kubenswrapper[4861]: I0219 14:54:49.436774 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d577b17-c986-4d40-8471-8fcc3048ecb5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:49 crc kubenswrapper[4861]: I0219 14:54:49.436784 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cd76t\" (UniqueName: \"kubernetes.io/projected/6d577b17-c986-4d40-8471-8fcc3048ecb5-kube-api-access-cd76t\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:49 crc kubenswrapper[4861]: I0219 14:54:49.680061 4861 generic.go:334] "Generic (PLEG): container finished" podID="6d577b17-c986-4d40-8471-8fcc3048ecb5" 
containerID="f91ecc58a58593f3535b3c0bb6198d23d91e88e73904a1903e631deced2b35bc" exitCode=0 Feb 19 14:54:49 crc kubenswrapper[4861]: I0219 14:54:49.680152 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6vzs" event={"ID":"6d577b17-c986-4d40-8471-8fcc3048ecb5","Type":"ContainerDied","Data":"f91ecc58a58593f3535b3c0bb6198d23d91e88e73904a1903e631deced2b35bc"} Feb 19 14:54:49 crc kubenswrapper[4861]: I0219 14:54:49.680180 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6vzs" event={"ID":"6d577b17-c986-4d40-8471-8fcc3048ecb5","Type":"ContainerDied","Data":"a479d0ec16ddae19ce4e8dfd7aa25c6a03ca6fe6432cdb262b7081a048b3f0c3"} Feb 19 14:54:49 crc kubenswrapper[4861]: I0219 14:54:49.680177 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p6vzs" Feb 19 14:54:49 crc kubenswrapper[4861]: I0219 14:54:49.680286 4861 scope.go:117] "RemoveContainer" containerID="f91ecc58a58593f3535b3c0bb6198d23d91e88e73904a1903e631deced2b35bc" Feb 19 14:54:49 crc kubenswrapper[4861]: I0219 14:54:49.721629 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p6vzs"] Feb 19 14:54:49 crc kubenswrapper[4861]: I0219 14:54:49.726933 4861 scope.go:117] "RemoveContainer" containerID="e37e6841be2563c3e555c1b6f9fa807926adf353477056aa75974dce3a38cb6c" Feb 19 14:54:49 crc kubenswrapper[4861]: I0219 14:54:49.730336 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p6vzs"] Feb 19 14:54:49 crc kubenswrapper[4861]: I0219 14:54:49.766900 4861 scope.go:117] "RemoveContainer" containerID="257d002dfc16997424ba48c38251c357594885439cdf85bba72a5fba57f52ecd" Feb 19 14:54:49 crc kubenswrapper[4861]: I0219 14:54:49.794945 4861 scope.go:117] "RemoveContainer" containerID="f91ecc58a58593f3535b3c0bb6198d23d91e88e73904a1903e631deced2b35bc" Feb 19 
14:54:49 crc kubenswrapper[4861]: E0219 14:54:49.795325 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f91ecc58a58593f3535b3c0bb6198d23d91e88e73904a1903e631deced2b35bc\": container with ID starting with f91ecc58a58593f3535b3c0bb6198d23d91e88e73904a1903e631deced2b35bc not found: ID does not exist" containerID="f91ecc58a58593f3535b3c0bb6198d23d91e88e73904a1903e631deced2b35bc" Feb 19 14:54:49 crc kubenswrapper[4861]: I0219 14:54:49.795378 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f91ecc58a58593f3535b3c0bb6198d23d91e88e73904a1903e631deced2b35bc"} err="failed to get container status \"f91ecc58a58593f3535b3c0bb6198d23d91e88e73904a1903e631deced2b35bc\": rpc error: code = NotFound desc = could not find container \"f91ecc58a58593f3535b3c0bb6198d23d91e88e73904a1903e631deced2b35bc\": container with ID starting with f91ecc58a58593f3535b3c0bb6198d23d91e88e73904a1903e631deced2b35bc not found: ID does not exist" Feb 19 14:54:49 crc kubenswrapper[4861]: I0219 14:54:49.795408 4861 scope.go:117] "RemoveContainer" containerID="e37e6841be2563c3e555c1b6f9fa807926adf353477056aa75974dce3a38cb6c" Feb 19 14:54:49 crc kubenswrapper[4861]: E0219 14:54:49.795670 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e37e6841be2563c3e555c1b6f9fa807926adf353477056aa75974dce3a38cb6c\": container with ID starting with e37e6841be2563c3e555c1b6f9fa807926adf353477056aa75974dce3a38cb6c not found: ID does not exist" containerID="e37e6841be2563c3e555c1b6f9fa807926adf353477056aa75974dce3a38cb6c" Feb 19 14:54:49 crc kubenswrapper[4861]: I0219 14:54:49.795691 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e37e6841be2563c3e555c1b6f9fa807926adf353477056aa75974dce3a38cb6c"} err="failed to get container status 
\"e37e6841be2563c3e555c1b6f9fa807926adf353477056aa75974dce3a38cb6c\": rpc error: code = NotFound desc = could not find container \"e37e6841be2563c3e555c1b6f9fa807926adf353477056aa75974dce3a38cb6c\": container with ID starting with e37e6841be2563c3e555c1b6f9fa807926adf353477056aa75974dce3a38cb6c not found: ID does not exist" Feb 19 14:54:49 crc kubenswrapper[4861]: I0219 14:54:49.795709 4861 scope.go:117] "RemoveContainer" containerID="257d002dfc16997424ba48c38251c357594885439cdf85bba72a5fba57f52ecd" Feb 19 14:54:49 crc kubenswrapper[4861]: E0219 14:54:49.795951 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"257d002dfc16997424ba48c38251c357594885439cdf85bba72a5fba57f52ecd\": container with ID starting with 257d002dfc16997424ba48c38251c357594885439cdf85bba72a5fba57f52ecd not found: ID does not exist" containerID="257d002dfc16997424ba48c38251c357594885439cdf85bba72a5fba57f52ecd" Feb 19 14:54:49 crc kubenswrapper[4861]: I0219 14:54:49.795996 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"257d002dfc16997424ba48c38251c357594885439cdf85bba72a5fba57f52ecd"} err="failed to get container status \"257d002dfc16997424ba48c38251c357594885439cdf85bba72a5fba57f52ecd\": rpc error: code = NotFound desc = could not find container \"257d002dfc16997424ba48c38251c357594885439cdf85bba72a5fba57f52ecd\": container with ID starting with 257d002dfc16997424ba48c38251c357594885439cdf85bba72a5fba57f52ecd not found: ID does not exist" Feb 19 14:54:49 crc kubenswrapper[4861]: I0219 14:54:49.997158 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d577b17-c986-4d40-8471-8fcc3048ecb5" path="/var/lib/kubelet/pods/6d577b17-c986-4d40-8471-8fcc3048ecb5/volumes" Feb 19 14:54:54 crc kubenswrapper[4861]: I0219 14:54:54.067142 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-23f0-account-create-update-46kcp"] Feb 19 
14:54:54 crc kubenswrapper[4861]: I0219 14:54:54.084902 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-23f0-account-create-update-46kcp"] Feb 19 14:54:54 crc kubenswrapper[4861]: I0219 14:54:54.616983 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-79677cdc5d-w6ptg" Feb 19 14:54:54 crc kubenswrapper[4861]: I0219 14:54:54.729322 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d28e5052-f88a-4ac6-bc82-65be89a0d4ce-config-data\") pod \"d28e5052-f88a-4ac6-bc82-65be89a0d4ce\" (UID: \"d28e5052-f88a-4ac6-bc82-65be89a0d4ce\") " Feb 19 14:54:54 crc kubenswrapper[4861]: I0219 14:54:54.729560 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d28e5052-f88a-4ac6-bc82-65be89a0d4ce-combined-ca-bundle\") pod \"d28e5052-f88a-4ac6-bc82-65be89a0d4ce\" (UID: \"d28e5052-f88a-4ac6-bc82-65be89a0d4ce\") " Feb 19 14:54:54 crc kubenswrapper[4861]: I0219 14:54:54.729658 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgz5s\" (UniqueName: \"kubernetes.io/projected/d28e5052-f88a-4ac6-bc82-65be89a0d4ce-kube-api-access-dgz5s\") pod \"d28e5052-f88a-4ac6-bc82-65be89a0d4ce\" (UID: \"d28e5052-f88a-4ac6-bc82-65be89a0d4ce\") " Feb 19 14:54:54 crc kubenswrapper[4861]: I0219 14:54:54.729984 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d28e5052-f88a-4ac6-bc82-65be89a0d4ce-config-data-custom\") pod \"d28e5052-f88a-4ac6-bc82-65be89a0d4ce\" (UID: \"d28e5052-f88a-4ac6-bc82-65be89a0d4ce\") " Feb 19 14:54:54 crc kubenswrapper[4861]: I0219 14:54:54.739009 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/d28e5052-f88a-4ac6-bc82-65be89a0d4ce-kube-api-access-dgz5s" (OuterVolumeSpecName: "kube-api-access-dgz5s") pod "d28e5052-f88a-4ac6-bc82-65be89a0d4ce" (UID: "d28e5052-f88a-4ac6-bc82-65be89a0d4ce"). InnerVolumeSpecName "kube-api-access-dgz5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:54:54 crc kubenswrapper[4861]: I0219 14:54:54.739744 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d28e5052-f88a-4ac6-bc82-65be89a0d4ce-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d28e5052-f88a-4ac6-bc82-65be89a0d4ce" (UID: "d28e5052-f88a-4ac6-bc82-65be89a0d4ce"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:54:54 crc kubenswrapper[4861]: I0219 14:54:54.747701 4861 generic.go:334] "Generic (PLEG): container finished" podID="d28e5052-f88a-4ac6-bc82-65be89a0d4ce" containerID="363c85e07aaf4b48089bc786b6ea3192bc3836c36c8161ffaf7144de15c7a094" exitCode=0 Feb 19 14:54:54 crc kubenswrapper[4861]: I0219 14:54:54.747837 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-79677cdc5d-w6ptg" event={"ID":"d28e5052-f88a-4ac6-bc82-65be89a0d4ce","Type":"ContainerDied","Data":"363c85e07aaf4b48089bc786b6ea3192bc3836c36c8161ffaf7144de15c7a094"} Feb 19 14:54:54 crc kubenswrapper[4861]: I0219 14:54:54.747883 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-79677cdc5d-w6ptg" event={"ID":"d28e5052-f88a-4ac6-bc82-65be89a0d4ce","Type":"ContainerDied","Data":"448c7266f694489f494b72768e58cb9bd53b36ce0172dc7b6b92dbc08ab35323"} Feb 19 14:54:54 crc kubenswrapper[4861]: I0219 14:54:54.747955 4861 scope.go:117] "RemoveContainer" containerID="363c85e07aaf4b48089bc786b6ea3192bc3836c36c8161ffaf7144de15c7a094" Feb 19 14:54:54 crc kubenswrapper[4861]: I0219 14:54:54.748306 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-79677cdc5d-w6ptg" Feb 19 14:54:54 crc kubenswrapper[4861]: I0219 14:54:54.793339 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d28e5052-f88a-4ac6-bc82-65be89a0d4ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d28e5052-f88a-4ac6-bc82-65be89a0d4ce" (UID: "d28e5052-f88a-4ac6-bc82-65be89a0d4ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:54:54 crc kubenswrapper[4861]: I0219 14:54:54.828063 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d28e5052-f88a-4ac6-bc82-65be89a0d4ce-config-data" (OuterVolumeSpecName: "config-data") pod "d28e5052-f88a-4ac6-bc82-65be89a0d4ce" (UID: "d28e5052-f88a-4ac6-bc82-65be89a0d4ce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:54:54 crc kubenswrapper[4861]: I0219 14:54:54.833352 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d28e5052-f88a-4ac6-bc82-65be89a0d4ce-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:54 crc kubenswrapper[4861]: I0219 14:54:54.833403 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d28e5052-f88a-4ac6-bc82-65be89a0d4ce-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:54 crc kubenswrapper[4861]: I0219 14:54:54.833448 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d28e5052-f88a-4ac6-bc82-65be89a0d4ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:54:54 crc kubenswrapper[4861]: I0219 14:54:54.833468 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgz5s\" (UniqueName: \"kubernetes.io/projected/d28e5052-f88a-4ac6-bc82-65be89a0d4ce-kube-api-access-dgz5s\") 
on node \"crc\" DevicePath \"\"" Feb 19 14:54:54 crc kubenswrapper[4861]: I0219 14:54:54.880292 4861 scope.go:117] "RemoveContainer" containerID="363c85e07aaf4b48089bc786b6ea3192bc3836c36c8161ffaf7144de15c7a094" Feb 19 14:54:54 crc kubenswrapper[4861]: E0219 14:54:54.880889 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"363c85e07aaf4b48089bc786b6ea3192bc3836c36c8161ffaf7144de15c7a094\": container with ID starting with 363c85e07aaf4b48089bc786b6ea3192bc3836c36c8161ffaf7144de15c7a094 not found: ID does not exist" containerID="363c85e07aaf4b48089bc786b6ea3192bc3836c36c8161ffaf7144de15c7a094" Feb 19 14:54:54 crc kubenswrapper[4861]: I0219 14:54:54.880947 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"363c85e07aaf4b48089bc786b6ea3192bc3836c36c8161ffaf7144de15c7a094"} err="failed to get container status \"363c85e07aaf4b48089bc786b6ea3192bc3836c36c8161ffaf7144de15c7a094\": rpc error: code = NotFound desc = could not find container \"363c85e07aaf4b48089bc786b6ea3192bc3836c36c8161ffaf7144de15c7a094\": container with ID starting with 363c85e07aaf4b48089bc786b6ea3192bc3836c36c8161ffaf7144de15c7a094 not found: ID does not exist" Feb 19 14:54:54 crc kubenswrapper[4861]: I0219 14:54:54.978795 4861 scope.go:117] "RemoveContainer" containerID="502a4549a2baa2b1572925937a0eb20e1bd1517278b2629c4572ef8e5d2f113d" Feb 19 14:54:54 crc kubenswrapper[4861]: E0219 14:54:54.979576 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 14:54:55 crc kubenswrapper[4861]: I0219 
14:54:55.048686 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-242fk"] Feb 19 14:54:55 crc kubenswrapper[4861]: I0219 14:54:55.064271 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-242fk"] Feb 19 14:54:55 crc kubenswrapper[4861]: I0219 14:54:55.112310 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-79677cdc5d-w6ptg"] Feb 19 14:54:55 crc kubenswrapper[4861]: I0219 14:54:55.124198 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-79677cdc5d-w6ptg"] Feb 19 14:54:55 crc kubenswrapper[4861]: I0219 14:54:55.993752 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5727b8cb-677d-413a-84f4-370e89e58665" path="/var/lib/kubelet/pods/5727b8cb-677d-413a-84f4-370e89e58665/volumes" Feb 19 14:54:55 crc kubenswrapper[4861]: I0219 14:54:55.997060 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="756d209a-0de4-4605-a66d-d772d75bcee8" path="/var/lib/kubelet/pods/756d209a-0de4-4605-a66d-d772d75bcee8/volumes" Feb 19 14:54:55 crc kubenswrapper[4861]: I0219 14:54:55.998842 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d28e5052-f88a-4ac6-bc82-65be89a0d4ce" path="/var/lib/kubelet/pods/d28e5052-f88a-4ac6-bc82-65be89a0d4ce/volumes" Feb 19 14:55:02 crc kubenswrapper[4861]: I0219 14:55:02.043693 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-vckzg"] Feb 19 14:55:02 crc kubenswrapper[4861]: I0219 14:55:02.063677 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-vckzg"] Feb 19 14:55:03 crc kubenswrapper[4861]: I0219 14:55:03.992762 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2117b6a7-23dd-4679-8860-ff0545229385" path="/var/lib/kubelet/pods/2117b6a7-23dd-4679-8860-ff0545229385/volumes" Feb 19 14:55:07 crc kubenswrapper[4861]: I0219 14:55:07.823777 4861 scope.go:117] 
"RemoveContainer" containerID="7420bd8a4b2b558c95f92511194f5301c6efc3ee92ad89c472eaccd9d4c6cf96" Feb 19 14:55:07 crc kubenswrapper[4861]: I0219 14:55:07.882046 4861 scope.go:117] "RemoveContainer" containerID="9c6bdca29d06f28e535a56722429724b038d7b7a0110e36d8c533ed82314345e" Feb 19 14:55:07 crc kubenswrapper[4861]: I0219 14:55:07.985898 4861 scope.go:117] "RemoveContainer" containerID="312efb867dfa595de92c5bb6ec280afe469def8361e0ce5a48029d7e5a432278" Feb 19 14:55:09 crc kubenswrapper[4861]: I0219 14:55:09.978825 4861 scope.go:117] "RemoveContainer" containerID="502a4549a2baa2b1572925937a0eb20e1bd1517278b2629c4572ef8e5d2f113d" Feb 19 14:55:10 crc kubenswrapper[4861]: I0219 14:55:10.949575 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"7d003f985eed6ad9b7bc08457548854ca76bfd5dbc339051aa16497b1baa9fde"} Feb 19 14:55:15 crc kubenswrapper[4861]: I0219 14:55:15.026644 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x"] Feb 19 14:55:15 crc kubenswrapper[4861]: E0219 14:55:15.027580 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da61b4f6-3fcc-4b75-8d31-3db35c816789" containerName="extract-content" Feb 19 14:55:15 crc kubenswrapper[4861]: I0219 14:55:15.027593 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="da61b4f6-3fcc-4b75-8d31-3db35c816789" containerName="extract-content" Feb 19 14:55:15 crc kubenswrapper[4861]: E0219 14:55:15.027607 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d577b17-c986-4d40-8471-8fcc3048ecb5" containerName="extract-content" Feb 19 14:55:15 crc kubenswrapper[4861]: I0219 14:55:15.027614 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d577b17-c986-4d40-8471-8fcc3048ecb5" containerName="extract-content" Feb 19 14:55:15 crc 
kubenswrapper[4861]: E0219 14:55:15.027629 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da61b4f6-3fcc-4b75-8d31-3db35c816789" containerName="extract-utilities" Feb 19 14:55:15 crc kubenswrapper[4861]: I0219 14:55:15.027636 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="da61b4f6-3fcc-4b75-8d31-3db35c816789" containerName="extract-utilities" Feb 19 14:55:15 crc kubenswrapper[4861]: E0219 14:55:15.027648 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d577b17-c986-4d40-8471-8fcc3048ecb5" containerName="registry-server" Feb 19 14:55:15 crc kubenswrapper[4861]: I0219 14:55:15.027653 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d577b17-c986-4d40-8471-8fcc3048ecb5" containerName="registry-server" Feb 19 14:55:15 crc kubenswrapper[4861]: E0219 14:55:15.027665 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5d6a437-b768-4701-a69a-1b99fd4f2626" containerName="horizon" Feb 19 14:55:15 crc kubenswrapper[4861]: I0219 14:55:15.027670 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5d6a437-b768-4701-a69a-1b99fd4f2626" containerName="horizon" Feb 19 14:55:15 crc kubenswrapper[4861]: E0219 14:55:15.027680 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da61b4f6-3fcc-4b75-8d31-3db35c816789" containerName="registry-server" Feb 19 14:55:15 crc kubenswrapper[4861]: I0219 14:55:15.027689 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="da61b4f6-3fcc-4b75-8d31-3db35c816789" containerName="registry-server" Feb 19 14:55:15 crc kubenswrapper[4861]: E0219 14:55:15.027703 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d577b17-c986-4d40-8471-8fcc3048ecb5" containerName="extract-utilities" Feb 19 14:55:15 crc kubenswrapper[4861]: I0219 14:55:15.027709 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d577b17-c986-4d40-8471-8fcc3048ecb5" containerName="extract-utilities" Feb 19 14:55:15 crc kubenswrapper[4861]: 
E0219 14:55:15.027719 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5d6a437-b768-4701-a69a-1b99fd4f2626" containerName="horizon-log" Feb 19 14:55:15 crc kubenswrapper[4861]: I0219 14:55:15.027725 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5d6a437-b768-4701-a69a-1b99fd4f2626" containerName="horizon-log" Feb 19 14:55:15 crc kubenswrapper[4861]: E0219 14:55:15.027743 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d28e5052-f88a-4ac6-bc82-65be89a0d4ce" containerName="heat-engine" Feb 19 14:55:15 crc kubenswrapper[4861]: I0219 14:55:15.027748 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d28e5052-f88a-4ac6-bc82-65be89a0d4ce" containerName="heat-engine" Feb 19 14:55:15 crc kubenswrapper[4861]: I0219 14:55:15.027926 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="da61b4f6-3fcc-4b75-8d31-3db35c816789" containerName="registry-server" Feb 19 14:55:15 crc kubenswrapper[4861]: I0219 14:55:15.027940 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5d6a437-b768-4701-a69a-1b99fd4f2626" containerName="horizon" Feb 19 14:55:15 crc kubenswrapper[4861]: I0219 14:55:15.027948 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d577b17-c986-4d40-8471-8fcc3048ecb5" containerName="registry-server" Feb 19 14:55:15 crc kubenswrapper[4861]: I0219 14:55:15.027973 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="d28e5052-f88a-4ac6-bc82-65be89a0d4ce" containerName="heat-engine" Feb 19 14:55:15 crc kubenswrapper[4861]: I0219 14:55:15.027983 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5d6a437-b768-4701-a69a-1b99fd4f2626" containerName="horizon-log" Feb 19 14:55:15 crc kubenswrapper[4861]: I0219 14:55:15.029330 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x" Feb 19 14:55:15 crc kubenswrapper[4861]: I0219 14:55:15.031069 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 14:55:15 crc kubenswrapper[4861]: I0219 14:55:15.040670 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x"] Feb 19 14:55:15 crc kubenswrapper[4861]: I0219 14:55:15.119898 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d57799aa-811d-48a0-b770-933ac731596d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x\" (UID: \"d57799aa-811d-48a0-b770-933ac731596d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x" Feb 19 14:55:15 crc kubenswrapper[4861]: I0219 14:55:15.120170 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsmhw\" (UniqueName: \"kubernetes.io/projected/d57799aa-811d-48a0-b770-933ac731596d-kube-api-access-wsmhw\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x\" (UID: \"d57799aa-811d-48a0-b770-933ac731596d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x" Feb 19 14:55:15 crc kubenswrapper[4861]: I0219 14:55:15.120527 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d57799aa-811d-48a0-b770-933ac731596d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x\" (UID: \"d57799aa-811d-48a0-b770-933ac731596d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x" Feb 19 14:55:15 crc kubenswrapper[4861]: 
I0219 14:55:15.223220 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsmhw\" (UniqueName: \"kubernetes.io/projected/d57799aa-811d-48a0-b770-933ac731596d-kube-api-access-wsmhw\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x\" (UID: \"d57799aa-811d-48a0-b770-933ac731596d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x" Feb 19 14:55:15 crc kubenswrapper[4861]: I0219 14:55:15.223368 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d57799aa-811d-48a0-b770-933ac731596d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x\" (UID: \"d57799aa-811d-48a0-b770-933ac731596d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x" Feb 19 14:55:15 crc kubenswrapper[4861]: I0219 14:55:15.223469 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d57799aa-811d-48a0-b770-933ac731596d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x\" (UID: \"d57799aa-811d-48a0-b770-933ac731596d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x" Feb 19 14:55:15 crc kubenswrapper[4861]: I0219 14:55:15.223801 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d57799aa-811d-48a0-b770-933ac731596d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x\" (UID: \"d57799aa-811d-48a0-b770-933ac731596d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x" Feb 19 14:55:15 crc kubenswrapper[4861]: I0219 14:55:15.223856 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/d57799aa-811d-48a0-b770-933ac731596d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x\" (UID: \"d57799aa-811d-48a0-b770-933ac731596d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x" Feb 19 14:55:15 crc kubenswrapper[4861]: I0219 14:55:15.247273 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsmhw\" (UniqueName: \"kubernetes.io/projected/d57799aa-811d-48a0-b770-933ac731596d-kube-api-access-wsmhw\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x\" (UID: \"d57799aa-811d-48a0-b770-933ac731596d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x" Feb 19 14:55:15 crc kubenswrapper[4861]: I0219 14:55:15.362839 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x" Feb 19 14:55:15 crc kubenswrapper[4861]: I0219 14:55:15.907301 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x"] Feb 19 14:55:16 crc kubenswrapper[4861]: I0219 14:55:16.037341 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x" event={"ID":"d57799aa-811d-48a0-b770-933ac731596d","Type":"ContainerStarted","Data":"280f7c70e96a7fa7ba434b9c2fe61b780af680fc0a9f3e30fc163760414ee0fc"} Feb 19 14:55:17 crc kubenswrapper[4861]: I0219 14:55:17.056778 4861 generic.go:334] "Generic (PLEG): container finished" podID="d57799aa-811d-48a0-b770-933ac731596d" containerID="16c47740b2412ff189a61650c05a687070a705087b3ce9b6e0284acce7c4257d" exitCode=0 Feb 19 14:55:17 crc kubenswrapper[4861]: I0219 14:55:17.056901 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x" event={"ID":"d57799aa-811d-48a0-b770-933ac731596d","Type":"ContainerDied","Data":"16c47740b2412ff189a61650c05a687070a705087b3ce9b6e0284acce7c4257d"} Feb 19 14:55:17 crc kubenswrapper[4861]: I0219 14:55:17.353495 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t9xnc"] Feb 19 14:55:17 crc kubenswrapper[4861]: I0219 14:55:17.357211 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t9xnc" Feb 19 14:55:17 crc kubenswrapper[4861]: I0219 14:55:17.401007 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t9xnc"] Feb 19 14:55:17 crc kubenswrapper[4861]: I0219 14:55:17.476808 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/338b7ee2-b889-4a37-bac9-2b2f7f2ae13e-catalog-content\") pod \"redhat-operators-t9xnc\" (UID: \"338b7ee2-b889-4a37-bac9-2b2f7f2ae13e\") " pod="openshift-marketplace/redhat-operators-t9xnc" Feb 19 14:55:17 crc kubenswrapper[4861]: I0219 14:55:17.477014 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdxnv\" (UniqueName: \"kubernetes.io/projected/338b7ee2-b889-4a37-bac9-2b2f7f2ae13e-kube-api-access-fdxnv\") pod \"redhat-operators-t9xnc\" (UID: \"338b7ee2-b889-4a37-bac9-2b2f7f2ae13e\") " pod="openshift-marketplace/redhat-operators-t9xnc" Feb 19 14:55:17 crc kubenswrapper[4861]: I0219 14:55:17.477074 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/338b7ee2-b889-4a37-bac9-2b2f7f2ae13e-utilities\") pod \"redhat-operators-t9xnc\" (UID: \"338b7ee2-b889-4a37-bac9-2b2f7f2ae13e\") " pod="openshift-marketplace/redhat-operators-t9xnc" 
Feb 19 14:55:17 crc kubenswrapper[4861]: I0219 14:55:17.579169 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/338b7ee2-b889-4a37-bac9-2b2f7f2ae13e-utilities\") pod \"redhat-operators-t9xnc\" (UID: \"338b7ee2-b889-4a37-bac9-2b2f7f2ae13e\") " pod="openshift-marketplace/redhat-operators-t9xnc" Feb 19 14:55:17 crc kubenswrapper[4861]: I0219 14:55:17.579275 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/338b7ee2-b889-4a37-bac9-2b2f7f2ae13e-catalog-content\") pod \"redhat-operators-t9xnc\" (UID: \"338b7ee2-b889-4a37-bac9-2b2f7f2ae13e\") " pod="openshift-marketplace/redhat-operators-t9xnc" Feb 19 14:55:17 crc kubenswrapper[4861]: I0219 14:55:17.579361 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdxnv\" (UniqueName: \"kubernetes.io/projected/338b7ee2-b889-4a37-bac9-2b2f7f2ae13e-kube-api-access-fdxnv\") pod \"redhat-operators-t9xnc\" (UID: \"338b7ee2-b889-4a37-bac9-2b2f7f2ae13e\") " pod="openshift-marketplace/redhat-operators-t9xnc" Feb 19 14:55:17 crc kubenswrapper[4861]: I0219 14:55:17.580377 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/338b7ee2-b889-4a37-bac9-2b2f7f2ae13e-utilities\") pod \"redhat-operators-t9xnc\" (UID: \"338b7ee2-b889-4a37-bac9-2b2f7f2ae13e\") " pod="openshift-marketplace/redhat-operators-t9xnc" Feb 19 14:55:17 crc kubenswrapper[4861]: I0219 14:55:17.580386 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/338b7ee2-b889-4a37-bac9-2b2f7f2ae13e-catalog-content\") pod \"redhat-operators-t9xnc\" (UID: \"338b7ee2-b889-4a37-bac9-2b2f7f2ae13e\") " pod="openshift-marketplace/redhat-operators-t9xnc" Feb 19 14:55:17 crc kubenswrapper[4861]: I0219 14:55:17.607611 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdxnv\" (UniqueName: \"kubernetes.io/projected/338b7ee2-b889-4a37-bac9-2b2f7f2ae13e-kube-api-access-fdxnv\") pod \"redhat-operators-t9xnc\" (UID: \"338b7ee2-b889-4a37-bac9-2b2f7f2ae13e\") " pod="openshift-marketplace/redhat-operators-t9xnc" Feb 19 14:55:17 crc kubenswrapper[4861]: I0219 14:55:17.702478 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t9xnc" Feb 19 14:55:18 crc kubenswrapper[4861]: I0219 14:55:18.164120 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t9xnc"] Feb 19 14:55:19 crc kubenswrapper[4861]: I0219 14:55:19.083832 4861 generic.go:334] "Generic (PLEG): container finished" podID="338b7ee2-b889-4a37-bac9-2b2f7f2ae13e" containerID="0e960c13a0e0ee37daac143be99e03464255633d7594db4bdabfb6b3a95d0dfa" exitCode=0 Feb 19 14:55:19 crc kubenswrapper[4861]: I0219 14:55:19.084453 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9xnc" event={"ID":"338b7ee2-b889-4a37-bac9-2b2f7f2ae13e","Type":"ContainerDied","Data":"0e960c13a0e0ee37daac143be99e03464255633d7594db4bdabfb6b3a95d0dfa"} Feb 19 14:55:19 crc kubenswrapper[4861]: I0219 14:55:19.084482 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9xnc" event={"ID":"338b7ee2-b889-4a37-bac9-2b2f7f2ae13e","Type":"ContainerStarted","Data":"0b6ced28939a55d50c8e54c62694bdadc8578798d06cd603ba386b9997c0da2f"} Feb 19 14:55:19 crc kubenswrapper[4861]: I0219 14:55:19.108678 4861 generic.go:334] "Generic (PLEG): container finished" podID="d57799aa-811d-48a0-b770-933ac731596d" containerID="4927d5e73ee70c551a1000baa643594b417e6201bc79b42430a70271ac6f8510" exitCode=0 Feb 19 14:55:19 crc kubenswrapper[4861]: I0219 14:55:19.108723 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x" event={"ID":"d57799aa-811d-48a0-b770-933ac731596d","Type":"ContainerDied","Data":"4927d5e73ee70c551a1000baa643594b417e6201bc79b42430a70271ac6f8510"} Feb 19 14:55:20 crc kubenswrapper[4861]: I0219 14:55:20.149776 4861 generic.go:334] "Generic (PLEG): container finished" podID="d57799aa-811d-48a0-b770-933ac731596d" containerID="e16d2e6e5a39bafc08d30ecf27670fea5bf3a8bdc2bb15db46c2183771ebae6f" exitCode=0 Feb 19 14:55:20 crc kubenswrapper[4861]: I0219 14:55:20.149901 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x" event={"ID":"d57799aa-811d-48a0-b770-933ac731596d","Type":"ContainerDied","Data":"e16d2e6e5a39bafc08d30ecf27670fea5bf3a8bdc2bb15db46c2183771ebae6f"} Feb 19 14:55:21 crc kubenswrapper[4861]: I0219 14:55:21.164741 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9xnc" event={"ID":"338b7ee2-b889-4a37-bac9-2b2f7f2ae13e","Type":"ContainerStarted","Data":"f8a8f5b6e9556d4cd2ec16faf4d45e5767bef9ac505df94019982b750c9ea968"} Feb 19 14:55:21 crc kubenswrapper[4861]: I0219 14:55:21.667091 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x" Feb 19 14:55:21 crc kubenswrapper[4861]: I0219 14:55:21.805526 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsmhw\" (UniqueName: \"kubernetes.io/projected/d57799aa-811d-48a0-b770-933ac731596d-kube-api-access-wsmhw\") pod \"d57799aa-811d-48a0-b770-933ac731596d\" (UID: \"d57799aa-811d-48a0-b770-933ac731596d\") " Feb 19 14:55:21 crc kubenswrapper[4861]: I0219 14:55:21.805641 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d57799aa-811d-48a0-b770-933ac731596d-bundle\") pod \"d57799aa-811d-48a0-b770-933ac731596d\" (UID: \"d57799aa-811d-48a0-b770-933ac731596d\") " Feb 19 14:55:21 crc kubenswrapper[4861]: I0219 14:55:21.805837 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d57799aa-811d-48a0-b770-933ac731596d-util\") pod \"d57799aa-811d-48a0-b770-933ac731596d\" (UID: \"d57799aa-811d-48a0-b770-933ac731596d\") " Feb 19 14:55:21 crc kubenswrapper[4861]: I0219 14:55:21.809880 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d57799aa-811d-48a0-b770-933ac731596d-bundle" (OuterVolumeSpecName: "bundle") pod "d57799aa-811d-48a0-b770-933ac731596d" (UID: "d57799aa-811d-48a0-b770-933ac731596d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:55:21 crc kubenswrapper[4861]: I0219 14:55:21.814281 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d57799aa-811d-48a0-b770-933ac731596d-kube-api-access-wsmhw" (OuterVolumeSpecName: "kube-api-access-wsmhw") pod "d57799aa-811d-48a0-b770-933ac731596d" (UID: "d57799aa-811d-48a0-b770-933ac731596d"). InnerVolumeSpecName "kube-api-access-wsmhw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:55:21 crc kubenswrapper[4861]: I0219 14:55:21.815262 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d57799aa-811d-48a0-b770-933ac731596d-util" (OuterVolumeSpecName: "util") pod "d57799aa-811d-48a0-b770-933ac731596d" (UID: "d57799aa-811d-48a0-b770-933ac731596d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:55:21 crc kubenswrapper[4861]: I0219 14:55:21.908150 4861 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d57799aa-811d-48a0-b770-933ac731596d-util\") on node \"crc\" DevicePath \"\"" Feb 19 14:55:21 crc kubenswrapper[4861]: I0219 14:55:21.908187 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsmhw\" (UniqueName: \"kubernetes.io/projected/d57799aa-811d-48a0-b770-933ac731596d-kube-api-access-wsmhw\") on node \"crc\" DevicePath \"\"" Feb 19 14:55:21 crc kubenswrapper[4861]: I0219 14:55:21.908202 4861 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d57799aa-811d-48a0-b770-933ac731596d-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:55:22 crc kubenswrapper[4861]: I0219 14:55:22.179054 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x" event={"ID":"d57799aa-811d-48a0-b770-933ac731596d","Type":"ContainerDied","Data":"280f7c70e96a7fa7ba434b9c2fe61b780af680fc0a9f3e30fc163760414ee0fc"} Feb 19 14:55:22 crc kubenswrapper[4861]: I0219 14:55:22.179129 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="280f7c70e96a7fa7ba434b9c2fe61b780af680fc0a9f3e30fc163760414ee0fc" Feb 19 14:55:22 crc kubenswrapper[4861]: I0219 14:55:22.179163 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x" Feb 19 14:55:23 crc kubenswrapper[4861]: I0219 14:55:23.190054 4861 generic.go:334] "Generic (PLEG): container finished" podID="338b7ee2-b889-4a37-bac9-2b2f7f2ae13e" containerID="f8a8f5b6e9556d4cd2ec16faf4d45e5767bef9ac505df94019982b750c9ea968" exitCode=0 Feb 19 14:55:23 crc kubenswrapper[4861]: I0219 14:55:23.190095 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9xnc" event={"ID":"338b7ee2-b889-4a37-bac9-2b2f7f2ae13e","Type":"ContainerDied","Data":"f8a8f5b6e9556d4cd2ec16faf4d45e5767bef9ac505df94019982b750c9ea968"} Feb 19 14:55:24 crc kubenswrapper[4861]: I0219 14:55:24.201110 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9xnc" event={"ID":"338b7ee2-b889-4a37-bac9-2b2f7f2ae13e","Type":"ContainerStarted","Data":"fbabd1c056cc11bac2e169a0ea8e5fe6a1874906420560e2323317319649c291"} Feb 19 14:55:24 crc kubenswrapper[4861]: I0219 14:55:24.220768 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t9xnc" podStartSLOduration=2.71625287 podStartE2EDuration="7.22074634s" podCreationTimestamp="2026-02-19 14:55:17 +0000 UTC" firstStartedPulling="2026-02-19 14:55:19.086291616 +0000 UTC m=+6333.747394844" lastFinishedPulling="2026-02-19 14:55:23.590785066 +0000 UTC m=+6338.251888314" observedRunningTime="2026-02-19 14:55:24.217727669 +0000 UTC m=+6338.878830897" watchObservedRunningTime="2026-02-19 14:55:24.22074634 +0000 UTC m=+6338.881849568" Feb 19 14:55:27 crc kubenswrapper[4861]: I0219 14:55:27.703634 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t9xnc" Feb 19 14:55:27 crc kubenswrapper[4861]: I0219 14:55:27.704214 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-t9xnc" Feb 19 14:55:28 crc kubenswrapper[4861]: I0219 14:55:28.785529 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t9xnc" podUID="338b7ee2-b889-4a37-bac9-2b2f7f2ae13e" containerName="registry-server" probeResult="failure" output=< Feb 19 14:55:28 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Feb 19 14:55:28 crc kubenswrapper[4861]: > Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.300245 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-kjc7t"] Feb 19 14:55:31 crc kubenswrapper[4861]: E0219 14:55:31.301180 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d57799aa-811d-48a0-b770-933ac731596d" containerName="util" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.301198 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d57799aa-811d-48a0-b770-933ac731596d" containerName="util" Feb 19 14:55:31 crc kubenswrapper[4861]: E0219 14:55:31.301244 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d57799aa-811d-48a0-b770-933ac731596d" containerName="pull" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.301252 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d57799aa-811d-48a0-b770-933ac731596d" containerName="pull" Feb 19 14:55:31 crc kubenswrapper[4861]: E0219 14:55:31.301272 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d57799aa-811d-48a0-b770-933ac731596d" containerName="extract" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.301282 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d57799aa-811d-48a0-b770-933ac731596d" containerName="extract" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.301564 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="d57799aa-811d-48a0-b770-933ac731596d" containerName="extract" Feb 19 14:55:31 crc 
kubenswrapper[4861]: I0219 14:55:31.302268 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kjc7t" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.307449 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.307450 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.307626 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-h2lns" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.317329 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-kjc7t"] Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.404974 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w658x\" (UniqueName: \"kubernetes.io/projected/381bc7a3-1600-4929-8cac-506015cf9319-kube-api-access-w658x\") pod \"obo-prometheus-operator-68bc856cb9-kjc7t\" (UID: \"381bc7a3-1600-4929-8cac-506015cf9319\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kjc7t" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.450251 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-556cdd4b58-spwxw"] Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.451713 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-556cdd4b58-spwxw" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.453470 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.454082 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-qv8jb" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.460237 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-556cdd4b58-57k9q"] Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.461539 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-556cdd4b58-57k9q" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.496025 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-556cdd4b58-spwxw"] Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.506634 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w658x\" (UniqueName: \"kubernetes.io/projected/381bc7a3-1600-4929-8cac-506015cf9319-kube-api-access-w658x\") pod \"obo-prometheus-operator-68bc856cb9-kjc7t\" (UID: \"381bc7a3-1600-4929-8cac-506015cf9319\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kjc7t" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.506796 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2ce4b85a-81c0-4529-89f6-07363a95082c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-556cdd4b58-spwxw\" (UID: \"2ce4b85a-81c0-4529-89f6-07363a95082c\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-556cdd4b58-spwxw" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.506874 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2ce4b85a-81c0-4529-89f6-07363a95082c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-556cdd4b58-spwxw\" (UID: \"2ce4b85a-81c0-4529-89f6-07363a95082c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-556cdd4b58-spwxw" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.540590 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-556cdd4b58-57k9q"] Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.542554 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w658x\" (UniqueName: \"kubernetes.io/projected/381bc7a3-1600-4929-8cac-506015cf9319-kube-api-access-w658x\") pod \"obo-prometheus-operator-68bc856cb9-kjc7t\" (UID: \"381bc7a3-1600-4929-8cac-506015cf9319\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kjc7t" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.608415 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2ce4b85a-81c0-4529-89f6-07363a95082c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-556cdd4b58-spwxw\" (UID: \"2ce4b85a-81c0-4529-89f6-07363a95082c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-556cdd4b58-spwxw" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.608573 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/626c108e-f677-42ae-a266-0920c5896f3e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-556cdd4b58-57k9q\" 
(UID: \"626c108e-f677-42ae-a266-0920c5896f3e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-556cdd4b58-57k9q" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.608620 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/626c108e-f677-42ae-a266-0920c5896f3e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-556cdd4b58-57k9q\" (UID: \"626c108e-f677-42ae-a266-0920c5896f3e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-556cdd4b58-57k9q" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.609183 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2ce4b85a-81c0-4529-89f6-07363a95082c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-556cdd4b58-spwxw\" (UID: \"2ce4b85a-81c0-4529-89f6-07363a95082c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-556cdd4b58-spwxw" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.611485 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2ce4b85a-81c0-4529-89f6-07363a95082c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-556cdd4b58-spwxw\" (UID: \"2ce4b85a-81c0-4529-89f6-07363a95082c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-556cdd4b58-spwxw" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.614793 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2ce4b85a-81c0-4529-89f6-07363a95082c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-556cdd4b58-spwxw\" (UID: \"2ce4b85a-81c0-4529-89f6-07363a95082c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-556cdd4b58-spwxw" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 
14:55:31.629480 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-46kgq"] Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.630708 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-46kgq" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.632505 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-mq9r7" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.632620 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.647914 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-46kgq"] Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.675252 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kjc7t" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.710906 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkhjw\" (UniqueName: \"kubernetes.io/projected/13c3b5b3-f000-4fce-b836-14a28771110f-kube-api-access-tkhjw\") pod \"observability-operator-59bdc8b94-46kgq\" (UID: \"13c3b5b3-f000-4fce-b836-14a28771110f\") " pod="openshift-operators/observability-operator-59bdc8b94-46kgq" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.710972 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/626c108e-f677-42ae-a266-0920c5896f3e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-556cdd4b58-57k9q\" (UID: \"626c108e-f677-42ae-a266-0920c5896f3e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-556cdd4b58-57k9q" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.711002 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/626c108e-f677-42ae-a266-0920c5896f3e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-556cdd4b58-57k9q\" (UID: \"626c108e-f677-42ae-a266-0920c5896f3e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-556cdd4b58-57k9q" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.711033 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/13c3b5b3-f000-4fce-b836-14a28771110f-observability-operator-tls\") pod \"observability-operator-59bdc8b94-46kgq\" (UID: \"13c3b5b3-f000-4fce-b836-14a28771110f\") " pod="openshift-operators/observability-operator-59bdc8b94-46kgq" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.714292 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/626c108e-f677-42ae-a266-0920c5896f3e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-556cdd4b58-57k9q\" (UID: \"626c108e-f677-42ae-a266-0920c5896f3e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-556cdd4b58-57k9q" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.715352 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/626c108e-f677-42ae-a266-0920c5896f3e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-556cdd4b58-57k9q\" (UID: \"626c108e-f677-42ae-a266-0920c5896f3e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-556cdd4b58-57k9q" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.758467 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-jts87"] Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.768301 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-jts87" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.773890 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-72gks" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.777290 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-556cdd4b58-spwxw" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.798505 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-556cdd4b58-57k9q" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.799555 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-jts87"] Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.818599 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/13c3b5b3-f000-4fce-b836-14a28771110f-observability-operator-tls\") pod \"observability-operator-59bdc8b94-46kgq\" (UID: \"13c3b5b3-f000-4fce-b836-14a28771110f\") " pod="openshift-operators/observability-operator-59bdc8b94-46kgq" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.818775 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkhjw\" (UniqueName: \"kubernetes.io/projected/13c3b5b3-f000-4fce-b836-14a28771110f-kube-api-access-tkhjw\") pod \"observability-operator-59bdc8b94-46kgq\" (UID: \"13c3b5b3-f000-4fce-b836-14a28771110f\") " pod="openshift-operators/observability-operator-59bdc8b94-46kgq" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.833866 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/13c3b5b3-f000-4fce-b836-14a28771110f-observability-operator-tls\") pod \"observability-operator-59bdc8b94-46kgq\" (UID: \"13c3b5b3-f000-4fce-b836-14a28771110f\") " pod="openshift-operators/observability-operator-59bdc8b94-46kgq" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.878590 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkhjw\" (UniqueName: \"kubernetes.io/projected/13c3b5b3-f000-4fce-b836-14a28771110f-kube-api-access-tkhjw\") pod \"observability-operator-59bdc8b94-46kgq\" (UID: \"13c3b5b3-f000-4fce-b836-14a28771110f\") " 
pod="openshift-operators/observability-operator-59bdc8b94-46kgq" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.921112 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fwhm\" (UniqueName: \"kubernetes.io/projected/0ca60b71-ca8b-4ac1-b2a0-a52bd45fd530-kube-api-access-7fwhm\") pod \"perses-operator-5bf474d74f-jts87\" (UID: \"0ca60b71-ca8b-4ac1-b2a0-a52bd45fd530\") " pod="openshift-operators/perses-operator-5bf474d74f-jts87" Feb 19 14:55:31 crc kubenswrapper[4861]: I0219 14:55:31.921205 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0ca60b71-ca8b-4ac1-b2a0-a52bd45fd530-openshift-service-ca\") pod \"perses-operator-5bf474d74f-jts87\" (UID: \"0ca60b71-ca8b-4ac1-b2a0-a52bd45fd530\") " pod="openshift-operators/perses-operator-5bf474d74f-jts87" Feb 19 14:55:32 crc kubenswrapper[4861]: I0219 14:55:32.026607 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0ca60b71-ca8b-4ac1-b2a0-a52bd45fd530-openshift-service-ca\") pod \"perses-operator-5bf474d74f-jts87\" (UID: \"0ca60b71-ca8b-4ac1-b2a0-a52bd45fd530\") " pod="openshift-operators/perses-operator-5bf474d74f-jts87" Feb 19 14:55:32 crc kubenswrapper[4861]: I0219 14:55:32.026768 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fwhm\" (UniqueName: \"kubernetes.io/projected/0ca60b71-ca8b-4ac1-b2a0-a52bd45fd530-kube-api-access-7fwhm\") pod \"perses-operator-5bf474d74f-jts87\" (UID: \"0ca60b71-ca8b-4ac1-b2a0-a52bd45fd530\") " pod="openshift-operators/perses-operator-5bf474d74f-jts87" Feb 19 14:55:32 crc kubenswrapper[4861]: I0219 14:55:32.028113 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0ca60b71-ca8b-4ac1-b2a0-a52bd45fd530-openshift-service-ca\") pod \"perses-operator-5bf474d74f-jts87\" (UID: \"0ca60b71-ca8b-4ac1-b2a0-a52bd45fd530\") " pod="openshift-operators/perses-operator-5bf474d74f-jts87" Feb 19 14:55:32 crc kubenswrapper[4861]: I0219 14:55:32.054068 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fwhm\" (UniqueName: \"kubernetes.io/projected/0ca60b71-ca8b-4ac1-b2a0-a52bd45fd530-kube-api-access-7fwhm\") pod \"perses-operator-5bf474d74f-jts87\" (UID: \"0ca60b71-ca8b-4ac1-b2a0-a52bd45fd530\") " pod="openshift-operators/perses-operator-5bf474d74f-jts87" Feb 19 14:55:32 crc kubenswrapper[4861]: I0219 14:55:32.162145 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-46kgq" Feb 19 14:55:32 crc kubenswrapper[4861]: I0219 14:55:32.175001 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-jts87" Feb 19 14:55:32 crc kubenswrapper[4861]: I0219 14:55:32.235668 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-kjc7t"] Feb 19 14:55:32 crc kubenswrapper[4861]: I0219 14:55:32.267182 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 14:55:32 crc kubenswrapper[4861]: I0219 14:55:32.547780 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-556cdd4b58-57k9q"] Feb 19 14:55:32 crc kubenswrapper[4861]: I0219 14:55:32.734727 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-556cdd4b58-spwxw"] Feb 19 14:55:32 crc kubenswrapper[4861]: I0219 14:55:32.793591 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-jts87"] Feb 19 
14:55:32 crc kubenswrapper[4861]: W0219 14:55:32.793620 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ca60b71_ca8b_4ac1_b2a0_a52bd45fd530.slice/crio-4d7b6085f949a57c467198e7e6a0879f21dfe6a4901af9adecf4453db7ce851b WatchSource:0}: Error finding container 4d7b6085f949a57c467198e7e6a0879f21dfe6a4901af9adecf4453db7ce851b: Status 404 returned error can't find the container with id 4d7b6085f949a57c467198e7e6a0879f21dfe6a4901af9adecf4453db7ce851b Feb 19 14:55:32 crc kubenswrapper[4861]: I0219 14:55:32.893383 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-46kgq"] Feb 19 14:55:32 crc kubenswrapper[4861]: W0219 14:55:32.898187 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13c3b5b3_f000_4fce_b836_14a28771110f.slice/crio-2b0652a320ead4f62fe4fcbf9f54e7b60161b61fe52015e7e4556103f79b164b WatchSource:0}: Error finding container 2b0652a320ead4f62fe4fcbf9f54e7b60161b61fe52015e7e4556103f79b164b: Status 404 returned error can't find the container with id 2b0652a320ead4f62fe4fcbf9f54e7b60161b61fe52015e7e4556103f79b164b Feb 19 14:55:33 crc kubenswrapper[4861]: I0219 14:55:33.292842 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-556cdd4b58-57k9q" event={"ID":"626c108e-f677-42ae-a266-0920c5896f3e","Type":"ContainerStarted","Data":"1b9aaa9fdbcaffbf2ac6c54548a83079c6fae93462f9b40fa1ac81da77638629"} Feb 19 14:55:33 crc kubenswrapper[4861]: I0219 14:55:33.295177 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-jts87" event={"ID":"0ca60b71-ca8b-4ac1-b2a0-a52bd45fd530","Type":"ContainerStarted","Data":"4d7b6085f949a57c467198e7e6a0879f21dfe6a4901af9adecf4453db7ce851b"} Feb 19 14:55:33 crc kubenswrapper[4861]: I0219 
14:55:33.297011 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-556cdd4b58-spwxw" event={"ID":"2ce4b85a-81c0-4529-89f6-07363a95082c","Type":"ContainerStarted","Data":"b80a2da7a93f6780a91a62025339d5f7cc65c03667963a815bb65a3e94ca3e84"} Feb 19 14:55:33 crc kubenswrapper[4861]: I0219 14:55:33.298180 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kjc7t" event={"ID":"381bc7a3-1600-4929-8cac-506015cf9319","Type":"ContainerStarted","Data":"f8107e05ca26f8116a157eb9e41ab266600773af339616be3fdf0bc6279f4e15"} Feb 19 14:55:33 crc kubenswrapper[4861]: I0219 14:55:33.299249 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-46kgq" event={"ID":"13c3b5b3-f000-4fce-b836-14a28771110f","Type":"ContainerStarted","Data":"2b0652a320ead4f62fe4fcbf9f54e7b60161b61fe52015e7e4556103f79b164b"} Feb 19 14:55:38 crc kubenswrapper[4861]: I0219 14:55:38.754803 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t9xnc" podUID="338b7ee2-b889-4a37-bac9-2b2f7f2ae13e" containerName="registry-server" probeResult="failure" output=< Feb 19 14:55:38 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Feb 19 14:55:38 crc kubenswrapper[4861]: > Feb 19 14:55:46 crc kubenswrapper[4861]: E0219 14:55:46.094609 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" Feb 19 14:55:46 crc kubenswrapper[4861]: E0219 14:55:46.095442 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-556cdd4b58-spwxw_openshift-operators(2ce4b85a-81c0-4529-89f6-07363a95082c): 
ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 14:55:46 crc kubenswrapper[4861]: E0219 14:55:46.096841 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-556cdd4b58-spwxw" podUID="2ce4b85a-81c0-4529-89f6-07363a95082c" Feb 19 14:55:47 crc kubenswrapper[4861]: I0219 14:55:47.449064 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-556cdd4b58-spwxw" event={"ID":"2ce4b85a-81c0-4529-89f6-07363a95082c","Type":"ContainerStarted","Data":"58adafbd69f0dbaaff8cae4750ed2979e7444e5c7e18d41cc161b1d83b5ce6d6"} Feb 19 14:55:47 crc kubenswrapper[4861]: I0219 14:55:47.474148 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kjc7t" event={"ID":"381bc7a3-1600-4929-8cac-506015cf9319","Type":"ContainerStarted","Data":"6accf61e03abd254f224f3a5baf882014461d37fb078640fab640f4e3ea23151"} Feb 19 14:55:47 crc kubenswrapper[4861]: I0219 14:55:47.485880 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-46kgq" event={"ID":"13c3b5b3-f000-4fce-b836-14a28771110f","Type":"ContainerStarted","Data":"12079494ff9ab55b6955257dd7855b126cc4526d502e6ef53d112897ede086e4"} Feb 19 14:55:47 crc kubenswrapper[4861]: I0219 14:55:47.486093 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-46kgq" Feb 19 14:55:47 crc kubenswrapper[4861]: I0219 14:55:47.488479 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operators/observability-operator-59bdc8b94-46kgq" Feb 19 14:55:47 crc kubenswrapper[4861]: I0219 14:55:47.492945 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-556cdd4b58-57k9q" event={"ID":"626c108e-f677-42ae-a266-0920c5896f3e","Type":"ContainerStarted","Data":"f5b2bdfe89322c53016a1bf9751e13da450c1fd43f41bd2bfd49013ce7d70922"} Feb 19 14:55:47 crc kubenswrapper[4861]: I0219 14:55:47.495122 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-jts87" event={"ID":"0ca60b71-ca8b-4ac1-b2a0-a52bd45fd530","Type":"ContainerStarted","Data":"11dc18a172bd7eeee7cf449457ba8621278fa5c87a915e5a005573a0133c394e"} Feb 19 14:55:47 crc kubenswrapper[4861]: I0219 14:55:47.495313 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-jts87" Feb 19 14:55:47 crc kubenswrapper[4861]: I0219 14:55:47.520248 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-556cdd4b58-spwxw" podStartSLOduration=-9223372020.334545 podStartE2EDuration="16.520230426s" podCreationTimestamp="2026-02-19 14:55:31 +0000 UTC" firstStartedPulling="2026-02-19 14:55:32.753877602 +0000 UTC m=+6347.414980830" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:55:47.502078329 +0000 UTC m=+6362.163181567" watchObservedRunningTime="2026-02-19 14:55:47.520230426 +0000 UTC m=+6362.181333654" Feb 19 14:55:47 crc kubenswrapper[4861]: I0219 14:55:47.556631 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-556cdd4b58-57k9q" podStartSLOduration=2.946468862 podStartE2EDuration="16.556615234s" podCreationTimestamp="2026-02-19 14:55:31 +0000 UTC" firstStartedPulling="2026-02-19 14:55:32.544520957 +0000 UTC 
m=+6347.205624185" lastFinishedPulling="2026-02-19 14:55:46.154667289 +0000 UTC m=+6360.815770557" observedRunningTime="2026-02-19 14:55:47.552115383 +0000 UTC m=+6362.213218611" watchObservedRunningTime="2026-02-19 14:55:47.556615234 +0000 UTC m=+6362.217718462" Feb 19 14:55:47 crc kubenswrapper[4861]: I0219 14:55:47.596569 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kjc7t" podStartSLOduration=2.7093430720000002 podStartE2EDuration="16.596546637s" podCreationTimestamp="2026-02-19 14:55:31 +0000 UTC" firstStartedPulling="2026-02-19 14:55:32.266974181 +0000 UTC m=+6346.928077409" lastFinishedPulling="2026-02-19 14:55:46.154177736 +0000 UTC m=+6360.815280974" observedRunningTime="2026-02-19 14:55:47.590724181 +0000 UTC m=+6362.251827409" watchObservedRunningTime="2026-02-19 14:55:47.596546637 +0000 UTC m=+6362.257649865" Feb 19 14:55:47 crc kubenswrapper[4861]: I0219 14:55:47.638790 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-jts87" podStartSLOduration=3.259823211 podStartE2EDuration="16.638769931s" podCreationTimestamp="2026-02-19 14:55:31 +0000 UTC" firstStartedPulling="2026-02-19 14:55:32.795949712 +0000 UTC m=+6347.457052940" lastFinishedPulling="2026-02-19 14:55:46.174896422 +0000 UTC m=+6360.835999660" observedRunningTime="2026-02-19 14:55:47.625764082 +0000 UTC m=+6362.286867310" watchObservedRunningTime="2026-02-19 14:55:47.638769931 +0000 UTC m=+6362.299873159" Feb 19 14:55:47 crc kubenswrapper[4861]: I0219 14:55:47.672190 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-46kgq" podStartSLOduration=3.363650051 podStartE2EDuration="16.672174659s" podCreationTimestamp="2026-02-19 14:55:31 +0000 UTC" firstStartedPulling="2026-02-19 14:55:32.902038903 +0000 UTC m=+6347.563142131" lastFinishedPulling="2026-02-19 
14:55:46.210563471 +0000 UTC m=+6360.871666739" observedRunningTime="2026-02-19 14:55:47.672022655 +0000 UTC m=+6362.333125883" watchObservedRunningTime="2026-02-19 14:55:47.672174659 +0000 UTC m=+6362.333277887" Feb 19 14:55:48 crc kubenswrapper[4861]: I0219 14:55:48.796011 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t9xnc" podUID="338b7ee2-b889-4a37-bac9-2b2f7f2ae13e" containerName="registry-server" probeResult="failure" output=< Feb 19 14:55:48 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Feb 19 14:55:48 crc kubenswrapper[4861]: > Feb 19 14:55:52 crc kubenswrapper[4861]: I0219 14:55:52.178525 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-jts87" Feb 19 14:55:54 crc kubenswrapper[4861]: I0219 14:55:54.904355 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 19 14:55:54 crc kubenswrapper[4861]: I0219 14:55:54.905373 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="5a69b964-cce9-4112-86e5-3984e1706034" containerName="openstackclient" containerID="cri-o://c39e3dac87ba6fcb1b70ca634289dd191ac6f2d291a3e187907bfb4fbe752960" gracePeriod=2 Feb 19 14:55:54 crc kubenswrapper[4861]: I0219 14:55:54.916489 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 19 14:55:54 crc kubenswrapper[4861]: I0219 14:55:54.954137 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 19 14:55:54 crc kubenswrapper[4861]: E0219 14:55:54.954558 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a69b964-cce9-4112-86e5-3984e1706034" containerName="openstackclient" Feb 19 14:55:54 crc kubenswrapper[4861]: I0219 14:55:54.954575 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a69b964-cce9-4112-86e5-3984e1706034" 
containerName="openstackclient" Feb 19 14:55:54 crc kubenswrapper[4861]: I0219 14:55:54.954986 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a69b964-cce9-4112-86e5-3984e1706034" containerName="openstackclient" Feb 19 14:55:54 crc kubenswrapper[4861]: I0219 14:55:54.955727 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 14:55:54 crc kubenswrapper[4861]: I0219 14:55:54.964942 4861 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="5a69b964-cce9-4112-86e5-3984e1706034" podUID="8f4b4718-0272-409d-8ad4-76114792a8d2" Feb 19 14:55:54 crc kubenswrapper[4861]: I0219 14:55:54.993207 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 14:55:55 crc kubenswrapper[4861]: I0219 14:55:55.037923 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8f4b4718-0272-409d-8ad4-76114792a8d2-openstack-config\") pod \"openstackclient\" (UID: \"8f4b4718-0272-409d-8ad4-76114792a8d2\") " pod="openstack/openstackclient" Feb 19 14:55:55 crc kubenswrapper[4861]: I0219 14:55:55.038031 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq47t\" (UniqueName: \"kubernetes.io/projected/8f4b4718-0272-409d-8ad4-76114792a8d2-kube-api-access-xq47t\") pod \"openstackclient\" (UID: \"8f4b4718-0272-409d-8ad4-76114792a8d2\") " pod="openstack/openstackclient" Feb 19 14:55:55 crc kubenswrapper[4861]: I0219 14:55:55.038175 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8f4b4718-0272-409d-8ad4-76114792a8d2-openstack-config-secret\") pod \"openstackclient\" (UID: \"8f4b4718-0272-409d-8ad4-76114792a8d2\") " 
pod="openstack/openstackclient" Feb 19 14:55:55 crc kubenswrapper[4861]: I0219 14:55:55.038221 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f4b4718-0272-409d-8ad4-76114792a8d2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8f4b4718-0272-409d-8ad4-76114792a8d2\") " pod="openstack/openstackclient" Feb 19 14:55:55 crc kubenswrapper[4861]: I0219 14:55:55.140234 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8f4b4718-0272-409d-8ad4-76114792a8d2-openstack-config\") pod \"openstackclient\" (UID: \"8f4b4718-0272-409d-8ad4-76114792a8d2\") " pod="openstack/openstackclient" Feb 19 14:55:55 crc kubenswrapper[4861]: I0219 14:55:55.140298 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq47t\" (UniqueName: \"kubernetes.io/projected/8f4b4718-0272-409d-8ad4-76114792a8d2-kube-api-access-xq47t\") pod \"openstackclient\" (UID: \"8f4b4718-0272-409d-8ad4-76114792a8d2\") " pod="openstack/openstackclient" Feb 19 14:55:55 crc kubenswrapper[4861]: I0219 14:55:55.140361 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8f4b4718-0272-409d-8ad4-76114792a8d2-openstack-config-secret\") pod \"openstackclient\" (UID: \"8f4b4718-0272-409d-8ad4-76114792a8d2\") " pod="openstack/openstackclient" Feb 19 14:55:55 crc kubenswrapper[4861]: I0219 14:55:55.140387 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f4b4718-0272-409d-8ad4-76114792a8d2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8f4b4718-0272-409d-8ad4-76114792a8d2\") " pod="openstack/openstackclient" Feb 19 14:55:55 crc kubenswrapper[4861]: I0219 14:55:55.141408 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8f4b4718-0272-409d-8ad4-76114792a8d2-openstack-config\") pod \"openstackclient\" (UID: \"8f4b4718-0272-409d-8ad4-76114792a8d2\") " pod="openstack/openstackclient" Feb 19 14:55:55 crc kubenswrapper[4861]: I0219 14:55:55.147106 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8f4b4718-0272-409d-8ad4-76114792a8d2-openstack-config-secret\") pod \"openstackclient\" (UID: \"8f4b4718-0272-409d-8ad4-76114792a8d2\") " pod="openstack/openstackclient" Feb 19 14:55:55 crc kubenswrapper[4861]: I0219 14:55:55.148718 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f4b4718-0272-409d-8ad4-76114792a8d2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8f4b4718-0272-409d-8ad4-76114792a8d2\") " pod="openstack/openstackclient" Feb 19 14:55:55 crc kubenswrapper[4861]: I0219 14:55:55.164721 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq47t\" (UniqueName: \"kubernetes.io/projected/8f4b4718-0272-409d-8ad4-76114792a8d2-kube-api-access-xq47t\") pod \"openstackclient\" (UID: \"8f4b4718-0272-409d-8ad4-76114792a8d2\") " pod="openstack/openstackclient" Feb 19 14:55:55 crc kubenswrapper[4861]: I0219 14:55:55.185468 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 14:55:55 crc kubenswrapper[4861]: I0219 14:55:55.186827 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 14:55:55 crc kubenswrapper[4861]: I0219 14:55:55.189175 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-447lk" Feb 19 14:55:55 crc kubenswrapper[4861]: I0219 14:55:55.199113 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 14:55:55 crc kubenswrapper[4861]: I0219 14:55:55.242340 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlk4c\" (UniqueName: \"kubernetes.io/projected/3aeb9d2e-df2b-4d6b-abe7-e0c4b77c2948-kube-api-access-jlk4c\") pod \"kube-state-metrics-0\" (UID: \"3aeb9d2e-df2b-4d6b-abe7-e0c4b77c2948\") " pod="openstack/kube-state-metrics-0" Feb 19 14:55:55 crc kubenswrapper[4861]: I0219 14:55:55.276669 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 14:55:55 crc kubenswrapper[4861]: I0219 14:55:55.348408 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlk4c\" (UniqueName: \"kubernetes.io/projected/3aeb9d2e-df2b-4d6b-abe7-e0c4b77c2948-kube-api-access-jlk4c\") pod \"kube-state-metrics-0\" (UID: \"3aeb9d2e-df2b-4d6b-abe7-e0c4b77c2948\") " pod="openstack/kube-state-metrics-0" Feb 19 14:55:55 crc kubenswrapper[4861]: I0219 14:55:55.400854 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlk4c\" (UniqueName: \"kubernetes.io/projected/3aeb9d2e-df2b-4d6b-abe7-e0c4b77c2948-kube-api-access-jlk4c\") pod \"kube-state-metrics-0\" (UID: \"3aeb9d2e-df2b-4d6b-abe7-e0c4b77c2948\") " pod="openstack/kube-state-metrics-0" Feb 19 14:55:55 crc kubenswrapper[4861]: I0219 14:55:55.582905 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.108734 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.261088 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.299631 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.311599 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-hlzxd" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.328881 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.329784 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.329926 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.340033 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.345732 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.433943 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.455350 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" 
(UniqueName: \"kubernetes.io/empty-dir/7d8a2ddc-a471-4e7e-8e9a-fc205b80a904-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"7d8a2ddc-a471-4e7e-8e9a-fc205b80a904\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.455438 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7d8a2ddc-a471-4e7e-8e9a-fc205b80a904-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"7d8a2ddc-a471-4e7e-8e9a-fc205b80a904\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.455475 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/7d8a2ddc-a471-4e7e-8e9a-fc205b80a904-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"7d8a2ddc-a471-4e7e-8e9a-fc205b80a904\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.455537 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgp78\" (UniqueName: \"kubernetes.io/projected/7d8a2ddc-a471-4e7e-8e9a-fc205b80a904-kube-api-access-qgp78\") pod \"alertmanager-metric-storage-0\" (UID: \"7d8a2ddc-a471-4e7e-8e9a-fc205b80a904\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.455604 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/7d8a2ddc-a471-4e7e-8e9a-fc205b80a904-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"7d8a2ddc-a471-4e7e-8e9a-fc205b80a904\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.455626 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7d8a2ddc-a471-4e7e-8e9a-fc205b80a904-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"7d8a2ddc-a471-4e7e-8e9a-fc205b80a904\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.455641 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/7d8a2ddc-a471-4e7e-8e9a-fc205b80a904-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"7d8a2ddc-a471-4e7e-8e9a-fc205b80a904\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.557036 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7d8a2ddc-a471-4e7e-8e9a-fc205b80a904-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"7d8a2ddc-a471-4e7e-8e9a-fc205b80a904\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.557098 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/7d8a2ddc-a471-4e7e-8e9a-fc205b80a904-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"7d8a2ddc-a471-4e7e-8e9a-fc205b80a904\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.557158 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgp78\" (UniqueName: \"kubernetes.io/projected/7d8a2ddc-a471-4e7e-8e9a-fc205b80a904-kube-api-access-qgp78\") pod \"alertmanager-metric-storage-0\" (UID: \"7d8a2ddc-a471-4e7e-8e9a-fc205b80a904\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.557225 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/7d8a2ddc-a471-4e7e-8e9a-fc205b80a904-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"7d8a2ddc-a471-4e7e-8e9a-fc205b80a904\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.557251 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7d8a2ddc-a471-4e7e-8e9a-fc205b80a904-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"7d8a2ddc-a471-4e7e-8e9a-fc205b80a904\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.557267 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/7d8a2ddc-a471-4e7e-8e9a-fc205b80a904-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"7d8a2ddc-a471-4e7e-8e9a-fc205b80a904\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.557303 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7d8a2ddc-a471-4e7e-8e9a-fc205b80a904-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"7d8a2ddc-a471-4e7e-8e9a-fc205b80a904\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.561697 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/7d8a2ddc-a471-4e7e-8e9a-fc205b80a904-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"7d8a2ddc-a471-4e7e-8e9a-fc205b80a904\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.563964 4861 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/7d8a2ddc-a471-4e7e-8e9a-fc205b80a904-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"7d8a2ddc-a471-4e7e-8e9a-fc205b80a904\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.566840 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7d8a2ddc-a471-4e7e-8e9a-fc205b80a904-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"7d8a2ddc-a471-4e7e-8e9a-fc205b80a904\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.567177 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7d8a2ddc-a471-4e7e-8e9a-fc205b80a904-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"7d8a2ddc-a471-4e7e-8e9a-fc205b80a904\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.567624 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7d8a2ddc-a471-4e7e-8e9a-fc205b80a904-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"7d8a2ddc-a471-4e7e-8e9a-fc205b80a904\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.568984 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/7d8a2ddc-a471-4e7e-8e9a-fc205b80a904-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"7d8a2ddc-a471-4e7e-8e9a-fc205b80a904\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.587468 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgp78\" (UniqueName: 
\"kubernetes.io/projected/7d8a2ddc-a471-4e7e-8e9a-fc205b80a904-kube-api-access-qgp78\") pod \"alertmanager-metric-storage-0\" (UID: \"7d8a2ddc-a471-4e7e-8e9a-fc205b80a904\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.613796 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3aeb9d2e-df2b-4d6b-abe7-e0c4b77c2948","Type":"ContainerStarted","Data":"568a64327aa5ed311c32ce0a23a6eea7040d22663198c5520a3ebdad61b4f578"} Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.625611 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8f4b4718-0272-409d-8ad4-76114792a8d2","Type":"ContainerStarted","Data":"04eeda67c614722bc211446a53fa72d47296c750532826f897fc16a62466e92a"} Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.650652 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.654175 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.658779 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-69zlt" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.668496 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.669808 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.675697 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.675697 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.675909 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.676027 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.676142 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.703902 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.722867 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.762086 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvc65\" (UniqueName: \"kubernetes.io/projected/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-kube-api-access-gvc65\") pod \"prometheus-metric-storage-0\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.762143 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-config\") pod \"prometheus-metric-storage-0\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.762172 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.762208 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.762237 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f262dad5-5c68-40df-b6e9-2b0168bed699\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f262dad5-5c68-40df-b6e9-2b0168bed699\") pod \"prometheus-metric-storage-0\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.762268 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.762284 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.762309 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.762337 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.762429 4861 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.864345 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvc65\" (UniqueName: \"kubernetes.io/projected/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-kube-api-access-gvc65\") pod \"prometheus-metric-storage-0\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.864447 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-config\") pod \"prometheus-metric-storage-0\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.864500 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.864535 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:56 crc 
kubenswrapper[4861]: I0219 14:55:56.864583 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f262dad5-5c68-40df-b6e9-2b0168bed699\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f262dad5-5c68-40df-b6e9-2b0168bed699\") pod \"prometheus-metric-storage-0\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.864633 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.864662 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.864696 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.864737 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " 
pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.864835 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.865868 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.866172 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.866709 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.873036 4861 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.873075 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f262dad5-5c68-40df-b6e9-2b0168bed699\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f262dad5-5c68-40df-b6e9-2b0168bed699\") pod \"prometheus-metric-storage-0\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bfbf29fceab89dd2d8e1b1bec20301b1dfaad15f977da63573a2c4f2b05845b3/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.873701 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.874527 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-config\") pod \"prometheus-metric-storage-0\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.876243 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.876599 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-config-out\") pod 
\"prometheus-metric-storage-0\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.878838 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.888955 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvc65\" (UniqueName: \"kubernetes.io/projected/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-kube-api-access-gvc65\") pod \"prometheus-metric-storage-0\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:56 crc kubenswrapper[4861]: I0219 14:55:56.984793 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f262dad5-5c68-40df-b6e9-2b0168bed699\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f262dad5-5c68-40df-b6e9-2b0168bed699\") pod \"prometheus-metric-storage-0\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:57 crc kubenswrapper[4861]: I0219 14:55:57.268158 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 19 14:55:57 crc kubenswrapper[4861]: W0219 14:55:57.271688 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d8a2ddc_a471_4e7e_8e9a_fc205b80a904.slice/crio-e42f78d0031e1466c7e595b0d0847dc488769694e04803a6143544b6c6828a38 WatchSource:0}: Error finding container e42f78d0031e1466c7e595b0d0847dc488769694e04803a6143544b6c6828a38: Status 404 returned error can't find the container with id 
e42f78d0031e1466c7e595b0d0847dc488769694e04803a6143544b6c6828a38 Feb 19 14:55:57 crc kubenswrapper[4861]: I0219 14:55:57.275472 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 14:55:57 crc kubenswrapper[4861]: I0219 14:55:57.368119 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 14:55:57 crc kubenswrapper[4861]: I0219 14:55:57.371840 4861 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="5a69b964-cce9-4112-86e5-3984e1706034" podUID="8f4b4718-0272-409d-8ad4-76114792a8d2" Feb 19 14:55:57 crc kubenswrapper[4861]: I0219 14:55:57.478842 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbnl6\" (UniqueName: \"kubernetes.io/projected/5a69b964-cce9-4112-86e5-3984e1706034-kube-api-access-xbnl6\") pod \"5a69b964-cce9-4112-86e5-3984e1706034\" (UID: \"5a69b964-cce9-4112-86e5-3984e1706034\") " Feb 19 14:55:57 crc kubenswrapper[4861]: I0219 14:55:57.478887 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5a69b964-cce9-4112-86e5-3984e1706034-openstack-config\") pod \"5a69b964-cce9-4112-86e5-3984e1706034\" (UID: \"5a69b964-cce9-4112-86e5-3984e1706034\") " Feb 19 14:55:57 crc kubenswrapper[4861]: I0219 14:55:57.478942 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5a69b964-cce9-4112-86e5-3984e1706034-openstack-config-secret\") pod \"5a69b964-cce9-4112-86e5-3984e1706034\" (UID: \"5a69b964-cce9-4112-86e5-3984e1706034\") " Feb 19 14:55:57 crc kubenswrapper[4861]: I0219 14:55:57.479192 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5a69b964-cce9-4112-86e5-3984e1706034-combined-ca-bundle\") pod \"5a69b964-cce9-4112-86e5-3984e1706034\" (UID: \"5a69b964-cce9-4112-86e5-3984e1706034\") " Feb 19 14:55:57 crc kubenswrapper[4861]: I0219 14:55:57.490440 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a69b964-cce9-4112-86e5-3984e1706034-kube-api-access-xbnl6" (OuterVolumeSpecName: "kube-api-access-xbnl6") pod "5a69b964-cce9-4112-86e5-3984e1706034" (UID: "5a69b964-cce9-4112-86e5-3984e1706034"). InnerVolumeSpecName "kube-api-access-xbnl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:55:57 crc kubenswrapper[4861]: I0219 14:55:57.512169 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a69b964-cce9-4112-86e5-3984e1706034-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "5a69b964-cce9-4112-86e5-3984e1706034" (UID: "5a69b964-cce9-4112-86e5-3984e1706034"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:55:57 crc kubenswrapper[4861]: I0219 14:55:57.548026 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a69b964-cce9-4112-86e5-3984e1706034-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a69b964-cce9-4112-86e5-3984e1706034" (UID: "5a69b964-cce9-4112-86e5-3984e1706034"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:55:57 crc kubenswrapper[4861]: I0219 14:55:57.589143 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a69b964-cce9-4112-86e5-3984e1706034-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "5a69b964-cce9-4112-86e5-3984e1706034" (UID: "5a69b964-cce9-4112-86e5-3984e1706034"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:55:57 crc kubenswrapper[4861]: I0219 14:55:57.589541 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a69b964-cce9-4112-86e5-3984e1706034-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:55:57 crc kubenswrapper[4861]: I0219 14:55:57.599320 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbnl6\" (UniqueName: \"kubernetes.io/projected/5a69b964-cce9-4112-86e5-3984e1706034-kube-api-access-xbnl6\") on node \"crc\" DevicePath \"\"" Feb 19 14:55:57 crc kubenswrapper[4861]: I0219 14:55:57.599368 4861 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5a69b964-cce9-4112-86e5-3984e1706034-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 19 14:55:57 crc kubenswrapper[4861]: I0219 14:55:57.635363 4861 generic.go:334] "Generic (PLEG): container finished" podID="5a69b964-cce9-4112-86e5-3984e1706034" containerID="c39e3dac87ba6fcb1b70ca634289dd191ac6f2d291a3e187907bfb4fbe752960" exitCode=137 Feb 19 14:55:57 crc kubenswrapper[4861]: I0219 14:55:57.635435 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 14:55:57 crc kubenswrapper[4861]: I0219 14:55:57.635437 4861 scope.go:117] "RemoveContainer" containerID="c39e3dac87ba6fcb1b70ca634289dd191ac6f2d291a3e187907bfb4fbe752960" Feb 19 14:55:57 crc kubenswrapper[4861]: I0219 14:55:57.637950 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"7d8a2ddc-a471-4e7e-8e9a-fc205b80a904","Type":"ContainerStarted","Data":"e42f78d0031e1466c7e595b0d0847dc488769694e04803a6143544b6c6828a38"} Feb 19 14:55:57 crc kubenswrapper[4861]: I0219 14:55:57.638068 4861 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="5a69b964-cce9-4112-86e5-3984e1706034" podUID="8f4b4718-0272-409d-8ad4-76114792a8d2" Feb 19 14:55:57 crc kubenswrapper[4861]: I0219 14:55:57.639833 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3aeb9d2e-df2b-4d6b-abe7-e0c4b77c2948","Type":"ContainerStarted","Data":"101faf0da9c14fe0b0136451f31cf4fe42963b560e8cd7a597bf3e0a4a9e97fd"} Feb 19 14:55:57 crc kubenswrapper[4861]: I0219 14:55:57.639968 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 19 14:55:57 crc kubenswrapper[4861]: I0219 14:55:57.641437 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8f4b4718-0272-409d-8ad4-76114792a8d2","Type":"ContainerStarted","Data":"9493b96b97705f9611402a9d0a94a0117d5f63f25ae612f49998944b6b2c49c7"} Feb 19 14:55:57 crc kubenswrapper[4861]: I0219 14:55:57.659908 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.215202473 podStartE2EDuration="2.65988735s" podCreationTimestamp="2026-02-19 14:55:55 +0000 UTC" firstStartedPulling="2026-02-19 14:55:56.448156215 +0000 UTC m=+6371.109259443" 
lastFinishedPulling="2026-02-19 14:55:56.892841102 +0000 UTC m=+6371.553944320" observedRunningTime="2026-02-19 14:55:57.650852467 +0000 UTC m=+6372.311955685" watchObservedRunningTime="2026-02-19 14:55:57.65988735 +0000 UTC m=+6372.320990578" Feb 19 14:55:57 crc kubenswrapper[4861]: I0219 14:55:57.670093 4861 scope.go:117] "RemoveContainer" containerID="c39e3dac87ba6fcb1b70ca634289dd191ac6f2d291a3e187907bfb4fbe752960" Feb 19 14:55:57 crc kubenswrapper[4861]: E0219 14:55:57.670713 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c39e3dac87ba6fcb1b70ca634289dd191ac6f2d291a3e187907bfb4fbe752960\": container with ID starting with c39e3dac87ba6fcb1b70ca634289dd191ac6f2d291a3e187907bfb4fbe752960 not found: ID does not exist" containerID="c39e3dac87ba6fcb1b70ca634289dd191ac6f2d291a3e187907bfb4fbe752960" Feb 19 14:55:57 crc kubenswrapper[4861]: I0219 14:55:57.670753 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c39e3dac87ba6fcb1b70ca634289dd191ac6f2d291a3e187907bfb4fbe752960"} err="failed to get container status \"c39e3dac87ba6fcb1b70ca634289dd191ac6f2d291a3e187907bfb4fbe752960\": rpc error: code = NotFound desc = could not find container \"c39e3dac87ba6fcb1b70ca634289dd191ac6f2d291a3e187907bfb4fbe752960\": container with ID starting with c39e3dac87ba6fcb1b70ca634289dd191ac6f2d291a3e187907bfb4fbe752960 not found: ID does not exist" Feb 19 14:55:57 crc kubenswrapper[4861]: I0219 14:55:57.680849 4861 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="5a69b964-cce9-4112-86e5-3984e1706034" podUID="8f4b4718-0272-409d-8ad4-76114792a8d2" Feb 19 14:55:57 crc kubenswrapper[4861]: I0219 14:55:57.682113 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.682098517 podStartE2EDuration="3.682098517s" 
podCreationTimestamp="2026-02-19 14:55:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:55:57.670573967 +0000 UTC m=+6372.331677215" watchObservedRunningTime="2026-02-19 14:55:57.682098517 +0000 UTC m=+6372.343201745" Feb 19 14:55:57 crc kubenswrapper[4861]: I0219 14:55:57.700411 4861 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5a69b964-cce9-4112-86e5-3984e1706034-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 19 14:55:57 crc kubenswrapper[4861]: W0219 14:55:57.748629 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a2efc5c_10e0_4c51_aa19_fe0f235bd39e.slice/crio-6ce5ce0dde796e24030d6b9cdbff7837cc5ade484305c1c52be25beb272d02b6 WatchSource:0}: Error finding container 6ce5ce0dde796e24030d6b9cdbff7837cc5ade484305c1c52be25beb272d02b6: Status 404 returned error can't find the container with id 6ce5ce0dde796e24030d6b9cdbff7837cc5ade484305c1c52be25beb272d02b6 Feb 19 14:55:57 crc kubenswrapper[4861]: I0219 14:55:57.750382 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 14:55:57 crc kubenswrapper[4861]: I0219 14:55:57.990538 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a69b964-cce9-4112-86e5-3984e1706034" path="/var/lib/kubelet/pods/5a69b964-cce9-4112-86e5-3984e1706034/volumes" Feb 19 14:55:58 crc kubenswrapper[4861]: I0219 14:55:58.679504 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e","Type":"ContainerStarted","Data":"6ce5ce0dde796e24030d6b9cdbff7837cc5ade484305c1c52be25beb272d02b6"} Feb 19 14:55:58 crc kubenswrapper[4861]: I0219 14:55:58.765132 4861 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-t9xnc" podUID="338b7ee2-b889-4a37-bac9-2b2f7f2ae13e" containerName="registry-server" probeResult="failure" output=< Feb 19 14:55:58 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Feb 19 14:55:58 crc kubenswrapper[4861]: > Feb 19 14:56:03 crc kubenswrapper[4861]: I0219 14:56:03.054171 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-3a12-account-create-update-ttzsw"] Feb 19 14:56:03 crc kubenswrapper[4861]: I0219 14:56:03.079574 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-4647b"] Feb 19 14:56:03 crc kubenswrapper[4861]: I0219 14:56:03.114475 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-4647b"] Feb 19 14:56:03 crc kubenswrapper[4861]: I0219 14:56:03.135744 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-3a12-account-create-update-ttzsw"] Feb 19 14:56:03 crc kubenswrapper[4861]: I0219 14:56:03.748531 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"7d8a2ddc-a471-4e7e-8e9a-fc205b80a904","Type":"ContainerStarted","Data":"c818472a6370885cb0939684fe8fc29cd9e1e63fd1a2a926e536ff258c488c57"} Feb 19 14:56:03 crc kubenswrapper[4861]: I0219 14:56:03.992823 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e42b279-7a4e-4c32-a97a-092a2812d883" path="/var/lib/kubelet/pods/0e42b279-7a4e-4c32-a97a-092a2812d883/volumes" Feb 19 14:56:03 crc kubenswrapper[4861]: I0219 14:56:03.994700 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed2990bb-b686-400b-bddf-5d4bd0e0540d" path="/var/lib/kubelet/pods/ed2990bb-b686-400b-bddf-5d4bd0e0540d/volumes" Feb 19 14:56:04 crc kubenswrapper[4861]: I0219 14:56:04.766104 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e","Type":"ContainerStarted","Data":"1324d54c7c0a51dc8d5e26db3dde225b7e574bcd0854dc5dc2a22efdc823b5f7"} Feb 19 14:56:05 crc kubenswrapper[4861]: I0219 14:56:05.589665 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 19 14:56:07 crc kubenswrapper[4861]: I0219 14:56:07.781144 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t9xnc" Feb 19 14:56:07 crc kubenswrapper[4861]: I0219 14:56:07.834614 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t9xnc" Feb 19 14:56:08 crc kubenswrapper[4861]: I0219 14:56:08.034168 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t9xnc"] Feb 19 14:56:08 crc kubenswrapper[4861]: I0219 14:56:08.234966 4861 scope.go:117] "RemoveContainer" containerID="12d16936ae933e2248a1dc3f68ce378fc5410d009a1d5699d9af48641ecfc069" Feb 19 14:56:08 crc kubenswrapper[4861]: I0219 14:56:08.282764 4861 scope.go:117] "RemoveContainer" containerID="368b35b799fba08bbb76bba5e4694653585ceb0527f589f4d91887b0e2cbfdfe" Feb 19 14:56:08 crc kubenswrapper[4861]: I0219 14:56:08.809534 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t9xnc" podUID="338b7ee2-b889-4a37-bac9-2b2f7f2ae13e" containerName="registry-server" containerID="cri-o://fbabd1c056cc11bac2e169a0ea8e5fe6a1874906420560e2323317319649c291" gracePeriod=2 Feb 19 14:56:09 crc kubenswrapper[4861]: I0219 14:56:09.400199 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t9xnc" Feb 19 14:56:09 crc kubenswrapper[4861]: I0219 14:56:09.533456 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/338b7ee2-b889-4a37-bac9-2b2f7f2ae13e-utilities\") pod \"338b7ee2-b889-4a37-bac9-2b2f7f2ae13e\" (UID: \"338b7ee2-b889-4a37-bac9-2b2f7f2ae13e\") " Feb 19 14:56:09 crc kubenswrapper[4861]: I0219 14:56:09.533548 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdxnv\" (UniqueName: \"kubernetes.io/projected/338b7ee2-b889-4a37-bac9-2b2f7f2ae13e-kube-api-access-fdxnv\") pod \"338b7ee2-b889-4a37-bac9-2b2f7f2ae13e\" (UID: \"338b7ee2-b889-4a37-bac9-2b2f7f2ae13e\") " Feb 19 14:56:09 crc kubenswrapper[4861]: I0219 14:56:09.533755 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/338b7ee2-b889-4a37-bac9-2b2f7f2ae13e-catalog-content\") pod \"338b7ee2-b889-4a37-bac9-2b2f7f2ae13e\" (UID: \"338b7ee2-b889-4a37-bac9-2b2f7f2ae13e\") " Feb 19 14:56:09 crc kubenswrapper[4861]: I0219 14:56:09.534538 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/338b7ee2-b889-4a37-bac9-2b2f7f2ae13e-utilities" (OuterVolumeSpecName: "utilities") pod "338b7ee2-b889-4a37-bac9-2b2f7f2ae13e" (UID: "338b7ee2-b889-4a37-bac9-2b2f7f2ae13e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:56:09 crc kubenswrapper[4861]: I0219 14:56:09.543016 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/338b7ee2-b889-4a37-bac9-2b2f7f2ae13e-kube-api-access-fdxnv" (OuterVolumeSpecName: "kube-api-access-fdxnv") pod "338b7ee2-b889-4a37-bac9-2b2f7f2ae13e" (UID: "338b7ee2-b889-4a37-bac9-2b2f7f2ae13e"). InnerVolumeSpecName "kube-api-access-fdxnv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:56:09 crc kubenswrapper[4861]: I0219 14:56:09.636773 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/338b7ee2-b889-4a37-bac9-2b2f7f2ae13e-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 14:56:09 crc kubenswrapper[4861]: I0219 14:56:09.636821 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdxnv\" (UniqueName: \"kubernetes.io/projected/338b7ee2-b889-4a37-bac9-2b2f7f2ae13e-kube-api-access-fdxnv\") on node \"crc\" DevicePath \"\"" Feb 19 14:56:09 crc kubenswrapper[4861]: I0219 14:56:09.654685 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/338b7ee2-b889-4a37-bac9-2b2f7f2ae13e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "338b7ee2-b889-4a37-bac9-2b2f7f2ae13e" (UID: "338b7ee2-b889-4a37-bac9-2b2f7f2ae13e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:56:09 crc kubenswrapper[4861]: I0219 14:56:09.739590 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/338b7ee2-b889-4a37-bac9-2b2f7f2ae13e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 14:56:09 crc kubenswrapper[4861]: I0219 14:56:09.827074 4861 generic.go:334] "Generic (PLEG): container finished" podID="338b7ee2-b889-4a37-bac9-2b2f7f2ae13e" containerID="fbabd1c056cc11bac2e169a0ea8e5fe6a1874906420560e2323317319649c291" exitCode=0 Feb 19 14:56:09 crc kubenswrapper[4861]: I0219 14:56:09.827139 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9xnc" event={"ID":"338b7ee2-b889-4a37-bac9-2b2f7f2ae13e","Type":"ContainerDied","Data":"fbabd1c056cc11bac2e169a0ea8e5fe6a1874906420560e2323317319649c291"} Feb 19 14:56:09 crc kubenswrapper[4861]: I0219 14:56:09.827182 4861 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-t9xnc" event={"ID":"338b7ee2-b889-4a37-bac9-2b2f7f2ae13e","Type":"ContainerDied","Data":"0b6ced28939a55d50c8e54c62694bdadc8578798d06cd603ba386b9997c0da2f"} Feb 19 14:56:09 crc kubenswrapper[4861]: I0219 14:56:09.827214 4861 scope.go:117] "RemoveContainer" containerID="fbabd1c056cc11bac2e169a0ea8e5fe6a1874906420560e2323317319649c291" Feb 19 14:56:09 crc kubenswrapper[4861]: I0219 14:56:09.827397 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t9xnc" Feb 19 14:56:09 crc kubenswrapper[4861]: I0219 14:56:09.880404 4861 scope.go:117] "RemoveContainer" containerID="f8a8f5b6e9556d4cd2ec16faf4d45e5767bef9ac505df94019982b750c9ea968" Feb 19 14:56:09 crc kubenswrapper[4861]: I0219 14:56:09.886035 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t9xnc"] Feb 19 14:56:09 crc kubenswrapper[4861]: I0219 14:56:09.897893 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t9xnc"] Feb 19 14:56:09 crc kubenswrapper[4861]: I0219 14:56:09.904164 4861 scope.go:117] "RemoveContainer" containerID="0e960c13a0e0ee37daac143be99e03464255633d7594db4bdabfb6b3a95d0dfa" Feb 19 14:56:09 crc kubenswrapper[4861]: I0219 14:56:09.966186 4861 scope.go:117] "RemoveContainer" containerID="fbabd1c056cc11bac2e169a0ea8e5fe6a1874906420560e2323317319649c291" Feb 19 14:56:09 crc kubenswrapper[4861]: E0219 14:56:09.966772 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbabd1c056cc11bac2e169a0ea8e5fe6a1874906420560e2323317319649c291\": container with ID starting with fbabd1c056cc11bac2e169a0ea8e5fe6a1874906420560e2323317319649c291 not found: ID does not exist" containerID="fbabd1c056cc11bac2e169a0ea8e5fe6a1874906420560e2323317319649c291" Feb 19 14:56:09 crc kubenswrapper[4861]: I0219 14:56:09.966806 4861 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbabd1c056cc11bac2e169a0ea8e5fe6a1874906420560e2323317319649c291"} err="failed to get container status \"fbabd1c056cc11bac2e169a0ea8e5fe6a1874906420560e2323317319649c291\": rpc error: code = NotFound desc = could not find container \"fbabd1c056cc11bac2e169a0ea8e5fe6a1874906420560e2323317319649c291\": container with ID starting with fbabd1c056cc11bac2e169a0ea8e5fe6a1874906420560e2323317319649c291 not found: ID does not exist" Feb 19 14:56:09 crc kubenswrapper[4861]: I0219 14:56:09.966827 4861 scope.go:117] "RemoveContainer" containerID="f8a8f5b6e9556d4cd2ec16faf4d45e5767bef9ac505df94019982b750c9ea968" Feb 19 14:56:09 crc kubenswrapper[4861]: E0219 14:56:09.967094 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8a8f5b6e9556d4cd2ec16faf4d45e5767bef9ac505df94019982b750c9ea968\": container with ID starting with f8a8f5b6e9556d4cd2ec16faf4d45e5767bef9ac505df94019982b750c9ea968 not found: ID does not exist" containerID="f8a8f5b6e9556d4cd2ec16faf4d45e5767bef9ac505df94019982b750c9ea968" Feb 19 14:56:09 crc kubenswrapper[4861]: I0219 14:56:09.967119 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8a8f5b6e9556d4cd2ec16faf4d45e5767bef9ac505df94019982b750c9ea968"} err="failed to get container status \"f8a8f5b6e9556d4cd2ec16faf4d45e5767bef9ac505df94019982b750c9ea968\": rpc error: code = NotFound desc = could not find container \"f8a8f5b6e9556d4cd2ec16faf4d45e5767bef9ac505df94019982b750c9ea968\": container with ID starting with f8a8f5b6e9556d4cd2ec16faf4d45e5767bef9ac505df94019982b750c9ea968 not found: ID does not exist" Feb 19 14:56:09 crc kubenswrapper[4861]: I0219 14:56:09.967130 4861 scope.go:117] "RemoveContainer" containerID="0e960c13a0e0ee37daac143be99e03464255633d7594db4bdabfb6b3a95d0dfa" Feb 19 14:56:09 crc kubenswrapper[4861]: E0219 
14:56:09.967337 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e960c13a0e0ee37daac143be99e03464255633d7594db4bdabfb6b3a95d0dfa\": container with ID starting with 0e960c13a0e0ee37daac143be99e03464255633d7594db4bdabfb6b3a95d0dfa not found: ID does not exist" containerID="0e960c13a0e0ee37daac143be99e03464255633d7594db4bdabfb6b3a95d0dfa" Feb 19 14:56:09 crc kubenswrapper[4861]: I0219 14:56:09.967383 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e960c13a0e0ee37daac143be99e03464255633d7594db4bdabfb6b3a95d0dfa"} err="failed to get container status \"0e960c13a0e0ee37daac143be99e03464255633d7594db4bdabfb6b3a95d0dfa\": rpc error: code = NotFound desc = could not find container \"0e960c13a0e0ee37daac143be99e03464255633d7594db4bdabfb6b3a95d0dfa\": container with ID starting with 0e960c13a0e0ee37daac143be99e03464255633d7594db4bdabfb6b3a95d0dfa not found: ID does not exist" Feb 19 14:56:09 crc kubenswrapper[4861]: I0219 14:56:09.997372 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="338b7ee2-b889-4a37-bac9-2b2f7f2ae13e" path="/var/lib/kubelet/pods/338b7ee2-b889-4a37-bac9-2b2f7f2ae13e/volumes" Feb 19 14:56:10 crc kubenswrapper[4861]: I0219 14:56:10.846195 4861 generic.go:334] "Generic (PLEG): container finished" podID="7d8a2ddc-a471-4e7e-8e9a-fc205b80a904" containerID="c818472a6370885cb0939684fe8fc29cd9e1e63fd1a2a926e536ff258c488c57" exitCode=0 Feb 19 14:56:10 crc kubenswrapper[4861]: I0219 14:56:10.846339 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"7d8a2ddc-a471-4e7e-8e9a-fc205b80a904","Type":"ContainerDied","Data":"c818472a6370885cb0939684fe8fc29cd9e1e63fd1a2a926e536ff258c488c57"} Feb 19 14:56:11 crc kubenswrapper[4861]: I0219 14:56:11.864140 4861 generic.go:334] "Generic (PLEG): container finished" podID="7a2efc5c-10e0-4c51-aa19-fe0f235bd39e" 
containerID="1324d54c7c0a51dc8d5e26db3dde225b7e574bcd0854dc5dc2a22efdc823b5f7" exitCode=0 Feb 19 14:56:11 crc kubenswrapper[4861]: I0219 14:56:11.864252 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e","Type":"ContainerDied","Data":"1324d54c7c0a51dc8d5e26db3dde225b7e574bcd0854dc5dc2a22efdc823b5f7"} Feb 19 14:56:12 crc kubenswrapper[4861]: I0219 14:56:12.086504 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-dx6qp"] Feb 19 14:56:12 crc kubenswrapper[4861]: I0219 14:56:12.094222 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-dx6qp"] Feb 19 14:56:13 crc kubenswrapper[4861]: I0219 14:56:13.886212 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"7d8a2ddc-a471-4e7e-8e9a-fc205b80a904","Type":"ContainerStarted","Data":"075151d00c2ff878081f385a0fc0d2b49e4127954482d421875d2b7abfd8858f"} Feb 19 14:56:14 crc kubenswrapper[4861]: I0219 14:56:14.005265 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee47b3de-49e7-4496-9c6f-3ddcebf5e933" path="/var/lib/kubelet/pods/ee47b3de-49e7-4496-9c6f-3ddcebf5e933/volumes" Feb 19 14:56:17 crc kubenswrapper[4861]: I0219 14:56:17.933297 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"7d8a2ddc-a471-4e7e-8e9a-fc205b80a904","Type":"ContainerStarted","Data":"eefff5b47efad4d2ffe91b11526fbedc2dd06dd42556f1b7520b527448e35702"} Feb 19 14:56:17 crc kubenswrapper[4861]: I0219 14:56:17.934643 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Feb 19 14:56:17 crc kubenswrapper[4861]: I0219 14:56:17.940328 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Feb 19 14:56:17 crc kubenswrapper[4861]: 
I0219 14:56:17.957908 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=6.082600871 podStartE2EDuration="21.957890899s" podCreationTimestamp="2026-02-19 14:55:56 +0000 UTC" firstStartedPulling="2026-02-19 14:55:57.274386653 +0000 UTC m=+6371.935489881" lastFinishedPulling="2026-02-19 14:56:13.149676681 +0000 UTC m=+6387.810779909" observedRunningTime="2026-02-19 14:56:17.955794973 +0000 UTC m=+6392.616898271" watchObservedRunningTime="2026-02-19 14:56:17.957890899 +0000 UTC m=+6392.618994127" Feb 19 14:56:18 crc kubenswrapper[4861]: I0219 14:56:18.946757 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e","Type":"ContainerStarted","Data":"c6298dcb60ed2def8554b99160192ae250f9edf7fc72c47f0cac660c9a2c07c2"} Feb 19 14:56:22 crc kubenswrapper[4861]: I0219 14:56:22.990366 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e","Type":"ContainerStarted","Data":"dce52298e16a8807d39a58ec5c8843d542a5d5b8d629c87196e9d0738832bfd6"} Feb 19 14:56:27 crc kubenswrapper[4861]: I0219 14:56:27.041732 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e","Type":"ContainerStarted","Data":"7b659d218b8e6040fc07a75eaad702d4d0d9e988a22e05879d64e748eb562cf5"} Feb 19 14:56:27 crc kubenswrapper[4861]: I0219 14:56:27.071221 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.472122661 podStartE2EDuration="32.071191138s" podCreationTimestamp="2026-02-19 14:55:55 +0000 UTC" firstStartedPulling="2026-02-19 14:55:57.753273989 +0000 UTC m=+6372.414377217" lastFinishedPulling="2026-02-19 14:56:26.352342466 +0000 UTC m=+6401.013445694" 
observedRunningTime="2026-02-19 14:56:27.067305754 +0000 UTC m=+6401.728408992" watchObservedRunningTime="2026-02-19 14:56:27.071191138 +0000 UTC m=+6401.732294376" Feb 19 14:56:27 crc kubenswrapper[4861]: I0219 14:56:27.276079 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:27 crc kubenswrapper[4861]: I0219 14:56:27.276187 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:27 crc kubenswrapper[4861]: I0219 14:56:27.279969 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:28 crc kubenswrapper[4861]: I0219 14:56:28.054562 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:30 crc kubenswrapper[4861]: I0219 14:56:30.388352 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 19 14:56:30 crc kubenswrapper[4861]: I0219 14:56:30.388887 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="8f4b4718-0272-409d-8ad4-76114792a8d2" containerName="openstackclient" containerID="cri-o://9493b96b97705f9611402a9d0a94a0117d5f63f25ae612f49998944b6b2c49c7" gracePeriod=2 Feb 19 14:56:30 crc kubenswrapper[4861]: I0219 14:56:30.399146 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 19 14:56:30 crc kubenswrapper[4861]: I0219 14:56:30.423627 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 19 14:56:30 crc kubenswrapper[4861]: E0219 14:56:30.424232 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="338b7ee2-b889-4a37-bac9-2b2f7f2ae13e" containerName="extract-utilities" Feb 19 14:56:30 crc kubenswrapper[4861]: I0219 14:56:30.424249 4861 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="338b7ee2-b889-4a37-bac9-2b2f7f2ae13e" containerName="extract-utilities" Feb 19 14:56:30 crc kubenswrapper[4861]: E0219 14:56:30.424293 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="338b7ee2-b889-4a37-bac9-2b2f7f2ae13e" containerName="extract-content" Feb 19 14:56:30 crc kubenswrapper[4861]: I0219 14:56:30.424301 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="338b7ee2-b889-4a37-bac9-2b2f7f2ae13e" containerName="extract-content" Feb 19 14:56:30 crc kubenswrapper[4861]: E0219 14:56:30.424322 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f4b4718-0272-409d-8ad4-76114792a8d2" containerName="openstackclient" Feb 19 14:56:30 crc kubenswrapper[4861]: I0219 14:56:30.424331 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f4b4718-0272-409d-8ad4-76114792a8d2" containerName="openstackclient" Feb 19 14:56:30 crc kubenswrapper[4861]: E0219 14:56:30.424351 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="338b7ee2-b889-4a37-bac9-2b2f7f2ae13e" containerName="registry-server" Feb 19 14:56:30 crc kubenswrapper[4861]: I0219 14:56:30.424359 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="338b7ee2-b889-4a37-bac9-2b2f7f2ae13e" containerName="registry-server" Feb 19 14:56:30 crc kubenswrapper[4861]: I0219 14:56:30.424621 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f4b4718-0272-409d-8ad4-76114792a8d2" containerName="openstackclient" Feb 19 14:56:30 crc kubenswrapper[4861]: I0219 14:56:30.424647 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="338b7ee2-b889-4a37-bac9-2b2f7f2ae13e" containerName="registry-server" Feb 19 14:56:30 crc kubenswrapper[4861]: I0219 14:56:30.425670 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 14:56:30 crc kubenswrapper[4861]: I0219 14:56:30.434494 4861 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8f4b4718-0272-409d-8ad4-76114792a8d2" podUID="07ea9cf9-b070-4dec-9a77-fd51656875d4" Feb 19 14:56:30 crc kubenswrapper[4861]: I0219 14:56:30.435739 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 14:56:30 crc kubenswrapper[4861]: I0219 14:56:30.579344 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/07ea9cf9-b070-4dec-9a77-fd51656875d4-openstack-config-secret\") pod \"openstackclient\" (UID: \"07ea9cf9-b070-4dec-9a77-fd51656875d4\") " pod="openstack/openstackclient" Feb 19 14:56:30 crc kubenswrapper[4861]: I0219 14:56:30.579460 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07ea9cf9-b070-4dec-9a77-fd51656875d4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"07ea9cf9-b070-4dec-9a77-fd51656875d4\") " pod="openstack/openstackclient" Feb 19 14:56:30 crc kubenswrapper[4861]: I0219 14:56:30.579537 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/07ea9cf9-b070-4dec-9a77-fd51656875d4-openstack-config\") pod \"openstackclient\" (UID: \"07ea9cf9-b070-4dec-9a77-fd51656875d4\") " pod="openstack/openstackclient" Feb 19 14:56:30 crc kubenswrapper[4861]: I0219 14:56:30.579607 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tnzs\" (UniqueName: \"kubernetes.io/projected/07ea9cf9-b070-4dec-9a77-fd51656875d4-kube-api-access-7tnzs\") pod \"openstackclient\" (UID: 
\"07ea9cf9-b070-4dec-9a77-fd51656875d4\") " pod="openstack/openstackclient" Feb 19 14:56:30 crc kubenswrapper[4861]: I0219 14:56:30.681389 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07ea9cf9-b070-4dec-9a77-fd51656875d4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"07ea9cf9-b070-4dec-9a77-fd51656875d4\") " pod="openstack/openstackclient" Feb 19 14:56:30 crc kubenswrapper[4861]: I0219 14:56:30.681493 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/07ea9cf9-b070-4dec-9a77-fd51656875d4-openstack-config\") pod \"openstackclient\" (UID: \"07ea9cf9-b070-4dec-9a77-fd51656875d4\") " pod="openstack/openstackclient" Feb 19 14:56:30 crc kubenswrapper[4861]: I0219 14:56:30.681512 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tnzs\" (UniqueName: \"kubernetes.io/projected/07ea9cf9-b070-4dec-9a77-fd51656875d4-kube-api-access-7tnzs\") pod \"openstackclient\" (UID: \"07ea9cf9-b070-4dec-9a77-fd51656875d4\") " pod="openstack/openstackclient" Feb 19 14:56:30 crc kubenswrapper[4861]: I0219 14:56:30.681606 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/07ea9cf9-b070-4dec-9a77-fd51656875d4-openstack-config-secret\") pod \"openstackclient\" (UID: \"07ea9cf9-b070-4dec-9a77-fd51656875d4\") " pod="openstack/openstackclient" Feb 19 14:56:30 crc kubenswrapper[4861]: I0219 14:56:30.683259 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/07ea9cf9-b070-4dec-9a77-fd51656875d4-openstack-config\") pod \"openstackclient\" (UID: \"07ea9cf9-b070-4dec-9a77-fd51656875d4\") " pod="openstack/openstackclient" Feb 19 14:56:30 crc kubenswrapper[4861]: I0219 14:56:30.690077 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/07ea9cf9-b070-4dec-9a77-fd51656875d4-openstack-config-secret\") pod \"openstackclient\" (UID: \"07ea9cf9-b070-4dec-9a77-fd51656875d4\") " pod="openstack/openstackclient" Feb 19 14:56:30 crc kubenswrapper[4861]: I0219 14:56:30.702117 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07ea9cf9-b070-4dec-9a77-fd51656875d4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"07ea9cf9-b070-4dec-9a77-fd51656875d4\") " pod="openstack/openstackclient" Feb 19 14:56:30 crc kubenswrapper[4861]: I0219 14:56:30.724029 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tnzs\" (UniqueName: \"kubernetes.io/projected/07ea9cf9-b070-4dec-9a77-fd51656875d4-kube-api-access-7tnzs\") pod \"openstackclient\" (UID: \"07ea9cf9-b070-4dec-9a77-fd51656875d4\") " pod="openstack/openstackclient" Feb 19 14:56:30 crc kubenswrapper[4861]: I0219 14:56:30.755934 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 14:56:31 crc kubenswrapper[4861]: I0219 14:56:31.370864 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 14:56:31 crc kubenswrapper[4861]: W0219 14:56:31.373398 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07ea9cf9_b070_4dec_9a77_fd51656875d4.slice/crio-feb81f4d792a8ae99f9be8a1ab7ebfd17a6a60ad628311230f6975fb6cfb89d0 WatchSource:0}: Error finding container feb81f4d792a8ae99f9be8a1ab7ebfd17a6a60ad628311230f6975fb6cfb89d0: Status 404 returned error can't find the container with id feb81f4d792a8ae99f9be8a1ab7ebfd17a6a60ad628311230f6975fb6cfb89d0 Feb 19 14:56:32 crc kubenswrapper[4861]: I0219 14:56:32.068956 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 14:56:32 crc kubenswrapper[4861]: I0219 14:56:32.069579 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="7a2efc5c-10e0-4c51-aa19-fe0f235bd39e" containerName="prometheus" containerID="cri-o://c6298dcb60ed2def8554b99160192ae250f9edf7fc72c47f0cac660c9a2c07c2" gracePeriod=600 Feb 19 14:56:32 crc kubenswrapper[4861]: I0219 14:56:32.069708 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="7a2efc5c-10e0-4c51-aa19-fe0f235bd39e" containerName="thanos-sidecar" containerID="cri-o://7b659d218b8e6040fc07a75eaad702d4d0d9e988a22e05879d64e748eb562cf5" gracePeriod=600 Feb 19 14:56:32 crc kubenswrapper[4861]: I0219 14:56:32.069761 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="7a2efc5c-10e0-4c51-aa19-fe0f235bd39e" containerName="config-reloader" containerID="cri-o://dce52298e16a8807d39a58ec5c8843d542a5d5b8d629c87196e9d0738832bfd6" gracePeriod=600 Feb 
19 14:56:32 crc kubenswrapper[4861]: I0219 14:56:32.110645 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"07ea9cf9-b070-4dec-9a77-fd51656875d4","Type":"ContainerStarted","Data":"feb81f4d792a8ae99f9be8a1ab7ebfd17a6a60ad628311230f6975fb6cfb89d0"} Feb 19 14:56:32 crc kubenswrapper[4861]: I0219 14:56:32.277473 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="7a2efc5c-10e0-4c51-aa19-fe0f235bd39e" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.1.152:9090/-/ready\": dial tcp 10.217.1.152:9090: connect: connection refused" Feb 19 14:56:33 crc kubenswrapper[4861]: I0219 14:56:33.120341 4861 generic.go:334] "Generic (PLEG): container finished" podID="8f4b4718-0272-409d-8ad4-76114792a8d2" containerID="9493b96b97705f9611402a9d0a94a0117d5f63f25ae612f49998944b6b2c49c7" exitCode=137 Feb 19 14:56:33 crc kubenswrapper[4861]: I0219 14:56:33.124196 4861 generic.go:334] "Generic (PLEG): container finished" podID="7a2efc5c-10e0-4c51-aa19-fe0f235bd39e" containerID="7b659d218b8e6040fc07a75eaad702d4d0d9e988a22e05879d64e748eb562cf5" exitCode=0 Feb 19 14:56:33 crc kubenswrapper[4861]: I0219 14:56:33.124220 4861 generic.go:334] "Generic (PLEG): container finished" podID="7a2efc5c-10e0-4c51-aa19-fe0f235bd39e" containerID="dce52298e16a8807d39a58ec5c8843d542a5d5b8d629c87196e9d0738832bfd6" exitCode=0 Feb 19 14:56:33 crc kubenswrapper[4861]: I0219 14:56:33.124230 4861 generic.go:334] "Generic (PLEG): container finished" podID="7a2efc5c-10e0-4c51-aa19-fe0f235bd39e" containerID="c6298dcb60ed2def8554b99160192ae250f9edf7fc72c47f0cac660c9a2c07c2" exitCode=0 Feb 19 14:56:33 crc kubenswrapper[4861]: I0219 14:56:33.124239 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e","Type":"ContainerDied","Data":"7b659d218b8e6040fc07a75eaad702d4d0d9e988a22e05879d64e748eb562cf5"} Feb 19 14:56:33 crc kubenswrapper[4861]: I0219 14:56:33.124331 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e","Type":"ContainerDied","Data":"dce52298e16a8807d39a58ec5c8843d542a5d5b8d629c87196e9d0738832bfd6"} Feb 19 14:56:33 crc kubenswrapper[4861]: I0219 14:56:33.124347 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e","Type":"ContainerDied","Data":"c6298dcb60ed2def8554b99160192ae250f9edf7fc72c47f0cac660c9a2c07c2"} Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.150859 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"07ea9cf9-b070-4dec-9a77-fd51656875d4","Type":"ContainerStarted","Data":"dd976a5b3f96f092f6c8a3b910b0f65412bb091e633f1260b6436bb6372c3c2e"} Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.171192 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=4.171173007 podStartE2EDuration="4.171173007s" podCreationTimestamp="2026-02-19 14:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:56:34.169458461 +0000 UTC m=+6408.830561689" watchObservedRunningTime="2026-02-19 14:56:34.171173007 +0000 UTC m=+6408.832276235" Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.320992 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.371664 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq47t\" (UniqueName: \"kubernetes.io/projected/8f4b4718-0272-409d-8ad4-76114792a8d2-kube-api-access-xq47t\") pod \"8f4b4718-0272-409d-8ad4-76114792a8d2\" (UID: \"8f4b4718-0272-409d-8ad4-76114792a8d2\") " Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.371746 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8f4b4718-0272-409d-8ad4-76114792a8d2-openstack-config-secret\") pod \"8f4b4718-0272-409d-8ad4-76114792a8d2\" (UID: \"8f4b4718-0272-409d-8ad4-76114792a8d2\") " Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.371888 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f4b4718-0272-409d-8ad4-76114792a8d2-combined-ca-bundle\") pod \"8f4b4718-0272-409d-8ad4-76114792a8d2\" (UID: \"8f4b4718-0272-409d-8ad4-76114792a8d2\") " Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.372009 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8f4b4718-0272-409d-8ad4-76114792a8d2-openstack-config\") pod \"8f4b4718-0272-409d-8ad4-76114792a8d2\" (UID: \"8f4b4718-0272-409d-8ad4-76114792a8d2\") " Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.378032 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f4b4718-0272-409d-8ad4-76114792a8d2-kube-api-access-xq47t" (OuterVolumeSpecName: "kube-api-access-xq47t") pod "8f4b4718-0272-409d-8ad4-76114792a8d2" (UID: "8f4b4718-0272-409d-8ad4-76114792a8d2"). InnerVolumeSpecName "kube-api-access-xq47t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.403011 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f4b4718-0272-409d-8ad4-76114792a8d2-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "8f4b4718-0272-409d-8ad4-76114792a8d2" (UID: "8f4b4718-0272-409d-8ad4-76114792a8d2"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.413795 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f4b4718-0272-409d-8ad4-76114792a8d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f4b4718-0272-409d-8ad4-76114792a8d2" (UID: "8f4b4718-0272-409d-8ad4-76114792a8d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.443256 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f4b4718-0272-409d-8ad4-76114792a8d2-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "8f4b4718-0272-409d-8ad4-76114792a8d2" (UID: "8f4b4718-0272-409d-8ad4-76114792a8d2"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.475009 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f4b4718-0272-409d-8ad4-76114792a8d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.475054 4861 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8f4b4718-0272-409d-8ad4-76114792a8d2-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.475068 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq47t\" (UniqueName: \"kubernetes.io/projected/8f4b4718-0272-409d-8ad4-76114792a8d2-kube-api-access-xq47t\") on node \"crc\" DevicePath \"\"" Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.475080 4861 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8f4b4718-0272-409d-8ad4-76114792a8d2-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.524032 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.602987 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-tls-assets\") pod \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.603108 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-prometheus-metric-storage-rulefiles-0\") pod \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.603243 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f262dad5-5c68-40df-b6e9-2b0168bed699\") pod \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.603266 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-web-config\") pod \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.603615 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvc65\" (UniqueName: \"kubernetes.io/projected/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-kube-api-access-gvc65\") pod \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.603648 4861 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-prometheus-metric-storage-rulefiles-2\") pod \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.603708 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "7a2efc5c-10e0-4c51-aa19-fe0f235bd39e" (UID: "7a2efc5c-10e0-4c51-aa19-fe0f235bd39e"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.603807 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-config\") pod \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.603835 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-config-out\") pod \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.604141 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-thanos-prometheus-http-client-file\") pod \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.604201 4861 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-prometheus-metric-storage-rulefiles-1\") pod \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\" (UID: \"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e\") " Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.604013 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "7a2efc5c-10e0-4c51-aa19-fe0f235bd39e" (UID: "7a2efc5c-10e0-4c51-aa19-fe0f235bd39e"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.604714 4861 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.604729 4861 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.606469 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "7a2efc5c-10e0-4c51-aa19-fe0f235bd39e" (UID: "7a2efc5c-10e0-4c51-aa19-fe0f235bd39e"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.612722 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "7a2efc5c-10e0-4c51-aa19-fe0f235bd39e" (UID: "7a2efc5c-10e0-4c51-aa19-fe0f235bd39e"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.626281 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-config" (OuterVolumeSpecName: "config") pod "7a2efc5c-10e0-4c51-aa19-fe0f235bd39e" (UID: "7a2efc5c-10e0-4c51-aa19-fe0f235bd39e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.626282 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-config-out" (OuterVolumeSpecName: "config-out") pod "7a2efc5c-10e0-4c51-aa19-fe0f235bd39e" (UID: "7a2efc5c-10e0-4c51-aa19-fe0f235bd39e"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.626337 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "7a2efc5c-10e0-4c51-aa19-fe0f235bd39e" (UID: "7a2efc5c-10e0-4c51-aa19-fe0f235bd39e"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.635845 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-kube-api-access-gvc65" (OuterVolumeSpecName: "kube-api-access-gvc65") pod "7a2efc5c-10e0-4c51-aa19-fe0f235bd39e" (UID: "7a2efc5c-10e0-4c51-aa19-fe0f235bd39e"). InnerVolumeSpecName "kube-api-access-gvc65". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.640562 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-web-config" (OuterVolumeSpecName: "web-config") pod "7a2efc5c-10e0-4c51-aa19-fe0f235bd39e" (UID: "7a2efc5c-10e0-4c51-aa19-fe0f235bd39e"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.662872 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f262dad5-5c68-40df-b6e9-2b0168bed699" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "7a2efc5c-10e0-4c51-aa19-fe0f235bd39e" (UID: "7a2efc5c-10e0-4c51-aa19-fe0f235bd39e"). InnerVolumeSpecName "pvc-f262dad5-5c68-40df-b6e9-2b0168bed699". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.706524 4861 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.706559 4861 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.706572 4861 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.706608 4861 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f262dad5-5c68-40df-b6e9-2b0168bed699\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f262dad5-5c68-40df-b6e9-2b0168bed699\") on node \"crc\" " Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.706620 4861 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-web-config\") on node \"crc\" DevicePath \"\"" Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.706628 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvc65\" (UniqueName: \"kubernetes.io/projected/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-kube-api-access-gvc65\") on node \"crc\" DevicePath \"\"" Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.706637 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-config\") on node \"crc\" DevicePath \"\"" Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.706645 4861 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e-config-out\") on node \"crc\" DevicePath \"\"" Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.736520 4861 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.736703 4861 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f262dad5-5c68-40df-b6e9-2b0168bed699" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f262dad5-5c68-40df-b6e9-2b0168bed699") on node "crc" Feb 19 14:56:34 crc kubenswrapper[4861]: I0219 14:56:34.808679 4861 reconciler_common.go:293] "Volume detached for volume \"pvc-f262dad5-5c68-40df-b6e9-2b0168bed699\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f262dad5-5c68-40df-b6e9-2b0168bed699\") on node \"crc\" DevicePath \"\"" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.164258 4861 scope.go:117] "RemoveContainer" containerID="9493b96b97705f9611402a9d0a94a0117d5f63f25ae612f49998944b6b2c49c7" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.164332 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.173484 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.173509 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7a2efc5c-10e0-4c51-aa19-fe0f235bd39e","Type":"ContainerDied","Data":"6ce5ce0dde796e24030d6b9cdbff7837cc5ade484305c1c52be25beb272d02b6"} Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.196494 4861 scope.go:117] "RemoveContainer" containerID="7b659d218b8e6040fc07a75eaad702d4d0d9e988a22e05879d64e748eb562cf5" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.267207 4861 scope.go:117] "RemoveContainer" containerID="dce52298e16a8807d39a58ec5c8843d542a5d5b8d629c87196e9d0738832bfd6" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.287068 4861 scope.go:117] "RemoveContainer" containerID="c6298dcb60ed2def8554b99160192ae250f9edf7fc72c47f0cac660c9a2c07c2" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.309402 4861 scope.go:117] "RemoveContainer" containerID="1324d54c7c0a51dc8d5e26db3dde225b7e574bcd0854dc5dc2a22efdc823b5f7" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.318455 4861 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8f4b4718-0272-409d-8ad4-76114792a8d2" podUID="07ea9cf9-b070-4dec-9a77-fd51656875d4" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.358266 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.372414 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.393514 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 14:56:35 crc kubenswrapper[4861]: E0219 14:56:35.394038 4861 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7a2efc5c-10e0-4c51-aa19-fe0f235bd39e" containerName="prometheus" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.394072 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a2efc5c-10e0-4c51-aa19-fe0f235bd39e" containerName="prometheus" Feb 19 14:56:35 crc kubenswrapper[4861]: E0219 14:56:35.394107 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a2efc5c-10e0-4c51-aa19-fe0f235bd39e" containerName="init-config-reloader" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.394115 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a2efc5c-10e0-4c51-aa19-fe0f235bd39e" containerName="init-config-reloader" Feb 19 14:56:35 crc kubenswrapper[4861]: E0219 14:56:35.394139 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a2efc5c-10e0-4c51-aa19-fe0f235bd39e" containerName="config-reloader" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.394146 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a2efc5c-10e0-4c51-aa19-fe0f235bd39e" containerName="config-reloader" Feb 19 14:56:35 crc kubenswrapper[4861]: E0219 14:56:35.394161 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a2efc5c-10e0-4c51-aa19-fe0f235bd39e" containerName="thanos-sidecar" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.394170 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a2efc5c-10e0-4c51-aa19-fe0f235bd39e" containerName="thanos-sidecar" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.394383 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a2efc5c-10e0-4c51-aa19-fe0f235bd39e" containerName="prometheus" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.394409 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a2efc5c-10e0-4c51-aa19-fe0f235bd39e" containerName="thanos-sidecar" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.394436 4861 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7a2efc5c-10e0-4c51-aa19-fe0f235bd39e" containerName="config-reloader" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.396665 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.402859 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.403009 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.403056 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.403264 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-69zlt" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.403358 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.403478 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.403573 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.404748 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.410766 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.418312 4861 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.554546 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.554622 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.554672 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.554764 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f262dad5-5c68-40df-b6e9-2b0168bed699\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f262dad5-5c68-40df-b6e9-2b0168bed699\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 
14:56:35.554795 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd7xv\" (UniqueName: \"kubernetes.io/projected/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-kube-api-access-nd7xv\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.554822 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.554853 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.554901 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.554927 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-prometheus-metric-storage-rulefiles-1\") pod 
\"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.554989 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.555033 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.555063 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-config\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.555097 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.657686 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" 
(UniqueName: \"kubernetes.io/secret/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.657751 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.657800 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.657878 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f262dad5-5c68-40df-b6e9-2b0168bed699\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f262dad5-5c68-40df-b6e9-2b0168bed699\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.657903 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd7xv\" (UniqueName: \"kubernetes.io/projected/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-kube-api-access-nd7xv\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: 
I0219 14:56:35.657925 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.657947 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.657984 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.658002 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.658058 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: 
\"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.658095 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.658115 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-config\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.658144 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.658901 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.659602 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" 
(UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.660113 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.663361 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.663932 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.664407 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.664794 4861 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.664829 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f262dad5-5c68-40df-b6e9-2b0168bed699\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f262dad5-5c68-40df-b6e9-2b0168bed699\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bfbf29fceab89dd2d8e1b1bec20301b1dfaad15f977da63573a2c4f2b05845b3/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.667749 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.669898 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-config\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.670053 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.670476 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.677484 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd7xv\" (UniqueName: \"kubernetes.io/projected/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-kube-api-access-nd7xv\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.678151 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3a3757ed-9e0c-4d7a-8701-23da7b477f0f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.727518 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f262dad5-5c68-40df-b6e9-2b0168bed699\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f262dad5-5c68-40df-b6e9-2b0168bed699\") pod \"prometheus-metric-storage-0\" (UID: \"3a3757ed-9e0c-4d7a-8701-23da7b477f0f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.986791 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a2efc5c-10e0-4c51-aa19-fe0f235bd39e" path="/var/lib/kubelet/pods/7a2efc5c-10e0-4c51-aa19-fe0f235bd39e/volumes" Feb 19 14:56:35 crc kubenswrapper[4861]: I0219 14:56:35.987547 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f4b4718-0272-409d-8ad4-76114792a8d2" path="/var/lib/kubelet/pods/8f4b4718-0272-409d-8ad4-76114792a8d2/volumes" Feb 19 
14:56:36 crc kubenswrapper[4861]: I0219 14:56:36.015203 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:36 crc kubenswrapper[4861]: I0219 14:56:36.077746 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 14:56:36 crc kubenswrapper[4861]: I0219 14:56:36.094478 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 14:56:36 crc kubenswrapper[4861]: I0219 14:56:36.094648 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 14:56:36 crc kubenswrapper[4861]: I0219 14:56:36.097861 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 14:56:36 crc kubenswrapper[4861]: I0219 14:56:36.098048 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 14:56:36 crc kubenswrapper[4861]: I0219 14:56:36.271744 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/091c1801-446a-4c1e-ae29-93558e3d4a03-run-httpd\") pod \"ceilometer-0\" (UID: \"091c1801-446a-4c1e-ae29-93558e3d4a03\") " pod="openstack/ceilometer-0" Feb 19 14:56:36 crc kubenswrapper[4861]: I0219 14:56:36.272026 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9k2h\" (UniqueName: \"kubernetes.io/projected/091c1801-446a-4c1e-ae29-93558e3d4a03-kube-api-access-v9k2h\") pod \"ceilometer-0\" (UID: \"091c1801-446a-4c1e-ae29-93558e3d4a03\") " pod="openstack/ceilometer-0" Feb 19 14:56:36 crc kubenswrapper[4861]: I0219 14:56:36.272067 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/091c1801-446a-4c1e-ae29-93558e3d4a03-scripts\") pod 
\"ceilometer-0\" (UID: \"091c1801-446a-4c1e-ae29-93558e3d4a03\") " pod="openstack/ceilometer-0" Feb 19 14:56:36 crc kubenswrapper[4861]: I0219 14:56:36.272092 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/091c1801-446a-4c1e-ae29-93558e3d4a03-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"091c1801-446a-4c1e-ae29-93558e3d4a03\") " pod="openstack/ceilometer-0" Feb 19 14:56:36 crc kubenswrapper[4861]: I0219 14:56:36.272128 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/091c1801-446a-4c1e-ae29-93558e3d4a03-log-httpd\") pod \"ceilometer-0\" (UID: \"091c1801-446a-4c1e-ae29-93558e3d4a03\") " pod="openstack/ceilometer-0" Feb 19 14:56:36 crc kubenswrapper[4861]: I0219 14:56:36.272310 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/091c1801-446a-4c1e-ae29-93558e3d4a03-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"091c1801-446a-4c1e-ae29-93558e3d4a03\") " pod="openstack/ceilometer-0" Feb 19 14:56:36 crc kubenswrapper[4861]: I0219 14:56:36.272585 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/091c1801-446a-4c1e-ae29-93558e3d4a03-config-data\") pod \"ceilometer-0\" (UID: \"091c1801-446a-4c1e-ae29-93558e3d4a03\") " pod="openstack/ceilometer-0" Feb 19 14:56:36 crc kubenswrapper[4861]: I0219 14:56:36.374627 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/091c1801-446a-4c1e-ae29-93558e3d4a03-run-httpd\") pod \"ceilometer-0\" (UID: \"091c1801-446a-4c1e-ae29-93558e3d4a03\") " pod="openstack/ceilometer-0" Feb 19 14:56:36 crc kubenswrapper[4861]: I0219 14:56:36.374847 
4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9k2h\" (UniqueName: \"kubernetes.io/projected/091c1801-446a-4c1e-ae29-93558e3d4a03-kube-api-access-v9k2h\") pod \"ceilometer-0\" (UID: \"091c1801-446a-4c1e-ae29-93558e3d4a03\") " pod="openstack/ceilometer-0" Feb 19 14:56:36 crc kubenswrapper[4861]: I0219 14:56:36.374951 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/091c1801-446a-4c1e-ae29-93558e3d4a03-scripts\") pod \"ceilometer-0\" (UID: \"091c1801-446a-4c1e-ae29-93558e3d4a03\") " pod="openstack/ceilometer-0" Feb 19 14:56:36 crc kubenswrapper[4861]: I0219 14:56:36.375023 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/091c1801-446a-4c1e-ae29-93558e3d4a03-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"091c1801-446a-4c1e-ae29-93558e3d4a03\") " pod="openstack/ceilometer-0" Feb 19 14:56:36 crc kubenswrapper[4861]: I0219 14:56:36.375109 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/091c1801-446a-4c1e-ae29-93558e3d4a03-log-httpd\") pod \"ceilometer-0\" (UID: \"091c1801-446a-4c1e-ae29-93558e3d4a03\") " pod="openstack/ceilometer-0" Feb 19 14:56:36 crc kubenswrapper[4861]: I0219 14:56:36.375172 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/091c1801-446a-4c1e-ae29-93558e3d4a03-run-httpd\") pod \"ceilometer-0\" (UID: \"091c1801-446a-4c1e-ae29-93558e3d4a03\") " pod="openstack/ceilometer-0" Feb 19 14:56:36 crc kubenswrapper[4861]: I0219 14:56:36.375298 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/091c1801-446a-4c1e-ae29-93558e3d4a03-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"091c1801-446a-4c1e-ae29-93558e3d4a03\") " pod="openstack/ceilometer-0" Feb 19 14:56:36 crc kubenswrapper[4861]: I0219 14:56:36.375443 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/091c1801-446a-4c1e-ae29-93558e3d4a03-config-data\") pod \"ceilometer-0\" (UID: \"091c1801-446a-4c1e-ae29-93558e3d4a03\") " pod="openstack/ceilometer-0" Feb 19 14:56:36 crc kubenswrapper[4861]: I0219 14:56:36.377490 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/091c1801-446a-4c1e-ae29-93558e3d4a03-log-httpd\") pod \"ceilometer-0\" (UID: \"091c1801-446a-4c1e-ae29-93558e3d4a03\") " pod="openstack/ceilometer-0" Feb 19 14:56:36 crc kubenswrapper[4861]: I0219 14:56:36.380119 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/091c1801-446a-4c1e-ae29-93558e3d4a03-config-data\") pod \"ceilometer-0\" (UID: \"091c1801-446a-4c1e-ae29-93558e3d4a03\") " pod="openstack/ceilometer-0" Feb 19 14:56:36 crc kubenswrapper[4861]: I0219 14:56:36.394237 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/091c1801-446a-4c1e-ae29-93558e3d4a03-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"091c1801-446a-4c1e-ae29-93558e3d4a03\") " pod="openstack/ceilometer-0" Feb 19 14:56:36 crc kubenswrapper[4861]: I0219 14:56:36.394606 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/091c1801-446a-4c1e-ae29-93558e3d4a03-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"091c1801-446a-4c1e-ae29-93558e3d4a03\") " pod="openstack/ceilometer-0" Feb 19 14:56:36 crc kubenswrapper[4861]: I0219 14:56:36.396303 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/091c1801-446a-4c1e-ae29-93558e3d4a03-scripts\") pod \"ceilometer-0\" (UID: \"091c1801-446a-4c1e-ae29-93558e3d4a03\") " pod="openstack/ceilometer-0" Feb 19 14:56:36 crc kubenswrapper[4861]: I0219 14:56:36.396993 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9k2h\" (UniqueName: \"kubernetes.io/projected/091c1801-446a-4c1e-ae29-93558e3d4a03-kube-api-access-v9k2h\") pod \"ceilometer-0\" (UID: \"091c1801-446a-4c1e-ae29-93558e3d4a03\") " pod="openstack/ceilometer-0" Feb 19 14:56:36 crc kubenswrapper[4861]: I0219 14:56:36.439862 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 14:56:36 crc kubenswrapper[4861]: I0219 14:56:36.591636 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 14:56:36 crc kubenswrapper[4861]: I0219 14:56:36.790658 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 14:56:37 crc kubenswrapper[4861]: I0219 14:56:37.196220 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3a3757ed-9e0c-4d7a-8701-23da7b477f0f","Type":"ContainerStarted","Data":"12ba0f8bc5de030a1032ad207fdba3ae5276f1c3d80221c0094ffb1d40459f6e"} Feb 19 14:56:37 crc kubenswrapper[4861]: I0219 14:56:37.199942 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"091c1801-446a-4c1e-ae29-93558e3d4a03","Type":"ContainerStarted","Data":"6cce6f3acff91e2d1483e74e242f262a92d767bfcccdc30c072f7f499a5a594a"} Feb 19 14:56:38 crc kubenswrapper[4861]: I0219 14:56:38.221924 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"091c1801-446a-4c1e-ae29-93558e3d4a03","Type":"ContainerStarted","Data":"5437fa96f46a999c8bb6391668355b908b887b068e7901c274744b10dc968386"} Feb 19 14:56:39 crc kubenswrapper[4861]: I0219 14:56:39.053495 4861 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d2ba-account-create-update-xsv6m"] Feb 19 14:56:39 crc kubenswrapper[4861]: I0219 14:56:39.066361 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-q8b9z"] Feb 19 14:56:39 crc kubenswrapper[4861]: I0219 14:56:39.078331 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-q8b9z"] Feb 19 14:56:39 crc kubenswrapper[4861]: I0219 14:56:39.091999 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-d2ba-account-create-update-xsv6m"] Feb 19 14:56:39 crc kubenswrapper[4861]: I0219 14:56:39.242335 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"091c1801-446a-4c1e-ae29-93558e3d4a03","Type":"ContainerStarted","Data":"7bece126686e87e9f0aebb7f55e7809a327de392cf8a7f29d06660bb949e2578"} Feb 19 14:56:39 crc kubenswrapper[4861]: I0219 14:56:39.991526 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1abc723a-e0b6-4eb5-b944-cc167338e911" path="/var/lib/kubelet/pods/1abc723a-e0b6-4eb5-b944-cc167338e911/volumes" Feb 19 14:56:39 crc kubenswrapper[4861]: I0219 14:56:39.994929 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66617a12-1744-44b8-bc56-702ecc53122c" path="/var/lib/kubelet/pods/66617a12-1744-44b8-bc56-702ecc53122c/volumes" Feb 19 14:56:40 crc kubenswrapper[4861]: I0219 14:56:40.254999 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"091c1801-446a-4c1e-ae29-93558e3d4a03","Type":"ContainerStarted","Data":"28acbac5d296699b2fc6299f09b08cbe87718774db1e617a864966daade59e76"} Feb 19 14:56:41 crc kubenswrapper[4861]: I0219 14:56:41.264981 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"3a3757ed-9e0c-4d7a-8701-23da7b477f0f","Type":"ContainerStarted","Data":"a8792143d9ef4bfcaf409bf251bc74ad32df1e05fa35701a00289da95e531bc0"} Feb 19 14:56:42 crc kubenswrapper[4861]: I0219 14:56:42.280918 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"091c1801-446a-4c1e-ae29-93558e3d4a03","Type":"ContainerStarted","Data":"3a546d253a678cc266b3176a07f879602c337f79a13f14188d91c6330aed623f"} Feb 19 14:56:42 crc kubenswrapper[4861]: I0219 14:56:42.311896 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.7153308040000002 podStartE2EDuration="6.311882787s" podCreationTimestamp="2026-02-19 14:56:36 +0000 UTC" firstStartedPulling="2026-02-19 14:56:36.823578216 +0000 UTC m=+6411.484681444" lastFinishedPulling="2026-02-19 14:56:41.420130179 +0000 UTC m=+6416.081233427" observedRunningTime="2026-02-19 14:56:42.310992333 +0000 UTC m=+6416.972095561" watchObservedRunningTime="2026-02-19 14:56:42.311882787 +0000 UTC m=+6416.972986015" Feb 19 14:56:43 crc kubenswrapper[4861]: I0219 14:56:43.291839 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 14:56:45 crc kubenswrapper[4861]: I0219 14:56:45.026739 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-hjwnf"] Feb 19 14:56:45 crc kubenswrapper[4861]: I0219 14:56:45.035512 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-hjwnf"] Feb 19 14:56:45 crc kubenswrapper[4861]: I0219 14:56:45.976357 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-cbxqx"] Feb 19 14:56:45 crc kubenswrapper[4861]: I0219 14:56:45.978389 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-cbxqx" Feb 19 14:56:46 crc kubenswrapper[4861]: I0219 14:56:46.023719 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56d3f883-2c40-4600-9836-d134daae7daf" path="/var/lib/kubelet/pods/56d3f883-2c40-4600-9836-d134daae7daf/volumes" Feb 19 14:56:46 crc kubenswrapper[4861]: I0219 14:56:46.043957 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-cbxqx"] Feb 19 14:56:46 crc kubenswrapper[4861]: I0219 14:56:46.102614 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86eead69-a24a-41d4-bbf2-289a6e72989d-operator-scripts\") pod \"aodh-db-create-cbxqx\" (UID: \"86eead69-a24a-41d4-bbf2-289a6e72989d\") " pod="openstack/aodh-db-create-cbxqx" Feb 19 14:56:46 crc kubenswrapper[4861]: I0219 14:56:46.102963 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bphht\" (UniqueName: \"kubernetes.io/projected/86eead69-a24a-41d4-bbf2-289a6e72989d-kube-api-access-bphht\") pod \"aodh-db-create-cbxqx\" (UID: \"86eead69-a24a-41d4-bbf2-289a6e72989d\") " pod="openstack/aodh-db-create-cbxqx" Feb 19 14:56:46 crc kubenswrapper[4861]: I0219 14:56:46.106525 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-a68c-account-create-update-vd6td"] Feb 19 14:56:46 crc kubenswrapper[4861]: I0219 14:56:46.107838 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-a68c-account-create-update-vd6td" Feb 19 14:56:46 crc kubenswrapper[4861]: I0219 14:56:46.110663 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Feb 19 14:56:46 crc kubenswrapper[4861]: I0219 14:56:46.120399 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-a68c-account-create-update-vd6td"] Feb 19 14:56:46 crc kubenswrapper[4861]: I0219 14:56:46.204694 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltb5k\" (UniqueName: \"kubernetes.io/projected/e9e8c69d-e2be-44a2-a761-4074f11733e8-kube-api-access-ltb5k\") pod \"aodh-a68c-account-create-update-vd6td\" (UID: \"e9e8c69d-e2be-44a2-a761-4074f11733e8\") " pod="openstack/aodh-a68c-account-create-update-vd6td" Feb 19 14:56:46 crc kubenswrapper[4861]: I0219 14:56:46.204766 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86eead69-a24a-41d4-bbf2-289a6e72989d-operator-scripts\") pod \"aodh-db-create-cbxqx\" (UID: \"86eead69-a24a-41d4-bbf2-289a6e72989d\") " pod="openstack/aodh-db-create-cbxqx" Feb 19 14:56:46 crc kubenswrapper[4861]: I0219 14:56:46.204824 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bphht\" (UniqueName: \"kubernetes.io/projected/86eead69-a24a-41d4-bbf2-289a6e72989d-kube-api-access-bphht\") pod \"aodh-db-create-cbxqx\" (UID: \"86eead69-a24a-41d4-bbf2-289a6e72989d\") " pod="openstack/aodh-db-create-cbxqx" Feb 19 14:56:46 crc kubenswrapper[4861]: I0219 14:56:46.204933 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9e8c69d-e2be-44a2-a761-4074f11733e8-operator-scripts\") pod \"aodh-a68c-account-create-update-vd6td\" (UID: \"e9e8c69d-e2be-44a2-a761-4074f11733e8\") " 
pod="openstack/aodh-a68c-account-create-update-vd6td" Feb 19 14:56:46 crc kubenswrapper[4861]: I0219 14:56:46.205517 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86eead69-a24a-41d4-bbf2-289a6e72989d-operator-scripts\") pod \"aodh-db-create-cbxqx\" (UID: \"86eead69-a24a-41d4-bbf2-289a6e72989d\") " pod="openstack/aodh-db-create-cbxqx" Feb 19 14:56:46 crc kubenswrapper[4861]: I0219 14:56:46.222844 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bphht\" (UniqueName: \"kubernetes.io/projected/86eead69-a24a-41d4-bbf2-289a6e72989d-kube-api-access-bphht\") pod \"aodh-db-create-cbxqx\" (UID: \"86eead69-a24a-41d4-bbf2-289a6e72989d\") " pod="openstack/aodh-db-create-cbxqx" Feb 19 14:56:46 crc kubenswrapper[4861]: I0219 14:56:46.306667 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9e8c69d-e2be-44a2-a761-4074f11733e8-operator-scripts\") pod \"aodh-a68c-account-create-update-vd6td\" (UID: \"e9e8c69d-e2be-44a2-a761-4074f11733e8\") " pod="openstack/aodh-a68c-account-create-update-vd6td" Feb 19 14:56:46 crc kubenswrapper[4861]: I0219 14:56:46.306796 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltb5k\" (UniqueName: \"kubernetes.io/projected/e9e8c69d-e2be-44a2-a761-4074f11733e8-kube-api-access-ltb5k\") pod \"aodh-a68c-account-create-update-vd6td\" (UID: \"e9e8c69d-e2be-44a2-a761-4074f11733e8\") " pod="openstack/aodh-a68c-account-create-update-vd6td" Feb 19 14:56:46 crc kubenswrapper[4861]: I0219 14:56:46.307368 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9e8c69d-e2be-44a2-a761-4074f11733e8-operator-scripts\") pod \"aodh-a68c-account-create-update-vd6td\" (UID: \"e9e8c69d-e2be-44a2-a761-4074f11733e8\") " 
pod="openstack/aodh-a68c-account-create-update-vd6td" Feb 19 14:56:46 crc kubenswrapper[4861]: I0219 14:56:46.326722 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltb5k\" (UniqueName: \"kubernetes.io/projected/e9e8c69d-e2be-44a2-a761-4074f11733e8-kube-api-access-ltb5k\") pod \"aodh-a68c-account-create-update-vd6td\" (UID: \"e9e8c69d-e2be-44a2-a761-4074f11733e8\") " pod="openstack/aodh-a68c-account-create-update-vd6td" Feb 19 14:56:46 crc kubenswrapper[4861]: I0219 14:56:46.327267 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-cbxqx" Feb 19 14:56:46 crc kubenswrapper[4861]: I0219 14:56:46.428617 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-a68c-account-create-update-vd6td" Feb 19 14:56:46 crc kubenswrapper[4861]: W0219 14:56:46.908027 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86eead69_a24a_41d4_bbf2_289a6e72989d.slice/crio-a8fc6250059438e81b4044c9567e351e4aa2a5c81b0bdf9fe4b9f5c191ffbad0 WatchSource:0}: Error finding container a8fc6250059438e81b4044c9567e351e4aa2a5c81b0bdf9fe4b9f5c191ffbad0: Status 404 returned error can't find the container with id a8fc6250059438e81b4044c9567e351e4aa2a5c81b0bdf9fe4b9f5c191ffbad0 Feb 19 14:56:46 crc kubenswrapper[4861]: I0219 14:56:46.914168 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-cbxqx"] Feb 19 14:56:47 crc kubenswrapper[4861]: I0219 14:56:47.014825 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-a68c-account-create-update-vd6td"] Feb 19 14:56:47 crc kubenswrapper[4861]: W0219 14:56:47.015801 4861 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9e8c69d_e2be_44a2_a761_4074f11733e8.slice/crio-7d7d3987947209bf37e79114dfd97acb225c81f532aeaf810f98e6b203885a94 WatchSource:0}: Error finding container 7d7d3987947209bf37e79114dfd97acb225c81f532aeaf810f98e6b203885a94: Status 404 returned error can't find the container with id 7d7d3987947209bf37e79114dfd97acb225c81f532aeaf810f98e6b203885a94 Feb 19 14:56:47 crc kubenswrapper[4861]: I0219 14:56:47.331255 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-cbxqx" event={"ID":"86eead69-a24a-41d4-bbf2-289a6e72989d","Type":"ContainerStarted","Data":"67ff0773d72808d83c4918cee149618c7b1d982c2df2d5bac909940cc15cae4d"} Feb 19 14:56:47 crc kubenswrapper[4861]: I0219 14:56:47.331307 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-cbxqx" event={"ID":"86eead69-a24a-41d4-bbf2-289a6e72989d","Type":"ContainerStarted","Data":"a8fc6250059438e81b4044c9567e351e4aa2a5c81b0bdf9fe4b9f5c191ffbad0"} Feb 19 14:56:47 crc kubenswrapper[4861]: I0219 14:56:47.335997 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-a68c-account-create-update-vd6td" event={"ID":"e9e8c69d-e2be-44a2-a761-4074f11733e8","Type":"ContainerStarted","Data":"4b187caedf1aa9ba364cc2d66744b4ca466774cd1efc57147800761f31bdb4b9"} Feb 19 14:56:47 crc kubenswrapper[4861]: I0219 14:56:47.336042 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-a68c-account-create-update-vd6td" event={"ID":"e9e8c69d-e2be-44a2-a761-4074f11733e8","Type":"ContainerStarted","Data":"7d7d3987947209bf37e79114dfd97acb225c81f532aeaf810f98e6b203885a94"} Feb 19 14:56:47 crc kubenswrapper[4861]: I0219 14:56:47.338311 4861 generic.go:334] "Generic (PLEG): container finished" podID="3a3757ed-9e0c-4d7a-8701-23da7b477f0f" containerID="a8792143d9ef4bfcaf409bf251bc74ad32df1e05fa35701a00289da95e531bc0" exitCode=0 Feb 19 14:56:47 crc kubenswrapper[4861]: I0219 14:56:47.338353 4861 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3a3757ed-9e0c-4d7a-8701-23da7b477f0f","Type":"ContainerDied","Data":"a8792143d9ef4bfcaf409bf251bc74ad32df1e05fa35701a00289da95e531bc0"} Feb 19 14:56:47 crc kubenswrapper[4861]: I0219 14:56:47.356657 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-cbxqx" podStartSLOduration=2.356639791 podStartE2EDuration="2.356639791s" podCreationTimestamp="2026-02-19 14:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:56:47.344842075 +0000 UTC m=+6422.005945303" watchObservedRunningTime="2026-02-19 14:56:47.356639791 +0000 UTC m=+6422.017743019" Feb 19 14:56:47 crc kubenswrapper[4861]: I0219 14:56:47.434971 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-a68c-account-create-update-vd6td" podStartSLOduration=1.4349454750000001 podStartE2EDuration="1.434945475s" podCreationTimestamp="2026-02-19 14:56:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:56:47.425924123 +0000 UTC m=+6422.087027371" watchObservedRunningTime="2026-02-19 14:56:47.434945475 +0000 UTC m=+6422.096048693" Feb 19 14:56:48 crc kubenswrapper[4861]: I0219 14:56:48.347367 4861 generic.go:334] "Generic (PLEG): container finished" podID="86eead69-a24a-41d4-bbf2-289a6e72989d" containerID="67ff0773d72808d83c4918cee149618c7b1d982c2df2d5bac909940cc15cae4d" exitCode=0 Feb 19 14:56:48 crc kubenswrapper[4861]: I0219 14:56:48.347497 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-cbxqx" event={"ID":"86eead69-a24a-41d4-bbf2-289a6e72989d","Type":"ContainerDied","Data":"67ff0773d72808d83c4918cee149618c7b1d982c2df2d5bac909940cc15cae4d"} Feb 19 14:56:48 crc kubenswrapper[4861]: I0219 
14:56:48.349460 4861 generic.go:334] "Generic (PLEG): container finished" podID="e9e8c69d-e2be-44a2-a761-4074f11733e8" containerID="4b187caedf1aa9ba364cc2d66744b4ca466774cd1efc57147800761f31bdb4b9" exitCode=0 Feb 19 14:56:48 crc kubenswrapper[4861]: I0219 14:56:48.349497 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-a68c-account-create-update-vd6td" event={"ID":"e9e8c69d-e2be-44a2-a761-4074f11733e8","Type":"ContainerDied","Data":"4b187caedf1aa9ba364cc2d66744b4ca466774cd1efc57147800761f31bdb4b9"} Feb 19 14:56:48 crc kubenswrapper[4861]: I0219 14:56:48.351817 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3a3757ed-9e0c-4d7a-8701-23da7b477f0f","Type":"ContainerStarted","Data":"93ec918ca3d51fd9af2b118e6621f9c962ae597792038e2676a4a2e158e93c18"} Feb 19 14:56:49 crc kubenswrapper[4861]: I0219 14:56:49.934543 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-a68c-account-create-update-vd6td" Feb 19 14:56:49 crc kubenswrapper[4861]: I0219 14:56:49.941347 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-cbxqx" Feb 19 14:56:49 crc kubenswrapper[4861]: I0219 14:56:49.997302 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltb5k\" (UniqueName: \"kubernetes.io/projected/e9e8c69d-e2be-44a2-a761-4074f11733e8-kube-api-access-ltb5k\") pod \"e9e8c69d-e2be-44a2-a761-4074f11733e8\" (UID: \"e9e8c69d-e2be-44a2-a761-4074f11733e8\") " Feb 19 14:56:49 crc kubenswrapper[4861]: I0219 14:56:49.997472 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bphht\" (UniqueName: \"kubernetes.io/projected/86eead69-a24a-41d4-bbf2-289a6e72989d-kube-api-access-bphht\") pod \"86eead69-a24a-41d4-bbf2-289a6e72989d\" (UID: \"86eead69-a24a-41d4-bbf2-289a6e72989d\") " Feb 19 14:56:49 crc kubenswrapper[4861]: I0219 14:56:49.997526 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9e8c69d-e2be-44a2-a761-4074f11733e8-operator-scripts\") pod \"e9e8c69d-e2be-44a2-a761-4074f11733e8\" (UID: \"e9e8c69d-e2be-44a2-a761-4074f11733e8\") " Feb 19 14:56:49 crc kubenswrapper[4861]: I0219 14:56:49.997737 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86eead69-a24a-41d4-bbf2-289a6e72989d-operator-scripts\") pod \"86eead69-a24a-41d4-bbf2-289a6e72989d\" (UID: \"86eead69-a24a-41d4-bbf2-289a6e72989d\") " Feb 19 14:56:49 crc kubenswrapper[4861]: I0219 14:56:49.998625 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9e8c69d-e2be-44a2-a761-4074f11733e8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e9e8c69d-e2be-44a2-a761-4074f11733e8" (UID: "e9e8c69d-e2be-44a2-a761-4074f11733e8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:56:49 crc kubenswrapper[4861]: I0219 14:56:49.999561 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9e8c69d-e2be-44a2-a761-4074f11733e8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:56:50 crc kubenswrapper[4861]: I0219 14:56:50.000204 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86eead69-a24a-41d4-bbf2-289a6e72989d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "86eead69-a24a-41d4-bbf2-289a6e72989d" (UID: "86eead69-a24a-41d4-bbf2-289a6e72989d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:56:50 crc kubenswrapper[4861]: I0219 14:56:50.016209 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86eead69-a24a-41d4-bbf2-289a6e72989d-kube-api-access-bphht" (OuterVolumeSpecName: "kube-api-access-bphht") pod "86eead69-a24a-41d4-bbf2-289a6e72989d" (UID: "86eead69-a24a-41d4-bbf2-289a6e72989d"). InnerVolumeSpecName "kube-api-access-bphht". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:56:50 crc kubenswrapper[4861]: I0219 14:56:50.025589 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9e8c69d-e2be-44a2-a761-4074f11733e8-kube-api-access-ltb5k" (OuterVolumeSpecName: "kube-api-access-ltb5k") pod "e9e8c69d-e2be-44a2-a761-4074f11733e8" (UID: "e9e8c69d-e2be-44a2-a761-4074f11733e8"). InnerVolumeSpecName "kube-api-access-ltb5k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:56:50 crc kubenswrapper[4861]: I0219 14:56:50.101584 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bphht\" (UniqueName: \"kubernetes.io/projected/86eead69-a24a-41d4-bbf2-289a6e72989d-kube-api-access-bphht\") on node \"crc\" DevicePath \"\"" Feb 19 14:56:50 crc kubenswrapper[4861]: I0219 14:56:50.101635 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86eead69-a24a-41d4-bbf2-289a6e72989d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:56:50 crc kubenswrapper[4861]: I0219 14:56:50.101645 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltb5k\" (UniqueName: \"kubernetes.io/projected/e9e8c69d-e2be-44a2-a761-4074f11733e8-kube-api-access-ltb5k\") on node \"crc\" DevicePath \"\"" Feb 19 14:56:50 crc kubenswrapper[4861]: I0219 14:56:50.379133 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-cbxqx" event={"ID":"86eead69-a24a-41d4-bbf2-289a6e72989d","Type":"ContainerDied","Data":"a8fc6250059438e81b4044c9567e351e4aa2a5c81b0bdf9fe4b9f5c191ffbad0"} Feb 19 14:56:50 crc kubenswrapper[4861]: I0219 14:56:50.379604 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8fc6250059438e81b4044c9567e351e4aa2a5c81b0bdf9fe4b9f5c191ffbad0" Feb 19 14:56:50 crc kubenswrapper[4861]: I0219 14:56:50.379188 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-cbxqx" Feb 19 14:56:50 crc kubenswrapper[4861]: I0219 14:56:50.381916 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-a68c-account-create-update-vd6td" Feb 19 14:56:50 crc kubenswrapper[4861]: I0219 14:56:50.386691 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-a68c-account-create-update-vd6td" event={"ID":"e9e8c69d-e2be-44a2-a761-4074f11733e8","Type":"ContainerDied","Data":"7d7d3987947209bf37e79114dfd97acb225c81f532aeaf810f98e6b203885a94"} Feb 19 14:56:50 crc kubenswrapper[4861]: I0219 14:56:50.386797 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d7d3987947209bf37e79114dfd97acb225c81f532aeaf810f98e6b203885a94" Feb 19 14:56:51 crc kubenswrapper[4861]: I0219 14:56:51.407072 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-pp8gl"] Feb 19 14:56:51 crc kubenswrapper[4861]: E0219 14:56:51.407524 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86eead69-a24a-41d4-bbf2-289a6e72989d" containerName="mariadb-database-create" Feb 19 14:56:51 crc kubenswrapper[4861]: I0219 14:56:51.407537 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="86eead69-a24a-41d4-bbf2-289a6e72989d" containerName="mariadb-database-create" Feb 19 14:56:51 crc kubenswrapper[4861]: E0219 14:56:51.407570 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9e8c69d-e2be-44a2-a761-4074f11733e8" containerName="mariadb-account-create-update" Feb 19 14:56:51 crc kubenswrapper[4861]: I0219 14:56:51.407576 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9e8c69d-e2be-44a2-a761-4074f11733e8" containerName="mariadb-account-create-update" Feb 19 14:56:51 crc kubenswrapper[4861]: I0219 14:56:51.407754 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="86eead69-a24a-41d4-bbf2-289a6e72989d" containerName="mariadb-database-create" Feb 19 14:56:51 crc kubenswrapper[4861]: I0219 14:56:51.407774 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9e8c69d-e2be-44a2-a761-4074f11733e8" 
containerName="mariadb-account-create-update" Feb 19 14:56:51 crc kubenswrapper[4861]: I0219 14:56:51.408595 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-pp8gl" Feb 19 14:56:51 crc kubenswrapper[4861]: I0219 14:56:51.412775 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 14:56:51 crc kubenswrapper[4861]: I0219 14:56:51.412857 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-6d297" Feb 19 14:56:51 crc kubenswrapper[4861]: I0219 14:56:51.412933 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 19 14:56:51 crc kubenswrapper[4861]: I0219 14:56:51.412960 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 19 14:56:51 crc kubenswrapper[4861]: I0219 14:56:51.425535 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-pp8gl"] Feb 19 14:56:51 crc kubenswrapper[4861]: I0219 14:56:51.434076 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f25c96c-cf2f-4173-b839-e33d5412655a-scripts\") pod \"aodh-db-sync-pp8gl\" (UID: \"8f25c96c-cf2f-4173-b839-e33d5412655a\") " pod="openstack/aodh-db-sync-pp8gl" Feb 19 14:56:51 crc kubenswrapper[4861]: I0219 14:56:51.434271 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f25c96c-cf2f-4173-b839-e33d5412655a-config-data\") pod \"aodh-db-sync-pp8gl\" (UID: \"8f25c96c-cf2f-4173-b839-e33d5412655a\") " pod="openstack/aodh-db-sync-pp8gl" Feb 19 14:56:51 crc kubenswrapper[4861]: I0219 14:56:51.434857 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcmt8\" (UniqueName: 
\"kubernetes.io/projected/8f25c96c-cf2f-4173-b839-e33d5412655a-kube-api-access-vcmt8\") pod \"aodh-db-sync-pp8gl\" (UID: \"8f25c96c-cf2f-4173-b839-e33d5412655a\") " pod="openstack/aodh-db-sync-pp8gl" Feb 19 14:56:51 crc kubenswrapper[4861]: I0219 14:56:51.434918 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f25c96c-cf2f-4173-b839-e33d5412655a-combined-ca-bundle\") pod \"aodh-db-sync-pp8gl\" (UID: \"8f25c96c-cf2f-4173-b839-e33d5412655a\") " pod="openstack/aodh-db-sync-pp8gl" Feb 19 14:56:51 crc kubenswrapper[4861]: I0219 14:56:51.537583 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcmt8\" (UniqueName: \"kubernetes.io/projected/8f25c96c-cf2f-4173-b839-e33d5412655a-kube-api-access-vcmt8\") pod \"aodh-db-sync-pp8gl\" (UID: \"8f25c96c-cf2f-4173-b839-e33d5412655a\") " pod="openstack/aodh-db-sync-pp8gl" Feb 19 14:56:51 crc kubenswrapper[4861]: I0219 14:56:51.537639 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f25c96c-cf2f-4173-b839-e33d5412655a-combined-ca-bundle\") pod \"aodh-db-sync-pp8gl\" (UID: \"8f25c96c-cf2f-4173-b839-e33d5412655a\") " pod="openstack/aodh-db-sync-pp8gl" Feb 19 14:56:51 crc kubenswrapper[4861]: I0219 14:56:51.537740 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f25c96c-cf2f-4173-b839-e33d5412655a-scripts\") pod \"aodh-db-sync-pp8gl\" (UID: \"8f25c96c-cf2f-4173-b839-e33d5412655a\") " pod="openstack/aodh-db-sync-pp8gl" Feb 19 14:56:51 crc kubenswrapper[4861]: I0219 14:56:51.537785 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f25c96c-cf2f-4173-b839-e33d5412655a-config-data\") pod \"aodh-db-sync-pp8gl\" (UID: 
\"8f25c96c-cf2f-4173-b839-e33d5412655a\") " pod="openstack/aodh-db-sync-pp8gl" Feb 19 14:56:51 crc kubenswrapper[4861]: I0219 14:56:51.546246 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f25c96c-cf2f-4173-b839-e33d5412655a-combined-ca-bundle\") pod \"aodh-db-sync-pp8gl\" (UID: \"8f25c96c-cf2f-4173-b839-e33d5412655a\") " pod="openstack/aodh-db-sync-pp8gl" Feb 19 14:56:51 crc kubenswrapper[4861]: I0219 14:56:51.546958 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f25c96c-cf2f-4173-b839-e33d5412655a-config-data\") pod \"aodh-db-sync-pp8gl\" (UID: \"8f25c96c-cf2f-4173-b839-e33d5412655a\") " pod="openstack/aodh-db-sync-pp8gl" Feb 19 14:56:51 crc kubenswrapper[4861]: I0219 14:56:51.555859 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f25c96c-cf2f-4173-b839-e33d5412655a-scripts\") pod \"aodh-db-sync-pp8gl\" (UID: \"8f25c96c-cf2f-4173-b839-e33d5412655a\") " pod="openstack/aodh-db-sync-pp8gl" Feb 19 14:56:51 crc kubenswrapper[4861]: I0219 14:56:51.563980 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcmt8\" (UniqueName: \"kubernetes.io/projected/8f25c96c-cf2f-4173-b839-e33d5412655a-kube-api-access-vcmt8\") pod \"aodh-db-sync-pp8gl\" (UID: \"8f25c96c-cf2f-4173-b839-e33d5412655a\") " pod="openstack/aodh-db-sync-pp8gl" Feb 19 14:56:51 crc kubenswrapper[4861]: I0219 14:56:51.728911 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-pp8gl" Feb 19 14:56:52 crc kubenswrapper[4861]: W0219 14:56:52.283355 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f25c96c_cf2f_4173_b839_e33d5412655a.slice/crio-2daec52dd78a47af2ca4f82fbe03366dca21b0e0e549de2bd9a41196096149c0 WatchSource:0}: Error finding container 2daec52dd78a47af2ca4f82fbe03366dca21b0e0e549de2bd9a41196096149c0: Status 404 returned error can't find the container with id 2daec52dd78a47af2ca4f82fbe03366dca21b0e0e549de2bd9a41196096149c0 Feb 19 14:56:52 crc kubenswrapper[4861]: I0219 14:56:52.302647 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-pp8gl"] Feb 19 14:56:52 crc kubenswrapper[4861]: I0219 14:56:52.408541 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-pp8gl" event={"ID":"8f25c96c-cf2f-4173-b839-e33d5412655a","Type":"ContainerStarted","Data":"2daec52dd78a47af2ca4f82fbe03366dca21b0e0e549de2bd9a41196096149c0"} Feb 19 14:56:52 crc kubenswrapper[4861]: I0219 14:56:52.412375 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3a3757ed-9e0c-4d7a-8701-23da7b477f0f","Type":"ContainerStarted","Data":"190f3a5285aebb7bc426b86164722f7c0f5071c9a2bea9924b5451abe6da3756"} Feb 19 14:56:53 crc kubenswrapper[4861]: I0219 14:56:53.426263 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3a3757ed-9e0c-4d7a-8701-23da7b477f0f","Type":"ContainerStarted","Data":"ab47a56ffdac688911e0b24cdd83024899b86612491ab791318d1422a7c5646f"} Feb 19 14:56:53 crc kubenswrapper[4861]: I0219 14:56:53.463329 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=18.463311614 podStartE2EDuration="18.463311614s" podCreationTimestamp="2026-02-19 14:56:35 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:56:53.453037508 +0000 UTC m=+6428.114140736" watchObservedRunningTime="2026-02-19 14:56:53.463311614 +0000 UTC m=+6428.124414842" Feb 19 14:56:56 crc kubenswrapper[4861]: I0219 14:56:56.016609 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 19 14:56:57 crc kubenswrapper[4861]: I0219 14:56:57.476842 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-pp8gl" event={"ID":"8f25c96c-cf2f-4173-b839-e33d5412655a","Type":"ContainerStarted","Data":"fb12d152bfb7fb3c55b381de183919436669e54a010938ad02aa4132063ec029"} Feb 19 14:56:57 crc kubenswrapper[4861]: I0219 14:56:57.497877 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-pp8gl" podStartSLOduration=2.505244461 podStartE2EDuration="6.497845176s" podCreationTimestamp="2026-02-19 14:56:51 +0000 UTC" firstStartedPulling="2026-02-19 14:56:52.287441103 +0000 UTC m=+6426.948544331" lastFinishedPulling="2026-02-19 14:56:56.280041818 +0000 UTC m=+6430.941145046" observedRunningTime="2026-02-19 14:56:57.491495516 +0000 UTC m=+6432.152598764" watchObservedRunningTime="2026-02-19 14:56:57.497845176 +0000 UTC m=+6432.158948434" Feb 19 14:56:59 crc kubenswrapper[4861]: I0219 14:56:59.507742 4861 generic.go:334] "Generic (PLEG): container finished" podID="8f25c96c-cf2f-4173-b839-e33d5412655a" containerID="fb12d152bfb7fb3c55b381de183919436669e54a010938ad02aa4132063ec029" exitCode=0 Feb 19 14:56:59 crc kubenswrapper[4861]: I0219 14:56:59.507879 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-pp8gl" event={"ID":"8f25c96c-cf2f-4173-b839-e33d5412655a","Type":"ContainerDied","Data":"fb12d152bfb7fb3c55b381de183919436669e54a010938ad02aa4132063ec029"} Feb 19 14:57:00 crc kubenswrapper[4861]: I0219 14:57:00.963605 4861 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-pp8gl" Feb 19 14:57:01 crc kubenswrapper[4861]: I0219 14:57:01.083222 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f25c96c-cf2f-4173-b839-e33d5412655a-scripts\") pod \"8f25c96c-cf2f-4173-b839-e33d5412655a\" (UID: \"8f25c96c-cf2f-4173-b839-e33d5412655a\") " Feb 19 14:57:01 crc kubenswrapper[4861]: I0219 14:57:01.083259 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f25c96c-cf2f-4173-b839-e33d5412655a-config-data\") pod \"8f25c96c-cf2f-4173-b839-e33d5412655a\" (UID: \"8f25c96c-cf2f-4173-b839-e33d5412655a\") " Feb 19 14:57:01 crc kubenswrapper[4861]: I0219 14:57:01.083308 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcmt8\" (UniqueName: \"kubernetes.io/projected/8f25c96c-cf2f-4173-b839-e33d5412655a-kube-api-access-vcmt8\") pod \"8f25c96c-cf2f-4173-b839-e33d5412655a\" (UID: \"8f25c96c-cf2f-4173-b839-e33d5412655a\") " Feb 19 14:57:01 crc kubenswrapper[4861]: I0219 14:57:01.083670 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f25c96c-cf2f-4173-b839-e33d5412655a-combined-ca-bundle\") pod \"8f25c96c-cf2f-4173-b839-e33d5412655a\" (UID: \"8f25c96c-cf2f-4173-b839-e33d5412655a\") " Feb 19 14:57:01 crc kubenswrapper[4861]: I0219 14:57:01.089525 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f25c96c-cf2f-4173-b839-e33d5412655a-kube-api-access-vcmt8" (OuterVolumeSpecName: "kube-api-access-vcmt8") pod "8f25c96c-cf2f-4173-b839-e33d5412655a" (UID: "8f25c96c-cf2f-4173-b839-e33d5412655a"). InnerVolumeSpecName "kube-api-access-vcmt8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:57:01 crc kubenswrapper[4861]: I0219 14:57:01.089932 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f25c96c-cf2f-4173-b839-e33d5412655a-scripts" (OuterVolumeSpecName: "scripts") pod "8f25c96c-cf2f-4173-b839-e33d5412655a" (UID: "8f25c96c-cf2f-4173-b839-e33d5412655a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:57:01 crc kubenswrapper[4861]: I0219 14:57:01.114806 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f25c96c-cf2f-4173-b839-e33d5412655a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f25c96c-cf2f-4173-b839-e33d5412655a" (UID: "8f25c96c-cf2f-4173-b839-e33d5412655a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:57:01 crc kubenswrapper[4861]: I0219 14:57:01.138840 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f25c96c-cf2f-4173-b839-e33d5412655a-config-data" (OuterVolumeSpecName: "config-data") pod "8f25c96c-cf2f-4173-b839-e33d5412655a" (UID: "8f25c96c-cf2f-4173-b839-e33d5412655a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:57:01 crc kubenswrapper[4861]: I0219 14:57:01.186180 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f25c96c-cf2f-4173-b839-e33d5412655a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:57:01 crc kubenswrapper[4861]: I0219 14:57:01.186223 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f25c96c-cf2f-4173-b839-e33d5412655a-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:57:01 crc kubenswrapper[4861]: I0219 14:57:01.186232 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f25c96c-cf2f-4173-b839-e33d5412655a-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:57:01 crc kubenswrapper[4861]: I0219 14:57:01.186243 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcmt8\" (UniqueName: \"kubernetes.io/projected/8f25c96c-cf2f-4173-b839-e33d5412655a-kube-api-access-vcmt8\") on node \"crc\" DevicePath \"\"" Feb 19 14:57:01 crc kubenswrapper[4861]: I0219 14:57:01.538321 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-pp8gl" event={"ID":"8f25c96c-cf2f-4173-b839-e33d5412655a","Type":"ContainerDied","Data":"2daec52dd78a47af2ca4f82fbe03366dca21b0e0e549de2bd9a41196096149c0"} Feb 19 14:57:01 crc kubenswrapper[4861]: I0219 14:57:01.538757 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2daec52dd78a47af2ca4f82fbe03366dca21b0e0e549de2bd9a41196096149c0" Feb 19 14:57:01 crc kubenswrapper[4861]: I0219 14:57:01.538487 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-pp8gl" Feb 19 14:57:06 crc kubenswrapper[4861]: I0219 14:57:06.032626 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 19 14:57:06 crc kubenswrapper[4861]: I0219 14:57:06.040438 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 19 14:57:06 crc kubenswrapper[4861]: I0219 14:57:06.076318 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 19 14:57:06 crc kubenswrapper[4861]: E0219 14:57:06.077005 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f25c96c-cf2f-4173-b839-e33d5412655a" containerName="aodh-db-sync" Feb 19 14:57:06 crc kubenswrapper[4861]: I0219 14:57:06.077024 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f25c96c-cf2f-4173-b839-e33d5412655a" containerName="aodh-db-sync" Feb 19 14:57:06 crc kubenswrapper[4861]: I0219 14:57:06.077313 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f25c96c-cf2f-4173-b839-e33d5412655a" containerName="aodh-db-sync" Feb 19 14:57:06 crc kubenswrapper[4861]: I0219 14:57:06.082344 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 19 14:57:06 crc kubenswrapper[4861]: I0219 14:57:06.086659 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 19 14:57:06 crc kubenswrapper[4861]: I0219 14:57:06.086857 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-6d297" Feb 19 14:57:06 crc kubenswrapper[4861]: I0219 14:57:06.086993 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 19 14:57:06 crc kubenswrapper[4861]: I0219 14:57:06.100710 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 19 14:57:06 crc kubenswrapper[4861]: I0219 14:57:06.115650 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d8fd4f3-eb70-4979-9dc4-63796e85f703-config-data\") pod \"aodh-0\" (UID: \"1d8fd4f3-eb70-4979-9dc4-63796e85f703\") " pod="openstack/aodh-0" Feb 19 14:57:06 crc kubenswrapper[4861]: I0219 14:57:06.115756 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8fd4f3-eb70-4979-9dc4-63796e85f703-combined-ca-bundle\") pod \"aodh-0\" (UID: \"1d8fd4f3-eb70-4979-9dc4-63796e85f703\") " pod="openstack/aodh-0" Feb 19 14:57:06 crc kubenswrapper[4861]: I0219 14:57:06.115870 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h92jv\" (UniqueName: \"kubernetes.io/projected/1d8fd4f3-eb70-4979-9dc4-63796e85f703-kube-api-access-h92jv\") pod \"aodh-0\" (UID: \"1d8fd4f3-eb70-4979-9dc4-63796e85f703\") " pod="openstack/aodh-0" Feb 19 14:57:06 crc kubenswrapper[4861]: I0219 14:57:06.115959 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1d8fd4f3-eb70-4979-9dc4-63796e85f703-scripts\") pod \"aodh-0\" (UID: \"1d8fd4f3-eb70-4979-9dc4-63796e85f703\") " pod="openstack/aodh-0" Feb 19 14:57:06 crc kubenswrapper[4861]: I0219 14:57:06.219279 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h92jv\" (UniqueName: \"kubernetes.io/projected/1d8fd4f3-eb70-4979-9dc4-63796e85f703-kube-api-access-h92jv\") pod \"aodh-0\" (UID: \"1d8fd4f3-eb70-4979-9dc4-63796e85f703\") " pod="openstack/aodh-0" Feb 19 14:57:06 crc kubenswrapper[4861]: I0219 14:57:06.219383 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d8fd4f3-eb70-4979-9dc4-63796e85f703-scripts\") pod \"aodh-0\" (UID: \"1d8fd4f3-eb70-4979-9dc4-63796e85f703\") " pod="openstack/aodh-0" Feb 19 14:57:06 crc kubenswrapper[4861]: I0219 14:57:06.219544 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d8fd4f3-eb70-4979-9dc4-63796e85f703-config-data\") pod \"aodh-0\" (UID: \"1d8fd4f3-eb70-4979-9dc4-63796e85f703\") " pod="openstack/aodh-0" Feb 19 14:57:06 crc kubenswrapper[4861]: I0219 14:57:06.219595 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8fd4f3-eb70-4979-9dc4-63796e85f703-combined-ca-bundle\") pod \"aodh-0\" (UID: \"1d8fd4f3-eb70-4979-9dc4-63796e85f703\") " pod="openstack/aodh-0" Feb 19 14:57:06 crc kubenswrapper[4861]: I0219 14:57:06.226384 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8fd4f3-eb70-4979-9dc4-63796e85f703-combined-ca-bundle\") pod \"aodh-0\" (UID: \"1d8fd4f3-eb70-4979-9dc4-63796e85f703\") " pod="openstack/aodh-0" Feb 19 14:57:06 crc kubenswrapper[4861]: I0219 14:57:06.238017 4861 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d8fd4f3-eb70-4979-9dc4-63796e85f703-scripts\") pod \"aodh-0\" (UID: \"1d8fd4f3-eb70-4979-9dc4-63796e85f703\") " pod="openstack/aodh-0" Feb 19 14:57:06 crc kubenswrapper[4861]: I0219 14:57:06.239008 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d8fd4f3-eb70-4979-9dc4-63796e85f703-config-data\") pod \"aodh-0\" (UID: \"1d8fd4f3-eb70-4979-9dc4-63796e85f703\") " pod="openstack/aodh-0" Feb 19 14:57:06 crc kubenswrapper[4861]: I0219 14:57:06.250034 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h92jv\" (UniqueName: \"kubernetes.io/projected/1d8fd4f3-eb70-4979-9dc4-63796e85f703-kube-api-access-h92jv\") pod \"aodh-0\" (UID: \"1d8fd4f3-eb70-4979-9dc4-63796e85f703\") " pod="openstack/aodh-0" Feb 19 14:57:06 crc kubenswrapper[4861]: I0219 14:57:06.412637 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 19 14:57:06 crc kubenswrapper[4861]: I0219 14:57:06.452842 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 14:57:06 crc kubenswrapper[4861]: I0219 14:57:06.616269 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 19 14:57:06 crc kubenswrapper[4861]: I0219 14:57:06.974194 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 19 14:57:07 crc kubenswrapper[4861]: I0219 14:57:07.622735 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1d8fd4f3-eb70-4979-9dc4-63796e85f703","Type":"ContainerStarted","Data":"bc9593c2e27d08cbc74c91c27683ba16153d7653f95c3a8d69e28a575edbf87a"} Feb 19 14:57:08 crc kubenswrapper[4861]: I0219 14:57:08.293608 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 14:57:08 crc 
kubenswrapper[4861]: I0219 14:57:08.296969 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="091c1801-446a-4c1e-ae29-93558e3d4a03" containerName="ceilometer-central-agent" containerID="cri-o://5437fa96f46a999c8bb6391668355b908b887b068e7901c274744b10dc968386" gracePeriod=30 Feb 19 14:57:08 crc kubenswrapper[4861]: I0219 14:57:08.297112 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="091c1801-446a-4c1e-ae29-93558e3d4a03" containerName="proxy-httpd" containerID="cri-o://3a546d253a678cc266b3176a07f879602c337f79a13f14188d91c6330aed623f" gracePeriod=30 Feb 19 14:57:08 crc kubenswrapper[4861]: I0219 14:57:08.297158 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="091c1801-446a-4c1e-ae29-93558e3d4a03" containerName="sg-core" containerID="cri-o://28acbac5d296699b2fc6299f09b08cbe87718774db1e617a864966daade59e76" gracePeriod=30 Feb 19 14:57:08 crc kubenswrapper[4861]: I0219 14:57:08.297191 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="091c1801-446a-4c1e-ae29-93558e3d4a03" containerName="ceilometer-notification-agent" containerID="cri-o://7bece126686e87e9f0aebb7f55e7809a327de392cf8a7f29d06660bb949e2578" gracePeriod=30 Feb 19 14:57:08 crc kubenswrapper[4861]: I0219 14:57:08.453269 4861 scope.go:117] "RemoveContainer" containerID="56717367569dd7d1cc9904b2239ad2c86719d2f8c5b6d7af1885658fa362a533" Feb 19 14:57:08 crc kubenswrapper[4861]: E0219 14:57:08.457571 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod091c1801_446a_4c1e_ae29_93558e3d4a03.slice/crio-28acbac5d296699b2fc6299f09b08cbe87718774db1e617a864966daade59e76.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod091c1801_446a_4c1e_ae29_93558e3d4a03.slice/crio-conmon-28acbac5d296699b2fc6299f09b08cbe87718774db1e617a864966daade59e76.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod091c1801_446a_4c1e_ae29_93558e3d4a03.slice/crio-3a546d253a678cc266b3176a07f879602c337f79a13f14188d91c6330aed623f.scope\": RecentStats: unable to find data in memory cache]" Feb 19 14:57:08 crc kubenswrapper[4861]: I0219 14:57:08.541379 4861 scope.go:117] "RemoveContainer" containerID="f0bc0f24589325250008866d3017a725b3dc3a72ac3f243e57485b921fa15fa8" Feb 19 14:57:08 crc kubenswrapper[4861]: I0219 14:57:08.575016 4861 scope.go:117] "RemoveContainer" containerID="6c0806e58528f39d4fc64c9309b66d305bdefb0834913bc114b50d599df1a02c" Feb 19 14:57:08 crc kubenswrapper[4861]: I0219 14:57:08.646255 4861 generic.go:334] "Generic (PLEG): container finished" podID="091c1801-446a-4c1e-ae29-93558e3d4a03" containerID="3a546d253a678cc266b3176a07f879602c337f79a13f14188d91c6330aed623f" exitCode=0 Feb 19 14:57:08 crc kubenswrapper[4861]: I0219 14:57:08.646303 4861 generic.go:334] "Generic (PLEG): container finished" podID="091c1801-446a-4c1e-ae29-93558e3d4a03" containerID="28acbac5d296699b2fc6299f09b08cbe87718774db1e617a864966daade59e76" exitCode=2 Feb 19 14:57:08 crc kubenswrapper[4861]: I0219 14:57:08.646315 4861 generic.go:334] "Generic (PLEG): container finished" podID="091c1801-446a-4c1e-ae29-93558e3d4a03" containerID="5437fa96f46a999c8bb6391668355b908b887b068e7901c274744b10dc968386" exitCode=0 Feb 19 14:57:08 crc kubenswrapper[4861]: I0219 14:57:08.646310 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"091c1801-446a-4c1e-ae29-93558e3d4a03","Type":"ContainerDied","Data":"3a546d253a678cc266b3176a07f879602c337f79a13f14188d91c6330aed623f"} Feb 19 14:57:08 crc kubenswrapper[4861]: I0219 14:57:08.646350 4861 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"091c1801-446a-4c1e-ae29-93558e3d4a03","Type":"ContainerDied","Data":"28acbac5d296699b2fc6299f09b08cbe87718774db1e617a864966daade59e76"} Feb 19 14:57:08 crc kubenswrapper[4861]: I0219 14:57:08.646362 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"091c1801-446a-4c1e-ae29-93558e3d4a03","Type":"ContainerDied","Data":"5437fa96f46a999c8bb6391668355b908b887b068e7901c274744b10dc968386"} Feb 19 14:57:08 crc kubenswrapper[4861]: I0219 14:57:08.648939 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1d8fd4f3-eb70-4979-9dc4-63796e85f703","Type":"ContainerStarted","Data":"9bd298e78bc03c229eb83e81ab246a4146a6a0e2cf1802aa1d7b313e5b265818"} Feb 19 14:57:08 crc kubenswrapper[4861]: I0219 14:57:08.680023 4861 scope.go:117] "RemoveContainer" containerID="523df994df4f0738f489e91b8585f738e3e6a1dffa3d7d9e2468c7dcb0679ae8" Feb 19 14:57:08 crc kubenswrapper[4861]: I0219 14:57:08.754179 4861 scope.go:117] "RemoveContainer" containerID="e49a31cf95250e7adf69edcfabe0d99a7b41230eb3f13934ba4aeb0b17aee2e2" Feb 19 14:57:09 crc kubenswrapper[4861]: I0219 14:57:09.076810 4861 scope.go:117] "RemoveContainer" containerID="ecc8f3cd66a543efb8916835b9e8b235c75a45251e63a7204c3429b31d33bd00" Feb 19 14:57:09 crc kubenswrapper[4861]: I0219 14:57:09.659245 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1d8fd4f3-eb70-4979-9dc4-63796e85f703","Type":"ContainerStarted","Data":"a861383e343b8742fac416cba5edd3d5233ddb80244d8fbdb069184f9441cce4"} Feb 19 14:57:09 crc kubenswrapper[4861]: I0219 14:57:09.883834 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.241799 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.414513 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/091c1801-446a-4c1e-ae29-93558e3d4a03-sg-core-conf-yaml\") pod \"091c1801-446a-4c1e-ae29-93558e3d4a03\" (UID: \"091c1801-446a-4c1e-ae29-93558e3d4a03\") " Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.414581 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/091c1801-446a-4c1e-ae29-93558e3d4a03-config-data\") pod \"091c1801-446a-4c1e-ae29-93558e3d4a03\" (UID: \"091c1801-446a-4c1e-ae29-93558e3d4a03\") " Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.414608 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9k2h\" (UniqueName: \"kubernetes.io/projected/091c1801-446a-4c1e-ae29-93558e3d4a03-kube-api-access-v9k2h\") pod \"091c1801-446a-4c1e-ae29-93558e3d4a03\" (UID: \"091c1801-446a-4c1e-ae29-93558e3d4a03\") " Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.414642 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/091c1801-446a-4c1e-ae29-93558e3d4a03-run-httpd\") pod \"091c1801-446a-4c1e-ae29-93558e3d4a03\" (UID: \"091c1801-446a-4c1e-ae29-93558e3d4a03\") " Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.414663 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/091c1801-446a-4c1e-ae29-93558e3d4a03-combined-ca-bundle\") pod \"091c1801-446a-4c1e-ae29-93558e3d4a03\" (UID: \"091c1801-446a-4c1e-ae29-93558e3d4a03\") " Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.414743 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/091c1801-446a-4c1e-ae29-93558e3d4a03-scripts\") pod \"091c1801-446a-4c1e-ae29-93558e3d4a03\" (UID: \"091c1801-446a-4c1e-ae29-93558e3d4a03\") " Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.414797 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/091c1801-446a-4c1e-ae29-93558e3d4a03-log-httpd\") pod \"091c1801-446a-4c1e-ae29-93558e3d4a03\" (UID: \"091c1801-446a-4c1e-ae29-93558e3d4a03\") " Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.415161 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/091c1801-446a-4c1e-ae29-93558e3d4a03-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "091c1801-446a-4c1e-ae29-93558e3d4a03" (UID: "091c1801-446a-4c1e-ae29-93558e3d4a03"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.415321 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/091c1801-446a-4c1e-ae29-93558e3d4a03-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "091c1801-446a-4c1e-ae29-93558e3d4a03" (UID: "091c1801-446a-4c1e-ae29-93558e3d4a03"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.422658 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/091c1801-446a-4c1e-ae29-93558e3d4a03-kube-api-access-v9k2h" (OuterVolumeSpecName: "kube-api-access-v9k2h") pod "091c1801-446a-4c1e-ae29-93558e3d4a03" (UID: "091c1801-446a-4c1e-ae29-93558e3d4a03"). InnerVolumeSpecName "kube-api-access-v9k2h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.431775 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/091c1801-446a-4c1e-ae29-93558e3d4a03-scripts" (OuterVolumeSpecName: "scripts") pod "091c1801-446a-4c1e-ae29-93558e3d4a03" (UID: "091c1801-446a-4c1e-ae29-93558e3d4a03"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.458745 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/091c1801-446a-4c1e-ae29-93558e3d4a03-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "091c1801-446a-4c1e-ae29-93558e3d4a03" (UID: "091c1801-446a-4c1e-ae29-93558e3d4a03"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.516663 4861 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/091c1801-446a-4c1e-ae29-93558e3d4a03-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.516692 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9k2h\" (UniqueName: \"kubernetes.io/projected/091c1801-446a-4c1e-ae29-93558e3d4a03-kube-api-access-v9k2h\") on node \"crc\" DevicePath \"\"" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.516706 4861 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/091c1801-446a-4c1e-ae29-93558e3d4a03-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.516717 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/091c1801-446a-4c1e-ae29-93558e3d4a03-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:57:10 crc 
kubenswrapper[4861]: I0219 14:57:10.516725 4861 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/091c1801-446a-4c1e-ae29-93558e3d4a03-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.532201 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/091c1801-446a-4c1e-ae29-93558e3d4a03-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "091c1801-446a-4c1e-ae29-93558e3d4a03" (UID: "091c1801-446a-4c1e-ae29-93558e3d4a03"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.549283 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/091c1801-446a-4c1e-ae29-93558e3d4a03-config-data" (OuterVolumeSpecName: "config-data") pod "091c1801-446a-4c1e-ae29-93558e3d4a03" (UID: "091c1801-446a-4c1e-ae29-93558e3d4a03"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.626215 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/091c1801-446a-4c1e-ae29-93558e3d4a03-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.626251 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/091c1801-446a-4c1e-ae29-93558e3d4a03-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.686988 4861 generic.go:334] "Generic (PLEG): container finished" podID="091c1801-446a-4c1e-ae29-93558e3d4a03" containerID="7bece126686e87e9f0aebb7f55e7809a327de392cf8a7f29d06660bb949e2578" exitCode=0 Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.687029 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"091c1801-446a-4c1e-ae29-93558e3d4a03","Type":"ContainerDied","Data":"7bece126686e87e9f0aebb7f55e7809a327de392cf8a7f29d06660bb949e2578"} Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.687058 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"091c1801-446a-4c1e-ae29-93558e3d4a03","Type":"ContainerDied","Data":"6cce6f3acff91e2d1483e74e242f262a92d767bfcccdc30c072f7f499a5a594a"} Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.687171 4861 scope.go:117] "RemoveContainer" containerID="3a546d253a678cc266b3176a07f879602c337f79a13f14188d91c6330aed623f" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.687326 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.732190 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.754171 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.768869 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 14:57:10 crc kubenswrapper[4861]: E0219 14:57:10.769363 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="091c1801-446a-4c1e-ae29-93558e3d4a03" containerName="sg-core" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.769379 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="091c1801-446a-4c1e-ae29-93558e3d4a03" containerName="sg-core" Feb 19 14:57:10 crc kubenswrapper[4861]: E0219 14:57:10.769397 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="091c1801-446a-4c1e-ae29-93558e3d4a03" containerName="proxy-httpd" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.769404 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="091c1801-446a-4c1e-ae29-93558e3d4a03" containerName="proxy-httpd" Feb 19 14:57:10 crc kubenswrapper[4861]: E0219 14:57:10.769437 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="091c1801-446a-4c1e-ae29-93558e3d4a03" containerName="ceilometer-notification-agent" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.769445 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="091c1801-446a-4c1e-ae29-93558e3d4a03" containerName="ceilometer-notification-agent" Feb 19 14:57:10 crc kubenswrapper[4861]: E0219 14:57:10.769470 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="091c1801-446a-4c1e-ae29-93558e3d4a03" containerName="ceilometer-central-agent" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.769477 4861 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="091c1801-446a-4c1e-ae29-93558e3d4a03" containerName="ceilometer-central-agent" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.769670 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="091c1801-446a-4c1e-ae29-93558e3d4a03" containerName="ceilometer-notification-agent" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.769683 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="091c1801-446a-4c1e-ae29-93558e3d4a03" containerName="proxy-httpd" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.769701 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="091c1801-446a-4c1e-ae29-93558e3d4a03" containerName="ceilometer-central-agent" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.769711 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="091c1801-446a-4c1e-ae29-93558e3d4a03" containerName="sg-core" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.771560 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.776800 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.776980 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.778201 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.859491 4861 scope.go:117] "RemoveContainer" containerID="28acbac5d296699b2fc6299f09b08cbe87718774db1e617a864966daade59e76" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.876131 4861 scope.go:117] "RemoveContainer" containerID="7bece126686e87e9f0aebb7f55e7809a327de392cf8a7f29d06660bb949e2578" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.920207 4861 scope.go:117] "RemoveContainer" containerID="5437fa96f46a999c8bb6391668355b908b887b068e7901c274744b10dc968386" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.932387 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5db1f221-04d1-4f9a-81c4-1865068b0394-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5db1f221-04d1-4f9a-81c4-1865068b0394\") " pod="openstack/ceilometer-0" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.932713 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5db1f221-04d1-4f9a-81c4-1865068b0394-scripts\") pod \"ceilometer-0\" (UID: \"5db1f221-04d1-4f9a-81c4-1865068b0394\") " pod="openstack/ceilometer-0" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.932823 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/5db1f221-04d1-4f9a-81c4-1865068b0394-run-httpd\") pod \"ceilometer-0\" (UID: \"5db1f221-04d1-4f9a-81c4-1865068b0394\") " pod="openstack/ceilometer-0" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.932864 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db1f221-04d1-4f9a-81c4-1865068b0394-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5db1f221-04d1-4f9a-81c4-1865068b0394\") " pod="openstack/ceilometer-0" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.932916 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db1f221-04d1-4f9a-81c4-1865068b0394-config-data\") pod \"ceilometer-0\" (UID: \"5db1f221-04d1-4f9a-81c4-1865068b0394\") " pod="openstack/ceilometer-0" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.932950 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5db1f221-04d1-4f9a-81c4-1865068b0394-log-httpd\") pod \"ceilometer-0\" (UID: \"5db1f221-04d1-4f9a-81c4-1865068b0394\") " pod="openstack/ceilometer-0" Feb 19 14:57:10 crc kubenswrapper[4861]: I0219 14:57:10.932971 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72rzp\" (UniqueName: \"kubernetes.io/projected/5db1f221-04d1-4f9a-81c4-1865068b0394-kube-api-access-72rzp\") pod \"ceilometer-0\" (UID: \"5db1f221-04d1-4f9a-81c4-1865068b0394\") " pod="openstack/ceilometer-0" Feb 19 14:57:11 crc kubenswrapper[4861]: I0219 14:57:11.032833 4861 scope.go:117] "RemoveContainer" containerID="3a546d253a678cc266b3176a07f879602c337f79a13f14188d91c6330aed623f" Feb 19 14:57:11 crc kubenswrapper[4861]: E0219 14:57:11.033574 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"3a546d253a678cc266b3176a07f879602c337f79a13f14188d91c6330aed623f\": container with ID starting with 3a546d253a678cc266b3176a07f879602c337f79a13f14188d91c6330aed623f not found: ID does not exist" containerID="3a546d253a678cc266b3176a07f879602c337f79a13f14188d91c6330aed623f" Feb 19 14:57:11 crc kubenswrapper[4861]: I0219 14:57:11.033617 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a546d253a678cc266b3176a07f879602c337f79a13f14188d91c6330aed623f"} err="failed to get container status \"3a546d253a678cc266b3176a07f879602c337f79a13f14188d91c6330aed623f\": rpc error: code = NotFound desc = could not find container \"3a546d253a678cc266b3176a07f879602c337f79a13f14188d91c6330aed623f\": container with ID starting with 3a546d253a678cc266b3176a07f879602c337f79a13f14188d91c6330aed623f not found: ID does not exist" Feb 19 14:57:11 crc kubenswrapper[4861]: I0219 14:57:11.033643 4861 scope.go:117] "RemoveContainer" containerID="28acbac5d296699b2fc6299f09b08cbe87718774db1e617a864966daade59e76" Feb 19 14:57:11 crc kubenswrapper[4861]: E0219 14:57:11.034178 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28acbac5d296699b2fc6299f09b08cbe87718774db1e617a864966daade59e76\": container with ID starting with 28acbac5d296699b2fc6299f09b08cbe87718774db1e617a864966daade59e76 not found: ID does not exist" containerID="28acbac5d296699b2fc6299f09b08cbe87718774db1e617a864966daade59e76" Feb 19 14:57:11 crc kubenswrapper[4861]: I0219 14:57:11.034219 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28acbac5d296699b2fc6299f09b08cbe87718774db1e617a864966daade59e76"} err="failed to get container status \"28acbac5d296699b2fc6299f09b08cbe87718774db1e617a864966daade59e76\": rpc error: code = NotFound desc = could not find container 
\"28acbac5d296699b2fc6299f09b08cbe87718774db1e617a864966daade59e76\": container with ID starting with 28acbac5d296699b2fc6299f09b08cbe87718774db1e617a864966daade59e76 not found: ID does not exist" Feb 19 14:57:11 crc kubenswrapper[4861]: I0219 14:57:11.034245 4861 scope.go:117] "RemoveContainer" containerID="7bece126686e87e9f0aebb7f55e7809a327de392cf8a7f29d06660bb949e2578" Feb 19 14:57:11 crc kubenswrapper[4861]: I0219 14:57:11.034317 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72rzp\" (UniqueName: \"kubernetes.io/projected/5db1f221-04d1-4f9a-81c4-1865068b0394-kube-api-access-72rzp\") pod \"ceilometer-0\" (UID: \"5db1f221-04d1-4f9a-81c4-1865068b0394\") " pod="openstack/ceilometer-0" Feb 19 14:57:11 crc kubenswrapper[4861]: I0219 14:57:11.034724 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5db1f221-04d1-4f9a-81c4-1865068b0394-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5db1f221-04d1-4f9a-81c4-1865068b0394\") " pod="openstack/ceilometer-0" Feb 19 14:57:11 crc kubenswrapper[4861]: I0219 14:57:11.034863 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5db1f221-04d1-4f9a-81c4-1865068b0394-scripts\") pod \"ceilometer-0\" (UID: \"5db1f221-04d1-4f9a-81c4-1865068b0394\") " pod="openstack/ceilometer-0" Feb 19 14:57:11 crc kubenswrapper[4861]: I0219 14:57:11.035097 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5db1f221-04d1-4f9a-81c4-1865068b0394-run-httpd\") pod \"ceilometer-0\" (UID: \"5db1f221-04d1-4f9a-81c4-1865068b0394\") " pod="openstack/ceilometer-0" Feb 19 14:57:11 crc kubenswrapper[4861]: I0219 14:57:11.035167 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5db1f221-04d1-4f9a-81c4-1865068b0394-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5db1f221-04d1-4f9a-81c4-1865068b0394\") " pod="openstack/ceilometer-0" Feb 19 14:57:11 crc kubenswrapper[4861]: I0219 14:57:11.035352 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db1f221-04d1-4f9a-81c4-1865068b0394-config-data\") pod \"ceilometer-0\" (UID: \"5db1f221-04d1-4f9a-81c4-1865068b0394\") " pod="openstack/ceilometer-0" Feb 19 14:57:11 crc kubenswrapper[4861]: I0219 14:57:11.035392 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5db1f221-04d1-4f9a-81c4-1865068b0394-log-httpd\") pod \"ceilometer-0\" (UID: \"5db1f221-04d1-4f9a-81c4-1865068b0394\") " pod="openstack/ceilometer-0" Feb 19 14:57:11 crc kubenswrapper[4861]: I0219 14:57:11.036015 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5db1f221-04d1-4f9a-81c4-1865068b0394-log-httpd\") pod \"ceilometer-0\" (UID: \"5db1f221-04d1-4f9a-81c4-1865068b0394\") " pod="openstack/ceilometer-0" Feb 19 14:57:11 crc kubenswrapper[4861]: I0219 14:57:11.037500 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5db1f221-04d1-4f9a-81c4-1865068b0394-run-httpd\") pod \"ceilometer-0\" (UID: \"5db1f221-04d1-4f9a-81c4-1865068b0394\") " pod="openstack/ceilometer-0" Feb 19 14:57:11 crc kubenswrapper[4861]: E0219 14:57:11.037630 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bece126686e87e9f0aebb7f55e7809a327de392cf8a7f29d06660bb949e2578\": container with ID starting with 7bece126686e87e9f0aebb7f55e7809a327de392cf8a7f29d06660bb949e2578 not found: ID does not exist" 
containerID="7bece126686e87e9f0aebb7f55e7809a327de392cf8a7f29d06660bb949e2578" Feb 19 14:57:11 crc kubenswrapper[4861]: I0219 14:57:11.037662 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bece126686e87e9f0aebb7f55e7809a327de392cf8a7f29d06660bb949e2578"} err="failed to get container status \"7bece126686e87e9f0aebb7f55e7809a327de392cf8a7f29d06660bb949e2578\": rpc error: code = NotFound desc = could not find container \"7bece126686e87e9f0aebb7f55e7809a327de392cf8a7f29d06660bb949e2578\": container with ID starting with 7bece126686e87e9f0aebb7f55e7809a327de392cf8a7f29d06660bb949e2578 not found: ID does not exist" Feb 19 14:57:11 crc kubenswrapper[4861]: I0219 14:57:11.037693 4861 scope.go:117] "RemoveContainer" containerID="5437fa96f46a999c8bb6391668355b908b887b068e7901c274744b10dc968386" Feb 19 14:57:11 crc kubenswrapper[4861]: E0219 14:57:11.038838 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5437fa96f46a999c8bb6391668355b908b887b068e7901c274744b10dc968386\": container with ID starting with 5437fa96f46a999c8bb6391668355b908b887b068e7901c274744b10dc968386 not found: ID does not exist" containerID="5437fa96f46a999c8bb6391668355b908b887b068e7901c274744b10dc968386" Feb 19 14:57:11 crc kubenswrapper[4861]: I0219 14:57:11.038916 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5437fa96f46a999c8bb6391668355b908b887b068e7901c274744b10dc968386"} err="failed to get container status \"5437fa96f46a999c8bb6391668355b908b887b068e7901c274744b10dc968386\": rpc error: code = NotFound desc = could not find container \"5437fa96f46a999c8bb6391668355b908b887b068e7901c274744b10dc968386\": container with ID starting with 5437fa96f46a999c8bb6391668355b908b887b068e7901c274744b10dc968386 not found: ID does not exist" Feb 19 14:57:11 crc kubenswrapper[4861]: I0219 14:57:11.040109 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5db1f221-04d1-4f9a-81c4-1865068b0394-scripts\") pod \"ceilometer-0\" (UID: \"5db1f221-04d1-4f9a-81c4-1865068b0394\") " pod="openstack/ceilometer-0" Feb 19 14:57:11 crc kubenswrapper[4861]: I0219 14:57:11.040402 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5db1f221-04d1-4f9a-81c4-1865068b0394-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5db1f221-04d1-4f9a-81c4-1865068b0394\") " pod="openstack/ceilometer-0" Feb 19 14:57:11 crc kubenswrapper[4861]: I0219 14:57:11.040548 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db1f221-04d1-4f9a-81c4-1865068b0394-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5db1f221-04d1-4f9a-81c4-1865068b0394\") " pod="openstack/ceilometer-0" Feb 19 14:57:11 crc kubenswrapper[4861]: I0219 14:57:11.041299 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db1f221-04d1-4f9a-81c4-1865068b0394-config-data\") pod \"ceilometer-0\" (UID: \"5db1f221-04d1-4f9a-81c4-1865068b0394\") " pod="openstack/ceilometer-0" Feb 19 14:57:11 crc kubenswrapper[4861]: I0219 14:57:11.059001 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72rzp\" (UniqueName: \"kubernetes.io/projected/5db1f221-04d1-4f9a-81c4-1865068b0394-kube-api-access-72rzp\") pod \"ceilometer-0\" (UID: \"5db1f221-04d1-4f9a-81c4-1865068b0394\") " pod="openstack/ceilometer-0" Feb 19 14:57:11 crc kubenswrapper[4861]: I0219 14:57:11.088346 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 14:57:11 crc kubenswrapper[4861]: I0219 14:57:11.597380 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 14:57:11 crc kubenswrapper[4861]: W0219 14:57:11.601596 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5db1f221_04d1_4f9a_81c4_1865068b0394.slice/crio-ac0e818f8364329ca0b3dadd3df74902ceb4f5eb298d2814f8c7b52009a1b10c WatchSource:0}: Error finding container ac0e818f8364329ca0b3dadd3df74902ceb4f5eb298d2814f8c7b52009a1b10c: Status 404 returned error can't find the container with id ac0e818f8364329ca0b3dadd3df74902ceb4f5eb298d2814f8c7b52009a1b10c Feb 19 14:57:11 crc kubenswrapper[4861]: I0219 14:57:11.709460 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5db1f221-04d1-4f9a-81c4-1865068b0394","Type":"ContainerStarted","Data":"ac0e818f8364329ca0b3dadd3df74902ceb4f5eb298d2814f8c7b52009a1b10c"} Feb 19 14:57:11 crc kubenswrapper[4861]: I0219 14:57:11.711754 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1d8fd4f3-eb70-4979-9dc4-63796e85f703","Type":"ContainerStarted","Data":"7e6f8cf75a10036900ef8e86f3b00305301f6e33796b7765571f13a51755c47e"} Feb 19 14:57:11 crc kubenswrapper[4861]: I0219 14:57:11.992972 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="091c1801-446a-4c1e-ae29-93558e3d4a03" path="/var/lib/kubelet/pods/091c1801-446a-4c1e-ae29-93558e3d4a03/volumes" Feb 19 14:57:12 crc kubenswrapper[4861]: I0219 14:57:12.330450 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 14:57:12 crc kubenswrapper[4861]: I0219 14:57:12.730313 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"5db1f221-04d1-4f9a-81c4-1865068b0394","Type":"ContainerStarted","Data":"00c86f1fa50178a597e49bb1ee5e22419cbec18adff0254a7d4d8ec85eebc3c3"} Feb 19 14:57:13 crc kubenswrapper[4861]: I0219 14:57:13.752033 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1d8fd4f3-eb70-4979-9dc4-63796e85f703","Type":"ContainerStarted","Data":"f315601e284bdf42aa51a4506ac0e7901431744a102d0e398f4cd063833bd323"} Feb 19 14:57:13 crc kubenswrapper[4861]: I0219 14:57:13.752645 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="1d8fd4f3-eb70-4979-9dc4-63796e85f703" containerName="aodh-api" containerID="cri-o://9bd298e78bc03c229eb83e81ab246a4146a6a0e2cf1802aa1d7b313e5b265818" gracePeriod=30 Feb 19 14:57:13 crc kubenswrapper[4861]: I0219 14:57:13.752725 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="1d8fd4f3-eb70-4979-9dc4-63796e85f703" containerName="aodh-listener" containerID="cri-o://f315601e284bdf42aa51a4506ac0e7901431744a102d0e398f4cd063833bd323" gracePeriod=30 Feb 19 14:57:13 crc kubenswrapper[4861]: I0219 14:57:13.753117 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="1d8fd4f3-eb70-4979-9dc4-63796e85f703" containerName="aodh-notifier" containerID="cri-o://7e6f8cf75a10036900ef8e86f3b00305301f6e33796b7765571f13a51755c47e" gracePeriod=30 Feb 19 14:57:13 crc kubenswrapper[4861]: I0219 14:57:13.753145 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="1d8fd4f3-eb70-4979-9dc4-63796e85f703" containerName="aodh-evaluator" containerID="cri-o://a861383e343b8742fac416cba5edd3d5233ddb80244d8fbdb069184f9441cce4" gracePeriod=30 Feb 19 14:57:13 crc kubenswrapper[4861]: I0219 14:57:13.784409 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.281820719 
podStartE2EDuration="7.784393412s" podCreationTimestamp="2026-02-19 14:57:06 +0000 UTC" firstStartedPulling="2026-02-19 14:57:06.979567383 +0000 UTC m=+6441.640670611" lastFinishedPulling="2026-02-19 14:57:12.482140066 +0000 UTC m=+6447.143243304" observedRunningTime="2026-02-19 14:57:13.783702464 +0000 UTC m=+6448.444805722" watchObservedRunningTime="2026-02-19 14:57:13.784393412 +0000 UTC m=+6448.445496630" Feb 19 14:57:14 crc kubenswrapper[4861]: I0219 14:57:14.772212 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5db1f221-04d1-4f9a-81c4-1865068b0394","Type":"ContainerStarted","Data":"9791093d871cfb90744bcd0ee7ae408eb6ca315eea3a0f69dce7af073be20b6c"} Feb 19 14:57:14 crc kubenswrapper[4861]: I0219 14:57:14.772746 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5db1f221-04d1-4f9a-81c4-1865068b0394","Type":"ContainerStarted","Data":"3b35d407e76c8c85b6a5fd6cc1371d814ab67cabf16907bbe565d3491e626a14"} Feb 19 14:57:14 crc kubenswrapper[4861]: I0219 14:57:14.774796 4861 generic.go:334] "Generic (PLEG): container finished" podID="1d8fd4f3-eb70-4979-9dc4-63796e85f703" containerID="a861383e343b8742fac416cba5edd3d5233ddb80244d8fbdb069184f9441cce4" exitCode=0 Feb 19 14:57:14 crc kubenswrapper[4861]: I0219 14:57:14.774825 4861 generic.go:334] "Generic (PLEG): container finished" podID="1d8fd4f3-eb70-4979-9dc4-63796e85f703" containerID="9bd298e78bc03c229eb83e81ab246a4146a6a0e2cf1802aa1d7b313e5b265818" exitCode=0 Feb 19 14:57:14 crc kubenswrapper[4861]: I0219 14:57:14.774855 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1d8fd4f3-eb70-4979-9dc4-63796e85f703","Type":"ContainerDied","Data":"a861383e343b8742fac416cba5edd3d5233ddb80244d8fbdb069184f9441cce4"} Feb 19 14:57:14 crc kubenswrapper[4861]: I0219 14:57:14.774882 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"1d8fd4f3-eb70-4979-9dc4-63796e85f703","Type":"ContainerDied","Data":"9bd298e78bc03c229eb83e81ab246a4146a6a0e2cf1802aa1d7b313e5b265818"} Feb 19 14:57:16 crc kubenswrapper[4861]: I0219 14:57:16.807386 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5db1f221-04d1-4f9a-81c4-1865068b0394","Type":"ContainerStarted","Data":"67ccc2b09084feb4c5a9b2a26bdfe847fc24c65690af42254121074406b56426"} Feb 19 14:57:16 crc kubenswrapper[4861]: I0219 14:57:16.808081 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5db1f221-04d1-4f9a-81c4-1865068b0394" containerName="ceilometer-central-agent" containerID="cri-o://00c86f1fa50178a597e49bb1ee5e22419cbec18adff0254a7d4d8ec85eebc3c3" gracePeriod=30 Feb 19 14:57:16 crc kubenswrapper[4861]: I0219 14:57:16.808362 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 14:57:16 crc kubenswrapper[4861]: I0219 14:57:16.808746 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5db1f221-04d1-4f9a-81c4-1865068b0394" containerName="proxy-httpd" containerID="cri-o://67ccc2b09084feb4c5a9b2a26bdfe847fc24c65690af42254121074406b56426" gracePeriod=30 Feb 19 14:57:16 crc kubenswrapper[4861]: I0219 14:57:16.808808 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5db1f221-04d1-4f9a-81c4-1865068b0394" containerName="sg-core" containerID="cri-o://9791093d871cfb90744bcd0ee7ae408eb6ca315eea3a0f69dce7af073be20b6c" gracePeriod=30 Feb 19 14:57:16 crc kubenswrapper[4861]: I0219 14:57:16.808854 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5db1f221-04d1-4f9a-81c4-1865068b0394" containerName="ceilometer-notification-agent" containerID="cri-o://3b35d407e76c8c85b6a5fd6cc1371d814ab67cabf16907bbe565d3491e626a14" 
gracePeriod=30 Feb 19 14:57:16 crc kubenswrapper[4861]: I0219 14:57:16.846036 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.684335258 podStartE2EDuration="6.846009316s" podCreationTimestamp="2026-02-19 14:57:10 +0000 UTC" firstStartedPulling="2026-02-19 14:57:11.604682852 +0000 UTC m=+6446.265786080" lastFinishedPulling="2026-02-19 14:57:15.76635691 +0000 UTC m=+6450.427460138" observedRunningTime="2026-02-19 14:57:16.834633521 +0000 UTC m=+6451.495736769" watchObservedRunningTime="2026-02-19 14:57:16.846009316 +0000 UTC m=+6451.507112574" Feb 19 14:57:17 crc kubenswrapper[4861]: I0219 14:57:17.862501 4861 generic.go:334] "Generic (PLEG): container finished" podID="5db1f221-04d1-4f9a-81c4-1865068b0394" containerID="67ccc2b09084feb4c5a9b2a26bdfe847fc24c65690af42254121074406b56426" exitCode=0 Feb 19 14:57:17 crc kubenswrapper[4861]: I0219 14:57:17.864081 4861 generic.go:334] "Generic (PLEG): container finished" podID="5db1f221-04d1-4f9a-81c4-1865068b0394" containerID="9791093d871cfb90744bcd0ee7ae408eb6ca315eea3a0f69dce7af073be20b6c" exitCode=2 Feb 19 14:57:17 crc kubenswrapper[4861]: I0219 14:57:17.864123 4861 generic.go:334] "Generic (PLEG): container finished" podID="5db1f221-04d1-4f9a-81c4-1865068b0394" containerID="3b35d407e76c8c85b6a5fd6cc1371d814ab67cabf16907bbe565d3491e626a14" exitCode=0 Feb 19 14:57:17 crc kubenswrapper[4861]: I0219 14:57:17.862633 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5db1f221-04d1-4f9a-81c4-1865068b0394","Type":"ContainerDied","Data":"67ccc2b09084feb4c5a9b2a26bdfe847fc24c65690af42254121074406b56426"} Feb 19 14:57:17 crc kubenswrapper[4861]: I0219 14:57:17.864203 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5db1f221-04d1-4f9a-81c4-1865068b0394","Type":"ContainerDied","Data":"9791093d871cfb90744bcd0ee7ae408eb6ca315eea3a0f69dce7af073be20b6c"} Feb 19 14:57:17 
crc kubenswrapper[4861]: I0219 14:57:17.864230 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5db1f221-04d1-4f9a-81c4-1865068b0394","Type":"ContainerDied","Data":"3b35d407e76c8c85b6a5fd6cc1371d814ab67cabf16907bbe565d3491e626a14"} Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.586699 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.692935 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5db1f221-04d1-4f9a-81c4-1865068b0394-sg-core-conf-yaml\") pod \"5db1f221-04d1-4f9a-81c4-1865068b0394\" (UID: \"5db1f221-04d1-4f9a-81c4-1865068b0394\") " Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.693028 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db1f221-04d1-4f9a-81c4-1865068b0394-combined-ca-bundle\") pod \"5db1f221-04d1-4f9a-81c4-1865068b0394\" (UID: \"5db1f221-04d1-4f9a-81c4-1865068b0394\") " Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.693096 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5db1f221-04d1-4f9a-81c4-1865068b0394-run-httpd\") pod \"5db1f221-04d1-4f9a-81c4-1865068b0394\" (UID: \"5db1f221-04d1-4f9a-81c4-1865068b0394\") " Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.693190 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72rzp\" (UniqueName: \"kubernetes.io/projected/5db1f221-04d1-4f9a-81c4-1865068b0394-kube-api-access-72rzp\") pod \"5db1f221-04d1-4f9a-81c4-1865068b0394\" (UID: \"5db1f221-04d1-4f9a-81c4-1865068b0394\") " Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.693226 4861 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5db1f221-04d1-4f9a-81c4-1865068b0394-log-httpd\") pod \"5db1f221-04d1-4f9a-81c4-1865068b0394\" (UID: \"5db1f221-04d1-4f9a-81c4-1865068b0394\") " Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.693309 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5db1f221-04d1-4f9a-81c4-1865068b0394-scripts\") pod \"5db1f221-04d1-4f9a-81c4-1865068b0394\" (UID: \"5db1f221-04d1-4f9a-81c4-1865068b0394\") " Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.693449 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db1f221-04d1-4f9a-81c4-1865068b0394-config-data\") pod \"5db1f221-04d1-4f9a-81c4-1865068b0394\" (UID: \"5db1f221-04d1-4f9a-81c4-1865068b0394\") " Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.693717 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5db1f221-04d1-4f9a-81c4-1865068b0394-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5db1f221-04d1-4f9a-81c4-1865068b0394" (UID: "5db1f221-04d1-4f9a-81c4-1865068b0394"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.694332 4861 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5db1f221-04d1-4f9a-81c4-1865068b0394-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.694692 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5db1f221-04d1-4f9a-81c4-1865068b0394-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5db1f221-04d1-4f9a-81c4-1865068b0394" (UID: "5db1f221-04d1-4f9a-81c4-1865068b0394"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.698955 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5db1f221-04d1-4f9a-81c4-1865068b0394-kube-api-access-72rzp" (OuterVolumeSpecName: "kube-api-access-72rzp") pod "5db1f221-04d1-4f9a-81c4-1865068b0394" (UID: "5db1f221-04d1-4f9a-81c4-1865068b0394"). InnerVolumeSpecName "kube-api-access-72rzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.701719 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5db1f221-04d1-4f9a-81c4-1865068b0394-scripts" (OuterVolumeSpecName: "scripts") pod "5db1f221-04d1-4f9a-81c4-1865068b0394" (UID: "5db1f221-04d1-4f9a-81c4-1865068b0394"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.731349 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5db1f221-04d1-4f9a-81c4-1865068b0394-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5db1f221-04d1-4f9a-81c4-1865068b0394" (UID: "5db1f221-04d1-4f9a-81c4-1865068b0394"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.788924 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5db1f221-04d1-4f9a-81c4-1865068b0394-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5db1f221-04d1-4f9a-81c4-1865068b0394" (UID: "5db1f221-04d1-4f9a-81c4-1865068b0394"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.797114 4861 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5db1f221-04d1-4f9a-81c4-1865068b0394-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.797151 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db1f221-04d1-4f9a-81c4-1865068b0394-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.797168 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72rzp\" (UniqueName: \"kubernetes.io/projected/5db1f221-04d1-4f9a-81c4-1865068b0394-kube-api-access-72rzp\") on node \"crc\" DevicePath \"\"" Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.797186 4861 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5db1f221-04d1-4f9a-81c4-1865068b0394-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.797201 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5db1f221-04d1-4f9a-81c4-1865068b0394-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.810040 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5db1f221-04d1-4f9a-81c4-1865068b0394-config-data" (OuterVolumeSpecName: "config-data") pod "5db1f221-04d1-4f9a-81c4-1865068b0394" (UID: "5db1f221-04d1-4f9a-81c4-1865068b0394"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.899219 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db1f221-04d1-4f9a-81c4-1865068b0394-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.905149 4861 generic.go:334] "Generic (PLEG): container finished" podID="5db1f221-04d1-4f9a-81c4-1865068b0394" containerID="00c86f1fa50178a597e49bb1ee5e22419cbec18adff0254a7d4d8ec85eebc3c3" exitCode=0 Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.905194 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5db1f221-04d1-4f9a-81c4-1865068b0394","Type":"ContainerDied","Data":"00c86f1fa50178a597e49bb1ee5e22419cbec18adff0254a7d4d8ec85eebc3c3"} Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.905275 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5db1f221-04d1-4f9a-81c4-1865068b0394","Type":"ContainerDied","Data":"ac0e818f8364329ca0b3dadd3df74902ceb4f5eb298d2814f8c7b52009a1b10c"} Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.905307 4861 scope.go:117] "RemoveContainer" containerID="67ccc2b09084feb4c5a9b2a26bdfe847fc24c65690af42254121074406b56426" Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.905306 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.929229 4861 scope.go:117] "RemoveContainer" containerID="9791093d871cfb90744bcd0ee7ae408eb6ca315eea3a0f69dce7af073be20b6c" Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.967579 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.970851 4861 scope.go:117] "RemoveContainer" containerID="3b35d407e76c8c85b6a5fd6cc1371d814ab67cabf16907bbe565d3491e626a14" Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.986505 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.998510 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 14:57:20 crc kubenswrapper[4861]: E0219 14:57:20.999208 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db1f221-04d1-4f9a-81c4-1865068b0394" containerName="ceilometer-central-agent" Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.999234 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db1f221-04d1-4f9a-81c4-1865068b0394" containerName="ceilometer-central-agent" Feb 19 14:57:20 crc kubenswrapper[4861]: E0219 14:57:20.999248 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db1f221-04d1-4f9a-81c4-1865068b0394" containerName="ceilometer-notification-agent" Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.999258 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db1f221-04d1-4f9a-81c4-1865068b0394" containerName="ceilometer-notification-agent" Feb 19 14:57:20 crc kubenswrapper[4861]: E0219 14:57:20.999276 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db1f221-04d1-4f9a-81c4-1865068b0394" containerName="sg-core" Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.999286 4861 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="5db1f221-04d1-4f9a-81c4-1865068b0394" containerName="sg-core" Feb 19 14:57:20 crc kubenswrapper[4861]: E0219 14:57:20.999322 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db1f221-04d1-4f9a-81c4-1865068b0394" containerName="proxy-httpd" Feb 19 14:57:20 crc kubenswrapper[4861]: I0219 14:57:20.999330 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db1f221-04d1-4f9a-81c4-1865068b0394" containerName="proxy-httpd" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:20.999603 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db1f221-04d1-4f9a-81c4-1865068b0394" containerName="ceilometer-notification-agent" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:20.999624 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db1f221-04d1-4f9a-81c4-1865068b0394" containerName="ceilometer-central-agent" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:20.999735 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db1f221-04d1-4f9a-81c4-1865068b0394" containerName="proxy-httpd" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:20.999764 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db1f221-04d1-4f9a-81c4-1865068b0394" containerName="sg-core" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.003218 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.008253 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.025154 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.025179 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.047641 4861 scope.go:117] "RemoveContainer" containerID="00c86f1fa50178a597e49bb1ee5e22419cbec18adff0254a7d4d8ec85eebc3c3" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.067701 4861 scope.go:117] "RemoveContainer" containerID="67ccc2b09084feb4c5a9b2a26bdfe847fc24c65690af42254121074406b56426" Feb 19 14:57:21 crc kubenswrapper[4861]: E0219 14:57:21.070305 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67ccc2b09084feb4c5a9b2a26bdfe847fc24c65690af42254121074406b56426\": container with ID starting with 67ccc2b09084feb4c5a9b2a26bdfe847fc24c65690af42254121074406b56426 not found: ID does not exist" containerID="67ccc2b09084feb4c5a9b2a26bdfe847fc24c65690af42254121074406b56426" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.070437 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67ccc2b09084feb4c5a9b2a26bdfe847fc24c65690af42254121074406b56426"} err="failed to get container status \"67ccc2b09084feb4c5a9b2a26bdfe847fc24c65690af42254121074406b56426\": rpc error: code = NotFound desc = could not find container \"67ccc2b09084feb4c5a9b2a26bdfe847fc24c65690af42254121074406b56426\": container with ID starting with 67ccc2b09084feb4c5a9b2a26bdfe847fc24c65690af42254121074406b56426 not found: ID does not exist" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 
14:57:21.070521 4861 scope.go:117] "RemoveContainer" containerID="9791093d871cfb90744bcd0ee7ae408eb6ca315eea3a0f69dce7af073be20b6c" Feb 19 14:57:21 crc kubenswrapper[4861]: E0219 14:57:21.071100 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9791093d871cfb90744bcd0ee7ae408eb6ca315eea3a0f69dce7af073be20b6c\": container with ID starting with 9791093d871cfb90744bcd0ee7ae408eb6ca315eea3a0f69dce7af073be20b6c not found: ID does not exist" containerID="9791093d871cfb90744bcd0ee7ae408eb6ca315eea3a0f69dce7af073be20b6c" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.071165 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9791093d871cfb90744bcd0ee7ae408eb6ca315eea3a0f69dce7af073be20b6c"} err="failed to get container status \"9791093d871cfb90744bcd0ee7ae408eb6ca315eea3a0f69dce7af073be20b6c\": rpc error: code = NotFound desc = could not find container \"9791093d871cfb90744bcd0ee7ae408eb6ca315eea3a0f69dce7af073be20b6c\": container with ID starting with 9791093d871cfb90744bcd0ee7ae408eb6ca315eea3a0f69dce7af073be20b6c not found: ID does not exist" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.071253 4861 scope.go:117] "RemoveContainer" containerID="3b35d407e76c8c85b6a5fd6cc1371d814ab67cabf16907bbe565d3491e626a14" Feb 19 14:57:21 crc kubenswrapper[4861]: E0219 14:57:21.072045 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b35d407e76c8c85b6a5fd6cc1371d814ab67cabf16907bbe565d3491e626a14\": container with ID starting with 3b35d407e76c8c85b6a5fd6cc1371d814ab67cabf16907bbe565d3491e626a14 not found: ID does not exist" containerID="3b35d407e76c8c85b6a5fd6cc1371d814ab67cabf16907bbe565d3491e626a14" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.072137 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3b35d407e76c8c85b6a5fd6cc1371d814ab67cabf16907bbe565d3491e626a14"} err="failed to get container status \"3b35d407e76c8c85b6a5fd6cc1371d814ab67cabf16907bbe565d3491e626a14\": rpc error: code = NotFound desc = could not find container \"3b35d407e76c8c85b6a5fd6cc1371d814ab67cabf16907bbe565d3491e626a14\": container with ID starting with 3b35d407e76c8c85b6a5fd6cc1371d814ab67cabf16907bbe565d3491e626a14 not found: ID does not exist" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.072227 4861 scope.go:117] "RemoveContainer" containerID="00c86f1fa50178a597e49bb1ee5e22419cbec18adff0254a7d4d8ec85eebc3c3" Feb 19 14:57:21 crc kubenswrapper[4861]: E0219 14:57:21.075732 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00c86f1fa50178a597e49bb1ee5e22419cbec18adff0254a7d4d8ec85eebc3c3\": container with ID starting with 00c86f1fa50178a597e49bb1ee5e22419cbec18adff0254a7d4d8ec85eebc3c3 not found: ID does not exist" containerID="00c86f1fa50178a597e49bb1ee5e22419cbec18adff0254a7d4d8ec85eebc3c3" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.075854 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00c86f1fa50178a597e49bb1ee5e22419cbec18adff0254a7d4d8ec85eebc3c3"} err="failed to get container status \"00c86f1fa50178a597e49bb1ee5e22419cbec18adff0254a7d4d8ec85eebc3c3\": rpc error: code = NotFound desc = could not find container \"00c86f1fa50178a597e49bb1ee5e22419cbec18adff0254a7d4d8ec85eebc3c3\": container with ID starting with 00c86f1fa50178a597e49bb1ee5e22419cbec18adff0254a7d4d8ec85eebc3c3 not found: ID does not exist" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.104939 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8nqr\" (UniqueName: \"kubernetes.io/projected/8258a5b9-b0b7-4280-a253-020556e8809b-kube-api-access-h8nqr\") pod 
\"ceilometer-0\" (UID: \"8258a5b9-b0b7-4280-a253-020556e8809b\") " pod="openstack/ceilometer-0" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.105021 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8258a5b9-b0b7-4280-a253-020556e8809b-config-data\") pod \"ceilometer-0\" (UID: \"8258a5b9-b0b7-4280-a253-020556e8809b\") " pod="openstack/ceilometer-0" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.105055 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8258a5b9-b0b7-4280-a253-020556e8809b-log-httpd\") pod \"ceilometer-0\" (UID: \"8258a5b9-b0b7-4280-a253-020556e8809b\") " pod="openstack/ceilometer-0" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.105076 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8258a5b9-b0b7-4280-a253-020556e8809b-scripts\") pod \"ceilometer-0\" (UID: \"8258a5b9-b0b7-4280-a253-020556e8809b\") " pod="openstack/ceilometer-0" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.105104 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8258a5b9-b0b7-4280-a253-020556e8809b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8258a5b9-b0b7-4280-a253-020556e8809b\") " pod="openstack/ceilometer-0" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.105131 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8258a5b9-b0b7-4280-a253-020556e8809b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8258a5b9-b0b7-4280-a253-020556e8809b\") " pod="openstack/ceilometer-0" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 
14:57:21.105219 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8258a5b9-b0b7-4280-a253-020556e8809b-run-httpd\") pod \"ceilometer-0\" (UID: \"8258a5b9-b0b7-4280-a253-020556e8809b\") " pod="openstack/ceilometer-0" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.209637 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8nqr\" (UniqueName: \"kubernetes.io/projected/8258a5b9-b0b7-4280-a253-020556e8809b-kube-api-access-h8nqr\") pod \"ceilometer-0\" (UID: \"8258a5b9-b0b7-4280-a253-020556e8809b\") " pod="openstack/ceilometer-0" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.210045 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8258a5b9-b0b7-4280-a253-020556e8809b-config-data\") pod \"ceilometer-0\" (UID: \"8258a5b9-b0b7-4280-a253-020556e8809b\") " pod="openstack/ceilometer-0" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.210097 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8258a5b9-b0b7-4280-a253-020556e8809b-log-httpd\") pod \"ceilometer-0\" (UID: \"8258a5b9-b0b7-4280-a253-020556e8809b\") " pod="openstack/ceilometer-0" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.210140 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8258a5b9-b0b7-4280-a253-020556e8809b-scripts\") pod \"ceilometer-0\" (UID: \"8258a5b9-b0b7-4280-a253-020556e8809b\") " pod="openstack/ceilometer-0" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.210196 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8258a5b9-b0b7-4280-a253-020556e8809b-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"8258a5b9-b0b7-4280-a253-020556e8809b\") " pod="openstack/ceilometer-0" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.210244 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8258a5b9-b0b7-4280-a253-020556e8809b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8258a5b9-b0b7-4280-a253-020556e8809b\") " pod="openstack/ceilometer-0" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.210363 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8258a5b9-b0b7-4280-a253-020556e8809b-run-httpd\") pod \"ceilometer-0\" (UID: \"8258a5b9-b0b7-4280-a253-020556e8809b\") " pod="openstack/ceilometer-0" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.211031 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8258a5b9-b0b7-4280-a253-020556e8809b-log-httpd\") pod \"ceilometer-0\" (UID: \"8258a5b9-b0b7-4280-a253-020556e8809b\") " pod="openstack/ceilometer-0" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.211328 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8258a5b9-b0b7-4280-a253-020556e8809b-run-httpd\") pod \"ceilometer-0\" (UID: \"8258a5b9-b0b7-4280-a253-020556e8809b\") " pod="openstack/ceilometer-0" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.217585 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8258a5b9-b0b7-4280-a253-020556e8809b-config-data\") pod \"ceilometer-0\" (UID: \"8258a5b9-b0b7-4280-a253-020556e8809b\") " pod="openstack/ceilometer-0" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.221077 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/8258a5b9-b0b7-4280-a253-020556e8809b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8258a5b9-b0b7-4280-a253-020556e8809b\") " pod="openstack/ceilometer-0" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.230109 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8258a5b9-b0b7-4280-a253-020556e8809b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8258a5b9-b0b7-4280-a253-020556e8809b\") " pod="openstack/ceilometer-0" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.238200 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8nqr\" (UniqueName: \"kubernetes.io/projected/8258a5b9-b0b7-4280-a253-020556e8809b-kube-api-access-h8nqr\") pod \"ceilometer-0\" (UID: \"8258a5b9-b0b7-4280-a253-020556e8809b\") " pod="openstack/ceilometer-0" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.239011 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8258a5b9-b0b7-4280-a253-020556e8809b-scripts\") pod \"ceilometer-0\" (UID: \"8258a5b9-b0b7-4280-a253-020556e8809b\") " pod="openstack/ceilometer-0" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.344476 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 14:57:21 crc kubenswrapper[4861]: I0219 14:57:21.905443 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 14:57:21 crc kubenswrapper[4861]: W0219 14:57:21.909299 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8258a5b9_b0b7_4280_a253_020556e8809b.slice/crio-0fbad70e2f3dfd58160502f3c267cb61d3de75c0a2ba027cba02103d628f4789 WatchSource:0}: Error finding container 0fbad70e2f3dfd58160502f3c267cb61d3de75c0a2ba027cba02103d628f4789: Status 404 returned error can't find the container with id 0fbad70e2f3dfd58160502f3c267cb61d3de75c0a2ba027cba02103d628f4789 Feb 19 14:57:22 crc kubenswrapper[4861]: I0219 14:57:22.011601 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5db1f221-04d1-4f9a-81c4-1865068b0394" path="/var/lib/kubelet/pods/5db1f221-04d1-4f9a-81c4-1865068b0394/volumes" Feb 19 14:57:22 crc kubenswrapper[4861]: I0219 14:57:22.942395 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8258a5b9-b0b7-4280-a253-020556e8809b","Type":"ContainerStarted","Data":"0fbad70e2f3dfd58160502f3c267cb61d3de75c0a2ba027cba02103d628f4789"} Feb 19 14:57:23 crc kubenswrapper[4861]: I0219 14:57:23.960923 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8258a5b9-b0b7-4280-a253-020556e8809b","Type":"ContainerStarted","Data":"713178ad100ce2e32ae2145dcb130720f51c255ad89e87b0fc8483c11cb7c563"} Feb 19 14:57:23 crc kubenswrapper[4861]: I0219 14:57:23.961387 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8258a5b9-b0b7-4280-a253-020556e8809b","Type":"ContainerStarted","Data":"f0149e35aa0febd02a69d9dc19459953809219f7f334a3474e29889be60988f5"} Feb 19 14:57:24 crc kubenswrapper[4861]: I0219 14:57:24.974762 4861 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"8258a5b9-b0b7-4280-a253-020556e8809b","Type":"ContainerStarted","Data":"94a99dc032854df4176d068fd8a52c61366cb4595f79308cdd73feacd3d87b59"} Feb 19 14:57:25 crc kubenswrapper[4861]: I0219 14:57:25.998337 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8258a5b9-b0b7-4280-a253-020556e8809b","Type":"ContainerStarted","Data":"1f04d358aee6d838a8e0e381beca97fb8e294d5fa60c2c909dd62c7a9e9c3a3b"} Feb 19 14:57:26 crc kubenswrapper[4861]: I0219 14:57:26.062777 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.261411179 podStartE2EDuration="6.062754906s" podCreationTimestamp="2026-02-19 14:57:20 +0000 UTC" firstStartedPulling="2026-02-19 14:57:21.911638262 +0000 UTC m=+6456.572741490" lastFinishedPulling="2026-02-19 14:57:25.712981969 +0000 UTC m=+6460.374085217" observedRunningTime="2026-02-19 14:57:26.048546354 +0000 UTC m=+6460.709649582" watchObservedRunningTime="2026-02-19 14:57:26.062754906 +0000 UTC m=+6460.723858144" Feb 19 14:57:27 crc kubenswrapper[4861]: I0219 14:57:27.008033 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 14:57:33 crc kubenswrapper[4861]: I0219 14:57:33.834555 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 14:57:33 crc kubenswrapper[4861]: I0219 14:57:33.835106 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 19 14:57:42 crc kubenswrapper[4861]: I0219 14:57:42.091176 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-wzrkr"] Feb 19 14:57:42 crc kubenswrapper[4861]: I0219 14:57:42.107669 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-zlhgv"] Feb 19 14:57:42 crc kubenswrapper[4861]: I0219 14:57:42.120091 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-04b6-account-create-update-pw2nv"] Feb 19 14:57:42 crc kubenswrapper[4861]: I0219 14:57:42.129341 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-9410-account-create-update-hvc27"] Feb 19 14:57:42 crc kubenswrapper[4861]: I0219 14:57:42.137367 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-5629-account-create-update-zdltx"] Feb 19 14:57:42 crc kubenswrapper[4861]: I0219 14:57:42.145875 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-wzrkr"] Feb 19 14:57:42 crc kubenswrapper[4861]: I0219 14:57:42.154027 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-b22hw"] Feb 19 14:57:42 crc kubenswrapper[4861]: I0219 14:57:42.162140 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-zlhgv"] Feb 19 14:57:42 crc kubenswrapper[4861]: I0219 14:57:42.170824 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-b22hw"] Feb 19 14:57:42 crc kubenswrapper[4861]: I0219 14:57:42.179162 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-9410-account-create-update-hvc27"] Feb 19 14:57:42 crc kubenswrapper[4861]: I0219 14:57:42.186335 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-5629-account-create-update-zdltx"] Feb 19 14:57:42 crc kubenswrapper[4861]: I0219 14:57:42.193863 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-api-04b6-account-create-update-pw2nv"] Feb 19 14:57:43 crc kubenswrapper[4861]: I0219 14:57:43.991242 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2868caf1-998c-44de-b9c6-c3ad464c4f7f" path="/var/lib/kubelet/pods/2868caf1-998c-44de-b9c6-c3ad464c4f7f/volumes" Feb 19 14:57:43 crc kubenswrapper[4861]: I0219 14:57:43.993595 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3599e87c-f3bd-48ad-9d62-7cd4bc6d8037" path="/var/lib/kubelet/pods/3599e87c-f3bd-48ad-9d62-7cd4bc6d8037/volumes" Feb 19 14:57:43 crc kubenswrapper[4861]: I0219 14:57:43.994867 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c622f47-8fca-46f0-a55e-665ff2e9525b" path="/var/lib/kubelet/pods/4c622f47-8fca-46f0-a55e-665ff2e9525b/volumes" Feb 19 14:57:43 crc kubenswrapper[4861]: I0219 14:57:43.995617 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55e0f064-3c37-4b96-abc3-46a9bcc19c5a" path="/var/lib/kubelet/pods/55e0f064-3c37-4b96-abc3-46a9bcc19c5a/volumes" Feb 19 14:57:43 crc kubenswrapper[4861]: I0219 14:57:43.997469 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c831148c-6e67-4340-95f8-4017bc3de758" path="/var/lib/kubelet/pods/c831148c-6e67-4340-95f8-4017bc3de758/volumes" Feb 19 14:57:43 crc kubenswrapper[4861]: I0219 14:57:43.998825 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb92ada7-9b41-44b6-a299-e9a9a2b5f257" path="/var/lib/kubelet/pods/eb92ada7-9b41-44b6-a299-e9a9a2b5f257/volumes" Feb 19 14:57:44 crc kubenswrapper[4861]: I0219 14:57:44.189086 4861 generic.go:334] "Generic (PLEG): container finished" podID="1d8fd4f3-eb70-4979-9dc4-63796e85f703" containerID="f315601e284bdf42aa51a4506ac0e7901431744a102d0e398f4cd063833bd323" exitCode=137 Feb 19 14:57:44 crc kubenswrapper[4861]: I0219 14:57:44.189394 4861 generic.go:334] "Generic (PLEG): container finished" podID="1d8fd4f3-eb70-4979-9dc4-63796e85f703" 
containerID="7e6f8cf75a10036900ef8e86f3b00305301f6e33796b7765571f13a51755c47e" exitCode=137 Feb 19 14:57:44 crc kubenswrapper[4861]: I0219 14:57:44.189283 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1d8fd4f3-eb70-4979-9dc4-63796e85f703","Type":"ContainerDied","Data":"f315601e284bdf42aa51a4506ac0e7901431744a102d0e398f4cd063833bd323"} Feb 19 14:57:44 crc kubenswrapper[4861]: I0219 14:57:44.189455 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1d8fd4f3-eb70-4979-9dc4-63796e85f703","Type":"ContainerDied","Data":"7e6f8cf75a10036900ef8e86f3b00305301f6e33796b7765571f13a51755c47e"} Feb 19 14:57:44 crc kubenswrapper[4861]: I0219 14:57:44.189473 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1d8fd4f3-eb70-4979-9dc4-63796e85f703","Type":"ContainerDied","Data":"bc9593c2e27d08cbc74c91c27683ba16153d7653f95c3a8d69e28a575edbf87a"} Feb 19 14:57:44 crc kubenswrapper[4861]: I0219 14:57:44.189483 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc9593c2e27d08cbc74c91c27683ba16153d7653f95c3a8d69e28a575edbf87a" Feb 19 14:57:44 crc kubenswrapper[4861]: I0219 14:57:44.249780 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 19 14:57:44 crc kubenswrapper[4861]: I0219 14:57:44.428666 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8fd4f3-eb70-4979-9dc4-63796e85f703-combined-ca-bundle\") pod \"1d8fd4f3-eb70-4979-9dc4-63796e85f703\" (UID: \"1d8fd4f3-eb70-4979-9dc4-63796e85f703\") " Feb 19 14:57:44 crc kubenswrapper[4861]: I0219 14:57:44.428763 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h92jv\" (UniqueName: \"kubernetes.io/projected/1d8fd4f3-eb70-4979-9dc4-63796e85f703-kube-api-access-h92jv\") pod \"1d8fd4f3-eb70-4979-9dc4-63796e85f703\" (UID: \"1d8fd4f3-eb70-4979-9dc4-63796e85f703\") " Feb 19 14:57:44 crc kubenswrapper[4861]: I0219 14:57:44.428891 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d8fd4f3-eb70-4979-9dc4-63796e85f703-config-data\") pod \"1d8fd4f3-eb70-4979-9dc4-63796e85f703\" (UID: \"1d8fd4f3-eb70-4979-9dc4-63796e85f703\") " Feb 19 14:57:44 crc kubenswrapper[4861]: I0219 14:57:44.429031 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d8fd4f3-eb70-4979-9dc4-63796e85f703-scripts\") pod \"1d8fd4f3-eb70-4979-9dc4-63796e85f703\" (UID: \"1d8fd4f3-eb70-4979-9dc4-63796e85f703\") " Feb 19 14:57:44 crc kubenswrapper[4861]: I0219 14:57:44.436395 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d8fd4f3-eb70-4979-9dc4-63796e85f703-scripts" (OuterVolumeSpecName: "scripts") pod "1d8fd4f3-eb70-4979-9dc4-63796e85f703" (UID: "1d8fd4f3-eb70-4979-9dc4-63796e85f703"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:57:44 crc kubenswrapper[4861]: I0219 14:57:44.438664 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d8fd4f3-eb70-4979-9dc4-63796e85f703-kube-api-access-h92jv" (OuterVolumeSpecName: "kube-api-access-h92jv") pod "1d8fd4f3-eb70-4979-9dc4-63796e85f703" (UID: "1d8fd4f3-eb70-4979-9dc4-63796e85f703"). InnerVolumeSpecName "kube-api-access-h92jv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:57:44 crc kubenswrapper[4861]: I0219 14:57:44.533634 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h92jv\" (UniqueName: \"kubernetes.io/projected/1d8fd4f3-eb70-4979-9dc4-63796e85f703-kube-api-access-h92jv\") on node \"crc\" DevicePath \"\"" Feb 19 14:57:44 crc kubenswrapper[4861]: I0219 14:57:44.533710 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d8fd4f3-eb70-4979-9dc4-63796e85f703-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 14:57:44 crc kubenswrapper[4861]: I0219 14:57:44.546877 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d8fd4f3-eb70-4979-9dc4-63796e85f703-config-data" (OuterVolumeSpecName: "config-data") pod "1d8fd4f3-eb70-4979-9dc4-63796e85f703" (UID: "1d8fd4f3-eb70-4979-9dc4-63796e85f703"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:57:44 crc kubenswrapper[4861]: I0219 14:57:44.574330 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d8fd4f3-eb70-4979-9dc4-63796e85f703-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d8fd4f3-eb70-4979-9dc4-63796e85f703" (UID: "1d8fd4f3-eb70-4979-9dc4-63796e85f703"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:57:44 crc kubenswrapper[4861]: I0219 14:57:44.635574 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8fd4f3-eb70-4979-9dc4-63796e85f703-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:57:44 crc kubenswrapper[4861]: I0219 14:57:44.635616 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d8fd4f3-eb70-4979-9dc4-63796e85f703-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.198164 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.241183 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.251444 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.266864 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 19 14:57:45 crc kubenswrapper[4861]: E0219 14:57:45.267275 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8fd4f3-eb70-4979-9dc4-63796e85f703" containerName="aodh-listener" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.267293 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8fd4f3-eb70-4979-9dc4-63796e85f703" containerName="aodh-listener" Feb 19 14:57:45 crc kubenswrapper[4861]: E0219 14:57:45.267309 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8fd4f3-eb70-4979-9dc4-63796e85f703" containerName="aodh-notifier" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.267316 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8fd4f3-eb70-4979-9dc4-63796e85f703" containerName="aodh-notifier" Feb 19 14:57:45 crc 
kubenswrapper[4861]: E0219 14:57:45.267327 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8fd4f3-eb70-4979-9dc4-63796e85f703" containerName="aodh-api" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.267333 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8fd4f3-eb70-4979-9dc4-63796e85f703" containerName="aodh-api" Feb 19 14:57:45 crc kubenswrapper[4861]: E0219 14:57:45.267353 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8fd4f3-eb70-4979-9dc4-63796e85f703" containerName="aodh-evaluator" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.267358 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8fd4f3-eb70-4979-9dc4-63796e85f703" containerName="aodh-evaluator" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.267574 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d8fd4f3-eb70-4979-9dc4-63796e85f703" containerName="aodh-evaluator" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.267588 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d8fd4f3-eb70-4979-9dc4-63796e85f703" containerName="aodh-api" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.267603 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d8fd4f3-eb70-4979-9dc4-63796e85f703" containerName="aodh-notifier" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.267618 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d8fd4f3-eb70-4979-9dc4-63796e85f703" containerName="aodh-listener" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.269351 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.272087 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-6d297" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.272258 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.273255 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.273814 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.275245 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.279767 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.453228 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f6475f-97a7-464d-9aed-7949f9ae6d45-combined-ca-bundle\") pod \"aodh-0\" (UID: \"30f6475f-97a7-464d-9aed-7949f9ae6d45\") " pod="openstack/aodh-0" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.453279 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f6475f-97a7-464d-9aed-7949f9ae6d45-scripts\") pod \"aodh-0\" (UID: \"30f6475f-97a7-464d-9aed-7949f9ae6d45\") " pod="openstack/aodh-0" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.453300 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj98j\" (UniqueName: 
\"kubernetes.io/projected/30f6475f-97a7-464d-9aed-7949f9ae6d45-kube-api-access-cj98j\") pod \"aodh-0\" (UID: \"30f6475f-97a7-464d-9aed-7949f9ae6d45\") " pod="openstack/aodh-0" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.453398 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/30f6475f-97a7-464d-9aed-7949f9ae6d45-public-tls-certs\") pod \"aodh-0\" (UID: \"30f6475f-97a7-464d-9aed-7949f9ae6d45\") " pod="openstack/aodh-0" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.453536 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f6475f-97a7-464d-9aed-7949f9ae6d45-config-data\") pod \"aodh-0\" (UID: \"30f6475f-97a7-464d-9aed-7949f9ae6d45\") " pod="openstack/aodh-0" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.453553 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/30f6475f-97a7-464d-9aed-7949f9ae6d45-internal-tls-certs\") pod \"aodh-0\" (UID: \"30f6475f-97a7-464d-9aed-7949f9ae6d45\") " pod="openstack/aodh-0" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.555805 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f6475f-97a7-464d-9aed-7949f9ae6d45-config-data\") pod \"aodh-0\" (UID: \"30f6475f-97a7-464d-9aed-7949f9ae6d45\") " pod="openstack/aodh-0" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.555867 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/30f6475f-97a7-464d-9aed-7949f9ae6d45-internal-tls-certs\") pod \"aodh-0\" (UID: \"30f6475f-97a7-464d-9aed-7949f9ae6d45\") " pod="openstack/aodh-0" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 
14:57:45.555993 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f6475f-97a7-464d-9aed-7949f9ae6d45-combined-ca-bundle\") pod \"aodh-0\" (UID: \"30f6475f-97a7-464d-9aed-7949f9ae6d45\") " pod="openstack/aodh-0" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.556057 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f6475f-97a7-464d-9aed-7949f9ae6d45-scripts\") pod \"aodh-0\" (UID: \"30f6475f-97a7-464d-9aed-7949f9ae6d45\") " pod="openstack/aodh-0" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.556093 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj98j\" (UniqueName: \"kubernetes.io/projected/30f6475f-97a7-464d-9aed-7949f9ae6d45-kube-api-access-cj98j\") pod \"aodh-0\" (UID: \"30f6475f-97a7-464d-9aed-7949f9ae6d45\") " pod="openstack/aodh-0" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.556163 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/30f6475f-97a7-464d-9aed-7949f9ae6d45-public-tls-certs\") pod \"aodh-0\" (UID: \"30f6475f-97a7-464d-9aed-7949f9ae6d45\") " pod="openstack/aodh-0" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.560562 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f6475f-97a7-464d-9aed-7949f9ae6d45-combined-ca-bundle\") pod \"aodh-0\" (UID: \"30f6475f-97a7-464d-9aed-7949f9ae6d45\") " pod="openstack/aodh-0" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.560584 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f6475f-97a7-464d-9aed-7949f9ae6d45-scripts\") pod \"aodh-0\" (UID: \"30f6475f-97a7-464d-9aed-7949f9ae6d45\") " pod="openstack/aodh-0" 
Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.561484 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/30f6475f-97a7-464d-9aed-7949f9ae6d45-internal-tls-certs\") pod \"aodh-0\" (UID: \"30f6475f-97a7-464d-9aed-7949f9ae6d45\") " pod="openstack/aodh-0" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.561672 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f6475f-97a7-464d-9aed-7949f9ae6d45-config-data\") pod \"aodh-0\" (UID: \"30f6475f-97a7-464d-9aed-7949f9ae6d45\") " pod="openstack/aodh-0" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.563086 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/30f6475f-97a7-464d-9aed-7949f9ae6d45-public-tls-certs\") pod \"aodh-0\" (UID: \"30f6475f-97a7-464d-9aed-7949f9ae6d45\") " pod="openstack/aodh-0" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.576676 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj98j\" (UniqueName: \"kubernetes.io/projected/30f6475f-97a7-464d-9aed-7949f9ae6d45-kube-api-access-cj98j\") pod \"aodh-0\" (UID: \"30f6475f-97a7-464d-9aed-7949f9ae6d45\") " pod="openstack/aodh-0" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.597497 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 19 14:57:45 crc kubenswrapper[4861]: I0219 14:57:45.999178 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d8fd4f3-eb70-4979-9dc4-63796e85f703" path="/var/lib/kubelet/pods/1d8fd4f3-eb70-4979-9dc4-63796e85f703/volumes" Feb 19 14:57:46 crc kubenswrapper[4861]: I0219 14:57:46.130353 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 19 14:57:46 crc kubenswrapper[4861]: I0219 14:57:46.208862 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"30f6475f-97a7-464d-9aed-7949f9ae6d45","Type":"ContainerStarted","Data":"41f805712fac30d3213b1dac99156db434848ccb7e93f93d4fd4b8cb039c5574"} Feb 19 14:57:47 crc kubenswrapper[4861]: I0219 14:57:47.219159 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"30f6475f-97a7-464d-9aed-7949f9ae6d45","Type":"ContainerStarted","Data":"f0b5c74b6caa533c20d35eb9c18851022852ed5e8563684cdccb21a655989a8e"} Feb 19 14:57:48 crc kubenswrapper[4861]: I0219 14:57:48.247632 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"30f6475f-97a7-464d-9aed-7949f9ae6d45","Type":"ContainerStarted","Data":"267754f1b689d1ab720fc88f4439a2ecb75fc44319c54ba559bd96d90fad9e44"} Feb 19 14:57:49 crc kubenswrapper[4861]: I0219 14:57:49.805292 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b9974fcc9-dsdpf"] Feb 19 14:57:49 crc kubenswrapper[4861]: I0219 14:57:49.807386 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b9974fcc9-dsdpf" Feb 19 14:57:49 crc kubenswrapper[4861]: I0219 14:57:49.818795 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b9974fcc9-dsdpf"] Feb 19 14:57:49 crc kubenswrapper[4861]: I0219 14:57:49.854448 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1" Feb 19 14:57:49 crc kubenswrapper[4861]: I0219 14:57:49.968268 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86d72108-520b-43d7-ad3a-f746b72b07e9-config\") pod \"dnsmasq-dns-b9974fcc9-dsdpf\" (UID: \"86d72108-520b-43d7-ad3a-f746b72b07e9\") " pod="openstack/dnsmasq-dns-b9974fcc9-dsdpf" Feb 19 14:57:49 crc kubenswrapper[4861]: I0219 14:57:49.968630 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86d72108-520b-43d7-ad3a-f746b72b07e9-dns-svc\") pod \"dnsmasq-dns-b9974fcc9-dsdpf\" (UID: \"86d72108-520b-43d7-ad3a-f746b72b07e9\") " pod="openstack/dnsmasq-dns-b9974fcc9-dsdpf" Feb 19 14:57:49 crc kubenswrapper[4861]: I0219 14:57:49.968735 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/86d72108-520b-43d7-ad3a-f746b72b07e9-openstack-cell1\") pod \"dnsmasq-dns-b9974fcc9-dsdpf\" (UID: \"86d72108-520b-43d7-ad3a-f746b72b07e9\") " pod="openstack/dnsmasq-dns-b9974fcc9-dsdpf" Feb 19 14:57:49 crc kubenswrapper[4861]: I0219 14:57:49.968849 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86d72108-520b-43d7-ad3a-f746b72b07e9-ovsdbserver-nb\") pod \"dnsmasq-dns-b9974fcc9-dsdpf\" (UID: \"86d72108-520b-43d7-ad3a-f746b72b07e9\") " pod="openstack/dnsmasq-dns-b9974fcc9-dsdpf" Feb 19 
14:57:49 crc kubenswrapper[4861]: I0219 14:57:49.969051 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86d72108-520b-43d7-ad3a-f746b72b07e9-ovsdbserver-sb\") pod \"dnsmasq-dns-b9974fcc9-dsdpf\" (UID: \"86d72108-520b-43d7-ad3a-f746b72b07e9\") " pod="openstack/dnsmasq-dns-b9974fcc9-dsdpf" Feb 19 14:57:49 crc kubenswrapper[4861]: I0219 14:57:49.969158 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bm6c\" (UniqueName: \"kubernetes.io/projected/86d72108-520b-43d7-ad3a-f746b72b07e9-kube-api-access-9bm6c\") pod \"dnsmasq-dns-b9974fcc9-dsdpf\" (UID: \"86d72108-520b-43d7-ad3a-f746b72b07e9\") " pod="openstack/dnsmasq-dns-b9974fcc9-dsdpf" Feb 19 14:57:50 crc kubenswrapper[4861]: I0219 14:57:50.072880 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86d72108-520b-43d7-ad3a-f746b72b07e9-config\") pod \"dnsmasq-dns-b9974fcc9-dsdpf\" (UID: \"86d72108-520b-43d7-ad3a-f746b72b07e9\") " pod="openstack/dnsmasq-dns-b9974fcc9-dsdpf" Feb 19 14:57:50 crc kubenswrapper[4861]: I0219 14:57:50.073004 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86d72108-520b-43d7-ad3a-f746b72b07e9-dns-svc\") pod \"dnsmasq-dns-b9974fcc9-dsdpf\" (UID: \"86d72108-520b-43d7-ad3a-f746b72b07e9\") " pod="openstack/dnsmasq-dns-b9974fcc9-dsdpf" Feb 19 14:57:50 crc kubenswrapper[4861]: I0219 14:57:50.073027 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/86d72108-520b-43d7-ad3a-f746b72b07e9-openstack-cell1\") pod \"dnsmasq-dns-b9974fcc9-dsdpf\" (UID: \"86d72108-520b-43d7-ad3a-f746b72b07e9\") " pod="openstack/dnsmasq-dns-b9974fcc9-dsdpf" Feb 19 14:57:50 crc kubenswrapper[4861]: I0219 
14:57:50.073056 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86d72108-520b-43d7-ad3a-f746b72b07e9-ovsdbserver-nb\") pod \"dnsmasq-dns-b9974fcc9-dsdpf\" (UID: \"86d72108-520b-43d7-ad3a-f746b72b07e9\") " pod="openstack/dnsmasq-dns-b9974fcc9-dsdpf" Feb 19 14:57:50 crc kubenswrapper[4861]: I0219 14:57:50.073145 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86d72108-520b-43d7-ad3a-f746b72b07e9-ovsdbserver-sb\") pod \"dnsmasq-dns-b9974fcc9-dsdpf\" (UID: \"86d72108-520b-43d7-ad3a-f746b72b07e9\") " pod="openstack/dnsmasq-dns-b9974fcc9-dsdpf" Feb 19 14:57:50 crc kubenswrapper[4861]: I0219 14:57:50.073169 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bm6c\" (UniqueName: \"kubernetes.io/projected/86d72108-520b-43d7-ad3a-f746b72b07e9-kube-api-access-9bm6c\") pod \"dnsmasq-dns-b9974fcc9-dsdpf\" (UID: \"86d72108-520b-43d7-ad3a-f746b72b07e9\") " pod="openstack/dnsmasq-dns-b9974fcc9-dsdpf" Feb 19 14:57:50 crc kubenswrapper[4861]: I0219 14:57:50.074057 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86d72108-520b-43d7-ad3a-f746b72b07e9-dns-svc\") pod \"dnsmasq-dns-b9974fcc9-dsdpf\" (UID: \"86d72108-520b-43d7-ad3a-f746b72b07e9\") " pod="openstack/dnsmasq-dns-b9974fcc9-dsdpf" Feb 19 14:57:50 crc kubenswrapper[4861]: I0219 14:57:50.074085 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86d72108-520b-43d7-ad3a-f746b72b07e9-config\") pod \"dnsmasq-dns-b9974fcc9-dsdpf\" (UID: \"86d72108-520b-43d7-ad3a-f746b72b07e9\") " pod="openstack/dnsmasq-dns-b9974fcc9-dsdpf" Feb 19 14:57:50 crc kubenswrapper[4861]: I0219 14:57:50.074159 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/86d72108-520b-43d7-ad3a-f746b72b07e9-openstack-cell1\") pod \"dnsmasq-dns-b9974fcc9-dsdpf\" (UID: \"86d72108-520b-43d7-ad3a-f746b72b07e9\") " pod="openstack/dnsmasq-dns-b9974fcc9-dsdpf" Feb 19 14:57:50 crc kubenswrapper[4861]: I0219 14:57:50.074740 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86d72108-520b-43d7-ad3a-f746b72b07e9-ovsdbserver-sb\") pod \"dnsmasq-dns-b9974fcc9-dsdpf\" (UID: \"86d72108-520b-43d7-ad3a-f746b72b07e9\") " pod="openstack/dnsmasq-dns-b9974fcc9-dsdpf" Feb 19 14:57:50 crc kubenswrapper[4861]: I0219 14:57:50.074781 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86d72108-520b-43d7-ad3a-f746b72b07e9-ovsdbserver-nb\") pod \"dnsmasq-dns-b9974fcc9-dsdpf\" (UID: \"86d72108-520b-43d7-ad3a-f746b72b07e9\") " pod="openstack/dnsmasq-dns-b9974fcc9-dsdpf" Feb 19 14:57:50 crc kubenswrapper[4861]: I0219 14:57:50.093672 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bm6c\" (UniqueName: \"kubernetes.io/projected/86d72108-520b-43d7-ad3a-f746b72b07e9-kube-api-access-9bm6c\") pod \"dnsmasq-dns-b9974fcc9-dsdpf\" (UID: \"86d72108-520b-43d7-ad3a-f746b72b07e9\") " pod="openstack/dnsmasq-dns-b9974fcc9-dsdpf" Feb 19 14:57:50 crc kubenswrapper[4861]: I0219 14:57:50.174132 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b9974fcc9-dsdpf" Feb 19 14:57:50 crc kubenswrapper[4861]: I0219 14:57:50.278590 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"30f6475f-97a7-464d-9aed-7949f9ae6d45","Type":"ContainerStarted","Data":"6252c395cbd697b690b9541325c883c691b1ba4209f1074b70ef1ecc5c00f48a"} Feb 19 14:57:50 crc kubenswrapper[4861]: I0219 14:57:50.725315 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b9974fcc9-dsdpf"] Feb 19 14:57:51 crc kubenswrapper[4861]: W0219 14:57:51.097192 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86d72108_520b_43d7_ad3a_f746b72b07e9.slice/crio-8250ee4d19258f8c3a7e670f995a19c5fe2c9aab4a9d1a7a7016821c5ef5e633 WatchSource:0}: Error finding container 8250ee4d19258f8c3a7e670f995a19c5fe2c9aab4a9d1a7a7016821c5ef5e633: Status 404 returned error can't find the container with id 8250ee4d19258f8c3a7e670f995a19c5fe2c9aab4a9d1a7a7016821c5ef5e633 Feb 19 14:57:51 crc kubenswrapper[4861]: I0219 14:57:51.297961 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b9974fcc9-dsdpf" event={"ID":"86d72108-520b-43d7-ad3a-f746b72b07e9","Type":"ContainerStarted","Data":"8250ee4d19258f8c3a7e670f995a19c5fe2c9aab4a9d1a7a7016821c5ef5e633"} Feb 19 14:57:51 crc kubenswrapper[4861]: I0219 14:57:51.387369 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 14:57:52 crc kubenswrapper[4861]: I0219 14:57:52.055739 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-27wrk"] Feb 19 14:57:52 crc kubenswrapper[4861]: I0219 14:57:52.065905 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-27wrk"] Feb 19 14:57:52 crc kubenswrapper[4861]: I0219 14:57:52.311890 4861 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/dnsmasq-dns-b9974fcc9-dsdpf" event={"ID":"86d72108-520b-43d7-ad3a-f746b72b07e9","Type":"ContainerStarted","Data":"2cb8211f5016fd26eb2973f02b744cfcf5c6374b3c671d752b6abdb71de69d84"} Feb 19 14:57:52 crc kubenswrapper[4861]: I0219 14:57:52.315586 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"30f6475f-97a7-464d-9aed-7949f9ae6d45","Type":"ContainerStarted","Data":"bacf0b7d6d575f108f5a715bf76125c82978b3946300ecef31aaf650df3ad2bd"} Feb 19 14:57:52 crc kubenswrapper[4861]: I0219 14:57:52.381812 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.195768356 podStartE2EDuration="7.381791097s" podCreationTimestamp="2026-02-19 14:57:45 +0000 UTC" firstStartedPulling="2026-02-19 14:57:46.129747899 +0000 UTC m=+6480.790851147" lastFinishedPulling="2026-02-19 14:57:50.31577066 +0000 UTC m=+6484.976873888" observedRunningTime="2026-02-19 14:57:52.362614731 +0000 UTC m=+6487.023717999" watchObservedRunningTime="2026-02-19 14:57:52.381791097 +0000 UTC m=+6487.042894325" Feb 19 14:57:53 crc kubenswrapper[4861]: I0219 14:57:53.326959 4861 generic.go:334] "Generic (PLEG): container finished" podID="86d72108-520b-43d7-ad3a-f746b72b07e9" containerID="2cb8211f5016fd26eb2973f02b744cfcf5c6374b3c671d752b6abdb71de69d84" exitCode=0 Feb 19 14:57:53 crc kubenswrapper[4861]: I0219 14:57:53.327052 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b9974fcc9-dsdpf" event={"ID":"86d72108-520b-43d7-ad3a-f746b72b07e9","Type":"ContainerDied","Data":"2cb8211f5016fd26eb2973f02b744cfcf5c6374b3c671d752b6abdb71de69d84"} Feb 19 14:57:54 crc kubenswrapper[4861]: I0219 14:57:54.014264 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9064be2-3d5c-4d2d-88f0-4873a276ebd6" path="/var/lib/kubelet/pods/b9064be2-3d5c-4d2d-88f0-4873a276ebd6/volumes" Feb 19 14:57:54 crc kubenswrapper[4861]: I0219 14:57:54.350108 4861 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b9974fcc9-dsdpf" event={"ID":"86d72108-520b-43d7-ad3a-f746b72b07e9","Type":"ContainerStarted","Data":"d7c4a2d52b0f8c459e7e14e6c574152c928e10e817ad4297e8a4d413e949a03b"} Feb 19 14:57:54 crc kubenswrapper[4861]: I0219 14:57:54.351674 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b9974fcc9-dsdpf" Feb 19 14:57:54 crc kubenswrapper[4861]: I0219 14:57:54.383178 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b9974fcc9-dsdpf" podStartSLOduration=5.383157205 podStartE2EDuration="5.383157205s" podCreationTimestamp="2026-02-19 14:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:57:54.36695254 +0000 UTC m=+6489.028055778" watchObservedRunningTime="2026-02-19 14:57:54.383157205 +0000 UTC m=+6489.044260433" Feb 19 14:57:56 crc kubenswrapper[4861]: I0219 14:57:56.002273 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 14:57:56 crc kubenswrapper[4861]: I0219 14:57:56.002911 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="3aeb9d2e-df2b-4d6b-abe7-e0c4b77c2948" containerName="kube-state-metrics" containerID="cri-o://101faf0da9c14fe0b0136451f31cf4fe42963b560e8cd7a597bf3e0a4a9e97fd" gracePeriod=30 Feb 19 14:57:56 crc kubenswrapper[4861]: I0219 14:57:56.370245 4861 generic.go:334] "Generic (PLEG): container finished" podID="3aeb9d2e-df2b-4d6b-abe7-e0c4b77c2948" containerID="101faf0da9c14fe0b0136451f31cf4fe42963b560e8cd7a597bf3e0a4a9e97fd" exitCode=2 Feb 19 14:57:56 crc kubenswrapper[4861]: I0219 14:57:56.370310 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"3aeb9d2e-df2b-4d6b-abe7-e0c4b77c2948","Type":"ContainerDied","Data":"101faf0da9c14fe0b0136451f31cf4fe42963b560e8cd7a597bf3e0a4a9e97fd"} Feb 19 14:57:56 crc kubenswrapper[4861]: I0219 14:57:56.667502 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 14:57:56 crc kubenswrapper[4861]: I0219 14:57:56.868260 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlk4c\" (UniqueName: \"kubernetes.io/projected/3aeb9d2e-df2b-4d6b-abe7-e0c4b77c2948-kube-api-access-jlk4c\") pod \"3aeb9d2e-df2b-4d6b-abe7-e0c4b77c2948\" (UID: \"3aeb9d2e-df2b-4d6b-abe7-e0c4b77c2948\") " Feb 19 14:57:56 crc kubenswrapper[4861]: I0219 14:57:56.875111 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aeb9d2e-df2b-4d6b-abe7-e0c4b77c2948-kube-api-access-jlk4c" (OuterVolumeSpecName: "kube-api-access-jlk4c") pod "3aeb9d2e-df2b-4d6b-abe7-e0c4b77c2948" (UID: "3aeb9d2e-df2b-4d6b-abe7-e0c4b77c2948"). InnerVolumeSpecName "kube-api-access-jlk4c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:57:56 crc kubenswrapper[4861]: I0219 14:57:56.973326 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlk4c\" (UniqueName: \"kubernetes.io/projected/3aeb9d2e-df2b-4d6b-abe7-e0c4b77c2948-kube-api-access-jlk4c\") on node \"crc\" DevicePath \"\"" Feb 19 14:57:57 crc kubenswrapper[4861]: I0219 14:57:57.380742 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3aeb9d2e-df2b-4d6b-abe7-e0c4b77c2948","Type":"ContainerDied","Data":"568a64327aa5ed311c32ce0a23a6eea7040d22663198c5520a3ebdad61b4f578"} Feb 19 14:57:57 crc kubenswrapper[4861]: I0219 14:57:57.380797 4861 scope.go:117] "RemoveContainer" containerID="101faf0da9c14fe0b0136451f31cf4fe42963b560e8cd7a597bf3e0a4a9e97fd" Feb 19 14:57:57 crc kubenswrapper[4861]: I0219 14:57:57.380828 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 14:57:57 crc kubenswrapper[4861]: I0219 14:57:57.420827 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 14:57:57 crc kubenswrapper[4861]: I0219 14:57:57.435409 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 14:57:57 crc kubenswrapper[4861]: I0219 14:57:57.452686 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 14:57:57 crc kubenswrapper[4861]: E0219 14:57:57.453289 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aeb9d2e-df2b-4d6b-abe7-e0c4b77c2948" containerName="kube-state-metrics" Feb 19 14:57:57 crc kubenswrapper[4861]: I0219 14:57:57.453312 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aeb9d2e-df2b-4d6b-abe7-e0c4b77c2948" containerName="kube-state-metrics" Feb 19 14:57:57 crc kubenswrapper[4861]: I0219 14:57:57.453564 4861 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3aeb9d2e-df2b-4d6b-abe7-e0c4b77c2948" containerName="kube-state-metrics" Feb 19 14:57:57 crc kubenswrapper[4861]: I0219 14:57:57.454380 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 14:57:57 crc kubenswrapper[4861]: I0219 14:57:57.460360 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 19 14:57:57 crc kubenswrapper[4861]: I0219 14:57:57.460659 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 19 14:57:57 crc kubenswrapper[4861]: I0219 14:57:57.464651 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 14:57:57 crc kubenswrapper[4861]: I0219 14:57:57.596219 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/910c8f52-335a-40bf-b5d5-ae475656c55b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"910c8f52-335a-40bf-b5d5-ae475656c55b\") " pod="openstack/kube-state-metrics-0" Feb 19 14:57:57 crc kubenswrapper[4861]: I0219 14:57:57.596265 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/910c8f52-335a-40bf-b5d5-ae475656c55b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"910c8f52-335a-40bf-b5d5-ae475656c55b\") " pod="openstack/kube-state-metrics-0" Feb 19 14:57:57 crc kubenswrapper[4861]: I0219 14:57:57.596318 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/910c8f52-335a-40bf-b5d5-ae475656c55b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"910c8f52-335a-40bf-b5d5-ae475656c55b\") " pod="openstack/kube-state-metrics-0" Feb 19 
14:57:57 crc kubenswrapper[4861]: I0219 14:57:57.596340 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shs9m\" (UniqueName: \"kubernetes.io/projected/910c8f52-335a-40bf-b5d5-ae475656c55b-kube-api-access-shs9m\") pod \"kube-state-metrics-0\" (UID: \"910c8f52-335a-40bf-b5d5-ae475656c55b\") " pod="openstack/kube-state-metrics-0" Feb 19 14:57:57 crc kubenswrapper[4861]: I0219 14:57:57.698694 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/910c8f52-335a-40bf-b5d5-ae475656c55b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"910c8f52-335a-40bf-b5d5-ae475656c55b\") " pod="openstack/kube-state-metrics-0" Feb 19 14:57:57 crc kubenswrapper[4861]: I0219 14:57:57.698811 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/910c8f52-335a-40bf-b5d5-ae475656c55b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"910c8f52-335a-40bf-b5d5-ae475656c55b\") " pod="openstack/kube-state-metrics-0" Feb 19 14:57:57 crc kubenswrapper[4861]: I0219 14:57:57.698916 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/910c8f52-335a-40bf-b5d5-ae475656c55b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"910c8f52-335a-40bf-b5d5-ae475656c55b\") " pod="openstack/kube-state-metrics-0" Feb 19 14:57:57 crc kubenswrapper[4861]: I0219 14:57:57.698949 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shs9m\" (UniqueName: \"kubernetes.io/projected/910c8f52-335a-40bf-b5d5-ae475656c55b-kube-api-access-shs9m\") pod \"kube-state-metrics-0\" (UID: \"910c8f52-335a-40bf-b5d5-ae475656c55b\") " pod="openstack/kube-state-metrics-0" Feb 19 14:57:57 crc 
kubenswrapper[4861]: I0219 14:57:57.704581 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/910c8f52-335a-40bf-b5d5-ae475656c55b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"910c8f52-335a-40bf-b5d5-ae475656c55b\") " pod="openstack/kube-state-metrics-0" Feb 19 14:57:57 crc kubenswrapper[4861]: I0219 14:57:57.707649 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/910c8f52-335a-40bf-b5d5-ae475656c55b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"910c8f52-335a-40bf-b5d5-ae475656c55b\") " pod="openstack/kube-state-metrics-0" Feb 19 14:57:57 crc kubenswrapper[4861]: I0219 14:57:57.710661 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/910c8f52-335a-40bf-b5d5-ae475656c55b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"910c8f52-335a-40bf-b5d5-ae475656c55b\") " pod="openstack/kube-state-metrics-0" Feb 19 14:57:57 crc kubenswrapper[4861]: I0219 14:57:57.726003 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shs9m\" (UniqueName: \"kubernetes.io/projected/910c8f52-335a-40bf-b5d5-ae475656c55b-kube-api-access-shs9m\") pod \"kube-state-metrics-0\" (UID: \"910c8f52-335a-40bf-b5d5-ae475656c55b\") " pod="openstack/kube-state-metrics-0" Feb 19 14:57:57 crc kubenswrapper[4861]: I0219 14:57:57.819826 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 14:57:58 crc kubenswrapper[4861]: I0219 14:57:58.015331 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aeb9d2e-df2b-4d6b-abe7-e0c4b77c2948" path="/var/lib/kubelet/pods/3aeb9d2e-df2b-4d6b-abe7-e0c4b77c2948/volumes" Feb 19 14:57:58 crc kubenswrapper[4861]: I0219 14:57:58.016342 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 14:57:58 crc kubenswrapper[4861]: I0219 14:57:58.016811 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8258a5b9-b0b7-4280-a253-020556e8809b" containerName="ceilometer-central-agent" containerID="cri-o://f0149e35aa0febd02a69d9dc19459953809219f7f334a3474e29889be60988f5" gracePeriod=30 Feb 19 14:57:58 crc kubenswrapper[4861]: I0219 14:57:58.017006 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8258a5b9-b0b7-4280-a253-020556e8809b" containerName="proxy-httpd" containerID="cri-o://1f04d358aee6d838a8e0e381beca97fb8e294d5fa60c2c909dd62c7a9e9c3a3b" gracePeriod=30 Feb 19 14:57:58 crc kubenswrapper[4861]: I0219 14:57:58.017012 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8258a5b9-b0b7-4280-a253-020556e8809b" containerName="sg-core" containerID="cri-o://94a99dc032854df4176d068fd8a52c61366cb4595f79308cdd73feacd3d87b59" gracePeriod=30 Feb 19 14:57:58 crc kubenswrapper[4861]: I0219 14:57:58.017126 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8258a5b9-b0b7-4280-a253-020556e8809b" containerName="ceilometer-notification-agent" containerID="cri-o://713178ad100ce2e32ae2145dcb130720f51c255ad89e87b0fc8483c11cb7c563" gracePeriod=30 Feb 19 14:57:58 crc kubenswrapper[4861]: W0219 14:57:58.317078 4861 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod910c8f52_335a_40bf_b5d5_ae475656c55b.slice/crio-c177e6e05ad28458e20df81b3685c70f7df41a52bcc273850ef4a7c3716516b3 WatchSource:0}: Error finding container c177e6e05ad28458e20df81b3685c70f7df41a52bcc273850ef4a7c3716516b3: Status 404 returned error can't find the container with id c177e6e05ad28458e20df81b3685c70f7df41a52bcc273850ef4a7c3716516b3 Feb 19 14:57:58 crc kubenswrapper[4861]: I0219 14:57:58.321405 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 14:57:58 crc kubenswrapper[4861]: I0219 14:57:58.392035 4861 generic.go:334] "Generic (PLEG): container finished" podID="8258a5b9-b0b7-4280-a253-020556e8809b" containerID="94a99dc032854df4176d068fd8a52c61366cb4595f79308cdd73feacd3d87b59" exitCode=2 Feb 19 14:57:58 crc kubenswrapper[4861]: I0219 14:57:58.392078 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8258a5b9-b0b7-4280-a253-020556e8809b","Type":"ContainerDied","Data":"94a99dc032854df4176d068fd8a52c61366cb4595f79308cdd73feacd3d87b59"} Feb 19 14:57:58 crc kubenswrapper[4861]: I0219 14:57:58.393474 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"910c8f52-335a-40bf-b5d5-ae475656c55b","Type":"ContainerStarted","Data":"c177e6e05ad28458e20df81b3685c70f7df41a52bcc273850ef4a7c3716516b3"} Feb 19 14:57:59 crc kubenswrapper[4861]: I0219 14:57:59.416102 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"910c8f52-335a-40bf-b5d5-ae475656c55b","Type":"ContainerStarted","Data":"32e0cf700651f5a47dbb7b0771ab98c1b16ab86709170b5a748588a6725cf9cb"} Feb 19 14:57:59 crc kubenswrapper[4861]: I0219 14:57:59.416780 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 19 14:57:59 crc kubenswrapper[4861]: I0219 14:57:59.422287 4861 generic.go:334] "Generic 
(PLEG): container finished" podID="8258a5b9-b0b7-4280-a253-020556e8809b" containerID="1f04d358aee6d838a8e0e381beca97fb8e294d5fa60c2c909dd62c7a9e9c3a3b" exitCode=0 Feb 19 14:57:59 crc kubenswrapper[4861]: I0219 14:57:59.422329 4861 generic.go:334] "Generic (PLEG): container finished" podID="8258a5b9-b0b7-4280-a253-020556e8809b" containerID="713178ad100ce2e32ae2145dcb130720f51c255ad89e87b0fc8483c11cb7c563" exitCode=0 Feb 19 14:57:59 crc kubenswrapper[4861]: I0219 14:57:59.422340 4861 generic.go:334] "Generic (PLEG): container finished" podID="8258a5b9-b0b7-4280-a253-020556e8809b" containerID="f0149e35aa0febd02a69d9dc19459953809219f7f334a3474e29889be60988f5" exitCode=0 Feb 19 14:57:59 crc kubenswrapper[4861]: I0219 14:57:59.422366 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8258a5b9-b0b7-4280-a253-020556e8809b","Type":"ContainerDied","Data":"1f04d358aee6d838a8e0e381beca97fb8e294d5fa60c2c909dd62c7a9e9c3a3b"} Feb 19 14:57:59 crc kubenswrapper[4861]: I0219 14:57:59.422397 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8258a5b9-b0b7-4280-a253-020556e8809b","Type":"ContainerDied","Data":"713178ad100ce2e32ae2145dcb130720f51c255ad89e87b0fc8483c11cb7c563"} Feb 19 14:57:59 crc kubenswrapper[4861]: I0219 14:57:59.422411 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8258a5b9-b0b7-4280-a253-020556e8809b","Type":"ContainerDied","Data":"f0149e35aa0febd02a69d9dc19459953809219f7f334a3474e29889be60988f5"} Feb 19 14:57:59 crc kubenswrapper[4861]: I0219 14:57:59.445733 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.069340836 podStartE2EDuration="2.445708888s" podCreationTimestamp="2026-02-19 14:57:57 +0000 UTC" firstStartedPulling="2026-02-19 14:57:58.321576246 +0000 UTC m=+6492.982679474" lastFinishedPulling="2026-02-19 14:57:58.697944298 +0000 UTC 
m=+6493.359047526" observedRunningTime="2026-02-19 14:57:59.429928804 +0000 UTC m=+6494.091032052" watchObservedRunningTime="2026-02-19 14:57:59.445708888 +0000 UTC m=+6494.106812126" Feb 19 14:57:59 crc kubenswrapper[4861]: I0219 14:57:59.594993 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 14:57:59 crc kubenswrapper[4861]: I0219 14:57:59.749367 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8258a5b9-b0b7-4280-a253-020556e8809b-sg-core-conf-yaml\") pod \"8258a5b9-b0b7-4280-a253-020556e8809b\" (UID: \"8258a5b9-b0b7-4280-a253-020556e8809b\") " Feb 19 14:57:59 crc kubenswrapper[4861]: I0219 14:57:59.749596 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8258a5b9-b0b7-4280-a253-020556e8809b-combined-ca-bundle\") pod \"8258a5b9-b0b7-4280-a253-020556e8809b\" (UID: \"8258a5b9-b0b7-4280-a253-020556e8809b\") " Feb 19 14:57:59 crc kubenswrapper[4861]: I0219 14:57:59.750525 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8258a5b9-b0b7-4280-a253-020556e8809b-run-httpd\") pod \"8258a5b9-b0b7-4280-a253-020556e8809b\" (UID: \"8258a5b9-b0b7-4280-a253-020556e8809b\") " Feb 19 14:57:59 crc kubenswrapper[4861]: I0219 14:57:59.750636 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8258a5b9-b0b7-4280-a253-020556e8809b-scripts\") pod \"8258a5b9-b0b7-4280-a253-020556e8809b\" (UID: \"8258a5b9-b0b7-4280-a253-020556e8809b\") " Feb 19 14:57:59 crc kubenswrapper[4861]: I0219 14:57:59.750818 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8258a5b9-b0b7-4280-a253-020556e8809b-config-data\") pod \"8258a5b9-b0b7-4280-a253-020556e8809b\" (UID: \"8258a5b9-b0b7-4280-a253-020556e8809b\") " Feb 19 14:57:59 crc kubenswrapper[4861]: I0219 14:57:59.750935 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8nqr\" (UniqueName: \"kubernetes.io/projected/8258a5b9-b0b7-4280-a253-020556e8809b-kube-api-access-h8nqr\") pod \"8258a5b9-b0b7-4280-a253-020556e8809b\" (UID: \"8258a5b9-b0b7-4280-a253-020556e8809b\") " Feb 19 14:57:59 crc kubenswrapper[4861]: I0219 14:57:59.750816 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8258a5b9-b0b7-4280-a253-020556e8809b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8258a5b9-b0b7-4280-a253-020556e8809b" (UID: "8258a5b9-b0b7-4280-a253-020556e8809b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 14:57:59 crc kubenswrapper[4861]: I0219 14:57:59.750985 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8258a5b9-b0b7-4280-a253-020556e8809b-log-httpd\") pod \"8258a5b9-b0b7-4280-a253-020556e8809b\" (UID: \"8258a5b9-b0b7-4280-a253-020556e8809b\") " Feb 19 14:57:59 crc kubenswrapper[4861]: I0219 14:57:59.752382 4861 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8258a5b9-b0b7-4280-a253-020556e8809b-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 14:57:59 crc kubenswrapper[4861]: I0219 14:57:59.752694 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8258a5b9-b0b7-4280-a253-020556e8809b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8258a5b9-b0b7-4280-a253-020556e8809b" (UID: "8258a5b9-b0b7-4280-a253-020556e8809b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 14:57:59 crc kubenswrapper[4861]: I0219 14:57:59.757828 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8258a5b9-b0b7-4280-a253-020556e8809b-scripts" (OuterVolumeSpecName: "scripts") pod "8258a5b9-b0b7-4280-a253-020556e8809b" (UID: "8258a5b9-b0b7-4280-a253-020556e8809b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 14:57:59 crc kubenswrapper[4861]: I0219 14:57:59.763800 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8258a5b9-b0b7-4280-a253-020556e8809b-kube-api-access-h8nqr" (OuterVolumeSpecName: "kube-api-access-h8nqr") pod "8258a5b9-b0b7-4280-a253-020556e8809b" (UID: "8258a5b9-b0b7-4280-a253-020556e8809b"). InnerVolumeSpecName "kube-api-access-h8nqr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 14:57:59 crc kubenswrapper[4861]: I0219 14:57:59.821233 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8258a5b9-b0b7-4280-a253-020556e8809b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8258a5b9-b0b7-4280-a253-020556e8809b" (UID: "8258a5b9-b0b7-4280-a253-020556e8809b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 14:57:59 crc kubenswrapper[4861]: I0219 14:57:59.853940 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8258a5b9-b0b7-4280-a253-020556e8809b-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 14:57:59 crc kubenswrapper[4861]: I0219 14:57:59.853968 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8nqr\" (UniqueName: \"kubernetes.io/projected/8258a5b9-b0b7-4280-a253-020556e8809b-kube-api-access-h8nqr\") on node \"crc\" DevicePath \"\""
Feb 19 14:57:59 crc kubenswrapper[4861]: I0219 14:57:59.853978 4861 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8258a5b9-b0b7-4280-a253-020556e8809b-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 14:57:59 crc kubenswrapper[4861]: I0219 14:57:59.853986 4861 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8258a5b9-b0b7-4280-a253-020556e8809b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 19 14:57:59 crc kubenswrapper[4861]: I0219 14:57:59.873696 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8258a5b9-b0b7-4280-a253-020556e8809b-config-data" (OuterVolumeSpecName: "config-data") pod "8258a5b9-b0b7-4280-a253-020556e8809b" (UID: "8258a5b9-b0b7-4280-a253-020556e8809b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 14:57:59 crc kubenswrapper[4861]: I0219 14:57:59.887517 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8258a5b9-b0b7-4280-a253-020556e8809b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8258a5b9-b0b7-4280-a253-020556e8809b" (UID: "8258a5b9-b0b7-4280-a253-020556e8809b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 14:57:59 crc kubenswrapper[4861]: I0219 14:57:59.955799 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8258a5b9-b0b7-4280-a253-020556e8809b-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 14:57:59 crc kubenswrapper[4861]: I0219 14:57:59.955843 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8258a5b9-b0b7-4280-a253-020556e8809b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.176472 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b9974fcc9-dsdpf"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.235357 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79985d99f7-gzt8f"]
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.235667 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79985d99f7-gzt8f" podUID="9dea5b00-6936-48f1-a8aa-aea402e7a2ce" containerName="dnsmasq-dns" containerID="cri-o://c1e95a226e0b3a44ae9147da639ea26164321a982554cb366f5cfafde6131be7" gracePeriod=10
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.384873 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6fccdd6f49-vt4bh"]
Feb 19 14:58:00 crc kubenswrapper[4861]: E0219 14:58:00.385670 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8258a5b9-b0b7-4280-a253-020556e8809b" containerName="ceilometer-notification-agent"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.385687 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8258a5b9-b0b7-4280-a253-020556e8809b" containerName="ceilometer-notification-agent"
Feb 19 14:58:00 crc kubenswrapper[4861]: E0219 14:58:00.385698 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8258a5b9-b0b7-4280-a253-020556e8809b" containerName="ceilometer-central-agent"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.385704 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8258a5b9-b0b7-4280-a253-020556e8809b" containerName="ceilometer-central-agent"
Feb 19 14:58:00 crc kubenswrapper[4861]: E0219 14:58:00.385748 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8258a5b9-b0b7-4280-a253-020556e8809b" containerName="sg-core"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.385755 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8258a5b9-b0b7-4280-a253-020556e8809b" containerName="sg-core"
Feb 19 14:58:00 crc kubenswrapper[4861]: E0219 14:58:00.385774 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8258a5b9-b0b7-4280-a253-020556e8809b" containerName="proxy-httpd"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.385779 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8258a5b9-b0b7-4280-a253-020556e8809b" containerName="proxy-httpd"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.385972 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8258a5b9-b0b7-4280-a253-020556e8809b" containerName="proxy-httpd"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.385987 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8258a5b9-b0b7-4280-a253-020556e8809b" containerName="ceilometer-central-agent"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.385997 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8258a5b9-b0b7-4280-a253-020556e8809b" containerName="ceilometer-notification-agent"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.386011 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8258a5b9-b0b7-4280-a253-020556e8809b" containerName="sg-core"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.387290 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fccdd6f49-vt4bh"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.395272 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fccdd6f49-vt4bh"]
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.441297 4861 generic.go:334] "Generic (PLEG): container finished" podID="9dea5b00-6936-48f1-a8aa-aea402e7a2ce" containerID="c1e95a226e0b3a44ae9147da639ea26164321a982554cb366f5cfafde6131be7" exitCode=0
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.441437 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79985d99f7-gzt8f" event={"ID":"9dea5b00-6936-48f1-a8aa-aea402e7a2ce","Type":"ContainerDied","Data":"c1e95a226e0b3a44ae9147da639ea26164321a982554cb366f5cfafde6131be7"}
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.443962 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8258a5b9-b0b7-4280-a253-020556e8809b","Type":"ContainerDied","Data":"0fbad70e2f3dfd58160502f3c267cb61d3de75c0a2ba027cba02103d628f4789"}
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.444051 4861 scope.go:117] "RemoveContainer" containerID="1f04d358aee6d838a8e0e381beca97fb8e294d5fa60c2c909dd62c7a9e9c3a3b"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.444652 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.478369 4861 scope.go:117] "RemoveContainer" containerID="94a99dc032854df4176d068fd8a52c61366cb4595f79308cdd73feacd3d87b59"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.496464 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.520915 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.533139 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.535306 4861 scope.go:117] "RemoveContainer" containerID="713178ad100ce2e32ae2145dcb130720f51c255ad89e87b0fc8483c11cb7c563"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.541198 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.543255 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.547297 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.549173 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.549720 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.554371 4861 scope.go:117] "RemoveContainer" containerID="f0149e35aa0febd02a69d9dc19459953809219f7f334a3474e29889be60988f5"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.594510 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c2df18c-c07d-43f7-9cef-373ea36b3c27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c2df18c-c07d-43f7-9cef-373ea36b3c27\") " pod="openstack/ceilometer-0"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.594560 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c2df18c-c07d-43f7-9cef-373ea36b3c27-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0c2df18c-c07d-43f7-9cef-373ea36b3c27\") " pod="openstack/ceilometer-0"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.594598 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c2df18c-c07d-43f7-9cef-373ea36b3c27-log-httpd\") pod \"ceilometer-0\" (UID: \"0c2df18c-c07d-43f7-9cef-373ea36b3c27\") " pod="openstack/ceilometer-0"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.594686 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c2df18c-c07d-43f7-9cef-373ea36b3c27-scripts\") pod \"ceilometer-0\" (UID: \"0c2df18c-c07d-43f7-9cef-373ea36b3c27\") " pod="openstack/ceilometer-0"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.594771 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7750e32e-cbbc-44ea-85ca-d3df49562c97-ovsdbserver-sb\") pod \"dnsmasq-dns-6fccdd6f49-vt4bh\" (UID: \"7750e32e-cbbc-44ea-85ca-d3df49562c97\") " pod="openstack/dnsmasq-dns-6fccdd6f49-vt4bh"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.594796 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7750e32e-cbbc-44ea-85ca-d3df49562c97-ovsdbserver-nb\") pod \"dnsmasq-dns-6fccdd6f49-vt4bh\" (UID: \"7750e32e-cbbc-44ea-85ca-d3df49562c97\") " pod="openstack/dnsmasq-dns-6fccdd6f49-vt4bh"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.594908 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c2df18c-c07d-43f7-9cef-373ea36b3c27-config-data\") pod \"ceilometer-0\" (UID: \"0c2df18c-c07d-43f7-9cef-373ea36b3c27\") " pod="openstack/ceilometer-0"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.594985 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs7m5\" (UniqueName: \"kubernetes.io/projected/0c2df18c-c07d-43f7-9cef-373ea36b3c27-kube-api-access-rs7m5\") pod \"ceilometer-0\" (UID: \"0c2df18c-c07d-43f7-9cef-373ea36b3c27\") " pod="openstack/ceilometer-0"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.595018 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9zjf\" (UniqueName: \"kubernetes.io/projected/7750e32e-cbbc-44ea-85ca-d3df49562c97-kube-api-access-v9zjf\") pod \"dnsmasq-dns-6fccdd6f49-vt4bh\" (UID: \"7750e32e-cbbc-44ea-85ca-d3df49562c97\") " pod="openstack/dnsmasq-dns-6fccdd6f49-vt4bh"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.595101 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7750e32e-cbbc-44ea-85ca-d3df49562c97-config\") pod \"dnsmasq-dns-6fccdd6f49-vt4bh\" (UID: \"7750e32e-cbbc-44ea-85ca-d3df49562c97\") " pod="openstack/dnsmasq-dns-6fccdd6f49-vt4bh"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.595203 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c2df18c-c07d-43f7-9cef-373ea36b3c27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c2df18c-c07d-43f7-9cef-373ea36b3c27\") " pod="openstack/ceilometer-0"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.595226 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/7750e32e-cbbc-44ea-85ca-d3df49562c97-openstack-cell1\") pod \"dnsmasq-dns-6fccdd6f49-vt4bh\" (UID: \"7750e32e-cbbc-44ea-85ca-d3df49562c97\") " pod="openstack/dnsmasq-dns-6fccdd6f49-vt4bh"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.595251 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7750e32e-cbbc-44ea-85ca-d3df49562c97-dns-svc\") pod \"dnsmasq-dns-6fccdd6f49-vt4bh\" (UID: \"7750e32e-cbbc-44ea-85ca-d3df49562c97\") " pod="openstack/dnsmasq-dns-6fccdd6f49-vt4bh"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.595304 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c2df18c-c07d-43f7-9cef-373ea36b3c27-run-httpd\") pod \"ceilometer-0\" (UID: \"0c2df18c-c07d-43f7-9cef-373ea36b3c27\") " pod="openstack/ceilometer-0"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.696260 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c2df18c-c07d-43f7-9cef-373ea36b3c27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c2df18c-c07d-43f7-9cef-373ea36b3c27\") " pod="openstack/ceilometer-0"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.696301 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/7750e32e-cbbc-44ea-85ca-d3df49562c97-openstack-cell1\") pod \"dnsmasq-dns-6fccdd6f49-vt4bh\" (UID: \"7750e32e-cbbc-44ea-85ca-d3df49562c97\") " pod="openstack/dnsmasq-dns-6fccdd6f49-vt4bh"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.696317 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7750e32e-cbbc-44ea-85ca-d3df49562c97-dns-svc\") pod \"dnsmasq-dns-6fccdd6f49-vt4bh\" (UID: \"7750e32e-cbbc-44ea-85ca-d3df49562c97\") " pod="openstack/dnsmasq-dns-6fccdd6f49-vt4bh"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.697079 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/7750e32e-cbbc-44ea-85ca-d3df49562c97-openstack-cell1\") pod \"dnsmasq-dns-6fccdd6f49-vt4bh\" (UID: \"7750e32e-cbbc-44ea-85ca-d3df49562c97\") " pod="openstack/dnsmasq-dns-6fccdd6f49-vt4bh"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.697233 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c2df18c-c07d-43f7-9cef-373ea36b3c27-run-httpd\") pod \"ceilometer-0\" (UID: \"0c2df18c-c07d-43f7-9cef-373ea36b3c27\") " pod="openstack/ceilometer-0"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.697390 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7750e32e-cbbc-44ea-85ca-d3df49562c97-dns-svc\") pod \"dnsmasq-dns-6fccdd6f49-vt4bh\" (UID: \"7750e32e-cbbc-44ea-85ca-d3df49562c97\") " pod="openstack/dnsmasq-dns-6fccdd6f49-vt4bh"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.697562 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c2df18c-c07d-43f7-9cef-373ea36b3c27-run-httpd\") pod \"ceilometer-0\" (UID: \"0c2df18c-c07d-43f7-9cef-373ea36b3c27\") " pod="openstack/ceilometer-0"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.697699 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c2df18c-c07d-43f7-9cef-373ea36b3c27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c2df18c-c07d-43f7-9cef-373ea36b3c27\") " pod="openstack/ceilometer-0"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.697716 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c2df18c-c07d-43f7-9cef-373ea36b3c27-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0c2df18c-c07d-43f7-9cef-373ea36b3c27\") " pod="openstack/ceilometer-0"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.697732 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c2df18c-c07d-43f7-9cef-373ea36b3c27-log-httpd\") pod \"ceilometer-0\" (UID: \"0c2df18c-c07d-43f7-9cef-373ea36b3c27\") " pod="openstack/ceilometer-0"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.698698 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c2df18c-c07d-43f7-9cef-373ea36b3c27-scripts\") pod \"ceilometer-0\" (UID: \"0c2df18c-c07d-43f7-9cef-373ea36b3c27\") " pod="openstack/ceilometer-0"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.698754 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7750e32e-cbbc-44ea-85ca-d3df49562c97-ovsdbserver-sb\") pod \"dnsmasq-dns-6fccdd6f49-vt4bh\" (UID: \"7750e32e-cbbc-44ea-85ca-d3df49562c97\") " pod="openstack/dnsmasq-dns-6fccdd6f49-vt4bh"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.698773 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7750e32e-cbbc-44ea-85ca-d3df49562c97-ovsdbserver-nb\") pod \"dnsmasq-dns-6fccdd6f49-vt4bh\" (UID: \"7750e32e-cbbc-44ea-85ca-d3df49562c97\") " pod="openstack/dnsmasq-dns-6fccdd6f49-vt4bh"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.698827 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c2df18c-c07d-43f7-9cef-373ea36b3c27-config-data\") pod \"ceilometer-0\" (UID: \"0c2df18c-c07d-43f7-9cef-373ea36b3c27\") " pod="openstack/ceilometer-0"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.698858 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs7m5\" (UniqueName: \"kubernetes.io/projected/0c2df18c-c07d-43f7-9cef-373ea36b3c27-kube-api-access-rs7m5\") pod \"ceilometer-0\" (UID: \"0c2df18c-c07d-43f7-9cef-373ea36b3c27\") " pod="openstack/ceilometer-0"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.698879 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9zjf\" (UniqueName: \"kubernetes.io/projected/7750e32e-cbbc-44ea-85ca-d3df49562c97-kube-api-access-v9zjf\") pod \"dnsmasq-dns-6fccdd6f49-vt4bh\" (UID: \"7750e32e-cbbc-44ea-85ca-d3df49562c97\") " pod="openstack/dnsmasq-dns-6fccdd6f49-vt4bh"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.698976 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7750e32e-cbbc-44ea-85ca-d3df49562c97-config\") pod \"dnsmasq-dns-6fccdd6f49-vt4bh\" (UID: \"7750e32e-cbbc-44ea-85ca-d3df49562c97\") " pod="openstack/dnsmasq-dns-6fccdd6f49-vt4bh"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.698688 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c2df18c-c07d-43f7-9cef-373ea36b3c27-log-httpd\") pod \"ceilometer-0\" (UID: \"0c2df18c-c07d-43f7-9cef-373ea36b3c27\") " pod="openstack/ceilometer-0"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.699820 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7750e32e-cbbc-44ea-85ca-d3df49562c97-ovsdbserver-sb\") pod \"dnsmasq-dns-6fccdd6f49-vt4bh\" (UID: \"7750e32e-cbbc-44ea-85ca-d3df49562c97\") " pod="openstack/dnsmasq-dns-6fccdd6f49-vt4bh"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.700716 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7750e32e-cbbc-44ea-85ca-d3df49562c97-ovsdbserver-nb\") pod \"dnsmasq-dns-6fccdd6f49-vt4bh\" (UID: \"7750e32e-cbbc-44ea-85ca-d3df49562c97\") " pod="openstack/dnsmasq-dns-6fccdd6f49-vt4bh"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.701055 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7750e32e-cbbc-44ea-85ca-d3df49562c97-config\") pod \"dnsmasq-dns-6fccdd6f49-vt4bh\" (UID: \"7750e32e-cbbc-44ea-85ca-d3df49562c97\") " pod="openstack/dnsmasq-dns-6fccdd6f49-vt4bh"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.702564 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c2df18c-c07d-43f7-9cef-373ea36b3c27-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0c2df18c-c07d-43f7-9cef-373ea36b3c27\") " pod="openstack/ceilometer-0"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.703817 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c2df18c-c07d-43f7-9cef-373ea36b3c27-scripts\") pod \"ceilometer-0\" (UID: \"0c2df18c-c07d-43f7-9cef-373ea36b3c27\") " pod="openstack/ceilometer-0"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.705911 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c2df18c-c07d-43f7-9cef-373ea36b3c27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c2df18c-c07d-43f7-9cef-373ea36b3c27\") " pod="openstack/ceilometer-0"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.708403 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c2df18c-c07d-43f7-9cef-373ea36b3c27-config-data\") pod \"ceilometer-0\" (UID: \"0c2df18c-c07d-43f7-9cef-373ea36b3c27\") " pod="openstack/ceilometer-0"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.710822 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c2df18c-c07d-43f7-9cef-373ea36b3c27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c2df18c-c07d-43f7-9cef-373ea36b3c27\") " pod="openstack/ceilometer-0"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.714312 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs7m5\" (UniqueName: \"kubernetes.io/projected/0c2df18c-c07d-43f7-9cef-373ea36b3c27-kube-api-access-rs7m5\") pod \"ceilometer-0\" (UID: \"0c2df18c-c07d-43f7-9cef-373ea36b3c27\") " pod="openstack/ceilometer-0"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.731149 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9zjf\" (UniqueName: \"kubernetes.io/projected/7750e32e-cbbc-44ea-85ca-d3df49562c97-kube-api-access-v9zjf\") pod \"dnsmasq-dns-6fccdd6f49-vt4bh\" (UID: \"7750e32e-cbbc-44ea-85ca-d3df49562c97\") " pod="openstack/dnsmasq-dns-6fccdd6f49-vt4bh"
Feb 19 14:58:00 crc kubenswrapper[4861]: I0219 14:58:00.864905 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 14:58:01 crc kubenswrapper[4861]: I0219 14:58:01.019935 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fccdd6f49-vt4bh"
Feb 19 14:58:01 crc kubenswrapper[4861]: I0219 14:58:01.078819 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79985d99f7-gzt8f"
Feb 19 14:58:01 crc kubenswrapper[4861]: I0219 14:58:01.220404 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9dea5b00-6936-48f1-a8aa-aea402e7a2ce-dns-svc\") pod \"9dea5b00-6936-48f1-a8aa-aea402e7a2ce\" (UID: \"9dea5b00-6936-48f1-a8aa-aea402e7a2ce\") "
Feb 19 14:58:01 crc kubenswrapper[4861]: I0219 14:58:01.220545 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9dea5b00-6936-48f1-a8aa-aea402e7a2ce-ovsdbserver-nb\") pod \"9dea5b00-6936-48f1-a8aa-aea402e7a2ce\" (UID: \"9dea5b00-6936-48f1-a8aa-aea402e7a2ce\") "
Feb 19 14:58:01 crc kubenswrapper[4861]: I0219 14:58:01.220616 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j67wj\" (UniqueName: \"kubernetes.io/projected/9dea5b00-6936-48f1-a8aa-aea402e7a2ce-kube-api-access-j67wj\") pod \"9dea5b00-6936-48f1-a8aa-aea402e7a2ce\" (UID: \"9dea5b00-6936-48f1-a8aa-aea402e7a2ce\") "
Feb 19 14:58:01 crc kubenswrapper[4861]: I0219 14:58:01.220766 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dea5b00-6936-48f1-a8aa-aea402e7a2ce-config\") pod \"9dea5b00-6936-48f1-a8aa-aea402e7a2ce\" (UID: \"9dea5b00-6936-48f1-a8aa-aea402e7a2ce\") "
Feb 19 14:58:01 crc kubenswrapper[4861]: I0219 14:58:01.220893 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9dea5b00-6936-48f1-a8aa-aea402e7a2ce-ovsdbserver-sb\") pod \"9dea5b00-6936-48f1-a8aa-aea402e7a2ce\" (UID: \"9dea5b00-6936-48f1-a8aa-aea402e7a2ce\") "
Feb 19 14:58:01 crc kubenswrapper[4861]: I0219 14:58:01.226243 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dea5b00-6936-48f1-a8aa-aea402e7a2ce-kube-api-access-j67wj" (OuterVolumeSpecName: "kube-api-access-j67wj") pod "9dea5b00-6936-48f1-a8aa-aea402e7a2ce" (UID: "9dea5b00-6936-48f1-a8aa-aea402e7a2ce"). InnerVolumeSpecName "kube-api-access-j67wj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 14:58:01 crc kubenswrapper[4861]: I0219 14:58:01.292062 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dea5b00-6936-48f1-a8aa-aea402e7a2ce-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9dea5b00-6936-48f1-a8aa-aea402e7a2ce" (UID: "9dea5b00-6936-48f1-a8aa-aea402e7a2ce"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 14:58:01 crc kubenswrapper[4861]: I0219 14:58:01.293658 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dea5b00-6936-48f1-a8aa-aea402e7a2ce-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9dea5b00-6936-48f1-a8aa-aea402e7a2ce" (UID: "9dea5b00-6936-48f1-a8aa-aea402e7a2ce"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 14:58:01 crc kubenswrapper[4861]: I0219 14:58:01.299280 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dea5b00-6936-48f1-a8aa-aea402e7a2ce-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9dea5b00-6936-48f1-a8aa-aea402e7a2ce" (UID: "9dea5b00-6936-48f1-a8aa-aea402e7a2ce"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 14:58:01 crc kubenswrapper[4861]: I0219 14:58:01.306096 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dea5b00-6936-48f1-a8aa-aea402e7a2ce-config" (OuterVolumeSpecName: "config") pod "9dea5b00-6936-48f1-a8aa-aea402e7a2ce" (UID: "9dea5b00-6936-48f1-a8aa-aea402e7a2ce"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 14:58:01 crc kubenswrapper[4861]: I0219 14:58:01.323632 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9dea5b00-6936-48f1-a8aa-aea402e7a2ce-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 19 14:58:01 crc kubenswrapper[4861]: I0219 14:58:01.323686 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9dea5b00-6936-48f1-a8aa-aea402e7a2ce-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 14:58:01 crc kubenswrapper[4861]: I0219 14:58:01.323695 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9dea5b00-6936-48f1-a8aa-aea402e7a2ce-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 19 14:58:01 crc kubenswrapper[4861]: I0219 14:58:01.323707 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j67wj\" (UniqueName: \"kubernetes.io/projected/9dea5b00-6936-48f1-a8aa-aea402e7a2ce-kube-api-access-j67wj\") on node \"crc\" DevicePath \"\""
Feb 19 14:58:01 crc kubenswrapper[4861]: I0219 14:58:01.323716 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dea5b00-6936-48f1-a8aa-aea402e7a2ce-config\") on node \"crc\" DevicePath \"\""
Feb 19 14:58:01 crc kubenswrapper[4861]: I0219 14:58:01.457462 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79985d99f7-gzt8f" event={"ID":"9dea5b00-6936-48f1-a8aa-aea402e7a2ce","Type":"ContainerDied","Data":"dbf413750f026f5426032f6410fb5515d65140f42a5f3e2b337b7c177143cece"}
Feb 19 14:58:01 crc kubenswrapper[4861]: I0219 14:58:01.457521 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79985d99f7-gzt8f"
Feb 19 14:58:01 crc kubenswrapper[4861]: I0219 14:58:01.457710 4861 scope.go:117] "RemoveContainer" containerID="c1e95a226e0b3a44ae9147da639ea26164321a982554cb366f5cfafde6131be7"
Feb 19 14:58:01 crc kubenswrapper[4861]: I0219 14:58:01.471495 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 14:58:01 crc kubenswrapper[4861]: I0219 14:58:01.489041 4861 scope.go:117] "RemoveContainer" containerID="1f7deaefc3dbefd71372fcfe106d7844e7551a3175824504064e614a65d6df66"
Feb 19 14:58:01 crc kubenswrapper[4861]: W0219 14:58:01.493796 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c2df18c_c07d_43f7_9cef_373ea36b3c27.slice/crio-9d329d057ecf7c3fec9e5e9a429cd90b650f18b56e1dc2c8b878cfa63f0f5b48 WatchSource:0}: Error finding container 9d329d057ecf7c3fec9e5e9a429cd90b650f18b56e1dc2c8b878cfa63f0f5b48: Status 404 returned error can't find the container with id 9d329d057ecf7c3fec9e5e9a429cd90b650f18b56e1dc2c8b878cfa63f0f5b48
Feb 19 14:58:01 crc kubenswrapper[4861]: I0219 14:58:01.495238 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79985d99f7-gzt8f"]
Feb 19 14:58:01 crc kubenswrapper[4861]: I0219 14:58:01.505135 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79985d99f7-gzt8f"]
Feb 19 14:58:01 crc kubenswrapper[4861]: I0219 14:58:01.565955 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fccdd6f49-vt4bh"]
Feb 19 14:58:01 crc kubenswrapper[4861]: W0219 14:58:01.566011 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7750e32e_cbbc_44ea_85ca_d3df49562c97.slice/crio-8edc18646d4e52c0c9f094e5524ab06110190d96e2ac957a217f87f8d9fc629a WatchSource:0}: Error finding container 8edc18646d4e52c0c9f094e5524ab06110190d96e2ac957a217f87f8d9fc629a: Status 404 returned error can't find the container with id 8edc18646d4e52c0c9f094e5524ab06110190d96e2ac957a217f87f8d9fc629a
Feb 19 14:58:01 crc kubenswrapper[4861]: I0219 14:58:01.993765 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8258a5b9-b0b7-4280-a253-020556e8809b" path="/var/lib/kubelet/pods/8258a5b9-b0b7-4280-a253-020556e8809b/volumes"
Feb 19 14:58:01 crc kubenswrapper[4861]: I0219 14:58:01.995839 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dea5b00-6936-48f1-a8aa-aea402e7a2ce" path="/var/lib/kubelet/pods/9dea5b00-6936-48f1-a8aa-aea402e7a2ce/volumes"
Feb 19 14:58:02 crc kubenswrapper[4861]: I0219 14:58:02.468084 4861 generic.go:334] "Generic (PLEG): container finished" podID="7750e32e-cbbc-44ea-85ca-d3df49562c97" containerID="ad972d591f672965fe2b31b6aace32f4e46266aeea20e804035689fe0a0599ea" exitCode=0
Feb 19 14:58:02 crc kubenswrapper[4861]: I0219 14:58:02.468154 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fccdd6f49-vt4bh" event={"ID":"7750e32e-cbbc-44ea-85ca-d3df49562c97","Type":"ContainerDied","Data":"ad972d591f672965fe2b31b6aace32f4e46266aeea20e804035689fe0a0599ea"}
Feb 19 14:58:02 crc kubenswrapper[4861]: I0219 14:58:02.469475 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fccdd6f49-vt4bh" event={"ID":"7750e32e-cbbc-44ea-85ca-d3df49562c97","Type":"ContainerStarted","Data":"8edc18646d4e52c0c9f094e5524ab06110190d96e2ac957a217f87f8d9fc629a"}
Feb 19 14:58:02 crc kubenswrapper[4861]: I0219 14:58:02.471275 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c2df18c-c07d-43f7-9cef-373ea36b3c27","Type":"ContainerStarted","Data":"e3f03c730576bbacfc1ce698467da47068256db1d99f975ee02638b4f509b34c"}
Feb 19 14:58:02 crc kubenswrapper[4861]: I0219 14:58:02.471310 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c2df18c-c07d-43f7-9cef-373ea36b3c27","Type":"ContainerStarted","Data":"9d329d057ecf7c3fec9e5e9a429cd90b650f18b56e1dc2c8b878cfa63f0f5b48"}
Feb 19 14:58:03 crc kubenswrapper[4861]: I0219 14:58:03.482550 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fccdd6f49-vt4bh" event={"ID":"7750e32e-cbbc-44ea-85ca-d3df49562c97","Type":"ContainerStarted","Data":"4f67342a102458b17c39ace586a35d84ee9129fc0e5a701bb99553821c58a7e7"}
Feb 19 14:58:03 crc kubenswrapper[4861]: I0219 14:58:03.483034 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6fccdd6f49-vt4bh"
Feb 19 14:58:03 crc kubenswrapper[4861]: I0219 14:58:03.485818 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c2df18c-c07d-43f7-9cef-373ea36b3c27","Type":"ContainerStarted","Data":"a81b7d318b2fa7d4ed929fd8a7650d9a3d5461a1e49673bb08edb0e771656664"}
Feb 19 14:58:03 crc kubenswrapper[4861]: I0219 14:58:03.515641 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6fccdd6f49-vt4bh" podStartSLOduration=3.515623181 podStartE2EDuration="3.515623181s" podCreationTimestamp="2026-02-19 14:58:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:58:03.506866066 +0000 UTC m=+6498.167969294" watchObservedRunningTime="2026-02-19 14:58:03.515623181 +0000 UTC m=+6498.176726409"
Feb 19 14:58:03 crc kubenswrapper[4861]: I0219 14:58:03.834577 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 14:58:03 crc kubenswrapper[4861]: I0219 14:58:03.834925 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 14:58:04 crc kubenswrapper[4861]: I0219 14:58:04.512300 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c2df18c-c07d-43f7-9cef-373ea36b3c27","Type":"ContainerStarted","Data":"f6866719afa6d71e241fbc0fc3de58ac5f4b10a2cc1e0e66a88573d18cc12971"}
Feb 19 14:58:06 crc kubenswrapper[4861]: I0219 14:58:06.544271 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c2df18c-c07d-43f7-9cef-373ea36b3c27","Type":"ContainerStarted","Data":"b5b149cad3cb4bf12d9d7c50816c561e02ffed1860ac88f76bfc2ac7d864b401"}
Feb 19 14:58:06 crc kubenswrapper[4861]: I0219 14:58:06.544781 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 19 14:58:06 crc kubenswrapper[4861]: I0219 14:58:06.576924 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.541400868 podStartE2EDuration="6.576903006s" podCreationTimestamp="2026-02-19 14:58:00 +0000 UTC" firstStartedPulling="2026-02-19 14:58:01.498327444 +0000 UTC m=+6496.159430682" lastFinishedPulling="2026-02-19 14:58:05.533829592 +0000 UTC m=+6500.194932820" observedRunningTime="2026-02-19 14:58:06.562734455 +0000 UTC m=+6501.223837693" watchObservedRunningTime="2026-02-19 14:58:06.576903006 +0000 UTC m=+6501.238006244"
Feb 19 14:58:07 crc kubenswrapper[4861]: I0219
14:58:07.841254 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 19 14:58:09 crc kubenswrapper[4861]: I0219 14:58:09.353008 4861 scope.go:117] "RemoveContainer" containerID="8539c5cee392cf73f961402d608c37234a247147d6905d577616eb54bd55db08" Feb 19 14:58:09 crc kubenswrapper[4861]: I0219 14:58:09.391183 4861 scope.go:117] "RemoveContainer" containerID="25a21e514de4c59f309ee4e71cf3f5b079807df8c823ef1c7005e75af7676730" Feb 19 14:58:09 crc kubenswrapper[4861]: I0219 14:58:09.461316 4861 scope.go:117] "RemoveContainer" containerID="211bd0f97186a58151749062a4af4ff2be57d07e7bd3f5eda9d106f9045b6876" Feb 19 14:58:09 crc kubenswrapper[4861]: I0219 14:58:09.529830 4861 scope.go:117] "RemoveContainer" containerID="6f9618ef7c6ed44c353388f03201a1a8ea4d929648f01eced70499d618e67fe7" Feb 19 14:58:09 crc kubenswrapper[4861]: I0219 14:58:09.574561 4861 scope.go:117] "RemoveContainer" containerID="be405e319f9a142fbf13af7e8b162f99857acccd524dd831fe3406dfcc98b031" Feb 19 14:58:09 crc kubenswrapper[4861]: I0219 14:58:09.622821 4861 scope.go:117] "RemoveContainer" containerID="53f6a4cd431133f720f5a925bb9cfc15f0062624c0a2ea06377924cb031b525f" Feb 19 14:58:09 crc kubenswrapper[4861]: I0219 14:58:09.683708 4861 scope.go:117] "RemoveContainer" containerID="1d90921feac41dfae182ea825d511dbb5e0875a4f1ec3297a5b3adaf3c41138c" Feb 19 14:58:11 crc kubenswrapper[4861]: I0219 14:58:11.024717 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6fccdd6f49-vt4bh" Feb 19 14:58:11 crc kubenswrapper[4861]: I0219 14:58:11.043951 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xdc8j"] Feb 19 14:58:11 crc kubenswrapper[4861]: I0219 14:58:11.055305 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xdc8j"] Feb 19 14:58:11 crc kubenswrapper[4861]: I0219 14:58:11.113791 4861 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-b9974fcc9-dsdpf"] Feb 19 14:58:11 crc kubenswrapper[4861]: I0219 14:58:11.114075 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b9974fcc9-dsdpf" podUID="86d72108-520b-43d7-ad3a-f746b72b07e9" containerName="dnsmasq-dns" containerID="cri-o://d7c4a2d52b0f8c459e7e14e6c574152c928e10e817ad4297e8a4d413e949a03b" gracePeriod=10 Feb 19 14:58:11 crc kubenswrapper[4861]: I0219 14:58:11.633172 4861 generic.go:334] "Generic (PLEG): container finished" podID="86d72108-520b-43d7-ad3a-f746b72b07e9" containerID="d7c4a2d52b0f8c459e7e14e6c574152c928e10e817ad4297e8a4d413e949a03b" exitCode=0 Feb 19 14:58:11 crc kubenswrapper[4861]: I0219 14:58:11.633267 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b9974fcc9-dsdpf" event={"ID":"86d72108-520b-43d7-ad3a-f746b72b07e9","Type":"ContainerDied","Data":"d7c4a2d52b0f8c459e7e14e6c574152c928e10e817ad4297e8a4d413e949a03b"} Feb 19 14:58:11 crc kubenswrapper[4861]: I0219 14:58:11.633541 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b9974fcc9-dsdpf" event={"ID":"86d72108-520b-43d7-ad3a-f746b72b07e9","Type":"ContainerDied","Data":"8250ee4d19258f8c3a7e670f995a19c5fe2c9aab4a9d1a7a7016821c5ef5e633"} Feb 19 14:58:11 crc kubenswrapper[4861]: I0219 14:58:11.633558 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8250ee4d19258f8c3a7e670f995a19c5fe2c9aab4a9d1a7a7016821c5ef5e633" Feb 19 14:58:11 crc kubenswrapper[4861]: I0219 14:58:11.653735 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b9974fcc9-dsdpf" Feb 19 14:58:11 crc kubenswrapper[4861]: I0219 14:58:11.778198 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86d72108-520b-43d7-ad3a-f746b72b07e9-ovsdbserver-sb\") pod \"86d72108-520b-43d7-ad3a-f746b72b07e9\" (UID: \"86d72108-520b-43d7-ad3a-f746b72b07e9\") " Feb 19 14:58:11 crc kubenswrapper[4861]: I0219 14:58:11.778377 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86d72108-520b-43d7-ad3a-f746b72b07e9-ovsdbserver-nb\") pod \"86d72108-520b-43d7-ad3a-f746b72b07e9\" (UID: \"86d72108-520b-43d7-ad3a-f746b72b07e9\") " Feb 19 14:58:11 crc kubenswrapper[4861]: I0219 14:58:11.778474 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/86d72108-520b-43d7-ad3a-f746b72b07e9-openstack-cell1\") pod \"86d72108-520b-43d7-ad3a-f746b72b07e9\" (UID: \"86d72108-520b-43d7-ad3a-f746b72b07e9\") " Feb 19 14:58:11 crc kubenswrapper[4861]: I0219 14:58:11.778597 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86d72108-520b-43d7-ad3a-f746b72b07e9-dns-svc\") pod \"86d72108-520b-43d7-ad3a-f746b72b07e9\" (UID: \"86d72108-520b-43d7-ad3a-f746b72b07e9\") " Feb 19 14:58:11 crc kubenswrapper[4861]: I0219 14:58:11.778627 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86d72108-520b-43d7-ad3a-f746b72b07e9-config\") pod \"86d72108-520b-43d7-ad3a-f746b72b07e9\" (UID: \"86d72108-520b-43d7-ad3a-f746b72b07e9\") " Feb 19 14:58:11 crc kubenswrapper[4861]: I0219 14:58:11.778724 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bm6c\" (UniqueName: 
\"kubernetes.io/projected/86d72108-520b-43d7-ad3a-f746b72b07e9-kube-api-access-9bm6c\") pod \"86d72108-520b-43d7-ad3a-f746b72b07e9\" (UID: \"86d72108-520b-43d7-ad3a-f746b72b07e9\") " Feb 19 14:58:11 crc kubenswrapper[4861]: I0219 14:58:11.784490 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86d72108-520b-43d7-ad3a-f746b72b07e9-kube-api-access-9bm6c" (OuterVolumeSpecName: "kube-api-access-9bm6c") pod "86d72108-520b-43d7-ad3a-f746b72b07e9" (UID: "86d72108-520b-43d7-ad3a-f746b72b07e9"). InnerVolumeSpecName "kube-api-access-9bm6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:58:11 crc kubenswrapper[4861]: I0219 14:58:11.835076 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86d72108-520b-43d7-ad3a-f746b72b07e9-config" (OuterVolumeSpecName: "config") pod "86d72108-520b-43d7-ad3a-f746b72b07e9" (UID: "86d72108-520b-43d7-ad3a-f746b72b07e9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:58:11 crc kubenswrapper[4861]: I0219 14:58:11.842958 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86d72108-520b-43d7-ad3a-f746b72b07e9-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "86d72108-520b-43d7-ad3a-f746b72b07e9" (UID: "86d72108-520b-43d7-ad3a-f746b72b07e9"). InnerVolumeSpecName "openstack-cell1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:58:11 crc kubenswrapper[4861]: I0219 14:58:11.847079 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86d72108-520b-43d7-ad3a-f746b72b07e9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "86d72108-520b-43d7-ad3a-f746b72b07e9" (UID: "86d72108-520b-43d7-ad3a-f746b72b07e9"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:58:11 crc kubenswrapper[4861]: I0219 14:58:11.880098 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86d72108-520b-43d7-ad3a-f746b72b07e9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "86d72108-520b-43d7-ad3a-f746b72b07e9" (UID: "86d72108-520b-43d7-ad3a-f746b72b07e9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:58:11 crc kubenswrapper[4861]: I0219 14:58:11.881724 4861 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/86d72108-520b-43d7-ad3a-f746b72b07e9-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 14:58:11 crc kubenswrapper[4861]: I0219 14:58:11.881755 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86d72108-520b-43d7-ad3a-f746b72b07e9-config\") on node \"crc\" DevicePath \"\"" Feb 19 14:58:11 crc kubenswrapper[4861]: I0219 14:58:11.881765 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86d72108-520b-43d7-ad3a-f746b72b07e9-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 14:58:11 crc kubenswrapper[4861]: I0219 14:58:11.881780 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bm6c\" (UniqueName: \"kubernetes.io/projected/86d72108-520b-43d7-ad3a-f746b72b07e9-kube-api-access-9bm6c\") on node \"crc\" DevicePath \"\"" Feb 19 14:58:11 crc kubenswrapper[4861]: I0219 14:58:11.881770 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86d72108-520b-43d7-ad3a-f746b72b07e9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "86d72108-520b-43d7-ad3a-f746b72b07e9" (UID: "86d72108-520b-43d7-ad3a-f746b72b07e9"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 14:58:11 crc kubenswrapper[4861]: I0219 14:58:11.881793 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86d72108-520b-43d7-ad3a-f746b72b07e9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 14:58:11 crc kubenswrapper[4861]: I0219 14:58:11.992788 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86d72108-520b-43d7-ad3a-f746b72b07e9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 14:58:11 crc kubenswrapper[4861]: I0219 14:58:11.993342 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cb8387f-bbdc-4001-af0d-c9aa517e829a" path="/var/lib/kubelet/pods/5cb8387f-bbdc-4001-af0d-c9aa517e829a/volumes" Feb 19 14:58:12 crc kubenswrapper[4861]: I0219 14:58:12.050470 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-wmlzl"] Feb 19 14:58:12 crc kubenswrapper[4861]: I0219 14:58:12.056592 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-wmlzl"] Feb 19 14:58:12 crc kubenswrapper[4861]: I0219 14:58:12.642404 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b9974fcc9-dsdpf" Feb 19 14:58:12 crc kubenswrapper[4861]: I0219 14:58:12.679695 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b9974fcc9-dsdpf"] Feb 19 14:58:12 crc kubenswrapper[4861]: I0219 14:58:12.694493 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b9974fcc9-dsdpf"] Feb 19 14:58:13 crc kubenswrapper[4861]: I0219 14:58:13.994416 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86d72108-520b-43d7-ad3a-f746b72b07e9" path="/var/lib/kubelet/pods/86d72108-520b-43d7-ad3a-f746b72b07e9/volumes" Feb 19 14:58:13 crc kubenswrapper[4861]: I0219 14:58:13.995964 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa46daf6-db14-4383-8df7-79bcbc7e8cac" path="/var/lib/kubelet/pods/fa46daf6-db14-4383-8df7-79bcbc7e8cac/volumes" Feb 19 14:58:20 crc kubenswrapper[4861]: I0219 14:58:20.389059 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chspxt"] Feb 19 14:58:20 crc kubenswrapper[4861]: E0219 14:58:20.390089 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dea5b00-6936-48f1-a8aa-aea402e7a2ce" containerName="init" Feb 19 14:58:20 crc kubenswrapper[4861]: I0219 14:58:20.390107 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dea5b00-6936-48f1-a8aa-aea402e7a2ce" containerName="init" Feb 19 14:58:20 crc kubenswrapper[4861]: E0219 14:58:20.390137 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86d72108-520b-43d7-ad3a-f746b72b07e9" containerName="init" Feb 19 14:58:20 crc kubenswrapper[4861]: I0219 14:58:20.390145 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d72108-520b-43d7-ad3a-f746b72b07e9" containerName="init" Feb 19 14:58:20 crc kubenswrapper[4861]: E0219 14:58:20.390166 4861 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="86d72108-520b-43d7-ad3a-f746b72b07e9" containerName="dnsmasq-dns" Feb 19 14:58:20 crc kubenswrapper[4861]: I0219 14:58:20.390187 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d72108-520b-43d7-ad3a-f746b72b07e9" containerName="dnsmasq-dns" Feb 19 14:58:20 crc kubenswrapper[4861]: E0219 14:58:20.390210 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dea5b00-6936-48f1-a8aa-aea402e7a2ce" containerName="dnsmasq-dns" Feb 19 14:58:20 crc kubenswrapper[4861]: I0219 14:58:20.390218 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dea5b00-6936-48f1-a8aa-aea402e7a2ce" containerName="dnsmasq-dns" Feb 19 14:58:20 crc kubenswrapper[4861]: I0219 14:58:20.390504 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="86d72108-520b-43d7-ad3a-f746b72b07e9" containerName="dnsmasq-dns" Feb 19 14:58:20 crc kubenswrapper[4861]: I0219 14:58:20.390521 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dea5b00-6936-48f1-a8aa-aea402e7a2ce" containerName="dnsmasq-dns" Feb 19 14:58:20 crc kubenswrapper[4861]: I0219 14:58:20.391441 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chspxt" Feb 19 14:58:20 crc kubenswrapper[4861]: I0219 14:58:20.397223 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 14:58:20 crc kubenswrapper[4861]: I0219 14:58:20.397601 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 14:58:20 crc kubenswrapper[4861]: I0219 14:58:20.397958 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jgxb2" Feb 19 14:58:20 crc kubenswrapper[4861]: I0219 14:58:20.401090 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 14:58:20 crc kubenswrapper[4861]: I0219 14:58:20.446058 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chspxt"] Feb 19 14:58:20 crc kubenswrapper[4861]: I0219 14:58:20.499640 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8af32add-4795-4445-be16-96d51882b8ea-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-chspxt\" (UID: \"8af32add-4795-4445-be16-96d51882b8ea\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chspxt" Feb 19 14:58:20 crc kubenswrapper[4861]: I0219 14:58:20.499739 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8af32add-4795-4445-be16-96d51882b8ea-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-chspxt\" (UID: \"8af32add-4795-4445-be16-96d51882b8ea\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chspxt" Feb 19 14:58:20 crc kubenswrapper[4861]: 
I0219 14:58:20.499846 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcndw\" (UniqueName: \"kubernetes.io/projected/8af32add-4795-4445-be16-96d51882b8ea-kube-api-access-jcndw\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-chspxt\" (UID: \"8af32add-4795-4445-be16-96d51882b8ea\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chspxt" Feb 19 14:58:20 crc kubenswrapper[4861]: I0219 14:58:20.499950 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8af32add-4795-4445-be16-96d51882b8ea-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-chspxt\" (UID: \"8af32add-4795-4445-be16-96d51882b8ea\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chspxt" Feb 19 14:58:20 crc kubenswrapper[4861]: I0219 14:58:20.602116 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8af32add-4795-4445-be16-96d51882b8ea-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-chspxt\" (UID: \"8af32add-4795-4445-be16-96d51882b8ea\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chspxt" Feb 19 14:58:20 crc kubenswrapper[4861]: I0219 14:58:20.602272 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8af32add-4795-4445-be16-96d51882b8ea-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-chspxt\" (UID: \"8af32add-4795-4445-be16-96d51882b8ea\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chspxt" Feb 19 14:58:20 crc 
kubenswrapper[4861]: I0219 14:58:20.602311 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8af32add-4795-4445-be16-96d51882b8ea-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-chspxt\" (UID: \"8af32add-4795-4445-be16-96d51882b8ea\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chspxt" Feb 19 14:58:20 crc kubenswrapper[4861]: I0219 14:58:20.602355 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcndw\" (UniqueName: \"kubernetes.io/projected/8af32add-4795-4445-be16-96d51882b8ea-kube-api-access-jcndw\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-chspxt\" (UID: \"8af32add-4795-4445-be16-96d51882b8ea\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chspxt" Feb 19 14:58:20 crc kubenswrapper[4861]: I0219 14:58:20.640460 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8af32add-4795-4445-be16-96d51882b8ea-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-chspxt\" (UID: \"8af32add-4795-4445-be16-96d51882b8ea\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chspxt" Feb 19 14:58:20 crc kubenswrapper[4861]: I0219 14:58:20.640500 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8af32add-4795-4445-be16-96d51882b8ea-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-chspxt\" (UID: \"8af32add-4795-4445-be16-96d51882b8ea\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chspxt" Feb 19 14:58:20 crc kubenswrapper[4861]: I0219 14:58:20.648486 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8af32add-4795-4445-be16-96d51882b8ea-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-chspxt\" (UID: \"8af32add-4795-4445-be16-96d51882b8ea\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chspxt" Feb 19 14:58:20 crc kubenswrapper[4861]: I0219 14:58:20.648703 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcndw\" (UniqueName: \"kubernetes.io/projected/8af32add-4795-4445-be16-96d51882b8ea-kube-api-access-jcndw\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-chspxt\" (UID: \"8af32add-4795-4445-be16-96d51882b8ea\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chspxt" Feb 19 14:58:20 crc kubenswrapper[4861]: I0219 14:58:20.741700 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chspxt" Feb 19 14:58:21 crc kubenswrapper[4861]: W0219 14:58:21.426655 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8af32add_4795_4445_be16_96d51882b8ea.slice/crio-b4749deff5963200e5f4de36247de7a02daa048d8d571965a7d6b503742511f3 WatchSource:0}: Error finding container b4749deff5963200e5f4de36247de7a02daa048d8d571965a7d6b503742511f3: Status 404 returned error can't find the container with id b4749deff5963200e5f4de36247de7a02daa048d8d571965a7d6b503742511f3 Feb 19 14:58:21 crc kubenswrapper[4861]: I0219 14:58:21.430150 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chspxt"] Feb 19 14:58:21 crc kubenswrapper[4861]: I0219 14:58:21.746348 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chspxt" 
event={"ID":"8af32add-4795-4445-be16-96d51882b8ea","Type":"ContainerStarted","Data":"b4749deff5963200e5f4de36247de7a02daa048d8d571965a7d6b503742511f3"} Feb 19 14:58:30 crc kubenswrapper[4861]: I0219 14:58:30.044785 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-28jg8"] Feb 19 14:58:30 crc kubenswrapper[4861]: I0219 14:58:30.057863 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-28jg8"] Feb 19 14:58:30 crc kubenswrapper[4861]: I0219 14:58:30.883368 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 14:58:31 crc kubenswrapper[4861]: I0219 14:58:31.990839 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a657122f-164e-4978-96c1-b7e9007f76ad" path="/var/lib/kubelet/pods/a657122f-164e-4978-96c1-b7e9007f76ad/volumes" Feb 19 14:58:32 crc kubenswrapper[4861]: I0219 14:58:32.866678 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chspxt" event={"ID":"8af32add-4795-4445-be16-96d51882b8ea","Type":"ContainerStarted","Data":"353b02ddf94dda819484262b9d9b84cba95de9f07e988cc6149c83f336986815"} Feb 19 14:58:32 crc kubenswrapper[4861]: I0219 14:58:32.920504 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chspxt" podStartSLOduration=2.659901534 podStartE2EDuration="12.920468975s" podCreationTimestamp="2026-02-19 14:58:20 +0000 UTC" firstStartedPulling="2026-02-19 14:58:21.430816063 +0000 UTC m=+6516.091919331" lastFinishedPulling="2026-02-19 14:58:31.691383524 +0000 UTC m=+6526.352486772" observedRunningTime="2026-02-19 14:58:32.893154182 +0000 UTC m=+6527.554257430" watchObservedRunningTime="2026-02-19 14:58:32.920468975 +0000 UTC m=+6527.581572213" Feb 19 14:58:33 crc kubenswrapper[4861]: I0219 14:58:33.837132 4861 patch_prober.go:28] 
interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 14:58:33 crc kubenswrapper[4861]: I0219 14:58:33.837235 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 14:58:33 crc kubenswrapper[4861]: I0219 14:58:33.837319 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 14:58:33 crc kubenswrapper[4861]: I0219 14:58:33.839094 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7d003f985eed6ad9b7bc08457548854ca76bfd5dbc339051aa16497b1baa9fde"} pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 14:58:33 crc kubenswrapper[4861]: I0219 14:58:33.839260 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" containerID="cri-o://7d003f985eed6ad9b7bc08457548854ca76bfd5dbc339051aa16497b1baa9fde" gracePeriod=600 Feb 19 14:58:34 crc kubenswrapper[4861]: I0219 14:58:34.891810 4861 generic.go:334] "Generic (PLEG): container finished" podID="478e6971-05ac-43f2-99a2-cd93644c6227" containerID="7d003f985eed6ad9b7bc08457548854ca76bfd5dbc339051aa16497b1baa9fde" exitCode=0 Feb 19 14:58:34 crc kubenswrapper[4861]: I0219 
14:58:34.891901 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerDied","Data":"7d003f985eed6ad9b7bc08457548854ca76bfd5dbc339051aa16497b1baa9fde"} Feb 19 14:58:34 crc kubenswrapper[4861]: I0219 14:58:34.892538 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"57fc9127e8131615508ae362e809bc7c406ed5c389a14d0426fa90f34a09b607"} Feb 19 14:58:34 crc kubenswrapper[4861]: I0219 14:58:34.892581 4861 scope.go:117] "RemoveContainer" containerID="502a4549a2baa2b1572925937a0eb20e1bd1517278b2629c4572ef8e5d2f113d" Feb 19 14:58:46 crc kubenswrapper[4861]: I0219 14:58:46.038113 4861 generic.go:334] "Generic (PLEG): container finished" podID="8af32add-4795-4445-be16-96d51882b8ea" containerID="353b02ddf94dda819484262b9d9b84cba95de9f07e988cc6149c83f336986815" exitCode=0 Feb 19 14:58:46 crc kubenswrapper[4861]: I0219 14:58:46.038186 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chspxt" event={"ID":"8af32add-4795-4445-be16-96d51882b8ea","Type":"ContainerDied","Data":"353b02ddf94dda819484262b9d9b84cba95de9f07e988cc6149c83f336986815"} Feb 19 14:58:47 crc kubenswrapper[4861]: I0219 14:58:47.588564 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chspxt" Feb 19 14:58:47 crc kubenswrapper[4861]: I0219 14:58:47.685543 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8af32add-4795-4445-be16-96d51882b8ea-ssh-key-openstack-cell1\") pod \"8af32add-4795-4445-be16-96d51882b8ea\" (UID: \"8af32add-4795-4445-be16-96d51882b8ea\") " Feb 19 14:58:47 crc kubenswrapper[4861]: I0219 14:58:47.685684 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8af32add-4795-4445-be16-96d51882b8ea-inventory\") pod \"8af32add-4795-4445-be16-96d51882b8ea\" (UID: \"8af32add-4795-4445-be16-96d51882b8ea\") " Feb 19 14:58:47 crc kubenswrapper[4861]: I0219 14:58:47.685833 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcndw\" (UniqueName: \"kubernetes.io/projected/8af32add-4795-4445-be16-96d51882b8ea-kube-api-access-jcndw\") pod \"8af32add-4795-4445-be16-96d51882b8ea\" (UID: \"8af32add-4795-4445-be16-96d51882b8ea\") " Feb 19 14:58:47 crc kubenswrapper[4861]: I0219 14:58:47.685905 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8af32add-4795-4445-be16-96d51882b8ea-pre-adoption-validation-combined-ca-bundle\") pod \"8af32add-4795-4445-be16-96d51882b8ea\" (UID: \"8af32add-4795-4445-be16-96d51882b8ea\") " Feb 19 14:58:47 crc kubenswrapper[4861]: I0219 14:58:47.692502 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8af32add-4795-4445-be16-96d51882b8ea-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "8af32add-4795-4445-be16-96d51882b8ea" (UID: "8af32add-4795-4445-be16-96d51882b8ea"). 
InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:58:47 crc kubenswrapper[4861]: I0219 14:58:47.693711 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8af32add-4795-4445-be16-96d51882b8ea-kube-api-access-jcndw" (OuterVolumeSpecName: "kube-api-access-jcndw") pod "8af32add-4795-4445-be16-96d51882b8ea" (UID: "8af32add-4795-4445-be16-96d51882b8ea"). InnerVolumeSpecName "kube-api-access-jcndw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:58:47 crc kubenswrapper[4861]: I0219 14:58:47.716664 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8af32add-4795-4445-be16-96d51882b8ea-inventory" (OuterVolumeSpecName: "inventory") pod "8af32add-4795-4445-be16-96d51882b8ea" (UID: "8af32add-4795-4445-be16-96d51882b8ea"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:58:47 crc kubenswrapper[4861]: I0219 14:58:47.722503 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8af32add-4795-4445-be16-96d51882b8ea-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "8af32add-4795-4445-be16-96d51882b8ea" (UID: "8af32add-4795-4445-be16-96d51882b8ea"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:58:47 crc kubenswrapper[4861]: I0219 14:58:47.788440 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcndw\" (UniqueName: \"kubernetes.io/projected/8af32add-4795-4445-be16-96d51882b8ea-kube-api-access-jcndw\") on node \"crc\" DevicePath \"\"" Feb 19 14:58:47 crc kubenswrapper[4861]: I0219 14:58:47.788462 4861 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8af32add-4795-4445-be16-96d51882b8ea-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:58:47 crc kubenswrapper[4861]: I0219 14:58:47.788474 4861 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8af32add-4795-4445-be16-96d51882b8ea-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 14:58:47 crc kubenswrapper[4861]: I0219 14:58:47.788484 4861 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8af32add-4795-4445-be16-96d51882b8ea-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 14:58:48 crc kubenswrapper[4861]: I0219 14:58:48.072497 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chspxt" Feb 19 14:58:48 crc kubenswrapper[4861]: I0219 14:58:48.073511 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chspxt" event={"ID":"8af32add-4795-4445-be16-96d51882b8ea","Type":"ContainerDied","Data":"b4749deff5963200e5f4de36247de7a02daa048d8d571965a7d6b503742511f3"} Feb 19 14:58:48 crc kubenswrapper[4861]: I0219 14:58:48.073554 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4749deff5963200e5f4de36247de7a02daa048d8d571965a7d6b503742511f3" Feb 19 14:58:53 crc kubenswrapper[4861]: I0219 14:58:53.678173 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf"] Feb 19 14:58:53 crc kubenswrapper[4861]: E0219 14:58:53.680274 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8af32add-4795-4445-be16-96d51882b8ea" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Feb 19 14:58:53 crc kubenswrapper[4861]: I0219 14:58:53.680308 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8af32add-4795-4445-be16-96d51882b8ea" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Feb 19 14:58:53 crc kubenswrapper[4861]: I0219 14:58:53.681218 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8af32add-4795-4445-be16-96d51882b8ea" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Feb 19 14:58:53 crc kubenswrapper[4861]: I0219 14:58:53.682902 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf" Feb 19 14:58:53 crc kubenswrapper[4861]: I0219 14:58:53.687310 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 14:58:53 crc kubenswrapper[4861]: I0219 14:58:53.687722 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jgxb2" Feb 19 14:58:53 crc kubenswrapper[4861]: I0219 14:58:53.688061 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 14:58:53 crc kubenswrapper[4861]: I0219 14:58:53.696238 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 14:58:53 crc kubenswrapper[4861]: I0219 14:58:53.710626 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf"] Feb 19 14:58:53 crc kubenswrapper[4861]: I0219 14:58:53.841117 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6ef1f749-73bc-4049-ba56-e022f58ca9d9-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf\" (UID: \"6ef1f749-73bc-4049-ba56-e022f58ca9d9\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf" Feb 19 14:58:53 crc kubenswrapper[4861]: I0219 14:58:53.841254 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ef1f749-73bc-4049-ba56-e022f58ca9d9-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf\" (UID: \"6ef1f749-73bc-4049-ba56-e022f58ca9d9\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf" Feb 19 14:58:53 crc kubenswrapper[4861]: I0219 14:58:53.841438 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ef1f749-73bc-4049-ba56-e022f58ca9d9-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf\" (UID: \"6ef1f749-73bc-4049-ba56-e022f58ca9d9\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf" Feb 19 14:58:53 crc kubenswrapper[4861]: I0219 14:58:53.841548 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvpjt\" (UniqueName: \"kubernetes.io/projected/6ef1f749-73bc-4049-ba56-e022f58ca9d9-kube-api-access-wvpjt\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf\" (UID: \"6ef1f749-73bc-4049-ba56-e022f58ca9d9\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf" Feb 19 14:58:53 crc kubenswrapper[4861]: I0219 14:58:53.943566 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ef1f749-73bc-4049-ba56-e022f58ca9d9-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf\" (UID: \"6ef1f749-73bc-4049-ba56-e022f58ca9d9\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf" Feb 19 14:58:53 crc kubenswrapper[4861]: I0219 14:58:53.943671 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ef1f749-73bc-4049-ba56-e022f58ca9d9-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf\" (UID: \"6ef1f749-73bc-4049-ba56-e022f58ca9d9\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf" Feb 19 14:58:53 crc kubenswrapper[4861]: I0219 14:58:53.943769 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvpjt\" (UniqueName: 
\"kubernetes.io/projected/6ef1f749-73bc-4049-ba56-e022f58ca9d9-kube-api-access-wvpjt\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf\" (UID: \"6ef1f749-73bc-4049-ba56-e022f58ca9d9\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf" Feb 19 14:58:53 crc kubenswrapper[4861]: I0219 14:58:53.943853 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6ef1f749-73bc-4049-ba56-e022f58ca9d9-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf\" (UID: \"6ef1f749-73bc-4049-ba56-e022f58ca9d9\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf" Feb 19 14:58:53 crc kubenswrapper[4861]: I0219 14:58:53.951224 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ef1f749-73bc-4049-ba56-e022f58ca9d9-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf\" (UID: \"6ef1f749-73bc-4049-ba56-e022f58ca9d9\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf" Feb 19 14:58:53 crc kubenswrapper[4861]: I0219 14:58:53.951513 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ef1f749-73bc-4049-ba56-e022f58ca9d9-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf\" (UID: \"6ef1f749-73bc-4049-ba56-e022f58ca9d9\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf" Feb 19 14:58:53 crc kubenswrapper[4861]: I0219 14:58:53.960800 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6ef1f749-73bc-4049-ba56-e022f58ca9d9-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf\" (UID: \"6ef1f749-73bc-4049-ba56-e022f58ca9d9\") " 
pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf" Feb 19 14:58:53 crc kubenswrapper[4861]: I0219 14:58:53.961978 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvpjt\" (UniqueName: \"kubernetes.io/projected/6ef1f749-73bc-4049-ba56-e022f58ca9d9-kube-api-access-wvpjt\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf\" (UID: \"6ef1f749-73bc-4049-ba56-e022f58ca9d9\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf" Feb 19 14:58:54 crc kubenswrapper[4861]: I0219 14:58:54.017919 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf" Feb 19 14:58:54 crc kubenswrapper[4861]: I0219 14:58:54.611076 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf"] Feb 19 14:58:55 crc kubenswrapper[4861]: I0219 14:58:55.181795 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf" event={"ID":"6ef1f749-73bc-4049-ba56-e022f58ca9d9","Type":"ContainerStarted","Data":"c7320322cfb1b0744cf48d77b54d43ecb18f531db357f931289d1273fd66ca6c"} Feb 19 14:58:56 crc kubenswrapper[4861]: I0219 14:58:56.193777 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf" event={"ID":"6ef1f749-73bc-4049-ba56-e022f58ca9d9","Type":"ContainerStarted","Data":"c37052c6cd56fd78a35c79a7b270d9caa18d6fb4159c74dd5b9e39dac705deda"} Feb 19 14:58:56 crc kubenswrapper[4861]: I0219 14:58:56.226021 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf" podStartSLOduration=2.788482114 podStartE2EDuration="3.225996658s" podCreationTimestamp="2026-02-19 14:58:53 +0000 UTC" firstStartedPulling="2026-02-19 14:58:54.60950178 +0000 UTC m=+6549.270605018" 
lastFinishedPulling="2026-02-19 14:58:55.047016294 +0000 UTC m=+6549.708119562" observedRunningTime="2026-02-19 14:58:56.218321923 +0000 UTC m=+6550.879425191" watchObservedRunningTime="2026-02-19 14:58:56.225996658 +0000 UTC m=+6550.887099906" Feb 19 14:59:09 crc kubenswrapper[4861]: I0219 14:59:09.938305 4861 scope.go:117] "RemoveContainer" containerID="48d346ae6aa9141034d2505c31f4ffe2e8e6141ee509925eedbef977de667241" Feb 19 14:59:09 crc kubenswrapper[4861]: I0219 14:59:09.987623 4861 scope.go:117] "RemoveContainer" containerID="96539a8b84e39cc7e392a1f33a0decc9c5af559d058701e46eaf6d886a719170" Feb 19 14:59:10 crc kubenswrapper[4861]: I0219 14:59:10.061759 4861 scope.go:117] "RemoveContainer" containerID="eea00f8a95bd2e50aebc8f8bacd0f8d43e3b775eb31231554a291504efd96a47" Feb 19 15:00:00 crc kubenswrapper[4861]: I0219 15:00:00.552191 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525220-mgnks"] Feb 19 15:00:00 crc kubenswrapper[4861]: I0219 15:00:00.554214 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-mgnks" Feb 19 15:00:00 crc kubenswrapper[4861]: I0219 15:00:00.557347 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 15:00:00 crc kubenswrapper[4861]: I0219 15:00:00.557490 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 15:00:00 crc kubenswrapper[4861]: I0219 15:00:00.561585 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525220-mgnks"] Feb 19 15:00:00 crc kubenswrapper[4861]: I0219 15:00:00.651887 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58d47748-45c0-444b-b34b-5e8cf3aee659-config-volume\") pod \"collect-profiles-29525220-mgnks\" (UID: \"58d47748-45c0-444b-b34b-5e8cf3aee659\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-mgnks" Feb 19 15:00:00 crc kubenswrapper[4861]: I0219 15:00:00.651964 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxdz2\" (UniqueName: \"kubernetes.io/projected/58d47748-45c0-444b-b34b-5e8cf3aee659-kube-api-access-kxdz2\") pod \"collect-profiles-29525220-mgnks\" (UID: \"58d47748-45c0-444b-b34b-5e8cf3aee659\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-mgnks" Feb 19 15:00:00 crc kubenswrapper[4861]: I0219 15:00:00.652101 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/58d47748-45c0-444b-b34b-5e8cf3aee659-secret-volume\") pod \"collect-profiles-29525220-mgnks\" (UID: \"58d47748-45c0-444b-b34b-5e8cf3aee659\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-mgnks" Feb 19 15:00:00 crc kubenswrapper[4861]: I0219 15:00:00.753640 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxdz2\" (UniqueName: \"kubernetes.io/projected/58d47748-45c0-444b-b34b-5e8cf3aee659-kube-api-access-kxdz2\") pod \"collect-profiles-29525220-mgnks\" (UID: \"58d47748-45c0-444b-b34b-5e8cf3aee659\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-mgnks" Feb 19 15:00:00 crc kubenswrapper[4861]: I0219 15:00:00.754025 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/58d47748-45c0-444b-b34b-5e8cf3aee659-secret-volume\") pod \"collect-profiles-29525220-mgnks\" (UID: \"58d47748-45c0-444b-b34b-5e8cf3aee659\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-mgnks" Feb 19 15:00:00 crc kubenswrapper[4861]: I0219 15:00:00.754924 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58d47748-45c0-444b-b34b-5e8cf3aee659-config-volume\") pod \"collect-profiles-29525220-mgnks\" (UID: \"58d47748-45c0-444b-b34b-5e8cf3aee659\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-mgnks" Feb 19 15:00:00 crc kubenswrapper[4861]: I0219 15:00:00.755635 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58d47748-45c0-444b-b34b-5e8cf3aee659-config-volume\") pod \"collect-profiles-29525220-mgnks\" (UID: \"58d47748-45c0-444b-b34b-5e8cf3aee659\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-mgnks" Feb 19 15:00:00 crc kubenswrapper[4861]: I0219 15:00:00.759706 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/58d47748-45c0-444b-b34b-5e8cf3aee659-secret-volume\") pod \"collect-profiles-29525220-mgnks\" (UID: \"58d47748-45c0-444b-b34b-5e8cf3aee659\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-mgnks" Feb 19 15:00:00 crc kubenswrapper[4861]: I0219 15:00:00.773083 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxdz2\" (UniqueName: \"kubernetes.io/projected/58d47748-45c0-444b-b34b-5e8cf3aee659-kube-api-access-kxdz2\") pod \"collect-profiles-29525220-mgnks\" (UID: \"58d47748-45c0-444b-b34b-5e8cf3aee659\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-mgnks" Feb 19 15:00:00 crc kubenswrapper[4861]: I0219 15:00:00.893531 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-mgnks" Feb 19 15:00:01 crc kubenswrapper[4861]: I0219 15:00:01.474322 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525220-mgnks"] Feb 19 15:00:02 crc kubenswrapper[4861]: I0219 15:00:02.010447 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-mgnks" event={"ID":"58d47748-45c0-444b-b34b-5e8cf3aee659","Type":"ContainerStarted","Data":"07403846a42b034adc191551f6a5d1f1d90bab596c6f8dba445f647e0af9d399"} Feb 19 15:00:02 crc kubenswrapper[4861]: I0219 15:00:02.010498 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-mgnks" event={"ID":"58d47748-45c0-444b-b34b-5e8cf3aee659","Type":"ContainerStarted","Data":"89e635d73b7586a608af4ed28644d2d6a19491f43a2a6021ec60acbfabc7bdfa"} Feb 19 15:00:03 crc kubenswrapper[4861]: I0219 15:00:03.027050 4861 generic.go:334] "Generic (PLEG): container finished" podID="58d47748-45c0-444b-b34b-5e8cf3aee659" 
containerID="07403846a42b034adc191551f6a5d1f1d90bab596c6f8dba445f647e0af9d399" exitCode=0 Feb 19 15:00:03 crc kubenswrapper[4861]: I0219 15:00:03.027174 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-mgnks" event={"ID":"58d47748-45c0-444b-b34b-5e8cf3aee659","Type":"ContainerDied","Data":"07403846a42b034adc191551f6a5d1f1d90bab596c6f8dba445f647e0af9d399"} Feb 19 15:00:04 crc kubenswrapper[4861]: I0219 15:00:04.409273 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-mgnks" Feb 19 15:00:04 crc kubenswrapper[4861]: I0219 15:00:04.451917 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58d47748-45c0-444b-b34b-5e8cf3aee659-config-volume\") pod \"58d47748-45c0-444b-b34b-5e8cf3aee659\" (UID: \"58d47748-45c0-444b-b34b-5e8cf3aee659\") " Feb 19 15:00:04 crc kubenswrapper[4861]: I0219 15:00:04.452503 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/58d47748-45c0-444b-b34b-5e8cf3aee659-secret-volume\") pod \"58d47748-45c0-444b-b34b-5e8cf3aee659\" (UID: \"58d47748-45c0-444b-b34b-5e8cf3aee659\") " Feb 19 15:00:04 crc kubenswrapper[4861]: I0219 15:00:04.452572 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxdz2\" (UniqueName: \"kubernetes.io/projected/58d47748-45c0-444b-b34b-5e8cf3aee659-kube-api-access-kxdz2\") pod \"58d47748-45c0-444b-b34b-5e8cf3aee659\" (UID: \"58d47748-45c0-444b-b34b-5e8cf3aee659\") " Feb 19 15:00:04 crc kubenswrapper[4861]: I0219 15:00:04.452881 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58d47748-45c0-444b-b34b-5e8cf3aee659-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"58d47748-45c0-444b-b34b-5e8cf3aee659" (UID: "58d47748-45c0-444b-b34b-5e8cf3aee659"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:00:04 crc kubenswrapper[4861]: I0219 15:00:04.453746 4861 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58d47748-45c0-444b-b34b-5e8cf3aee659-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 15:00:04 crc kubenswrapper[4861]: I0219 15:00:04.457999 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58d47748-45c0-444b-b34b-5e8cf3aee659-kube-api-access-kxdz2" (OuterVolumeSpecName: "kube-api-access-kxdz2") pod "58d47748-45c0-444b-b34b-5e8cf3aee659" (UID: "58d47748-45c0-444b-b34b-5e8cf3aee659"). InnerVolumeSpecName "kube-api-access-kxdz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:00:04 crc kubenswrapper[4861]: I0219 15:00:04.458080 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58d47748-45c0-444b-b34b-5e8cf3aee659-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "58d47748-45c0-444b-b34b-5e8cf3aee659" (UID: "58d47748-45c0-444b-b34b-5e8cf3aee659"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:00:04 crc kubenswrapper[4861]: I0219 15:00:04.556163 4861 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/58d47748-45c0-444b-b34b-5e8cf3aee659-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 15:00:04 crc kubenswrapper[4861]: I0219 15:00:04.556230 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxdz2\" (UniqueName: \"kubernetes.io/projected/58d47748-45c0-444b-b34b-5e8cf3aee659-kube-api-access-kxdz2\") on node \"crc\" DevicePath \"\"" Feb 19 15:00:05 crc kubenswrapper[4861]: I0219 15:00:05.059873 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-mgnks" event={"ID":"58d47748-45c0-444b-b34b-5e8cf3aee659","Type":"ContainerDied","Data":"89e635d73b7586a608af4ed28644d2d6a19491f43a2a6021ec60acbfabc7bdfa"} Feb 19 15:00:05 crc kubenswrapper[4861]: I0219 15:00:05.059922 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89e635d73b7586a608af4ed28644d2d6a19491f43a2a6021ec60acbfabc7bdfa" Feb 19 15:00:05 crc kubenswrapper[4861]: I0219 15:00:05.059989 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-mgnks" Feb 19 15:00:05 crc kubenswrapper[4861]: I0219 15:00:05.149461 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525175-bg4xn"] Feb 19 15:00:05 crc kubenswrapper[4861]: I0219 15:00:05.158598 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525175-bg4xn"] Feb 19 15:00:05 crc kubenswrapper[4861]: I0219 15:00:05.996772 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bcf75d3-3362-4134-9bc0-89fc873610a5" path="/var/lib/kubelet/pods/8bcf75d3-3362-4134-9bc0-89fc873610a5/volumes" Feb 19 15:00:10 crc kubenswrapper[4861]: I0219 15:00:10.226266 4861 scope.go:117] "RemoveContainer" containerID="c421d19a9e489a719dd9f5df4197d90121a2f4d8d1fd3526599c0fcecef15254" Feb 19 15:00:17 crc kubenswrapper[4861]: I0219 15:00:17.044618 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-create-trfks"] Feb 19 15:00:17 crc kubenswrapper[4861]: I0219 15:00:17.057808 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-create-trfks"] Feb 19 15:00:17 crc kubenswrapper[4861]: I0219 15:00:17.997298 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24702168-bec0-4c40-8c69-d813349d00df" path="/var/lib/kubelet/pods/24702168-bec0-4c40-8c69-d813349d00df/volumes" Feb 19 15:00:18 crc kubenswrapper[4861]: I0219 15:00:18.087994 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-f9ad-account-create-update-lcn8p"] Feb 19 15:00:18 crc kubenswrapper[4861]: I0219 15:00:18.104750 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-f9ad-account-create-update-lcn8p"] Feb 19 15:00:19 crc kubenswrapper[4861]: I0219 15:00:19.998958 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="66a13647-b213-48d9-b56e-c1d2f25623d3" path="/var/lib/kubelet/pods/66a13647-b213-48d9-b56e-c1d2f25623d3/volumes" Feb 19 15:00:24 crc kubenswrapper[4861]: I0219 15:00:24.052656 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-persistence-db-create-vzj56"] Feb 19 15:00:24 crc kubenswrapper[4861]: I0219 15:00:24.072031 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-persistence-db-create-vzj56"] Feb 19 15:00:25 crc kubenswrapper[4861]: I0219 15:00:25.051463 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-1022-account-create-update-zl65c"] Feb 19 15:00:25 crc kubenswrapper[4861]: I0219 15:00:25.068271 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-1022-account-create-update-zl65c"] Feb 19 15:00:26 crc kubenswrapper[4861]: I0219 15:00:26.000659 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8" path="/var/lib/kubelet/pods/68deba7a-8cf5-418a-a2f3-0c51a2b6a0c8/volumes" Feb 19 15:00:26 crc kubenswrapper[4861]: I0219 15:00:26.002390 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b075c9b9-2e89-4f64-aa2b-8abe896117e3" path="/var/lib/kubelet/pods/b075c9b9-2e89-4f64-aa2b-8abe896117e3/volumes" Feb 19 15:01:00 crc kubenswrapper[4861]: I0219 15:01:00.154669 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29525221-cbp6w"] Feb 19 15:01:00 crc kubenswrapper[4861]: E0219 15:01:00.155683 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58d47748-45c0-444b-b34b-5e8cf3aee659" containerName="collect-profiles" Feb 19 15:01:00 crc kubenswrapper[4861]: I0219 15:01:00.155699 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="58d47748-45c0-444b-b34b-5e8cf3aee659" containerName="collect-profiles" Feb 19 15:01:00 crc kubenswrapper[4861]: I0219 15:01:00.155958 4861 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="58d47748-45c0-444b-b34b-5e8cf3aee659" containerName="collect-profiles" Feb 19 15:01:00 crc kubenswrapper[4861]: I0219 15:01:00.156800 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29525221-cbp6w" Feb 19 15:01:00 crc kubenswrapper[4861]: I0219 15:01:00.292291 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3c62bca6-3d72-418c-97ef-6ac12c9bbd52-fernet-keys\") pod \"keystone-cron-29525221-cbp6w\" (UID: \"3c62bca6-3d72-418c-97ef-6ac12c9bbd52\") " pod="openstack/keystone-cron-29525221-cbp6w" Feb 19 15:01:00 crc kubenswrapper[4861]: I0219 15:01:00.292507 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ddzg\" (UniqueName: \"kubernetes.io/projected/3c62bca6-3d72-418c-97ef-6ac12c9bbd52-kube-api-access-5ddzg\") pod \"keystone-cron-29525221-cbp6w\" (UID: \"3c62bca6-3d72-418c-97ef-6ac12c9bbd52\") " pod="openstack/keystone-cron-29525221-cbp6w" Feb 19 15:01:00 crc kubenswrapper[4861]: I0219 15:01:00.292538 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c62bca6-3d72-418c-97ef-6ac12c9bbd52-config-data\") pod \"keystone-cron-29525221-cbp6w\" (UID: \"3c62bca6-3d72-418c-97ef-6ac12c9bbd52\") " pod="openstack/keystone-cron-29525221-cbp6w" Feb 19 15:01:00 crc kubenswrapper[4861]: I0219 15:01:00.292573 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c62bca6-3d72-418c-97ef-6ac12c9bbd52-combined-ca-bundle\") pod \"keystone-cron-29525221-cbp6w\" (UID: \"3c62bca6-3d72-418c-97ef-6ac12c9bbd52\") " pod="openstack/keystone-cron-29525221-cbp6w" Feb 19 15:01:00 crc kubenswrapper[4861]: I0219 15:01:00.394553 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3c62bca6-3d72-418c-97ef-6ac12c9bbd52-fernet-keys\") pod \"keystone-cron-29525221-cbp6w\" (UID: \"3c62bca6-3d72-418c-97ef-6ac12c9bbd52\") " pod="openstack/keystone-cron-29525221-cbp6w" Feb 19 15:01:00 crc kubenswrapper[4861]: I0219 15:01:00.394783 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ddzg\" (UniqueName: \"kubernetes.io/projected/3c62bca6-3d72-418c-97ef-6ac12c9bbd52-kube-api-access-5ddzg\") pod \"keystone-cron-29525221-cbp6w\" (UID: \"3c62bca6-3d72-418c-97ef-6ac12c9bbd52\") " pod="openstack/keystone-cron-29525221-cbp6w" Feb 19 15:01:00 crc kubenswrapper[4861]: I0219 15:01:00.394808 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c62bca6-3d72-418c-97ef-6ac12c9bbd52-config-data\") pod \"keystone-cron-29525221-cbp6w\" (UID: \"3c62bca6-3d72-418c-97ef-6ac12c9bbd52\") " pod="openstack/keystone-cron-29525221-cbp6w" Feb 19 15:01:00 crc kubenswrapper[4861]: I0219 15:01:00.394851 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c62bca6-3d72-418c-97ef-6ac12c9bbd52-combined-ca-bundle\") pod \"keystone-cron-29525221-cbp6w\" (UID: \"3c62bca6-3d72-418c-97ef-6ac12c9bbd52\") " pod="openstack/keystone-cron-29525221-cbp6w" Feb 19 15:01:00 crc kubenswrapper[4861]: I0219 15:01:00.406763 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3c62bca6-3d72-418c-97ef-6ac12c9bbd52-fernet-keys\") pod \"keystone-cron-29525221-cbp6w\" (UID: \"3c62bca6-3d72-418c-97ef-6ac12c9bbd52\") " pod="openstack/keystone-cron-29525221-cbp6w" Feb 19 15:01:00 crc kubenswrapper[4861]: I0219 15:01:00.411652 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3c62bca6-3d72-418c-97ef-6ac12c9bbd52-config-data\") pod \"keystone-cron-29525221-cbp6w\" (UID: \"3c62bca6-3d72-418c-97ef-6ac12c9bbd52\") " pod="openstack/keystone-cron-29525221-cbp6w" Feb 19 15:01:00 crc kubenswrapper[4861]: I0219 15:01:00.415568 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c62bca6-3d72-418c-97ef-6ac12c9bbd52-combined-ca-bundle\") pod \"keystone-cron-29525221-cbp6w\" (UID: \"3c62bca6-3d72-418c-97ef-6ac12c9bbd52\") " pod="openstack/keystone-cron-29525221-cbp6w" Feb 19 15:01:00 crc kubenswrapper[4861]: I0219 15:01:00.432121 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ddzg\" (UniqueName: \"kubernetes.io/projected/3c62bca6-3d72-418c-97ef-6ac12c9bbd52-kube-api-access-5ddzg\") pod \"keystone-cron-29525221-cbp6w\" (UID: \"3c62bca6-3d72-418c-97ef-6ac12c9bbd52\") " pod="openstack/keystone-cron-29525221-cbp6w" Feb 19 15:01:01 crc kubenswrapper[4861]: I0219 15:01:01.024091 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29525221-cbp6w" Feb 19 15:01:01 crc kubenswrapper[4861]: I0219 15:01:01.212922 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29525221-cbp6w"] Feb 19 15:01:01 crc kubenswrapper[4861]: I0219 15:01:01.741415 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29525221-cbp6w"] Feb 19 15:01:02 crc kubenswrapper[4861]: I0219 15:01:02.219465 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525221-cbp6w" event={"ID":"3c62bca6-3d72-418c-97ef-6ac12c9bbd52","Type":"ContainerStarted","Data":"292b1ecc5d82d89425f6f453db5615743a824ba50587289d7220b6cfb2a671cb"} Feb 19 15:01:02 crc kubenswrapper[4861]: I0219 15:01:02.219765 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525221-cbp6w" event={"ID":"3c62bca6-3d72-418c-97ef-6ac12c9bbd52","Type":"ContainerStarted","Data":"8cd38b26454e34ae527b3bdd076117f9a04888833a804e6443c449089486016e"} Feb 19 15:01:02 crc kubenswrapper[4861]: I0219 15:01:02.250642 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29525221-cbp6w" podStartSLOduration=2.250614215 podStartE2EDuration="2.250614215s" podCreationTimestamp="2026-02-19 15:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:01:02.24521879 +0000 UTC m=+6676.906322058" watchObservedRunningTime="2026-02-19 15:01:02.250614215 +0000 UTC m=+6676.911717483" Feb 19 15:01:03 crc kubenswrapper[4861]: I0219 15:01:03.834073 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:01:03 crc kubenswrapper[4861]: I0219 
15:01:03.834520 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:01:06 crc kubenswrapper[4861]: I0219 15:01:06.275042 4861 generic.go:334] "Generic (PLEG): container finished" podID="3c62bca6-3d72-418c-97ef-6ac12c9bbd52" containerID="292b1ecc5d82d89425f6f453db5615743a824ba50587289d7220b6cfb2a671cb" exitCode=0 Feb 19 15:01:06 crc kubenswrapper[4861]: I0219 15:01:06.275733 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525221-cbp6w" event={"ID":"3c62bca6-3d72-418c-97ef-6ac12c9bbd52","Type":"ContainerDied","Data":"292b1ecc5d82d89425f6f453db5615743a824ba50587289d7220b6cfb2a671cb"} Feb 19 15:01:07 crc kubenswrapper[4861]: I0219 15:01:07.760666 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29525221-cbp6w" Feb 19 15:01:07 crc kubenswrapper[4861]: I0219 15:01:07.791885 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ddzg\" (UniqueName: \"kubernetes.io/projected/3c62bca6-3d72-418c-97ef-6ac12c9bbd52-kube-api-access-5ddzg\") pod \"3c62bca6-3d72-418c-97ef-6ac12c9bbd52\" (UID: \"3c62bca6-3d72-418c-97ef-6ac12c9bbd52\") " Feb 19 15:01:07 crc kubenswrapper[4861]: I0219 15:01:07.791992 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3c62bca6-3d72-418c-97ef-6ac12c9bbd52-fernet-keys\") pod \"3c62bca6-3d72-418c-97ef-6ac12c9bbd52\" (UID: \"3c62bca6-3d72-418c-97ef-6ac12c9bbd52\") " Feb 19 15:01:07 crc kubenswrapper[4861]: I0219 15:01:07.792240 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c62bca6-3d72-418c-97ef-6ac12c9bbd52-combined-ca-bundle\") pod \"3c62bca6-3d72-418c-97ef-6ac12c9bbd52\" (UID: \"3c62bca6-3d72-418c-97ef-6ac12c9bbd52\") " Feb 19 15:01:07 crc kubenswrapper[4861]: I0219 15:01:07.792292 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c62bca6-3d72-418c-97ef-6ac12c9bbd52-config-data\") pod \"3c62bca6-3d72-418c-97ef-6ac12c9bbd52\" (UID: \"3c62bca6-3d72-418c-97ef-6ac12c9bbd52\") " Feb 19 15:01:07 crc kubenswrapper[4861]: I0219 15:01:07.803180 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c62bca6-3d72-418c-97ef-6ac12c9bbd52-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3c62bca6-3d72-418c-97ef-6ac12c9bbd52" (UID: "3c62bca6-3d72-418c-97ef-6ac12c9bbd52"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:01:07 crc kubenswrapper[4861]: I0219 15:01:07.812755 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c62bca6-3d72-418c-97ef-6ac12c9bbd52-kube-api-access-5ddzg" (OuterVolumeSpecName: "kube-api-access-5ddzg") pod "3c62bca6-3d72-418c-97ef-6ac12c9bbd52" (UID: "3c62bca6-3d72-418c-97ef-6ac12c9bbd52"). InnerVolumeSpecName "kube-api-access-5ddzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:01:07 crc kubenswrapper[4861]: I0219 15:01:07.850765 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c62bca6-3d72-418c-97ef-6ac12c9bbd52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c62bca6-3d72-418c-97ef-6ac12c9bbd52" (UID: "3c62bca6-3d72-418c-97ef-6ac12c9bbd52"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:01:07 crc kubenswrapper[4861]: I0219 15:01:07.865366 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c62bca6-3d72-418c-97ef-6ac12c9bbd52-config-data" (OuterVolumeSpecName: "config-data") pod "3c62bca6-3d72-418c-97ef-6ac12c9bbd52" (UID: "3c62bca6-3d72-418c-97ef-6ac12c9bbd52"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:01:07 crc kubenswrapper[4861]: I0219 15:01:07.894102 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ddzg\" (UniqueName: \"kubernetes.io/projected/3c62bca6-3d72-418c-97ef-6ac12c9bbd52-kube-api-access-5ddzg\") on node \"crc\" DevicePath \"\"" Feb 19 15:01:07 crc kubenswrapper[4861]: I0219 15:01:07.894227 4861 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3c62bca6-3d72-418c-97ef-6ac12c9bbd52-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 15:01:07 crc kubenswrapper[4861]: I0219 15:01:07.894291 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c62bca6-3d72-418c-97ef-6ac12c9bbd52-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:01:07 crc kubenswrapper[4861]: I0219 15:01:07.894383 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c62bca6-3d72-418c-97ef-6ac12c9bbd52-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:01:08 crc kubenswrapper[4861]: I0219 15:01:08.308478 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525221-cbp6w" event={"ID":"3c62bca6-3d72-418c-97ef-6ac12c9bbd52","Type":"ContainerDied","Data":"8cd38b26454e34ae527b3bdd076117f9a04888833a804e6443c449089486016e"} Feb 19 15:01:08 crc kubenswrapper[4861]: I0219 15:01:08.308515 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cd38b26454e34ae527b3bdd076117f9a04888833a804e6443c449089486016e" Feb 19 15:01:08 crc kubenswrapper[4861]: I0219 15:01:08.308572 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29525221-cbp6w" Feb 19 15:01:10 crc kubenswrapper[4861]: I0219 15:01:10.342403 4861 scope.go:117] "RemoveContainer" containerID="2bf842866715f1df77c320db3952c2328981756853ade3f9e8aa6ab85a45a94e" Feb 19 15:01:10 crc kubenswrapper[4861]: I0219 15:01:10.377175 4861 scope.go:117] "RemoveContainer" containerID="ff153f73e072e8c1cebaf34e0b42405f648d2ccdea5c449af038740cb85257a8" Feb 19 15:01:10 crc kubenswrapper[4861]: I0219 15:01:10.423756 4861 scope.go:117] "RemoveContainer" containerID="49cc3c3522e503f24259c22aff88f284cd2259182e7eed8d4126c65f8b8902c9" Feb 19 15:01:10 crc kubenswrapper[4861]: I0219 15:01:10.481077 4861 scope.go:117] "RemoveContainer" containerID="69c350ee8b26a532049788e2339a54d7c9c6258fd5288c9124aeb29a93953e21" Feb 19 15:01:10 crc kubenswrapper[4861]: I0219 15:01:10.555820 4861 scope.go:117] "RemoveContainer" containerID="0b3e46ca5b776f284d4b5c95256396efbc0c44d4a0423a3949461288f7b376c3" Feb 19 15:01:10 crc kubenswrapper[4861]: I0219 15:01:10.589302 4861 scope.go:117] "RemoveContainer" containerID="0d0eac7a35cf47b714239245356d377006b57b914502db663b6a9d65705b8d0d" Feb 19 15:01:19 crc kubenswrapper[4861]: I0219 15:01:19.074163 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-sync-wzzsh"] Feb 19 15:01:19 crc kubenswrapper[4861]: I0219 15:01:19.088730 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-sync-wzzsh"] Feb 19 15:01:19 crc kubenswrapper[4861]: I0219 15:01:19.997384 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b705df1a-4cae-4c89-af54-bbe87a267540" path="/var/lib/kubelet/pods/b705df1a-4cae-4c89-af54-bbe87a267540/volumes" Feb 19 15:01:33 crc kubenswrapper[4861]: I0219 15:01:33.833984 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:01:33 crc kubenswrapper[4861]: I0219 15:01:33.834431 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:02:03 crc kubenswrapper[4861]: I0219 15:02:03.833909 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:02:03 crc kubenswrapper[4861]: I0219 15:02:03.834807 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:02:03 crc kubenswrapper[4861]: I0219 15:02:03.834881 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 15:02:03 crc kubenswrapper[4861]: I0219 15:02:03.836320 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"57fc9127e8131615508ae362e809bc7c406ed5c389a14d0426fa90f34a09b607"} pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 15:02:03 crc kubenswrapper[4861]: I0219 15:02:03.836473 4861 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" containerID="cri-o://57fc9127e8131615508ae362e809bc7c406ed5c389a14d0426fa90f34a09b607" gracePeriod=600 Feb 19 15:02:03 crc kubenswrapper[4861]: E0219 15:02:03.968729 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:02:03 crc kubenswrapper[4861]: I0219 15:02:03.994463 4861 generic.go:334] "Generic (PLEG): container finished" podID="478e6971-05ac-43f2-99a2-cd93644c6227" containerID="57fc9127e8131615508ae362e809bc7c406ed5c389a14d0426fa90f34a09b607" exitCode=0 Feb 19 15:02:03 crc kubenswrapper[4861]: I0219 15:02:03.998974 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerDied","Data":"57fc9127e8131615508ae362e809bc7c406ed5c389a14d0426fa90f34a09b607"} Feb 19 15:02:03 crc kubenswrapper[4861]: I0219 15:02:03.999079 4861 scope.go:117] "RemoveContainer" containerID="7d003f985eed6ad9b7bc08457548854ca76bfd5dbc339051aa16497b1baa9fde" Feb 19 15:02:04 crc kubenswrapper[4861]: I0219 15:02:04.001080 4861 scope.go:117] "RemoveContainer" containerID="57fc9127e8131615508ae362e809bc7c406ed5c389a14d0426fa90f34a09b607" Feb 19 15:02:04 crc kubenswrapper[4861]: E0219 15:02:04.001818 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:02:10 crc kubenswrapper[4861]: I0219 15:02:10.740992 4861 scope.go:117] "RemoveContainer" containerID="373327c0beeb948c919e1787a530692b4ac95115b80692d95934777df99a091d" Feb 19 15:02:10 crc kubenswrapper[4861]: I0219 15:02:10.819076 4861 scope.go:117] "RemoveContainer" containerID="7f87fd4948c057b98d09513621d3f68e622f5994c440dc994fbbc20431cc84f8" Feb 19 15:02:16 crc kubenswrapper[4861]: I0219 15:02:16.977970 4861 scope.go:117] "RemoveContainer" containerID="57fc9127e8131615508ae362e809bc7c406ed5c389a14d0426fa90f34a09b607" Feb 19 15:02:16 crc kubenswrapper[4861]: E0219 15:02:16.979034 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:02:27 crc kubenswrapper[4861]: I0219 15:02:27.977737 4861 scope.go:117] "RemoveContainer" containerID="57fc9127e8131615508ae362e809bc7c406ed5c389a14d0426fa90f34a09b607" Feb 19 15:02:27 crc kubenswrapper[4861]: E0219 15:02:27.978861 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:02:40 crc kubenswrapper[4861]: I0219 15:02:40.977708 4861 
scope.go:117] "RemoveContainer" containerID="57fc9127e8131615508ae362e809bc7c406ed5c389a14d0426fa90f34a09b607" Feb 19 15:02:40 crc kubenswrapper[4861]: E0219 15:02:40.978696 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:02:42 crc kubenswrapper[4861]: I0219 15:02:42.748133 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2r4bv"] Feb 19 15:02:42 crc kubenswrapper[4861]: E0219 15:02:42.750002 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c62bca6-3d72-418c-97ef-6ac12c9bbd52" containerName="keystone-cron" Feb 19 15:02:42 crc kubenswrapper[4861]: I0219 15:02:42.750179 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c62bca6-3d72-418c-97ef-6ac12c9bbd52" containerName="keystone-cron" Feb 19 15:02:42 crc kubenswrapper[4861]: I0219 15:02:42.751406 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c62bca6-3d72-418c-97ef-6ac12c9bbd52" containerName="keystone-cron" Feb 19 15:02:42 crc kubenswrapper[4861]: I0219 15:02:42.754267 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2r4bv" Feb 19 15:02:42 crc kubenswrapper[4861]: I0219 15:02:42.762538 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2r4bv"] Feb 19 15:02:42 crc kubenswrapper[4861]: I0219 15:02:42.796129 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8khb\" (UniqueName: \"kubernetes.io/projected/17eeb4a6-a7f3-449f-b500-a8f81cd12136-kube-api-access-k8khb\") pod \"redhat-marketplace-2r4bv\" (UID: \"17eeb4a6-a7f3-449f-b500-a8f81cd12136\") " pod="openshift-marketplace/redhat-marketplace-2r4bv" Feb 19 15:02:42 crc kubenswrapper[4861]: I0219 15:02:42.796222 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17eeb4a6-a7f3-449f-b500-a8f81cd12136-utilities\") pod \"redhat-marketplace-2r4bv\" (UID: \"17eeb4a6-a7f3-449f-b500-a8f81cd12136\") " pod="openshift-marketplace/redhat-marketplace-2r4bv" Feb 19 15:02:42 crc kubenswrapper[4861]: I0219 15:02:42.796454 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17eeb4a6-a7f3-449f-b500-a8f81cd12136-catalog-content\") pod \"redhat-marketplace-2r4bv\" (UID: \"17eeb4a6-a7f3-449f-b500-a8f81cd12136\") " pod="openshift-marketplace/redhat-marketplace-2r4bv" Feb 19 15:02:42 crc kubenswrapper[4861]: I0219 15:02:42.898890 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17eeb4a6-a7f3-449f-b500-a8f81cd12136-catalog-content\") pod \"redhat-marketplace-2r4bv\" (UID: \"17eeb4a6-a7f3-449f-b500-a8f81cd12136\") " pod="openshift-marketplace/redhat-marketplace-2r4bv" Feb 19 15:02:42 crc kubenswrapper[4861]: I0219 15:02:42.899653 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-k8khb\" (UniqueName: \"kubernetes.io/projected/17eeb4a6-a7f3-449f-b500-a8f81cd12136-kube-api-access-k8khb\") pod \"redhat-marketplace-2r4bv\" (UID: \"17eeb4a6-a7f3-449f-b500-a8f81cd12136\") " pod="openshift-marketplace/redhat-marketplace-2r4bv" Feb 19 15:02:42 crc kubenswrapper[4861]: I0219 15:02:42.899751 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17eeb4a6-a7f3-449f-b500-a8f81cd12136-catalog-content\") pod \"redhat-marketplace-2r4bv\" (UID: \"17eeb4a6-a7f3-449f-b500-a8f81cd12136\") " pod="openshift-marketplace/redhat-marketplace-2r4bv" Feb 19 15:02:42 crc kubenswrapper[4861]: I0219 15:02:42.899761 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17eeb4a6-a7f3-449f-b500-a8f81cd12136-utilities\") pod \"redhat-marketplace-2r4bv\" (UID: \"17eeb4a6-a7f3-449f-b500-a8f81cd12136\") " pod="openshift-marketplace/redhat-marketplace-2r4bv" Feb 19 15:02:42 crc kubenswrapper[4861]: I0219 15:02:42.900186 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17eeb4a6-a7f3-449f-b500-a8f81cd12136-utilities\") pod \"redhat-marketplace-2r4bv\" (UID: \"17eeb4a6-a7f3-449f-b500-a8f81cd12136\") " pod="openshift-marketplace/redhat-marketplace-2r4bv" Feb 19 15:02:42 crc kubenswrapper[4861]: I0219 15:02:42.933943 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8khb\" (UniqueName: \"kubernetes.io/projected/17eeb4a6-a7f3-449f-b500-a8f81cd12136-kube-api-access-k8khb\") pod \"redhat-marketplace-2r4bv\" (UID: \"17eeb4a6-a7f3-449f-b500-a8f81cd12136\") " pod="openshift-marketplace/redhat-marketplace-2r4bv" Feb 19 15:02:43 crc kubenswrapper[4861]: I0219 15:02:43.091901 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2r4bv" Feb 19 15:02:43 crc kubenswrapper[4861]: I0219 15:02:43.583303 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2r4bv"] Feb 19 15:02:44 crc kubenswrapper[4861]: I0219 15:02:44.479758 4861 generic.go:334] "Generic (PLEG): container finished" podID="17eeb4a6-a7f3-449f-b500-a8f81cd12136" containerID="242dc7efded9392c8af9c87daaed617e5490d897acac6d9268f73bdc6545a57b" exitCode=0 Feb 19 15:02:44 crc kubenswrapper[4861]: I0219 15:02:44.479861 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2r4bv" event={"ID":"17eeb4a6-a7f3-449f-b500-a8f81cd12136","Type":"ContainerDied","Data":"242dc7efded9392c8af9c87daaed617e5490d897acac6d9268f73bdc6545a57b"} Feb 19 15:02:44 crc kubenswrapper[4861]: I0219 15:02:44.480109 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2r4bv" event={"ID":"17eeb4a6-a7f3-449f-b500-a8f81cd12136","Type":"ContainerStarted","Data":"ec63f7539bf65db380f6022a36a4ad62f0268852946201c54ffd084b9cd3bec4"} Feb 19 15:02:44 crc kubenswrapper[4861]: I0219 15:02:44.483009 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 15:02:45 crc kubenswrapper[4861]: I0219 15:02:45.492047 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2r4bv" event={"ID":"17eeb4a6-a7f3-449f-b500-a8f81cd12136","Type":"ContainerStarted","Data":"b08e6672a573d771aedd006aefef3c553cb30dd324c9e6e4610a972f3691b7f1"} Feb 19 15:02:47 crc kubenswrapper[4861]: I0219 15:02:47.525922 4861 generic.go:334] "Generic (PLEG): container finished" podID="17eeb4a6-a7f3-449f-b500-a8f81cd12136" containerID="b08e6672a573d771aedd006aefef3c553cb30dd324c9e6e4610a972f3691b7f1" exitCode=0 Feb 19 15:02:47 crc kubenswrapper[4861]: I0219 15:02:47.526159 4861 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-2r4bv" event={"ID":"17eeb4a6-a7f3-449f-b500-a8f81cd12136","Type":"ContainerDied","Data":"b08e6672a573d771aedd006aefef3c553cb30dd324c9e6e4610a972f3691b7f1"} Feb 19 15:02:48 crc kubenswrapper[4861]: I0219 15:02:48.553571 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2r4bv" event={"ID":"17eeb4a6-a7f3-449f-b500-a8f81cd12136","Type":"ContainerStarted","Data":"97713b56221c424c77bb66f5dbcd7c4ca97b450012b739eb98ed1c4219fa3bf3"} Feb 19 15:02:48 crc kubenswrapper[4861]: I0219 15:02:48.580519 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2r4bv" podStartSLOduration=2.980048538 podStartE2EDuration="6.580496538s" podCreationTimestamp="2026-02-19 15:02:42 +0000 UTC" firstStartedPulling="2026-02-19 15:02:44.48248078 +0000 UTC m=+6779.143584018" lastFinishedPulling="2026-02-19 15:02:48.08292876 +0000 UTC m=+6782.744032018" observedRunningTime="2026-02-19 15:02:48.575822802 +0000 UTC m=+6783.236926110" watchObservedRunningTime="2026-02-19 15:02:48.580496538 +0000 UTC m=+6783.241599776" Feb 19 15:02:53 crc kubenswrapper[4861]: I0219 15:02:53.092364 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2r4bv" Feb 19 15:02:53 crc kubenswrapper[4861]: I0219 15:02:53.092869 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2r4bv" Feb 19 15:02:53 crc kubenswrapper[4861]: I0219 15:02:53.978271 4861 scope.go:117] "RemoveContainer" containerID="57fc9127e8131615508ae362e809bc7c406ed5c389a14d0426fa90f34a09b607" Feb 19 15:02:53 crc kubenswrapper[4861]: E0219 15:02:53.978600 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:02:54 crc kubenswrapper[4861]: I0219 15:02:54.155773 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-2r4bv" podUID="17eeb4a6-a7f3-449f-b500-a8f81cd12136" containerName="registry-server" probeResult="failure" output=< Feb 19 15:02:54 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Feb 19 15:02:54 crc kubenswrapper[4861]: > Feb 19 15:03:03 crc kubenswrapper[4861]: I0219 15:03:03.185870 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2r4bv" Feb 19 15:03:03 crc kubenswrapper[4861]: I0219 15:03:03.265576 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2r4bv" Feb 19 15:03:03 crc kubenswrapper[4861]: I0219 15:03:03.451565 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2r4bv"] Feb 19 15:03:04 crc kubenswrapper[4861]: I0219 15:03:04.762061 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2r4bv" podUID="17eeb4a6-a7f3-449f-b500-a8f81cd12136" containerName="registry-server" containerID="cri-o://97713b56221c424c77bb66f5dbcd7c4ca97b450012b739eb98ed1c4219fa3bf3" gracePeriod=2 Feb 19 15:03:04 crc kubenswrapper[4861]: I0219 15:03:04.977443 4861 scope.go:117] "RemoveContainer" containerID="57fc9127e8131615508ae362e809bc7c406ed5c389a14d0426fa90f34a09b607" Feb 19 15:03:04 crc kubenswrapper[4861]: E0219 15:03:04.978170 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:03:05 crc kubenswrapper[4861]: I0219 15:03:05.265796 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2r4bv" Feb 19 15:03:05 crc kubenswrapper[4861]: I0219 15:03:05.410713 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17eeb4a6-a7f3-449f-b500-a8f81cd12136-catalog-content\") pod \"17eeb4a6-a7f3-449f-b500-a8f81cd12136\" (UID: \"17eeb4a6-a7f3-449f-b500-a8f81cd12136\") " Feb 19 15:03:05 crc kubenswrapper[4861]: I0219 15:03:05.410964 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17eeb4a6-a7f3-449f-b500-a8f81cd12136-utilities\") pod \"17eeb4a6-a7f3-449f-b500-a8f81cd12136\" (UID: \"17eeb4a6-a7f3-449f-b500-a8f81cd12136\") " Feb 19 15:03:05 crc kubenswrapper[4861]: I0219 15:03:05.411055 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8khb\" (UniqueName: \"kubernetes.io/projected/17eeb4a6-a7f3-449f-b500-a8f81cd12136-kube-api-access-k8khb\") pod \"17eeb4a6-a7f3-449f-b500-a8f81cd12136\" (UID: \"17eeb4a6-a7f3-449f-b500-a8f81cd12136\") " Feb 19 15:03:05 crc kubenswrapper[4861]: I0219 15:03:05.411897 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17eeb4a6-a7f3-449f-b500-a8f81cd12136-utilities" (OuterVolumeSpecName: "utilities") pod "17eeb4a6-a7f3-449f-b500-a8f81cd12136" (UID: "17eeb4a6-a7f3-449f-b500-a8f81cd12136"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:03:05 crc kubenswrapper[4861]: I0219 15:03:05.416207 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17eeb4a6-a7f3-449f-b500-a8f81cd12136-kube-api-access-k8khb" (OuterVolumeSpecName: "kube-api-access-k8khb") pod "17eeb4a6-a7f3-449f-b500-a8f81cd12136" (UID: "17eeb4a6-a7f3-449f-b500-a8f81cd12136"). InnerVolumeSpecName "kube-api-access-k8khb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:03:05 crc kubenswrapper[4861]: I0219 15:03:05.436872 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17eeb4a6-a7f3-449f-b500-a8f81cd12136-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "17eeb4a6-a7f3-449f-b500-a8f81cd12136" (UID: "17eeb4a6-a7f3-449f-b500-a8f81cd12136"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:03:05 crc kubenswrapper[4861]: I0219 15:03:05.514010 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17eeb4a6-a7f3-449f-b500-a8f81cd12136-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:03:05 crc kubenswrapper[4861]: I0219 15:03:05.514042 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17eeb4a6-a7f3-449f-b500-a8f81cd12136-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:03:05 crc kubenswrapper[4861]: I0219 15:03:05.514054 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8khb\" (UniqueName: \"kubernetes.io/projected/17eeb4a6-a7f3-449f-b500-a8f81cd12136-kube-api-access-k8khb\") on node \"crc\" DevicePath \"\"" Feb 19 15:03:05 crc kubenswrapper[4861]: I0219 15:03:05.779594 4861 generic.go:334] "Generic (PLEG): container finished" podID="17eeb4a6-a7f3-449f-b500-a8f81cd12136" 
containerID="97713b56221c424c77bb66f5dbcd7c4ca97b450012b739eb98ed1c4219fa3bf3" exitCode=0 Feb 19 15:03:05 crc kubenswrapper[4861]: I0219 15:03:05.779658 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2r4bv" Feb 19 15:03:05 crc kubenswrapper[4861]: I0219 15:03:05.779666 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2r4bv" event={"ID":"17eeb4a6-a7f3-449f-b500-a8f81cd12136","Type":"ContainerDied","Data":"97713b56221c424c77bb66f5dbcd7c4ca97b450012b739eb98ed1c4219fa3bf3"} Feb 19 15:03:05 crc kubenswrapper[4861]: I0219 15:03:05.779764 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2r4bv" event={"ID":"17eeb4a6-a7f3-449f-b500-a8f81cd12136","Type":"ContainerDied","Data":"ec63f7539bf65db380f6022a36a4ad62f0268852946201c54ffd084b9cd3bec4"} Feb 19 15:03:05 crc kubenswrapper[4861]: I0219 15:03:05.779803 4861 scope.go:117] "RemoveContainer" containerID="97713b56221c424c77bb66f5dbcd7c4ca97b450012b739eb98ed1c4219fa3bf3" Feb 19 15:03:05 crc kubenswrapper[4861]: I0219 15:03:05.827070 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2r4bv"] Feb 19 15:03:05 crc kubenswrapper[4861]: I0219 15:03:05.835840 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2r4bv"] Feb 19 15:03:05 crc kubenswrapper[4861]: I0219 15:03:05.836749 4861 scope.go:117] "RemoveContainer" containerID="b08e6672a573d771aedd006aefef3c553cb30dd324c9e6e4610a972f3691b7f1" Feb 19 15:03:05 crc kubenswrapper[4861]: I0219 15:03:05.862260 4861 scope.go:117] "RemoveContainer" containerID="242dc7efded9392c8af9c87daaed617e5490d897acac6d9268f73bdc6545a57b" Feb 19 15:03:05 crc kubenswrapper[4861]: I0219 15:03:05.931016 4861 scope.go:117] "RemoveContainer" containerID="97713b56221c424c77bb66f5dbcd7c4ca97b450012b739eb98ed1c4219fa3bf3" Feb 19 
15:03:05 crc kubenswrapper[4861]: E0219 15:03:05.931536 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97713b56221c424c77bb66f5dbcd7c4ca97b450012b739eb98ed1c4219fa3bf3\": container with ID starting with 97713b56221c424c77bb66f5dbcd7c4ca97b450012b739eb98ed1c4219fa3bf3 not found: ID does not exist" containerID="97713b56221c424c77bb66f5dbcd7c4ca97b450012b739eb98ed1c4219fa3bf3" Feb 19 15:03:05 crc kubenswrapper[4861]: I0219 15:03:05.931578 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97713b56221c424c77bb66f5dbcd7c4ca97b450012b739eb98ed1c4219fa3bf3"} err="failed to get container status \"97713b56221c424c77bb66f5dbcd7c4ca97b450012b739eb98ed1c4219fa3bf3\": rpc error: code = NotFound desc = could not find container \"97713b56221c424c77bb66f5dbcd7c4ca97b450012b739eb98ed1c4219fa3bf3\": container with ID starting with 97713b56221c424c77bb66f5dbcd7c4ca97b450012b739eb98ed1c4219fa3bf3 not found: ID does not exist" Feb 19 15:03:05 crc kubenswrapper[4861]: I0219 15:03:05.931608 4861 scope.go:117] "RemoveContainer" containerID="b08e6672a573d771aedd006aefef3c553cb30dd324c9e6e4610a972f3691b7f1" Feb 19 15:03:05 crc kubenswrapper[4861]: E0219 15:03:05.931925 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b08e6672a573d771aedd006aefef3c553cb30dd324c9e6e4610a972f3691b7f1\": container with ID starting with b08e6672a573d771aedd006aefef3c553cb30dd324c9e6e4610a972f3691b7f1 not found: ID does not exist" containerID="b08e6672a573d771aedd006aefef3c553cb30dd324c9e6e4610a972f3691b7f1" Feb 19 15:03:05 crc kubenswrapper[4861]: I0219 15:03:05.931962 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b08e6672a573d771aedd006aefef3c553cb30dd324c9e6e4610a972f3691b7f1"} err="failed to get container status 
\"b08e6672a573d771aedd006aefef3c553cb30dd324c9e6e4610a972f3691b7f1\": rpc error: code = NotFound desc = could not find container \"b08e6672a573d771aedd006aefef3c553cb30dd324c9e6e4610a972f3691b7f1\": container with ID starting with b08e6672a573d771aedd006aefef3c553cb30dd324c9e6e4610a972f3691b7f1 not found: ID does not exist" Feb 19 15:03:05 crc kubenswrapper[4861]: I0219 15:03:05.931986 4861 scope.go:117] "RemoveContainer" containerID="242dc7efded9392c8af9c87daaed617e5490d897acac6d9268f73bdc6545a57b" Feb 19 15:03:05 crc kubenswrapper[4861]: E0219 15:03:05.932231 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"242dc7efded9392c8af9c87daaed617e5490d897acac6d9268f73bdc6545a57b\": container with ID starting with 242dc7efded9392c8af9c87daaed617e5490d897acac6d9268f73bdc6545a57b not found: ID does not exist" containerID="242dc7efded9392c8af9c87daaed617e5490d897acac6d9268f73bdc6545a57b" Feb 19 15:03:05 crc kubenswrapper[4861]: I0219 15:03:05.932251 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"242dc7efded9392c8af9c87daaed617e5490d897acac6d9268f73bdc6545a57b"} err="failed to get container status \"242dc7efded9392c8af9c87daaed617e5490d897acac6d9268f73bdc6545a57b\": rpc error: code = NotFound desc = could not find container \"242dc7efded9392c8af9c87daaed617e5490d897acac6d9268f73bdc6545a57b\": container with ID starting with 242dc7efded9392c8af9c87daaed617e5490d897acac6d9268f73bdc6545a57b not found: ID does not exist" Feb 19 15:03:05 crc kubenswrapper[4861]: I0219 15:03:05.989762 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17eeb4a6-a7f3-449f-b500-a8f81cd12136" path="/var/lib/kubelet/pods/17eeb4a6-a7f3-449f-b500-a8f81cd12136/volumes" Feb 19 15:03:10 crc kubenswrapper[4861]: I0219 15:03:10.918890 4861 scope.go:117] "RemoveContainer" containerID="a861383e343b8742fac416cba5edd3d5233ddb80244d8fbdb069184f9441cce4" Feb 19 
15:03:10 crc kubenswrapper[4861]: I0219 15:03:10.970955 4861 scope.go:117] "RemoveContainer" containerID="9bd298e78bc03c229eb83e81ab246a4146a6a0e2cf1802aa1d7b313e5b265818" Feb 19 15:03:18 crc kubenswrapper[4861]: I0219 15:03:18.977458 4861 scope.go:117] "RemoveContainer" containerID="57fc9127e8131615508ae362e809bc7c406ed5c389a14d0426fa90f34a09b607" Feb 19 15:03:18 crc kubenswrapper[4861]: E0219 15:03:18.978506 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:03:29 crc kubenswrapper[4861]: I0219 15:03:29.977839 4861 scope.go:117] "RemoveContainer" containerID="57fc9127e8131615508ae362e809bc7c406ed5c389a14d0426fa90f34a09b607" Feb 19 15:03:29 crc kubenswrapper[4861]: E0219 15:03:29.978861 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:03:41 crc kubenswrapper[4861]: I0219 15:03:41.977661 4861 scope.go:117] "RemoveContainer" containerID="57fc9127e8131615508ae362e809bc7c406ed5c389a14d0426fa90f34a09b607" Feb 19 15:03:41 crc kubenswrapper[4861]: E0219 15:03:41.978873 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:03:54 crc kubenswrapper[4861]: I0219 15:03:54.977058 4861 scope.go:117] "RemoveContainer" containerID="57fc9127e8131615508ae362e809bc7c406ed5c389a14d0426fa90f34a09b607" Feb 19 15:03:54 crc kubenswrapper[4861]: E0219 15:03:54.978153 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:03:58 crc kubenswrapper[4861]: I0219 15:03:58.048481 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-9q5rx"] Feb 19 15:03:58 crc kubenswrapper[4861]: I0219 15:03:58.057670 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-3d3b-account-create-update-2bdrs"] Feb 19 15:03:58 crc kubenswrapper[4861]: I0219 15:03:58.066170 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-9q5rx"] Feb 19 15:03:58 crc kubenswrapper[4861]: I0219 15:03:58.075182 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-3d3b-account-create-update-2bdrs"] Feb 19 15:04:00 crc kubenswrapper[4861]: I0219 15:03:59.989955 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="617fd5bd-c1c7-438d-b89f-feeabe1449c0" path="/var/lib/kubelet/pods/617fd5bd-c1c7-438d-b89f-feeabe1449c0/volumes" Feb 19 15:04:00 crc kubenswrapper[4861]: I0219 15:03:59.990592 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e84ea320-1dee-4473-a4f7-e4c879e76f0d" 
path="/var/lib/kubelet/pods/e84ea320-1dee-4473-a4f7-e4c879e76f0d/volumes" Feb 19 15:04:08 crc kubenswrapper[4861]: I0219 15:04:08.978150 4861 scope.go:117] "RemoveContainer" containerID="57fc9127e8131615508ae362e809bc7c406ed5c389a14d0426fa90f34a09b607" Feb 19 15:04:08 crc kubenswrapper[4861]: E0219 15:04:08.979285 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:04:11 crc kubenswrapper[4861]: I0219 15:04:11.048555 4861 scope.go:117] "RemoveContainer" containerID="d7c4a2d52b0f8c459e7e14e6c574152c928e10e817ad4297e8a4d413e949a03b" Feb 19 15:04:11 crc kubenswrapper[4861]: I0219 15:04:11.081973 4861 scope.go:117] "RemoveContainer" containerID="2cb8211f5016fd26eb2973f02b744cfcf5c6374b3c671d752b6abdb71de69d84" Feb 19 15:04:11 crc kubenswrapper[4861]: I0219 15:04:11.107858 4861 scope.go:117] "RemoveContainer" containerID="7e6f8cf75a10036900ef8e86f3b00305301f6e33796b7765571f13a51755c47e" Feb 19 15:04:11 crc kubenswrapper[4861]: I0219 15:04:11.154642 4861 scope.go:117] "RemoveContainer" containerID="f315601e284bdf42aa51a4506ac0e7901431744a102d0e398f4cd063833bd323" Feb 19 15:04:11 crc kubenswrapper[4861]: I0219 15:04:11.182724 4861 scope.go:117] "RemoveContainer" containerID="624fe9f6d5c60206ba585267e7412f88cf73bf40bd6e666f4771a9acb6dcdf12" Feb 19 15:04:11 crc kubenswrapper[4861]: I0219 15:04:11.204266 4861 scope.go:117] "RemoveContainer" containerID="fbe8a71debf10c1954d02b3422fee354ee1ce29f9ca66436882d019ecf1f0846" Feb 19 15:04:13 crc kubenswrapper[4861]: I0219 15:04:13.035048 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-gp2tt"] Feb 19 15:04:13 crc 
kubenswrapper[4861]: I0219 15:04:13.042652 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-gp2tt"] Feb 19 15:04:13 crc kubenswrapper[4861]: I0219 15:04:13.993102 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d66448e4-088d-4aa6-98cc-ada0cda02ea6" path="/var/lib/kubelet/pods/d66448e4-088d-4aa6-98cc-ada0cda02ea6/volumes" Feb 19 15:04:20 crc kubenswrapper[4861]: I0219 15:04:20.977808 4861 scope.go:117] "RemoveContainer" containerID="57fc9127e8131615508ae362e809bc7c406ed5c389a14d0426fa90f34a09b607" Feb 19 15:04:20 crc kubenswrapper[4861]: E0219 15:04:20.978933 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:04:31 crc kubenswrapper[4861]: I0219 15:04:31.977974 4861 scope.go:117] "RemoveContainer" containerID="57fc9127e8131615508ae362e809bc7c406ed5c389a14d0426fa90f34a09b607" Feb 19 15:04:31 crc kubenswrapper[4861]: E0219 15:04:31.979925 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:04:37 crc kubenswrapper[4861]: I0219 15:04:37.808307 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pwnxp"] Feb 19 15:04:37 crc kubenswrapper[4861]: E0219 15:04:37.809535 4861 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="17eeb4a6-a7f3-449f-b500-a8f81cd12136" containerName="extract-utilities" Feb 19 15:04:37 crc kubenswrapper[4861]: I0219 15:04:37.809557 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="17eeb4a6-a7f3-449f-b500-a8f81cd12136" containerName="extract-utilities" Feb 19 15:04:37 crc kubenswrapper[4861]: E0219 15:04:37.809623 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17eeb4a6-a7f3-449f-b500-a8f81cd12136" containerName="extract-content" Feb 19 15:04:37 crc kubenswrapper[4861]: I0219 15:04:37.809638 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="17eeb4a6-a7f3-449f-b500-a8f81cd12136" containerName="extract-content" Feb 19 15:04:37 crc kubenswrapper[4861]: E0219 15:04:37.809678 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17eeb4a6-a7f3-449f-b500-a8f81cd12136" containerName="registry-server" Feb 19 15:04:37 crc kubenswrapper[4861]: I0219 15:04:37.809691 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="17eeb4a6-a7f3-449f-b500-a8f81cd12136" containerName="registry-server" Feb 19 15:04:37 crc kubenswrapper[4861]: I0219 15:04:37.810035 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="17eeb4a6-a7f3-449f-b500-a8f81cd12136" containerName="registry-server" Feb 19 15:04:37 crc kubenswrapper[4861]: I0219 15:04:37.812727 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pwnxp" Feb 19 15:04:37 crc kubenswrapper[4861]: I0219 15:04:37.822754 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pwnxp"] Feb 19 15:04:37 crc kubenswrapper[4861]: I0219 15:04:37.914671 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ca07f8d-d41e-4c49-92db-15c0b3fc111e-utilities\") pod \"certified-operators-pwnxp\" (UID: \"9ca07f8d-d41e-4c49-92db-15c0b3fc111e\") " pod="openshift-marketplace/certified-operators-pwnxp" Feb 19 15:04:37 crc kubenswrapper[4861]: I0219 15:04:37.915075 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvqnw\" (UniqueName: \"kubernetes.io/projected/9ca07f8d-d41e-4c49-92db-15c0b3fc111e-kube-api-access-tvqnw\") pod \"certified-operators-pwnxp\" (UID: \"9ca07f8d-d41e-4c49-92db-15c0b3fc111e\") " pod="openshift-marketplace/certified-operators-pwnxp" Feb 19 15:04:37 crc kubenswrapper[4861]: I0219 15:04:37.915173 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ca07f8d-d41e-4c49-92db-15c0b3fc111e-catalog-content\") pod \"certified-operators-pwnxp\" (UID: \"9ca07f8d-d41e-4c49-92db-15c0b3fc111e\") " pod="openshift-marketplace/certified-operators-pwnxp" Feb 19 15:04:38 crc kubenswrapper[4861]: I0219 15:04:38.016854 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ca07f8d-d41e-4c49-92db-15c0b3fc111e-utilities\") pod \"certified-operators-pwnxp\" (UID: \"9ca07f8d-d41e-4c49-92db-15c0b3fc111e\") " pod="openshift-marketplace/certified-operators-pwnxp" Feb 19 15:04:38 crc kubenswrapper[4861]: I0219 15:04:38.016941 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tvqnw\" (UniqueName: \"kubernetes.io/projected/9ca07f8d-d41e-4c49-92db-15c0b3fc111e-kube-api-access-tvqnw\") pod \"certified-operators-pwnxp\" (UID: \"9ca07f8d-d41e-4c49-92db-15c0b3fc111e\") " pod="openshift-marketplace/certified-operators-pwnxp" Feb 19 15:04:38 crc kubenswrapper[4861]: I0219 15:04:38.016999 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ca07f8d-d41e-4c49-92db-15c0b3fc111e-catalog-content\") pod \"certified-operators-pwnxp\" (UID: \"9ca07f8d-d41e-4c49-92db-15c0b3fc111e\") " pod="openshift-marketplace/certified-operators-pwnxp" Feb 19 15:04:38 crc kubenswrapper[4861]: I0219 15:04:38.017322 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ca07f8d-d41e-4c49-92db-15c0b3fc111e-utilities\") pod \"certified-operators-pwnxp\" (UID: \"9ca07f8d-d41e-4c49-92db-15c0b3fc111e\") " pod="openshift-marketplace/certified-operators-pwnxp" Feb 19 15:04:38 crc kubenswrapper[4861]: I0219 15:04:38.017442 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ca07f8d-d41e-4c49-92db-15c0b3fc111e-catalog-content\") pod \"certified-operators-pwnxp\" (UID: \"9ca07f8d-d41e-4c49-92db-15c0b3fc111e\") " pod="openshift-marketplace/certified-operators-pwnxp" Feb 19 15:04:38 crc kubenswrapper[4861]: I0219 15:04:38.037133 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvqnw\" (UniqueName: \"kubernetes.io/projected/9ca07f8d-d41e-4c49-92db-15c0b3fc111e-kube-api-access-tvqnw\") pod \"certified-operators-pwnxp\" (UID: \"9ca07f8d-d41e-4c49-92db-15c0b3fc111e\") " pod="openshift-marketplace/certified-operators-pwnxp" Feb 19 15:04:38 crc kubenswrapper[4861]: I0219 15:04:38.176576 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pwnxp" Feb 19 15:04:38 crc kubenswrapper[4861]: I0219 15:04:38.705059 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pwnxp"] Feb 19 15:04:38 crc kubenswrapper[4861]: I0219 15:04:38.932724 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pwnxp" event={"ID":"9ca07f8d-d41e-4c49-92db-15c0b3fc111e","Type":"ContainerStarted","Data":"76ccd32690bdaf096beb4a6da2f97aced997c55b080c7a8c9669b03d7ef03abd"} Feb 19 15:04:38 crc kubenswrapper[4861]: I0219 15:04:38.933070 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pwnxp" event={"ID":"9ca07f8d-d41e-4c49-92db-15c0b3fc111e","Type":"ContainerStarted","Data":"1b88ffb765b778c53962383fc73de04b58a0bb13c7e974dc39fa5e636bea840c"} Feb 19 15:04:39 crc kubenswrapper[4861]: I0219 15:04:39.947464 4861 generic.go:334] "Generic (PLEG): container finished" podID="9ca07f8d-d41e-4c49-92db-15c0b3fc111e" containerID="76ccd32690bdaf096beb4a6da2f97aced997c55b080c7a8c9669b03d7ef03abd" exitCode=0 Feb 19 15:04:39 crc kubenswrapper[4861]: I0219 15:04:39.947575 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pwnxp" event={"ID":"9ca07f8d-d41e-4c49-92db-15c0b3fc111e","Type":"ContainerDied","Data":"76ccd32690bdaf096beb4a6da2f97aced997c55b080c7a8c9669b03d7ef03abd"} Feb 19 15:04:40 crc kubenswrapper[4861]: I0219 15:04:40.962995 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pwnxp" event={"ID":"9ca07f8d-d41e-4c49-92db-15c0b3fc111e","Type":"ContainerStarted","Data":"24ee02c3664cdb38550756e73efad98ea1a470c887189916895dbb3468835993"} Feb 19 15:04:42 crc kubenswrapper[4861]: I0219 15:04:42.984271 4861 generic.go:334] "Generic (PLEG): container finished" podID="9ca07f8d-d41e-4c49-92db-15c0b3fc111e" 
containerID="24ee02c3664cdb38550756e73efad98ea1a470c887189916895dbb3468835993" exitCode=0 Feb 19 15:04:42 crc kubenswrapper[4861]: I0219 15:04:42.984509 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pwnxp" event={"ID":"9ca07f8d-d41e-4c49-92db-15c0b3fc111e","Type":"ContainerDied","Data":"24ee02c3664cdb38550756e73efad98ea1a470c887189916895dbb3468835993"} Feb 19 15:04:44 crc kubenswrapper[4861]: I0219 15:04:44.022888 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pwnxp" event={"ID":"9ca07f8d-d41e-4c49-92db-15c0b3fc111e","Type":"ContainerStarted","Data":"393b85edb05627e79434c8a9902b9d3625bf79f3fb23daad3c7076c351c18e63"} Feb 19 15:04:44 crc kubenswrapper[4861]: I0219 15:04:44.072515 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pwnxp" podStartSLOduration=3.51175383 podStartE2EDuration="7.072493393s" podCreationTimestamp="2026-02-19 15:04:37 +0000 UTC" firstStartedPulling="2026-02-19 15:04:39.949994838 +0000 UTC m=+6894.611098076" lastFinishedPulling="2026-02-19 15:04:43.510734411 +0000 UTC m=+6898.171837639" observedRunningTime="2026-02-19 15:04:44.050072621 +0000 UTC m=+6898.711175889" watchObservedRunningTime="2026-02-19 15:04:44.072493393 +0000 UTC m=+6898.733596641" Feb 19 15:04:44 crc kubenswrapper[4861]: I0219 15:04:44.979851 4861 scope.go:117] "RemoveContainer" containerID="57fc9127e8131615508ae362e809bc7c406ed5c389a14d0426fa90f34a09b607" Feb 19 15:04:44 crc kubenswrapper[4861]: E0219 15:04:44.980209 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" 
podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:04:48 crc kubenswrapper[4861]: I0219 15:04:48.177344 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pwnxp" Feb 19 15:04:48 crc kubenswrapper[4861]: I0219 15:04:48.178087 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pwnxp" Feb 19 15:04:49 crc kubenswrapper[4861]: I0219 15:04:49.275048 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-pwnxp" podUID="9ca07f8d-d41e-4c49-92db-15c0b3fc111e" containerName="registry-server" probeResult="failure" output=< Feb 19 15:04:49 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Feb 19 15:04:49 crc kubenswrapper[4861]: > Feb 19 15:04:55 crc kubenswrapper[4861]: I0219 15:04:55.991044 4861 scope.go:117] "RemoveContainer" containerID="57fc9127e8131615508ae362e809bc7c406ed5c389a14d0426fa90f34a09b607" Feb 19 15:04:55 crc kubenswrapper[4861]: E0219 15:04:55.992478 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:04:57 crc kubenswrapper[4861]: I0219 15:04:57.915690 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-29r42"] Feb 19 15:04:57 crc kubenswrapper[4861]: I0219 15:04:57.918615 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-29r42" Feb 19 15:04:57 crc kubenswrapper[4861]: I0219 15:04:57.949966 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-29r42"] Feb 19 15:04:58 crc kubenswrapper[4861]: I0219 15:04:58.018209 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqvlm\" (UniqueName: \"kubernetes.io/projected/179d66cb-adf3-41ad-bb73-b22754e7b0eb-kube-api-access-tqvlm\") pod \"community-operators-29r42\" (UID: \"179d66cb-adf3-41ad-bb73-b22754e7b0eb\") " pod="openshift-marketplace/community-operators-29r42" Feb 19 15:04:58 crc kubenswrapper[4861]: I0219 15:04:58.018435 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/179d66cb-adf3-41ad-bb73-b22754e7b0eb-utilities\") pod \"community-operators-29r42\" (UID: \"179d66cb-adf3-41ad-bb73-b22754e7b0eb\") " pod="openshift-marketplace/community-operators-29r42" Feb 19 15:04:58 crc kubenswrapper[4861]: I0219 15:04:58.018488 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/179d66cb-adf3-41ad-bb73-b22754e7b0eb-catalog-content\") pod \"community-operators-29r42\" (UID: \"179d66cb-adf3-41ad-bb73-b22754e7b0eb\") " pod="openshift-marketplace/community-operators-29r42" Feb 19 15:04:58 crc kubenswrapper[4861]: I0219 15:04:58.120743 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/179d66cb-adf3-41ad-bb73-b22754e7b0eb-utilities\") pod \"community-operators-29r42\" (UID: \"179d66cb-adf3-41ad-bb73-b22754e7b0eb\") " pod="openshift-marketplace/community-operators-29r42" Feb 19 15:04:58 crc kubenswrapper[4861]: I0219 15:04:58.120899 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/179d66cb-adf3-41ad-bb73-b22754e7b0eb-catalog-content\") pod \"community-operators-29r42\" (UID: \"179d66cb-adf3-41ad-bb73-b22754e7b0eb\") " pod="openshift-marketplace/community-operators-29r42" Feb 19 15:04:58 crc kubenswrapper[4861]: I0219 15:04:58.120981 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqvlm\" (UniqueName: \"kubernetes.io/projected/179d66cb-adf3-41ad-bb73-b22754e7b0eb-kube-api-access-tqvlm\") pod \"community-operators-29r42\" (UID: \"179d66cb-adf3-41ad-bb73-b22754e7b0eb\") " pod="openshift-marketplace/community-operators-29r42" Feb 19 15:04:58 crc kubenswrapper[4861]: I0219 15:04:58.121158 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/179d66cb-adf3-41ad-bb73-b22754e7b0eb-utilities\") pod \"community-operators-29r42\" (UID: \"179d66cb-adf3-41ad-bb73-b22754e7b0eb\") " pod="openshift-marketplace/community-operators-29r42" Feb 19 15:04:58 crc kubenswrapper[4861]: I0219 15:04:58.121611 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/179d66cb-adf3-41ad-bb73-b22754e7b0eb-catalog-content\") pod \"community-operators-29r42\" (UID: \"179d66cb-adf3-41ad-bb73-b22754e7b0eb\") " pod="openshift-marketplace/community-operators-29r42" Feb 19 15:04:58 crc kubenswrapper[4861]: I0219 15:04:58.141541 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqvlm\" (UniqueName: \"kubernetes.io/projected/179d66cb-adf3-41ad-bb73-b22754e7b0eb-kube-api-access-tqvlm\") pod \"community-operators-29r42\" (UID: \"179d66cb-adf3-41ad-bb73-b22754e7b0eb\") " pod="openshift-marketplace/community-operators-29r42" Feb 19 15:04:58 crc kubenswrapper[4861]: I0219 15:04:58.238241 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/certified-operators-pwnxp" Feb 19 15:04:58 crc kubenswrapper[4861]: I0219 15:04:58.293489 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-29r42" Feb 19 15:04:58 crc kubenswrapper[4861]: I0219 15:04:58.313014 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pwnxp" Feb 19 15:04:58 crc kubenswrapper[4861]: I0219 15:04:58.802786 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-29r42"] Feb 19 15:04:59 crc kubenswrapper[4861]: I0219 15:04:59.233928 4861 generic.go:334] "Generic (PLEG): container finished" podID="179d66cb-adf3-41ad-bb73-b22754e7b0eb" containerID="680dfda4b08cbbf460cf1428433bd7215c6fce6df52b72a0a6982e2172611017" exitCode=0 Feb 19 15:04:59 crc kubenswrapper[4861]: I0219 15:04:59.234563 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-29r42" event={"ID":"179d66cb-adf3-41ad-bb73-b22754e7b0eb","Type":"ContainerDied","Data":"680dfda4b08cbbf460cf1428433bd7215c6fce6df52b72a0a6982e2172611017"} Feb 19 15:04:59 crc kubenswrapper[4861]: I0219 15:04:59.234626 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-29r42" event={"ID":"179d66cb-adf3-41ad-bb73-b22754e7b0eb","Type":"ContainerStarted","Data":"b274dcb467fdffb09894e06cbfbed5adc9883c6523ffb6777b7aae4fbdd72134"} Feb 19 15:05:00 crc kubenswrapper[4861]: I0219 15:05:00.258697 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-29r42" event={"ID":"179d66cb-adf3-41ad-bb73-b22754e7b0eb","Type":"ContainerStarted","Data":"8c55f539fb5127b836cc8060ed30e9bb7d8adf954b0511d412ef3a1ecef1f31f"} Feb 19 15:05:00 crc kubenswrapper[4861]: I0219 15:05:00.511252 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-pwnxp"] Feb 19 15:05:00 crc kubenswrapper[4861]: I0219 15:05:00.511856 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pwnxp" podUID="9ca07f8d-d41e-4c49-92db-15c0b3fc111e" containerName="registry-server" containerID="cri-o://393b85edb05627e79434c8a9902b9d3625bf79f3fb23daad3c7076c351c18e63" gracePeriod=2 Feb 19 15:05:01 crc kubenswrapper[4861]: I0219 15:05:01.000068 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pwnxp" Feb 19 15:05:01 crc kubenswrapper[4861]: I0219 15:05:01.133385 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ca07f8d-d41e-4c49-92db-15c0b3fc111e-utilities\") pod \"9ca07f8d-d41e-4c49-92db-15c0b3fc111e\" (UID: \"9ca07f8d-d41e-4c49-92db-15c0b3fc111e\") " Feb 19 15:05:01 crc kubenswrapper[4861]: I0219 15:05:01.133452 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvqnw\" (UniqueName: \"kubernetes.io/projected/9ca07f8d-d41e-4c49-92db-15c0b3fc111e-kube-api-access-tvqnw\") pod \"9ca07f8d-d41e-4c49-92db-15c0b3fc111e\" (UID: \"9ca07f8d-d41e-4c49-92db-15c0b3fc111e\") " Feb 19 15:05:01 crc kubenswrapper[4861]: I0219 15:05:01.133525 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ca07f8d-d41e-4c49-92db-15c0b3fc111e-catalog-content\") pod \"9ca07f8d-d41e-4c49-92db-15c0b3fc111e\" (UID: \"9ca07f8d-d41e-4c49-92db-15c0b3fc111e\") " Feb 19 15:05:01 crc kubenswrapper[4861]: I0219 15:05:01.134078 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ca07f8d-d41e-4c49-92db-15c0b3fc111e-utilities" (OuterVolumeSpecName: "utilities") pod "9ca07f8d-d41e-4c49-92db-15c0b3fc111e" (UID: 
"9ca07f8d-d41e-4c49-92db-15c0b3fc111e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:05:01 crc kubenswrapper[4861]: I0219 15:05:01.142938 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ca07f8d-d41e-4c49-92db-15c0b3fc111e-kube-api-access-tvqnw" (OuterVolumeSpecName: "kube-api-access-tvqnw") pod "9ca07f8d-d41e-4c49-92db-15c0b3fc111e" (UID: "9ca07f8d-d41e-4c49-92db-15c0b3fc111e"). InnerVolumeSpecName "kube-api-access-tvqnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:05:01 crc kubenswrapper[4861]: I0219 15:05:01.179765 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ca07f8d-d41e-4c49-92db-15c0b3fc111e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ca07f8d-d41e-4c49-92db-15c0b3fc111e" (UID: "9ca07f8d-d41e-4c49-92db-15c0b3fc111e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:05:01 crc kubenswrapper[4861]: I0219 15:05:01.235951 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ca07f8d-d41e-4c49-92db-15c0b3fc111e-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:05:01 crc kubenswrapper[4861]: I0219 15:05:01.235988 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvqnw\" (UniqueName: \"kubernetes.io/projected/9ca07f8d-d41e-4c49-92db-15c0b3fc111e-kube-api-access-tvqnw\") on node \"crc\" DevicePath \"\"" Feb 19 15:05:01 crc kubenswrapper[4861]: I0219 15:05:01.236002 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ca07f8d-d41e-4c49-92db-15c0b3fc111e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:05:01 crc kubenswrapper[4861]: I0219 15:05:01.269701 4861 generic.go:334] "Generic (PLEG): container finished" 
podID="179d66cb-adf3-41ad-bb73-b22754e7b0eb" containerID="8c55f539fb5127b836cc8060ed30e9bb7d8adf954b0511d412ef3a1ecef1f31f" exitCode=0 Feb 19 15:05:01 crc kubenswrapper[4861]: I0219 15:05:01.269775 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-29r42" event={"ID":"179d66cb-adf3-41ad-bb73-b22754e7b0eb","Type":"ContainerDied","Data":"8c55f539fb5127b836cc8060ed30e9bb7d8adf954b0511d412ef3a1ecef1f31f"} Feb 19 15:05:01 crc kubenswrapper[4861]: I0219 15:05:01.273594 4861 generic.go:334] "Generic (PLEG): container finished" podID="9ca07f8d-d41e-4c49-92db-15c0b3fc111e" containerID="393b85edb05627e79434c8a9902b9d3625bf79f3fb23daad3c7076c351c18e63" exitCode=0 Feb 19 15:05:01 crc kubenswrapper[4861]: I0219 15:05:01.273641 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pwnxp" event={"ID":"9ca07f8d-d41e-4c49-92db-15c0b3fc111e","Type":"ContainerDied","Data":"393b85edb05627e79434c8a9902b9d3625bf79f3fb23daad3c7076c351c18e63"} Feb 19 15:05:01 crc kubenswrapper[4861]: I0219 15:05:01.273663 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pwnxp" event={"ID":"9ca07f8d-d41e-4c49-92db-15c0b3fc111e","Type":"ContainerDied","Data":"1b88ffb765b778c53962383fc73de04b58a0bb13c7e974dc39fa5e636bea840c"} Feb 19 15:05:01 crc kubenswrapper[4861]: I0219 15:05:01.273682 4861 scope.go:117] "RemoveContainer" containerID="393b85edb05627e79434c8a9902b9d3625bf79f3fb23daad3c7076c351c18e63" Feb 19 15:05:01 crc kubenswrapper[4861]: I0219 15:05:01.273681 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pwnxp" Feb 19 15:05:01 crc kubenswrapper[4861]: I0219 15:05:01.340552 4861 scope.go:117] "RemoveContainer" containerID="24ee02c3664cdb38550756e73efad98ea1a470c887189916895dbb3468835993" Feb 19 15:05:01 crc kubenswrapper[4861]: I0219 15:05:01.358237 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pwnxp"] Feb 19 15:05:01 crc kubenswrapper[4861]: I0219 15:05:01.375133 4861 scope.go:117] "RemoveContainer" containerID="76ccd32690bdaf096beb4a6da2f97aced997c55b080c7a8c9669b03d7ef03abd" Feb 19 15:05:01 crc kubenswrapper[4861]: I0219 15:05:01.378556 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pwnxp"] Feb 19 15:05:01 crc kubenswrapper[4861]: I0219 15:05:01.418733 4861 scope.go:117] "RemoveContainer" containerID="393b85edb05627e79434c8a9902b9d3625bf79f3fb23daad3c7076c351c18e63" Feb 19 15:05:01 crc kubenswrapper[4861]: E0219 15:05:01.419076 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"393b85edb05627e79434c8a9902b9d3625bf79f3fb23daad3c7076c351c18e63\": container with ID starting with 393b85edb05627e79434c8a9902b9d3625bf79f3fb23daad3c7076c351c18e63 not found: ID does not exist" containerID="393b85edb05627e79434c8a9902b9d3625bf79f3fb23daad3c7076c351c18e63" Feb 19 15:05:01 crc kubenswrapper[4861]: I0219 15:05:01.419116 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"393b85edb05627e79434c8a9902b9d3625bf79f3fb23daad3c7076c351c18e63"} err="failed to get container status \"393b85edb05627e79434c8a9902b9d3625bf79f3fb23daad3c7076c351c18e63\": rpc error: code = NotFound desc = could not find container \"393b85edb05627e79434c8a9902b9d3625bf79f3fb23daad3c7076c351c18e63\": container with ID starting with 393b85edb05627e79434c8a9902b9d3625bf79f3fb23daad3c7076c351c18e63 not 
found: ID does not exist" Feb 19 15:05:01 crc kubenswrapper[4861]: I0219 15:05:01.419145 4861 scope.go:117] "RemoveContainer" containerID="24ee02c3664cdb38550756e73efad98ea1a470c887189916895dbb3468835993" Feb 19 15:05:01 crc kubenswrapper[4861]: E0219 15:05:01.419631 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24ee02c3664cdb38550756e73efad98ea1a470c887189916895dbb3468835993\": container with ID starting with 24ee02c3664cdb38550756e73efad98ea1a470c887189916895dbb3468835993 not found: ID does not exist" containerID="24ee02c3664cdb38550756e73efad98ea1a470c887189916895dbb3468835993" Feb 19 15:05:01 crc kubenswrapper[4861]: I0219 15:05:01.419682 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24ee02c3664cdb38550756e73efad98ea1a470c887189916895dbb3468835993"} err="failed to get container status \"24ee02c3664cdb38550756e73efad98ea1a470c887189916895dbb3468835993\": rpc error: code = NotFound desc = could not find container \"24ee02c3664cdb38550756e73efad98ea1a470c887189916895dbb3468835993\": container with ID starting with 24ee02c3664cdb38550756e73efad98ea1a470c887189916895dbb3468835993 not found: ID does not exist" Feb 19 15:05:01 crc kubenswrapper[4861]: I0219 15:05:01.419716 4861 scope.go:117] "RemoveContainer" containerID="76ccd32690bdaf096beb4a6da2f97aced997c55b080c7a8c9669b03d7ef03abd" Feb 19 15:05:01 crc kubenswrapper[4861]: E0219 15:05:01.420043 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76ccd32690bdaf096beb4a6da2f97aced997c55b080c7a8c9669b03d7ef03abd\": container with ID starting with 76ccd32690bdaf096beb4a6da2f97aced997c55b080c7a8c9669b03d7ef03abd not found: ID does not exist" containerID="76ccd32690bdaf096beb4a6da2f97aced997c55b080c7a8c9669b03d7ef03abd" Feb 19 15:05:01 crc kubenswrapper[4861]: I0219 15:05:01.420076 4861 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76ccd32690bdaf096beb4a6da2f97aced997c55b080c7a8c9669b03d7ef03abd"} err="failed to get container status \"76ccd32690bdaf096beb4a6da2f97aced997c55b080c7a8c9669b03d7ef03abd\": rpc error: code = NotFound desc = could not find container \"76ccd32690bdaf096beb4a6da2f97aced997c55b080c7a8c9669b03d7ef03abd\": container with ID starting with 76ccd32690bdaf096beb4a6da2f97aced997c55b080c7a8c9669b03d7ef03abd not found: ID does not exist" Feb 19 15:05:01 crc kubenswrapper[4861]: I0219 15:05:01.995397 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ca07f8d-d41e-4c49-92db-15c0b3fc111e" path="/var/lib/kubelet/pods/9ca07f8d-d41e-4c49-92db-15c0b3fc111e/volumes" Feb 19 15:05:02 crc kubenswrapper[4861]: I0219 15:05:02.284719 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-29r42" event={"ID":"179d66cb-adf3-41ad-bb73-b22754e7b0eb","Type":"ContainerStarted","Data":"c9f110d8ee03466da4c33bc769fe93f6fda3153e76fde8c943eacb5ead3ec7e5"} Feb 19 15:05:02 crc kubenswrapper[4861]: I0219 15:05:02.316887 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-29r42" podStartSLOduration=2.854982338 podStartE2EDuration="5.31686221s" podCreationTimestamp="2026-02-19 15:04:57 +0000 UTC" firstStartedPulling="2026-02-19 15:04:59.236015049 +0000 UTC m=+6913.897118287" lastFinishedPulling="2026-02-19 15:05:01.697894921 +0000 UTC m=+6916.358998159" observedRunningTime="2026-02-19 15:05:02.305901035 +0000 UTC m=+6916.967004273" watchObservedRunningTime="2026-02-19 15:05:02.31686221 +0000 UTC m=+6916.977965448" Feb 19 15:05:08 crc kubenswrapper[4861]: I0219 15:05:08.294122 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-29r42" Feb 19 15:05:08 crc kubenswrapper[4861]: I0219 15:05:08.294639 4861 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-29r42" Feb 19 15:05:08 crc kubenswrapper[4861]: I0219 15:05:08.385643 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-29r42" Feb 19 15:05:08 crc kubenswrapper[4861]: I0219 15:05:08.464211 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-29r42" Feb 19 15:05:08 crc kubenswrapper[4861]: I0219 15:05:08.979658 4861 scope.go:117] "RemoveContainer" containerID="57fc9127e8131615508ae362e809bc7c406ed5c389a14d0426fa90f34a09b607" Feb 19 15:05:08 crc kubenswrapper[4861]: E0219 15:05:08.980527 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:05:09 crc kubenswrapper[4861]: I0219 15:05:09.029003 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-29r42"] Feb 19 15:05:10 crc kubenswrapper[4861]: I0219 15:05:10.369073 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-29r42" podUID="179d66cb-adf3-41ad-bb73-b22754e7b0eb" containerName="registry-server" containerID="cri-o://c9f110d8ee03466da4c33bc769fe93f6fda3153e76fde8c943eacb5ead3ec7e5" gracePeriod=2 Feb 19 15:05:11 crc kubenswrapper[4861]: I0219 15:05:11.354042 4861 scope.go:117] "RemoveContainer" containerID="1e5b4767143a260830797f4fc9029ccdd70c67939c5d6234fd1e5304a50a4612" Feb 19 15:05:11 crc kubenswrapper[4861]: I0219 15:05:11.380750 4861 generic.go:334] "Generic 
(PLEG): container finished" podID="179d66cb-adf3-41ad-bb73-b22754e7b0eb" containerID="c9f110d8ee03466da4c33bc769fe93f6fda3153e76fde8c943eacb5ead3ec7e5" exitCode=0 Feb 19 15:05:11 crc kubenswrapper[4861]: I0219 15:05:11.380800 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-29r42" event={"ID":"179d66cb-adf3-41ad-bb73-b22754e7b0eb","Type":"ContainerDied","Data":"c9f110d8ee03466da4c33bc769fe93f6fda3153e76fde8c943eacb5ead3ec7e5"} Feb 19 15:05:11 crc kubenswrapper[4861]: I0219 15:05:11.380831 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-29r42" event={"ID":"179d66cb-adf3-41ad-bb73-b22754e7b0eb","Type":"ContainerDied","Data":"b274dcb467fdffb09894e06cbfbed5adc9883c6523ffb6777b7aae4fbdd72134"} Feb 19 15:05:11 crc kubenswrapper[4861]: I0219 15:05:11.380844 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b274dcb467fdffb09894e06cbfbed5adc9883c6523ffb6777b7aae4fbdd72134" Feb 19 15:05:11 crc kubenswrapper[4861]: I0219 15:05:11.451405 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-29r42" Feb 19 15:05:11 crc kubenswrapper[4861]: I0219 15:05:11.599027 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/179d66cb-adf3-41ad-bb73-b22754e7b0eb-utilities\") pod \"179d66cb-adf3-41ad-bb73-b22754e7b0eb\" (UID: \"179d66cb-adf3-41ad-bb73-b22754e7b0eb\") " Feb 19 15:05:11 crc kubenswrapper[4861]: I0219 15:05:11.599161 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqvlm\" (UniqueName: \"kubernetes.io/projected/179d66cb-adf3-41ad-bb73-b22754e7b0eb-kube-api-access-tqvlm\") pod \"179d66cb-adf3-41ad-bb73-b22754e7b0eb\" (UID: \"179d66cb-adf3-41ad-bb73-b22754e7b0eb\") " Feb 19 15:05:11 crc kubenswrapper[4861]: I0219 15:05:11.599186 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/179d66cb-adf3-41ad-bb73-b22754e7b0eb-catalog-content\") pod \"179d66cb-adf3-41ad-bb73-b22754e7b0eb\" (UID: \"179d66cb-adf3-41ad-bb73-b22754e7b0eb\") " Feb 19 15:05:11 crc kubenswrapper[4861]: I0219 15:05:11.600075 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/179d66cb-adf3-41ad-bb73-b22754e7b0eb-utilities" (OuterVolumeSpecName: "utilities") pod "179d66cb-adf3-41ad-bb73-b22754e7b0eb" (UID: "179d66cb-adf3-41ad-bb73-b22754e7b0eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:05:11 crc kubenswrapper[4861]: I0219 15:05:11.605636 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/179d66cb-adf3-41ad-bb73-b22754e7b0eb-kube-api-access-tqvlm" (OuterVolumeSpecName: "kube-api-access-tqvlm") pod "179d66cb-adf3-41ad-bb73-b22754e7b0eb" (UID: "179d66cb-adf3-41ad-bb73-b22754e7b0eb"). InnerVolumeSpecName "kube-api-access-tqvlm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:05:11 crc kubenswrapper[4861]: I0219 15:05:11.674320 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/179d66cb-adf3-41ad-bb73-b22754e7b0eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "179d66cb-adf3-41ad-bb73-b22754e7b0eb" (UID: "179d66cb-adf3-41ad-bb73-b22754e7b0eb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:05:11 crc kubenswrapper[4861]: I0219 15:05:11.702508 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/179d66cb-adf3-41ad-bb73-b22754e7b0eb-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:05:11 crc kubenswrapper[4861]: I0219 15:05:11.702555 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqvlm\" (UniqueName: \"kubernetes.io/projected/179d66cb-adf3-41ad-bb73-b22754e7b0eb-kube-api-access-tqvlm\") on node \"crc\" DevicePath \"\"" Feb 19 15:05:11 crc kubenswrapper[4861]: I0219 15:05:11.702573 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/179d66cb-adf3-41ad-bb73-b22754e7b0eb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:05:12 crc kubenswrapper[4861]: I0219 15:05:12.390312 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-29r42" Feb 19 15:05:12 crc kubenswrapper[4861]: I0219 15:05:12.419953 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-29r42"] Feb 19 15:05:12 crc kubenswrapper[4861]: I0219 15:05:12.429488 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-29r42"] Feb 19 15:05:13 crc kubenswrapper[4861]: I0219 15:05:13.994667 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="179d66cb-adf3-41ad-bb73-b22754e7b0eb" path="/var/lib/kubelet/pods/179d66cb-adf3-41ad-bb73-b22754e7b0eb/volumes" Feb 19 15:05:20 crc kubenswrapper[4861]: I0219 15:05:20.978172 4861 scope.go:117] "RemoveContainer" containerID="57fc9127e8131615508ae362e809bc7c406ed5c389a14d0426fa90f34a09b607" Feb 19 15:05:20 crc kubenswrapper[4861]: E0219 15:05:20.979329 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:05:34 crc kubenswrapper[4861]: I0219 15:05:34.977264 4861 scope.go:117] "RemoveContainer" containerID="57fc9127e8131615508ae362e809bc7c406ed5c389a14d0426fa90f34a09b607" Feb 19 15:05:34 crc kubenswrapper[4861]: E0219 15:05:34.979202 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" 
podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:05:49 crc kubenswrapper[4861]: I0219 15:05:49.978642 4861 scope.go:117] "RemoveContainer" containerID="57fc9127e8131615508ae362e809bc7c406ed5c389a14d0426fa90f34a09b607" Feb 19 15:05:49 crc kubenswrapper[4861]: E0219 15:05:49.980087 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:06:00 crc kubenswrapper[4861]: I0219 15:06:00.977668 4861 scope.go:117] "RemoveContainer" containerID="57fc9127e8131615508ae362e809bc7c406ed5c389a14d0426fa90f34a09b607" Feb 19 15:06:00 crc kubenswrapper[4861]: E0219 15:06:00.978706 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:06:02 crc kubenswrapper[4861]: I0219 15:06:02.963203 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-knzcv"] Feb 19 15:06:02 crc kubenswrapper[4861]: E0219 15:06:02.964137 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="179d66cb-adf3-41ad-bb73-b22754e7b0eb" containerName="extract-utilities" Feb 19 15:06:02 crc kubenswrapper[4861]: I0219 15:06:02.964159 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="179d66cb-adf3-41ad-bb73-b22754e7b0eb" containerName="extract-utilities" Feb 19 15:06:02 crc kubenswrapper[4861]: 
E0219 15:06:02.964180 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ca07f8d-d41e-4c49-92db-15c0b3fc111e" containerName="extract-content" Feb 19 15:06:02 crc kubenswrapper[4861]: I0219 15:06:02.964191 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ca07f8d-d41e-4c49-92db-15c0b3fc111e" containerName="extract-content" Feb 19 15:06:02 crc kubenswrapper[4861]: E0219 15:06:02.964218 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ca07f8d-d41e-4c49-92db-15c0b3fc111e" containerName="registry-server" Feb 19 15:06:02 crc kubenswrapper[4861]: I0219 15:06:02.964229 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ca07f8d-d41e-4c49-92db-15c0b3fc111e" containerName="registry-server" Feb 19 15:06:02 crc kubenswrapper[4861]: E0219 15:06:02.964247 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ca07f8d-d41e-4c49-92db-15c0b3fc111e" containerName="extract-utilities" Feb 19 15:06:02 crc kubenswrapper[4861]: I0219 15:06:02.964257 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ca07f8d-d41e-4c49-92db-15c0b3fc111e" containerName="extract-utilities" Feb 19 15:06:02 crc kubenswrapper[4861]: E0219 15:06:02.964280 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="179d66cb-adf3-41ad-bb73-b22754e7b0eb" containerName="extract-content" Feb 19 15:06:02 crc kubenswrapper[4861]: I0219 15:06:02.964290 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="179d66cb-adf3-41ad-bb73-b22754e7b0eb" containerName="extract-content" Feb 19 15:06:02 crc kubenswrapper[4861]: E0219 15:06:02.964310 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="179d66cb-adf3-41ad-bb73-b22754e7b0eb" containerName="registry-server" Feb 19 15:06:02 crc kubenswrapper[4861]: I0219 15:06:02.964320 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="179d66cb-adf3-41ad-bb73-b22754e7b0eb" containerName="registry-server" Feb 19 15:06:02 crc kubenswrapper[4861]: I0219 
15:06:02.964648 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="179d66cb-adf3-41ad-bb73-b22754e7b0eb" containerName="registry-server" Feb 19 15:06:02 crc kubenswrapper[4861]: I0219 15:06:02.964699 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ca07f8d-d41e-4c49-92db-15c0b3fc111e" containerName="registry-server" Feb 19 15:06:02 crc kubenswrapper[4861]: I0219 15:06:02.966986 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-knzcv" Feb 19 15:06:03 crc kubenswrapper[4861]: I0219 15:06:03.038356 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-knzcv"] Feb 19 15:06:03 crc kubenswrapper[4861]: I0219 15:06:03.123559 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvv2r\" (UniqueName: \"kubernetes.io/projected/35f192ec-06ed-4074-9a2e-46c307d49672-kube-api-access-jvv2r\") pod \"redhat-operators-knzcv\" (UID: \"35f192ec-06ed-4074-9a2e-46c307d49672\") " pod="openshift-marketplace/redhat-operators-knzcv" Feb 19 15:06:03 crc kubenswrapper[4861]: I0219 15:06:03.123801 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35f192ec-06ed-4074-9a2e-46c307d49672-utilities\") pod \"redhat-operators-knzcv\" (UID: \"35f192ec-06ed-4074-9a2e-46c307d49672\") " pod="openshift-marketplace/redhat-operators-knzcv" Feb 19 15:06:03 crc kubenswrapper[4861]: I0219 15:06:03.124442 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35f192ec-06ed-4074-9a2e-46c307d49672-catalog-content\") pod \"redhat-operators-knzcv\" (UID: \"35f192ec-06ed-4074-9a2e-46c307d49672\") " pod="openshift-marketplace/redhat-operators-knzcv" Feb 19 15:06:03 crc kubenswrapper[4861]: I0219 
15:06:03.226447 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvv2r\" (UniqueName: \"kubernetes.io/projected/35f192ec-06ed-4074-9a2e-46c307d49672-kube-api-access-jvv2r\") pod \"redhat-operators-knzcv\" (UID: \"35f192ec-06ed-4074-9a2e-46c307d49672\") " pod="openshift-marketplace/redhat-operators-knzcv" Feb 19 15:06:03 crc kubenswrapper[4861]: I0219 15:06:03.226526 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35f192ec-06ed-4074-9a2e-46c307d49672-utilities\") pod \"redhat-operators-knzcv\" (UID: \"35f192ec-06ed-4074-9a2e-46c307d49672\") " pod="openshift-marketplace/redhat-operators-knzcv" Feb 19 15:06:03 crc kubenswrapper[4861]: I0219 15:06:03.226701 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35f192ec-06ed-4074-9a2e-46c307d49672-catalog-content\") pod \"redhat-operators-knzcv\" (UID: \"35f192ec-06ed-4074-9a2e-46c307d49672\") " pod="openshift-marketplace/redhat-operators-knzcv" Feb 19 15:06:03 crc kubenswrapper[4861]: I0219 15:06:03.227264 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35f192ec-06ed-4074-9a2e-46c307d49672-catalog-content\") pod \"redhat-operators-knzcv\" (UID: \"35f192ec-06ed-4074-9a2e-46c307d49672\") " pod="openshift-marketplace/redhat-operators-knzcv" Feb 19 15:06:03 crc kubenswrapper[4861]: I0219 15:06:03.227687 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35f192ec-06ed-4074-9a2e-46c307d49672-utilities\") pod \"redhat-operators-knzcv\" (UID: \"35f192ec-06ed-4074-9a2e-46c307d49672\") " pod="openshift-marketplace/redhat-operators-knzcv" Feb 19 15:06:03 crc kubenswrapper[4861]: I0219 15:06:03.254473 4861 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-jvv2r\" (UniqueName: \"kubernetes.io/projected/35f192ec-06ed-4074-9a2e-46c307d49672-kube-api-access-jvv2r\") pod \"redhat-operators-knzcv\" (UID: \"35f192ec-06ed-4074-9a2e-46c307d49672\") " pod="openshift-marketplace/redhat-operators-knzcv" Feb 19 15:06:03 crc kubenswrapper[4861]: I0219 15:06:03.324313 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-knzcv" Feb 19 15:06:03 crc kubenswrapper[4861]: I0219 15:06:03.831180 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-knzcv"] Feb 19 15:06:04 crc kubenswrapper[4861]: I0219 15:06:04.003641 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knzcv" event={"ID":"35f192ec-06ed-4074-9a2e-46c307d49672","Type":"ContainerStarted","Data":"968713d887dddbb226ed52f77182f7ce1cb472682e51d9bc0e15a974448e2bca"} Feb 19 15:06:05 crc kubenswrapper[4861]: I0219 15:06:05.020356 4861 generic.go:334] "Generic (PLEG): container finished" podID="35f192ec-06ed-4074-9a2e-46c307d49672" containerID="c82e63aa340572cd99288bde1c82538f29754e6db5623c4300e6a43b509d3905" exitCode=0 Feb 19 15:06:05 crc kubenswrapper[4861]: I0219 15:06:05.020470 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knzcv" event={"ID":"35f192ec-06ed-4074-9a2e-46c307d49672","Type":"ContainerDied","Data":"c82e63aa340572cd99288bde1c82538f29754e6db5623c4300e6a43b509d3905"} Feb 19 15:06:07 crc kubenswrapper[4861]: I0219 15:06:07.074087 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knzcv" event={"ID":"35f192ec-06ed-4074-9a2e-46c307d49672","Type":"ContainerStarted","Data":"3ddc9fd7a2b400d724b71506f87710964deafc1843f1e637e080200f84e57346"} Feb 19 15:06:11 crc kubenswrapper[4861]: I0219 15:06:11.121368 4861 generic.go:334] "Generic (PLEG): container finished" 
podID="35f192ec-06ed-4074-9a2e-46c307d49672" containerID="3ddc9fd7a2b400d724b71506f87710964deafc1843f1e637e080200f84e57346" exitCode=0 Feb 19 15:06:11 crc kubenswrapper[4861]: I0219 15:06:11.121496 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knzcv" event={"ID":"35f192ec-06ed-4074-9a2e-46c307d49672","Type":"ContainerDied","Data":"3ddc9fd7a2b400d724b71506f87710964deafc1843f1e637e080200f84e57346"} Feb 19 15:06:12 crc kubenswrapper[4861]: I0219 15:06:12.133619 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knzcv" event={"ID":"35f192ec-06ed-4074-9a2e-46c307d49672","Type":"ContainerStarted","Data":"d346233c7778ee35c9b4fa665f8df167909b6fc0fdb1de75257fa2d87d450ae5"} Feb 19 15:06:12 crc kubenswrapper[4861]: I0219 15:06:12.159786 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-knzcv" podStartSLOduration=3.416912163 podStartE2EDuration="10.159768318s" podCreationTimestamp="2026-02-19 15:06:02 +0000 UTC" firstStartedPulling="2026-02-19 15:06:05.024108221 +0000 UTC m=+6979.685211479" lastFinishedPulling="2026-02-19 15:06:11.766964406 +0000 UTC m=+6986.428067634" observedRunningTime="2026-02-19 15:06:12.156198603 +0000 UTC m=+6986.817301831" watchObservedRunningTime="2026-02-19 15:06:12.159768318 +0000 UTC m=+6986.820871546" Feb 19 15:06:13 crc kubenswrapper[4861]: I0219 15:06:13.325513 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-knzcv" Feb 19 15:06:13 crc kubenswrapper[4861]: I0219 15:06:13.325962 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-knzcv" Feb 19 15:06:13 crc kubenswrapper[4861]: I0219 15:06:13.977482 4861 scope.go:117] "RemoveContainer" containerID="57fc9127e8131615508ae362e809bc7c406ed5c389a14d0426fa90f34a09b607" Feb 19 15:06:13 crc kubenswrapper[4861]: 
E0219 15:06:13.977968 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:06:14 crc kubenswrapper[4861]: I0219 15:06:14.393414 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-knzcv" podUID="35f192ec-06ed-4074-9a2e-46c307d49672" containerName="registry-server" probeResult="failure" output=< Feb 19 15:06:14 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Feb 19 15:06:14 crc kubenswrapper[4861]: > Feb 19 15:06:23 crc kubenswrapper[4861]: I0219 15:06:23.425268 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-knzcv" Feb 19 15:06:23 crc kubenswrapper[4861]: I0219 15:06:23.502719 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-knzcv" Feb 19 15:06:23 crc kubenswrapper[4861]: I0219 15:06:23.677158 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-knzcv"] Feb 19 15:06:25 crc kubenswrapper[4861]: I0219 15:06:25.304818 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-knzcv" podUID="35f192ec-06ed-4074-9a2e-46c307d49672" containerName="registry-server" containerID="cri-o://d346233c7778ee35c9b4fa665f8df167909b6fc0fdb1de75257fa2d87d450ae5" gracePeriod=2 Feb 19 15:06:25 crc kubenswrapper[4861]: I0219 15:06:25.852616 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-knzcv" Feb 19 15:06:25 crc kubenswrapper[4861]: I0219 15:06:25.902227 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvv2r\" (UniqueName: \"kubernetes.io/projected/35f192ec-06ed-4074-9a2e-46c307d49672-kube-api-access-jvv2r\") pod \"35f192ec-06ed-4074-9a2e-46c307d49672\" (UID: \"35f192ec-06ed-4074-9a2e-46c307d49672\") " Feb 19 15:06:25 crc kubenswrapper[4861]: I0219 15:06:25.902506 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35f192ec-06ed-4074-9a2e-46c307d49672-catalog-content\") pod \"35f192ec-06ed-4074-9a2e-46c307d49672\" (UID: \"35f192ec-06ed-4074-9a2e-46c307d49672\") " Feb 19 15:06:25 crc kubenswrapper[4861]: I0219 15:06:25.902614 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35f192ec-06ed-4074-9a2e-46c307d49672-utilities\") pod \"35f192ec-06ed-4074-9a2e-46c307d49672\" (UID: \"35f192ec-06ed-4074-9a2e-46c307d49672\") " Feb 19 15:06:25 crc kubenswrapper[4861]: I0219 15:06:25.903701 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35f192ec-06ed-4074-9a2e-46c307d49672-utilities" (OuterVolumeSpecName: "utilities") pod "35f192ec-06ed-4074-9a2e-46c307d49672" (UID: "35f192ec-06ed-4074-9a2e-46c307d49672"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:06:25 crc kubenswrapper[4861]: I0219 15:06:25.912628 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35f192ec-06ed-4074-9a2e-46c307d49672-kube-api-access-jvv2r" (OuterVolumeSpecName: "kube-api-access-jvv2r") pod "35f192ec-06ed-4074-9a2e-46c307d49672" (UID: "35f192ec-06ed-4074-9a2e-46c307d49672"). InnerVolumeSpecName "kube-api-access-jvv2r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:06:26 crc kubenswrapper[4861]: I0219 15:06:26.004268 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35f192ec-06ed-4074-9a2e-46c307d49672-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:06:26 crc kubenswrapper[4861]: I0219 15:06:26.004297 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvv2r\" (UniqueName: \"kubernetes.io/projected/35f192ec-06ed-4074-9a2e-46c307d49672-kube-api-access-jvv2r\") on node \"crc\" DevicePath \"\"" Feb 19 15:06:26 crc kubenswrapper[4861]: I0219 15:06:26.061857 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35f192ec-06ed-4074-9a2e-46c307d49672-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35f192ec-06ed-4074-9a2e-46c307d49672" (UID: "35f192ec-06ed-4074-9a2e-46c307d49672"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:06:26 crc kubenswrapper[4861]: I0219 15:06:26.107481 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35f192ec-06ed-4074-9a2e-46c307d49672-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:06:26 crc kubenswrapper[4861]: I0219 15:06:26.317575 4861 generic.go:334] "Generic (PLEG): container finished" podID="35f192ec-06ed-4074-9a2e-46c307d49672" containerID="d346233c7778ee35c9b4fa665f8df167909b6fc0fdb1de75257fa2d87d450ae5" exitCode=0 Feb 19 15:06:26 crc kubenswrapper[4861]: I0219 15:06:26.317645 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knzcv" event={"ID":"35f192ec-06ed-4074-9a2e-46c307d49672","Type":"ContainerDied","Data":"d346233c7778ee35c9b4fa665f8df167909b6fc0fdb1de75257fa2d87d450ae5"} Feb 19 15:06:26 crc kubenswrapper[4861]: I0219 15:06:26.317679 4861 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-knzcv" Feb 19 15:06:26 crc kubenswrapper[4861]: I0219 15:06:26.317722 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knzcv" event={"ID":"35f192ec-06ed-4074-9a2e-46c307d49672","Type":"ContainerDied","Data":"968713d887dddbb226ed52f77182f7ce1cb472682e51d9bc0e15a974448e2bca"} Feb 19 15:06:26 crc kubenswrapper[4861]: I0219 15:06:26.317747 4861 scope.go:117] "RemoveContainer" containerID="d346233c7778ee35c9b4fa665f8df167909b6fc0fdb1de75257fa2d87d450ae5" Feb 19 15:06:26 crc kubenswrapper[4861]: I0219 15:06:26.343152 4861 scope.go:117] "RemoveContainer" containerID="3ddc9fd7a2b400d724b71506f87710964deafc1843f1e637e080200f84e57346" Feb 19 15:06:26 crc kubenswrapper[4861]: I0219 15:06:26.370112 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-knzcv"] Feb 19 15:06:26 crc kubenswrapper[4861]: I0219 15:06:26.381954 4861 scope.go:117] "RemoveContainer" containerID="c82e63aa340572cd99288bde1c82538f29754e6db5623c4300e6a43b509d3905" Feb 19 15:06:26 crc kubenswrapper[4861]: I0219 15:06:26.387851 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-knzcv"] Feb 19 15:06:26 crc kubenswrapper[4861]: I0219 15:06:26.438415 4861 scope.go:117] "RemoveContainer" containerID="d346233c7778ee35c9b4fa665f8df167909b6fc0fdb1de75257fa2d87d450ae5" Feb 19 15:06:26 crc kubenswrapper[4861]: E0219 15:06:26.439118 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d346233c7778ee35c9b4fa665f8df167909b6fc0fdb1de75257fa2d87d450ae5\": container with ID starting with d346233c7778ee35c9b4fa665f8df167909b6fc0fdb1de75257fa2d87d450ae5 not found: ID does not exist" containerID="d346233c7778ee35c9b4fa665f8df167909b6fc0fdb1de75257fa2d87d450ae5" Feb 19 15:06:26 crc kubenswrapper[4861]: I0219 15:06:26.439177 4861 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d346233c7778ee35c9b4fa665f8df167909b6fc0fdb1de75257fa2d87d450ae5"} err="failed to get container status \"d346233c7778ee35c9b4fa665f8df167909b6fc0fdb1de75257fa2d87d450ae5\": rpc error: code = NotFound desc = could not find container \"d346233c7778ee35c9b4fa665f8df167909b6fc0fdb1de75257fa2d87d450ae5\": container with ID starting with d346233c7778ee35c9b4fa665f8df167909b6fc0fdb1de75257fa2d87d450ae5 not found: ID does not exist" Feb 19 15:06:26 crc kubenswrapper[4861]: I0219 15:06:26.439216 4861 scope.go:117] "RemoveContainer" containerID="3ddc9fd7a2b400d724b71506f87710964deafc1843f1e637e080200f84e57346" Feb 19 15:06:26 crc kubenswrapper[4861]: E0219 15:06:26.439764 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ddc9fd7a2b400d724b71506f87710964deafc1843f1e637e080200f84e57346\": container with ID starting with 3ddc9fd7a2b400d724b71506f87710964deafc1843f1e637e080200f84e57346 not found: ID does not exist" containerID="3ddc9fd7a2b400d724b71506f87710964deafc1843f1e637e080200f84e57346" Feb 19 15:06:26 crc kubenswrapper[4861]: I0219 15:06:26.439792 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ddc9fd7a2b400d724b71506f87710964deafc1843f1e637e080200f84e57346"} err="failed to get container status \"3ddc9fd7a2b400d724b71506f87710964deafc1843f1e637e080200f84e57346\": rpc error: code = NotFound desc = could not find container \"3ddc9fd7a2b400d724b71506f87710964deafc1843f1e637e080200f84e57346\": container with ID starting with 3ddc9fd7a2b400d724b71506f87710964deafc1843f1e637e080200f84e57346 not found: ID does not exist" Feb 19 15:06:26 crc kubenswrapper[4861]: I0219 15:06:26.439808 4861 scope.go:117] "RemoveContainer" containerID="c82e63aa340572cd99288bde1c82538f29754e6db5623c4300e6a43b509d3905" Feb 19 15:06:26 crc kubenswrapper[4861]: E0219 
15:06:26.440292 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c82e63aa340572cd99288bde1c82538f29754e6db5623c4300e6a43b509d3905\": container with ID starting with c82e63aa340572cd99288bde1c82538f29754e6db5623c4300e6a43b509d3905 not found: ID does not exist" containerID="c82e63aa340572cd99288bde1c82538f29754e6db5623c4300e6a43b509d3905" Feb 19 15:06:26 crc kubenswrapper[4861]: I0219 15:06:26.440332 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c82e63aa340572cd99288bde1c82538f29754e6db5623c4300e6a43b509d3905"} err="failed to get container status \"c82e63aa340572cd99288bde1c82538f29754e6db5623c4300e6a43b509d3905\": rpc error: code = NotFound desc = could not find container \"c82e63aa340572cd99288bde1c82538f29754e6db5623c4300e6a43b509d3905\": container with ID starting with c82e63aa340572cd99288bde1c82538f29754e6db5623c4300e6a43b509d3905 not found: ID does not exist" Feb 19 15:06:26 crc kubenswrapper[4861]: I0219 15:06:26.977557 4861 scope.go:117] "RemoveContainer" containerID="57fc9127e8131615508ae362e809bc7c406ed5c389a14d0426fa90f34a09b607" Feb 19 15:06:26 crc kubenswrapper[4861]: E0219 15:06:26.978118 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:06:27 crc kubenswrapper[4861]: I0219 15:06:27.997864 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35f192ec-06ed-4074-9a2e-46c307d49672" path="/var/lib/kubelet/pods/35f192ec-06ed-4074-9a2e-46c307d49672/volumes" Feb 19 15:06:38 crc kubenswrapper[4861]: I0219 15:06:38.978076 
4861 scope.go:117] "RemoveContainer" containerID="57fc9127e8131615508ae362e809bc7c406ed5c389a14d0426fa90f34a09b607" Feb 19 15:06:38 crc kubenswrapper[4861]: E0219 15:06:38.979376 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:06:50 crc kubenswrapper[4861]: I0219 15:06:50.077474 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-a68c-account-create-update-vd6td"] Feb 19 15:06:50 crc kubenswrapper[4861]: I0219 15:06:50.088375 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-a68c-account-create-update-vd6td"] Feb 19 15:06:50 crc kubenswrapper[4861]: I0219 15:06:50.100732 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-cbxqx"] Feb 19 15:06:50 crc kubenswrapper[4861]: I0219 15:06:50.110482 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-cbxqx"] Feb 19 15:06:52 crc kubenswrapper[4861]: I0219 15:06:52.006058 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86eead69-a24a-41d4-bbf2-289a6e72989d" path="/var/lib/kubelet/pods/86eead69-a24a-41d4-bbf2-289a6e72989d/volumes" Feb 19 15:06:52 crc kubenswrapper[4861]: I0219 15:06:52.007229 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9e8c69d-e2be-44a2-a761-4074f11733e8" path="/var/lib/kubelet/pods/e9e8c69d-e2be-44a2-a761-4074f11733e8/volumes" Feb 19 15:06:53 crc kubenswrapper[4861]: I0219 15:06:53.977986 4861 scope.go:117] "RemoveContainer" containerID="57fc9127e8131615508ae362e809bc7c406ed5c389a14d0426fa90f34a09b607" Feb 19 15:06:53 crc kubenswrapper[4861]: E0219 
15:06:53.978926 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:07:01 crc kubenswrapper[4861]: I0219 15:07:01.044317 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-pp8gl"] Feb 19 15:07:01 crc kubenswrapper[4861]: I0219 15:07:01.055733 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-pp8gl"] Feb 19 15:07:01 crc kubenswrapper[4861]: I0219 15:07:01.995267 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f25c96c-cf2f-4173-b839-e33d5412655a" path="/var/lib/kubelet/pods/8f25c96c-cf2f-4173-b839-e33d5412655a/volumes" Feb 19 15:07:07 crc kubenswrapper[4861]: I0219 15:07:07.978350 4861 scope.go:117] "RemoveContainer" containerID="57fc9127e8131615508ae362e809bc7c406ed5c389a14d0426fa90f34a09b607" Feb 19 15:07:08 crc kubenswrapper[4861]: I0219 15:07:08.845065 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"65e1af2814c1ba8f7c75ea9cf265036304cf5dff40f9e020a1705606c3cb3998"} Feb 19 15:07:11 crc kubenswrapper[4861]: I0219 15:07:11.495440 4861 scope.go:117] "RemoveContainer" containerID="fb12d152bfb7fb3c55b381de183919436669e54a010938ad02aa4132063ec029" Feb 19 15:07:11 crc kubenswrapper[4861]: I0219 15:07:11.536462 4861 scope.go:117] "RemoveContainer" containerID="67ff0773d72808d83c4918cee149618c7b1d982c2df2d5bac909940cc15cae4d" Feb 19 15:07:11 crc kubenswrapper[4861]: I0219 15:07:11.611197 4861 scope.go:117] "RemoveContainer" 
containerID="4b187caedf1aa9ba364cc2d66744b4ca466774cd1efc57147800761f31bdb4b9" Feb 19 15:09:33 crc kubenswrapper[4861]: I0219 15:09:33.834002 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:09:33 crc kubenswrapper[4861]: I0219 15:09:33.834627 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:10:03 crc kubenswrapper[4861]: I0219 15:10:03.834600 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:10:03 crc kubenswrapper[4861]: I0219 15:10:03.835250 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:10:11 crc kubenswrapper[4861]: I0219 15:10:11.025474 4861 generic.go:334] "Generic (PLEG): container finished" podID="6ef1f749-73bc-4049-ba56-e022f58ca9d9" containerID="c37052c6cd56fd78a35c79a7b270d9caa18d6fb4159c74dd5b9e39dac705deda" exitCode=0 Feb 19 15:10:11 crc kubenswrapper[4861]: I0219 15:10:11.025574 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf" event={"ID":"6ef1f749-73bc-4049-ba56-e022f58ca9d9","Type":"ContainerDied","Data":"c37052c6cd56fd78a35c79a7b270d9caa18d6fb4159c74dd5b9e39dac705deda"} Feb 19 15:10:12 crc kubenswrapper[4861]: I0219 15:10:12.589386 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf" Feb 19 15:10:12 crc kubenswrapper[4861]: I0219 15:10:12.782650 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ef1f749-73bc-4049-ba56-e022f58ca9d9-inventory\") pod \"6ef1f749-73bc-4049-ba56-e022f58ca9d9\" (UID: \"6ef1f749-73bc-4049-ba56-e022f58ca9d9\") " Feb 19 15:10:12 crc kubenswrapper[4861]: I0219 15:10:12.782825 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvpjt\" (UniqueName: \"kubernetes.io/projected/6ef1f749-73bc-4049-ba56-e022f58ca9d9-kube-api-access-wvpjt\") pod \"6ef1f749-73bc-4049-ba56-e022f58ca9d9\" (UID: \"6ef1f749-73bc-4049-ba56-e022f58ca9d9\") " Feb 19 15:10:12 crc kubenswrapper[4861]: I0219 15:10:12.782951 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ef1f749-73bc-4049-ba56-e022f58ca9d9-tripleo-cleanup-combined-ca-bundle\") pod \"6ef1f749-73bc-4049-ba56-e022f58ca9d9\" (UID: \"6ef1f749-73bc-4049-ba56-e022f58ca9d9\") " Feb 19 15:10:12 crc kubenswrapper[4861]: I0219 15:10:12.783023 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6ef1f749-73bc-4049-ba56-e022f58ca9d9-ssh-key-openstack-cell1\") pod \"6ef1f749-73bc-4049-ba56-e022f58ca9d9\" (UID: \"6ef1f749-73bc-4049-ba56-e022f58ca9d9\") " Feb 19 15:10:12 crc kubenswrapper[4861]: I0219 15:10:12.790874 4861 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ef1f749-73bc-4049-ba56-e022f58ca9d9-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "6ef1f749-73bc-4049-ba56-e022f58ca9d9" (UID: "6ef1f749-73bc-4049-ba56-e022f58ca9d9"). InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:10:12 crc kubenswrapper[4861]: I0219 15:10:12.792142 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ef1f749-73bc-4049-ba56-e022f58ca9d9-kube-api-access-wvpjt" (OuterVolumeSpecName: "kube-api-access-wvpjt") pod "6ef1f749-73bc-4049-ba56-e022f58ca9d9" (UID: "6ef1f749-73bc-4049-ba56-e022f58ca9d9"). InnerVolumeSpecName "kube-api-access-wvpjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:10:12 crc kubenswrapper[4861]: I0219 15:10:12.822021 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ef1f749-73bc-4049-ba56-e022f58ca9d9-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "6ef1f749-73bc-4049-ba56-e022f58ca9d9" (UID: "6ef1f749-73bc-4049-ba56-e022f58ca9d9"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:10:12 crc kubenswrapper[4861]: I0219 15:10:12.837458 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ef1f749-73bc-4049-ba56-e022f58ca9d9-inventory" (OuterVolumeSpecName: "inventory") pod "6ef1f749-73bc-4049-ba56-e022f58ca9d9" (UID: "6ef1f749-73bc-4049-ba56-e022f58ca9d9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:10:12 crc kubenswrapper[4861]: I0219 15:10:12.884542 4861 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ef1f749-73bc-4049-ba56-e022f58ca9d9-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 15:10:12 crc kubenswrapper[4861]: I0219 15:10:12.884574 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvpjt\" (UniqueName: \"kubernetes.io/projected/6ef1f749-73bc-4049-ba56-e022f58ca9d9-kube-api-access-wvpjt\") on node \"crc\" DevicePath \"\"" Feb 19 15:10:12 crc kubenswrapper[4861]: I0219 15:10:12.884587 4861 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ef1f749-73bc-4049-ba56-e022f58ca9d9-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:10:12 crc kubenswrapper[4861]: I0219 15:10:12.884596 4861 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6ef1f749-73bc-4049-ba56-e022f58ca9d9-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 15:10:13 crc kubenswrapper[4861]: I0219 15:10:13.059985 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf" event={"ID":"6ef1f749-73bc-4049-ba56-e022f58ca9d9","Type":"ContainerDied","Data":"c7320322cfb1b0744cf48d77b54d43ecb18f531db357f931289d1273fd66ca6c"} Feb 19 15:10:13 crc kubenswrapper[4861]: I0219 15:10:13.060052 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7320322cfb1b0744cf48d77b54d43ecb18f531db357f931289d1273fd66ca6c" Feb 19 15:10:13 crc kubenswrapper[4861]: I0219 15:10:13.060410 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf" Feb 19 15:10:26 crc kubenswrapper[4861]: I0219 15:10:26.302233 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-x6tgq"] Feb 19 15:10:26 crc kubenswrapper[4861]: E0219 15:10:26.303576 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f192ec-06ed-4074-9a2e-46c307d49672" containerName="extract-utilities" Feb 19 15:10:26 crc kubenswrapper[4861]: I0219 15:10:26.303602 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f192ec-06ed-4074-9a2e-46c307d49672" containerName="extract-utilities" Feb 19 15:10:26 crc kubenswrapper[4861]: E0219 15:10:26.303662 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ef1f749-73bc-4049-ba56-e022f58ca9d9" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Feb 19 15:10:26 crc kubenswrapper[4861]: I0219 15:10:26.303681 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ef1f749-73bc-4049-ba56-e022f58ca9d9" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Feb 19 15:10:26 crc kubenswrapper[4861]: E0219 15:10:26.303710 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f192ec-06ed-4074-9a2e-46c307d49672" containerName="extract-content" Feb 19 15:10:26 crc kubenswrapper[4861]: I0219 15:10:26.303726 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f192ec-06ed-4074-9a2e-46c307d49672" containerName="extract-content" Feb 19 15:10:26 crc kubenswrapper[4861]: E0219 15:10:26.303770 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f192ec-06ed-4074-9a2e-46c307d49672" containerName="registry-server" Feb 19 15:10:26 crc kubenswrapper[4861]: I0219 15:10:26.303786 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f192ec-06ed-4074-9a2e-46c307d49672" containerName="registry-server" Feb 19 15:10:26 crc kubenswrapper[4861]: I0219 15:10:26.304154 4861 
memory_manager.go:354] "RemoveStaleState removing state" podUID="6ef1f749-73bc-4049-ba56-e022f58ca9d9" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Feb 19 15:10:26 crc kubenswrapper[4861]: I0219 15:10:26.304188 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f192ec-06ed-4074-9a2e-46c307d49672" containerName="registry-server" Feb 19 15:10:26 crc kubenswrapper[4861]: I0219 15:10:26.305227 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-x6tgq" Feb 19 15:10:26 crc kubenswrapper[4861]: I0219 15:10:26.308039 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 15:10:26 crc kubenswrapper[4861]: I0219 15:10:26.308150 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jgxb2" Feb 19 15:10:26 crc kubenswrapper[4861]: I0219 15:10:26.308341 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 15:10:26 crc kubenswrapper[4861]: I0219 15:10:26.309246 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 15:10:26 crc kubenswrapper[4861]: I0219 15:10:26.317637 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-x6tgq"] Feb 19 15:10:26 crc kubenswrapper[4861]: I0219 15:10:26.478809 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-x6tgq\" (UID: \"3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808\") " pod="openstack/bootstrap-openstack-openstack-cell1-x6tgq" Feb 19 15:10:26 crc kubenswrapper[4861]: I0219 15:10:26.478903 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808-inventory\") pod \"bootstrap-openstack-openstack-cell1-x6tgq\" (UID: \"3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808\") " pod="openstack/bootstrap-openstack-openstack-cell1-x6tgq" Feb 19 15:10:26 crc kubenswrapper[4861]: I0219 15:10:26.479164 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-x6tgq\" (UID: \"3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808\") " pod="openstack/bootstrap-openstack-openstack-cell1-x6tgq" Feb 19 15:10:26 crc kubenswrapper[4861]: I0219 15:10:26.479794 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dhvc\" (UniqueName: \"kubernetes.io/projected/3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808-kube-api-access-7dhvc\") pod \"bootstrap-openstack-openstack-cell1-x6tgq\" (UID: \"3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808\") " pod="openstack/bootstrap-openstack-openstack-cell1-x6tgq" Feb 19 15:10:26 crc kubenswrapper[4861]: I0219 15:10:26.582408 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-x6tgq\" (UID: \"3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808\") " pod="openstack/bootstrap-openstack-openstack-cell1-x6tgq" Feb 19 15:10:26 crc kubenswrapper[4861]: I0219 15:10:26.582541 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808-inventory\") pod \"bootstrap-openstack-openstack-cell1-x6tgq\" (UID: 
\"3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808\") " pod="openstack/bootstrap-openstack-openstack-cell1-x6tgq" Feb 19 15:10:26 crc kubenswrapper[4861]: I0219 15:10:26.582624 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-x6tgq\" (UID: \"3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808\") " pod="openstack/bootstrap-openstack-openstack-cell1-x6tgq" Feb 19 15:10:26 crc kubenswrapper[4861]: I0219 15:10:26.582819 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dhvc\" (UniqueName: \"kubernetes.io/projected/3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808-kube-api-access-7dhvc\") pod \"bootstrap-openstack-openstack-cell1-x6tgq\" (UID: \"3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808\") " pod="openstack/bootstrap-openstack-openstack-cell1-x6tgq" Feb 19 15:10:26 crc kubenswrapper[4861]: I0219 15:10:26.590002 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808-inventory\") pod \"bootstrap-openstack-openstack-cell1-x6tgq\" (UID: \"3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808\") " pod="openstack/bootstrap-openstack-openstack-cell1-x6tgq" Feb 19 15:10:26 crc kubenswrapper[4861]: I0219 15:10:26.590316 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-x6tgq\" (UID: \"3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808\") " pod="openstack/bootstrap-openstack-openstack-cell1-x6tgq" Feb 19 15:10:26 crc kubenswrapper[4861]: I0219 15:10:26.600397 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-x6tgq\" (UID: \"3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808\") " pod="openstack/bootstrap-openstack-openstack-cell1-x6tgq" Feb 19 15:10:26 crc kubenswrapper[4861]: I0219 15:10:26.616638 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dhvc\" (UniqueName: \"kubernetes.io/projected/3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808-kube-api-access-7dhvc\") pod \"bootstrap-openstack-openstack-cell1-x6tgq\" (UID: \"3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808\") " pod="openstack/bootstrap-openstack-openstack-cell1-x6tgq" Feb 19 15:10:26 crc kubenswrapper[4861]: I0219 15:10:26.668211 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-x6tgq" Feb 19 15:10:27 crc kubenswrapper[4861]: I0219 15:10:27.215213 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-x6tgq"] Feb 19 15:10:27 crc kubenswrapper[4861]: I0219 15:10:27.241062 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 15:10:28 crc kubenswrapper[4861]: I0219 15:10:28.257334 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-x6tgq" event={"ID":"3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808","Type":"ContainerStarted","Data":"f24d973c847c164190ad6619bc5f26b4946b59aa6262da8f0d54d3d118881553"} Feb 19 15:10:28 crc kubenswrapper[4861]: I0219 15:10:28.257804 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-x6tgq" event={"ID":"3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808","Type":"ContainerStarted","Data":"2ee749e20dfcf453748c18d14bd80df6c59c59482cd2013e12986c0446a4c029"} Feb 19 15:10:28 crc kubenswrapper[4861]: I0219 15:10:28.282505 4861 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/bootstrap-openstack-openstack-cell1-x6tgq" podStartSLOduration=1.912645055 podStartE2EDuration="2.282490364s" podCreationTimestamp="2026-02-19 15:10:26 +0000 UTC" firstStartedPulling="2026-02-19 15:10:27.24080045 +0000 UTC m=+7241.901903688" lastFinishedPulling="2026-02-19 15:10:27.610645749 +0000 UTC m=+7242.271748997" observedRunningTime="2026-02-19 15:10:28.279618828 +0000 UTC m=+7242.940722066" watchObservedRunningTime="2026-02-19 15:10:28.282490364 +0000 UTC m=+7242.943593592" Feb 19 15:10:33 crc kubenswrapper[4861]: I0219 15:10:33.834155 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:10:33 crc kubenswrapper[4861]: I0219 15:10:33.834502 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:10:33 crc kubenswrapper[4861]: I0219 15:10:33.834570 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 15:10:33 crc kubenswrapper[4861]: I0219 15:10:33.835981 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"65e1af2814c1ba8f7c75ea9cf265036304cf5dff40f9e020a1705606c3cb3998"} pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 15:10:33 crc kubenswrapper[4861]: I0219 15:10:33.836092 4861 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" containerID="cri-o://65e1af2814c1ba8f7c75ea9cf265036304cf5dff40f9e020a1705606c3cb3998" gracePeriod=600 Feb 19 15:10:34 crc kubenswrapper[4861]: I0219 15:10:34.343455 4861 generic.go:334] "Generic (PLEG): container finished" podID="478e6971-05ac-43f2-99a2-cd93644c6227" containerID="65e1af2814c1ba8f7c75ea9cf265036304cf5dff40f9e020a1705606c3cb3998" exitCode=0 Feb 19 15:10:34 crc kubenswrapper[4861]: I0219 15:10:34.343528 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerDied","Data":"65e1af2814c1ba8f7c75ea9cf265036304cf5dff40f9e020a1705606c3cb3998"} Feb 19 15:10:34 crc kubenswrapper[4861]: I0219 15:10:34.344038 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"2b191d90e4bd0f98b51177d6ba5f981a064ea98e9fc2a0839977be6798eb1452"} Feb 19 15:10:34 crc kubenswrapper[4861]: I0219 15:10:34.344075 4861 scope.go:117] "RemoveContainer" containerID="57fc9127e8131615508ae362e809bc7c406ed5c389a14d0426fa90f34a09b607" Feb 19 15:11:11 crc kubenswrapper[4861]: I0219 15:11:11.852510 4861 scope.go:117] "RemoveContainer" containerID="8c55f539fb5127b836cc8060ed30e9bb7d8adf954b0511d412ef3a1ecef1f31f" Feb 19 15:11:11 crc kubenswrapper[4861]: I0219 15:11:11.907838 4861 scope.go:117] "RemoveContainer" containerID="c9f110d8ee03466da4c33bc769fe93f6fda3153e76fde8c943eacb5ead3ec7e5" Feb 19 15:11:11 crc kubenswrapper[4861]: I0219 15:11:11.951771 4861 scope.go:117] "RemoveContainer" containerID="680dfda4b08cbbf460cf1428433bd7215c6fce6df52b72a0a6982e2172611017" Feb 19 15:13:03 crc kubenswrapper[4861]: I0219 
15:13:03.833823 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:13:03 crc kubenswrapper[4861]: I0219 15:13:03.834463 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:13:33 crc kubenswrapper[4861]: I0219 15:13:33.834280 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:13:33 crc kubenswrapper[4861]: I0219 15:13:33.835232 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:13:34 crc kubenswrapper[4861]: I0219 15:13:34.449352 4861 generic.go:334] "Generic (PLEG): container finished" podID="3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808" containerID="f24d973c847c164190ad6619bc5f26b4946b59aa6262da8f0d54d3d118881553" exitCode=0 Feb 19 15:13:34 crc kubenswrapper[4861]: I0219 15:13:34.449486 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-x6tgq" 
event={"ID":"3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808","Type":"ContainerDied","Data":"f24d973c847c164190ad6619bc5f26b4946b59aa6262da8f0d54d3d118881553"} Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.025957 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-x6tgq" Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.154391 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808-ssh-key-openstack-cell1\") pod \"3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808\" (UID: \"3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808\") " Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.155013 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dhvc\" (UniqueName: \"kubernetes.io/projected/3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808-kube-api-access-7dhvc\") pod \"3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808\" (UID: \"3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808\") " Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.155089 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808-inventory\") pod \"3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808\" (UID: \"3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808\") " Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.155193 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808-bootstrap-combined-ca-bundle\") pod \"3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808\" (UID: \"3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808\") " Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.160906 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808-kube-api-access-7dhvc" (OuterVolumeSpecName: "kube-api-access-7dhvc") pod "3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808" (UID: "3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808"). InnerVolumeSpecName "kube-api-access-7dhvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.161321 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808" (UID: "3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.202529 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808-inventory" (OuterVolumeSpecName: "inventory") pod "3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808" (UID: "3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.213258 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808" (UID: "3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.260307 4861 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.260367 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dhvc\" (UniqueName: \"kubernetes.io/projected/3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808-kube-api-access-7dhvc\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.260392 4861 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.260411 4861 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.480082 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-x6tgq" event={"ID":"3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808","Type":"ContainerDied","Data":"2ee749e20dfcf453748c18d14bd80df6c59c59482cd2013e12986c0446a4c029"} Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.480120 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ee749e20dfcf453748c18d14bd80df6c59c59482cd2013e12986c0446a4c029" Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.480225 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-x6tgq" Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.598417 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-ntcpr"] Feb 19 15:13:36 crc kubenswrapper[4861]: E0219 15:13:36.599449 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808" containerName="bootstrap-openstack-openstack-cell1" Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.599557 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808" containerName="bootstrap-openstack-openstack-cell1" Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.600043 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808" containerName="bootstrap-openstack-openstack-cell1" Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.601471 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-ntcpr" Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.607493 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.608131 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jgxb2" Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.609084 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.609897 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.621150 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-ntcpr"] Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.773467 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/634c282a-1ccd-4f31-a944-704e2cafa09a-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-ntcpr\" (UID: \"634c282a-1ccd-4f31-a944-704e2cafa09a\") " pod="openstack/download-cache-openstack-openstack-cell1-ntcpr" Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.773542 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/634c282a-1ccd-4f31-a944-704e2cafa09a-inventory\") pod \"download-cache-openstack-openstack-cell1-ntcpr\" (UID: \"634c282a-1ccd-4f31-a944-704e2cafa09a\") " pod="openstack/download-cache-openstack-openstack-cell1-ntcpr" Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.773628 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-vsktw\" (UniqueName: \"kubernetes.io/projected/634c282a-1ccd-4f31-a944-704e2cafa09a-kube-api-access-vsktw\") pod \"download-cache-openstack-openstack-cell1-ntcpr\" (UID: \"634c282a-1ccd-4f31-a944-704e2cafa09a\") " pod="openstack/download-cache-openstack-openstack-cell1-ntcpr" Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.875784 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/634c282a-1ccd-4f31-a944-704e2cafa09a-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-ntcpr\" (UID: \"634c282a-1ccd-4f31-a944-704e2cafa09a\") " pod="openstack/download-cache-openstack-openstack-cell1-ntcpr" Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.875850 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/634c282a-1ccd-4f31-a944-704e2cafa09a-inventory\") pod \"download-cache-openstack-openstack-cell1-ntcpr\" (UID: \"634c282a-1ccd-4f31-a944-704e2cafa09a\") " pod="openstack/download-cache-openstack-openstack-cell1-ntcpr" Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.875920 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsktw\" (UniqueName: \"kubernetes.io/projected/634c282a-1ccd-4f31-a944-704e2cafa09a-kube-api-access-vsktw\") pod \"download-cache-openstack-openstack-cell1-ntcpr\" (UID: \"634c282a-1ccd-4f31-a944-704e2cafa09a\") " pod="openstack/download-cache-openstack-openstack-cell1-ntcpr" Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.881283 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/634c282a-1ccd-4f31-a944-704e2cafa09a-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-ntcpr\" (UID: \"634c282a-1ccd-4f31-a944-704e2cafa09a\") " 
pod="openstack/download-cache-openstack-openstack-cell1-ntcpr" Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.881281 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/634c282a-1ccd-4f31-a944-704e2cafa09a-inventory\") pod \"download-cache-openstack-openstack-cell1-ntcpr\" (UID: \"634c282a-1ccd-4f31-a944-704e2cafa09a\") " pod="openstack/download-cache-openstack-openstack-cell1-ntcpr" Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.905234 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsktw\" (UniqueName: \"kubernetes.io/projected/634c282a-1ccd-4f31-a944-704e2cafa09a-kube-api-access-vsktw\") pod \"download-cache-openstack-openstack-cell1-ntcpr\" (UID: \"634c282a-1ccd-4f31-a944-704e2cafa09a\") " pod="openstack/download-cache-openstack-openstack-cell1-ntcpr" Feb 19 15:13:36 crc kubenswrapper[4861]: I0219 15:13:36.932640 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-ntcpr" Feb 19 15:13:37 crc kubenswrapper[4861]: I0219 15:13:37.569079 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-ntcpr"] Feb 19 15:13:37 crc kubenswrapper[4861]: W0219 15:13:37.579555 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod634c282a_1ccd_4f31_a944_704e2cafa09a.slice/crio-f4b04e67d7c24f0bead6ab24b4cdc9da7ce232fdf370682e3a997b9aae0225cb WatchSource:0}: Error finding container f4b04e67d7c24f0bead6ab24b4cdc9da7ce232fdf370682e3a997b9aae0225cb: Status 404 returned error can't find the container with id f4b04e67d7c24f0bead6ab24b4cdc9da7ce232fdf370682e3a997b9aae0225cb Feb 19 15:13:38 crc kubenswrapper[4861]: I0219 15:13:38.502374 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-ntcpr" event={"ID":"634c282a-1ccd-4f31-a944-704e2cafa09a","Type":"ContainerStarted","Data":"cff2550a7fea94e8d88a3e03faad9e0dcd3786918716a7e87bec929f5fafa524"} Feb 19 15:13:38 crc kubenswrapper[4861]: I0219 15:13:38.502799 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-ntcpr" event={"ID":"634c282a-1ccd-4f31-a944-704e2cafa09a","Type":"ContainerStarted","Data":"f4b04e67d7c24f0bead6ab24b4cdc9da7ce232fdf370682e3a997b9aae0225cb"} Feb 19 15:13:38 crc kubenswrapper[4861]: I0219 15:13:38.534724 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-openstack-openstack-cell1-ntcpr" podStartSLOduration=2.073464773 podStartE2EDuration="2.534703489s" podCreationTimestamp="2026-02-19 15:13:36 +0000 UTC" firstStartedPulling="2026-02-19 15:13:37.581946737 +0000 UTC m=+7432.243050005" lastFinishedPulling="2026-02-19 15:13:38.043185463 +0000 UTC m=+7432.704288721" observedRunningTime="2026-02-19 
15:13:38.526379719 +0000 UTC m=+7433.187482957" watchObservedRunningTime="2026-02-19 15:13:38.534703489 +0000 UTC m=+7433.195806717" Feb 19 15:13:42 crc kubenswrapper[4861]: I0219 15:13:42.247789 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gjprv"] Feb 19 15:13:42 crc kubenswrapper[4861]: I0219 15:13:42.251716 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gjprv" Feb 19 15:13:42 crc kubenswrapper[4861]: I0219 15:13:42.271121 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gjprv"] Feb 19 15:13:42 crc kubenswrapper[4861]: I0219 15:13:42.318571 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c329b3ca-6bdb-4374-8e6e-048214d2a463-utilities\") pod \"redhat-marketplace-gjprv\" (UID: \"c329b3ca-6bdb-4374-8e6e-048214d2a463\") " pod="openshift-marketplace/redhat-marketplace-gjprv" Feb 19 15:13:42 crc kubenswrapper[4861]: I0219 15:13:42.318712 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c329b3ca-6bdb-4374-8e6e-048214d2a463-catalog-content\") pod \"redhat-marketplace-gjprv\" (UID: \"c329b3ca-6bdb-4374-8e6e-048214d2a463\") " pod="openshift-marketplace/redhat-marketplace-gjprv" Feb 19 15:13:42 crc kubenswrapper[4861]: I0219 15:13:42.319262 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pn2z\" (UniqueName: \"kubernetes.io/projected/c329b3ca-6bdb-4374-8e6e-048214d2a463-kube-api-access-7pn2z\") pod \"redhat-marketplace-gjprv\" (UID: \"c329b3ca-6bdb-4374-8e6e-048214d2a463\") " pod="openshift-marketplace/redhat-marketplace-gjprv" Feb 19 15:13:42 crc kubenswrapper[4861]: I0219 15:13:42.420804 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c329b3ca-6bdb-4374-8e6e-048214d2a463-utilities\") pod \"redhat-marketplace-gjprv\" (UID: \"c329b3ca-6bdb-4374-8e6e-048214d2a463\") " pod="openshift-marketplace/redhat-marketplace-gjprv" Feb 19 15:13:42 crc kubenswrapper[4861]: I0219 15:13:42.420901 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c329b3ca-6bdb-4374-8e6e-048214d2a463-catalog-content\") pod \"redhat-marketplace-gjprv\" (UID: \"c329b3ca-6bdb-4374-8e6e-048214d2a463\") " pod="openshift-marketplace/redhat-marketplace-gjprv" Feb 19 15:13:42 crc kubenswrapper[4861]: I0219 15:13:42.421073 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pn2z\" (UniqueName: \"kubernetes.io/projected/c329b3ca-6bdb-4374-8e6e-048214d2a463-kube-api-access-7pn2z\") pod \"redhat-marketplace-gjprv\" (UID: \"c329b3ca-6bdb-4374-8e6e-048214d2a463\") " pod="openshift-marketplace/redhat-marketplace-gjprv" Feb 19 15:13:42 crc kubenswrapper[4861]: I0219 15:13:42.421980 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c329b3ca-6bdb-4374-8e6e-048214d2a463-catalog-content\") pod \"redhat-marketplace-gjprv\" (UID: \"c329b3ca-6bdb-4374-8e6e-048214d2a463\") " pod="openshift-marketplace/redhat-marketplace-gjprv" Feb 19 15:13:42 crc kubenswrapper[4861]: I0219 15:13:42.422066 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c329b3ca-6bdb-4374-8e6e-048214d2a463-utilities\") pod \"redhat-marketplace-gjprv\" (UID: \"c329b3ca-6bdb-4374-8e6e-048214d2a463\") " pod="openshift-marketplace/redhat-marketplace-gjprv" Feb 19 15:13:42 crc kubenswrapper[4861]: I0219 15:13:42.448465 4861 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7pn2z\" (UniqueName: \"kubernetes.io/projected/c329b3ca-6bdb-4374-8e6e-048214d2a463-kube-api-access-7pn2z\") pod \"redhat-marketplace-gjprv\" (UID: \"c329b3ca-6bdb-4374-8e6e-048214d2a463\") " pod="openshift-marketplace/redhat-marketplace-gjprv" Feb 19 15:13:42 crc kubenswrapper[4861]: I0219 15:13:42.579903 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gjprv" Feb 19 15:13:43 crc kubenswrapper[4861]: I0219 15:13:43.097185 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gjprv"] Feb 19 15:13:43 crc kubenswrapper[4861]: W0219 15:13:43.100030 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc329b3ca_6bdb_4374_8e6e_048214d2a463.slice/crio-47f89dacc86566f9c836149d0bdfa539f645c3f1e3c651e74819546bc8af760f WatchSource:0}: Error finding container 47f89dacc86566f9c836149d0bdfa539f645c3f1e3c651e74819546bc8af760f: Status 404 returned error can't find the container with id 47f89dacc86566f9c836149d0bdfa539f645c3f1e3c651e74819546bc8af760f Feb 19 15:13:43 crc kubenswrapper[4861]: I0219 15:13:43.561332 4861 generic.go:334] "Generic (PLEG): container finished" podID="c329b3ca-6bdb-4374-8e6e-048214d2a463" containerID="379ee2484a569ab2b95dbe6554b3daf4f36261077bdbd5cf0b59f8cddc585a34" exitCode=0 Feb 19 15:13:43 crc kubenswrapper[4861]: I0219 15:13:43.561614 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gjprv" event={"ID":"c329b3ca-6bdb-4374-8e6e-048214d2a463","Type":"ContainerDied","Data":"379ee2484a569ab2b95dbe6554b3daf4f36261077bdbd5cf0b59f8cddc585a34"} Feb 19 15:13:43 crc kubenswrapper[4861]: I0219 15:13:43.561642 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gjprv" 
event={"ID":"c329b3ca-6bdb-4374-8e6e-048214d2a463","Type":"ContainerStarted","Data":"47f89dacc86566f9c836149d0bdfa539f645c3f1e3c651e74819546bc8af760f"} Feb 19 15:13:45 crc kubenswrapper[4861]: I0219 15:13:45.591271 4861 generic.go:334] "Generic (PLEG): container finished" podID="c329b3ca-6bdb-4374-8e6e-048214d2a463" containerID="d7b7cc8f490c68b117d5734aa57c0a2b58e3452577898cb2d3d541f25385100d" exitCode=0 Feb 19 15:13:45 crc kubenswrapper[4861]: I0219 15:13:45.591341 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gjprv" event={"ID":"c329b3ca-6bdb-4374-8e6e-048214d2a463","Type":"ContainerDied","Data":"d7b7cc8f490c68b117d5734aa57c0a2b58e3452577898cb2d3d541f25385100d"} Feb 19 15:13:46 crc kubenswrapper[4861]: I0219 15:13:46.609214 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gjprv" event={"ID":"c329b3ca-6bdb-4374-8e6e-048214d2a463","Type":"ContainerStarted","Data":"92fa8487896bac06c401dbfb68ddbd726ac57d8a51e56c46629888e90d1677ca"} Feb 19 15:13:46 crc kubenswrapper[4861]: I0219 15:13:46.631963 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gjprv" podStartSLOduration=2.200337554 podStartE2EDuration="4.631940837s" podCreationTimestamp="2026-02-19 15:13:42 +0000 UTC" firstStartedPulling="2026-02-19 15:13:43.596021185 +0000 UTC m=+7438.257124413" lastFinishedPulling="2026-02-19 15:13:46.027624468 +0000 UTC m=+7440.688727696" observedRunningTime="2026-02-19 15:13:46.628618929 +0000 UTC m=+7441.289722167" watchObservedRunningTime="2026-02-19 15:13:46.631940837 +0000 UTC m=+7441.293044075" Feb 19 15:13:52 crc kubenswrapper[4861]: I0219 15:13:52.580611 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gjprv" Feb 19 15:13:52 crc kubenswrapper[4861]: I0219 15:13:52.581094 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-marketplace-gjprv" Feb 19 15:13:52 crc kubenswrapper[4861]: I0219 15:13:52.656874 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gjprv" Feb 19 15:13:52 crc kubenswrapper[4861]: I0219 15:13:52.734539 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gjprv" Feb 19 15:13:52 crc kubenswrapper[4861]: I0219 15:13:52.909355 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gjprv"] Feb 19 15:13:54 crc kubenswrapper[4861]: I0219 15:13:54.695222 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gjprv" podUID="c329b3ca-6bdb-4374-8e6e-048214d2a463" containerName="registry-server" containerID="cri-o://92fa8487896bac06c401dbfb68ddbd726ac57d8a51e56c46629888e90d1677ca" gracePeriod=2 Feb 19 15:13:55 crc kubenswrapper[4861]: I0219 15:13:55.356004 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gjprv" Feb 19 15:13:55 crc kubenswrapper[4861]: I0219 15:13:55.529350 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pn2z\" (UniqueName: \"kubernetes.io/projected/c329b3ca-6bdb-4374-8e6e-048214d2a463-kube-api-access-7pn2z\") pod \"c329b3ca-6bdb-4374-8e6e-048214d2a463\" (UID: \"c329b3ca-6bdb-4374-8e6e-048214d2a463\") " Feb 19 15:13:55 crc kubenswrapper[4861]: I0219 15:13:55.529680 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c329b3ca-6bdb-4374-8e6e-048214d2a463-catalog-content\") pod \"c329b3ca-6bdb-4374-8e6e-048214d2a463\" (UID: \"c329b3ca-6bdb-4374-8e6e-048214d2a463\") " Feb 19 15:13:55 crc kubenswrapper[4861]: I0219 15:13:55.529791 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c329b3ca-6bdb-4374-8e6e-048214d2a463-utilities\") pod \"c329b3ca-6bdb-4374-8e6e-048214d2a463\" (UID: \"c329b3ca-6bdb-4374-8e6e-048214d2a463\") " Feb 19 15:13:55 crc kubenswrapper[4861]: I0219 15:13:55.531486 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c329b3ca-6bdb-4374-8e6e-048214d2a463-utilities" (OuterVolumeSpecName: "utilities") pod "c329b3ca-6bdb-4374-8e6e-048214d2a463" (UID: "c329b3ca-6bdb-4374-8e6e-048214d2a463"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:13:55 crc kubenswrapper[4861]: I0219 15:13:55.535846 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c329b3ca-6bdb-4374-8e6e-048214d2a463-kube-api-access-7pn2z" (OuterVolumeSpecName: "kube-api-access-7pn2z") pod "c329b3ca-6bdb-4374-8e6e-048214d2a463" (UID: "c329b3ca-6bdb-4374-8e6e-048214d2a463"). InnerVolumeSpecName "kube-api-access-7pn2z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:13:55 crc kubenswrapper[4861]: I0219 15:13:55.564410 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c329b3ca-6bdb-4374-8e6e-048214d2a463-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c329b3ca-6bdb-4374-8e6e-048214d2a463" (UID: "c329b3ca-6bdb-4374-8e6e-048214d2a463"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:13:55 crc kubenswrapper[4861]: I0219 15:13:55.633224 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c329b3ca-6bdb-4374-8e6e-048214d2a463-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:55 crc kubenswrapper[4861]: I0219 15:13:55.633286 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pn2z\" (UniqueName: \"kubernetes.io/projected/c329b3ca-6bdb-4374-8e6e-048214d2a463-kube-api-access-7pn2z\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:55 crc kubenswrapper[4861]: I0219 15:13:55.633308 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c329b3ca-6bdb-4374-8e6e-048214d2a463-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:55 crc kubenswrapper[4861]: I0219 15:13:55.706388 4861 generic.go:334] "Generic (PLEG): container finished" podID="c329b3ca-6bdb-4374-8e6e-048214d2a463" containerID="92fa8487896bac06c401dbfb68ddbd726ac57d8a51e56c46629888e90d1677ca" exitCode=0 Feb 19 15:13:55 crc kubenswrapper[4861]: I0219 15:13:55.706447 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gjprv" event={"ID":"c329b3ca-6bdb-4374-8e6e-048214d2a463","Type":"ContainerDied","Data":"92fa8487896bac06c401dbfb68ddbd726ac57d8a51e56c46629888e90d1677ca"} Feb 19 15:13:55 crc kubenswrapper[4861]: I0219 15:13:55.706476 4861 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-gjprv" event={"ID":"c329b3ca-6bdb-4374-8e6e-048214d2a463","Type":"ContainerDied","Data":"47f89dacc86566f9c836149d0bdfa539f645c3f1e3c651e74819546bc8af760f"} Feb 19 15:13:55 crc kubenswrapper[4861]: I0219 15:13:55.706494 4861 scope.go:117] "RemoveContainer" containerID="92fa8487896bac06c401dbfb68ddbd726ac57d8a51e56c46629888e90d1677ca" Feb 19 15:13:55 crc kubenswrapper[4861]: I0219 15:13:55.706612 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gjprv" Feb 19 15:13:55 crc kubenswrapper[4861]: I0219 15:13:55.728202 4861 scope.go:117] "RemoveContainer" containerID="d7b7cc8f490c68b117d5734aa57c0a2b58e3452577898cb2d3d541f25385100d" Feb 19 15:13:55 crc kubenswrapper[4861]: I0219 15:13:55.752063 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gjprv"] Feb 19 15:13:55 crc kubenswrapper[4861]: I0219 15:13:55.767889 4861 scope.go:117] "RemoveContainer" containerID="379ee2484a569ab2b95dbe6554b3daf4f36261077bdbd5cf0b59f8cddc585a34" Feb 19 15:13:55 crc kubenswrapper[4861]: I0219 15:13:55.775952 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gjprv"] Feb 19 15:13:55 crc kubenswrapper[4861]: I0219 15:13:55.832782 4861 scope.go:117] "RemoveContainer" containerID="92fa8487896bac06c401dbfb68ddbd726ac57d8a51e56c46629888e90d1677ca" Feb 19 15:13:55 crc kubenswrapper[4861]: E0219 15:13:55.833333 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92fa8487896bac06c401dbfb68ddbd726ac57d8a51e56c46629888e90d1677ca\": container with ID starting with 92fa8487896bac06c401dbfb68ddbd726ac57d8a51e56c46629888e90d1677ca not found: ID does not exist" containerID="92fa8487896bac06c401dbfb68ddbd726ac57d8a51e56c46629888e90d1677ca" Feb 19 15:13:55 crc kubenswrapper[4861]: I0219 15:13:55.833470 4861 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92fa8487896bac06c401dbfb68ddbd726ac57d8a51e56c46629888e90d1677ca"} err="failed to get container status \"92fa8487896bac06c401dbfb68ddbd726ac57d8a51e56c46629888e90d1677ca\": rpc error: code = NotFound desc = could not find container \"92fa8487896bac06c401dbfb68ddbd726ac57d8a51e56c46629888e90d1677ca\": container with ID starting with 92fa8487896bac06c401dbfb68ddbd726ac57d8a51e56c46629888e90d1677ca not found: ID does not exist" Feb 19 15:13:55 crc kubenswrapper[4861]: I0219 15:13:55.833565 4861 scope.go:117] "RemoveContainer" containerID="d7b7cc8f490c68b117d5734aa57c0a2b58e3452577898cb2d3d541f25385100d" Feb 19 15:13:55 crc kubenswrapper[4861]: E0219 15:13:55.833991 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7b7cc8f490c68b117d5734aa57c0a2b58e3452577898cb2d3d541f25385100d\": container with ID starting with d7b7cc8f490c68b117d5734aa57c0a2b58e3452577898cb2d3d541f25385100d not found: ID does not exist" containerID="d7b7cc8f490c68b117d5734aa57c0a2b58e3452577898cb2d3d541f25385100d" Feb 19 15:13:55 crc kubenswrapper[4861]: I0219 15:13:55.834055 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7b7cc8f490c68b117d5734aa57c0a2b58e3452577898cb2d3d541f25385100d"} err="failed to get container status \"d7b7cc8f490c68b117d5734aa57c0a2b58e3452577898cb2d3d541f25385100d\": rpc error: code = NotFound desc = could not find container \"d7b7cc8f490c68b117d5734aa57c0a2b58e3452577898cb2d3d541f25385100d\": container with ID starting with d7b7cc8f490c68b117d5734aa57c0a2b58e3452577898cb2d3d541f25385100d not found: ID does not exist" Feb 19 15:13:55 crc kubenswrapper[4861]: I0219 15:13:55.834083 4861 scope.go:117] "RemoveContainer" containerID="379ee2484a569ab2b95dbe6554b3daf4f36261077bdbd5cf0b59f8cddc585a34" Feb 19 15:13:55 crc kubenswrapper[4861]: E0219 
15:13:55.834444 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"379ee2484a569ab2b95dbe6554b3daf4f36261077bdbd5cf0b59f8cddc585a34\": container with ID starting with 379ee2484a569ab2b95dbe6554b3daf4f36261077bdbd5cf0b59f8cddc585a34 not found: ID does not exist" containerID="379ee2484a569ab2b95dbe6554b3daf4f36261077bdbd5cf0b59f8cddc585a34" Feb 19 15:13:55 crc kubenswrapper[4861]: I0219 15:13:55.834575 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"379ee2484a569ab2b95dbe6554b3daf4f36261077bdbd5cf0b59f8cddc585a34"} err="failed to get container status \"379ee2484a569ab2b95dbe6554b3daf4f36261077bdbd5cf0b59f8cddc585a34\": rpc error: code = NotFound desc = could not find container \"379ee2484a569ab2b95dbe6554b3daf4f36261077bdbd5cf0b59f8cddc585a34\": container with ID starting with 379ee2484a569ab2b95dbe6554b3daf4f36261077bdbd5cf0b59f8cddc585a34 not found: ID does not exist" Feb 19 15:13:55 crc kubenswrapper[4861]: I0219 15:13:55.989982 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c329b3ca-6bdb-4374-8e6e-048214d2a463" path="/var/lib/kubelet/pods/c329b3ca-6bdb-4374-8e6e-048214d2a463/volumes" Feb 19 15:14:03 crc kubenswrapper[4861]: I0219 15:14:03.834308 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:14:03 crc kubenswrapper[4861]: I0219 15:14:03.835117 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 19 15:14:03 crc kubenswrapper[4861]: I0219 15:14:03.835184 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 15:14:03 crc kubenswrapper[4861]: I0219 15:14:03.836832 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2b191d90e4bd0f98b51177d6ba5f981a064ea98e9fc2a0839977be6798eb1452"} pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 15:14:03 crc kubenswrapper[4861]: I0219 15:14:03.836960 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" containerID="cri-o://2b191d90e4bd0f98b51177d6ba5f981a064ea98e9fc2a0839977be6798eb1452" gracePeriod=600 Feb 19 15:14:03 crc kubenswrapper[4861]: E0219 15:14:03.988583 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:14:04 crc kubenswrapper[4861]: I0219 15:14:04.850017 4861 generic.go:334] "Generic (PLEG): container finished" podID="478e6971-05ac-43f2-99a2-cd93644c6227" containerID="2b191d90e4bd0f98b51177d6ba5f981a064ea98e9fc2a0839977be6798eb1452" exitCode=0 Feb 19 15:14:04 crc kubenswrapper[4861]: I0219 15:14:04.850349 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" 
event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerDied","Data":"2b191d90e4bd0f98b51177d6ba5f981a064ea98e9fc2a0839977be6798eb1452"} Feb 19 15:14:04 crc kubenswrapper[4861]: I0219 15:14:04.850453 4861 scope.go:117] "RemoveContainer" containerID="65e1af2814c1ba8f7c75ea9cf265036304cf5dff40f9e020a1705606c3cb3998" Feb 19 15:14:04 crc kubenswrapper[4861]: I0219 15:14:04.851661 4861 scope.go:117] "RemoveContainer" containerID="2b191d90e4bd0f98b51177d6ba5f981a064ea98e9fc2a0839977be6798eb1452" Feb 19 15:14:04 crc kubenswrapper[4861]: E0219 15:14:04.852120 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:14:15 crc kubenswrapper[4861]: I0219 15:14:15.995638 4861 scope.go:117] "RemoveContainer" containerID="2b191d90e4bd0f98b51177d6ba5f981a064ea98e9fc2a0839977be6798eb1452" Feb 19 15:14:15 crc kubenswrapper[4861]: E0219 15:14:15.996612 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:14:26 crc kubenswrapper[4861]: I0219 15:14:26.980116 4861 scope.go:117] "RemoveContainer" containerID="2b191d90e4bd0f98b51177d6ba5f981a064ea98e9fc2a0839977be6798eb1452" Feb 19 15:14:26 crc kubenswrapper[4861]: E0219 15:14:26.980961 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:14:39 crc kubenswrapper[4861]: I0219 15:14:39.977565 4861 scope.go:117] "RemoveContainer" containerID="2b191d90e4bd0f98b51177d6ba5f981a064ea98e9fc2a0839977be6798eb1452" Feb 19 15:14:39 crc kubenswrapper[4861]: E0219 15:14:39.980760 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:14:54 crc kubenswrapper[4861]: I0219 15:14:54.978380 4861 scope.go:117] "RemoveContainer" containerID="2b191d90e4bd0f98b51177d6ba5f981a064ea98e9fc2a0839977be6798eb1452" Feb 19 15:14:54 crc kubenswrapper[4861]: E0219 15:14:54.979833 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:15:00 crc kubenswrapper[4861]: I0219 15:15:00.193540 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525235-rpb87"] Feb 19 15:15:00 crc kubenswrapper[4861]: E0219 15:15:00.194625 4861 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c329b3ca-6bdb-4374-8e6e-048214d2a463" containerName="extract-content" Feb 19 15:15:00 crc kubenswrapper[4861]: I0219 15:15:00.194643 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c329b3ca-6bdb-4374-8e6e-048214d2a463" containerName="extract-content" Feb 19 15:15:00 crc kubenswrapper[4861]: E0219 15:15:00.194662 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c329b3ca-6bdb-4374-8e6e-048214d2a463" containerName="registry-server" Feb 19 15:15:00 crc kubenswrapper[4861]: I0219 15:15:00.194671 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c329b3ca-6bdb-4374-8e6e-048214d2a463" containerName="registry-server" Feb 19 15:15:00 crc kubenswrapper[4861]: E0219 15:15:00.194731 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c329b3ca-6bdb-4374-8e6e-048214d2a463" containerName="extract-utilities" Feb 19 15:15:00 crc kubenswrapper[4861]: I0219 15:15:00.194743 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c329b3ca-6bdb-4374-8e6e-048214d2a463" containerName="extract-utilities" Feb 19 15:15:00 crc kubenswrapper[4861]: I0219 15:15:00.195001 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="c329b3ca-6bdb-4374-8e6e-048214d2a463" containerName="registry-server" Feb 19 15:15:00 crc kubenswrapper[4861]: I0219 15:15:00.195892 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-rpb87" Feb 19 15:15:00 crc kubenswrapper[4861]: I0219 15:15:00.197995 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 15:15:00 crc kubenswrapper[4861]: I0219 15:15:00.198074 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 15:15:00 crc kubenswrapper[4861]: I0219 15:15:00.215399 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525235-rpb87"] Feb 19 15:15:00 crc kubenswrapper[4861]: I0219 15:15:00.319724 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2c574eb-3877-43f5-a485-a8a9c6923aa8-secret-volume\") pod \"collect-profiles-29525235-rpb87\" (UID: \"a2c574eb-3877-43f5-a485-a8a9c6923aa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-rpb87" Feb 19 15:15:00 crc kubenswrapper[4861]: I0219 15:15:00.320121 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvqjn\" (UniqueName: \"kubernetes.io/projected/a2c574eb-3877-43f5-a485-a8a9c6923aa8-kube-api-access-hvqjn\") pod \"collect-profiles-29525235-rpb87\" (UID: \"a2c574eb-3877-43f5-a485-a8a9c6923aa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-rpb87" Feb 19 15:15:00 crc kubenswrapper[4861]: I0219 15:15:00.320256 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2c574eb-3877-43f5-a485-a8a9c6923aa8-config-volume\") pod \"collect-profiles-29525235-rpb87\" (UID: \"a2c574eb-3877-43f5-a485-a8a9c6923aa8\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-rpb87" Feb 19 15:15:00 crc kubenswrapper[4861]: I0219 15:15:00.423221 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2c574eb-3877-43f5-a485-a8a9c6923aa8-secret-volume\") pod \"collect-profiles-29525235-rpb87\" (UID: \"a2c574eb-3877-43f5-a485-a8a9c6923aa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-rpb87" Feb 19 15:15:00 crc kubenswrapper[4861]: I0219 15:15:00.423454 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvqjn\" (UniqueName: \"kubernetes.io/projected/a2c574eb-3877-43f5-a485-a8a9c6923aa8-kube-api-access-hvqjn\") pod \"collect-profiles-29525235-rpb87\" (UID: \"a2c574eb-3877-43f5-a485-a8a9c6923aa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-rpb87" Feb 19 15:15:00 crc kubenswrapper[4861]: I0219 15:15:00.423503 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2c574eb-3877-43f5-a485-a8a9c6923aa8-config-volume\") pod \"collect-profiles-29525235-rpb87\" (UID: \"a2c574eb-3877-43f5-a485-a8a9c6923aa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-rpb87" Feb 19 15:15:00 crc kubenswrapper[4861]: I0219 15:15:00.424784 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2c574eb-3877-43f5-a485-a8a9c6923aa8-config-volume\") pod \"collect-profiles-29525235-rpb87\" (UID: \"a2c574eb-3877-43f5-a485-a8a9c6923aa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-rpb87" Feb 19 15:15:00 crc kubenswrapper[4861]: I0219 15:15:00.441079 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/a2c574eb-3877-43f5-a485-a8a9c6923aa8-secret-volume\") pod \"collect-profiles-29525235-rpb87\" (UID: \"a2c574eb-3877-43f5-a485-a8a9c6923aa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-rpb87" Feb 19 15:15:00 crc kubenswrapper[4861]: I0219 15:15:00.448302 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvqjn\" (UniqueName: \"kubernetes.io/projected/a2c574eb-3877-43f5-a485-a8a9c6923aa8-kube-api-access-hvqjn\") pod \"collect-profiles-29525235-rpb87\" (UID: \"a2c574eb-3877-43f5-a485-a8a9c6923aa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-rpb87" Feb 19 15:15:00 crc kubenswrapper[4861]: I0219 15:15:00.525688 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-rpb87" Feb 19 15:15:01 crc kubenswrapper[4861]: I0219 15:15:01.012405 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525235-rpb87"] Feb 19 15:15:01 crc kubenswrapper[4861]: I0219 15:15:01.515958 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-rpb87" event={"ID":"a2c574eb-3877-43f5-a485-a8a9c6923aa8","Type":"ContainerStarted","Data":"11c23f2c0c13df5d4d43abcc227da089e5b62a4a1f05356c2ee1779ae5f43f33"} Feb 19 15:15:01 crc kubenswrapper[4861]: I0219 15:15:01.517926 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-rpb87" event={"ID":"a2c574eb-3877-43f5-a485-a8a9c6923aa8","Type":"ContainerStarted","Data":"d11550e16d8a4aa56856f4368182d8a5d0d15dea858713cb9950ac51f18703b5"} Feb 19 15:15:01 crc kubenswrapper[4861]: I0219 15:15:01.546731 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-rpb87" 
podStartSLOduration=1.5467003350000001 podStartE2EDuration="1.546700335s" podCreationTimestamp="2026-02-19 15:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:15:01.544216479 +0000 UTC m=+7516.205319717" watchObservedRunningTime="2026-02-19 15:15:01.546700335 +0000 UTC m=+7516.207803603" Feb 19 15:15:02 crc kubenswrapper[4861]: I0219 15:15:02.533245 4861 generic.go:334] "Generic (PLEG): container finished" podID="a2c574eb-3877-43f5-a485-a8a9c6923aa8" containerID="11c23f2c0c13df5d4d43abcc227da089e5b62a4a1f05356c2ee1779ae5f43f33" exitCode=0 Feb 19 15:15:02 crc kubenswrapper[4861]: I0219 15:15:02.533323 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-rpb87" event={"ID":"a2c574eb-3877-43f5-a485-a8a9c6923aa8","Type":"ContainerDied","Data":"11c23f2c0c13df5d4d43abcc227da089e5b62a4a1f05356c2ee1779ae5f43f33"} Feb 19 15:15:04 crc kubenswrapper[4861]: I0219 15:15:04.033227 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-rpb87" Feb 19 15:15:04 crc kubenswrapper[4861]: I0219 15:15:04.111870 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2c574eb-3877-43f5-a485-a8a9c6923aa8-secret-volume\") pod \"a2c574eb-3877-43f5-a485-a8a9c6923aa8\" (UID: \"a2c574eb-3877-43f5-a485-a8a9c6923aa8\") " Feb 19 15:15:04 crc kubenswrapper[4861]: I0219 15:15:04.112468 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvqjn\" (UniqueName: \"kubernetes.io/projected/a2c574eb-3877-43f5-a485-a8a9c6923aa8-kube-api-access-hvqjn\") pod \"a2c574eb-3877-43f5-a485-a8a9c6923aa8\" (UID: \"a2c574eb-3877-43f5-a485-a8a9c6923aa8\") " Feb 19 15:15:04 crc kubenswrapper[4861]: I0219 15:15:04.112639 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2c574eb-3877-43f5-a485-a8a9c6923aa8-config-volume\") pod \"a2c574eb-3877-43f5-a485-a8a9c6923aa8\" (UID: \"a2c574eb-3877-43f5-a485-a8a9c6923aa8\") " Feb 19 15:15:04 crc kubenswrapper[4861]: I0219 15:15:04.113832 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2c574eb-3877-43f5-a485-a8a9c6923aa8-config-volume" (OuterVolumeSpecName: "config-volume") pod "a2c574eb-3877-43f5-a485-a8a9c6923aa8" (UID: "a2c574eb-3877-43f5-a485-a8a9c6923aa8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:15:04 crc kubenswrapper[4861]: I0219 15:15:04.117823 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2c574eb-3877-43f5-a485-a8a9c6923aa8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a2c574eb-3877-43f5-a485-a8a9c6923aa8" (UID: "a2c574eb-3877-43f5-a485-a8a9c6923aa8"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:15:04 crc kubenswrapper[4861]: I0219 15:15:04.119911 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2c574eb-3877-43f5-a485-a8a9c6923aa8-kube-api-access-hvqjn" (OuterVolumeSpecName: "kube-api-access-hvqjn") pod "a2c574eb-3877-43f5-a485-a8a9c6923aa8" (UID: "a2c574eb-3877-43f5-a485-a8a9c6923aa8"). InnerVolumeSpecName "kube-api-access-hvqjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:15:04 crc kubenswrapper[4861]: I0219 15:15:04.215686 4861 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2c574eb-3877-43f5-a485-a8a9c6923aa8-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 15:15:04 crc kubenswrapper[4861]: I0219 15:15:04.215731 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvqjn\" (UniqueName: \"kubernetes.io/projected/a2c574eb-3877-43f5-a485-a8a9c6923aa8-kube-api-access-hvqjn\") on node \"crc\" DevicePath \"\"" Feb 19 15:15:04 crc kubenswrapper[4861]: I0219 15:15:04.215742 4861 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2c574eb-3877-43f5-a485-a8a9c6923aa8-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 15:15:04 crc kubenswrapper[4861]: I0219 15:15:04.557551 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-rpb87" event={"ID":"a2c574eb-3877-43f5-a485-a8a9c6923aa8","Type":"ContainerDied","Data":"d11550e16d8a4aa56856f4368182d8a5d0d15dea858713cb9950ac51f18703b5"} Feb 19 15:15:04 crc kubenswrapper[4861]: I0219 15:15:04.557611 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d11550e16d8a4aa56856f4368182d8a5d0d15dea858713cb9950ac51f18703b5" Feb 19 15:15:04 crc kubenswrapper[4861]: I0219 15:15:04.557619 4861 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-rpb87" Feb 19 15:15:04 crc kubenswrapper[4861]: I0219 15:15:04.634732 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525190-sf94r"] Feb 19 15:15:04 crc kubenswrapper[4861]: I0219 15:15:04.646333 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525190-sf94r"] Feb 19 15:15:06 crc kubenswrapper[4861]: I0219 15:15:06.002711 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63c72b22-e089-4c38-99de-64bdd1553445" path="/var/lib/kubelet/pods/63c72b22-e089-4c38-99de-64bdd1553445/volumes" Feb 19 15:15:08 crc kubenswrapper[4861]: I0219 15:15:08.977495 4861 scope.go:117] "RemoveContainer" containerID="2b191d90e4bd0f98b51177d6ba5f981a064ea98e9fc2a0839977be6798eb1452" Feb 19 15:15:08 crc kubenswrapper[4861]: E0219 15:15:08.978290 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:15:12 crc kubenswrapper[4861]: I0219 15:15:12.139713 4861 scope.go:117] "RemoveContainer" containerID="56d9dc8b411b04bb2f3cbdc60c208748bd7be020d08b21530a2123f2101602ba" Feb 19 15:15:23 crc kubenswrapper[4861]: I0219 15:15:23.980342 4861 scope.go:117] "RemoveContainer" containerID="2b191d90e4bd0f98b51177d6ba5f981a064ea98e9fc2a0839977be6798eb1452" Feb 19 15:15:23 crc kubenswrapper[4861]: E0219 15:15:23.981804 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:15:33 crc kubenswrapper[4861]: I0219 15:15:33.886619 4861 generic.go:334] "Generic (PLEG): container finished" podID="634c282a-1ccd-4f31-a944-704e2cafa09a" containerID="cff2550a7fea94e8d88a3e03faad9e0dcd3786918716a7e87bec929f5fafa524" exitCode=0 Feb 19 15:15:33 crc kubenswrapper[4861]: I0219 15:15:33.886861 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-ntcpr" event={"ID":"634c282a-1ccd-4f31-a944-704e2cafa09a","Type":"ContainerDied","Data":"cff2550a7fea94e8d88a3e03faad9e0dcd3786918716a7e87bec929f5fafa524"} Feb 19 15:15:35 crc kubenswrapper[4861]: I0219 15:15:35.484060 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-ntcpr" Feb 19 15:15:35 crc kubenswrapper[4861]: I0219 15:15:35.558727 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsktw\" (UniqueName: \"kubernetes.io/projected/634c282a-1ccd-4f31-a944-704e2cafa09a-kube-api-access-vsktw\") pod \"634c282a-1ccd-4f31-a944-704e2cafa09a\" (UID: \"634c282a-1ccd-4f31-a944-704e2cafa09a\") " Feb 19 15:15:35 crc kubenswrapper[4861]: I0219 15:15:35.558794 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/634c282a-1ccd-4f31-a944-704e2cafa09a-ssh-key-openstack-cell1\") pod \"634c282a-1ccd-4f31-a944-704e2cafa09a\" (UID: \"634c282a-1ccd-4f31-a944-704e2cafa09a\") " Feb 19 15:15:35 crc kubenswrapper[4861]: I0219 15:15:35.558980 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/634c282a-1ccd-4f31-a944-704e2cafa09a-inventory\") pod \"634c282a-1ccd-4f31-a944-704e2cafa09a\" (UID: \"634c282a-1ccd-4f31-a944-704e2cafa09a\") " Feb 19 15:15:35 crc kubenswrapper[4861]: I0219 15:15:35.566211 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/634c282a-1ccd-4f31-a944-704e2cafa09a-kube-api-access-vsktw" (OuterVolumeSpecName: "kube-api-access-vsktw") pod "634c282a-1ccd-4f31-a944-704e2cafa09a" (UID: "634c282a-1ccd-4f31-a944-704e2cafa09a"). InnerVolumeSpecName "kube-api-access-vsktw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:15:35 crc kubenswrapper[4861]: I0219 15:15:35.603330 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/634c282a-1ccd-4f31-a944-704e2cafa09a-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "634c282a-1ccd-4f31-a944-704e2cafa09a" (UID: "634c282a-1ccd-4f31-a944-704e2cafa09a"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:15:35 crc kubenswrapper[4861]: I0219 15:15:35.604403 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/634c282a-1ccd-4f31-a944-704e2cafa09a-inventory" (OuterVolumeSpecName: "inventory") pod "634c282a-1ccd-4f31-a944-704e2cafa09a" (UID: "634c282a-1ccd-4f31-a944-704e2cafa09a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:15:35 crc kubenswrapper[4861]: I0219 15:15:35.662033 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsktw\" (UniqueName: \"kubernetes.io/projected/634c282a-1ccd-4f31-a944-704e2cafa09a-kube-api-access-vsktw\") on node \"crc\" DevicePath \"\"" Feb 19 15:15:35 crc kubenswrapper[4861]: I0219 15:15:35.662061 4861 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/634c282a-1ccd-4f31-a944-704e2cafa09a-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 15:15:35 crc kubenswrapper[4861]: I0219 15:15:35.662071 4861 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/634c282a-1ccd-4f31-a944-704e2cafa09a-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 15:15:35 crc kubenswrapper[4861]: I0219 15:15:35.908623 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-ntcpr" event={"ID":"634c282a-1ccd-4f31-a944-704e2cafa09a","Type":"ContainerDied","Data":"f4b04e67d7c24f0bead6ab24b4cdc9da7ce232fdf370682e3a997b9aae0225cb"} Feb 19 15:15:35 crc kubenswrapper[4861]: I0219 15:15:35.908695 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4b04e67d7c24f0bead6ab24b4cdc9da7ce232fdf370682e3a997b9aae0225cb" Feb 19 15:15:35 crc kubenswrapper[4861]: I0219 15:15:35.908699 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-ntcpr" Feb 19 15:15:36 crc kubenswrapper[4861]: I0219 15:15:36.038874 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-vmdpv"] Feb 19 15:15:36 crc kubenswrapper[4861]: E0219 15:15:36.039702 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="634c282a-1ccd-4f31-a944-704e2cafa09a" containerName="download-cache-openstack-openstack-cell1" Feb 19 15:15:36 crc kubenswrapper[4861]: I0219 15:15:36.039736 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="634c282a-1ccd-4f31-a944-704e2cafa09a" containerName="download-cache-openstack-openstack-cell1" Feb 19 15:15:36 crc kubenswrapper[4861]: E0219 15:15:36.039771 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2c574eb-3877-43f5-a485-a8a9c6923aa8" containerName="collect-profiles" Feb 19 15:15:36 crc kubenswrapper[4861]: I0219 15:15:36.039785 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2c574eb-3877-43f5-a485-a8a9c6923aa8" containerName="collect-profiles" Feb 19 15:15:36 crc kubenswrapper[4861]: I0219 15:15:36.040190 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="634c282a-1ccd-4f31-a944-704e2cafa09a" containerName="download-cache-openstack-openstack-cell1" Feb 19 15:15:36 crc kubenswrapper[4861]: I0219 15:15:36.040239 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2c574eb-3877-43f5-a485-a8a9c6923aa8" containerName="collect-profiles" Feb 19 15:15:36 crc kubenswrapper[4861]: I0219 15:15:36.041486 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-vmdpv" Feb 19 15:15:36 crc kubenswrapper[4861]: I0219 15:15:36.044834 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jgxb2" Feb 19 15:15:36 crc kubenswrapper[4861]: I0219 15:15:36.046349 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 15:15:36 crc kubenswrapper[4861]: I0219 15:15:36.046664 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 15:15:36 crc kubenswrapper[4861]: I0219 15:15:36.051237 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-vmdpv"] Feb 19 15:15:36 crc kubenswrapper[4861]: I0219 15:15:36.055370 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 15:15:36 crc kubenswrapper[4861]: I0219 15:15:36.070005 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/b744679b-533f-4182-8f10-1b0160eda028-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-vmdpv\" (UID: \"b744679b-533f-4182-8f10-1b0160eda028\") " pod="openstack/configure-network-openstack-openstack-cell1-vmdpv" Feb 19 15:15:36 crc kubenswrapper[4861]: I0219 15:15:36.070090 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7qr9\" (UniqueName: \"kubernetes.io/projected/b744679b-533f-4182-8f10-1b0160eda028-kube-api-access-w7qr9\") pod \"configure-network-openstack-openstack-cell1-vmdpv\" (UID: \"b744679b-533f-4182-8f10-1b0160eda028\") " pod="openstack/configure-network-openstack-openstack-cell1-vmdpv" Feb 19 15:15:36 crc kubenswrapper[4861]: I0219 15:15:36.070187 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b744679b-533f-4182-8f10-1b0160eda028-inventory\") pod \"configure-network-openstack-openstack-cell1-vmdpv\" (UID: \"b744679b-533f-4182-8f10-1b0160eda028\") " pod="openstack/configure-network-openstack-openstack-cell1-vmdpv" Feb 19 15:15:36 crc kubenswrapper[4861]: I0219 15:15:36.171238 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/b744679b-533f-4182-8f10-1b0160eda028-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-vmdpv\" (UID: \"b744679b-533f-4182-8f10-1b0160eda028\") " pod="openstack/configure-network-openstack-openstack-cell1-vmdpv" Feb 19 15:15:36 crc kubenswrapper[4861]: I0219 15:15:36.171288 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7qr9\" (UniqueName: \"kubernetes.io/projected/b744679b-533f-4182-8f10-1b0160eda028-kube-api-access-w7qr9\") pod \"configure-network-openstack-openstack-cell1-vmdpv\" (UID: \"b744679b-533f-4182-8f10-1b0160eda028\") " pod="openstack/configure-network-openstack-openstack-cell1-vmdpv" Feb 19 15:15:36 crc kubenswrapper[4861]: I0219 15:15:36.171348 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b744679b-533f-4182-8f10-1b0160eda028-inventory\") pod \"configure-network-openstack-openstack-cell1-vmdpv\" (UID: \"b744679b-533f-4182-8f10-1b0160eda028\") " pod="openstack/configure-network-openstack-openstack-cell1-vmdpv" Feb 19 15:15:36 crc kubenswrapper[4861]: I0219 15:15:36.177312 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/b744679b-533f-4182-8f10-1b0160eda028-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-vmdpv\" (UID: 
\"b744679b-533f-4182-8f10-1b0160eda028\") " pod="openstack/configure-network-openstack-openstack-cell1-vmdpv" Feb 19 15:15:36 crc kubenswrapper[4861]: I0219 15:15:36.177367 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b744679b-533f-4182-8f10-1b0160eda028-inventory\") pod \"configure-network-openstack-openstack-cell1-vmdpv\" (UID: \"b744679b-533f-4182-8f10-1b0160eda028\") " pod="openstack/configure-network-openstack-openstack-cell1-vmdpv" Feb 19 15:15:36 crc kubenswrapper[4861]: I0219 15:15:36.191693 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7qr9\" (UniqueName: \"kubernetes.io/projected/b744679b-533f-4182-8f10-1b0160eda028-kube-api-access-w7qr9\") pod \"configure-network-openstack-openstack-cell1-vmdpv\" (UID: \"b744679b-533f-4182-8f10-1b0160eda028\") " pod="openstack/configure-network-openstack-openstack-cell1-vmdpv" Feb 19 15:15:36 crc kubenswrapper[4861]: I0219 15:15:36.398903 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-vmdpv" Feb 19 15:15:36 crc kubenswrapper[4861]: I0219 15:15:36.977286 4861 scope.go:117] "RemoveContainer" containerID="2b191d90e4bd0f98b51177d6ba5f981a064ea98e9fc2a0839977be6798eb1452" Feb 19 15:15:36 crc kubenswrapper[4861]: E0219 15:15:36.977997 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:15:37 crc kubenswrapper[4861]: I0219 15:15:37.008649 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 15:15:37 crc kubenswrapper[4861]: I0219 15:15:37.010995 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-vmdpv"] Feb 19 15:15:37 crc kubenswrapper[4861]: I0219 15:15:37.931999 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-vmdpv" event={"ID":"b744679b-533f-4182-8f10-1b0160eda028","Type":"ContainerStarted","Data":"16e55c962870470c7d242c5c845b88e27ac6b79af3e6ac2417f6b22d8ce62fc8"} Feb 19 15:15:38 crc kubenswrapper[4861]: I0219 15:15:38.944596 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-vmdpv" event={"ID":"b744679b-533f-4182-8f10-1b0160eda028","Type":"ContainerStarted","Data":"e2ab7805262ac661af6a45765c3ee7c37db59dc128eadb7707bf8e556e760a1a"} Feb 19 15:15:38 crc kubenswrapper[4861]: I0219 15:15:38.979405 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-openstack-openstack-cell1-vmdpv" 
podStartSLOduration=2.346700956 podStartE2EDuration="2.979381735s" podCreationTimestamp="2026-02-19 15:15:36 +0000 UTC" firstStartedPulling="2026-02-19 15:15:37.008208295 +0000 UTC m=+7551.669311553" lastFinishedPulling="2026-02-19 15:15:37.640889094 +0000 UTC m=+7552.301992332" observedRunningTime="2026-02-19 15:15:38.969100574 +0000 UTC m=+7553.630203812" watchObservedRunningTime="2026-02-19 15:15:38.979381735 +0000 UTC m=+7553.640484973" Feb 19 15:15:48 crc kubenswrapper[4861]: I0219 15:15:48.978036 4861 scope.go:117] "RemoveContainer" containerID="2b191d90e4bd0f98b51177d6ba5f981a064ea98e9fc2a0839977be6798eb1452" Feb 19 15:15:48 crc kubenswrapper[4861]: E0219 15:15:48.980305 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:16:02 crc kubenswrapper[4861]: I0219 15:16:02.978028 4861 scope.go:117] "RemoveContainer" containerID="2b191d90e4bd0f98b51177d6ba5f981a064ea98e9fc2a0839977be6798eb1452" Feb 19 15:16:02 crc kubenswrapper[4861]: E0219 15:16:02.978786 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:16:06 crc kubenswrapper[4861]: I0219 15:16:06.318453 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d2zbf"] Feb 19 15:16:06 crc kubenswrapper[4861]: 
I0219 15:16:06.321460 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d2zbf" Feb 19 15:16:06 crc kubenswrapper[4861]: I0219 15:16:06.329832 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d2zbf"] Feb 19 15:16:06 crc kubenswrapper[4861]: I0219 15:16:06.405502 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec290632-46b5-49f9-b96c-c8453dca96ba-utilities\") pod \"certified-operators-d2zbf\" (UID: \"ec290632-46b5-49f9-b96c-c8453dca96ba\") " pod="openshift-marketplace/certified-operators-d2zbf" Feb 19 15:16:06 crc kubenswrapper[4861]: I0219 15:16:06.405578 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nn8c\" (UniqueName: \"kubernetes.io/projected/ec290632-46b5-49f9-b96c-c8453dca96ba-kube-api-access-6nn8c\") pod \"certified-operators-d2zbf\" (UID: \"ec290632-46b5-49f9-b96c-c8453dca96ba\") " pod="openshift-marketplace/certified-operators-d2zbf" Feb 19 15:16:06 crc kubenswrapper[4861]: I0219 15:16:06.406020 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec290632-46b5-49f9-b96c-c8453dca96ba-catalog-content\") pod \"certified-operators-d2zbf\" (UID: \"ec290632-46b5-49f9-b96c-c8453dca96ba\") " pod="openshift-marketplace/certified-operators-d2zbf" Feb 19 15:16:06 crc kubenswrapper[4861]: I0219 15:16:06.508850 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec290632-46b5-49f9-b96c-c8453dca96ba-catalog-content\") pod \"certified-operators-d2zbf\" (UID: \"ec290632-46b5-49f9-b96c-c8453dca96ba\") " pod="openshift-marketplace/certified-operators-d2zbf" Feb 19 15:16:06 crc 
kubenswrapper[4861]: I0219 15:16:06.508912 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec290632-46b5-49f9-b96c-c8453dca96ba-utilities\") pod \"certified-operators-d2zbf\" (UID: \"ec290632-46b5-49f9-b96c-c8453dca96ba\") " pod="openshift-marketplace/certified-operators-d2zbf" Feb 19 15:16:06 crc kubenswrapper[4861]: I0219 15:16:06.508944 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nn8c\" (UniqueName: \"kubernetes.io/projected/ec290632-46b5-49f9-b96c-c8453dca96ba-kube-api-access-6nn8c\") pod \"certified-operators-d2zbf\" (UID: \"ec290632-46b5-49f9-b96c-c8453dca96ba\") " pod="openshift-marketplace/certified-operators-d2zbf" Feb 19 15:16:06 crc kubenswrapper[4861]: I0219 15:16:06.509744 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec290632-46b5-49f9-b96c-c8453dca96ba-catalog-content\") pod \"certified-operators-d2zbf\" (UID: \"ec290632-46b5-49f9-b96c-c8453dca96ba\") " pod="openshift-marketplace/certified-operators-d2zbf" Feb 19 15:16:06 crc kubenswrapper[4861]: I0219 15:16:06.509861 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec290632-46b5-49f9-b96c-c8453dca96ba-utilities\") pod \"certified-operators-d2zbf\" (UID: \"ec290632-46b5-49f9-b96c-c8453dca96ba\") " pod="openshift-marketplace/certified-operators-d2zbf" Feb 19 15:16:06 crc kubenswrapper[4861]: I0219 15:16:06.548384 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nn8c\" (UniqueName: \"kubernetes.io/projected/ec290632-46b5-49f9-b96c-c8453dca96ba-kube-api-access-6nn8c\") pod \"certified-operators-d2zbf\" (UID: \"ec290632-46b5-49f9-b96c-c8453dca96ba\") " pod="openshift-marketplace/certified-operators-d2zbf" Feb 19 15:16:06 crc kubenswrapper[4861]: I0219 
15:16:06.652668 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d2zbf" Feb 19 15:16:07 crc kubenswrapper[4861]: I0219 15:16:07.102451 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m6xgn"] Feb 19 15:16:07 crc kubenswrapper[4861]: I0219 15:16:07.104855 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m6xgn" Feb 19 15:16:07 crc kubenswrapper[4861]: I0219 15:16:07.113137 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m6xgn"] Feb 19 15:16:07 crc kubenswrapper[4861]: I0219 15:16:07.127280 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af30563-8876-45d0-874d-3f63150e66b2-utilities\") pod \"community-operators-m6xgn\" (UID: \"3af30563-8876-45d0-874d-3f63150e66b2\") " pod="openshift-marketplace/community-operators-m6xgn" Feb 19 15:16:07 crc kubenswrapper[4861]: I0219 15:16:07.127353 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3af30563-8876-45d0-874d-3f63150e66b2-catalog-content\") pod \"community-operators-m6xgn\" (UID: \"3af30563-8876-45d0-874d-3f63150e66b2\") " pod="openshift-marketplace/community-operators-m6xgn" Feb 19 15:16:07 crc kubenswrapper[4861]: I0219 15:16:07.127491 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfs6x\" (UniqueName: \"kubernetes.io/projected/3af30563-8876-45d0-874d-3f63150e66b2-kube-api-access-dfs6x\") pod \"community-operators-m6xgn\" (UID: \"3af30563-8876-45d0-874d-3f63150e66b2\") " pod="openshift-marketplace/community-operators-m6xgn" Feb 19 15:16:07 crc kubenswrapper[4861]: I0219 15:16:07.195511 4861 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d2zbf"] Feb 19 15:16:07 crc kubenswrapper[4861]: I0219 15:16:07.229207 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfs6x\" (UniqueName: \"kubernetes.io/projected/3af30563-8876-45d0-874d-3f63150e66b2-kube-api-access-dfs6x\") pod \"community-operators-m6xgn\" (UID: \"3af30563-8876-45d0-874d-3f63150e66b2\") " pod="openshift-marketplace/community-operators-m6xgn" Feb 19 15:16:07 crc kubenswrapper[4861]: I0219 15:16:07.229311 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af30563-8876-45d0-874d-3f63150e66b2-utilities\") pod \"community-operators-m6xgn\" (UID: \"3af30563-8876-45d0-874d-3f63150e66b2\") " pod="openshift-marketplace/community-operators-m6xgn" Feb 19 15:16:07 crc kubenswrapper[4861]: I0219 15:16:07.229361 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3af30563-8876-45d0-874d-3f63150e66b2-catalog-content\") pod \"community-operators-m6xgn\" (UID: \"3af30563-8876-45d0-874d-3f63150e66b2\") " pod="openshift-marketplace/community-operators-m6xgn" Feb 19 15:16:07 crc kubenswrapper[4861]: I0219 15:16:07.233616 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3af30563-8876-45d0-874d-3f63150e66b2-catalog-content\") pod \"community-operators-m6xgn\" (UID: \"3af30563-8876-45d0-874d-3f63150e66b2\") " pod="openshift-marketplace/community-operators-m6xgn" Feb 19 15:16:07 crc kubenswrapper[4861]: I0219 15:16:07.233705 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af30563-8876-45d0-874d-3f63150e66b2-utilities\") pod \"community-operators-m6xgn\" (UID: 
\"3af30563-8876-45d0-874d-3f63150e66b2\") " pod="openshift-marketplace/community-operators-m6xgn" Feb 19 15:16:07 crc kubenswrapper[4861]: I0219 15:16:07.249627 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfs6x\" (UniqueName: \"kubernetes.io/projected/3af30563-8876-45d0-874d-3f63150e66b2-kube-api-access-dfs6x\") pod \"community-operators-m6xgn\" (UID: \"3af30563-8876-45d0-874d-3f63150e66b2\") " pod="openshift-marketplace/community-operators-m6xgn" Feb 19 15:16:07 crc kubenswrapper[4861]: I0219 15:16:07.310621 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2zbf" event={"ID":"ec290632-46b5-49f9-b96c-c8453dca96ba","Type":"ContainerStarted","Data":"610b84c1acfd1f2c19f34c1f98fa383452af035f33b4299344bb694cc742f61f"} Feb 19 15:16:07 crc kubenswrapper[4861]: I0219 15:16:07.428019 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m6xgn" Feb 19 15:16:07 crc kubenswrapper[4861]: I0219 15:16:07.975844 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m6xgn"] Feb 19 15:16:07 crc kubenswrapper[4861]: W0219 15:16:07.981011 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3af30563_8876_45d0_874d_3f63150e66b2.slice/crio-b3f7d8701aaf040937ea103c92f244f31ad24c8a551e79982e185a88f45bcf90 WatchSource:0}: Error finding container b3f7d8701aaf040937ea103c92f244f31ad24c8a551e79982e185a88f45bcf90: Status 404 returned error can't find the container with id b3f7d8701aaf040937ea103c92f244f31ad24c8a551e79982e185a88f45bcf90 Feb 19 15:16:08 crc kubenswrapper[4861]: I0219 15:16:08.320553 4861 generic.go:334] "Generic (PLEG): container finished" podID="ec290632-46b5-49f9-b96c-c8453dca96ba" containerID="8de09a756ddd401f186456bfb0ae80d6d45792ff4a580961eed106bfe1521d5c" exitCode=0 Feb 19 15:16:08 
crc kubenswrapper[4861]: I0219 15:16:08.320623 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2zbf" event={"ID":"ec290632-46b5-49f9-b96c-c8453dca96ba","Type":"ContainerDied","Data":"8de09a756ddd401f186456bfb0ae80d6d45792ff4a580961eed106bfe1521d5c"} Feb 19 15:16:08 crc kubenswrapper[4861]: I0219 15:16:08.322798 4861 generic.go:334] "Generic (PLEG): container finished" podID="3af30563-8876-45d0-874d-3f63150e66b2" containerID="42768dbc1838a0ad7142a86f940f72ada312a2a2a7511b409677385eb480e073" exitCode=0 Feb 19 15:16:08 crc kubenswrapper[4861]: I0219 15:16:08.322837 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m6xgn" event={"ID":"3af30563-8876-45d0-874d-3f63150e66b2","Type":"ContainerDied","Data":"42768dbc1838a0ad7142a86f940f72ada312a2a2a7511b409677385eb480e073"} Feb 19 15:16:08 crc kubenswrapper[4861]: I0219 15:16:08.322876 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m6xgn" event={"ID":"3af30563-8876-45d0-874d-3f63150e66b2","Type":"ContainerStarted","Data":"b3f7d8701aaf040937ea103c92f244f31ad24c8a551e79982e185a88f45bcf90"} Feb 19 15:16:09 crc kubenswrapper[4861]: I0219 15:16:09.334380 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2zbf" event={"ID":"ec290632-46b5-49f9-b96c-c8453dca96ba","Type":"ContainerStarted","Data":"ff6c2327304cc8534a533808751cf27f2b0b91cbcb730f4dafb6afa1d9296b9c"} Feb 19 15:16:09 crc kubenswrapper[4861]: I0219 15:16:09.904231 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-29gz9"] Feb 19 15:16:09 crc kubenswrapper[4861]: I0219 15:16:09.906675 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-29gz9" Feb 19 15:16:09 crc kubenswrapper[4861]: I0219 15:16:09.919276 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-29gz9"] Feb 19 15:16:09 crc kubenswrapper[4861]: I0219 15:16:09.994484 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69eac4ab-83d0-44cb-8748-fd7cf5d00e64-utilities\") pod \"redhat-operators-29gz9\" (UID: \"69eac4ab-83d0-44cb-8748-fd7cf5d00e64\") " pod="openshift-marketplace/redhat-operators-29gz9" Feb 19 15:16:09 crc kubenswrapper[4861]: I0219 15:16:09.994937 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb4hh\" (UniqueName: \"kubernetes.io/projected/69eac4ab-83d0-44cb-8748-fd7cf5d00e64-kube-api-access-qb4hh\") pod \"redhat-operators-29gz9\" (UID: \"69eac4ab-83d0-44cb-8748-fd7cf5d00e64\") " pod="openshift-marketplace/redhat-operators-29gz9" Feb 19 15:16:09 crc kubenswrapper[4861]: I0219 15:16:09.995995 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69eac4ab-83d0-44cb-8748-fd7cf5d00e64-catalog-content\") pod \"redhat-operators-29gz9\" (UID: \"69eac4ab-83d0-44cb-8748-fd7cf5d00e64\") " pod="openshift-marketplace/redhat-operators-29gz9" Feb 19 15:16:10 crc kubenswrapper[4861]: I0219 15:16:10.098310 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb4hh\" (UniqueName: \"kubernetes.io/projected/69eac4ab-83d0-44cb-8748-fd7cf5d00e64-kube-api-access-qb4hh\") pod \"redhat-operators-29gz9\" (UID: \"69eac4ab-83d0-44cb-8748-fd7cf5d00e64\") " pod="openshift-marketplace/redhat-operators-29gz9" Feb 19 15:16:10 crc kubenswrapper[4861]: I0219 15:16:10.098631 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69eac4ab-83d0-44cb-8748-fd7cf5d00e64-catalog-content\") pod \"redhat-operators-29gz9\" (UID: \"69eac4ab-83d0-44cb-8748-fd7cf5d00e64\") " pod="openshift-marketplace/redhat-operators-29gz9" Feb 19 15:16:10 crc kubenswrapper[4861]: I0219 15:16:10.098787 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69eac4ab-83d0-44cb-8748-fd7cf5d00e64-utilities\") pod \"redhat-operators-29gz9\" (UID: \"69eac4ab-83d0-44cb-8748-fd7cf5d00e64\") " pod="openshift-marketplace/redhat-operators-29gz9" Feb 19 15:16:10 crc kubenswrapper[4861]: I0219 15:16:10.102579 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69eac4ab-83d0-44cb-8748-fd7cf5d00e64-catalog-content\") pod \"redhat-operators-29gz9\" (UID: \"69eac4ab-83d0-44cb-8748-fd7cf5d00e64\") " pod="openshift-marketplace/redhat-operators-29gz9" Feb 19 15:16:10 crc kubenswrapper[4861]: I0219 15:16:10.102642 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69eac4ab-83d0-44cb-8748-fd7cf5d00e64-utilities\") pod \"redhat-operators-29gz9\" (UID: \"69eac4ab-83d0-44cb-8748-fd7cf5d00e64\") " pod="openshift-marketplace/redhat-operators-29gz9" Feb 19 15:16:10 crc kubenswrapper[4861]: I0219 15:16:10.126296 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb4hh\" (UniqueName: \"kubernetes.io/projected/69eac4ab-83d0-44cb-8748-fd7cf5d00e64-kube-api-access-qb4hh\") pod \"redhat-operators-29gz9\" (UID: \"69eac4ab-83d0-44cb-8748-fd7cf5d00e64\") " pod="openshift-marketplace/redhat-operators-29gz9" Feb 19 15:16:10 crc kubenswrapper[4861]: I0219 15:16:10.268168 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-29gz9" Feb 19 15:16:10 crc kubenswrapper[4861]: I0219 15:16:10.358343 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m6xgn" event={"ID":"3af30563-8876-45d0-874d-3f63150e66b2","Type":"ContainerStarted","Data":"c17f5dec08321e7b36b9733ef5cbfc2cc156a93450dc2eb62a114e2c6fff3bd6"} Feb 19 15:16:10 crc kubenswrapper[4861]: I0219 15:16:10.771318 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-29gz9"] Feb 19 15:16:10 crc kubenswrapper[4861]: W0219 15:16:10.776657 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69eac4ab_83d0_44cb_8748_fd7cf5d00e64.slice/crio-85fe9fc42904989f562b9e943d9450871b9f7ce19f49144a6de508615e8a2e5c WatchSource:0}: Error finding container 85fe9fc42904989f562b9e943d9450871b9f7ce19f49144a6de508615e8a2e5c: Status 404 returned error can't find the container with id 85fe9fc42904989f562b9e943d9450871b9f7ce19f49144a6de508615e8a2e5c Feb 19 15:16:11 crc kubenswrapper[4861]: I0219 15:16:11.369023 4861 generic.go:334] "Generic (PLEG): container finished" podID="69eac4ab-83d0-44cb-8748-fd7cf5d00e64" containerID="e1d6be331f835b6cfac2d264d85d69bec2682936bef45de353987839e4224448" exitCode=0 Feb 19 15:16:11 crc kubenswrapper[4861]: I0219 15:16:11.369096 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-29gz9" event={"ID":"69eac4ab-83d0-44cb-8748-fd7cf5d00e64","Type":"ContainerDied","Data":"e1d6be331f835b6cfac2d264d85d69bec2682936bef45de353987839e4224448"} Feb 19 15:16:11 crc kubenswrapper[4861]: I0219 15:16:11.370251 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-29gz9" 
event={"ID":"69eac4ab-83d0-44cb-8748-fd7cf5d00e64","Type":"ContainerStarted","Data":"85fe9fc42904989f562b9e943d9450871b9f7ce19f49144a6de508615e8a2e5c"} Feb 19 15:16:11 crc kubenswrapper[4861]: E0219 15:16:11.962462 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3af30563_8876_45d0_874d_3f63150e66b2.slice/crio-conmon-c17f5dec08321e7b36b9733ef5cbfc2cc156a93450dc2eb62a114e2c6fff3bd6.scope\": RecentStats: unable to find data in memory cache]" Feb 19 15:16:12 crc kubenswrapper[4861]: I0219 15:16:12.384275 4861 generic.go:334] "Generic (PLEG): container finished" podID="3af30563-8876-45d0-874d-3f63150e66b2" containerID="c17f5dec08321e7b36b9733ef5cbfc2cc156a93450dc2eb62a114e2c6fff3bd6" exitCode=0 Feb 19 15:16:12 crc kubenswrapper[4861]: I0219 15:16:12.384680 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m6xgn" event={"ID":"3af30563-8876-45d0-874d-3f63150e66b2","Type":"ContainerDied","Data":"c17f5dec08321e7b36b9733ef5cbfc2cc156a93450dc2eb62a114e2c6fff3bd6"} Feb 19 15:16:12 crc kubenswrapper[4861]: I0219 15:16:12.393267 4861 generic.go:334] "Generic (PLEG): container finished" podID="ec290632-46b5-49f9-b96c-c8453dca96ba" containerID="ff6c2327304cc8534a533808751cf27f2b0b91cbcb730f4dafb6afa1d9296b9c" exitCode=0 Feb 19 15:16:12 crc kubenswrapper[4861]: I0219 15:16:12.393302 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2zbf" event={"ID":"ec290632-46b5-49f9-b96c-c8453dca96ba","Type":"ContainerDied","Data":"ff6c2327304cc8534a533808751cf27f2b0b91cbcb730f4dafb6afa1d9296b9c"} Feb 19 15:16:13 crc kubenswrapper[4861]: I0219 15:16:13.404570 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2zbf" 
event={"ID":"ec290632-46b5-49f9-b96c-c8453dca96ba","Type":"ContainerStarted","Data":"a0783b53c75fcc0a90592e1c104da06feaf7daa3ebcd50ca322facefeedee32b"} Feb 19 15:16:13 crc kubenswrapper[4861]: I0219 15:16:13.408820 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m6xgn" event={"ID":"3af30563-8876-45d0-874d-3f63150e66b2","Type":"ContainerStarted","Data":"c78d61d05e955b706c81ed202c1f881955647e4fdb1250fcabd45be42aaad240"} Feb 19 15:16:13 crc kubenswrapper[4861]: I0219 15:16:13.411333 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-29gz9" event={"ID":"69eac4ab-83d0-44cb-8748-fd7cf5d00e64","Type":"ContainerStarted","Data":"02ec812064cb3c12237405b1369286fa1b1851768375e04cc2f121c5652f6f21"} Feb 19 15:16:13 crc kubenswrapper[4861]: I0219 15:16:13.441998 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d2zbf" podStartSLOduration=2.7817485619999998 podStartE2EDuration="7.441976454s" podCreationTimestamp="2026-02-19 15:16:06 +0000 UTC" firstStartedPulling="2026-02-19 15:16:08.323008593 +0000 UTC m=+7582.984111811" lastFinishedPulling="2026-02-19 15:16:12.983236475 +0000 UTC m=+7587.644339703" observedRunningTime="2026-02-19 15:16:13.437975618 +0000 UTC m=+7588.099078846" watchObservedRunningTime="2026-02-19 15:16:13.441976454 +0000 UTC m=+7588.103079692" Feb 19 15:16:13 crc kubenswrapper[4861]: I0219 15:16:13.492640 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m6xgn" podStartSLOduration=2.023736751 podStartE2EDuration="6.492585862s" podCreationTimestamp="2026-02-19 15:16:07 +0000 UTC" firstStartedPulling="2026-02-19 15:16:08.323930108 +0000 UTC m=+7582.985033336" lastFinishedPulling="2026-02-19 15:16:12.792779179 +0000 UTC m=+7587.453882447" observedRunningTime="2026-02-19 15:16:13.475792368 +0000 UTC m=+7588.136895656" 
watchObservedRunningTime="2026-02-19 15:16:13.492585862 +0000 UTC m=+7588.153689100" Feb 19 15:16:13 crc kubenswrapper[4861]: I0219 15:16:13.977880 4861 scope.go:117] "RemoveContainer" containerID="2b191d90e4bd0f98b51177d6ba5f981a064ea98e9fc2a0839977be6798eb1452" Feb 19 15:16:13 crc kubenswrapper[4861]: E0219 15:16:13.978878 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:16:16 crc kubenswrapper[4861]: I0219 15:16:16.653852 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d2zbf" Feb 19 15:16:16 crc kubenswrapper[4861]: I0219 15:16:16.654253 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d2zbf" Feb 19 15:16:16 crc kubenswrapper[4861]: I0219 15:16:16.731793 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d2zbf" Feb 19 15:16:17 crc kubenswrapper[4861]: I0219 15:16:17.428669 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m6xgn" Feb 19 15:16:17 crc kubenswrapper[4861]: I0219 15:16:17.428716 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m6xgn" Feb 19 15:16:17 crc kubenswrapper[4861]: I0219 15:16:17.464938 4861 generic.go:334] "Generic (PLEG): container finished" podID="69eac4ab-83d0-44cb-8748-fd7cf5d00e64" containerID="02ec812064cb3c12237405b1369286fa1b1851768375e04cc2f121c5652f6f21" exitCode=0 Feb 19 15:16:17 crc 
kubenswrapper[4861]: I0219 15:16:17.465258 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-29gz9" event={"ID":"69eac4ab-83d0-44cb-8748-fd7cf5d00e64","Type":"ContainerDied","Data":"02ec812064cb3c12237405b1369286fa1b1851768375e04cc2f121c5652f6f21"} Feb 19 15:16:18 crc kubenswrapper[4861]: I0219 15:16:18.476394 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-29gz9" event={"ID":"69eac4ab-83d0-44cb-8748-fd7cf5d00e64","Type":"ContainerStarted","Data":"644c410b5d3653e562b159eb544b07240fd0c39879660d48bdb13cd996113469"} Feb 19 15:16:18 crc kubenswrapper[4861]: I0219 15:16:18.483931 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-m6xgn" podUID="3af30563-8876-45d0-874d-3f63150e66b2" containerName="registry-server" probeResult="failure" output=< Feb 19 15:16:18 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Feb 19 15:16:18 crc kubenswrapper[4861]: > Feb 19 15:16:18 crc kubenswrapper[4861]: I0219 15:16:18.505819 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-29gz9" podStartSLOduration=2.972105868 podStartE2EDuration="9.505787745s" podCreationTimestamp="2026-02-19 15:16:09 +0000 UTC" firstStartedPulling="2026-02-19 15:16:11.371076037 +0000 UTC m=+7586.032179275" lastFinishedPulling="2026-02-19 15:16:17.904757894 +0000 UTC m=+7592.565861152" observedRunningTime="2026-02-19 15:16:18.502863988 +0000 UTC m=+7593.163967246" watchObservedRunningTime="2026-02-19 15:16:18.505787745 +0000 UTC m=+7593.166890993" Feb 19 15:16:20 crc kubenswrapper[4861]: I0219 15:16:20.268717 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-29gz9" Feb 19 15:16:20 crc kubenswrapper[4861]: I0219 15:16:20.269905 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-29gz9" Feb 19 15:16:21 crc kubenswrapper[4861]: I0219 15:16:21.331634 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-29gz9" podUID="69eac4ab-83d0-44cb-8748-fd7cf5d00e64" containerName="registry-server" probeResult="failure" output=< Feb 19 15:16:21 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Feb 19 15:16:21 crc kubenswrapper[4861]: > Feb 19 15:16:26 crc kubenswrapper[4861]: I0219 15:16:26.708238 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d2zbf" Feb 19 15:16:26 crc kubenswrapper[4861]: I0219 15:16:26.768552 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d2zbf"] Feb 19 15:16:27 crc kubenswrapper[4861]: I0219 15:16:27.497799 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m6xgn" Feb 19 15:16:27 crc kubenswrapper[4861]: I0219 15:16:27.568516 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d2zbf" podUID="ec290632-46b5-49f9-b96c-c8453dca96ba" containerName="registry-server" containerID="cri-o://a0783b53c75fcc0a90592e1c104da06feaf7daa3ebcd50ca322facefeedee32b" gracePeriod=2 Feb 19 15:16:27 crc kubenswrapper[4861]: I0219 15:16:27.589851 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m6xgn" Feb 19 15:16:27 crc kubenswrapper[4861]: I0219 15:16:27.978538 4861 scope.go:117] "RemoveContainer" containerID="2b191d90e4bd0f98b51177d6ba5f981a064ea98e9fc2a0839977be6798eb1452" Feb 19 15:16:27 crc kubenswrapper[4861]: E0219 15:16:27.979241 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:16:28 crc kubenswrapper[4861]: I0219 15:16:28.049494 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d2zbf" Feb 19 15:16:28 crc kubenswrapper[4861]: I0219 15:16:28.145907 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec290632-46b5-49f9-b96c-c8453dca96ba-catalog-content\") pod \"ec290632-46b5-49f9-b96c-c8453dca96ba\" (UID: \"ec290632-46b5-49f9-b96c-c8453dca96ba\") " Feb 19 15:16:28 crc kubenswrapper[4861]: I0219 15:16:28.145992 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec290632-46b5-49f9-b96c-c8453dca96ba-utilities\") pod \"ec290632-46b5-49f9-b96c-c8453dca96ba\" (UID: \"ec290632-46b5-49f9-b96c-c8453dca96ba\") " Feb 19 15:16:28 crc kubenswrapper[4861]: I0219 15:16:28.146070 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nn8c\" (UniqueName: \"kubernetes.io/projected/ec290632-46b5-49f9-b96c-c8453dca96ba-kube-api-access-6nn8c\") pod \"ec290632-46b5-49f9-b96c-c8453dca96ba\" (UID: \"ec290632-46b5-49f9-b96c-c8453dca96ba\") " Feb 19 15:16:28 crc kubenswrapper[4861]: I0219 15:16:28.147478 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec290632-46b5-49f9-b96c-c8453dca96ba-utilities" (OuterVolumeSpecName: "utilities") pod "ec290632-46b5-49f9-b96c-c8453dca96ba" (UID: "ec290632-46b5-49f9-b96c-c8453dca96ba"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:16:28 crc kubenswrapper[4861]: I0219 15:16:28.153292 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec290632-46b5-49f9-b96c-c8453dca96ba-kube-api-access-6nn8c" (OuterVolumeSpecName: "kube-api-access-6nn8c") pod "ec290632-46b5-49f9-b96c-c8453dca96ba" (UID: "ec290632-46b5-49f9-b96c-c8453dca96ba"). InnerVolumeSpecName "kube-api-access-6nn8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:16:28 crc kubenswrapper[4861]: I0219 15:16:28.194265 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec290632-46b5-49f9-b96c-c8453dca96ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec290632-46b5-49f9-b96c-c8453dca96ba" (UID: "ec290632-46b5-49f9-b96c-c8453dca96ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:16:28 crc kubenswrapper[4861]: I0219 15:16:28.248985 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec290632-46b5-49f9-b96c-c8453dca96ba-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:16:28 crc kubenswrapper[4861]: I0219 15:16:28.249071 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec290632-46b5-49f9-b96c-c8453dca96ba-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:16:28 crc kubenswrapper[4861]: I0219 15:16:28.249099 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nn8c\" (UniqueName: \"kubernetes.io/projected/ec290632-46b5-49f9-b96c-c8453dca96ba-kube-api-access-6nn8c\") on node \"crc\" DevicePath \"\"" Feb 19 15:16:28 crc kubenswrapper[4861]: I0219 15:16:28.558535 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m6xgn"] Feb 19 15:16:28 crc kubenswrapper[4861]: I0219 
15:16:28.584902 4861 generic.go:334] "Generic (PLEG): container finished" podID="ec290632-46b5-49f9-b96c-c8453dca96ba" containerID="a0783b53c75fcc0a90592e1c104da06feaf7daa3ebcd50ca322facefeedee32b" exitCode=0 Feb 19 15:16:28 crc kubenswrapper[4861]: I0219 15:16:28.584987 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d2zbf" Feb 19 15:16:28 crc kubenswrapper[4861]: I0219 15:16:28.584995 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2zbf" event={"ID":"ec290632-46b5-49f9-b96c-c8453dca96ba","Type":"ContainerDied","Data":"a0783b53c75fcc0a90592e1c104da06feaf7daa3ebcd50ca322facefeedee32b"} Feb 19 15:16:28 crc kubenswrapper[4861]: I0219 15:16:28.585055 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2zbf" event={"ID":"ec290632-46b5-49f9-b96c-c8453dca96ba","Type":"ContainerDied","Data":"610b84c1acfd1f2c19f34c1f98fa383452af035f33b4299344bb694cc742f61f"} Feb 19 15:16:28 crc kubenswrapper[4861]: I0219 15:16:28.585078 4861 scope.go:117] "RemoveContainer" containerID="a0783b53c75fcc0a90592e1c104da06feaf7daa3ebcd50ca322facefeedee32b" Feb 19 15:16:28 crc kubenswrapper[4861]: I0219 15:16:28.585844 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m6xgn" podUID="3af30563-8876-45d0-874d-3f63150e66b2" containerName="registry-server" containerID="cri-o://c78d61d05e955b706c81ed202c1f881955647e4fdb1250fcabd45be42aaad240" gracePeriod=2 Feb 19 15:16:28 crc kubenswrapper[4861]: I0219 15:16:28.631916 4861 scope.go:117] "RemoveContainer" containerID="ff6c2327304cc8534a533808751cf27f2b0b91cbcb730f4dafb6afa1d9296b9c" Feb 19 15:16:28 crc kubenswrapper[4861]: I0219 15:16:28.635288 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d2zbf"] Feb 19 15:16:28 crc kubenswrapper[4861]: I0219 
15:16:28.645606 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d2zbf"] Feb 19 15:16:28 crc kubenswrapper[4861]: I0219 15:16:28.661209 4861 scope.go:117] "RemoveContainer" containerID="8de09a756ddd401f186456bfb0ae80d6d45792ff4a580961eed106bfe1521d5c" Feb 19 15:16:28 crc kubenswrapper[4861]: I0219 15:16:28.839199 4861 scope.go:117] "RemoveContainer" containerID="a0783b53c75fcc0a90592e1c104da06feaf7daa3ebcd50ca322facefeedee32b" Feb 19 15:16:28 crc kubenswrapper[4861]: E0219 15:16:28.839788 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0783b53c75fcc0a90592e1c104da06feaf7daa3ebcd50ca322facefeedee32b\": container with ID starting with a0783b53c75fcc0a90592e1c104da06feaf7daa3ebcd50ca322facefeedee32b not found: ID does not exist" containerID="a0783b53c75fcc0a90592e1c104da06feaf7daa3ebcd50ca322facefeedee32b" Feb 19 15:16:28 crc kubenswrapper[4861]: I0219 15:16:28.839835 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0783b53c75fcc0a90592e1c104da06feaf7daa3ebcd50ca322facefeedee32b"} err="failed to get container status \"a0783b53c75fcc0a90592e1c104da06feaf7daa3ebcd50ca322facefeedee32b\": rpc error: code = NotFound desc = could not find container \"a0783b53c75fcc0a90592e1c104da06feaf7daa3ebcd50ca322facefeedee32b\": container with ID starting with a0783b53c75fcc0a90592e1c104da06feaf7daa3ebcd50ca322facefeedee32b not found: ID does not exist" Feb 19 15:16:28 crc kubenswrapper[4861]: I0219 15:16:28.839865 4861 scope.go:117] "RemoveContainer" containerID="ff6c2327304cc8534a533808751cf27f2b0b91cbcb730f4dafb6afa1d9296b9c" Feb 19 15:16:28 crc kubenswrapper[4861]: E0219 15:16:28.840730 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff6c2327304cc8534a533808751cf27f2b0b91cbcb730f4dafb6afa1d9296b9c\": container with ID 
starting with ff6c2327304cc8534a533808751cf27f2b0b91cbcb730f4dafb6afa1d9296b9c not found: ID does not exist" containerID="ff6c2327304cc8534a533808751cf27f2b0b91cbcb730f4dafb6afa1d9296b9c" Feb 19 15:16:28 crc kubenswrapper[4861]: I0219 15:16:28.840763 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff6c2327304cc8534a533808751cf27f2b0b91cbcb730f4dafb6afa1d9296b9c"} err="failed to get container status \"ff6c2327304cc8534a533808751cf27f2b0b91cbcb730f4dafb6afa1d9296b9c\": rpc error: code = NotFound desc = could not find container \"ff6c2327304cc8534a533808751cf27f2b0b91cbcb730f4dafb6afa1d9296b9c\": container with ID starting with ff6c2327304cc8534a533808751cf27f2b0b91cbcb730f4dafb6afa1d9296b9c not found: ID does not exist" Feb 19 15:16:28 crc kubenswrapper[4861]: I0219 15:16:28.840783 4861 scope.go:117] "RemoveContainer" containerID="8de09a756ddd401f186456bfb0ae80d6d45792ff4a580961eed106bfe1521d5c" Feb 19 15:16:28 crc kubenswrapper[4861]: E0219 15:16:28.842795 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8de09a756ddd401f186456bfb0ae80d6d45792ff4a580961eed106bfe1521d5c\": container with ID starting with 8de09a756ddd401f186456bfb0ae80d6d45792ff4a580961eed106bfe1521d5c not found: ID does not exist" containerID="8de09a756ddd401f186456bfb0ae80d6d45792ff4a580961eed106bfe1521d5c" Feb 19 15:16:28 crc kubenswrapper[4861]: I0219 15:16:28.842823 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8de09a756ddd401f186456bfb0ae80d6d45792ff4a580961eed106bfe1521d5c"} err="failed to get container status \"8de09a756ddd401f186456bfb0ae80d6d45792ff4a580961eed106bfe1521d5c\": rpc error: code = NotFound desc = could not find container \"8de09a756ddd401f186456bfb0ae80d6d45792ff4a580961eed106bfe1521d5c\": container with ID starting with 8de09a756ddd401f186456bfb0ae80d6d45792ff4a580961eed106bfe1521d5c not found: 
ID does not exist" Feb 19 15:16:29 crc kubenswrapper[4861]: I0219 15:16:29.228967 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m6xgn" Feb 19 15:16:29 crc kubenswrapper[4861]: I0219 15:16:29.299091 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af30563-8876-45d0-874d-3f63150e66b2-utilities\") pod \"3af30563-8876-45d0-874d-3f63150e66b2\" (UID: \"3af30563-8876-45d0-874d-3f63150e66b2\") " Feb 19 15:16:29 crc kubenswrapper[4861]: I0219 15:16:29.299295 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfs6x\" (UniqueName: \"kubernetes.io/projected/3af30563-8876-45d0-874d-3f63150e66b2-kube-api-access-dfs6x\") pod \"3af30563-8876-45d0-874d-3f63150e66b2\" (UID: \"3af30563-8876-45d0-874d-3f63150e66b2\") " Feb 19 15:16:29 crc kubenswrapper[4861]: I0219 15:16:29.299316 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3af30563-8876-45d0-874d-3f63150e66b2-catalog-content\") pod \"3af30563-8876-45d0-874d-3f63150e66b2\" (UID: \"3af30563-8876-45d0-874d-3f63150e66b2\") " Feb 19 15:16:29 crc kubenswrapper[4861]: I0219 15:16:29.304965 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3af30563-8876-45d0-874d-3f63150e66b2-utilities" (OuterVolumeSpecName: "utilities") pod "3af30563-8876-45d0-874d-3f63150e66b2" (UID: "3af30563-8876-45d0-874d-3f63150e66b2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:16:29 crc kubenswrapper[4861]: I0219 15:16:29.309866 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3af30563-8876-45d0-874d-3f63150e66b2-kube-api-access-dfs6x" (OuterVolumeSpecName: "kube-api-access-dfs6x") pod "3af30563-8876-45d0-874d-3f63150e66b2" (UID: "3af30563-8876-45d0-874d-3f63150e66b2"). InnerVolumeSpecName "kube-api-access-dfs6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:16:29 crc kubenswrapper[4861]: I0219 15:16:29.343723 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3af30563-8876-45d0-874d-3f63150e66b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3af30563-8876-45d0-874d-3f63150e66b2" (UID: "3af30563-8876-45d0-874d-3f63150e66b2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:16:29 crc kubenswrapper[4861]: I0219 15:16:29.401333 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfs6x\" (UniqueName: \"kubernetes.io/projected/3af30563-8876-45d0-874d-3f63150e66b2-kube-api-access-dfs6x\") on node \"crc\" DevicePath \"\"" Feb 19 15:16:29 crc kubenswrapper[4861]: I0219 15:16:29.401372 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3af30563-8876-45d0-874d-3f63150e66b2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:16:29 crc kubenswrapper[4861]: I0219 15:16:29.401386 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af30563-8876-45d0-874d-3f63150e66b2-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:16:29 crc kubenswrapper[4861]: I0219 15:16:29.603548 4861 generic.go:334] "Generic (PLEG): container finished" podID="3af30563-8876-45d0-874d-3f63150e66b2" 
containerID="c78d61d05e955b706c81ed202c1f881955647e4fdb1250fcabd45be42aaad240" exitCode=0 Feb 19 15:16:29 crc kubenswrapper[4861]: I0219 15:16:29.603605 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m6xgn" event={"ID":"3af30563-8876-45d0-874d-3f63150e66b2","Type":"ContainerDied","Data":"c78d61d05e955b706c81ed202c1f881955647e4fdb1250fcabd45be42aaad240"} Feb 19 15:16:29 crc kubenswrapper[4861]: I0219 15:16:29.603669 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m6xgn" event={"ID":"3af30563-8876-45d0-874d-3f63150e66b2","Type":"ContainerDied","Data":"b3f7d8701aaf040937ea103c92f244f31ad24c8a551e79982e185a88f45bcf90"} Feb 19 15:16:29 crc kubenswrapper[4861]: I0219 15:16:29.603685 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m6xgn" Feb 19 15:16:29 crc kubenswrapper[4861]: I0219 15:16:29.603695 4861 scope.go:117] "RemoveContainer" containerID="c78d61d05e955b706c81ed202c1f881955647e4fdb1250fcabd45be42aaad240" Feb 19 15:16:29 crc kubenswrapper[4861]: I0219 15:16:29.651195 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m6xgn"] Feb 19 15:16:29 crc kubenswrapper[4861]: I0219 15:16:29.652852 4861 scope.go:117] "RemoveContainer" containerID="c17f5dec08321e7b36b9733ef5cbfc2cc156a93450dc2eb62a114e2c6fff3bd6" Feb 19 15:16:29 crc kubenswrapper[4861]: I0219 15:16:29.675759 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m6xgn"] Feb 19 15:16:29 crc kubenswrapper[4861]: I0219 15:16:29.699687 4861 scope.go:117] "RemoveContainer" containerID="42768dbc1838a0ad7142a86f940f72ada312a2a2a7511b409677385eb480e073" Feb 19 15:16:29 crc kubenswrapper[4861]: I0219 15:16:29.728879 4861 scope.go:117] "RemoveContainer" containerID="c78d61d05e955b706c81ed202c1f881955647e4fdb1250fcabd45be42aaad240" Feb 19 
15:16:29 crc kubenswrapper[4861]: E0219 15:16:29.729447 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c78d61d05e955b706c81ed202c1f881955647e4fdb1250fcabd45be42aaad240\": container with ID starting with c78d61d05e955b706c81ed202c1f881955647e4fdb1250fcabd45be42aaad240 not found: ID does not exist" containerID="c78d61d05e955b706c81ed202c1f881955647e4fdb1250fcabd45be42aaad240" Feb 19 15:16:29 crc kubenswrapper[4861]: I0219 15:16:29.729489 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c78d61d05e955b706c81ed202c1f881955647e4fdb1250fcabd45be42aaad240"} err="failed to get container status \"c78d61d05e955b706c81ed202c1f881955647e4fdb1250fcabd45be42aaad240\": rpc error: code = NotFound desc = could not find container \"c78d61d05e955b706c81ed202c1f881955647e4fdb1250fcabd45be42aaad240\": container with ID starting with c78d61d05e955b706c81ed202c1f881955647e4fdb1250fcabd45be42aaad240 not found: ID does not exist" Feb 19 15:16:29 crc kubenswrapper[4861]: I0219 15:16:29.729515 4861 scope.go:117] "RemoveContainer" containerID="c17f5dec08321e7b36b9733ef5cbfc2cc156a93450dc2eb62a114e2c6fff3bd6" Feb 19 15:16:29 crc kubenswrapper[4861]: E0219 15:16:29.729902 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c17f5dec08321e7b36b9733ef5cbfc2cc156a93450dc2eb62a114e2c6fff3bd6\": container with ID starting with c17f5dec08321e7b36b9733ef5cbfc2cc156a93450dc2eb62a114e2c6fff3bd6 not found: ID does not exist" containerID="c17f5dec08321e7b36b9733ef5cbfc2cc156a93450dc2eb62a114e2c6fff3bd6" Feb 19 15:16:29 crc kubenswrapper[4861]: I0219 15:16:29.729936 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c17f5dec08321e7b36b9733ef5cbfc2cc156a93450dc2eb62a114e2c6fff3bd6"} err="failed to get container status 
\"c17f5dec08321e7b36b9733ef5cbfc2cc156a93450dc2eb62a114e2c6fff3bd6\": rpc error: code = NotFound desc = could not find container \"c17f5dec08321e7b36b9733ef5cbfc2cc156a93450dc2eb62a114e2c6fff3bd6\": container with ID starting with c17f5dec08321e7b36b9733ef5cbfc2cc156a93450dc2eb62a114e2c6fff3bd6 not found: ID does not exist" Feb 19 15:16:29 crc kubenswrapper[4861]: I0219 15:16:29.729959 4861 scope.go:117] "RemoveContainer" containerID="42768dbc1838a0ad7142a86f940f72ada312a2a2a7511b409677385eb480e073" Feb 19 15:16:29 crc kubenswrapper[4861]: E0219 15:16:29.730246 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42768dbc1838a0ad7142a86f940f72ada312a2a2a7511b409677385eb480e073\": container with ID starting with 42768dbc1838a0ad7142a86f940f72ada312a2a2a7511b409677385eb480e073 not found: ID does not exist" containerID="42768dbc1838a0ad7142a86f940f72ada312a2a2a7511b409677385eb480e073" Feb 19 15:16:29 crc kubenswrapper[4861]: I0219 15:16:29.730279 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42768dbc1838a0ad7142a86f940f72ada312a2a2a7511b409677385eb480e073"} err="failed to get container status \"42768dbc1838a0ad7142a86f940f72ada312a2a2a7511b409677385eb480e073\": rpc error: code = NotFound desc = could not find container \"42768dbc1838a0ad7142a86f940f72ada312a2a2a7511b409677385eb480e073\": container with ID starting with 42768dbc1838a0ad7142a86f940f72ada312a2a2a7511b409677385eb480e073 not found: ID does not exist" Feb 19 15:16:30 crc kubenswrapper[4861]: I0219 15:16:30.001889 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3af30563-8876-45d0-874d-3f63150e66b2" path="/var/lib/kubelet/pods/3af30563-8876-45d0-874d-3f63150e66b2/volumes" Feb 19 15:16:30 crc kubenswrapper[4861]: I0219 15:16:30.004268 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec290632-46b5-49f9-b96c-c8453dca96ba" 
path="/var/lib/kubelet/pods/ec290632-46b5-49f9-b96c-c8453dca96ba/volumes" Feb 19 15:16:31 crc kubenswrapper[4861]: I0219 15:16:31.331675 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-29gz9" podUID="69eac4ab-83d0-44cb-8748-fd7cf5d00e64" containerName="registry-server" probeResult="failure" output=< Feb 19 15:16:31 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Feb 19 15:16:31 crc kubenswrapper[4861]: > Feb 19 15:16:40 crc kubenswrapper[4861]: I0219 15:16:40.333703 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-29gz9" Feb 19 15:16:40 crc kubenswrapper[4861]: I0219 15:16:40.389518 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-29gz9" Feb 19 15:16:40 crc kubenswrapper[4861]: I0219 15:16:40.584411 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-29gz9"] Feb 19 15:16:41 crc kubenswrapper[4861]: I0219 15:16:41.764671 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-29gz9" podUID="69eac4ab-83d0-44cb-8748-fd7cf5d00e64" containerName="registry-server" containerID="cri-o://644c410b5d3653e562b159eb544b07240fd0c39879660d48bdb13cd996113469" gracePeriod=2 Feb 19 15:16:42 crc kubenswrapper[4861]: I0219 15:16:42.313134 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-29gz9" Feb 19 15:16:42 crc kubenswrapper[4861]: I0219 15:16:42.445910 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb4hh\" (UniqueName: \"kubernetes.io/projected/69eac4ab-83d0-44cb-8748-fd7cf5d00e64-kube-api-access-qb4hh\") pod \"69eac4ab-83d0-44cb-8748-fd7cf5d00e64\" (UID: \"69eac4ab-83d0-44cb-8748-fd7cf5d00e64\") " Feb 19 15:16:42 crc kubenswrapper[4861]: I0219 15:16:42.445953 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69eac4ab-83d0-44cb-8748-fd7cf5d00e64-utilities\") pod \"69eac4ab-83d0-44cb-8748-fd7cf5d00e64\" (UID: \"69eac4ab-83d0-44cb-8748-fd7cf5d00e64\") " Feb 19 15:16:42 crc kubenswrapper[4861]: I0219 15:16:42.446113 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69eac4ab-83d0-44cb-8748-fd7cf5d00e64-catalog-content\") pod \"69eac4ab-83d0-44cb-8748-fd7cf5d00e64\" (UID: \"69eac4ab-83d0-44cb-8748-fd7cf5d00e64\") " Feb 19 15:16:42 crc kubenswrapper[4861]: I0219 15:16:42.447317 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69eac4ab-83d0-44cb-8748-fd7cf5d00e64-utilities" (OuterVolumeSpecName: "utilities") pod "69eac4ab-83d0-44cb-8748-fd7cf5d00e64" (UID: "69eac4ab-83d0-44cb-8748-fd7cf5d00e64"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:16:42 crc kubenswrapper[4861]: I0219 15:16:42.450797 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69eac4ab-83d0-44cb-8748-fd7cf5d00e64-kube-api-access-qb4hh" (OuterVolumeSpecName: "kube-api-access-qb4hh") pod "69eac4ab-83d0-44cb-8748-fd7cf5d00e64" (UID: "69eac4ab-83d0-44cb-8748-fd7cf5d00e64"). InnerVolumeSpecName "kube-api-access-qb4hh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:16:42 crc kubenswrapper[4861]: I0219 15:16:42.549246 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb4hh\" (UniqueName: \"kubernetes.io/projected/69eac4ab-83d0-44cb-8748-fd7cf5d00e64-kube-api-access-qb4hh\") on node \"crc\" DevicePath \"\"" Feb 19 15:16:42 crc kubenswrapper[4861]: I0219 15:16:42.549298 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69eac4ab-83d0-44cb-8748-fd7cf5d00e64-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:16:42 crc kubenswrapper[4861]: I0219 15:16:42.572488 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69eac4ab-83d0-44cb-8748-fd7cf5d00e64-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69eac4ab-83d0-44cb-8748-fd7cf5d00e64" (UID: "69eac4ab-83d0-44cb-8748-fd7cf5d00e64"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:16:42 crc kubenswrapper[4861]: I0219 15:16:42.651823 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69eac4ab-83d0-44cb-8748-fd7cf5d00e64-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:16:42 crc kubenswrapper[4861]: I0219 15:16:42.796081 4861 generic.go:334] "Generic (PLEG): container finished" podID="69eac4ab-83d0-44cb-8748-fd7cf5d00e64" containerID="644c410b5d3653e562b159eb544b07240fd0c39879660d48bdb13cd996113469" exitCode=0 Feb 19 15:16:42 crc kubenswrapper[4861]: I0219 15:16:42.796135 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-29gz9" event={"ID":"69eac4ab-83d0-44cb-8748-fd7cf5d00e64","Type":"ContainerDied","Data":"644c410b5d3653e562b159eb544b07240fd0c39879660d48bdb13cd996113469"} Feb 19 15:16:42 crc kubenswrapper[4861]: I0219 15:16:42.796162 4861 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-29gz9" event={"ID":"69eac4ab-83d0-44cb-8748-fd7cf5d00e64","Type":"ContainerDied","Data":"85fe9fc42904989f562b9e943d9450871b9f7ce19f49144a6de508615e8a2e5c"} Feb 19 15:16:42 crc kubenswrapper[4861]: I0219 15:16:42.796167 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-29gz9" Feb 19 15:16:42 crc kubenswrapper[4861]: I0219 15:16:42.796177 4861 scope.go:117] "RemoveContainer" containerID="644c410b5d3653e562b159eb544b07240fd0c39879660d48bdb13cd996113469" Feb 19 15:16:42 crc kubenswrapper[4861]: I0219 15:16:42.830247 4861 scope.go:117] "RemoveContainer" containerID="02ec812064cb3c12237405b1369286fa1b1851768375e04cc2f121c5652f6f21" Feb 19 15:16:42 crc kubenswrapper[4861]: I0219 15:16:42.838937 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-29gz9"] Feb 19 15:16:42 crc kubenswrapper[4861]: I0219 15:16:42.848687 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-29gz9"] Feb 19 15:16:42 crc kubenswrapper[4861]: I0219 15:16:42.867342 4861 scope.go:117] "RemoveContainer" containerID="e1d6be331f835b6cfac2d264d85d69bec2682936bef45de353987839e4224448" Feb 19 15:16:42 crc kubenswrapper[4861]: I0219 15:16:42.915957 4861 scope.go:117] "RemoveContainer" containerID="644c410b5d3653e562b159eb544b07240fd0c39879660d48bdb13cd996113469" Feb 19 15:16:42 crc kubenswrapper[4861]: E0219 15:16:42.916540 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"644c410b5d3653e562b159eb544b07240fd0c39879660d48bdb13cd996113469\": container with ID starting with 644c410b5d3653e562b159eb544b07240fd0c39879660d48bdb13cd996113469 not found: ID does not exist" containerID="644c410b5d3653e562b159eb544b07240fd0c39879660d48bdb13cd996113469" Feb 19 15:16:42 crc kubenswrapper[4861]: I0219 15:16:42.916570 4861 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"644c410b5d3653e562b159eb544b07240fd0c39879660d48bdb13cd996113469"} err="failed to get container status \"644c410b5d3653e562b159eb544b07240fd0c39879660d48bdb13cd996113469\": rpc error: code = NotFound desc = could not find container \"644c410b5d3653e562b159eb544b07240fd0c39879660d48bdb13cd996113469\": container with ID starting with 644c410b5d3653e562b159eb544b07240fd0c39879660d48bdb13cd996113469 not found: ID does not exist" Feb 19 15:16:42 crc kubenswrapper[4861]: I0219 15:16:42.916594 4861 scope.go:117] "RemoveContainer" containerID="02ec812064cb3c12237405b1369286fa1b1851768375e04cc2f121c5652f6f21" Feb 19 15:16:42 crc kubenswrapper[4861]: E0219 15:16:42.916851 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02ec812064cb3c12237405b1369286fa1b1851768375e04cc2f121c5652f6f21\": container with ID starting with 02ec812064cb3c12237405b1369286fa1b1851768375e04cc2f121c5652f6f21 not found: ID does not exist" containerID="02ec812064cb3c12237405b1369286fa1b1851768375e04cc2f121c5652f6f21" Feb 19 15:16:42 crc kubenswrapper[4861]: I0219 15:16:42.916870 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02ec812064cb3c12237405b1369286fa1b1851768375e04cc2f121c5652f6f21"} err="failed to get container status \"02ec812064cb3c12237405b1369286fa1b1851768375e04cc2f121c5652f6f21\": rpc error: code = NotFound desc = could not find container \"02ec812064cb3c12237405b1369286fa1b1851768375e04cc2f121c5652f6f21\": container with ID starting with 02ec812064cb3c12237405b1369286fa1b1851768375e04cc2f121c5652f6f21 not found: ID does not exist" Feb 19 15:16:42 crc kubenswrapper[4861]: I0219 15:16:42.916882 4861 scope.go:117] "RemoveContainer" containerID="e1d6be331f835b6cfac2d264d85d69bec2682936bef45de353987839e4224448" Feb 19 15:16:42 crc kubenswrapper[4861]: E0219 
15:16:42.917280 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1d6be331f835b6cfac2d264d85d69bec2682936bef45de353987839e4224448\": container with ID starting with e1d6be331f835b6cfac2d264d85d69bec2682936bef45de353987839e4224448 not found: ID does not exist" containerID="e1d6be331f835b6cfac2d264d85d69bec2682936bef45de353987839e4224448" Feb 19 15:16:42 crc kubenswrapper[4861]: I0219 15:16:42.917297 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1d6be331f835b6cfac2d264d85d69bec2682936bef45de353987839e4224448"} err="failed to get container status \"e1d6be331f835b6cfac2d264d85d69bec2682936bef45de353987839e4224448\": rpc error: code = NotFound desc = could not find container \"e1d6be331f835b6cfac2d264d85d69bec2682936bef45de353987839e4224448\": container with ID starting with e1d6be331f835b6cfac2d264d85d69bec2682936bef45de353987839e4224448 not found: ID does not exist" Feb 19 15:16:42 crc kubenswrapper[4861]: E0219 15:16:42.919406 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69eac4ab_83d0_44cb_8748_fd7cf5d00e64.slice\": RecentStats: unable to find data in memory cache]" Feb 19 15:16:42 crc kubenswrapper[4861]: I0219 15:16:42.978797 4861 scope.go:117] "RemoveContainer" containerID="2b191d90e4bd0f98b51177d6ba5f981a064ea98e9fc2a0839977be6798eb1452" Feb 19 15:16:42 crc kubenswrapper[4861]: E0219 15:16:42.979187 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" 
podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:16:43 crc kubenswrapper[4861]: I0219 15:16:43.993577 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69eac4ab-83d0-44cb-8748-fd7cf5d00e64" path="/var/lib/kubelet/pods/69eac4ab-83d0-44cb-8748-fd7cf5d00e64/volumes" Feb 19 15:16:57 crc kubenswrapper[4861]: I0219 15:16:57.977451 4861 scope.go:117] "RemoveContainer" containerID="2b191d90e4bd0f98b51177d6ba5f981a064ea98e9fc2a0839977be6798eb1452" Feb 19 15:16:57 crc kubenswrapper[4861]: E0219 15:16:57.978303 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:17:04 crc kubenswrapper[4861]: I0219 15:17:04.071347 4861 generic.go:334] "Generic (PLEG): container finished" podID="b744679b-533f-4182-8f10-1b0160eda028" containerID="e2ab7805262ac661af6a45765c3ee7c37db59dc128eadb7707bf8e556e760a1a" exitCode=0 Feb 19 15:17:04 crc kubenswrapper[4861]: I0219 15:17:04.071461 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-vmdpv" event={"ID":"b744679b-533f-4182-8f10-1b0160eda028","Type":"ContainerDied","Data":"e2ab7805262ac661af6a45765c3ee7c37db59dc128eadb7707bf8e556e760a1a"} Feb 19 15:17:05 crc kubenswrapper[4861]: I0219 15:17:05.654442 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-vmdpv" Feb 19 15:17:05 crc kubenswrapper[4861]: I0219 15:17:05.683328 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/b744679b-533f-4182-8f10-1b0160eda028-ssh-key-openstack-cell1\") pod \"b744679b-533f-4182-8f10-1b0160eda028\" (UID: \"b744679b-533f-4182-8f10-1b0160eda028\") " Feb 19 15:17:05 crc kubenswrapper[4861]: I0219 15:17:05.684853 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b744679b-533f-4182-8f10-1b0160eda028-inventory\") pod \"b744679b-533f-4182-8f10-1b0160eda028\" (UID: \"b744679b-533f-4182-8f10-1b0160eda028\") " Feb 19 15:17:05 crc kubenswrapper[4861]: I0219 15:17:05.685001 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7qr9\" (UniqueName: \"kubernetes.io/projected/b744679b-533f-4182-8f10-1b0160eda028-kube-api-access-w7qr9\") pod \"b744679b-533f-4182-8f10-1b0160eda028\" (UID: \"b744679b-533f-4182-8f10-1b0160eda028\") " Feb 19 15:17:05 crc kubenswrapper[4861]: I0219 15:17:05.690337 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b744679b-533f-4182-8f10-1b0160eda028-kube-api-access-w7qr9" (OuterVolumeSpecName: "kube-api-access-w7qr9") pod "b744679b-533f-4182-8f10-1b0160eda028" (UID: "b744679b-533f-4182-8f10-1b0160eda028"). InnerVolumeSpecName "kube-api-access-w7qr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:17:05 crc kubenswrapper[4861]: I0219 15:17:05.720010 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b744679b-533f-4182-8f10-1b0160eda028-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "b744679b-533f-4182-8f10-1b0160eda028" (UID: "b744679b-533f-4182-8f10-1b0160eda028"). 
InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:17:05 crc kubenswrapper[4861]: I0219 15:17:05.736481 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b744679b-533f-4182-8f10-1b0160eda028-inventory" (OuterVolumeSpecName: "inventory") pod "b744679b-533f-4182-8f10-1b0160eda028" (UID: "b744679b-533f-4182-8f10-1b0160eda028"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:17:05 crc kubenswrapper[4861]: I0219 15:17:05.788321 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7qr9\" (UniqueName: \"kubernetes.io/projected/b744679b-533f-4182-8f10-1b0160eda028-kube-api-access-w7qr9\") on node \"crc\" DevicePath \"\"" Feb 19 15:17:05 crc kubenswrapper[4861]: I0219 15:17:05.788388 4861 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/b744679b-533f-4182-8f10-1b0160eda028-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 15:17:05 crc kubenswrapper[4861]: I0219 15:17:05.788416 4861 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b744679b-533f-4182-8f10-1b0160eda028-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.101794 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-vmdpv" event={"ID":"b744679b-533f-4182-8f10-1b0160eda028","Type":"ContainerDied","Data":"16e55c962870470c7d242c5c845b88e27ac6b79af3e6ac2417f6b22d8ce62fc8"} Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.102205 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16e55c962870470c7d242c5c845b88e27ac6b79af3e6ac2417f6b22d8ce62fc8" Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.101830 4861 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-vmdpv" Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.208022 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-2m2jd"] Feb 19 15:17:06 crc kubenswrapper[4861]: E0219 15:17:06.208593 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69eac4ab-83d0-44cb-8748-fd7cf5d00e64" containerName="extract-content" Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.208614 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="69eac4ab-83d0-44cb-8748-fd7cf5d00e64" containerName="extract-content" Feb 19 15:17:06 crc kubenswrapper[4861]: E0219 15:17:06.208640 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af30563-8876-45d0-874d-3f63150e66b2" containerName="extract-content" Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.208649 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af30563-8876-45d0-874d-3f63150e66b2" containerName="extract-content" Feb 19 15:17:06 crc kubenswrapper[4861]: E0219 15:17:06.208661 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec290632-46b5-49f9-b96c-c8453dca96ba" containerName="registry-server" Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.208669 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec290632-46b5-49f9-b96c-c8453dca96ba" containerName="registry-server" Feb 19 15:17:06 crc kubenswrapper[4861]: E0219 15:17:06.208679 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69eac4ab-83d0-44cb-8748-fd7cf5d00e64" containerName="registry-server" Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.208685 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="69eac4ab-83d0-44cb-8748-fd7cf5d00e64" containerName="registry-server" Feb 19 15:17:06 crc kubenswrapper[4861]: E0219 15:17:06.208709 4861 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="3af30563-8876-45d0-874d-3f63150e66b2" containerName="registry-server" Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.208716 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af30563-8876-45d0-874d-3f63150e66b2" containerName="registry-server" Feb 19 15:17:06 crc kubenswrapper[4861]: E0219 15:17:06.208743 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec290632-46b5-49f9-b96c-c8453dca96ba" containerName="extract-utilities" Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.208752 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec290632-46b5-49f9-b96c-c8453dca96ba" containerName="extract-utilities" Feb 19 15:17:06 crc kubenswrapper[4861]: E0219 15:17:06.208767 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69eac4ab-83d0-44cb-8748-fd7cf5d00e64" containerName="extract-utilities" Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.208775 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="69eac4ab-83d0-44cb-8748-fd7cf5d00e64" containerName="extract-utilities" Feb 19 15:17:06 crc kubenswrapper[4861]: E0219 15:17:06.208790 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec290632-46b5-49f9-b96c-c8453dca96ba" containerName="extract-content" Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.208798 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec290632-46b5-49f9-b96c-c8453dca96ba" containerName="extract-content" Feb 19 15:17:06 crc kubenswrapper[4861]: E0219 15:17:06.208823 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af30563-8876-45d0-874d-3f63150e66b2" containerName="extract-utilities" Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.208830 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af30563-8876-45d0-874d-3f63150e66b2" containerName="extract-utilities" Feb 19 15:17:06 crc kubenswrapper[4861]: E0219 15:17:06.208843 4861 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="b744679b-533f-4182-8f10-1b0160eda028" containerName="configure-network-openstack-openstack-cell1" Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.208851 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b744679b-533f-4182-8f10-1b0160eda028" containerName="configure-network-openstack-openstack-cell1" Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.209137 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="69eac4ab-83d0-44cb-8748-fd7cf5d00e64" containerName="registry-server" Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.209156 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec290632-46b5-49f9-b96c-c8453dca96ba" containerName="registry-server" Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.209178 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b744679b-533f-4182-8f10-1b0160eda028" containerName="configure-network-openstack-openstack-cell1" Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.209192 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="3af30563-8876-45d0-874d-3f63150e66b2" containerName="registry-server" Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.210155 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-2m2jd" Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.212750 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.213072 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.213287 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.218920 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jgxb2" Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.220959 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-2m2jd"] Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.302301 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0-inventory\") pod \"validate-network-openstack-openstack-cell1-2m2jd\" (UID: \"c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0\") " pod="openstack/validate-network-openstack-openstack-cell1-2m2jd" Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.302640 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28fzv\" (UniqueName: \"kubernetes.io/projected/c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0-kube-api-access-28fzv\") pod \"validate-network-openstack-openstack-cell1-2m2jd\" (UID: \"c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0\") " pod="openstack/validate-network-openstack-openstack-cell1-2m2jd" Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.302730 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-2m2jd\" (UID: \"c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0\") " pod="openstack/validate-network-openstack-openstack-cell1-2m2jd" Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.405346 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0-inventory\") pod \"validate-network-openstack-openstack-cell1-2m2jd\" (UID: \"c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0\") " pod="openstack/validate-network-openstack-openstack-cell1-2m2jd" Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.405668 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28fzv\" (UniqueName: \"kubernetes.io/projected/c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0-kube-api-access-28fzv\") pod \"validate-network-openstack-openstack-cell1-2m2jd\" (UID: \"c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0\") " pod="openstack/validate-network-openstack-openstack-cell1-2m2jd" Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.405753 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-2m2jd\" (UID: \"c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0\") " pod="openstack/validate-network-openstack-openstack-cell1-2m2jd" Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.413224 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0-inventory\") pod \"validate-network-openstack-openstack-cell1-2m2jd\" (UID: 
\"c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0\") " pod="openstack/validate-network-openstack-openstack-cell1-2m2jd" Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.415858 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-2m2jd\" (UID: \"c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0\") " pod="openstack/validate-network-openstack-openstack-cell1-2m2jd" Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.424849 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28fzv\" (UniqueName: \"kubernetes.io/projected/c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0-kube-api-access-28fzv\") pod \"validate-network-openstack-openstack-cell1-2m2jd\" (UID: \"c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0\") " pod="openstack/validate-network-openstack-openstack-cell1-2m2jd" Feb 19 15:17:06 crc kubenswrapper[4861]: I0219 15:17:06.542658 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-2m2jd" Feb 19 15:17:07 crc kubenswrapper[4861]: I0219 15:17:07.180837 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-2m2jd"] Feb 19 15:17:08 crc kubenswrapper[4861]: I0219 15:17:08.127028 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-2m2jd" event={"ID":"c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0","Type":"ContainerStarted","Data":"a6bc6d5942731fcac907ffdc544db212945cdb77741d0e53ae68beafa9734443"} Feb 19 15:17:08 crc kubenswrapper[4861]: I0219 15:17:08.127463 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-2m2jd" event={"ID":"c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0","Type":"ContainerStarted","Data":"ef4ada2da37caea46862ce24b20250f1e749249210a9513ea7ce97a137226d29"} Feb 19 15:17:08 crc kubenswrapper[4861]: I0219 15:17:08.159924 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-openstack-openstack-cell1-2m2jd" podStartSLOduration=1.72418019 podStartE2EDuration="2.159901521s" podCreationTimestamp="2026-02-19 15:17:06 +0000 UTC" firstStartedPulling="2026-02-19 15:17:07.188415824 +0000 UTC m=+7641.849519092" lastFinishedPulling="2026-02-19 15:17:07.624137145 +0000 UTC m=+7642.285240423" observedRunningTime="2026-02-19 15:17:08.158703489 +0000 UTC m=+7642.819806727" watchObservedRunningTime="2026-02-19 15:17:08.159901521 +0000 UTC m=+7642.821004749" Feb 19 15:17:09 crc kubenswrapper[4861]: I0219 15:17:09.977046 4861 scope.go:117] "RemoveContainer" containerID="2b191d90e4bd0f98b51177d6ba5f981a064ea98e9fc2a0839977be6798eb1452" Feb 19 15:17:09 crc kubenswrapper[4861]: E0219 15:17:09.977725 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:17:13 crc kubenswrapper[4861]: I0219 15:17:13.185675 4861 generic.go:334] "Generic (PLEG): container finished" podID="c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0" containerID="a6bc6d5942731fcac907ffdc544db212945cdb77741d0e53ae68beafa9734443" exitCode=0 Feb 19 15:17:13 crc kubenswrapper[4861]: I0219 15:17:13.185860 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-2m2jd" event={"ID":"c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0","Type":"ContainerDied","Data":"a6bc6d5942731fcac907ffdc544db212945cdb77741d0e53ae68beafa9734443"} Feb 19 15:17:14 crc kubenswrapper[4861]: I0219 15:17:14.677247 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-2m2jd" Feb 19 15:17:14 crc kubenswrapper[4861]: I0219 15:17:14.755737 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0-inventory\") pod \"c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0\" (UID: \"c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0\") " Feb 19 15:17:14 crc kubenswrapper[4861]: I0219 15:17:14.756162 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0-ssh-key-openstack-cell1\") pod \"c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0\" (UID: \"c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0\") " Feb 19 15:17:14 crc kubenswrapper[4861]: I0219 15:17:14.756287 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28fzv\" (UniqueName: 
\"kubernetes.io/projected/c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0-kube-api-access-28fzv\") pod \"c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0\" (UID: \"c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0\") " Feb 19 15:17:14 crc kubenswrapper[4861]: I0219 15:17:14.762025 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0-kube-api-access-28fzv" (OuterVolumeSpecName: "kube-api-access-28fzv") pod "c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0" (UID: "c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0"). InnerVolumeSpecName "kube-api-access-28fzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:17:14 crc kubenswrapper[4861]: I0219 15:17:14.789957 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0" (UID: "c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:17:14 crc kubenswrapper[4861]: I0219 15:17:14.797813 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0-inventory" (OuterVolumeSpecName: "inventory") pod "c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0" (UID: "c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:17:14 crc kubenswrapper[4861]: I0219 15:17:14.860110 4861 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 15:17:14 crc kubenswrapper[4861]: I0219 15:17:14.860151 4861 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 15:17:14 crc kubenswrapper[4861]: I0219 15:17:14.860166 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28fzv\" (UniqueName: \"kubernetes.io/projected/c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0-kube-api-access-28fzv\") on node \"crc\" DevicePath \"\"" Feb 19 15:17:15 crc kubenswrapper[4861]: I0219 15:17:15.211643 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-2m2jd" event={"ID":"c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0","Type":"ContainerDied","Data":"ef4ada2da37caea46862ce24b20250f1e749249210a9513ea7ce97a137226d29"} Feb 19 15:17:15 crc kubenswrapper[4861]: I0219 15:17:15.211726 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef4ada2da37caea46862ce24b20250f1e749249210a9513ea7ce97a137226d29" Feb 19 15:17:15 crc kubenswrapper[4861]: I0219 15:17:15.211764 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-2m2jd" Feb 19 15:17:15 crc kubenswrapper[4861]: I0219 15:17:15.339549 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-openstack-openstack-cell1-pbmtw"] Feb 19 15:17:15 crc kubenswrapper[4861]: E0219 15:17:15.340190 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0" containerName="validate-network-openstack-openstack-cell1" Feb 19 15:17:15 crc kubenswrapper[4861]: I0219 15:17:15.340220 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0" containerName="validate-network-openstack-openstack-cell1" Feb 19 15:17:15 crc kubenswrapper[4861]: I0219 15:17:15.340615 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0" containerName="validate-network-openstack-openstack-cell1" Feb 19 15:17:15 crc kubenswrapper[4861]: I0219 15:17:15.341881 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-pbmtw" Feb 19 15:17:15 crc kubenswrapper[4861]: I0219 15:17:15.344897 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 15:17:15 crc kubenswrapper[4861]: I0219 15:17:15.344938 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 15:17:15 crc kubenswrapper[4861]: I0219 15:17:15.345294 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jgxb2" Feb 19 15:17:15 crc kubenswrapper[4861]: I0219 15:17:15.345314 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 15:17:15 crc kubenswrapper[4861]: I0219 15:17:15.354218 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-pbmtw"] Feb 19 15:17:15 crc kubenswrapper[4861]: I0219 15:17:15.474241 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/158f5f29-6b5f-43dc-a5a3-12353999439f-inventory\") pod \"install-os-openstack-openstack-cell1-pbmtw\" (UID: \"158f5f29-6b5f-43dc-a5a3-12353999439f\") " pod="openstack/install-os-openstack-openstack-cell1-pbmtw" Feb 19 15:17:15 crc kubenswrapper[4861]: I0219 15:17:15.474301 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6h6p\" (UniqueName: \"kubernetes.io/projected/158f5f29-6b5f-43dc-a5a3-12353999439f-kube-api-access-k6h6p\") pod \"install-os-openstack-openstack-cell1-pbmtw\" (UID: \"158f5f29-6b5f-43dc-a5a3-12353999439f\") " pod="openstack/install-os-openstack-openstack-cell1-pbmtw" Feb 19 15:17:15 crc kubenswrapper[4861]: I0219 15:17:15.474339 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/158f5f29-6b5f-43dc-a5a3-12353999439f-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-pbmtw\" (UID: \"158f5f29-6b5f-43dc-a5a3-12353999439f\") " pod="openstack/install-os-openstack-openstack-cell1-pbmtw" Feb 19 15:17:15 crc kubenswrapper[4861]: I0219 15:17:15.576217 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/158f5f29-6b5f-43dc-a5a3-12353999439f-inventory\") pod \"install-os-openstack-openstack-cell1-pbmtw\" (UID: \"158f5f29-6b5f-43dc-a5a3-12353999439f\") " pod="openstack/install-os-openstack-openstack-cell1-pbmtw" Feb 19 15:17:15 crc kubenswrapper[4861]: I0219 15:17:15.576337 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6h6p\" (UniqueName: \"kubernetes.io/projected/158f5f29-6b5f-43dc-a5a3-12353999439f-kube-api-access-k6h6p\") pod \"install-os-openstack-openstack-cell1-pbmtw\" (UID: \"158f5f29-6b5f-43dc-a5a3-12353999439f\") " pod="openstack/install-os-openstack-openstack-cell1-pbmtw" Feb 19 15:17:15 crc kubenswrapper[4861]: I0219 15:17:15.576414 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/158f5f29-6b5f-43dc-a5a3-12353999439f-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-pbmtw\" (UID: \"158f5f29-6b5f-43dc-a5a3-12353999439f\") " pod="openstack/install-os-openstack-openstack-cell1-pbmtw" Feb 19 15:17:15 crc kubenswrapper[4861]: I0219 15:17:15.580813 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/158f5f29-6b5f-43dc-a5a3-12353999439f-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-pbmtw\" (UID: \"158f5f29-6b5f-43dc-a5a3-12353999439f\") " pod="openstack/install-os-openstack-openstack-cell1-pbmtw" Feb 19 15:17:15 crc 
kubenswrapper[4861]: I0219 15:17:15.583381 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/158f5f29-6b5f-43dc-a5a3-12353999439f-inventory\") pod \"install-os-openstack-openstack-cell1-pbmtw\" (UID: \"158f5f29-6b5f-43dc-a5a3-12353999439f\") " pod="openstack/install-os-openstack-openstack-cell1-pbmtw" Feb 19 15:17:15 crc kubenswrapper[4861]: I0219 15:17:15.604862 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6h6p\" (UniqueName: \"kubernetes.io/projected/158f5f29-6b5f-43dc-a5a3-12353999439f-kube-api-access-k6h6p\") pod \"install-os-openstack-openstack-cell1-pbmtw\" (UID: \"158f5f29-6b5f-43dc-a5a3-12353999439f\") " pod="openstack/install-os-openstack-openstack-cell1-pbmtw" Feb 19 15:17:15 crc kubenswrapper[4861]: I0219 15:17:15.672542 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-pbmtw" Feb 19 15:17:16 crc kubenswrapper[4861]: I0219 15:17:16.293963 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-pbmtw"] Feb 19 15:17:17 crc kubenswrapper[4861]: I0219 15:17:17.232254 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-pbmtw" event={"ID":"158f5f29-6b5f-43dc-a5a3-12353999439f","Type":"ContainerStarted","Data":"f0b01715e0c4d9427bedae76ae997fc70cca64958fbecb214664220a312dad9f"} Feb 19 15:17:17 crc kubenswrapper[4861]: I0219 15:17:17.232705 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-pbmtw" event={"ID":"158f5f29-6b5f-43dc-a5a3-12353999439f","Type":"ContainerStarted","Data":"0d32a6ab1a28a9969d9a3e449eb1a35524004cff3c389a757c380f79da9cd93e"} Feb 19 15:17:17 crc kubenswrapper[4861]: I0219 15:17:17.259524 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/install-os-openstack-openstack-cell1-pbmtw" podStartSLOduration=1.789453554 podStartE2EDuration="2.259507903s" podCreationTimestamp="2026-02-19 15:17:15 +0000 UTC" firstStartedPulling="2026-02-19 15:17:16.300653659 +0000 UTC m=+7650.961756927" lastFinishedPulling="2026-02-19 15:17:16.770708048 +0000 UTC m=+7651.431811276" observedRunningTime="2026-02-19 15:17:17.249543879 +0000 UTC m=+7651.910647097" watchObservedRunningTime="2026-02-19 15:17:17.259507903 +0000 UTC m=+7651.920611131" Feb 19 15:17:22 crc kubenswrapper[4861]: I0219 15:17:22.977987 4861 scope.go:117] "RemoveContainer" containerID="2b191d90e4bd0f98b51177d6ba5f981a064ea98e9fc2a0839977be6798eb1452" Feb 19 15:17:22 crc kubenswrapper[4861]: E0219 15:17:22.979158 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:17:34 crc kubenswrapper[4861]: I0219 15:17:34.976924 4861 scope.go:117] "RemoveContainer" containerID="2b191d90e4bd0f98b51177d6ba5f981a064ea98e9fc2a0839977be6798eb1452" Feb 19 15:17:34 crc kubenswrapper[4861]: E0219 15:17:34.978090 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:17:49 crc kubenswrapper[4861]: I0219 15:17:49.977965 4861 scope.go:117] "RemoveContainer" 
containerID="2b191d90e4bd0f98b51177d6ba5f981a064ea98e9fc2a0839977be6798eb1452"
Feb 19 15:17:49 crc kubenswrapper[4861]: E0219 15:17:49.978805 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227"
Feb 19 15:18:03 crc kubenswrapper[4861]: I0219 15:18:03.745257 4861 generic.go:334] "Generic (PLEG): container finished" podID="158f5f29-6b5f-43dc-a5a3-12353999439f" containerID="f0b01715e0c4d9427bedae76ae997fc70cca64958fbecb214664220a312dad9f" exitCode=0
Feb 19 15:18:03 crc kubenswrapper[4861]: I0219 15:18:03.745308 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-pbmtw" event={"ID":"158f5f29-6b5f-43dc-a5a3-12353999439f","Type":"ContainerDied","Data":"f0b01715e0c4d9427bedae76ae997fc70cca64958fbecb214664220a312dad9f"}
Feb 19 15:18:05 crc kubenswrapper[4861]: I0219 15:18:04.977336 4861 scope.go:117] "RemoveContainer" containerID="2b191d90e4bd0f98b51177d6ba5f981a064ea98e9fc2a0839977be6798eb1452"
Feb 19 15:18:05 crc kubenswrapper[4861]: E0219 15:18:04.977911 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227"
Feb 19 15:18:05 crc kubenswrapper[4861]: I0219 15:18:05.224488 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-pbmtw"
Feb 19 15:18:05 crc kubenswrapper[4861]: I0219 15:18:05.324625 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/158f5f29-6b5f-43dc-a5a3-12353999439f-inventory\") pod \"158f5f29-6b5f-43dc-a5a3-12353999439f\" (UID: \"158f5f29-6b5f-43dc-a5a3-12353999439f\") "
Feb 19 15:18:05 crc kubenswrapper[4861]: I0219 15:18:05.324686 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/158f5f29-6b5f-43dc-a5a3-12353999439f-ssh-key-openstack-cell1\") pod \"158f5f29-6b5f-43dc-a5a3-12353999439f\" (UID: \"158f5f29-6b5f-43dc-a5a3-12353999439f\") "
Feb 19 15:18:05 crc kubenswrapper[4861]: I0219 15:18:05.324892 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6h6p\" (UniqueName: \"kubernetes.io/projected/158f5f29-6b5f-43dc-a5a3-12353999439f-kube-api-access-k6h6p\") pod \"158f5f29-6b5f-43dc-a5a3-12353999439f\" (UID: \"158f5f29-6b5f-43dc-a5a3-12353999439f\") "
Feb 19 15:18:05 crc kubenswrapper[4861]: I0219 15:18:05.330056 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/158f5f29-6b5f-43dc-a5a3-12353999439f-kube-api-access-k6h6p" (OuterVolumeSpecName: "kube-api-access-k6h6p") pod "158f5f29-6b5f-43dc-a5a3-12353999439f" (UID: "158f5f29-6b5f-43dc-a5a3-12353999439f"). InnerVolumeSpecName "kube-api-access-k6h6p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 15:18:05 crc kubenswrapper[4861]: I0219 15:18:05.357951 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/158f5f29-6b5f-43dc-a5a3-12353999439f-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "158f5f29-6b5f-43dc-a5a3-12353999439f" (UID: "158f5f29-6b5f-43dc-a5a3-12353999439f"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:18:05 crc kubenswrapper[4861]: I0219 15:18:05.361147 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/158f5f29-6b5f-43dc-a5a3-12353999439f-inventory" (OuterVolumeSpecName: "inventory") pod "158f5f29-6b5f-43dc-a5a3-12353999439f" (UID: "158f5f29-6b5f-43dc-a5a3-12353999439f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:18:05 crc kubenswrapper[4861]: I0219 15:18:05.429319 4861 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/158f5f29-6b5f-43dc-a5a3-12353999439f-inventory\") on node \"crc\" DevicePath \"\""
Feb 19 15:18:05 crc kubenswrapper[4861]: I0219 15:18:05.429362 4861 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/158f5f29-6b5f-43dc-a5a3-12353999439f-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Feb 19 15:18:05 crc kubenswrapper[4861]: I0219 15:18:05.429390 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6h6p\" (UniqueName: \"kubernetes.io/projected/158f5f29-6b5f-43dc-a5a3-12353999439f-kube-api-access-k6h6p\") on node \"crc\" DevicePath \"\""
Feb 19 15:18:05 crc kubenswrapper[4861]: I0219 15:18:05.767083 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-pbmtw" event={"ID":"158f5f29-6b5f-43dc-a5a3-12353999439f","Type":"ContainerDied","Data":"0d32a6ab1a28a9969d9a3e449eb1a35524004cff3c389a757c380f79da9cd93e"}
Feb 19 15:18:05 crc kubenswrapper[4861]: I0219 15:18:05.767130 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d32a6ab1a28a9969d9a3e449eb1a35524004cff3c389a757c380f79da9cd93e"
Feb 19 15:18:05 crc kubenswrapper[4861]: I0219 15:18:05.767211 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-pbmtw"
Feb 19 15:18:05 crc kubenswrapper[4861]: I0219 15:18:05.885583 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-cgfm6"]
Feb 19 15:18:05 crc kubenswrapper[4861]: E0219 15:18:05.886207 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="158f5f29-6b5f-43dc-a5a3-12353999439f" containerName="install-os-openstack-openstack-cell1"
Feb 19 15:18:05 crc kubenswrapper[4861]: I0219 15:18:05.886250 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="158f5f29-6b5f-43dc-a5a3-12353999439f" containerName="install-os-openstack-openstack-cell1"
Feb 19 15:18:05 crc kubenswrapper[4861]: I0219 15:18:05.886583 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="158f5f29-6b5f-43dc-a5a3-12353999439f" containerName="install-os-openstack-openstack-cell1"
Feb 19 15:18:05 crc kubenswrapper[4861]: I0219 15:18:05.887612 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-cgfm6"
Feb 19 15:18:05 crc kubenswrapper[4861]: I0219 15:18:05.892454 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 19 15:18:05 crc kubenswrapper[4861]: I0219 15:18:05.893072 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Feb 19 15:18:05 crc kubenswrapper[4861]: I0219 15:18:05.893209 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jgxb2"
Feb 19 15:18:05 crc kubenswrapper[4861]: I0219 15:18:05.893362 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Feb 19 15:18:05 crc kubenswrapper[4861]: I0219 15:18:05.897985 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-cgfm6"]
Feb 19 15:18:06 crc kubenswrapper[4861]: I0219 15:18:06.042276 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj4jb\" (UniqueName: \"kubernetes.io/projected/386cc4aa-29ac-48cf-9b15-5a3587a6245e-kube-api-access-vj4jb\") pod \"configure-os-openstack-openstack-cell1-cgfm6\" (UID: \"386cc4aa-29ac-48cf-9b15-5a3587a6245e\") " pod="openstack/configure-os-openstack-openstack-cell1-cgfm6"
Feb 19 15:18:06 crc kubenswrapper[4861]: I0219 15:18:06.042679 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/386cc4aa-29ac-48cf-9b15-5a3587a6245e-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-cgfm6\" (UID: \"386cc4aa-29ac-48cf-9b15-5a3587a6245e\") " pod="openstack/configure-os-openstack-openstack-cell1-cgfm6"
Feb 19 15:18:06 crc kubenswrapper[4861]: I0219 15:18:06.042765 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/386cc4aa-29ac-48cf-9b15-5a3587a6245e-inventory\") pod \"configure-os-openstack-openstack-cell1-cgfm6\" (UID: \"386cc4aa-29ac-48cf-9b15-5a3587a6245e\") " pod="openstack/configure-os-openstack-openstack-cell1-cgfm6"
Feb 19 15:18:06 crc kubenswrapper[4861]: I0219 15:18:06.144883 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj4jb\" (UniqueName: \"kubernetes.io/projected/386cc4aa-29ac-48cf-9b15-5a3587a6245e-kube-api-access-vj4jb\") pod \"configure-os-openstack-openstack-cell1-cgfm6\" (UID: \"386cc4aa-29ac-48cf-9b15-5a3587a6245e\") " pod="openstack/configure-os-openstack-openstack-cell1-cgfm6"
Feb 19 15:18:06 crc kubenswrapper[4861]: I0219 15:18:06.145034 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/386cc4aa-29ac-48cf-9b15-5a3587a6245e-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-cgfm6\" (UID: \"386cc4aa-29ac-48cf-9b15-5a3587a6245e\") " pod="openstack/configure-os-openstack-openstack-cell1-cgfm6"
Feb 19 15:18:06 crc kubenswrapper[4861]: I0219 15:18:06.145171 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/386cc4aa-29ac-48cf-9b15-5a3587a6245e-inventory\") pod \"configure-os-openstack-openstack-cell1-cgfm6\" (UID: \"386cc4aa-29ac-48cf-9b15-5a3587a6245e\") " pod="openstack/configure-os-openstack-openstack-cell1-cgfm6"
Feb 19 15:18:06 crc kubenswrapper[4861]: I0219 15:18:06.155092 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/386cc4aa-29ac-48cf-9b15-5a3587a6245e-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-cgfm6\" (UID: \"386cc4aa-29ac-48cf-9b15-5a3587a6245e\") " pod="openstack/configure-os-openstack-openstack-cell1-cgfm6"
Feb 19 15:18:06 crc kubenswrapper[4861]: I0219 15:18:06.155097 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/386cc4aa-29ac-48cf-9b15-5a3587a6245e-inventory\") pod \"configure-os-openstack-openstack-cell1-cgfm6\" (UID: \"386cc4aa-29ac-48cf-9b15-5a3587a6245e\") " pod="openstack/configure-os-openstack-openstack-cell1-cgfm6"
Feb 19 15:18:06 crc kubenswrapper[4861]: I0219 15:18:06.164215 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj4jb\" (UniqueName: \"kubernetes.io/projected/386cc4aa-29ac-48cf-9b15-5a3587a6245e-kube-api-access-vj4jb\") pod \"configure-os-openstack-openstack-cell1-cgfm6\" (UID: \"386cc4aa-29ac-48cf-9b15-5a3587a6245e\") " pod="openstack/configure-os-openstack-openstack-cell1-cgfm6"
Feb 19 15:18:06 crc kubenswrapper[4861]: I0219 15:18:06.220271 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-cgfm6"
Feb 19 15:18:06 crc kubenswrapper[4861]: I0219 15:18:06.657871 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-cgfm6"]
Feb 19 15:18:06 crc kubenswrapper[4861]: W0219 15:18:06.658700 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod386cc4aa_29ac_48cf_9b15_5a3587a6245e.slice/crio-2317ad1df3d19116511a2afc21ce921309107a1b1d591f339f2f1e84a70b469d WatchSource:0}: Error finding container 2317ad1df3d19116511a2afc21ce921309107a1b1d591f339f2f1e84a70b469d: Status 404 returned error can't find the container with id 2317ad1df3d19116511a2afc21ce921309107a1b1d591f339f2f1e84a70b469d
Feb 19 15:18:06 crc kubenswrapper[4861]: I0219 15:18:06.782762 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-cgfm6" event={"ID":"386cc4aa-29ac-48cf-9b15-5a3587a6245e","Type":"ContainerStarted","Data":"2317ad1df3d19116511a2afc21ce921309107a1b1d591f339f2f1e84a70b469d"}
Feb 19 15:18:07 crc kubenswrapper[4861]: I0219 15:18:07.801900 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-cgfm6" event={"ID":"386cc4aa-29ac-48cf-9b15-5a3587a6245e","Type":"ContainerStarted","Data":"f577be45e0b33ac93d0b77cbb167c8d8113c0403ef03cecb4ebba99f0acbdc47"}
Feb 19 15:18:07 crc kubenswrapper[4861]: I0219 15:18:07.848665 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-cgfm6" podStartSLOduration=2.245500005 podStartE2EDuration="2.848631691s" podCreationTimestamp="2026-02-19 15:18:05 +0000 UTC" firstStartedPulling="2026-02-19 15:18:06.663225769 +0000 UTC m=+7701.324329007" lastFinishedPulling="2026-02-19 15:18:07.266357425 +0000 UTC m=+7701.927460693" observedRunningTime="2026-02-19 15:18:07.823331922 +0000 UTC m=+7702.484435220" watchObservedRunningTime="2026-02-19 15:18:07.848631691 +0000 UTC m=+7702.509734959"
Feb 19 15:18:18 crc kubenswrapper[4861]: I0219 15:18:18.978082 4861 scope.go:117] "RemoveContainer" containerID="2b191d90e4bd0f98b51177d6ba5f981a064ea98e9fc2a0839977be6798eb1452"
Feb 19 15:18:18 crc kubenswrapper[4861]: E0219 15:18:18.978904 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227"
Feb 19 15:18:31 crc kubenswrapper[4861]: I0219 15:18:31.977412 4861 scope.go:117] "RemoveContainer" containerID="2b191d90e4bd0f98b51177d6ba5f981a064ea98e9fc2a0839977be6798eb1452"
Feb 19 15:18:31 crc kubenswrapper[4861]: E0219 15:18:31.978201 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227"
Feb 19 15:18:45 crc kubenswrapper[4861]: I0219 15:18:45.979043 4861 scope.go:117] "RemoveContainer" containerID="2b191d90e4bd0f98b51177d6ba5f981a064ea98e9fc2a0839977be6798eb1452"
Feb 19 15:18:45 crc kubenswrapper[4861]: E0219 15:18:45.980166 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227"
Feb 19 15:18:55 crc kubenswrapper[4861]: I0219 15:18:55.357621 4861 generic.go:334] "Generic (PLEG): container finished" podID="386cc4aa-29ac-48cf-9b15-5a3587a6245e" containerID="f577be45e0b33ac93d0b77cbb167c8d8113c0403ef03cecb4ebba99f0acbdc47" exitCode=0
Feb 19 15:18:55 crc kubenswrapper[4861]: I0219 15:18:55.357709 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-cgfm6" event={"ID":"386cc4aa-29ac-48cf-9b15-5a3587a6245e","Type":"ContainerDied","Data":"f577be45e0b33ac93d0b77cbb167c8d8113c0403ef03cecb4ebba99f0acbdc47"}
Feb 19 15:18:56 crc kubenswrapper[4861]: I0219 15:18:56.945277 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-cgfm6"
Feb 19 15:18:56 crc kubenswrapper[4861]: I0219 15:18:56.977636 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj4jb\" (UniqueName: \"kubernetes.io/projected/386cc4aa-29ac-48cf-9b15-5a3587a6245e-kube-api-access-vj4jb\") pod \"386cc4aa-29ac-48cf-9b15-5a3587a6245e\" (UID: \"386cc4aa-29ac-48cf-9b15-5a3587a6245e\") "
Feb 19 15:18:56 crc kubenswrapper[4861]: I0219 15:18:56.977891 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/386cc4aa-29ac-48cf-9b15-5a3587a6245e-inventory\") pod \"386cc4aa-29ac-48cf-9b15-5a3587a6245e\" (UID: \"386cc4aa-29ac-48cf-9b15-5a3587a6245e\") "
Feb 19 15:18:56 crc kubenswrapper[4861]: I0219 15:18:56.977938 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/386cc4aa-29ac-48cf-9b15-5a3587a6245e-ssh-key-openstack-cell1\") pod \"386cc4aa-29ac-48cf-9b15-5a3587a6245e\" (UID: \"386cc4aa-29ac-48cf-9b15-5a3587a6245e\") "
Feb 19 15:18:56 crc kubenswrapper[4861]: I0219 15:18:56.998296 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/386cc4aa-29ac-48cf-9b15-5a3587a6245e-kube-api-access-vj4jb" (OuterVolumeSpecName: "kube-api-access-vj4jb") pod "386cc4aa-29ac-48cf-9b15-5a3587a6245e" (UID: "386cc4aa-29ac-48cf-9b15-5a3587a6245e"). InnerVolumeSpecName "kube-api-access-vj4jb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 15:18:57 crc kubenswrapper[4861]: I0219 15:18:57.015681 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/386cc4aa-29ac-48cf-9b15-5a3587a6245e-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "386cc4aa-29ac-48cf-9b15-5a3587a6245e" (UID: "386cc4aa-29ac-48cf-9b15-5a3587a6245e"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:18:57 crc kubenswrapper[4861]: I0219 15:18:57.026887 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/386cc4aa-29ac-48cf-9b15-5a3587a6245e-inventory" (OuterVolumeSpecName: "inventory") pod "386cc4aa-29ac-48cf-9b15-5a3587a6245e" (UID: "386cc4aa-29ac-48cf-9b15-5a3587a6245e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:18:57 crc kubenswrapper[4861]: I0219 15:18:57.081055 4861 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/386cc4aa-29ac-48cf-9b15-5a3587a6245e-inventory\") on node \"crc\" DevicePath \"\""
Feb 19 15:18:57 crc kubenswrapper[4861]: I0219 15:18:57.082697 4861 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/386cc4aa-29ac-48cf-9b15-5a3587a6245e-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Feb 19 15:18:57 crc kubenswrapper[4861]: I0219 15:18:57.082791 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vj4jb\" (UniqueName: \"kubernetes.io/projected/386cc4aa-29ac-48cf-9b15-5a3587a6245e-kube-api-access-vj4jb\") on node \"crc\" DevicePath \"\""
Feb 19 15:18:57 crc kubenswrapper[4861]: I0219 15:18:57.382832 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-cgfm6" event={"ID":"386cc4aa-29ac-48cf-9b15-5a3587a6245e","Type":"ContainerDied","Data":"2317ad1df3d19116511a2afc21ce921309107a1b1d591f339f2f1e84a70b469d"}
Feb 19 15:18:57 crc kubenswrapper[4861]: I0219 15:18:57.383367 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2317ad1df3d19116511a2afc21ce921309107a1b1d591f339f2f1e84a70b469d"
Feb 19 15:18:57 crc kubenswrapper[4861]: I0219 15:18:57.382945 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-cgfm6"
Feb 19 15:18:57 crc kubenswrapper[4861]: I0219 15:18:57.516125 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-openstack-hzg8p"]
Feb 19 15:18:57 crc kubenswrapper[4861]: E0219 15:18:57.517539 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="386cc4aa-29ac-48cf-9b15-5a3587a6245e" containerName="configure-os-openstack-openstack-cell1"
Feb 19 15:18:57 crc kubenswrapper[4861]: I0219 15:18:57.517559 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="386cc4aa-29ac-48cf-9b15-5a3587a6245e" containerName="configure-os-openstack-openstack-cell1"
Feb 19 15:18:57 crc kubenswrapper[4861]: I0219 15:18:57.518095 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="386cc4aa-29ac-48cf-9b15-5a3587a6245e" containerName="configure-os-openstack-openstack-cell1"
Feb 19 15:18:57 crc kubenswrapper[4861]: I0219 15:18:57.519389 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-hzg8p"
Feb 19 15:18:57 crc kubenswrapper[4861]: I0219 15:18:57.530053 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Feb 19 15:18:57 crc kubenswrapper[4861]: I0219 15:18:57.530172 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jgxb2"
Feb 19 15:18:57 crc kubenswrapper[4861]: I0219 15:18:57.530827 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 19 15:18:57 crc kubenswrapper[4861]: I0219 15:18:57.531073 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Feb 19 15:18:57 crc kubenswrapper[4861]: I0219 15:18:57.532907 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-hzg8p"]
Feb 19 15:18:57 crc kubenswrapper[4861]: I0219 15:18:57.605073 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1a9e753d-5b08-4abd-bc4e-34abb283079d-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-hzg8p\" (UID: \"1a9e753d-5b08-4abd-bc4e-34abb283079d\") " pod="openstack/ssh-known-hosts-openstack-hzg8p"
Feb 19 15:18:57 crc kubenswrapper[4861]: I0219 15:18:57.605303 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks5pl\" (UniqueName: \"kubernetes.io/projected/1a9e753d-5b08-4abd-bc4e-34abb283079d-kube-api-access-ks5pl\") pod \"ssh-known-hosts-openstack-hzg8p\" (UID: \"1a9e753d-5b08-4abd-bc4e-34abb283079d\") " pod="openstack/ssh-known-hosts-openstack-hzg8p"
Feb 19 15:18:57 crc kubenswrapper[4861]: I0219 15:18:57.605438 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1a9e753d-5b08-4abd-bc4e-34abb283079d-inventory-0\") pod \"ssh-known-hosts-openstack-hzg8p\" (UID: \"1a9e753d-5b08-4abd-bc4e-34abb283079d\") " pod="openstack/ssh-known-hosts-openstack-hzg8p"
Feb 19 15:18:57 crc kubenswrapper[4861]: I0219 15:18:57.707903 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks5pl\" (UniqueName: \"kubernetes.io/projected/1a9e753d-5b08-4abd-bc4e-34abb283079d-kube-api-access-ks5pl\") pod \"ssh-known-hosts-openstack-hzg8p\" (UID: \"1a9e753d-5b08-4abd-bc4e-34abb283079d\") " pod="openstack/ssh-known-hosts-openstack-hzg8p"
Feb 19 15:18:57 crc kubenswrapper[4861]: I0219 15:18:57.708169 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1a9e753d-5b08-4abd-bc4e-34abb283079d-inventory-0\") pod \"ssh-known-hosts-openstack-hzg8p\" (UID: \"1a9e753d-5b08-4abd-bc4e-34abb283079d\") " pod="openstack/ssh-known-hosts-openstack-hzg8p"
Feb 19 15:18:57 crc kubenswrapper[4861]: I0219 15:18:57.708502 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1a9e753d-5b08-4abd-bc4e-34abb283079d-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-hzg8p\" (UID: \"1a9e753d-5b08-4abd-bc4e-34abb283079d\") " pod="openstack/ssh-known-hosts-openstack-hzg8p"
Feb 19 15:18:57 crc kubenswrapper[4861]: I0219 15:18:57.715017 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1a9e753d-5b08-4abd-bc4e-34abb283079d-inventory-0\") pod \"ssh-known-hosts-openstack-hzg8p\" (UID: \"1a9e753d-5b08-4abd-bc4e-34abb283079d\") " pod="openstack/ssh-known-hosts-openstack-hzg8p"
Feb 19 15:18:57 crc kubenswrapper[4861]: I0219 15:18:57.715366 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1a9e753d-5b08-4abd-bc4e-34abb283079d-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-hzg8p\" (UID: \"1a9e753d-5b08-4abd-bc4e-34abb283079d\") " pod="openstack/ssh-known-hosts-openstack-hzg8p"
Feb 19 15:18:57 crc kubenswrapper[4861]: I0219 15:18:57.738015 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks5pl\" (UniqueName: \"kubernetes.io/projected/1a9e753d-5b08-4abd-bc4e-34abb283079d-kube-api-access-ks5pl\") pod \"ssh-known-hosts-openstack-hzg8p\" (UID: \"1a9e753d-5b08-4abd-bc4e-34abb283079d\") " pod="openstack/ssh-known-hosts-openstack-hzg8p"
Feb 19 15:18:57 crc kubenswrapper[4861]: I0219 15:18:57.850723 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-hzg8p"
Feb 19 15:18:58 crc kubenswrapper[4861]: I0219 15:18:58.411758 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-hzg8p"]
Feb 19 15:18:58 crc kubenswrapper[4861]: I0219 15:18:58.977480 4861 scope.go:117] "RemoveContainer" containerID="2b191d90e4bd0f98b51177d6ba5f981a064ea98e9fc2a0839977be6798eb1452"
Feb 19 15:18:58 crc kubenswrapper[4861]: E0219 15:18:58.977881 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227"
Feb 19 15:18:59 crc kubenswrapper[4861]: I0219 15:18:59.403136 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-hzg8p" event={"ID":"1a9e753d-5b08-4abd-bc4e-34abb283079d","Type":"ContainerStarted","Data":"fd6705f0dce9fbe45e43844e5e50fb9a74e4acd1c432f2b79220f2425c37a030"}
Feb 19 15:18:59 crc kubenswrapper[4861]: I0219 15:18:59.403541 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-hzg8p" event={"ID":"1a9e753d-5b08-4abd-bc4e-34abb283079d","Type":"ContainerStarted","Data":"7a8c202905a58a9743995b9fcf745e63cc0ba381a193881bf8b6420443aabafe"}
Feb 19 15:18:59 crc kubenswrapper[4861]: I0219 15:18:59.436287 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-openstack-hzg8p" podStartSLOduration=1.926844481 podStartE2EDuration="2.436256971s" podCreationTimestamp="2026-02-19 15:18:57 +0000 UTC" firstStartedPulling="2026-02-19 15:18:58.419317521 +0000 UTC m=+7753.080420749" lastFinishedPulling="2026-02-19 15:18:58.928730001 +0000 UTC m=+7753.589833239" observedRunningTime="2026-02-19 15:18:59.426684288 +0000 UTC m=+7754.087787526" watchObservedRunningTime="2026-02-19 15:18:59.436256971 +0000 UTC m=+7754.097360209"
Feb 19 15:19:08 crc kubenswrapper[4861]: I0219 15:19:08.501610 4861 generic.go:334] "Generic (PLEG): container finished" podID="1a9e753d-5b08-4abd-bc4e-34abb283079d" containerID="fd6705f0dce9fbe45e43844e5e50fb9a74e4acd1c432f2b79220f2425c37a030" exitCode=0
Feb 19 15:19:08 crc kubenswrapper[4861]: I0219 15:19:08.502222 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-hzg8p" event={"ID":"1a9e753d-5b08-4abd-bc4e-34abb283079d","Type":"ContainerDied","Data":"fd6705f0dce9fbe45e43844e5e50fb9a74e4acd1c432f2b79220f2425c37a030"}
Feb 19 15:19:09 crc kubenswrapper[4861]: I0219 15:19:09.978161 4861 scope.go:117] "RemoveContainer" containerID="2b191d90e4bd0f98b51177d6ba5f981a064ea98e9fc2a0839977be6798eb1452"
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.094205 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-hzg8p"
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.208155 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1a9e753d-5b08-4abd-bc4e-34abb283079d-inventory-0\") pod \"1a9e753d-5b08-4abd-bc4e-34abb283079d\" (UID: \"1a9e753d-5b08-4abd-bc4e-34abb283079d\") "
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.208657 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1a9e753d-5b08-4abd-bc4e-34abb283079d-ssh-key-openstack-cell1\") pod \"1a9e753d-5b08-4abd-bc4e-34abb283079d\" (UID: \"1a9e753d-5b08-4abd-bc4e-34abb283079d\") "
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.208745 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks5pl\" (UniqueName: \"kubernetes.io/projected/1a9e753d-5b08-4abd-bc4e-34abb283079d-kube-api-access-ks5pl\") pod \"1a9e753d-5b08-4abd-bc4e-34abb283079d\" (UID: \"1a9e753d-5b08-4abd-bc4e-34abb283079d\") "
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.213853 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a9e753d-5b08-4abd-bc4e-34abb283079d-kube-api-access-ks5pl" (OuterVolumeSpecName: "kube-api-access-ks5pl") pod "1a9e753d-5b08-4abd-bc4e-34abb283079d" (UID: "1a9e753d-5b08-4abd-bc4e-34abb283079d"). InnerVolumeSpecName "kube-api-access-ks5pl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.248669 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a9e753d-5b08-4abd-bc4e-34abb283079d-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "1a9e753d-5b08-4abd-bc4e-34abb283079d" (UID: "1a9e753d-5b08-4abd-bc4e-34abb283079d"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.260408 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a9e753d-5b08-4abd-bc4e-34abb283079d-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "1a9e753d-5b08-4abd-bc4e-34abb283079d" (UID: "1a9e753d-5b08-4abd-bc4e-34abb283079d"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.310782 4861 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1a9e753d-5b08-4abd-bc4e-34abb283079d-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.310809 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ks5pl\" (UniqueName: \"kubernetes.io/projected/1a9e753d-5b08-4abd-bc4e-34abb283079d-kube-api-access-ks5pl\") on node \"crc\" DevicePath \"\""
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.310819 4861 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1a9e753d-5b08-4abd-bc4e-34abb283079d-inventory-0\") on node \"crc\" DevicePath \"\""
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.533437 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-hzg8p" event={"ID":"1a9e753d-5b08-4abd-bc4e-34abb283079d","Type":"ContainerDied","Data":"7a8c202905a58a9743995b9fcf745e63cc0ba381a193881bf8b6420443aabafe"}
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.533502 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a8c202905a58a9743995b9fcf745e63cc0ba381a193881bf8b6420443aabafe"
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.533958 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-hzg8p"
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.538244 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"cdf65bcd9b9cdd4f48ea53c764224ffd95c410add39c2dd90b7c23f20db59da3"}
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.674963 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-openstack-openstack-cell1-w9gq5"]
Feb 19 15:19:10 crc kubenswrapper[4861]: E0219 15:19:10.675574 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a9e753d-5b08-4abd-bc4e-34abb283079d" containerName="ssh-known-hosts-openstack"
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.675592 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9e753d-5b08-4abd-bc4e-34abb283079d" containerName="ssh-known-hosts-openstack"
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.675884 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a9e753d-5b08-4abd-bc4e-34abb283079d" containerName="ssh-known-hosts-openstack"
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.676860 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-w9gq5"
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.678830 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.679324 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.679527 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.679680 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jgxb2"
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.686764 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-w9gq5"]
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.824522 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4ec4d380-cfbd-4b5a-a17a-d9815018d088-ssh-key-openstack-cell1\") pod \"run-os-openstack-openstack-cell1-w9gq5\" (UID: \"4ec4d380-cfbd-4b5a-a17a-d9815018d088\") " pod="openstack/run-os-openstack-openstack-cell1-w9gq5"
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.824590 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8j6b\" (UniqueName: \"kubernetes.io/projected/4ec4d380-cfbd-4b5a-a17a-d9815018d088-kube-api-access-x8j6b\") pod \"run-os-openstack-openstack-cell1-w9gq5\" (UID: \"4ec4d380-cfbd-4b5a-a17a-d9815018d088\") " pod="openstack/run-os-openstack-openstack-cell1-w9gq5"
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.824624 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ec4d380-cfbd-4b5a-a17a-d9815018d088-inventory\") pod \"run-os-openstack-openstack-cell1-w9gq5\" (UID: \"4ec4d380-cfbd-4b5a-a17a-d9815018d088\") " pod="openstack/run-os-openstack-openstack-cell1-w9gq5"
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.927179 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4ec4d380-cfbd-4b5a-a17a-d9815018d088-ssh-key-openstack-cell1\") pod \"run-os-openstack-openstack-cell1-w9gq5\" (UID: \"4ec4d380-cfbd-4b5a-a17a-d9815018d088\") " pod="openstack/run-os-openstack-openstack-cell1-w9gq5"
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.927273 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8j6b\" (UniqueName: \"kubernetes.io/projected/4ec4d380-cfbd-4b5a-a17a-d9815018d088-kube-api-access-x8j6b\") pod \"run-os-openstack-openstack-cell1-w9gq5\" (UID: \"4ec4d380-cfbd-4b5a-a17a-d9815018d088\") " pod="openstack/run-os-openstack-openstack-cell1-w9gq5"
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.927313 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ec4d380-cfbd-4b5a-a17a-d9815018d088-inventory\") pod \"run-os-openstack-openstack-cell1-w9gq5\" (UID: \"4ec4d380-cfbd-4b5a-a17a-d9815018d088\") " pod="openstack/run-os-openstack-openstack-cell1-w9gq5"
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.937224 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4ec4d380-cfbd-4b5a-a17a-d9815018d088-ssh-key-openstack-cell1\") pod \"run-os-openstack-openstack-cell1-w9gq5\" (UID: \"4ec4d380-cfbd-4b5a-a17a-d9815018d088\") " pod="openstack/run-os-openstack-openstack-cell1-w9gq5"
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.937361 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ec4d380-cfbd-4b5a-a17a-d9815018d088-inventory\") pod \"run-os-openstack-openstack-cell1-w9gq5\" (UID: \"4ec4d380-cfbd-4b5a-a17a-d9815018d088\") " pod="openstack/run-os-openstack-openstack-cell1-w9gq5"
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.957128 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8j6b\" (UniqueName: \"kubernetes.io/projected/4ec4d380-cfbd-4b5a-a17a-d9815018d088-kube-api-access-x8j6b\") pod \"run-os-openstack-openstack-cell1-w9gq5\" (UID: \"4ec4d380-cfbd-4b5a-a17a-d9815018d088\") " pod="openstack/run-os-openstack-openstack-cell1-w9gq5"
Feb 19 15:19:10 crc kubenswrapper[4861]: I0219 15:19:10.992658 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-w9gq5"
Feb 19 15:19:11 crc kubenswrapper[4861]: I0219 15:19:11.632185 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-w9gq5"]
Feb 19 15:19:12 crc kubenswrapper[4861]: I0219 15:19:12.562571 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-w9gq5" event={"ID":"4ec4d380-cfbd-4b5a-a17a-d9815018d088","Type":"ContainerStarted","Data":"f3ff9022407c10a08a9c4f7d190d5cebf1855b387492eb0c091660390f176596"}
Feb 19 15:19:12 crc kubenswrapper[4861]: I0219 15:19:12.564087 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-w9gq5" event={"ID":"4ec4d380-cfbd-4b5a-a17a-d9815018d088","Type":"ContainerStarted","Data":"d366ade6c2d14c0fb60ffeadd49597750895f773b2ccbcdebece27963cbf63a3"}
Feb 19 15:19:12 crc kubenswrapper[4861]: I0219 15:19:12.592161 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-openstack-openstack-cell1-w9gq5" podStartSLOduration=2.119653311
podStartE2EDuration="2.592135533s" podCreationTimestamp="2026-02-19 15:19:10 +0000 UTC" firstStartedPulling="2026-02-19 15:19:11.625970067 +0000 UTC m=+7766.287073295" lastFinishedPulling="2026-02-19 15:19:12.098452289 +0000 UTC m=+7766.759555517" observedRunningTime="2026-02-19 15:19:12.583761702 +0000 UTC m=+7767.244864960" watchObservedRunningTime="2026-02-19 15:19:12.592135533 +0000 UTC m=+7767.253238791" Feb 19 15:19:20 crc kubenswrapper[4861]: I0219 15:19:20.648938 4861 generic.go:334] "Generic (PLEG): container finished" podID="4ec4d380-cfbd-4b5a-a17a-d9815018d088" containerID="f3ff9022407c10a08a9c4f7d190d5cebf1855b387492eb0c091660390f176596" exitCode=0 Feb 19 15:19:20 crc kubenswrapper[4861]: I0219 15:19:20.649014 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-w9gq5" event={"ID":"4ec4d380-cfbd-4b5a-a17a-d9815018d088","Type":"ContainerDied","Data":"f3ff9022407c10a08a9c4f7d190d5cebf1855b387492eb0c091660390f176596"} Feb 19 15:19:22 crc kubenswrapper[4861]: I0219 15:19:22.178889 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-w9gq5" Feb 19 15:19:22 crc kubenswrapper[4861]: I0219 15:19:22.306914 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4ec4d380-cfbd-4b5a-a17a-d9815018d088-ssh-key-openstack-cell1\") pod \"4ec4d380-cfbd-4b5a-a17a-d9815018d088\" (UID: \"4ec4d380-cfbd-4b5a-a17a-d9815018d088\") " Feb 19 15:19:22 crc kubenswrapper[4861]: I0219 15:19:22.307051 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ec4d380-cfbd-4b5a-a17a-d9815018d088-inventory\") pod \"4ec4d380-cfbd-4b5a-a17a-d9815018d088\" (UID: \"4ec4d380-cfbd-4b5a-a17a-d9815018d088\") " Feb 19 15:19:22 crc kubenswrapper[4861]: I0219 15:19:22.307127 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8j6b\" (UniqueName: \"kubernetes.io/projected/4ec4d380-cfbd-4b5a-a17a-d9815018d088-kube-api-access-x8j6b\") pod \"4ec4d380-cfbd-4b5a-a17a-d9815018d088\" (UID: \"4ec4d380-cfbd-4b5a-a17a-d9815018d088\") " Feb 19 15:19:22 crc kubenswrapper[4861]: I0219 15:19:22.319270 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ec4d380-cfbd-4b5a-a17a-d9815018d088-kube-api-access-x8j6b" (OuterVolumeSpecName: "kube-api-access-x8j6b") pod "4ec4d380-cfbd-4b5a-a17a-d9815018d088" (UID: "4ec4d380-cfbd-4b5a-a17a-d9815018d088"). InnerVolumeSpecName "kube-api-access-x8j6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:19:22 crc kubenswrapper[4861]: I0219 15:19:22.348692 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ec4d380-cfbd-4b5a-a17a-d9815018d088-inventory" (OuterVolumeSpecName: "inventory") pod "4ec4d380-cfbd-4b5a-a17a-d9815018d088" (UID: "4ec4d380-cfbd-4b5a-a17a-d9815018d088"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:19:22 crc kubenswrapper[4861]: I0219 15:19:22.362817 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ec4d380-cfbd-4b5a-a17a-d9815018d088-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "4ec4d380-cfbd-4b5a-a17a-d9815018d088" (UID: "4ec4d380-cfbd-4b5a-a17a-d9815018d088"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:19:22 crc kubenswrapper[4861]: I0219 15:19:22.410956 4861 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ec4d380-cfbd-4b5a-a17a-d9815018d088-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 15:19:22 crc kubenswrapper[4861]: I0219 15:19:22.411011 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8j6b\" (UniqueName: \"kubernetes.io/projected/4ec4d380-cfbd-4b5a-a17a-d9815018d088-kube-api-access-x8j6b\") on node \"crc\" DevicePath \"\"" Feb 19 15:19:22 crc kubenswrapper[4861]: I0219 15:19:22.411032 4861 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4ec4d380-cfbd-4b5a-a17a-d9815018d088-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 15:19:22 crc kubenswrapper[4861]: I0219 15:19:22.677571 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-w9gq5" event={"ID":"4ec4d380-cfbd-4b5a-a17a-d9815018d088","Type":"ContainerDied","Data":"d366ade6c2d14c0fb60ffeadd49597750895f773b2ccbcdebece27963cbf63a3"} Feb 19 15:19:22 crc kubenswrapper[4861]: I0219 15:19:22.677628 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-w9gq5" Feb 19 15:19:22 crc kubenswrapper[4861]: I0219 15:19:22.677631 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d366ade6c2d14c0fb60ffeadd49597750895f773b2ccbcdebece27963cbf63a3" Feb 19 15:19:22 crc kubenswrapper[4861]: I0219 15:19:22.845961 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-h2jd9"] Feb 19 15:19:22 crc kubenswrapper[4861]: E0219 15:19:22.846644 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ec4d380-cfbd-4b5a-a17a-d9815018d088" containerName="run-os-openstack-openstack-cell1" Feb 19 15:19:22 crc kubenswrapper[4861]: I0219 15:19:22.846667 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ec4d380-cfbd-4b5a-a17a-d9815018d088" containerName="run-os-openstack-openstack-cell1" Feb 19 15:19:22 crc kubenswrapper[4861]: I0219 15:19:22.847130 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ec4d380-cfbd-4b5a-a17a-d9815018d088" containerName="run-os-openstack-openstack-cell1" Feb 19 15:19:22 crc kubenswrapper[4861]: I0219 15:19:22.848159 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-h2jd9" Feb 19 15:19:22 crc kubenswrapper[4861]: I0219 15:19:22.896128 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 15:19:22 crc kubenswrapper[4861]: I0219 15:19:22.896198 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-h2jd9"] Feb 19 15:19:22 crc kubenswrapper[4861]: I0219 15:19:22.896350 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jgxb2" Feb 19 15:19:22 crc kubenswrapper[4861]: I0219 15:19:22.896739 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 15:19:22 crc kubenswrapper[4861]: I0219 15:19:22.897207 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 15:19:23 crc kubenswrapper[4861]: I0219 15:19:23.032741 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2663c4e-e513-465a-9758-122b6b2d63fb-inventory\") pod \"reboot-os-openstack-openstack-cell1-h2jd9\" (UID: \"f2663c4e-e513-465a-9758-122b6b2d63fb\") " pod="openstack/reboot-os-openstack-openstack-cell1-h2jd9" Feb 19 15:19:23 crc kubenswrapper[4861]: I0219 15:19:23.033273 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4cs8\" (UniqueName: \"kubernetes.io/projected/f2663c4e-e513-465a-9758-122b6b2d63fb-kube-api-access-q4cs8\") pod \"reboot-os-openstack-openstack-cell1-h2jd9\" (UID: \"f2663c4e-e513-465a-9758-122b6b2d63fb\") " pod="openstack/reboot-os-openstack-openstack-cell1-h2jd9" Feb 19 15:19:23 crc kubenswrapper[4861]: I0219 15:19:23.034140 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f2663c4e-e513-465a-9758-122b6b2d63fb-ssh-key-openstack-cell1\") pod \"reboot-os-openstack-openstack-cell1-h2jd9\" (UID: \"f2663c4e-e513-465a-9758-122b6b2d63fb\") " pod="openstack/reboot-os-openstack-openstack-cell1-h2jd9" Feb 19 15:19:23 crc kubenswrapper[4861]: I0219 15:19:23.136673 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4cs8\" (UniqueName: \"kubernetes.io/projected/f2663c4e-e513-465a-9758-122b6b2d63fb-kube-api-access-q4cs8\") pod \"reboot-os-openstack-openstack-cell1-h2jd9\" (UID: \"f2663c4e-e513-465a-9758-122b6b2d63fb\") " pod="openstack/reboot-os-openstack-openstack-cell1-h2jd9" Feb 19 15:19:23 crc kubenswrapper[4861]: I0219 15:19:23.137014 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f2663c4e-e513-465a-9758-122b6b2d63fb-ssh-key-openstack-cell1\") pod \"reboot-os-openstack-openstack-cell1-h2jd9\" (UID: \"f2663c4e-e513-465a-9758-122b6b2d63fb\") " pod="openstack/reboot-os-openstack-openstack-cell1-h2jd9" Feb 19 15:19:23 crc kubenswrapper[4861]: I0219 15:19:23.137068 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2663c4e-e513-465a-9758-122b6b2d63fb-inventory\") pod \"reboot-os-openstack-openstack-cell1-h2jd9\" (UID: \"f2663c4e-e513-465a-9758-122b6b2d63fb\") " pod="openstack/reboot-os-openstack-openstack-cell1-h2jd9" Feb 19 15:19:23 crc kubenswrapper[4861]: I0219 15:19:23.141882 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2663c4e-e513-465a-9758-122b6b2d63fb-inventory\") pod \"reboot-os-openstack-openstack-cell1-h2jd9\" (UID: \"f2663c4e-e513-465a-9758-122b6b2d63fb\") " pod="openstack/reboot-os-openstack-openstack-cell1-h2jd9" Feb 19 15:19:23 crc kubenswrapper[4861]: I0219 
15:19:23.143065 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f2663c4e-e513-465a-9758-122b6b2d63fb-ssh-key-openstack-cell1\") pod \"reboot-os-openstack-openstack-cell1-h2jd9\" (UID: \"f2663c4e-e513-465a-9758-122b6b2d63fb\") " pod="openstack/reboot-os-openstack-openstack-cell1-h2jd9" Feb 19 15:19:23 crc kubenswrapper[4861]: I0219 15:19:23.161378 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4cs8\" (UniqueName: \"kubernetes.io/projected/f2663c4e-e513-465a-9758-122b6b2d63fb-kube-api-access-q4cs8\") pod \"reboot-os-openstack-openstack-cell1-h2jd9\" (UID: \"f2663c4e-e513-465a-9758-122b6b2d63fb\") " pod="openstack/reboot-os-openstack-openstack-cell1-h2jd9" Feb 19 15:19:23 crc kubenswrapper[4861]: I0219 15:19:23.207002 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-h2jd9" Feb 19 15:19:23 crc kubenswrapper[4861]: I0219 15:19:23.600491 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-h2jd9"] Feb 19 15:19:23 crc kubenswrapper[4861]: W0219 15:19:23.608178 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2663c4e_e513_465a_9758_122b6b2d63fb.slice/crio-ea18a239fd74923f5f90db2552d907cbf438c4310e2d40cf129f790a73f383e0 WatchSource:0}: Error finding container ea18a239fd74923f5f90db2552d907cbf438c4310e2d40cf129f790a73f383e0: Status 404 returned error can't find the container with id ea18a239fd74923f5f90db2552d907cbf438c4310e2d40cf129f790a73f383e0 Feb 19 15:19:23 crc kubenswrapper[4861]: I0219 15:19:23.690190 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-h2jd9" 
event={"ID":"f2663c4e-e513-465a-9758-122b6b2d63fb","Type":"ContainerStarted","Data":"ea18a239fd74923f5f90db2552d907cbf438c4310e2d40cf129f790a73f383e0"} Feb 19 15:19:24 crc kubenswrapper[4861]: I0219 15:19:24.702185 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-h2jd9" event={"ID":"f2663c4e-e513-465a-9758-122b6b2d63fb","Type":"ContainerStarted","Data":"8a4b7405123f66103a15d469f9909cfeaca621e88041d4400e89268ddeca8d55"} Feb 19 15:19:24 crc kubenswrapper[4861]: I0219 15:19:24.727665 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-openstack-openstack-cell1-h2jd9" podStartSLOduration=2.300743418 podStartE2EDuration="2.727636956s" podCreationTimestamp="2026-02-19 15:19:22 +0000 UTC" firstStartedPulling="2026-02-19 15:19:23.617091522 +0000 UTC m=+7778.278194760" lastFinishedPulling="2026-02-19 15:19:24.04398503 +0000 UTC m=+7778.705088298" observedRunningTime="2026-02-19 15:19:24.719939322 +0000 UTC m=+7779.381042580" watchObservedRunningTime="2026-02-19 15:19:24.727636956 +0000 UTC m=+7779.388740214" Feb 19 15:19:40 crc kubenswrapper[4861]: I0219 15:19:40.900941 4861 generic.go:334] "Generic (PLEG): container finished" podID="f2663c4e-e513-465a-9758-122b6b2d63fb" containerID="8a4b7405123f66103a15d469f9909cfeaca621e88041d4400e89268ddeca8d55" exitCode=0 Feb 19 15:19:40 crc kubenswrapper[4861]: I0219 15:19:40.901170 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-h2jd9" event={"ID":"f2663c4e-e513-465a-9758-122b6b2d63fb","Type":"ContainerDied","Data":"8a4b7405123f66103a15d469f9909cfeaca621e88041d4400e89268ddeca8d55"} Feb 19 15:19:42 crc kubenswrapper[4861]: I0219 15:19:42.391518 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-h2jd9" Feb 19 15:19:42 crc kubenswrapper[4861]: I0219 15:19:42.543500 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4cs8\" (UniqueName: \"kubernetes.io/projected/f2663c4e-e513-465a-9758-122b6b2d63fb-kube-api-access-q4cs8\") pod \"f2663c4e-e513-465a-9758-122b6b2d63fb\" (UID: \"f2663c4e-e513-465a-9758-122b6b2d63fb\") " Feb 19 15:19:42 crc kubenswrapper[4861]: I0219 15:19:42.543557 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f2663c4e-e513-465a-9758-122b6b2d63fb-ssh-key-openstack-cell1\") pod \"f2663c4e-e513-465a-9758-122b6b2d63fb\" (UID: \"f2663c4e-e513-465a-9758-122b6b2d63fb\") " Feb 19 15:19:42 crc kubenswrapper[4861]: I0219 15:19:42.543747 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2663c4e-e513-465a-9758-122b6b2d63fb-inventory\") pod \"f2663c4e-e513-465a-9758-122b6b2d63fb\" (UID: \"f2663c4e-e513-465a-9758-122b6b2d63fb\") " Feb 19 15:19:42 crc kubenswrapper[4861]: I0219 15:19:42.555486 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2663c4e-e513-465a-9758-122b6b2d63fb-kube-api-access-q4cs8" (OuterVolumeSpecName: "kube-api-access-q4cs8") pod "f2663c4e-e513-465a-9758-122b6b2d63fb" (UID: "f2663c4e-e513-465a-9758-122b6b2d63fb"). InnerVolumeSpecName "kube-api-access-q4cs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:19:42 crc kubenswrapper[4861]: I0219 15:19:42.584088 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2663c4e-e513-465a-9758-122b6b2d63fb-inventory" (OuterVolumeSpecName: "inventory") pod "f2663c4e-e513-465a-9758-122b6b2d63fb" (UID: "f2663c4e-e513-465a-9758-122b6b2d63fb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:19:42 crc kubenswrapper[4861]: I0219 15:19:42.591018 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2663c4e-e513-465a-9758-122b6b2d63fb-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "f2663c4e-e513-465a-9758-122b6b2d63fb" (UID: "f2663c4e-e513-465a-9758-122b6b2d63fb"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:19:42 crc kubenswrapper[4861]: I0219 15:19:42.646057 4861 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2663c4e-e513-465a-9758-122b6b2d63fb-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 15:19:42 crc kubenswrapper[4861]: I0219 15:19:42.646112 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4cs8\" (UniqueName: \"kubernetes.io/projected/f2663c4e-e513-465a-9758-122b6b2d63fb-kube-api-access-q4cs8\") on node \"crc\" DevicePath \"\"" Feb 19 15:19:42 crc kubenswrapper[4861]: I0219 15:19:42.646133 4861 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f2663c4e-e513-465a-9758-122b6b2d63fb-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 15:19:42 crc kubenswrapper[4861]: I0219 15:19:42.924693 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-h2jd9" event={"ID":"f2663c4e-e513-465a-9758-122b6b2d63fb","Type":"ContainerDied","Data":"ea18a239fd74923f5f90db2552d907cbf438c4310e2d40cf129f790a73f383e0"} Feb 19 15:19:42 crc kubenswrapper[4861]: I0219 15:19:42.924739 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea18a239fd74923f5f90db2552d907cbf438c4310e2d40cf129f790a73f383e0" Feb 19 15:19:42 crc kubenswrapper[4861]: I0219 15:19:42.924828 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-h2jd9" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.092288 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-skflh"] Feb 19 15:19:43 crc kubenswrapper[4861]: E0219 15:19:43.092733 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2663c4e-e513-465a-9758-122b6b2d63fb" containerName="reboot-os-openstack-openstack-cell1" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.092750 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2663c4e-e513-465a-9758-122b6b2d63fb" containerName="reboot-os-openstack-openstack-cell1" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.092949 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2663c4e-e513-465a-9758-122b6b2d63fb" containerName="reboot-os-openstack-openstack-cell1" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.093702 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.100993 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.101006 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.101178 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-libvirt-default-certs-0" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.101373 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-neutron-metadata-default-certs-0" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.101387 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.101530 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-telemetry-default-certs-0" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.101774 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jgxb2" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.103328 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-ovn-default-certs-0" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.117288 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-skflh"] Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.259021 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.259086 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.259115 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x47b2\" (UniqueName: \"kubernetes.io/projected/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-kube-api-access-x47b2\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.259152 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-openstack-cell1-neutron-metadata-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.259395 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.259477 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.259544 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.259673 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.259849 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-ssh-key-openstack-cell1\") pod 
\"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.259903 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-inventory\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.259973 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.260108 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.260305 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc 
kubenswrapper[4861]: I0219 15:19:43.260368 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.260445 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.362287 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.362350 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.362380 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-x47b2\" (UniqueName: \"kubernetes.io/projected/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-kube-api-access-x47b2\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.362504 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-openstack-cell1-neutron-metadata-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.362588 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.362619 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.362651 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.362692 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.362780 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-ssh-key-openstack-cell1\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.362810 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-inventory\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.362847 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " 
pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.362884 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.362933 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.362958 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.362993 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.367579 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.367782 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.369296 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.369392 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-openstack-cell1-neutron-metadata-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.369698 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-inventory\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: 
\"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.369802 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.369863 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.370259 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.370638 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.373045 4861 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.373390 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.374950 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-ssh-key-openstack-cell1\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.375742 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.380169 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x47b2\" (UniqueName: \"kubernetes.io/projected/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-kube-api-access-x47b2\") pod 
\"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.383377 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-skflh\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.410270 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:19:43 crc kubenswrapper[4861]: I0219 15:19:43.974887 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-skflh"] Feb 19 15:19:44 crc kubenswrapper[4861]: I0219 15:19:44.951234 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-skflh" event={"ID":"9e32920d-251a-4eea-9ef5-db5f4aad9ecd","Type":"ContainerStarted","Data":"a5308cfebe4fdc76544244a57ca3d7a015b1c270d24c09060a2ae72856cb1b60"} Feb 19 15:19:46 crc kubenswrapper[4861]: I0219 15:19:46.007649 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-skflh" event={"ID":"9e32920d-251a-4eea-9ef5-db5f4aad9ecd","Type":"ContainerStarted","Data":"9c7f9b6ab2b5d252213d8ade31bf06ed4082195524352b6a9a01f7edbd54bfcb"} Feb 19 15:19:46 crc kubenswrapper[4861]: I0219 15:19:46.053668 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-openstack-openstack-cell1-skflh" podStartSLOduration=2.397805392 podStartE2EDuration="3.053650443s" podCreationTimestamp="2026-02-19 15:19:43 +0000 UTC" firstStartedPulling="2026-02-19 
15:19:43.988033767 +0000 UTC m=+7798.649136985" lastFinishedPulling="2026-02-19 15:19:44.643878798 +0000 UTC m=+7799.304982036" observedRunningTime="2026-02-19 15:19:46.043993918 +0000 UTC m=+7800.705097166" watchObservedRunningTime="2026-02-19 15:19:46.053650443 +0000 UTC m=+7800.714753671" Feb 19 15:20:24 crc kubenswrapper[4861]: I0219 15:20:24.449061 4861 generic.go:334] "Generic (PLEG): container finished" podID="9e32920d-251a-4eea-9ef5-db5f4aad9ecd" containerID="9c7f9b6ab2b5d252213d8ade31bf06ed4082195524352b6a9a01f7edbd54bfcb" exitCode=0 Feb 19 15:20:24 crc kubenswrapper[4861]: I0219 15:20:24.449132 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-skflh" event={"ID":"9e32920d-251a-4eea-9ef5-db5f4aad9ecd","Type":"ContainerDied","Data":"9c7f9b6ab2b5d252213d8ade31bf06ed4082195524352b6a9a01f7edbd54bfcb"} Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.052458 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.189192 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-ovn-combined-ca-bundle\") pod \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.189242 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-ssh-key-openstack-cell1\") pod \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.189279 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-nova-combined-ca-bundle\") pod \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.189302 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-inventory\") pod \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.189332 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-neutron-sriov-combined-ca-bundle\") pod \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.189358 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x47b2\" (UniqueName: \"kubernetes.io/projected/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-kube-api-access-x47b2\") pod \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.189389 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-bootstrap-combined-ca-bundle\") pod \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.189455 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-neutron-metadata-combined-ca-bundle\") pod 
\"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.189509 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-neutron-dhcp-combined-ca-bundle\") pod \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.189553 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-openstack-cell1-ovn-default-certs-0\") pod \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.189607 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-telemetry-combined-ca-bundle\") pod \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.189633 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-openstack-cell1-libvirt-default-certs-0\") pod \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.189657 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-libvirt-combined-ca-bundle\") pod \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\" (UID: 
\"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.189690 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-openstack-cell1-neutron-metadata-default-certs-0\") pod \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.189737 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-openstack-cell1-telemetry-default-certs-0\") pod \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\" (UID: \"9e32920d-251a-4eea-9ef5-db5f4aad9ecd\") " Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.198350 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "9e32920d-251a-4eea-9ef5-db5f4aad9ecd" (UID: "9e32920d-251a-4eea-9ef5-db5f4aad9ecd"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.198547 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-kube-api-access-x47b2" (OuterVolumeSpecName: "kube-api-access-x47b2") pod "9e32920d-251a-4eea-9ef5-db5f4aad9ecd" (UID: "9e32920d-251a-4eea-9ef5-db5f4aad9ecd"). InnerVolumeSpecName "kube-api-access-x47b2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.198885 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-openstack-cell1-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-libvirt-default-certs-0") pod "9e32920d-251a-4eea-9ef5-db5f4aad9ecd" (UID: "9e32920d-251a-4eea-9ef5-db5f4aad9ecd"). InnerVolumeSpecName "openstack-cell1-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.198948 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "9e32920d-251a-4eea-9ef5-db5f4aad9ecd" (UID: "9e32920d-251a-4eea-9ef5-db5f4aad9ecd"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.199634 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-openstack-cell1-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-telemetry-default-certs-0") pod "9e32920d-251a-4eea-9ef5-db5f4aad9ecd" (UID: "9e32920d-251a-4eea-9ef5-db5f4aad9ecd"). InnerVolumeSpecName "openstack-cell1-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.199980 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "9e32920d-251a-4eea-9ef5-db5f4aad9ecd" (UID: "9e32920d-251a-4eea-9ef5-db5f4aad9ecd"). 
InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.200222 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "9e32920d-251a-4eea-9ef5-db5f4aad9ecd" (UID: "9e32920d-251a-4eea-9ef5-db5f4aad9ecd"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.200410 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "9e32920d-251a-4eea-9ef5-db5f4aad9ecd" (UID: "9e32920d-251a-4eea-9ef5-db5f4aad9ecd"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.201220 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "9e32920d-251a-4eea-9ef5-db5f4aad9ecd" (UID: "9e32920d-251a-4eea-9ef5-db5f4aad9ecd"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.201247 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "9e32920d-251a-4eea-9ef5-db5f4aad9ecd" (UID: "9e32920d-251a-4eea-9ef5-db5f4aad9ecd"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.201403 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "9e32920d-251a-4eea-9ef5-db5f4aad9ecd" (UID: "9e32920d-251a-4eea-9ef5-db5f4aad9ecd"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.202067 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-openstack-cell1-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-neutron-metadata-default-certs-0") pod "9e32920d-251a-4eea-9ef5-db5f4aad9ecd" (UID: "9e32920d-251a-4eea-9ef5-db5f4aad9ecd"). InnerVolumeSpecName "openstack-cell1-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.204189 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-openstack-cell1-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-ovn-default-certs-0") pod "9e32920d-251a-4eea-9ef5-db5f4aad9ecd" (UID: "9e32920d-251a-4eea-9ef5-db5f4aad9ecd"). InnerVolumeSpecName "openstack-cell1-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.232406 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-inventory" (OuterVolumeSpecName: "inventory") pod "9e32920d-251a-4eea-9ef5-db5f4aad9ecd" (UID: "9e32920d-251a-4eea-9ef5-db5f4aad9ecd"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.236936 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "9e32920d-251a-4eea-9ef5-db5f4aad9ecd" (UID: "9e32920d-251a-4eea-9ef5-db5f4aad9ecd"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.292601 4861 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.292645 4861 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-openstack-cell1-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.292659 4861 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.292674 4861 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-openstack-cell1-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.292687 4861 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-openstack-cell1-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.292698 4861 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.292709 4861 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.292720 4861 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.292732 4861 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.292743 4861 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.292754 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x47b2\" (UniqueName: \"kubernetes.io/projected/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-kube-api-access-x47b2\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.292770 4861 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.292780 4861 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.292792 4861 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.292805 4861 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9e32920d-251a-4eea-9ef5-db5f4aad9ecd-openstack-cell1-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.470003 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-skflh" event={"ID":"9e32920d-251a-4eea-9ef5-db5f4aad9ecd","Type":"ContainerDied","Data":"a5308cfebe4fdc76544244a57ca3d7a015b1c270d24c09060a2ae72856cb1b60"} Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.470040 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5308cfebe4fdc76544244a57ca3d7a015b1c270d24c09060a2ae72856cb1b60" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.470044 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-skflh" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.651827 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-openstack-openstack-cell1-7ptwt"] Feb 19 15:20:26 crc kubenswrapper[4861]: E0219 15:20:26.652353 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e32920d-251a-4eea-9ef5-db5f4aad9ecd" containerName="install-certs-openstack-openstack-cell1" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.652376 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e32920d-251a-4eea-9ef5-db5f4aad9ecd" containerName="install-certs-openstack-openstack-cell1" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.652626 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e32920d-251a-4eea-9ef5-db5f4aad9ecd" containerName="install-certs-openstack-openstack-cell1" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.653364 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-7ptwt" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.655929 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.656077 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.656213 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.656321 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jgxb2" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.656455 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.666735 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-7ptwt"] Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.803545 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/a30771c6-fe90-4bb6-b97f-f7f2df485087-ssh-key-openstack-cell1\") pod \"ovn-openstack-openstack-cell1-7ptwt\" (UID: \"a30771c6-fe90-4bb6-b97f-f7f2df485087\") " pod="openstack/ovn-openstack-openstack-cell1-7ptwt" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.803810 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a30771c6-fe90-4bb6-b97f-f7f2df485087-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-7ptwt\" (UID: \"a30771c6-fe90-4bb6-b97f-f7f2df485087\") " pod="openstack/ovn-openstack-openstack-cell1-7ptwt" Feb 19 15:20:26 
crc kubenswrapper[4861]: I0219 15:20:26.803860 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqg7c\" (UniqueName: \"kubernetes.io/projected/a30771c6-fe90-4bb6-b97f-f7f2df485087-kube-api-access-pqg7c\") pod \"ovn-openstack-openstack-cell1-7ptwt\" (UID: \"a30771c6-fe90-4bb6-b97f-f7f2df485087\") " pod="openstack/ovn-openstack-openstack-cell1-7ptwt" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.803983 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a30771c6-fe90-4bb6-b97f-f7f2df485087-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-7ptwt\" (UID: \"a30771c6-fe90-4bb6-b97f-f7f2df485087\") " pod="openstack/ovn-openstack-openstack-cell1-7ptwt" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.804179 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a30771c6-fe90-4bb6-b97f-f7f2df485087-inventory\") pod \"ovn-openstack-openstack-cell1-7ptwt\" (UID: \"a30771c6-fe90-4bb6-b97f-f7f2df485087\") " pod="openstack/ovn-openstack-openstack-cell1-7ptwt" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.906843 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a30771c6-fe90-4bb6-b97f-f7f2df485087-inventory\") pod \"ovn-openstack-openstack-cell1-7ptwt\" (UID: \"a30771c6-fe90-4bb6-b97f-f7f2df485087\") " pod="openstack/ovn-openstack-openstack-cell1-7ptwt" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.906922 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/a30771c6-fe90-4bb6-b97f-f7f2df485087-ssh-key-openstack-cell1\") pod \"ovn-openstack-openstack-cell1-7ptwt\" (UID: 
\"a30771c6-fe90-4bb6-b97f-f7f2df485087\") " pod="openstack/ovn-openstack-openstack-cell1-7ptwt" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.907032 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a30771c6-fe90-4bb6-b97f-f7f2df485087-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-7ptwt\" (UID: \"a30771c6-fe90-4bb6-b97f-f7f2df485087\") " pod="openstack/ovn-openstack-openstack-cell1-7ptwt" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.907060 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqg7c\" (UniqueName: \"kubernetes.io/projected/a30771c6-fe90-4bb6-b97f-f7f2df485087-kube-api-access-pqg7c\") pod \"ovn-openstack-openstack-cell1-7ptwt\" (UID: \"a30771c6-fe90-4bb6-b97f-f7f2df485087\") " pod="openstack/ovn-openstack-openstack-cell1-7ptwt" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.907128 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a30771c6-fe90-4bb6-b97f-f7f2df485087-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-7ptwt\" (UID: \"a30771c6-fe90-4bb6-b97f-f7f2df485087\") " pod="openstack/ovn-openstack-openstack-cell1-7ptwt" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.909018 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a30771c6-fe90-4bb6-b97f-f7f2df485087-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-7ptwt\" (UID: \"a30771c6-fe90-4bb6-b97f-f7f2df485087\") " pod="openstack/ovn-openstack-openstack-cell1-7ptwt" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.911353 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/a30771c6-fe90-4bb6-b97f-f7f2df485087-ssh-key-openstack-cell1\") pod \"ovn-openstack-openstack-cell1-7ptwt\" (UID: \"a30771c6-fe90-4bb6-b97f-f7f2df485087\") " pod="openstack/ovn-openstack-openstack-cell1-7ptwt" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.911488 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a30771c6-fe90-4bb6-b97f-f7f2df485087-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-7ptwt\" (UID: \"a30771c6-fe90-4bb6-b97f-f7f2df485087\") " pod="openstack/ovn-openstack-openstack-cell1-7ptwt" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.912543 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a30771c6-fe90-4bb6-b97f-f7f2df485087-inventory\") pod \"ovn-openstack-openstack-cell1-7ptwt\" (UID: \"a30771c6-fe90-4bb6-b97f-f7f2df485087\") " pod="openstack/ovn-openstack-openstack-cell1-7ptwt" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.933483 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqg7c\" (UniqueName: \"kubernetes.io/projected/a30771c6-fe90-4bb6-b97f-f7f2df485087-kube-api-access-pqg7c\") pod \"ovn-openstack-openstack-cell1-7ptwt\" (UID: \"a30771c6-fe90-4bb6-b97f-f7f2df485087\") " pod="openstack/ovn-openstack-openstack-cell1-7ptwt" Feb 19 15:20:26 crc kubenswrapper[4861]: I0219 15:20:26.984150 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-7ptwt" Feb 19 15:20:27 crc kubenswrapper[4861]: I0219 15:20:27.561356 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-7ptwt"] Feb 19 15:20:28 crc kubenswrapper[4861]: I0219 15:20:28.500596 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-7ptwt" event={"ID":"a30771c6-fe90-4bb6-b97f-f7f2df485087","Type":"ContainerStarted","Data":"f36ab9795629cf662d6382283ce840182e23b9c51779849370dfd10a3ad1b3d1"} Feb 19 15:20:28 crc kubenswrapper[4861]: I0219 15:20:28.501276 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-7ptwt" event={"ID":"a30771c6-fe90-4bb6-b97f-f7f2df485087","Type":"ContainerStarted","Data":"e85b1d0c958ee1042a55635be768c662ffaaa30c15665fa3460ef2d9746f7a02"} Feb 19 15:20:28 crc kubenswrapper[4861]: I0219 15:20:28.523816 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-openstack-openstack-cell1-7ptwt" podStartSLOduration=2.098238456 podStartE2EDuration="2.523797238s" podCreationTimestamp="2026-02-19 15:20:26 +0000 UTC" firstStartedPulling="2026-02-19 15:20:27.574094858 +0000 UTC m=+7842.235198086" lastFinishedPulling="2026-02-19 15:20:27.99965364 +0000 UTC m=+7842.660756868" observedRunningTime="2026-02-19 15:20:28.520241715 +0000 UTC m=+7843.181344973" watchObservedRunningTime="2026-02-19 15:20:28.523797238 +0000 UTC m=+7843.184900456" Feb 19 15:21:33 crc kubenswrapper[4861]: I0219 15:21:33.307768 4861 generic.go:334] "Generic (PLEG): container finished" podID="a30771c6-fe90-4bb6-b97f-f7f2df485087" containerID="f36ab9795629cf662d6382283ce840182e23b9c51779849370dfd10a3ad1b3d1" exitCode=0 Feb 19 15:21:33 crc kubenswrapper[4861]: I0219 15:21:33.308509 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-7ptwt" 
event={"ID":"a30771c6-fe90-4bb6-b97f-f7f2df485087","Type":"ContainerDied","Data":"f36ab9795629cf662d6382283ce840182e23b9c51779849370dfd10a3ad1b3d1"} Feb 19 15:21:33 crc kubenswrapper[4861]: I0219 15:21:33.834407 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:21:33 crc kubenswrapper[4861]: I0219 15:21:33.834820 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:21:34 crc kubenswrapper[4861]: I0219 15:21:34.895499 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-7ptwt" Feb 19 15:21:34 crc kubenswrapper[4861]: I0219 15:21:34.976009 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a30771c6-fe90-4bb6-b97f-f7f2df485087-inventory\") pod \"a30771c6-fe90-4bb6-b97f-f7f2df485087\" (UID: \"a30771c6-fe90-4bb6-b97f-f7f2df485087\") " Feb 19 15:21:34 crc kubenswrapper[4861]: I0219 15:21:34.976151 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a30771c6-fe90-4bb6-b97f-f7f2df485087-ovn-combined-ca-bundle\") pod \"a30771c6-fe90-4bb6-b97f-f7f2df485087\" (UID: \"a30771c6-fe90-4bb6-b97f-f7f2df485087\") " Feb 19 15:21:34 crc kubenswrapper[4861]: I0219 15:21:34.976295 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/a30771c6-fe90-4bb6-b97f-f7f2df485087-ssh-key-openstack-cell1\") pod \"a30771c6-fe90-4bb6-b97f-f7f2df485087\" (UID: \"a30771c6-fe90-4bb6-b97f-f7f2df485087\") " Feb 19 15:21:34 crc kubenswrapper[4861]: I0219 15:21:34.976326 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a30771c6-fe90-4bb6-b97f-f7f2df485087-ovncontroller-config-0\") pod \"a30771c6-fe90-4bb6-b97f-f7f2df485087\" (UID: \"a30771c6-fe90-4bb6-b97f-f7f2df485087\") " Feb 19 15:21:34 crc kubenswrapper[4861]: I0219 15:21:34.976394 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqg7c\" (UniqueName: \"kubernetes.io/projected/a30771c6-fe90-4bb6-b97f-f7f2df485087-kube-api-access-pqg7c\") pod \"a30771c6-fe90-4bb6-b97f-f7f2df485087\" (UID: \"a30771c6-fe90-4bb6-b97f-f7f2df485087\") " Feb 19 15:21:34 crc kubenswrapper[4861]: I0219 15:21:34.982018 4861 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a30771c6-fe90-4bb6-b97f-f7f2df485087-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "a30771c6-fe90-4bb6-b97f-f7f2df485087" (UID: "a30771c6-fe90-4bb6-b97f-f7f2df485087"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:21:34 crc kubenswrapper[4861]: I0219 15:21:34.982790 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a30771c6-fe90-4bb6-b97f-f7f2df485087-kube-api-access-pqg7c" (OuterVolumeSpecName: "kube-api-access-pqg7c") pod "a30771c6-fe90-4bb6-b97f-f7f2df485087" (UID: "a30771c6-fe90-4bb6-b97f-f7f2df485087"). InnerVolumeSpecName "kube-api-access-pqg7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.007566 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a30771c6-fe90-4bb6-b97f-f7f2df485087-inventory" (OuterVolumeSpecName: "inventory") pod "a30771c6-fe90-4bb6-b97f-f7f2df485087" (UID: "a30771c6-fe90-4bb6-b97f-f7f2df485087"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.027652 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a30771c6-fe90-4bb6-b97f-f7f2df485087-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "a30771c6-fe90-4bb6-b97f-f7f2df485087" (UID: "a30771c6-fe90-4bb6-b97f-f7f2df485087"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.034584 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a30771c6-fe90-4bb6-b97f-f7f2df485087-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "a30771c6-fe90-4bb6-b97f-f7f2df485087" (UID: "a30771c6-fe90-4bb6-b97f-f7f2df485087"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.080621 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqg7c\" (UniqueName: \"kubernetes.io/projected/a30771c6-fe90-4bb6-b97f-f7f2df485087-kube-api-access-pqg7c\") on node \"crc\" DevicePath \"\"" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.081156 4861 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a30771c6-fe90-4bb6-b97f-f7f2df485087-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.081172 4861 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a30771c6-fe90-4bb6-b97f-f7f2df485087-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.081184 4861 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/a30771c6-fe90-4bb6-b97f-f7f2df485087-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.081196 4861 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a30771c6-fe90-4bb6-b97f-f7f2df485087-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.342742 4861 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-7ptwt" event={"ID":"a30771c6-fe90-4bb6-b97f-f7f2df485087","Type":"ContainerDied","Data":"e85b1d0c958ee1042a55635be768c662ffaaa30c15665fa3460ef2d9746f7a02"} Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.342800 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e85b1d0c958ee1042a55635be768c662ffaaa30c15665fa3460ef2d9746f7a02" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.342884 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-7ptwt" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.496812 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-xzqsm"] Feb 19 15:21:35 crc kubenswrapper[4861]: E0219 15:21:35.497303 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a30771c6-fe90-4bb6-b97f-f7f2df485087" containerName="ovn-openstack-openstack-cell1" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.497322 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a30771c6-fe90-4bb6-b97f-f7f2df485087" containerName="ovn-openstack-openstack-cell1" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.497590 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a30771c6-fe90-4bb6-b97f-f7f2df485087" containerName="ovn-openstack-openstack-cell1" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.498460 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-xzqsm" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.502282 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.502580 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.507249 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jgxb2" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.507535 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.507739 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.508492 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.527363 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-xzqsm"] Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.593566 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-xzqsm\" (UID: \"7c70f2cb-04ad-4844-ac5b-262e93aca1e7\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xzqsm" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.593713 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-xzqsm\" (UID: \"7c70f2cb-04ad-4844-ac5b-262e93aca1e7\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xzqsm" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.593763 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-ssh-key-openstack-cell1\") pod \"neutron-metadata-openstack-openstack-cell1-xzqsm\" (UID: \"7c70f2cb-04ad-4844-ac5b-262e93aca1e7\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xzqsm" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.593783 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-xzqsm\" (UID: \"7c70f2cb-04ad-4844-ac5b-262e93aca1e7\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xzqsm" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.593832 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-xzqsm\" (UID: \"7c70f2cb-04ad-4844-ac5b-262e93aca1e7\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xzqsm" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.593862 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd7xj\" (UniqueName: \"kubernetes.io/projected/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-kube-api-access-gd7xj\") pod 
\"neutron-metadata-openstack-openstack-cell1-xzqsm\" (UID: \"7c70f2cb-04ad-4844-ac5b-262e93aca1e7\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xzqsm" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.695163 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-ssh-key-openstack-cell1\") pod \"neutron-metadata-openstack-openstack-cell1-xzqsm\" (UID: \"7c70f2cb-04ad-4844-ac5b-262e93aca1e7\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xzqsm" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.695210 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-xzqsm\" (UID: \"7c70f2cb-04ad-4844-ac5b-262e93aca1e7\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xzqsm" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.695250 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-xzqsm\" (UID: \"7c70f2cb-04ad-4844-ac5b-262e93aca1e7\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xzqsm" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.695311 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd7xj\" (UniqueName: \"kubernetes.io/projected/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-kube-api-access-gd7xj\") pod \"neutron-metadata-openstack-openstack-cell1-xzqsm\" (UID: \"7c70f2cb-04ad-4844-ac5b-262e93aca1e7\") " 
pod="openstack/neutron-metadata-openstack-openstack-cell1-xzqsm" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.695340 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-xzqsm\" (UID: \"7c70f2cb-04ad-4844-ac5b-262e93aca1e7\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xzqsm" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.696174 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-xzqsm\" (UID: \"7c70f2cb-04ad-4844-ac5b-262e93aca1e7\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xzqsm" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.701175 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-xzqsm\" (UID: \"7c70f2cb-04ad-4844-ac5b-262e93aca1e7\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xzqsm" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.701608 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-xzqsm\" (UID: \"7c70f2cb-04ad-4844-ac5b-262e93aca1e7\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xzqsm" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.706029 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-ssh-key-openstack-cell1\") pod \"neutron-metadata-openstack-openstack-cell1-xzqsm\" (UID: \"7c70f2cb-04ad-4844-ac5b-262e93aca1e7\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xzqsm" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.712056 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-xzqsm\" (UID: \"7c70f2cb-04ad-4844-ac5b-262e93aca1e7\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xzqsm" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.714922 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-xzqsm\" (UID: \"7c70f2cb-04ad-4844-ac5b-262e93aca1e7\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xzqsm" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.719769 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd7xj\" (UniqueName: \"kubernetes.io/projected/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-kube-api-access-gd7xj\") pod \"neutron-metadata-openstack-openstack-cell1-xzqsm\" (UID: \"7c70f2cb-04ad-4844-ac5b-262e93aca1e7\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xzqsm" Feb 19 15:21:35 crc kubenswrapper[4861]: I0219 15:21:35.828145 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-xzqsm" Feb 19 15:21:36 crc kubenswrapper[4861]: I0219 15:21:36.394085 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-xzqsm"] Feb 19 15:21:36 crc kubenswrapper[4861]: I0219 15:21:36.400482 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 15:21:37 crc kubenswrapper[4861]: I0219 15:21:37.366891 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-xzqsm" event={"ID":"7c70f2cb-04ad-4844-ac5b-262e93aca1e7","Type":"ContainerStarted","Data":"aeb6898a59b91a30de9feeccb4d15614428a7f77e21b507fe118a47e5524d0d9"} Feb 19 15:21:37 crc kubenswrapper[4861]: I0219 15:21:37.367362 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-xzqsm" event={"ID":"7c70f2cb-04ad-4844-ac5b-262e93aca1e7","Type":"ContainerStarted","Data":"7fc2b3f04f68eb4ab284fd7305d8d25f0c0bbb089d68c05e5b5846ed7e698906"} Feb 19 15:21:37 crc kubenswrapper[4861]: I0219 15:21:37.402873 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-openstack-openstack-cell1-xzqsm" podStartSLOduration=1.863344442 podStartE2EDuration="2.402847428s" podCreationTimestamp="2026-02-19 15:21:35 +0000 UTC" firstStartedPulling="2026-02-19 15:21:36.400252139 +0000 UTC m=+7911.061355357" lastFinishedPulling="2026-02-19 15:21:36.939755095 +0000 UTC m=+7911.600858343" observedRunningTime="2026-02-19 15:21:37.398562825 +0000 UTC m=+7912.059666083" watchObservedRunningTime="2026-02-19 15:21:37.402847428 +0000 UTC m=+7912.063950696" Feb 19 15:22:03 crc kubenswrapper[4861]: I0219 15:22:03.834197 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:22:03 crc kubenswrapper[4861]: I0219 15:22:03.834870 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:22:32 crc kubenswrapper[4861]: I0219 15:22:32.138279 4861 generic.go:334] "Generic (PLEG): container finished" podID="7c70f2cb-04ad-4844-ac5b-262e93aca1e7" containerID="aeb6898a59b91a30de9feeccb4d15614428a7f77e21b507fe118a47e5524d0d9" exitCode=0 Feb 19 15:22:32 crc kubenswrapper[4861]: I0219 15:22:32.138455 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-xzqsm" event={"ID":"7c70f2cb-04ad-4844-ac5b-262e93aca1e7","Type":"ContainerDied","Data":"aeb6898a59b91a30de9feeccb4d15614428a7f77e21b507fe118a47e5524d0d9"} Feb 19 15:22:33 crc kubenswrapper[4861]: I0219 15:22:33.748261 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-xzqsm" Feb 19 15:22:33 crc kubenswrapper[4861]: I0219 15:22:33.834011 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:22:33 crc kubenswrapper[4861]: I0219 15:22:33.834099 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:22:33 crc kubenswrapper[4861]: I0219 15:22:33.834163 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 15:22:33 crc kubenswrapper[4861]: I0219 15:22:33.835408 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cdf65bcd9b9cdd4f48ea53c764224ffd95c410add39c2dd90b7c23f20db59da3"} pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 15:22:33 crc kubenswrapper[4861]: I0219 15:22:33.835537 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" containerID="cri-o://cdf65bcd9b9cdd4f48ea53c764224ffd95c410add39c2dd90b7c23f20db59da3" gracePeriod=600 Feb 19 15:22:33 crc kubenswrapper[4861]: I0219 15:22:33.850114 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-ssh-key-openstack-cell1\") pod \"7c70f2cb-04ad-4844-ac5b-262e93aca1e7\" (UID: \"7c70f2cb-04ad-4844-ac5b-262e93aca1e7\") " Feb 19 15:22:33 crc kubenswrapper[4861]: I0219 15:22:33.850447 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-inventory\") pod \"7c70f2cb-04ad-4844-ac5b-262e93aca1e7\" (UID: \"7c70f2cb-04ad-4844-ac5b-262e93aca1e7\") " Feb 19 15:22:33 crc kubenswrapper[4861]: I0219 15:22:33.850503 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-neutron-metadata-combined-ca-bundle\") pod \"7c70f2cb-04ad-4844-ac5b-262e93aca1e7\" (UID: \"7c70f2cb-04ad-4844-ac5b-262e93aca1e7\") " Feb 19 15:22:33 crc kubenswrapper[4861]: I0219 15:22:33.850543 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-nova-metadata-neutron-config-0\") pod \"7c70f2cb-04ad-4844-ac5b-262e93aca1e7\" (UID: \"7c70f2cb-04ad-4844-ac5b-262e93aca1e7\") " Feb 19 15:22:33 crc kubenswrapper[4861]: I0219 15:22:33.850641 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"7c70f2cb-04ad-4844-ac5b-262e93aca1e7\" (UID: \"7c70f2cb-04ad-4844-ac5b-262e93aca1e7\") " Feb 19 15:22:33 crc kubenswrapper[4861]: I0219 15:22:33.850687 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd7xj\" (UniqueName: 
\"kubernetes.io/projected/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-kube-api-access-gd7xj\") pod \"7c70f2cb-04ad-4844-ac5b-262e93aca1e7\" (UID: \"7c70f2cb-04ad-4844-ac5b-262e93aca1e7\") " Feb 19 15:22:33 crc kubenswrapper[4861]: I0219 15:22:33.862076 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-kube-api-access-gd7xj" (OuterVolumeSpecName: "kube-api-access-gd7xj") pod "7c70f2cb-04ad-4844-ac5b-262e93aca1e7" (UID: "7c70f2cb-04ad-4844-ac5b-262e93aca1e7"). InnerVolumeSpecName "kube-api-access-gd7xj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:22:33 crc kubenswrapper[4861]: I0219 15:22:33.870025 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "7c70f2cb-04ad-4844-ac5b-262e93aca1e7" (UID: "7c70f2cb-04ad-4844-ac5b-262e93aca1e7"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:22:33 crc kubenswrapper[4861]: I0219 15:22:33.888232 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-inventory" (OuterVolumeSpecName: "inventory") pod "7c70f2cb-04ad-4844-ac5b-262e93aca1e7" (UID: "7c70f2cb-04ad-4844-ac5b-262e93aca1e7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:22:33 crc kubenswrapper[4861]: I0219 15:22:33.888758 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "7c70f2cb-04ad-4844-ac5b-262e93aca1e7" (UID: "7c70f2cb-04ad-4844-ac5b-262e93aca1e7"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:22:33 crc kubenswrapper[4861]: I0219 15:22:33.896037 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "7c70f2cb-04ad-4844-ac5b-262e93aca1e7" (UID: "7c70f2cb-04ad-4844-ac5b-262e93aca1e7"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:22:33 crc kubenswrapper[4861]: I0219 15:22:33.901400 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "7c70f2cb-04ad-4844-ac5b-262e93aca1e7" (UID: "7c70f2cb-04ad-4844-ac5b-262e93aca1e7"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:22:33 crc kubenswrapper[4861]: I0219 15:22:33.953395 4861 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 15:22:33 crc kubenswrapper[4861]: I0219 15:22:33.953480 4861 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:22:33 crc kubenswrapper[4861]: I0219 15:22:33.953503 4861 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:22:33 crc kubenswrapper[4861]: I0219 15:22:33.953525 4861 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:22:33 crc kubenswrapper[4861]: I0219 15:22:33.953548 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd7xj\" (UniqueName: \"kubernetes.io/projected/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-kube-api-access-gd7xj\") on node \"crc\" DevicePath \"\"" Feb 19 15:22:33 crc kubenswrapper[4861]: I0219 15:22:33.953601 4861 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7c70f2cb-04ad-4844-ac5b-262e93aca1e7-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 15:22:34 crc kubenswrapper[4861]: I0219 15:22:34.170139 4861 generic.go:334] "Generic (PLEG): container finished" podID="478e6971-05ac-43f2-99a2-cd93644c6227" 
containerID="cdf65bcd9b9cdd4f48ea53c764224ffd95c410add39c2dd90b7c23f20db59da3" exitCode=0 Feb 19 15:22:34 crc kubenswrapper[4861]: I0219 15:22:34.170209 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerDied","Data":"cdf65bcd9b9cdd4f48ea53c764224ffd95c410add39c2dd90b7c23f20db59da3"} Feb 19 15:22:34 crc kubenswrapper[4861]: I0219 15:22:34.170650 4861 scope.go:117] "RemoveContainer" containerID="2b191d90e4bd0f98b51177d6ba5f981a064ea98e9fc2a0839977be6798eb1452" Feb 19 15:22:34 crc kubenswrapper[4861]: I0219 15:22:34.174204 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-xzqsm" event={"ID":"7c70f2cb-04ad-4844-ac5b-262e93aca1e7","Type":"ContainerDied","Data":"7fc2b3f04f68eb4ab284fd7305d8d25f0c0bbb089d68c05e5b5846ed7e698906"} Feb 19 15:22:34 crc kubenswrapper[4861]: I0219 15:22:34.174278 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fc2b3f04f68eb4ab284fd7305d8d25f0c0bbb089d68c05e5b5846ed7e698906" Feb 19 15:22:34 crc kubenswrapper[4861]: I0219 15:22:34.174298 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-xzqsm" Feb 19 15:22:34 crc kubenswrapper[4861]: I0219 15:22:34.295245 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-klvns"] Feb 19 15:22:34 crc kubenswrapper[4861]: E0219 15:22:34.295858 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c70f2cb-04ad-4844-ac5b-262e93aca1e7" containerName="neutron-metadata-openstack-openstack-cell1" Feb 19 15:22:34 crc kubenswrapper[4861]: I0219 15:22:34.295888 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c70f2cb-04ad-4844-ac5b-262e93aca1e7" containerName="neutron-metadata-openstack-openstack-cell1" Feb 19 15:22:34 crc kubenswrapper[4861]: I0219 15:22:34.296221 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c70f2cb-04ad-4844-ac5b-262e93aca1e7" containerName="neutron-metadata-openstack-openstack-cell1" Feb 19 15:22:34 crc kubenswrapper[4861]: I0219 15:22:34.297631 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-klvns" Feb 19 15:22:34 crc kubenswrapper[4861]: I0219 15:22:34.300900 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jgxb2" Feb 19 15:22:34 crc kubenswrapper[4861]: I0219 15:22:34.300899 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 19 15:22:34 crc kubenswrapper[4861]: I0219 15:22:34.300995 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 15:22:34 crc kubenswrapper[4861]: I0219 15:22:34.301032 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 15:22:34 crc kubenswrapper[4861]: I0219 15:22:34.301216 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 15:22:34 crc kubenswrapper[4861]: I0219 15:22:34.303582 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-klvns"] Feb 19 15:22:34 crc kubenswrapper[4861]: I0219 15:22:34.465376 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03e9e117-86ff-40f4-97c2-bbb611cd3cd9-inventory\") pod \"libvirt-openstack-openstack-cell1-klvns\" (UID: \"03e9e117-86ff-40f4-97c2-bbb611cd3cd9\") " pod="openstack/libvirt-openstack-openstack-cell1-klvns" Feb 19 15:22:34 crc kubenswrapper[4861]: I0219 15:22:34.466004 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfrzs\" (UniqueName: \"kubernetes.io/projected/03e9e117-86ff-40f4-97c2-bbb611cd3cd9-kube-api-access-xfrzs\") pod \"libvirt-openstack-openstack-cell1-klvns\" (UID: \"03e9e117-86ff-40f4-97c2-bbb611cd3cd9\") " pod="openstack/libvirt-openstack-openstack-cell1-klvns" Feb 19 15:22:34 crc 
kubenswrapper[4861]: I0219 15:22:34.466128 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03e9e117-86ff-40f4-97c2-bbb611cd3cd9-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-klvns\" (UID: \"03e9e117-86ff-40f4-97c2-bbb611cd3cd9\") " pod="openstack/libvirt-openstack-openstack-cell1-klvns" Feb 19 15:22:34 crc kubenswrapper[4861]: I0219 15:22:34.466236 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/03e9e117-86ff-40f4-97c2-bbb611cd3cd9-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-klvns\" (UID: \"03e9e117-86ff-40f4-97c2-bbb611cd3cd9\") " pod="openstack/libvirt-openstack-openstack-cell1-klvns" Feb 19 15:22:34 crc kubenswrapper[4861]: I0219 15:22:34.466512 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/03e9e117-86ff-40f4-97c2-bbb611cd3cd9-ssh-key-openstack-cell1\") pod \"libvirt-openstack-openstack-cell1-klvns\" (UID: \"03e9e117-86ff-40f4-97c2-bbb611cd3cd9\") " pod="openstack/libvirt-openstack-openstack-cell1-klvns" Feb 19 15:22:34 crc kubenswrapper[4861]: I0219 15:22:34.568281 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfrzs\" (UniqueName: \"kubernetes.io/projected/03e9e117-86ff-40f4-97c2-bbb611cd3cd9-kube-api-access-xfrzs\") pod \"libvirt-openstack-openstack-cell1-klvns\" (UID: \"03e9e117-86ff-40f4-97c2-bbb611cd3cd9\") " pod="openstack/libvirt-openstack-openstack-cell1-klvns" Feb 19 15:22:34 crc kubenswrapper[4861]: I0219 15:22:34.568331 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/03e9e117-86ff-40f4-97c2-bbb611cd3cd9-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-klvns\" (UID: \"03e9e117-86ff-40f4-97c2-bbb611cd3cd9\") " pod="openstack/libvirt-openstack-openstack-cell1-klvns" Feb 19 15:22:34 crc kubenswrapper[4861]: I0219 15:22:34.568364 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/03e9e117-86ff-40f4-97c2-bbb611cd3cd9-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-klvns\" (UID: \"03e9e117-86ff-40f4-97c2-bbb611cd3cd9\") " pod="openstack/libvirt-openstack-openstack-cell1-klvns" Feb 19 15:22:34 crc kubenswrapper[4861]: I0219 15:22:34.568435 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/03e9e117-86ff-40f4-97c2-bbb611cd3cd9-ssh-key-openstack-cell1\") pod \"libvirt-openstack-openstack-cell1-klvns\" (UID: \"03e9e117-86ff-40f4-97c2-bbb611cd3cd9\") " pod="openstack/libvirt-openstack-openstack-cell1-klvns" Feb 19 15:22:34 crc kubenswrapper[4861]: I0219 15:22:34.568527 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03e9e117-86ff-40f4-97c2-bbb611cd3cd9-inventory\") pod \"libvirt-openstack-openstack-cell1-klvns\" (UID: \"03e9e117-86ff-40f4-97c2-bbb611cd3cd9\") " pod="openstack/libvirt-openstack-openstack-cell1-klvns" Feb 19 15:22:34 crc kubenswrapper[4861]: I0219 15:22:34.573416 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03e9e117-86ff-40f4-97c2-bbb611cd3cd9-inventory\") pod \"libvirt-openstack-openstack-cell1-klvns\" (UID: \"03e9e117-86ff-40f4-97c2-bbb611cd3cd9\") " pod="openstack/libvirt-openstack-openstack-cell1-klvns" Feb 19 15:22:34 crc kubenswrapper[4861]: I0219 15:22:34.573521 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/03e9e117-86ff-40f4-97c2-bbb611cd3cd9-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-klvns\" (UID: \"03e9e117-86ff-40f4-97c2-bbb611cd3cd9\") " pod="openstack/libvirt-openstack-openstack-cell1-klvns" Feb 19 15:22:34 crc kubenswrapper[4861]: I0219 15:22:34.573716 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/03e9e117-86ff-40f4-97c2-bbb611cd3cd9-ssh-key-openstack-cell1\") pod \"libvirt-openstack-openstack-cell1-klvns\" (UID: \"03e9e117-86ff-40f4-97c2-bbb611cd3cd9\") " pod="openstack/libvirt-openstack-openstack-cell1-klvns" Feb 19 15:22:34 crc kubenswrapper[4861]: I0219 15:22:34.574542 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03e9e117-86ff-40f4-97c2-bbb611cd3cd9-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-klvns\" (UID: \"03e9e117-86ff-40f4-97c2-bbb611cd3cd9\") " pod="openstack/libvirt-openstack-openstack-cell1-klvns" Feb 19 15:22:34 crc kubenswrapper[4861]: I0219 15:22:34.590171 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfrzs\" (UniqueName: \"kubernetes.io/projected/03e9e117-86ff-40f4-97c2-bbb611cd3cd9-kube-api-access-xfrzs\") pod \"libvirt-openstack-openstack-cell1-klvns\" (UID: \"03e9e117-86ff-40f4-97c2-bbb611cd3cd9\") " pod="openstack/libvirt-openstack-openstack-cell1-klvns" Feb 19 15:22:34 crc kubenswrapper[4861]: I0219 15:22:34.635652 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-klvns" Feb 19 15:22:35 crc kubenswrapper[4861]: I0219 15:22:35.190332 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"98db6d963d362b2a070428e591c72e25891a9603f2f3e1be3216ab1074715193"} Feb 19 15:22:35 crc kubenswrapper[4861]: I0219 15:22:35.253288 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-klvns"] Feb 19 15:22:36 crc kubenswrapper[4861]: I0219 15:22:36.203604 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-klvns" event={"ID":"03e9e117-86ff-40f4-97c2-bbb611cd3cd9","Type":"ContainerStarted","Data":"662fb4090e66469355a2111c1a9314b93c0da573ba4945537aeabaf694f34c9e"} Feb 19 15:22:36 crc kubenswrapper[4861]: I0219 15:22:36.204065 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-klvns" event={"ID":"03e9e117-86ff-40f4-97c2-bbb611cd3cd9","Type":"ContainerStarted","Data":"1666084be61965987089c9d09553fb84b92af12b2d2da8ef97c99e654d92fce7"} Feb 19 15:22:36 crc kubenswrapper[4861]: I0219 15:22:36.235142 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-openstack-openstack-cell1-klvns" podStartSLOduration=1.7987487130000002 podStartE2EDuration="2.235121642s" podCreationTimestamp="2026-02-19 15:22:34 +0000 UTC" firstStartedPulling="2026-02-19 15:22:35.250555495 +0000 UTC m=+7969.911658723" lastFinishedPulling="2026-02-19 15:22:35.686928414 +0000 UTC m=+7970.348031652" observedRunningTime="2026-02-19 15:22:36.22277689 +0000 UTC m=+7970.883880128" watchObservedRunningTime="2026-02-19 15:22:36.235121642 +0000 UTC m=+7970.896224880" Feb 19 15:24:14 crc kubenswrapper[4861]: I0219 15:24:14.246812 4861 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-marketplace-rtmvz"] Feb 19 15:24:14 crc kubenswrapper[4861]: I0219 15:24:14.249109 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rtmvz" Feb 19 15:24:14 crc kubenswrapper[4861]: I0219 15:24:14.281466 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtmvz"] Feb 19 15:24:14 crc kubenswrapper[4861]: I0219 15:24:14.302782 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6-catalog-content\") pod \"redhat-marketplace-rtmvz\" (UID: \"08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6\") " pod="openshift-marketplace/redhat-marketplace-rtmvz" Feb 19 15:24:14 crc kubenswrapper[4861]: I0219 15:24:14.302893 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgwc5\" (UniqueName: \"kubernetes.io/projected/08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6-kube-api-access-kgwc5\") pod \"redhat-marketplace-rtmvz\" (UID: \"08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6\") " pod="openshift-marketplace/redhat-marketplace-rtmvz" Feb 19 15:24:14 crc kubenswrapper[4861]: I0219 15:24:14.303155 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6-utilities\") pod \"redhat-marketplace-rtmvz\" (UID: \"08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6\") " pod="openshift-marketplace/redhat-marketplace-rtmvz" Feb 19 15:24:14 crc kubenswrapper[4861]: I0219 15:24:14.406311 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6-catalog-content\") pod \"redhat-marketplace-rtmvz\" (UID: 
\"08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6\") " pod="openshift-marketplace/redhat-marketplace-rtmvz" Feb 19 15:24:14 crc kubenswrapper[4861]: I0219 15:24:14.405773 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6-catalog-content\") pod \"redhat-marketplace-rtmvz\" (UID: \"08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6\") " pod="openshift-marketplace/redhat-marketplace-rtmvz" Feb 19 15:24:14 crc kubenswrapper[4861]: I0219 15:24:14.406493 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgwc5\" (UniqueName: \"kubernetes.io/projected/08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6-kube-api-access-kgwc5\") pod \"redhat-marketplace-rtmvz\" (UID: \"08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6\") " pod="openshift-marketplace/redhat-marketplace-rtmvz" Feb 19 15:24:14 crc kubenswrapper[4861]: I0219 15:24:14.406913 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6-utilities\") pod \"redhat-marketplace-rtmvz\" (UID: \"08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6\") " pod="openshift-marketplace/redhat-marketplace-rtmvz" Feb 19 15:24:14 crc kubenswrapper[4861]: I0219 15:24:14.407330 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6-utilities\") pod \"redhat-marketplace-rtmvz\" (UID: \"08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6\") " pod="openshift-marketplace/redhat-marketplace-rtmvz" Feb 19 15:24:14 crc kubenswrapper[4861]: I0219 15:24:14.426573 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgwc5\" (UniqueName: \"kubernetes.io/projected/08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6-kube-api-access-kgwc5\") pod \"redhat-marketplace-rtmvz\" (UID: 
\"08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6\") " pod="openshift-marketplace/redhat-marketplace-rtmvz" Feb 19 15:24:14 crc kubenswrapper[4861]: I0219 15:24:14.588644 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rtmvz" Feb 19 15:24:15 crc kubenswrapper[4861]: I0219 15:24:15.079181 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtmvz"] Feb 19 15:24:15 crc kubenswrapper[4861]: I0219 15:24:15.350455 4861 generic.go:334] "Generic (PLEG): container finished" podID="08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6" containerID="bcca42094c677113c70da86072435150d439d19bf99471deab8a2724f6a001df" exitCode=0 Feb 19 15:24:15 crc kubenswrapper[4861]: I0219 15:24:15.350514 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtmvz" event={"ID":"08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6","Type":"ContainerDied","Data":"bcca42094c677113c70da86072435150d439d19bf99471deab8a2724f6a001df"} Feb 19 15:24:15 crc kubenswrapper[4861]: I0219 15:24:15.351439 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtmvz" event={"ID":"08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6","Type":"ContainerStarted","Data":"027cde4f8e93797203c53478f536a91dba08afb775d55d93b54920df08bd33b7"} Feb 19 15:24:16 crc kubenswrapper[4861]: I0219 15:24:16.363862 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtmvz" event={"ID":"08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6","Type":"ContainerStarted","Data":"2c814c8a0e81343ef5288a2a3bba38b43d1cb5a489f311430a4a140d072b37b8"} Feb 19 15:24:17 crc kubenswrapper[4861]: I0219 15:24:17.383676 4861 generic.go:334] "Generic (PLEG): container finished" podID="08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6" containerID="2c814c8a0e81343ef5288a2a3bba38b43d1cb5a489f311430a4a140d072b37b8" exitCode=0 Feb 19 15:24:17 crc kubenswrapper[4861]: I0219 
15:24:17.384535 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtmvz" event={"ID":"08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6","Type":"ContainerDied","Data":"2c814c8a0e81343ef5288a2a3bba38b43d1cb5a489f311430a4a140d072b37b8"} Feb 19 15:24:18 crc kubenswrapper[4861]: I0219 15:24:18.397934 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtmvz" event={"ID":"08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6","Type":"ContainerStarted","Data":"12830f075646d4dca95d2c4132667f4d7e39efe39c60e93bdedb49211b97ee14"} Feb 19 15:24:18 crc kubenswrapper[4861]: I0219 15:24:18.426476 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rtmvz" podStartSLOduration=2.009446668 podStartE2EDuration="4.426453761s" podCreationTimestamp="2026-02-19 15:24:14 +0000 UTC" firstStartedPulling="2026-02-19 15:24:15.352704046 +0000 UTC m=+8070.013807274" lastFinishedPulling="2026-02-19 15:24:17.769711129 +0000 UTC m=+8072.430814367" observedRunningTime="2026-02-19 15:24:18.418717783 +0000 UTC m=+8073.079821031" watchObservedRunningTime="2026-02-19 15:24:18.426453761 +0000 UTC m=+8073.087556989" Feb 19 15:24:24 crc kubenswrapper[4861]: I0219 15:24:24.589648 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rtmvz" Feb 19 15:24:24 crc kubenswrapper[4861]: I0219 15:24:24.590074 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rtmvz" Feb 19 15:24:24 crc kubenswrapper[4861]: I0219 15:24:24.676059 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rtmvz" Feb 19 15:24:25 crc kubenswrapper[4861]: I0219 15:24:25.566914 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rtmvz" Feb 19 
15:24:25 crc kubenswrapper[4861]: I0219 15:24:25.645058 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtmvz"] Feb 19 15:24:27 crc kubenswrapper[4861]: I0219 15:24:27.505575 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rtmvz" podUID="08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6" containerName="registry-server" containerID="cri-o://12830f075646d4dca95d2c4132667f4d7e39efe39c60e93bdedb49211b97ee14" gracePeriod=2 Feb 19 15:24:28 crc kubenswrapper[4861]: I0219 15:24:28.057769 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rtmvz" Feb 19 15:24:28 crc kubenswrapper[4861]: I0219 15:24:28.141509 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgwc5\" (UniqueName: \"kubernetes.io/projected/08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6-kube-api-access-kgwc5\") pod \"08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6\" (UID: \"08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6\") " Feb 19 15:24:28 crc kubenswrapper[4861]: I0219 15:24:28.141930 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6-catalog-content\") pod \"08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6\" (UID: \"08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6\") " Feb 19 15:24:28 crc kubenswrapper[4861]: I0219 15:24:28.141972 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6-utilities\") pod \"08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6\" (UID: \"08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6\") " Feb 19 15:24:28 crc kubenswrapper[4861]: I0219 15:24:28.143279 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6-utilities" (OuterVolumeSpecName: "utilities") pod "08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6" (UID: "08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:24:28 crc kubenswrapper[4861]: I0219 15:24:28.151048 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6-kube-api-access-kgwc5" (OuterVolumeSpecName: "kube-api-access-kgwc5") pod "08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6" (UID: "08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6"). InnerVolumeSpecName "kube-api-access-kgwc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:24:28 crc kubenswrapper[4861]: I0219 15:24:28.165918 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6" (UID: "08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:24:28 crc kubenswrapper[4861]: I0219 15:24:28.248116 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgwc5\" (UniqueName: \"kubernetes.io/projected/08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6-kube-api-access-kgwc5\") on node \"crc\" DevicePath \"\"" Feb 19 15:24:28 crc kubenswrapper[4861]: I0219 15:24:28.248411 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:24:28 crc kubenswrapper[4861]: I0219 15:24:28.248549 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:24:28 crc kubenswrapper[4861]: I0219 15:24:28.517553 4861 generic.go:334] "Generic (PLEG): container finished" podID="08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6" containerID="12830f075646d4dca95d2c4132667f4d7e39efe39c60e93bdedb49211b97ee14" exitCode=0 Feb 19 15:24:28 crc kubenswrapper[4861]: I0219 15:24:28.518523 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtmvz" event={"ID":"08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6","Type":"ContainerDied","Data":"12830f075646d4dca95d2c4132667f4d7e39efe39c60e93bdedb49211b97ee14"} Feb 19 15:24:28 crc kubenswrapper[4861]: I0219 15:24:28.519470 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtmvz" event={"ID":"08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6","Type":"ContainerDied","Data":"027cde4f8e93797203c53478f536a91dba08afb775d55d93b54920df08bd33b7"} Feb 19 15:24:28 crc kubenswrapper[4861]: I0219 15:24:28.519499 4861 scope.go:117] "RemoveContainer" containerID="12830f075646d4dca95d2c4132667f4d7e39efe39c60e93bdedb49211b97ee14" Feb 19 15:24:28 crc kubenswrapper[4861]: I0219 
15:24:28.518564 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rtmvz" Feb 19 15:24:28 crc kubenswrapper[4861]: I0219 15:24:28.550553 4861 scope.go:117] "RemoveContainer" containerID="2c814c8a0e81343ef5288a2a3bba38b43d1cb5a489f311430a4a140d072b37b8" Feb 19 15:24:28 crc kubenswrapper[4861]: I0219 15:24:28.576221 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtmvz"] Feb 19 15:24:28 crc kubenswrapper[4861]: I0219 15:24:28.584560 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtmvz"] Feb 19 15:24:28 crc kubenswrapper[4861]: I0219 15:24:28.592842 4861 scope.go:117] "RemoveContainer" containerID="bcca42094c677113c70da86072435150d439d19bf99471deab8a2724f6a001df" Feb 19 15:24:28 crc kubenswrapper[4861]: I0219 15:24:28.630041 4861 scope.go:117] "RemoveContainer" containerID="12830f075646d4dca95d2c4132667f4d7e39efe39c60e93bdedb49211b97ee14" Feb 19 15:24:28 crc kubenswrapper[4861]: E0219 15:24:28.630622 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12830f075646d4dca95d2c4132667f4d7e39efe39c60e93bdedb49211b97ee14\": container with ID starting with 12830f075646d4dca95d2c4132667f4d7e39efe39c60e93bdedb49211b97ee14 not found: ID does not exist" containerID="12830f075646d4dca95d2c4132667f4d7e39efe39c60e93bdedb49211b97ee14" Feb 19 15:24:28 crc kubenswrapper[4861]: I0219 15:24:28.630773 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12830f075646d4dca95d2c4132667f4d7e39efe39c60e93bdedb49211b97ee14"} err="failed to get container status \"12830f075646d4dca95d2c4132667f4d7e39efe39c60e93bdedb49211b97ee14\": rpc error: code = NotFound desc = could not find container \"12830f075646d4dca95d2c4132667f4d7e39efe39c60e93bdedb49211b97ee14\": container with ID starting with 
12830f075646d4dca95d2c4132667f4d7e39efe39c60e93bdedb49211b97ee14 not found: ID does not exist" Feb 19 15:24:28 crc kubenswrapper[4861]: I0219 15:24:28.630884 4861 scope.go:117] "RemoveContainer" containerID="2c814c8a0e81343ef5288a2a3bba38b43d1cb5a489f311430a4a140d072b37b8" Feb 19 15:24:28 crc kubenswrapper[4861]: E0219 15:24:28.631246 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c814c8a0e81343ef5288a2a3bba38b43d1cb5a489f311430a4a140d072b37b8\": container with ID starting with 2c814c8a0e81343ef5288a2a3bba38b43d1cb5a489f311430a4a140d072b37b8 not found: ID does not exist" containerID="2c814c8a0e81343ef5288a2a3bba38b43d1cb5a489f311430a4a140d072b37b8" Feb 19 15:24:28 crc kubenswrapper[4861]: I0219 15:24:28.631387 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c814c8a0e81343ef5288a2a3bba38b43d1cb5a489f311430a4a140d072b37b8"} err="failed to get container status \"2c814c8a0e81343ef5288a2a3bba38b43d1cb5a489f311430a4a140d072b37b8\": rpc error: code = NotFound desc = could not find container \"2c814c8a0e81343ef5288a2a3bba38b43d1cb5a489f311430a4a140d072b37b8\": container with ID starting with 2c814c8a0e81343ef5288a2a3bba38b43d1cb5a489f311430a4a140d072b37b8 not found: ID does not exist" Feb 19 15:24:28 crc kubenswrapper[4861]: I0219 15:24:28.631510 4861 scope.go:117] "RemoveContainer" containerID="bcca42094c677113c70da86072435150d439d19bf99471deab8a2724f6a001df" Feb 19 15:24:28 crc kubenswrapper[4861]: E0219 15:24:28.632775 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcca42094c677113c70da86072435150d439d19bf99471deab8a2724f6a001df\": container with ID starting with bcca42094c677113c70da86072435150d439d19bf99471deab8a2724f6a001df not found: ID does not exist" containerID="bcca42094c677113c70da86072435150d439d19bf99471deab8a2724f6a001df" Feb 19 15:24:28 crc 
kubenswrapper[4861]: I0219 15:24:28.632920 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcca42094c677113c70da86072435150d439d19bf99471deab8a2724f6a001df"} err="failed to get container status \"bcca42094c677113c70da86072435150d439d19bf99471deab8a2724f6a001df\": rpc error: code = NotFound desc = could not find container \"bcca42094c677113c70da86072435150d439d19bf99471deab8a2724f6a001df\": container with ID starting with bcca42094c677113c70da86072435150d439d19bf99471deab8a2724f6a001df not found: ID does not exist" Feb 19 15:24:29 crc kubenswrapper[4861]: I0219 15:24:29.995890 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6" path="/var/lib/kubelet/pods/08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6/volumes" Feb 19 15:25:03 crc kubenswrapper[4861]: I0219 15:25:03.834884 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:25:03 crc kubenswrapper[4861]: I0219 15:25:03.835652 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:25:33 crc kubenswrapper[4861]: I0219 15:25:33.833977 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:25:33 crc kubenswrapper[4861]: I0219 15:25:33.834708 4861 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:26:03 crc kubenswrapper[4861]: I0219 15:26:03.833886 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:26:03 crc kubenswrapper[4861]: I0219 15:26:03.834842 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:26:03 crc kubenswrapper[4861]: I0219 15:26:03.834930 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 15:26:03 crc kubenswrapper[4861]: I0219 15:26:03.836351 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"98db6d963d362b2a070428e591c72e25891a9603f2f3e1be3216ab1074715193"} pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 15:26:03 crc kubenswrapper[4861]: I0219 15:26:03.836525 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" 
containerName="machine-config-daemon" containerID="cri-o://98db6d963d362b2a070428e591c72e25891a9603f2f3e1be3216ab1074715193" gracePeriod=600 Feb 19 15:26:03 crc kubenswrapper[4861]: E0219 15:26:03.982115 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:26:04 crc kubenswrapper[4861]: I0219 15:26:04.664623 4861 generic.go:334] "Generic (PLEG): container finished" podID="478e6971-05ac-43f2-99a2-cd93644c6227" containerID="98db6d963d362b2a070428e591c72e25891a9603f2f3e1be3216ab1074715193" exitCode=0 Feb 19 15:26:04 crc kubenswrapper[4861]: I0219 15:26:04.664724 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerDied","Data":"98db6d963d362b2a070428e591c72e25891a9603f2f3e1be3216ab1074715193"} Feb 19 15:26:04 crc kubenswrapper[4861]: I0219 15:26:04.665111 4861 scope.go:117] "RemoveContainer" containerID="cdf65bcd9b9cdd4f48ea53c764224ffd95c410add39c2dd90b7c23f20db59da3" Feb 19 15:26:04 crc kubenswrapper[4861]: I0219 15:26:04.666276 4861 scope.go:117] "RemoveContainer" containerID="98db6d963d362b2a070428e591c72e25891a9603f2f3e1be3216ab1074715193" Feb 19 15:26:04 crc kubenswrapper[4861]: E0219 15:26:04.666892 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:26:15 crc kubenswrapper[4861]: I0219 15:26:15.987915 4861 scope.go:117] "RemoveContainer" containerID="98db6d963d362b2a070428e591c72e25891a9603f2f3e1be3216ab1074715193" Feb 19 15:26:15 crc kubenswrapper[4861]: E0219 15:26:15.988692 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:26:21 crc kubenswrapper[4861]: I0219 15:26:21.181789 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5lbwr"] Feb 19 15:26:21 crc kubenswrapper[4861]: E0219 15:26:21.182815 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6" containerName="registry-server" Feb 19 15:26:21 crc kubenswrapper[4861]: I0219 15:26:21.182830 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6" containerName="registry-server" Feb 19 15:26:21 crc kubenswrapper[4861]: E0219 15:26:21.182851 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6" containerName="extract-utilities" Feb 19 15:26:21 crc kubenswrapper[4861]: I0219 15:26:21.182859 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6" containerName="extract-utilities" Feb 19 15:26:21 crc kubenswrapper[4861]: E0219 15:26:21.182879 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6" containerName="extract-content" Feb 19 15:26:21 crc kubenswrapper[4861]: 
I0219 15:26:21.182888 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6" containerName="extract-content" Feb 19 15:26:21 crc kubenswrapper[4861]: I0219 15:26:21.183144 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="08a6ffd6-0d60-41f1-a45e-5d6c0c2b4bc6" containerName="registry-server" Feb 19 15:26:21 crc kubenswrapper[4861]: I0219 15:26:21.184948 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5lbwr" Feb 19 15:26:21 crc kubenswrapper[4861]: I0219 15:26:21.203345 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5lbwr"] Feb 19 15:26:21 crc kubenswrapper[4861]: I0219 15:26:21.266512 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knppg\" (UniqueName: \"kubernetes.io/projected/b2ff476b-10fd-4893-bd32-28d01e6dc659-kube-api-access-knppg\") pod \"certified-operators-5lbwr\" (UID: \"b2ff476b-10fd-4893-bd32-28d01e6dc659\") " pod="openshift-marketplace/certified-operators-5lbwr" Feb 19 15:26:21 crc kubenswrapper[4861]: I0219 15:26:21.266676 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2ff476b-10fd-4893-bd32-28d01e6dc659-utilities\") pod \"certified-operators-5lbwr\" (UID: \"b2ff476b-10fd-4893-bd32-28d01e6dc659\") " pod="openshift-marketplace/certified-operators-5lbwr" Feb 19 15:26:21 crc kubenswrapper[4861]: I0219 15:26:21.266960 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2ff476b-10fd-4893-bd32-28d01e6dc659-catalog-content\") pod \"certified-operators-5lbwr\" (UID: \"b2ff476b-10fd-4893-bd32-28d01e6dc659\") " pod="openshift-marketplace/certified-operators-5lbwr" Feb 19 15:26:21 crc 
kubenswrapper[4861]: I0219 15:26:21.368793 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knppg\" (UniqueName: \"kubernetes.io/projected/b2ff476b-10fd-4893-bd32-28d01e6dc659-kube-api-access-knppg\") pod \"certified-operators-5lbwr\" (UID: \"b2ff476b-10fd-4893-bd32-28d01e6dc659\") " pod="openshift-marketplace/certified-operators-5lbwr" Feb 19 15:26:21 crc kubenswrapper[4861]: I0219 15:26:21.368978 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2ff476b-10fd-4893-bd32-28d01e6dc659-utilities\") pod \"certified-operators-5lbwr\" (UID: \"b2ff476b-10fd-4893-bd32-28d01e6dc659\") " pod="openshift-marketplace/certified-operators-5lbwr" Feb 19 15:26:21 crc kubenswrapper[4861]: I0219 15:26:21.369090 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2ff476b-10fd-4893-bd32-28d01e6dc659-catalog-content\") pod \"certified-operators-5lbwr\" (UID: \"b2ff476b-10fd-4893-bd32-28d01e6dc659\") " pod="openshift-marketplace/certified-operators-5lbwr" Feb 19 15:26:21 crc kubenswrapper[4861]: I0219 15:26:21.369782 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2ff476b-10fd-4893-bd32-28d01e6dc659-utilities\") pod \"certified-operators-5lbwr\" (UID: \"b2ff476b-10fd-4893-bd32-28d01e6dc659\") " pod="openshift-marketplace/certified-operators-5lbwr" Feb 19 15:26:21 crc kubenswrapper[4861]: I0219 15:26:21.369788 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2ff476b-10fd-4893-bd32-28d01e6dc659-catalog-content\") pod \"certified-operators-5lbwr\" (UID: \"b2ff476b-10fd-4893-bd32-28d01e6dc659\") " pod="openshift-marketplace/certified-operators-5lbwr" Feb 19 15:26:21 crc kubenswrapper[4861]: I0219 15:26:21.403756 
4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knppg\" (UniqueName: \"kubernetes.io/projected/b2ff476b-10fd-4893-bd32-28d01e6dc659-kube-api-access-knppg\") pod \"certified-operators-5lbwr\" (UID: \"b2ff476b-10fd-4893-bd32-28d01e6dc659\") " pod="openshift-marketplace/certified-operators-5lbwr" Feb 19 15:26:21 crc kubenswrapper[4861]: I0219 15:26:21.508109 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5lbwr" Feb 19 15:26:22 crc kubenswrapper[4861]: I0219 15:26:22.038824 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5lbwr"] Feb 19 15:26:22 crc kubenswrapper[4861]: I0219 15:26:22.913561 4861 generic.go:334] "Generic (PLEG): container finished" podID="b2ff476b-10fd-4893-bd32-28d01e6dc659" containerID="271fa05a5ff2ea6925de7c9f7e07003d98fa7a7923eabeeadfb25aa32f190cd7" exitCode=0 Feb 19 15:26:22 crc kubenswrapper[4861]: I0219 15:26:22.913718 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5lbwr" event={"ID":"b2ff476b-10fd-4893-bd32-28d01e6dc659","Type":"ContainerDied","Data":"271fa05a5ff2ea6925de7c9f7e07003d98fa7a7923eabeeadfb25aa32f190cd7"} Feb 19 15:26:22 crc kubenswrapper[4861]: I0219 15:26:22.914042 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5lbwr" event={"ID":"b2ff476b-10fd-4893-bd32-28d01e6dc659","Type":"ContainerStarted","Data":"2e8a4fa76fcd9223451904ec492f6dd651fcf463c2729cd983ae6dc064e6a26a"} Feb 19 15:26:23 crc kubenswrapper[4861]: I0219 15:26:23.931541 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5lbwr" event={"ID":"b2ff476b-10fd-4893-bd32-28d01e6dc659","Type":"ContainerStarted","Data":"42de715781c9850a731d9ed8ae2d372f1696cb1090f45ad725d4d5a9d311e39a"} Feb 19 15:26:25 crc kubenswrapper[4861]: I0219 15:26:25.958555 4861 
generic.go:334] "Generic (PLEG): container finished" podID="b2ff476b-10fd-4893-bd32-28d01e6dc659" containerID="42de715781c9850a731d9ed8ae2d372f1696cb1090f45ad725d4d5a9d311e39a" exitCode=0 Feb 19 15:26:25 crc kubenswrapper[4861]: I0219 15:26:25.958616 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5lbwr" event={"ID":"b2ff476b-10fd-4893-bd32-28d01e6dc659","Type":"ContainerDied","Data":"42de715781c9850a731d9ed8ae2d372f1696cb1090f45ad725d4d5a9d311e39a"} Feb 19 15:26:26 crc kubenswrapper[4861]: I0219 15:26:26.973640 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5lbwr" event={"ID":"b2ff476b-10fd-4893-bd32-28d01e6dc659","Type":"ContainerStarted","Data":"37c339290c792f121860b6d35e94e55e6d550ee4a1130c913e7c9afbc8f5f255"} Feb 19 15:26:27 crc kubenswrapper[4861]: I0219 15:26:27.006262 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5lbwr" podStartSLOduration=2.557956694 podStartE2EDuration="6.006245764s" podCreationTimestamp="2026-02-19 15:26:21 +0000 UTC" firstStartedPulling="2026-02-19 15:26:22.916552276 +0000 UTC m=+8197.577655544" lastFinishedPulling="2026-02-19 15:26:26.364841356 +0000 UTC m=+8201.025944614" observedRunningTime="2026-02-19 15:26:27.004213949 +0000 UTC m=+8201.665317227" watchObservedRunningTime="2026-02-19 15:26:27.006245764 +0000 UTC m=+8201.667348992" Feb 19 15:26:29 crc kubenswrapper[4861]: I0219 15:26:29.978024 4861 scope.go:117] "RemoveContainer" containerID="98db6d963d362b2a070428e591c72e25891a9603f2f3e1be3216ab1074715193" Feb 19 15:26:29 crc kubenswrapper[4861]: E0219 15:26:29.979101 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:26:31 crc kubenswrapper[4861]: I0219 15:26:31.508473 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5lbwr" Feb 19 15:26:31 crc kubenswrapper[4861]: I0219 15:26:31.508844 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5lbwr" Feb 19 15:26:31 crc kubenswrapper[4861]: I0219 15:26:31.604718 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5lbwr" Feb 19 15:26:32 crc kubenswrapper[4861]: I0219 15:26:32.127458 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5lbwr" Feb 19 15:26:32 crc kubenswrapper[4861]: I0219 15:26:32.193741 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5lbwr"] Feb 19 15:26:34 crc kubenswrapper[4861]: I0219 15:26:34.077236 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5lbwr" podUID="b2ff476b-10fd-4893-bd32-28d01e6dc659" containerName="registry-server" containerID="cri-o://37c339290c792f121860b6d35e94e55e6d550ee4a1130c913e7c9afbc8f5f255" gracePeriod=2 Feb 19 15:26:34 crc kubenswrapper[4861]: I0219 15:26:34.568389 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5lbwr" Feb 19 15:26:34 crc kubenswrapper[4861]: I0219 15:26:34.706265 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knppg\" (UniqueName: \"kubernetes.io/projected/b2ff476b-10fd-4893-bd32-28d01e6dc659-kube-api-access-knppg\") pod \"b2ff476b-10fd-4893-bd32-28d01e6dc659\" (UID: \"b2ff476b-10fd-4893-bd32-28d01e6dc659\") " Feb 19 15:26:34 crc kubenswrapper[4861]: I0219 15:26:34.706486 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2ff476b-10fd-4893-bd32-28d01e6dc659-utilities\") pod \"b2ff476b-10fd-4893-bd32-28d01e6dc659\" (UID: \"b2ff476b-10fd-4893-bd32-28d01e6dc659\") " Feb 19 15:26:34 crc kubenswrapper[4861]: I0219 15:26:34.706538 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2ff476b-10fd-4893-bd32-28d01e6dc659-catalog-content\") pod \"b2ff476b-10fd-4893-bd32-28d01e6dc659\" (UID: \"b2ff476b-10fd-4893-bd32-28d01e6dc659\") " Feb 19 15:26:34 crc kubenswrapper[4861]: I0219 15:26:34.707805 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2ff476b-10fd-4893-bd32-28d01e6dc659-utilities" (OuterVolumeSpecName: "utilities") pod "b2ff476b-10fd-4893-bd32-28d01e6dc659" (UID: "b2ff476b-10fd-4893-bd32-28d01e6dc659"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:26:34 crc kubenswrapper[4861]: I0219 15:26:34.715392 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2ff476b-10fd-4893-bd32-28d01e6dc659-kube-api-access-knppg" (OuterVolumeSpecName: "kube-api-access-knppg") pod "b2ff476b-10fd-4893-bd32-28d01e6dc659" (UID: "b2ff476b-10fd-4893-bd32-28d01e6dc659"). InnerVolumeSpecName "kube-api-access-knppg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:26:34 crc kubenswrapper[4861]: I0219 15:26:34.810785 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knppg\" (UniqueName: \"kubernetes.io/projected/b2ff476b-10fd-4893-bd32-28d01e6dc659-kube-api-access-knppg\") on node \"crc\" DevicePath \"\"" Feb 19 15:26:34 crc kubenswrapper[4861]: I0219 15:26:34.810836 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2ff476b-10fd-4893-bd32-28d01e6dc659-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:26:34 crc kubenswrapper[4861]: I0219 15:26:34.812187 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2ff476b-10fd-4893-bd32-28d01e6dc659-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2ff476b-10fd-4893-bd32-28d01e6dc659" (UID: "b2ff476b-10fd-4893-bd32-28d01e6dc659"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:26:34 crc kubenswrapper[4861]: I0219 15:26:34.913119 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2ff476b-10fd-4893-bd32-28d01e6dc659-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:26:35 crc kubenswrapper[4861]: I0219 15:26:35.089186 4861 generic.go:334] "Generic (PLEG): container finished" podID="b2ff476b-10fd-4893-bd32-28d01e6dc659" containerID="37c339290c792f121860b6d35e94e55e6d550ee4a1130c913e7c9afbc8f5f255" exitCode=0 Feb 19 15:26:35 crc kubenswrapper[4861]: I0219 15:26:35.089395 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5lbwr" event={"ID":"b2ff476b-10fd-4893-bd32-28d01e6dc659","Type":"ContainerDied","Data":"37c339290c792f121860b6d35e94e55e6d550ee4a1130c913e7c9afbc8f5f255"} Feb 19 15:26:35 crc kubenswrapper[4861]: I0219 15:26:35.089637 4861 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-5lbwr" event={"ID":"b2ff476b-10fd-4893-bd32-28d01e6dc659","Type":"ContainerDied","Data":"2e8a4fa76fcd9223451904ec492f6dd651fcf463c2729cd983ae6dc064e6a26a"} Feb 19 15:26:35 crc kubenswrapper[4861]: I0219 15:26:35.089665 4861 scope.go:117] "RemoveContainer" containerID="37c339290c792f121860b6d35e94e55e6d550ee4a1130c913e7c9afbc8f5f255" Feb 19 15:26:35 crc kubenswrapper[4861]: I0219 15:26:35.089481 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5lbwr" Feb 19 15:26:35 crc kubenswrapper[4861]: I0219 15:26:35.116166 4861 scope.go:117] "RemoveContainer" containerID="42de715781c9850a731d9ed8ae2d372f1696cb1090f45ad725d4d5a9d311e39a" Feb 19 15:26:35 crc kubenswrapper[4861]: I0219 15:26:35.144460 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5lbwr"] Feb 19 15:26:35 crc kubenswrapper[4861]: I0219 15:26:35.157117 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5lbwr"] Feb 19 15:26:35 crc kubenswrapper[4861]: I0219 15:26:35.163936 4861 scope.go:117] "RemoveContainer" containerID="271fa05a5ff2ea6925de7c9f7e07003d98fa7a7923eabeeadfb25aa32f190cd7" Feb 19 15:26:35 crc kubenswrapper[4861]: I0219 15:26:35.216875 4861 scope.go:117] "RemoveContainer" containerID="37c339290c792f121860b6d35e94e55e6d550ee4a1130c913e7c9afbc8f5f255" Feb 19 15:26:35 crc kubenswrapper[4861]: E0219 15:26:35.217601 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37c339290c792f121860b6d35e94e55e6d550ee4a1130c913e7c9afbc8f5f255\": container with ID starting with 37c339290c792f121860b6d35e94e55e6d550ee4a1130c913e7c9afbc8f5f255 not found: ID does not exist" containerID="37c339290c792f121860b6d35e94e55e6d550ee4a1130c913e7c9afbc8f5f255" Feb 19 15:26:35 crc kubenswrapper[4861]: I0219 
15:26:35.217798 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37c339290c792f121860b6d35e94e55e6d550ee4a1130c913e7c9afbc8f5f255"} err="failed to get container status \"37c339290c792f121860b6d35e94e55e6d550ee4a1130c913e7c9afbc8f5f255\": rpc error: code = NotFound desc = could not find container \"37c339290c792f121860b6d35e94e55e6d550ee4a1130c913e7c9afbc8f5f255\": container with ID starting with 37c339290c792f121860b6d35e94e55e6d550ee4a1130c913e7c9afbc8f5f255 not found: ID does not exist" Feb 19 15:26:35 crc kubenswrapper[4861]: I0219 15:26:35.217978 4861 scope.go:117] "RemoveContainer" containerID="42de715781c9850a731d9ed8ae2d372f1696cb1090f45ad725d4d5a9d311e39a" Feb 19 15:26:35 crc kubenswrapper[4861]: E0219 15:26:35.218755 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42de715781c9850a731d9ed8ae2d372f1696cb1090f45ad725d4d5a9d311e39a\": container with ID starting with 42de715781c9850a731d9ed8ae2d372f1696cb1090f45ad725d4d5a9d311e39a not found: ID does not exist" containerID="42de715781c9850a731d9ed8ae2d372f1696cb1090f45ad725d4d5a9d311e39a" Feb 19 15:26:35 crc kubenswrapper[4861]: I0219 15:26:35.218949 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42de715781c9850a731d9ed8ae2d372f1696cb1090f45ad725d4d5a9d311e39a"} err="failed to get container status \"42de715781c9850a731d9ed8ae2d372f1696cb1090f45ad725d4d5a9d311e39a\": rpc error: code = NotFound desc = could not find container \"42de715781c9850a731d9ed8ae2d372f1696cb1090f45ad725d4d5a9d311e39a\": container with ID starting with 42de715781c9850a731d9ed8ae2d372f1696cb1090f45ad725d4d5a9d311e39a not found: ID does not exist" Feb 19 15:26:35 crc kubenswrapper[4861]: I0219 15:26:35.219095 4861 scope.go:117] "RemoveContainer" containerID="271fa05a5ff2ea6925de7c9f7e07003d98fa7a7923eabeeadfb25aa32f190cd7" Feb 19 15:26:35 crc 
kubenswrapper[4861]: E0219 15:26:35.219637 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"271fa05a5ff2ea6925de7c9f7e07003d98fa7a7923eabeeadfb25aa32f190cd7\": container with ID starting with 271fa05a5ff2ea6925de7c9f7e07003d98fa7a7923eabeeadfb25aa32f190cd7 not found: ID does not exist" containerID="271fa05a5ff2ea6925de7c9f7e07003d98fa7a7923eabeeadfb25aa32f190cd7" Feb 19 15:26:35 crc kubenswrapper[4861]: I0219 15:26:35.219685 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"271fa05a5ff2ea6925de7c9f7e07003d98fa7a7923eabeeadfb25aa32f190cd7"} err="failed to get container status \"271fa05a5ff2ea6925de7c9f7e07003d98fa7a7923eabeeadfb25aa32f190cd7\": rpc error: code = NotFound desc = could not find container \"271fa05a5ff2ea6925de7c9f7e07003d98fa7a7923eabeeadfb25aa32f190cd7\": container with ID starting with 271fa05a5ff2ea6925de7c9f7e07003d98fa7a7923eabeeadfb25aa32f190cd7 not found: ID does not exist" Feb 19 15:26:35 crc kubenswrapper[4861]: I0219 15:26:35.996627 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2ff476b-10fd-4893-bd32-28d01e6dc659" path="/var/lib/kubelet/pods/b2ff476b-10fd-4893-bd32-28d01e6dc659/volumes" Feb 19 15:26:40 crc kubenswrapper[4861]: I0219 15:26:40.977469 4861 scope.go:117] "RemoveContainer" containerID="98db6d963d362b2a070428e591c72e25891a9603f2f3e1be3216ab1074715193" Feb 19 15:26:40 crc kubenswrapper[4861]: E0219 15:26:40.978846 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:26:46 crc 
kubenswrapper[4861]: I0219 15:26:46.073866 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mjfrj"] Feb 19 15:26:46 crc kubenswrapper[4861]: E0219 15:26:46.075373 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ff476b-10fd-4893-bd32-28d01e6dc659" containerName="registry-server" Feb 19 15:26:46 crc kubenswrapper[4861]: I0219 15:26:46.075390 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ff476b-10fd-4893-bd32-28d01e6dc659" containerName="registry-server" Feb 19 15:26:46 crc kubenswrapper[4861]: E0219 15:26:46.075407 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ff476b-10fd-4893-bd32-28d01e6dc659" containerName="extract-content" Feb 19 15:26:46 crc kubenswrapper[4861]: I0219 15:26:46.075413 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ff476b-10fd-4893-bd32-28d01e6dc659" containerName="extract-content" Feb 19 15:26:46 crc kubenswrapper[4861]: E0219 15:26:46.075465 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ff476b-10fd-4893-bd32-28d01e6dc659" containerName="extract-utilities" Feb 19 15:26:46 crc kubenswrapper[4861]: I0219 15:26:46.075475 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ff476b-10fd-4893-bd32-28d01e6dc659" containerName="extract-utilities" Feb 19 15:26:46 crc kubenswrapper[4861]: I0219 15:26:46.075666 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2ff476b-10fd-4893-bd32-28d01e6dc659" containerName="registry-server" Feb 19 15:26:46 crc kubenswrapper[4861]: I0219 15:26:46.084673 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mjfrj" Feb 19 15:26:46 crc kubenswrapper[4861]: I0219 15:26:46.110427 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mjfrj"] Feb 19 15:26:46 crc kubenswrapper[4861]: I0219 15:26:46.125393 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847e4b29-bb31-4a16-bbc9-4c54d7c26079-utilities\") pod \"community-operators-mjfrj\" (UID: \"847e4b29-bb31-4a16-bbc9-4c54d7c26079\") " pod="openshift-marketplace/community-operators-mjfrj" Feb 19 15:26:46 crc kubenswrapper[4861]: I0219 15:26:46.125498 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847e4b29-bb31-4a16-bbc9-4c54d7c26079-catalog-content\") pod \"community-operators-mjfrj\" (UID: \"847e4b29-bb31-4a16-bbc9-4c54d7c26079\") " pod="openshift-marketplace/community-operators-mjfrj" Feb 19 15:26:46 crc kubenswrapper[4861]: I0219 15:26:46.125544 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jrp6\" (UniqueName: \"kubernetes.io/projected/847e4b29-bb31-4a16-bbc9-4c54d7c26079-kube-api-access-4jrp6\") pod \"community-operators-mjfrj\" (UID: \"847e4b29-bb31-4a16-bbc9-4c54d7c26079\") " pod="openshift-marketplace/community-operators-mjfrj" Feb 19 15:26:46 crc kubenswrapper[4861]: I0219 15:26:46.227331 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847e4b29-bb31-4a16-bbc9-4c54d7c26079-utilities\") pod \"community-operators-mjfrj\" (UID: \"847e4b29-bb31-4a16-bbc9-4c54d7c26079\") " pod="openshift-marketplace/community-operators-mjfrj" Feb 19 15:26:46 crc kubenswrapper[4861]: I0219 15:26:46.227389 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847e4b29-bb31-4a16-bbc9-4c54d7c26079-catalog-content\") pod \"community-operators-mjfrj\" (UID: \"847e4b29-bb31-4a16-bbc9-4c54d7c26079\") " pod="openshift-marketplace/community-operators-mjfrj" Feb 19 15:26:46 crc kubenswrapper[4861]: I0219 15:26:46.227433 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jrp6\" (UniqueName: \"kubernetes.io/projected/847e4b29-bb31-4a16-bbc9-4c54d7c26079-kube-api-access-4jrp6\") pod \"community-operators-mjfrj\" (UID: \"847e4b29-bb31-4a16-bbc9-4c54d7c26079\") " pod="openshift-marketplace/community-operators-mjfrj" Feb 19 15:26:46 crc kubenswrapper[4861]: I0219 15:26:46.228395 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847e4b29-bb31-4a16-bbc9-4c54d7c26079-utilities\") pod \"community-operators-mjfrj\" (UID: \"847e4b29-bb31-4a16-bbc9-4c54d7c26079\") " pod="openshift-marketplace/community-operators-mjfrj" Feb 19 15:26:46 crc kubenswrapper[4861]: I0219 15:26:46.228477 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847e4b29-bb31-4a16-bbc9-4c54d7c26079-catalog-content\") pod \"community-operators-mjfrj\" (UID: \"847e4b29-bb31-4a16-bbc9-4c54d7c26079\") " pod="openshift-marketplace/community-operators-mjfrj" Feb 19 15:26:46 crc kubenswrapper[4861]: I0219 15:26:46.247244 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jrp6\" (UniqueName: \"kubernetes.io/projected/847e4b29-bb31-4a16-bbc9-4c54d7c26079-kube-api-access-4jrp6\") pod \"community-operators-mjfrj\" (UID: \"847e4b29-bb31-4a16-bbc9-4c54d7c26079\") " pod="openshift-marketplace/community-operators-mjfrj" Feb 19 15:26:46 crc kubenswrapper[4861]: I0219 15:26:46.411450 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mjfrj" Feb 19 15:26:46 crc kubenswrapper[4861]: I0219 15:26:46.926889 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mjfrj"] Feb 19 15:26:47 crc kubenswrapper[4861]: I0219 15:26:47.219821 4861 generic.go:334] "Generic (PLEG): container finished" podID="847e4b29-bb31-4a16-bbc9-4c54d7c26079" containerID="3f30ea6372ec98edcbd3b3323040f56df22e50f5f2accf3da9747774498cd66c" exitCode=0 Feb 19 15:26:47 crc kubenswrapper[4861]: I0219 15:26:47.219868 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjfrj" event={"ID":"847e4b29-bb31-4a16-bbc9-4c54d7c26079","Type":"ContainerDied","Data":"3f30ea6372ec98edcbd3b3323040f56df22e50f5f2accf3da9747774498cd66c"} Feb 19 15:26:47 crc kubenswrapper[4861]: I0219 15:26:47.219898 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjfrj" event={"ID":"847e4b29-bb31-4a16-bbc9-4c54d7c26079","Type":"ContainerStarted","Data":"5dd43c68dd46ad12c246c44045533143865004b860ce64849968e8e99a283bb9"} Feb 19 15:26:47 crc kubenswrapper[4861]: I0219 15:26:47.224775 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 15:26:48 crc kubenswrapper[4861]: I0219 15:26:48.233970 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjfrj" event={"ID":"847e4b29-bb31-4a16-bbc9-4c54d7c26079","Type":"ContainerStarted","Data":"2f6df2f6b3fabca3b12bb12013aff6ac074618e9e84bb8add08f5d089807e928"} Feb 19 15:26:50 crc kubenswrapper[4861]: I0219 15:26:50.257395 4861 generic.go:334] "Generic (PLEG): container finished" podID="847e4b29-bb31-4a16-bbc9-4c54d7c26079" containerID="2f6df2f6b3fabca3b12bb12013aff6ac074618e9e84bb8add08f5d089807e928" exitCode=0 Feb 19 15:26:50 crc kubenswrapper[4861]: I0219 15:26:50.257509 4861 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-mjfrj" event={"ID":"847e4b29-bb31-4a16-bbc9-4c54d7c26079","Type":"ContainerDied","Data":"2f6df2f6b3fabca3b12bb12013aff6ac074618e9e84bb8add08f5d089807e928"} Feb 19 15:26:51 crc kubenswrapper[4861]: I0219 15:26:51.271107 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjfrj" event={"ID":"847e4b29-bb31-4a16-bbc9-4c54d7c26079","Type":"ContainerStarted","Data":"33d1b8573618169f779e6d084153685c22b92f2792f7642c24fdc730730cf6af"} Feb 19 15:26:51 crc kubenswrapper[4861]: I0219 15:26:51.302590 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mjfrj" podStartSLOduration=1.8472636740000001 podStartE2EDuration="5.302562671s" podCreationTimestamp="2026-02-19 15:26:46 +0000 UTC" firstStartedPulling="2026-02-19 15:26:47.2242353 +0000 UTC m=+8221.885338558" lastFinishedPulling="2026-02-19 15:26:50.679534327 +0000 UTC m=+8225.340637555" observedRunningTime="2026-02-19 15:26:51.294249997 +0000 UTC m=+8225.955353255" watchObservedRunningTime="2026-02-19 15:26:51.302562671 +0000 UTC m=+8225.963665929" Feb 19 15:26:52 crc kubenswrapper[4861]: I0219 15:26:52.978875 4861 scope.go:117] "RemoveContainer" containerID="98db6d963d362b2a070428e591c72e25891a9603f2f3e1be3216ab1074715193" Feb 19 15:26:52 crc kubenswrapper[4861]: E0219 15:26:52.979683 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:26:56 crc kubenswrapper[4861]: I0219 15:26:56.412084 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-mjfrj" Feb 19 15:26:56 crc kubenswrapper[4861]: I0219 15:26:56.414087 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mjfrj" Feb 19 15:26:56 crc kubenswrapper[4861]: I0219 15:26:56.523787 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mjfrj" Feb 19 15:26:57 crc kubenswrapper[4861]: I0219 15:26:57.437506 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mjfrj" Feb 19 15:26:57 crc kubenswrapper[4861]: I0219 15:26:57.497791 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mjfrj"] Feb 19 15:26:59 crc kubenswrapper[4861]: I0219 15:26:59.364159 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mjfrj" podUID="847e4b29-bb31-4a16-bbc9-4c54d7c26079" containerName="registry-server" containerID="cri-o://33d1b8573618169f779e6d084153685c22b92f2792f7642c24fdc730730cf6af" gracePeriod=2 Feb 19 15:26:59 crc kubenswrapper[4861]: I0219 15:26:59.896074 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mjfrj" Feb 19 15:27:00 crc kubenswrapper[4861]: I0219 15:27:00.067147 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847e4b29-bb31-4a16-bbc9-4c54d7c26079-catalog-content\") pod \"847e4b29-bb31-4a16-bbc9-4c54d7c26079\" (UID: \"847e4b29-bb31-4a16-bbc9-4c54d7c26079\") " Feb 19 15:27:00 crc kubenswrapper[4861]: I0219 15:27:00.067238 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jrp6\" (UniqueName: \"kubernetes.io/projected/847e4b29-bb31-4a16-bbc9-4c54d7c26079-kube-api-access-4jrp6\") pod \"847e4b29-bb31-4a16-bbc9-4c54d7c26079\" (UID: \"847e4b29-bb31-4a16-bbc9-4c54d7c26079\") " Feb 19 15:27:00 crc kubenswrapper[4861]: I0219 15:27:00.067478 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847e4b29-bb31-4a16-bbc9-4c54d7c26079-utilities\") pod \"847e4b29-bb31-4a16-bbc9-4c54d7c26079\" (UID: \"847e4b29-bb31-4a16-bbc9-4c54d7c26079\") " Feb 19 15:27:00 crc kubenswrapper[4861]: I0219 15:27:00.070083 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/847e4b29-bb31-4a16-bbc9-4c54d7c26079-utilities" (OuterVolumeSpecName: "utilities") pod "847e4b29-bb31-4a16-bbc9-4c54d7c26079" (UID: "847e4b29-bb31-4a16-bbc9-4c54d7c26079"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:27:00 crc kubenswrapper[4861]: I0219 15:27:00.090507 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/847e4b29-bb31-4a16-bbc9-4c54d7c26079-kube-api-access-4jrp6" (OuterVolumeSpecName: "kube-api-access-4jrp6") pod "847e4b29-bb31-4a16-bbc9-4c54d7c26079" (UID: "847e4b29-bb31-4a16-bbc9-4c54d7c26079"). InnerVolumeSpecName "kube-api-access-4jrp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:27:00 crc kubenswrapper[4861]: I0219 15:27:00.137998 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/847e4b29-bb31-4a16-bbc9-4c54d7c26079-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "847e4b29-bb31-4a16-bbc9-4c54d7c26079" (UID: "847e4b29-bb31-4a16-bbc9-4c54d7c26079"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:27:00 crc kubenswrapper[4861]: I0219 15:27:00.170746 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847e4b29-bb31-4a16-bbc9-4c54d7c26079-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:00 crc kubenswrapper[4861]: I0219 15:27:00.170783 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jrp6\" (UniqueName: \"kubernetes.io/projected/847e4b29-bb31-4a16-bbc9-4c54d7c26079-kube-api-access-4jrp6\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:00 crc kubenswrapper[4861]: I0219 15:27:00.170798 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847e4b29-bb31-4a16-bbc9-4c54d7c26079-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:00 crc kubenswrapper[4861]: I0219 15:27:00.382360 4861 generic.go:334] "Generic (PLEG): container finished" podID="847e4b29-bb31-4a16-bbc9-4c54d7c26079" containerID="33d1b8573618169f779e6d084153685c22b92f2792f7642c24fdc730730cf6af" exitCode=0 Feb 19 15:27:00 crc kubenswrapper[4861]: I0219 15:27:00.382460 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mjfrj" Feb 19 15:27:00 crc kubenswrapper[4861]: I0219 15:27:00.382458 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjfrj" event={"ID":"847e4b29-bb31-4a16-bbc9-4c54d7c26079","Type":"ContainerDied","Data":"33d1b8573618169f779e6d084153685c22b92f2792f7642c24fdc730730cf6af"} Feb 19 15:27:00 crc kubenswrapper[4861]: I0219 15:27:00.382528 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjfrj" event={"ID":"847e4b29-bb31-4a16-bbc9-4c54d7c26079","Type":"ContainerDied","Data":"5dd43c68dd46ad12c246c44045533143865004b860ce64849968e8e99a283bb9"} Feb 19 15:27:00 crc kubenswrapper[4861]: I0219 15:27:00.382566 4861 scope.go:117] "RemoveContainer" containerID="33d1b8573618169f779e6d084153685c22b92f2792f7642c24fdc730730cf6af" Feb 19 15:27:00 crc kubenswrapper[4861]: I0219 15:27:00.419817 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mjfrj"] Feb 19 15:27:00 crc kubenswrapper[4861]: I0219 15:27:00.428250 4861 scope.go:117] "RemoveContainer" containerID="2f6df2f6b3fabca3b12bb12013aff6ac074618e9e84bb8add08f5d089807e928" Feb 19 15:27:00 crc kubenswrapper[4861]: I0219 15:27:00.431846 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mjfrj"] Feb 19 15:27:00 crc kubenswrapper[4861]: I0219 15:27:00.457454 4861 scope.go:117] "RemoveContainer" containerID="3f30ea6372ec98edcbd3b3323040f56df22e50f5f2accf3da9747774498cd66c" Feb 19 15:27:00 crc kubenswrapper[4861]: I0219 15:27:00.534531 4861 scope.go:117] "RemoveContainer" containerID="33d1b8573618169f779e6d084153685c22b92f2792f7642c24fdc730730cf6af" Feb 19 15:27:00 crc kubenswrapper[4861]: E0219 15:27:00.534860 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"33d1b8573618169f779e6d084153685c22b92f2792f7642c24fdc730730cf6af\": container with ID starting with 33d1b8573618169f779e6d084153685c22b92f2792f7642c24fdc730730cf6af not found: ID does not exist" containerID="33d1b8573618169f779e6d084153685c22b92f2792f7642c24fdc730730cf6af" Feb 19 15:27:00 crc kubenswrapper[4861]: I0219 15:27:00.534884 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33d1b8573618169f779e6d084153685c22b92f2792f7642c24fdc730730cf6af"} err="failed to get container status \"33d1b8573618169f779e6d084153685c22b92f2792f7642c24fdc730730cf6af\": rpc error: code = NotFound desc = could not find container \"33d1b8573618169f779e6d084153685c22b92f2792f7642c24fdc730730cf6af\": container with ID starting with 33d1b8573618169f779e6d084153685c22b92f2792f7642c24fdc730730cf6af not found: ID does not exist" Feb 19 15:27:00 crc kubenswrapper[4861]: I0219 15:27:00.534906 4861 scope.go:117] "RemoveContainer" containerID="2f6df2f6b3fabca3b12bb12013aff6ac074618e9e84bb8add08f5d089807e928" Feb 19 15:27:00 crc kubenswrapper[4861]: E0219 15:27:00.535144 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f6df2f6b3fabca3b12bb12013aff6ac074618e9e84bb8add08f5d089807e928\": container with ID starting with 2f6df2f6b3fabca3b12bb12013aff6ac074618e9e84bb8add08f5d089807e928 not found: ID does not exist" containerID="2f6df2f6b3fabca3b12bb12013aff6ac074618e9e84bb8add08f5d089807e928" Feb 19 15:27:00 crc kubenswrapper[4861]: I0219 15:27:00.535165 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f6df2f6b3fabca3b12bb12013aff6ac074618e9e84bb8add08f5d089807e928"} err="failed to get container status \"2f6df2f6b3fabca3b12bb12013aff6ac074618e9e84bb8add08f5d089807e928\": rpc error: code = NotFound desc = could not find container \"2f6df2f6b3fabca3b12bb12013aff6ac074618e9e84bb8add08f5d089807e928\": container with ID 
starting with 2f6df2f6b3fabca3b12bb12013aff6ac074618e9e84bb8add08f5d089807e928 not found: ID does not exist" Feb 19 15:27:00 crc kubenswrapper[4861]: I0219 15:27:00.535178 4861 scope.go:117] "RemoveContainer" containerID="3f30ea6372ec98edcbd3b3323040f56df22e50f5f2accf3da9747774498cd66c" Feb 19 15:27:00 crc kubenswrapper[4861]: E0219 15:27:00.535950 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f30ea6372ec98edcbd3b3323040f56df22e50f5f2accf3da9747774498cd66c\": container with ID starting with 3f30ea6372ec98edcbd3b3323040f56df22e50f5f2accf3da9747774498cd66c not found: ID does not exist" containerID="3f30ea6372ec98edcbd3b3323040f56df22e50f5f2accf3da9747774498cd66c" Feb 19 15:27:00 crc kubenswrapper[4861]: I0219 15:27:00.536013 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f30ea6372ec98edcbd3b3323040f56df22e50f5f2accf3da9747774498cd66c"} err="failed to get container status \"3f30ea6372ec98edcbd3b3323040f56df22e50f5f2accf3da9747774498cd66c\": rpc error: code = NotFound desc = could not find container \"3f30ea6372ec98edcbd3b3323040f56df22e50f5f2accf3da9747774498cd66c\": container with ID starting with 3f30ea6372ec98edcbd3b3323040f56df22e50f5f2accf3da9747774498cd66c not found: ID does not exist" Feb 19 15:27:02 crc kubenswrapper[4861]: I0219 15:27:01.997545 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="847e4b29-bb31-4a16-bbc9-4c54d7c26079" path="/var/lib/kubelet/pods/847e4b29-bb31-4a16-bbc9-4c54d7c26079/volumes" Feb 19 15:27:05 crc kubenswrapper[4861]: I0219 15:27:05.986218 4861 scope.go:117] "RemoveContainer" containerID="98db6d963d362b2a070428e591c72e25891a9603f2f3e1be3216ab1074715193" Feb 19 15:27:05 crc kubenswrapper[4861]: E0219 15:27:05.986719 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:27:08 crc kubenswrapper[4861]: I0219 15:27:08.477691 4861 generic.go:334] "Generic (PLEG): container finished" podID="03e9e117-86ff-40f4-97c2-bbb611cd3cd9" containerID="662fb4090e66469355a2111c1a9314b93c0da573ba4945537aeabaf694f34c9e" exitCode=0 Feb 19 15:27:08 crc kubenswrapper[4861]: I0219 15:27:08.477751 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-klvns" event={"ID":"03e9e117-86ff-40f4-97c2-bbb611cd3cd9","Type":"ContainerDied","Data":"662fb4090e66469355a2111c1a9314b93c0da573ba4945537aeabaf694f34c9e"} Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.080736 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-klvns" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.125010 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03e9e117-86ff-40f4-97c2-bbb611cd3cd9-libvirt-combined-ca-bundle\") pod \"03e9e117-86ff-40f4-97c2-bbb611cd3cd9\" (UID: \"03e9e117-86ff-40f4-97c2-bbb611cd3cd9\") " Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.125464 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03e9e117-86ff-40f4-97c2-bbb611cd3cd9-inventory\") pod \"03e9e117-86ff-40f4-97c2-bbb611cd3cd9\" (UID: \"03e9e117-86ff-40f4-97c2-bbb611cd3cd9\") " Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.125499 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/03e9e117-86ff-40f4-97c2-bbb611cd3cd9-ssh-key-openstack-cell1\") pod \"03e9e117-86ff-40f4-97c2-bbb611cd3cd9\" (UID: \"03e9e117-86ff-40f4-97c2-bbb611cd3cd9\") " Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.125582 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/03e9e117-86ff-40f4-97c2-bbb611cd3cd9-libvirt-secret-0\") pod \"03e9e117-86ff-40f4-97c2-bbb611cd3cd9\" (UID: \"03e9e117-86ff-40f4-97c2-bbb611cd3cd9\") " Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.125676 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfrzs\" (UniqueName: \"kubernetes.io/projected/03e9e117-86ff-40f4-97c2-bbb611cd3cd9-kube-api-access-xfrzs\") pod \"03e9e117-86ff-40f4-97c2-bbb611cd3cd9\" (UID: \"03e9e117-86ff-40f4-97c2-bbb611cd3cd9\") " Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.133821 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03e9e117-86ff-40f4-97c2-bbb611cd3cd9-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "03e9e117-86ff-40f4-97c2-bbb611cd3cd9" (UID: "03e9e117-86ff-40f4-97c2-bbb611cd3cd9"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.134756 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03e9e117-86ff-40f4-97c2-bbb611cd3cd9-kube-api-access-xfrzs" (OuterVolumeSpecName: "kube-api-access-xfrzs") pod "03e9e117-86ff-40f4-97c2-bbb611cd3cd9" (UID: "03e9e117-86ff-40f4-97c2-bbb611cd3cd9"). InnerVolumeSpecName "kube-api-access-xfrzs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.163301 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03e9e117-86ff-40f4-97c2-bbb611cd3cd9-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "03e9e117-86ff-40f4-97c2-bbb611cd3cd9" (UID: "03e9e117-86ff-40f4-97c2-bbb611cd3cd9"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.163365 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03e9e117-86ff-40f4-97c2-bbb611cd3cd9-inventory" (OuterVolumeSpecName: "inventory") pod "03e9e117-86ff-40f4-97c2-bbb611cd3cd9" (UID: "03e9e117-86ff-40f4-97c2-bbb611cd3cd9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.163720 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03e9e117-86ff-40f4-97c2-bbb611cd3cd9-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "03e9e117-86ff-40f4-97c2-bbb611cd3cd9" (UID: "03e9e117-86ff-40f4-97c2-bbb611cd3cd9"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.228007 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfrzs\" (UniqueName: \"kubernetes.io/projected/03e9e117-86ff-40f4-97c2-bbb611cd3cd9-kube-api-access-xfrzs\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.228035 4861 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03e9e117-86ff-40f4-97c2-bbb611cd3cd9-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.228046 4861 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03e9e117-86ff-40f4-97c2-bbb611cd3cd9-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.228056 4861 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/03e9e117-86ff-40f4-97c2-bbb611cd3cd9-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.228066 4861 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/03e9e117-86ff-40f4-97c2-bbb611cd3cd9-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.516570 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-klvns" event={"ID":"03e9e117-86ff-40f4-97c2-bbb611cd3cd9","Type":"ContainerDied","Data":"1666084be61965987089c9d09553fb84b92af12b2d2da8ef97c99e654d92fce7"} Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.516634 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1666084be61965987089c9d09553fb84b92af12b2d2da8ef97c99e654d92fce7" Feb 19 15:27:10 crc 
kubenswrapper[4861]: I0219 15:27:10.516724 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-klvns" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.657627 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-vghd2"] Feb 19 15:27:10 crc kubenswrapper[4861]: E0219 15:27:10.658165 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03e9e117-86ff-40f4-97c2-bbb611cd3cd9" containerName="libvirt-openstack-openstack-cell1" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.658188 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="03e9e117-86ff-40f4-97c2-bbb611cd3cd9" containerName="libvirt-openstack-openstack-cell1" Feb 19 15:27:10 crc kubenswrapper[4861]: E0219 15:27:10.658216 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="847e4b29-bb31-4a16-bbc9-4c54d7c26079" containerName="registry-server" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.658227 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="847e4b29-bb31-4a16-bbc9-4c54d7c26079" containerName="registry-server" Feb 19 15:27:10 crc kubenswrapper[4861]: E0219 15:27:10.658241 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="847e4b29-bb31-4a16-bbc9-4c54d7c26079" containerName="extract-utilities" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.658249 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="847e4b29-bb31-4a16-bbc9-4c54d7c26079" containerName="extract-utilities" Feb 19 15:27:10 crc kubenswrapper[4861]: E0219 15:27:10.658278 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="847e4b29-bb31-4a16-bbc9-4c54d7c26079" containerName="extract-content" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.658287 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="847e4b29-bb31-4a16-bbc9-4c54d7c26079" containerName="extract-content" Feb 19 15:27:10 
crc kubenswrapper[4861]: I0219 15:27:10.658572 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="03e9e117-86ff-40f4-97c2-bbb611cd3cd9" containerName="libvirt-openstack-openstack-cell1" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.658599 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="847e4b29-bb31-4a16-bbc9-4c54d7c26079" containerName="registry-server" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.659491 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.663559 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.663598 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.663643 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.664310 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.664590 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jgxb2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.664815 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.664957 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.670902 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-vghd2"] Feb 19 
15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.741587 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.741648 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.741672 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.741723 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.741776 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wfjzg\" (UniqueName: \"kubernetes.io/projected/00127d70-73bb-4b8e-8268-a6f858a14e41-kube-api-access-wfjzg\") pod \"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.741839 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cell1-compute-config-3\") pod \"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.741938 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.742026 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.742099 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cells-global-config-0\") pod 
\"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.742133 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cell1-compute-config-2\") pod \"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.742165 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-inventory\") pod \"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.843564 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.843613 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.843670 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.843688 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cell1-compute-config-2\") pod \"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.843716 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-inventory\") pod \"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.843763 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.843794 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: 
\"00127d70-73bb-4b8e-8268-a6f858a14e41\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.843814 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.843835 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.843862 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfjzg\" (UniqueName: \"kubernetes.io/projected/00127d70-73bb-4b8e-8268-a6f858a14e41-kube-api-access-wfjzg\") pod \"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.843904 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cell1-compute-config-3\") pod \"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.845028 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.848208 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.850534 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cell1-compute-config-3\") pod \"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.850636 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.851071 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " 
pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.851161 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cell1-compute-config-2\") pod \"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.851210 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.851340 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-inventory\") pod \"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.852793 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.864798 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.866535 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfjzg\" (UniqueName: \"kubernetes.io/projected/00127d70-73bb-4b8e-8268-a6f858a14e41-kube-api-access-wfjzg\") pod \"nova-cell1-openstack-openstack-cell1-vghd2\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:10 crc kubenswrapper[4861]: I0219 15:27:10.993080 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:27:11 crc kubenswrapper[4861]: I0219 15:27:11.572654 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-vghd2"] Feb 19 15:27:12 crc kubenswrapper[4861]: I0219 15:27:12.548653 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" event={"ID":"00127d70-73bb-4b8e-8268-a6f858a14e41","Type":"ContainerStarted","Data":"1ccdcdd865e2973c54bcdef3f382c592b3ced264820a08b94f488b24b50d05d1"} Feb 19 15:27:12 crc kubenswrapper[4861]: I0219 15:27:12.549355 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" event={"ID":"00127d70-73bb-4b8e-8268-a6f858a14e41","Type":"ContainerStarted","Data":"1ae77805c2a59e581e24b1059b1252f61c77ae001e7a5ca7825bc108fd1d2ec6"} Feb 19 15:27:12 crc kubenswrapper[4861]: I0219 15:27:12.600667 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" podStartSLOduration=2.073435214 podStartE2EDuration="2.600637997s" 
podCreationTimestamp="2026-02-19 15:27:10 +0000 UTC" firstStartedPulling="2026-02-19 15:27:11.568624992 +0000 UTC m=+8246.229728230" lastFinishedPulling="2026-02-19 15:27:12.095827745 +0000 UTC m=+8246.756931013" observedRunningTime="2026-02-19 15:27:12.588950621 +0000 UTC m=+8247.250053869" watchObservedRunningTime="2026-02-19 15:27:12.600637997 +0000 UTC m=+8247.261741265" Feb 19 15:27:16 crc kubenswrapper[4861]: I0219 15:27:16.977516 4861 scope.go:117] "RemoveContainer" containerID="98db6d963d362b2a070428e591c72e25891a9603f2f3e1be3216ab1074715193" Feb 19 15:27:16 crc kubenswrapper[4861]: E0219 15:27:16.978696 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:27:29 crc kubenswrapper[4861]: I0219 15:27:29.980560 4861 scope.go:117] "RemoveContainer" containerID="98db6d963d362b2a070428e591c72e25891a9603f2f3e1be3216ab1074715193" Feb 19 15:27:29 crc kubenswrapper[4861]: E0219 15:27:29.981467 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:27:34 crc kubenswrapper[4861]: I0219 15:27:34.029780 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sdvzw"] Feb 19 15:27:34 crc kubenswrapper[4861]: I0219 15:27:34.034046 4861 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sdvzw" Feb 19 15:27:34 crc kubenswrapper[4861]: I0219 15:27:34.049374 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sdvzw"] Feb 19 15:27:34 crc kubenswrapper[4861]: I0219 15:27:34.108681 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/411a3faa-cfd0-459d-8599-b7052fad17db-utilities\") pod \"redhat-operators-sdvzw\" (UID: \"411a3faa-cfd0-459d-8599-b7052fad17db\") " pod="openshift-marketplace/redhat-operators-sdvzw" Feb 19 15:27:34 crc kubenswrapper[4861]: I0219 15:27:34.108783 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/411a3faa-cfd0-459d-8599-b7052fad17db-catalog-content\") pod \"redhat-operators-sdvzw\" (UID: \"411a3faa-cfd0-459d-8599-b7052fad17db\") " pod="openshift-marketplace/redhat-operators-sdvzw" Feb 19 15:27:34 crc kubenswrapper[4861]: I0219 15:27:34.108864 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh4sc\" (UniqueName: \"kubernetes.io/projected/411a3faa-cfd0-459d-8599-b7052fad17db-kube-api-access-gh4sc\") pod \"redhat-operators-sdvzw\" (UID: \"411a3faa-cfd0-459d-8599-b7052fad17db\") " pod="openshift-marketplace/redhat-operators-sdvzw" Feb 19 15:27:34 crc kubenswrapper[4861]: I0219 15:27:34.211156 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/411a3faa-cfd0-459d-8599-b7052fad17db-utilities\") pod \"redhat-operators-sdvzw\" (UID: \"411a3faa-cfd0-459d-8599-b7052fad17db\") " pod="openshift-marketplace/redhat-operators-sdvzw" Feb 19 15:27:34 crc kubenswrapper[4861]: I0219 15:27:34.211305 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/411a3faa-cfd0-459d-8599-b7052fad17db-catalog-content\") pod \"redhat-operators-sdvzw\" (UID: \"411a3faa-cfd0-459d-8599-b7052fad17db\") " pod="openshift-marketplace/redhat-operators-sdvzw" Feb 19 15:27:34 crc kubenswrapper[4861]: I0219 15:27:34.211427 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh4sc\" (UniqueName: \"kubernetes.io/projected/411a3faa-cfd0-459d-8599-b7052fad17db-kube-api-access-gh4sc\") pod \"redhat-operators-sdvzw\" (UID: \"411a3faa-cfd0-459d-8599-b7052fad17db\") " pod="openshift-marketplace/redhat-operators-sdvzw" Feb 19 15:27:34 crc kubenswrapper[4861]: I0219 15:27:34.211714 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/411a3faa-cfd0-459d-8599-b7052fad17db-utilities\") pod \"redhat-operators-sdvzw\" (UID: \"411a3faa-cfd0-459d-8599-b7052fad17db\") " pod="openshift-marketplace/redhat-operators-sdvzw" Feb 19 15:27:34 crc kubenswrapper[4861]: I0219 15:27:34.211792 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/411a3faa-cfd0-459d-8599-b7052fad17db-catalog-content\") pod \"redhat-operators-sdvzw\" (UID: \"411a3faa-cfd0-459d-8599-b7052fad17db\") " pod="openshift-marketplace/redhat-operators-sdvzw" Feb 19 15:27:34 crc kubenswrapper[4861]: I0219 15:27:34.246047 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh4sc\" (UniqueName: \"kubernetes.io/projected/411a3faa-cfd0-459d-8599-b7052fad17db-kube-api-access-gh4sc\") pod \"redhat-operators-sdvzw\" (UID: \"411a3faa-cfd0-459d-8599-b7052fad17db\") " pod="openshift-marketplace/redhat-operators-sdvzw" Feb 19 15:27:34 crc kubenswrapper[4861]: I0219 15:27:34.377218 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sdvzw" Feb 19 15:27:34 crc kubenswrapper[4861]: I0219 15:27:34.893486 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sdvzw"] Feb 19 15:27:35 crc kubenswrapper[4861]: I0219 15:27:35.816278 4861 generic.go:334] "Generic (PLEG): container finished" podID="411a3faa-cfd0-459d-8599-b7052fad17db" containerID="fe11bd09b2c2a1d6823bc360c446636da836d143132420263a613c10e4f271db" exitCode=0 Feb 19 15:27:35 crc kubenswrapper[4861]: I0219 15:27:35.816353 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdvzw" event={"ID":"411a3faa-cfd0-459d-8599-b7052fad17db","Type":"ContainerDied","Data":"fe11bd09b2c2a1d6823bc360c446636da836d143132420263a613c10e4f271db"} Feb 19 15:27:35 crc kubenswrapper[4861]: I0219 15:27:35.816667 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdvzw" event={"ID":"411a3faa-cfd0-459d-8599-b7052fad17db","Type":"ContainerStarted","Data":"5237ca056b690fc9dacb1a5be67e4082676a08abd1bfadd12a50df516b186a47"} Feb 19 15:27:37 crc kubenswrapper[4861]: I0219 15:27:37.843990 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdvzw" event={"ID":"411a3faa-cfd0-459d-8599-b7052fad17db","Type":"ContainerStarted","Data":"8380a235b0860ce3aef9d5ba9bd00ce16a1655064ea0c160fcb63a17c09b86a9"} Feb 19 15:27:40 crc kubenswrapper[4861]: I0219 15:27:40.887253 4861 generic.go:334] "Generic (PLEG): container finished" podID="411a3faa-cfd0-459d-8599-b7052fad17db" containerID="8380a235b0860ce3aef9d5ba9bd00ce16a1655064ea0c160fcb63a17c09b86a9" exitCode=0 Feb 19 15:27:40 crc kubenswrapper[4861]: I0219 15:27:40.887366 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdvzw" 
event={"ID":"411a3faa-cfd0-459d-8599-b7052fad17db","Type":"ContainerDied","Data":"8380a235b0860ce3aef9d5ba9bd00ce16a1655064ea0c160fcb63a17c09b86a9"} Feb 19 15:27:41 crc kubenswrapper[4861]: I0219 15:27:41.901721 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdvzw" event={"ID":"411a3faa-cfd0-459d-8599-b7052fad17db","Type":"ContainerStarted","Data":"719f3eff7a1a99e91976f16460d44435da20cc2e12c99bd2cff113eef361bdef"} Feb 19 15:27:41 crc kubenswrapper[4861]: I0219 15:27:41.930033 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sdvzw" podStartSLOduration=3.400397135 podStartE2EDuration="8.93000735s" podCreationTimestamp="2026-02-19 15:27:33 +0000 UTC" firstStartedPulling="2026-02-19 15:27:35.818097317 +0000 UTC m=+8270.479200555" lastFinishedPulling="2026-02-19 15:27:41.347707542 +0000 UTC m=+8276.008810770" observedRunningTime="2026-02-19 15:27:41.923637329 +0000 UTC m=+8276.584740597" watchObservedRunningTime="2026-02-19 15:27:41.93000735 +0000 UTC m=+8276.591110588" Feb 19 15:27:43 crc kubenswrapper[4861]: I0219 15:27:43.977824 4861 scope.go:117] "RemoveContainer" containerID="98db6d963d362b2a070428e591c72e25891a9603f2f3e1be3216ab1074715193" Feb 19 15:27:43 crc kubenswrapper[4861]: E0219 15:27:43.978889 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:27:44 crc kubenswrapper[4861]: I0219 15:27:44.378115 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sdvzw" Feb 19 15:27:44 crc kubenswrapper[4861]: 
I0219 15:27:44.378164 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sdvzw" Feb 19 15:27:45 crc kubenswrapper[4861]: I0219 15:27:45.430784 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sdvzw" podUID="411a3faa-cfd0-459d-8599-b7052fad17db" containerName="registry-server" probeResult="failure" output=< Feb 19 15:27:45 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Feb 19 15:27:45 crc kubenswrapper[4861]: > Feb 19 15:27:54 crc kubenswrapper[4861]: I0219 15:27:54.440409 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sdvzw" Feb 19 15:27:54 crc kubenswrapper[4861]: I0219 15:27:54.519859 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sdvzw" Feb 19 15:27:54 crc kubenswrapper[4861]: I0219 15:27:54.700412 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sdvzw"] Feb 19 15:27:55 crc kubenswrapper[4861]: I0219 15:27:55.986820 4861 scope.go:117] "RemoveContainer" containerID="98db6d963d362b2a070428e591c72e25891a9603f2f3e1be3216ab1074715193" Feb 19 15:27:55 crc kubenswrapper[4861]: E0219 15:27:55.987138 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:27:56 crc kubenswrapper[4861]: I0219 15:27:56.078236 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sdvzw" 
podUID="411a3faa-cfd0-459d-8599-b7052fad17db" containerName="registry-server" containerID="cri-o://719f3eff7a1a99e91976f16460d44435da20cc2e12c99bd2cff113eef361bdef" gracePeriod=2 Feb 19 15:27:56 crc kubenswrapper[4861]: I0219 15:27:56.583164 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sdvzw" Feb 19 15:27:56 crc kubenswrapper[4861]: I0219 15:27:56.778057 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gh4sc\" (UniqueName: \"kubernetes.io/projected/411a3faa-cfd0-459d-8599-b7052fad17db-kube-api-access-gh4sc\") pod \"411a3faa-cfd0-459d-8599-b7052fad17db\" (UID: \"411a3faa-cfd0-459d-8599-b7052fad17db\") " Feb 19 15:27:56 crc kubenswrapper[4861]: I0219 15:27:56.778535 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/411a3faa-cfd0-459d-8599-b7052fad17db-catalog-content\") pod \"411a3faa-cfd0-459d-8599-b7052fad17db\" (UID: \"411a3faa-cfd0-459d-8599-b7052fad17db\") " Feb 19 15:27:56 crc kubenswrapper[4861]: I0219 15:27:56.778774 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/411a3faa-cfd0-459d-8599-b7052fad17db-utilities\") pod \"411a3faa-cfd0-459d-8599-b7052fad17db\" (UID: \"411a3faa-cfd0-459d-8599-b7052fad17db\") " Feb 19 15:27:56 crc kubenswrapper[4861]: I0219 15:27:56.779623 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/411a3faa-cfd0-459d-8599-b7052fad17db-utilities" (OuterVolumeSpecName: "utilities") pod "411a3faa-cfd0-459d-8599-b7052fad17db" (UID: "411a3faa-cfd0-459d-8599-b7052fad17db"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:27:56 crc kubenswrapper[4861]: I0219 15:27:56.802905 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/411a3faa-cfd0-459d-8599-b7052fad17db-kube-api-access-gh4sc" (OuterVolumeSpecName: "kube-api-access-gh4sc") pod "411a3faa-cfd0-459d-8599-b7052fad17db" (UID: "411a3faa-cfd0-459d-8599-b7052fad17db"). InnerVolumeSpecName "kube-api-access-gh4sc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:27:56 crc kubenswrapper[4861]: I0219 15:27:56.881536 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/411a3faa-cfd0-459d-8599-b7052fad17db-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:56 crc kubenswrapper[4861]: I0219 15:27:56.881774 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gh4sc\" (UniqueName: \"kubernetes.io/projected/411a3faa-cfd0-459d-8599-b7052fad17db-kube-api-access-gh4sc\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:56 crc kubenswrapper[4861]: I0219 15:27:56.925359 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/411a3faa-cfd0-459d-8599-b7052fad17db-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "411a3faa-cfd0-459d-8599-b7052fad17db" (UID: "411a3faa-cfd0-459d-8599-b7052fad17db"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:27:56 crc kubenswrapper[4861]: I0219 15:27:56.983758 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/411a3faa-cfd0-459d-8599-b7052fad17db-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:57 crc kubenswrapper[4861]: I0219 15:27:57.091031 4861 generic.go:334] "Generic (PLEG): container finished" podID="411a3faa-cfd0-459d-8599-b7052fad17db" containerID="719f3eff7a1a99e91976f16460d44435da20cc2e12c99bd2cff113eef361bdef" exitCode=0 Feb 19 15:27:57 crc kubenswrapper[4861]: I0219 15:27:57.091174 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sdvzw" Feb 19 15:27:57 crc kubenswrapper[4861]: I0219 15:27:57.091207 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdvzw" event={"ID":"411a3faa-cfd0-459d-8599-b7052fad17db","Type":"ContainerDied","Data":"719f3eff7a1a99e91976f16460d44435da20cc2e12c99bd2cff113eef361bdef"} Feb 19 15:27:57 crc kubenswrapper[4861]: I0219 15:27:57.092366 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdvzw" event={"ID":"411a3faa-cfd0-459d-8599-b7052fad17db","Type":"ContainerDied","Data":"5237ca056b690fc9dacb1a5be67e4082676a08abd1bfadd12a50df516b186a47"} Feb 19 15:27:57 crc kubenswrapper[4861]: I0219 15:27:57.092386 4861 scope.go:117] "RemoveContainer" containerID="719f3eff7a1a99e91976f16460d44435da20cc2e12c99bd2cff113eef361bdef" Feb 19 15:27:57 crc kubenswrapper[4861]: I0219 15:27:57.125497 4861 scope.go:117] "RemoveContainer" containerID="8380a235b0860ce3aef9d5ba9bd00ce16a1655064ea0c160fcb63a17c09b86a9" Feb 19 15:27:57 crc kubenswrapper[4861]: I0219 15:27:57.130351 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sdvzw"] Feb 19 15:27:57 crc kubenswrapper[4861]: I0219 
15:27:57.140663 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sdvzw"] Feb 19 15:27:57 crc kubenswrapper[4861]: I0219 15:27:57.158280 4861 scope.go:117] "RemoveContainer" containerID="fe11bd09b2c2a1d6823bc360c446636da836d143132420263a613c10e4f271db" Feb 19 15:27:57 crc kubenswrapper[4861]: I0219 15:27:57.218462 4861 scope.go:117] "RemoveContainer" containerID="719f3eff7a1a99e91976f16460d44435da20cc2e12c99bd2cff113eef361bdef" Feb 19 15:27:57 crc kubenswrapper[4861]: E0219 15:27:57.218974 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"719f3eff7a1a99e91976f16460d44435da20cc2e12c99bd2cff113eef361bdef\": container with ID starting with 719f3eff7a1a99e91976f16460d44435da20cc2e12c99bd2cff113eef361bdef not found: ID does not exist" containerID="719f3eff7a1a99e91976f16460d44435da20cc2e12c99bd2cff113eef361bdef" Feb 19 15:27:57 crc kubenswrapper[4861]: I0219 15:27:57.219023 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"719f3eff7a1a99e91976f16460d44435da20cc2e12c99bd2cff113eef361bdef"} err="failed to get container status \"719f3eff7a1a99e91976f16460d44435da20cc2e12c99bd2cff113eef361bdef\": rpc error: code = NotFound desc = could not find container \"719f3eff7a1a99e91976f16460d44435da20cc2e12c99bd2cff113eef361bdef\": container with ID starting with 719f3eff7a1a99e91976f16460d44435da20cc2e12c99bd2cff113eef361bdef not found: ID does not exist" Feb 19 15:27:57 crc kubenswrapper[4861]: I0219 15:27:57.219051 4861 scope.go:117] "RemoveContainer" containerID="8380a235b0860ce3aef9d5ba9bd00ce16a1655064ea0c160fcb63a17c09b86a9" Feb 19 15:27:57 crc kubenswrapper[4861]: E0219 15:27:57.219370 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8380a235b0860ce3aef9d5ba9bd00ce16a1655064ea0c160fcb63a17c09b86a9\": container with ID 
starting with 8380a235b0860ce3aef9d5ba9bd00ce16a1655064ea0c160fcb63a17c09b86a9 not found: ID does not exist" containerID="8380a235b0860ce3aef9d5ba9bd00ce16a1655064ea0c160fcb63a17c09b86a9" Feb 19 15:27:57 crc kubenswrapper[4861]: I0219 15:27:57.219401 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8380a235b0860ce3aef9d5ba9bd00ce16a1655064ea0c160fcb63a17c09b86a9"} err="failed to get container status \"8380a235b0860ce3aef9d5ba9bd00ce16a1655064ea0c160fcb63a17c09b86a9\": rpc error: code = NotFound desc = could not find container \"8380a235b0860ce3aef9d5ba9bd00ce16a1655064ea0c160fcb63a17c09b86a9\": container with ID starting with 8380a235b0860ce3aef9d5ba9bd00ce16a1655064ea0c160fcb63a17c09b86a9 not found: ID does not exist" Feb 19 15:27:57 crc kubenswrapper[4861]: I0219 15:27:57.219440 4861 scope.go:117] "RemoveContainer" containerID="fe11bd09b2c2a1d6823bc360c446636da836d143132420263a613c10e4f271db" Feb 19 15:27:57 crc kubenswrapper[4861]: E0219 15:27:57.219902 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe11bd09b2c2a1d6823bc360c446636da836d143132420263a613c10e4f271db\": container with ID starting with fe11bd09b2c2a1d6823bc360c446636da836d143132420263a613c10e4f271db not found: ID does not exist" containerID="fe11bd09b2c2a1d6823bc360c446636da836d143132420263a613c10e4f271db" Feb 19 15:27:57 crc kubenswrapper[4861]: I0219 15:27:57.220023 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe11bd09b2c2a1d6823bc360c446636da836d143132420263a613c10e4f271db"} err="failed to get container status \"fe11bd09b2c2a1d6823bc360c446636da836d143132420263a613c10e4f271db\": rpc error: code = NotFound desc = could not find container \"fe11bd09b2c2a1d6823bc360c446636da836d143132420263a613c10e4f271db\": container with ID starting with fe11bd09b2c2a1d6823bc360c446636da836d143132420263a613c10e4f271db not found: 
ID does not exist" Feb 19 15:27:57 crc kubenswrapper[4861]: I0219 15:27:57.996851 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="411a3faa-cfd0-459d-8599-b7052fad17db" path="/var/lib/kubelet/pods/411a3faa-cfd0-459d-8599-b7052fad17db/volumes" Feb 19 15:28:10 crc kubenswrapper[4861]: I0219 15:28:10.977295 4861 scope.go:117] "RemoveContainer" containerID="98db6d963d362b2a070428e591c72e25891a9603f2f3e1be3216ab1074715193" Feb 19 15:28:10 crc kubenswrapper[4861]: E0219 15:28:10.978716 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:28:22 crc kubenswrapper[4861]: I0219 15:28:22.977845 4861 scope.go:117] "RemoveContainer" containerID="98db6d963d362b2a070428e591c72e25891a9603f2f3e1be3216ab1074715193" Feb 19 15:28:22 crc kubenswrapper[4861]: E0219 15:28:22.978717 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:28:35 crc kubenswrapper[4861]: I0219 15:28:35.990967 4861 scope.go:117] "RemoveContainer" containerID="98db6d963d362b2a070428e591c72e25891a9603f2f3e1be3216ab1074715193" Feb 19 15:28:35 crc kubenswrapper[4861]: E0219 15:28:35.992159 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:28:48 crc kubenswrapper[4861]: I0219 15:28:48.977639 4861 scope.go:117] "RemoveContainer" containerID="98db6d963d362b2a070428e591c72e25891a9603f2f3e1be3216ab1074715193" Feb 19 15:28:48 crc kubenswrapper[4861]: E0219 15:28:48.978922 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:29:01 crc kubenswrapper[4861]: I0219 15:29:01.977652 4861 scope.go:117] "RemoveContainer" containerID="98db6d963d362b2a070428e591c72e25891a9603f2f3e1be3216ab1074715193" Feb 19 15:29:01 crc kubenswrapper[4861]: E0219 15:29:01.979883 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:29:13 crc kubenswrapper[4861]: I0219 15:29:13.977781 4861 scope.go:117] "RemoveContainer" containerID="98db6d963d362b2a070428e591c72e25891a9603f2f3e1be3216ab1074715193" Feb 19 15:29:13 crc kubenswrapper[4861]: E0219 15:29:13.978998 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:29:24 crc kubenswrapper[4861]: I0219 15:29:24.977394 4861 scope.go:117] "RemoveContainer" containerID="98db6d963d362b2a070428e591c72e25891a9603f2f3e1be3216ab1074715193" Feb 19 15:29:24 crc kubenswrapper[4861]: E0219 15:29:24.980700 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:29:38 crc kubenswrapper[4861]: I0219 15:29:38.977401 4861 scope.go:117] "RemoveContainer" containerID="98db6d963d362b2a070428e591c72e25891a9603f2f3e1be3216ab1074715193" Feb 19 15:29:38 crc kubenswrapper[4861]: E0219 15:29:38.978563 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:29:49 crc kubenswrapper[4861]: I0219 15:29:49.977966 4861 scope.go:117] "RemoveContainer" containerID="98db6d963d362b2a070428e591c72e25891a9603f2f3e1be3216ab1074715193" Feb 19 15:29:49 crc kubenswrapper[4861]: E0219 15:29:49.979070 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:29:58 crc kubenswrapper[4861]: I0219 15:29:58.577430 4861 generic.go:334] "Generic (PLEG): container finished" podID="00127d70-73bb-4b8e-8268-a6f858a14e41" containerID="1ccdcdd865e2973c54bcdef3f382c592b3ced264820a08b94f488b24b50d05d1" exitCode=0 Feb 19 15:29:58 crc kubenswrapper[4861]: I0219 15:29:58.577462 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" event={"ID":"00127d70-73bb-4b8e-8268-a6f858a14e41","Type":"ContainerDied","Data":"1ccdcdd865e2973c54bcdef3f382c592b3ced264820a08b94f488b24b50d05d1"} Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.082517 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.167547 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525250-gx5cv"] Feb 19 15:30:00 crc kubenswrapper[4861]: E0219 15:30:00.168210 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="411a3faa-cfd0-459d-8599-b7052fad17db" containerName="extract-content" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.168231 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="411a3faa-cfd0-459d-8599-b7052fad17db" containerName="extract-content" Feb 19 15:30:00 crc kubenswrapper[4861]: E0219 15:30:00.168254 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="411a3faa-cfd0-459d-8599-b7052fad17db" containerName="registry-server" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.168260 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="411a3faa-cfd0-459d-8599-b7052fad17db" containerName="registry-server" Feb 19 15:30:00 crc kubenswrapper[4861]: E0219 15:30:00.168282 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00127d70-73bb-4b8e-8268-a6f858a14e41" containerName="nova-cell1-openstack-openstack-cell1" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.168291 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="00127d70-73bb-4b8e-8268-a6f858a14e41" containerName="nova-cell1-openstack-openstack-cell1" Feb 19 15:30:00 crc kubenswrapper[4861]: E0219 15:30:00.168308 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="411a3faa-cfd0-459d-8599-b7052fad17db" containerName="extract-utilities" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.168316 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="411a3faa-cfd0-459d-8599-b7052fad17db" containerName="extract-utilities" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.168536 4861 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="411a3faa-cfd0-459d-8599-b7052fad17db" containerName="registry-server" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.168550 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="00127d70-73bb-4b8e-8268-a6f858a14e41" containerName="nova-cell1-openstack-openstack-cell1" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.169259 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-gx5cv" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.171872 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.171954 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.177557 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525250-gx5cv"] Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.218920 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-migration-ssh-key-1\") pod \"00127d70-73bb-4b8e-8268-a6f858a14e41\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.219035 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cell1-compute-config-0\") pod \"00127d70-73bb-4b8e-8268-a6f858a14e41\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.219197 4861 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cells-global-config-0\") pod \"00127d70-73bb-4b8e-8268-a6f858a14e41\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.219256 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-ssh-key-openstack-cell1\") pod \"00127d70-73bb-4b8e-8268-a6f858a14e41\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.219287 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cell1-combined-ca-bundle\") pod \"00127d70-73bb-4b8e-8268-a6f858a14e41\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.219312 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfjzg\" (UniqueName: \"kubernetes.io/projected/00127d70-73bb-4b8e-8268-a6f858a14e41-kube-api-access-wfjzg\") pod \"00127d70-73bb-4b8e-8268-a6f858a14e41\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.219336 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-inventory\") pod \"00127d70-73bb-4b8e-8268-a6f858a14e41\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.219365 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: 
\"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cell1-compute-config-2\") pod \"00127d70-73bb-4b8e-8268-a6f858a14e41\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.219443 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-migration-ssh-key-0\") pod \"00127d70-73bb-4b8e-8268-a6f858a14e41\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.219468 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cell1-compute-config-3\") pod \"00127d70-73bb-4b8e-8268-a6f858a14e41\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.219505 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cell1-compute-config-1\") pod \"00127d70-73bb-4b8e-8268-a6f858a14e41\" (UID: \"00127d70-73bb-4b8e-8268-a6f858a14e41\") " Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.234432 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "00127d70-73bb-4b8e-8268-a6f858a14e41" (UID: "00127d70-73bb-4b8e-8268-a6f858a14e41"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.235309 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00127d70-73bb-4b8e-8268-a6f858a14e41-kube-api-access-wfjzg" (OuterVolumeSpecName: "kube-api-access-wfjzg") pod "00127d70-73bb-4b8e-8268-a6f858a14e41" (UID: "00127d70-73bb-4b8e-8268-a6f858a14e41"). InnerVolumeSpecName "kube-api-access-wfjzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.252519 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "00127d70-73bb-4b8e-8268-a6f858a14e41" (UID: "00127d70-73bb-4b8e-8268-a6f858a14e41"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.257114 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "00127d70-73bb-4b8e-8268-a6f858a14e41" (UID: "00127d70-73bb-4b8e-8268-a6f858a14e41"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.260697 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "00127d70-73bb-4b8e-8268-a6f858a14e41" (UID: "00127d70-73bb-4b8e-8268-a6f858a14e41"). InnerVolumeSpecName "nova-cells-global-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.263565 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "00127d70-73bb-4b8e-8268-a6f858a14e41" (UID: "00127d70-73bb-4b8e-8268-a6f858a14e41"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.265936 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "00127d70-73bb-4b8e-8268-a6f858a14e41" (UID: "00127d70-73bb-4b8e-8268-a6f858a14e41"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.273292 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "00127d70-73bb-4b8e-8268-a6f858a14e41" (UID: "00127d70-73bb-4b8e-8268-a6f858a14e41"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.279059 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "00127d70-73bb-4b8e-8268-a6f858a14e41" (UID: "00127d70-73bb-4b8e-8268-a6f858a14e41"). InnerVolumeSpecName "nova-cell1-compute-config-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.283479 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "00127d70-73bb-4b8e-8268-a6f858a14e41" (UID: "00127d70-73bb-4b8e-8268-a6f858a14e41"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.296585 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-inventory" (OuterVolumeSpecName: "inventory") pod "00127d70-73bb-4b8e-8268-a6f858a14e41" (UID: "00127d70-73bb-4b8e-8268-a6f858a14e41"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.322751 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/819abd45-e037-4f2e-aa92-722e0a919687-secret-volume\") pod \"collect-profiles-29525250-gx5cv\" (UID: \"819abd45-e037-4f2e-aa92-722e0a919687\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-gx5cv" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.322828 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwbqg\" (UniqueName: \"kubernetes.io/projected/819abd45-e037-4f2e-aa92-722e0a919687-kube-api-access-dwbqg\") pod \"collect-profiles-29525250-gx5cv\" (UID: \"819abd45-e037-4f2e-aa92-722e0a919687\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-gx5cv" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.322907 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/819abd45-e037-4f2e-aa92-722e0a919687-config-volume\") pod \"collect-profiles-29525250-gx5cv\" (UID: \"819abd45-e037-4f2e-aa92-722e0a919687\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-gx5cv" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.323155 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfjzg\" (UniqueName: \"kubernetes.io/projected/00127d70-73bb-4b8e-8268-a6f858a14e41-kube-api-access-wfjzg\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.323179 4861 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.323193 4861 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.323206 4861 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.323218 4861 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.323229 4861 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:00 crc 
kubenswrapper[4861]: I0219 15:30:00.323241 4861 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.323252 4861 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.323264 4861 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.323278 4861 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.323289 4861 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00127d70-73bb-4b8e-8268-a6f858a14e41-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.425569 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/819abd45-e037-4f2e-aa92-722e0a919687-secret-volume\") pod \"collect-profiles-29525250-gx5cv\" (UID: \"819abd45-e037-4f2e-aa92-722e0a919687\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-gx5cv" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.425650 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-dwbqg\" (UniqueName: \"kubernetes.io/projected/819abd45-e037-4f2e-aa92-722e0a919687-kube-api-access-dwbqg\") pod \"collect-profiles-29525250-gx5cv\" (UID: \"819abd45-e037-4f2e-aa92-722e0a919687\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-gx5cv" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.425704 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/819abd45-e037-4f2e-aa92-722e0a919687-config-volume\") pod \"collect-profiles-29525250-gx5cv\" (UID: \"819abd45-e037-4f2e-aa92-722e0a919687\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-gx5cv" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.426594 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/819abd45-e037-4f2e-aa92-722e0a919687-config-volume\") pod \"collect-profiles-29525250-gx5cv\" (UID: \"819abd45-e037-4f2e-aa92-722e0a919687\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-gx5cv" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.431148 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/819abd45-e037-4f2e-aa92-722e0a919687-secret-volume\") pod \"collect-profiles-29525250-gx5cv\" (UID: \"819abd45-e037-4f2e-aa92-722e0a919687\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-gx5cv" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.447097 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwbqg\" (UniqueName: \"kubernetes.io/projected/819abd45-e037-4f2e-aa92-722e0a919687-kube-api-access-dwbqg\") pod \"collect-profiles-29525250-gx5cv\" (UID: \"819abd45-e037-4f2e-aa92-722e0a919687\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-gx5cv" Feb 19 15:30:00 crc 
kubenswrapper[4861]: I0219 15:30:00.489639 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-gx5cv" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.604715 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" event={"ID":"00127d70-73bb-4b8e-8268-a6f858a14e41","Type":"ContainerDied","Data":"1ae77805c2a59e581e24b1059b1252f61c77ae001e7a5ca7825bc108fd1d2ec6"} Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.604984 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ae77805c2a59e581e24b1059b1252f61c77ae001e7a5ca7825bc108fd1d2ec6" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.604924 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-vghd2" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.716496 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-k48pq"] Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.718200 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-k48pq" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.720555 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.720771 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.723235 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.723257 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.723610 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jgxb2" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.742406 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-k48pq"] Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.834552 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-k48pq\" (UID: \"266f77b9-e649-47f8-8f78-735c0393960f\") " pod="openstack/telemetry-openstack-openstack-cell1-k48pq" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.834658 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-k48pq\" (UID: 
\"266f77b9-e649-47f8-8f78-735c0393960f\") " pod="openstack/telemetry-openstack-openstack-cell1-k48pq" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.834767 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-k48pq\" (UID: \"266f77b9-e649-47f8-8f78-735c0393960f\") " pod="openstack/telemetry-openstack-openstack-cell1-k48pq" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.834823 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-inventory\") pod \"telemetry-openstack-openstack-cell1-k48pq\" (UID: \"266f77b9-e649-47f8-8f78-735c0393960f\") " pod="openstack/telemetry-openstack-openstack-cell1-k48pq" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.834851 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-k48pq\" (UID: \"266f77b9-e649-47f8-8f78-735c0393960f\") " pod="openstack/telemetry-openstack-openstack-cell1-k48pq" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.834883 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqvc9\" (UniqueName: \"kubernetes.io/projected/266f77b9-e649-47f8-8f78-735c0393960f-kube-api-access-kqvc9\") pod \"telemetry-openstack-openstack-cell1-k48pq\" (UID: \"266f77b9-e649-47f8-8f78-735c0393960f\") " pod="openstack/telemetry-openstack-openstack-cell1-k48pq" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.834924 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-ssh-key-openstack-cell1\") pod \"telemetry-openstack-openstack-cell1-k48pq\" (UID: \"266f77b9-e649-47f8-8f78-735c0393960f\") " pod="openstack/telemetry-openstack-openstack-cell1-k48pq" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.937886 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-k48pq\" (UID: \"266f77b9-e649-47f8-8f78-735c0393960f\") " pod="openstack/telemetry-openstack-openstack-cell1-k48pq" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.939208 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-inventory\") pod \"telemetry-openstack-openstack-cell1-k48pq\" (UID: \"266f77b9-e649-47f8-8f78-735c0393960f\") " pod="openstack/telemetry-openstack-openstack-cell1-k48pq" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.939254 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-k48pq\" (UID: \"266f77b9-e649-47f8-8f78-735c0393960f\") " pod="openstack/telemetry-openstack-openstack-cell1-k48pq" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.939297 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqvc9\" (UniqueName: \"kubernetes.io/projected/266f77b9-e649-47f8-8f78-735c0393960f-kube-api-access-kqvc9\") pod \"telemetry-openstack-openstack-cell1-k48pq\" (UID: 
\"266f77b9-e649-47f8-8f78-735c0393960f\") " pod="openstack/telemetry-openstack-openstack-cell1-k48pq" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.939754 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-ssh-key-openstack-cell1\") pod \"telemetry-openstack-openstack-cell1-k48pq\" (UID: \"266f77b9-e649-47f8-8f78-735c0393960f\") " pod="openstack/telemetry-openstack-openstack-cell1-k48pq" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.939981 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-k48pq\" (UID: \"266f77b9-e649-47f8-8f78-735c0393960f\") " pod="openstack/telemetry-openstack-openstack-cell1-k48pq" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.940037 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-k48pq\" (UID: \"266f77b9-e649-47f8-8f78-735c0393960f\") " pod="openstack/telemetry-openstack-openstack-cell1-k48pq" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.942819 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-k48pq\" (UID: \"266f77b9-e649-47f8-8f78-735c0393960f\") " pod="openstack/telemetry-openstack-openstack-cell1-k48pq" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.944640 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-k48pq\" (UID: \"266f77b9-e649-47f8-8f78-735c0393960f\") " pod="openstack/telemetry-openstack-openstack-cell1-k48pq" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.944860 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-ssh-key-openstack-cell1\") pod \"telemetry-openstack-openstack-cell1-k48pq\" (UID: \"266f77b9-e649-47f8-8f78-735c0393960f\") " pod="openstack/telemetry-openstack-openstack-cell1-k48pq" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.944931 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-k48pq\" (UID: \"266f77b9-e649-47f8-8f78-735c0393960f\") " pod="openstack/telemetry-openstack-openstack-cell1-k48pq" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.946244 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-k48pq\" (UID: \"266f77b9-e649-47f8-8f78-735c0393960f\") " pod="openstack/telemetry-openstack-openstack-cell1-k48pq" Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.948142 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-inventory\") pod \"telemetry-openstack-openstack-cell1-k48pq\" (UID: \"266f77b9-e649-47f8-8f78-735c0393960f\") " pod="openstack/telemetry-openstack-openstack-cell1-k48pq" 
Feb 19 15:30:00 crc kubenswrapper[4861]: I0219 15:30:00.960490 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqvc9\" (UniqueName: \"kubernetes.io/projected/266f77b9-e649-47f8-8f78-735c0393960f-kube-api-access-kqvc9\") pod \"telemetry-openstack-openstack-cell1-k48pq\" (UID: \"266f77b9-e649-47f8-8f78-735c0393960f\") " pod="openstack/telemetry-openstack-openstack-cell1-k48pq" Feb 19 15:30:01 crc kubenswrapper[4861]: I0219 15:30:01.035338 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-k48pq" Feb 19 15:30:01 crc kubenswrapper[4861]: I0219 15:30:01.037888 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525250-gx5cv"] Feb 19 15:30:01 crc kubenswrapper[4861]: I0219 15:30:01.498096 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-k48pq"] Feb 19 15:30:01 crc kubenswrapper[4861]: I0219 15:30:01.614248 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-gx5cv" event={"ID":"819abd45-e037-4f2e-aa92-722e0a919687","Type":"ContainerStarted","Data":"028ccc7a821fcd937a3448fcb7947132ed538317ba85ff8ad087c5d0d5cd0b92"} Feb 19 15:30:01 crc kubenswrapper[4861]: I0219 15:30:01.614298 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-gx5cv" event={"ID":"819abd45-e037-4f2e-aa92-722e0a919687","Type":"ContainerStarted","Data":"67501f5465da695b2adaa0765c67fa5ef2ecbe6cc9b6a4fbb71f5301d1bae19f"} Feb 19 15:30:01 crc kubenswrapper[4861]: I0219 15:30:01.619846 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-k48pq" 
event={"ID":"266f77b9-e649-47f8-8f78-735c0393960f","Type":"ContainerStarted","Data":"7d40d8158649d52b306d2ad3c1fe079693f969484b7dc265eec92fe0c33959e2"} Feb 19 15:30:01 crc kubenswrapper[4861]: I0219 15:30:01.637686 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-gx5cv" podStartSLOduration=1.637672037 podStartE2EDuration="1.637672037s" podCreationTimestamp="2026-02-19 15:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:30:01.633965177 +0000 UTC m=+8416.295068405" watchObservedRunningTime="2026-02-19 15:30:01.637672037 +0000 UTC m=+8416.298775255" Feb 19 15:30:01 crc kubenswrapper[4861]: I0219 15:30:01.977782 4861 scope.go:117] "RemoveContainer" containerID="98db6d963d362b2a070428e591c72e25891a9603f2f3e1be3216ab1074715193" Feb 19 15:30:01 crc kubenswrapper[4861]: E0219 15:30:01.978094 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:30:02 crc kubenswrapper[4861]: I0219 15:30:02.631982 4861 generic.go:334] "Generic (PLEG): container finished" podID="819abd45-e037-4f2e-aa92-722e0a919687" containerID="028ccc7a821fcd937a3448fcb7947132ed538317ba85ff8ad087c5d0d5cd0b92" exitCode=0 Feb 19 15:30:02 crc kubenswrapper[4861]: I0219 15:30:02.632194 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-gx5cv" 
event={"ID":"819abd45-e037-4f2e-aa92-722e0a919687","Type":"ContainerDied","Data":"028ccc7a821fcd937a3448fcb7947132ed538317ba85ff8ad087c5d0d5cd0b92"} Feb 19 15:30:02 crc kubenswrapper[4861]: I0219 15:30:02.635050 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-k48pq" event={"ID":"266f77b9-e649-47f8-8f78-735c0393960f","Type":"ContainerStarted","Data":"c7cc50e3b8fabf1a3e09f476573c5f8fd39af7c020782ecd6614bc5d9d2c5c70"} Feb 19 15:30:02 crc kubenswrapper[4861]: I0219 15:30:02.683089 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-openstack-openstack-cell1-k48pq" podStartSLOduration=2.088151462 podStartE2EDuration="2.683062971s" podCreationTimestamp="2026-02-19 15:30:00 +0000 UTC" firstStartedPulling="2026-02-19 15:30:01.501548316 +0000 UTC m=+8416.162651544" lastFinishedPulling="2026-02-19 15:30:02.096459825 +0000 UTC m=+8416.757563053" observedRunningTime="2026-02-19 15:30:02.676959296 +0000 UTC m=+8417.338062524" watchObservedRunningTime="2026-02-19 15:30:02.683062971 +0000 UTC m=+8417.344166209" Feb 19 15:30:04 crc kubenswrapper[4861]: I0219 15:30:04.080873 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-gx5cv" Feb 19 15:30:04 crc kubenswrapper[4861]: I0219 15:30:04.111401 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwbqg\" (UniqueName: \"kubernetes.io/projected/819abd45-e037-4f2e-aa92-722e0a919687-kube-api-access-dwbqg\") pod \"819abd45-e037-4f2e-aa92-722e0a919687\" (UID: \"819abd45-e037-4f2e-aa92-722e0a919687\") " Feb 19 15:30:04 crc kubenswrapper[4861]: I0219 15:30:04.111516 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/819abd45-e037-4f2e-aa92-722e0a919687-config-volume\") pod \"819abd45-e037-4f2e-aa92-722e0a919687\" (UID: \"819abd45-e037-4f2e-aa92-722e0a919687\") " Feb 19 15:30:04 crc kubenswrapper[4861]: I0219 15:30:04.111872 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/819abd45-e037-4f2e-aa92-722e0a919687-secret-volume\") pod \"819abd45-e037-4f2e-aa92-722e0a919687\" (UID: \"819abd45-e037-4f2e-aa92-722e0a919687\") " Feb 19 15:30:04 crc kubenswrapper[4861]: I0219 15:30:04.117887 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/819abd45-e037-4f2e-aa92-722e0a919687-config-volume" (OuterVolumeSpecName: "config-volume") pod "819abd45-e037-4f2e-aa92-722e0a919687" (UID: "819abd45-e037-4f2e-aa92-722e0a919687"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:30:04 crc kubenswrapper[4861]: I0219 15:30:04.120446 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/819abd45-e037-4f2e-aa92-722e0a919687-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "819abd45-e037-4f2e-aa92-722e0a919687" (UID: "819abd45-e037-4f2e-aa92-722e0a919687"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:04 crc kubenswrapper[4861]: I0219 15:30:04.124772 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/819abd45-e037-4f2e-aa92-722e0a919687-kube-api-access-dwbqg" (OuterVolumeSpecName: "kube-api-access-dwbqg") pod "819abd45-e037-4f2e-aa92-722e0a919687" (UID: "819abd45-e037-4f2e-aa92-722e0a919687"). InnerVolumeSpecName "kube-api-access-dwbqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:30:04 crc kubenswrapper[4861]: I0219 15:30:04.215258 4861 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/819abd45-e037-4f2e-aa92-722e0a919687-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:04 crc kubenswrapper[4861]: I0219 15:30:04.215302 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwbqg\" (UniqueName: \"kubernetes.io/projected/819abd45-e037-4f2e-aa92-722e0a919687-kube-api-access-dwbqg\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:04 crc kubenswrapper[4861]: I0219 15:30:04.215317 4861 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/819abd45-e037-4f2e-aa92-722e0a919687-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:04 crc kubenswrapper[4861]: I0219 15:30:04.668027 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-gx5cv" event={"ID":"819abd45-e037-4f2e-aa92-722e0a919687","Type":"ContainerDied","Data":"67501f5465da695b2adaa0765c67fa5ef2ecbe6cc9b6a4fbb71f5301d1bae19f"} Feb 19 15:30:04 crc kubenswrapper[4861]: I0219 15:30:04.668797 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67501f5465da695b2adaa0765c67fa5ef2ecbe6cc9b6a4fbb71f5301d1bae19f" Feb 19 15:30:04 crc kubenswrapper[4861]: I0219 15:30:04.668162 4861 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-gx5cv" Feb 19 15:30:04 crc kubenswrapper[4861]: I0219 15:30:04.741593 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525205-g8jq9"] Feb 19 15:30:04 crc kubenswrapper[4861]: I0219 15:30:04.753391 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525205-g8jq9"] Feb 19 15:30:05 crc kubenswrapper[4861]: I0219 15:30:05.997466 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1bd39f2-e28e-475c-bc50-c430217d93a6" path="/var/lib/kubelet/pods/a1bd39f2-e28e-475c-bc50-c430217d93a6/volumes" Feb 19 15:30:12 crc kubenswrapper[4861]: I0219 15:30:12.826395 4861 scope.go:117] "RemoveContainer" containerID="a1ab56604003f58281ece631e51ce10f2928728c254b8745236182358b3d1e2c" Feb 19 15:30:16 crc kubenswrapper[4861]: I0219 15:30:16.021514 4861 scope.go:117] "RemoveContainer" containerID="98db6d963d362b2a070428e591c72e25891a9603f2f3e1be3216ab1074715193" Feb 19 15:30:16 crc kubenswrapper[4861]: E0219 15:30:16.022501 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:30:29 crc kubenswrapper[4861]: I0219 15:30:29.977968 4861 scope.go:117] "RemoveContainer" containerID="98db6d963d362b2a070428e591c72e25891a9603f2f3e1be3216ab1074715193" Feb 19 15:30:29 crc kubenswrapper[4861]: E0219 15:30:29.980008 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:30:42 crc kubenswrapper[4861]: I0219 15:30:42.978062 4861 scope.go:117] "RemoveContainer" containerID="98db6d963d362b2a070428e591c72e25891a9603f2f3e1be3216ab1074715193" Feb 19 15:30:42 crc kubenswrapper[4861]: E0219 15:30:42.979397 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:30:53 crc kubenswrapper[4861]: I0219 15:30:53.977750 4861 scope.go:117] "RemoveContainer" containerID="98db6d963d362b2a070428e591c72e25891a9603f2f3e1be3216ab1074715193" Feb 19 15:30:53 crc kubenswrapper[4861]: E0219 15:30:53.978673 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:31:04 crc kubenswrapper[4861]: I0219 15:31:04.978130 4861 scope.go:117] "RemoveContainer" containerID="98db6d963d362b2a070428e591c72e25891a9603f2f3e1be3216ab1074715193" Feb 19 15:31:06 crc kubenswrapper[4861]: I0219 15:31:06.394579 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" 
event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"42bdaaa988e3d709e147c606ea5831a8547288d81b47645909bdc0769d2fee21"} Feb 19 15:33:32 crc kubenswrapper[4861]: I0219 15:33:32.131250 4861 generic.go:334] "Generic (PLEG): container finished" podID="266f77b9-e649-47f8-8f78-735c0393960f" containerID="c7cc50e3b8fabf1a3e09f476573c5f8fd39af7c020782ecd6614bc5d9d2c5c70" exitCode=0 Feb 19 15:33:32 crc kubenswrapper[4861]: I0219 15:33:32.131348 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-k48pq" event={"ID":"266f77b9-e649-47f8-8f78-735c0393960f","Type":"ContainerDied","Data":"c7cc50e3b8fabf1a3e09f476573c5f8fd39af7c020782ecd6614bc5d9d2c5c70"} Feb 19 15:33:33 crc kubenswrapper[4861]: I0219 15:33:33.734685 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-k48pq" Feb 19 15:33:33 crc kubenswrapper[4861]: I0219 15:33:33.833823 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:33:33 crc kubenswrapper[4861]: I0219 15:33:33.833881 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:33:33 crc kubenswrapper[4861]: I0219 15:33:33.862531 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-ceilometer-compute-config-data-0\") pod 
\"266f77b9-e649-47f8-8f78-735c0393960f\" (UID: \"266f77b9-e649-47f8-8f78-735c0393960f\") " Feb 19 15:33:33 crc kubenswrapper[4861]: I0219 15:33:33.862603 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqvc9\" (UniqueName: \"kubernetes.io/projected/266f77b9-e649-47f8-8f78-735c0393960f-kube-api-access-kqvc9\") pod \"266f77b9-e649-47f8-8f78-735c0393960f\" (UID: \"266f77b9-e649-47f8-8f78-735c0393960f\") " Feb 19 15:33:33 crc kubenswrapper[4861]: I0219 15:33:33.863612 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-inventory\") pod \"266f77b9-e649-47f8-8f78-735c0393960f\" (UID: \"266f77b9-e649-47f8-8f78-735c0393960f\") " Feb 19 15:33:33 crc kubenswrapper[4861]: I0219 15:33:33.863776 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-ceilometer-compute-config-data-1\") pod \"266f77b9-e649-47f8-8f78-735c0393960f\" (UID: \"266f77b9-e649-47f8-8f78-735c0393960f\") " Feb 19 15:33:33 crc kubenswrapper[4861]: I0219 15:33:33.863847 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-ceilometer-compute-config-data-2\") pod \"266f77b9-e649-47f8-8f78-735c0393960f\" (UID: \"266f77b9-e649-47f8-8f78-735c0393960f\") " Feb 19 15:33:33 crc kubenswrapper[4861]: I0219 15:33:33.863887 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-telemetry-combined-ca-bundle\") pod \"266f77b9-e649-47f8-8f78-735c0393960f\" (UID: \"266f77b9-e649-47f8-8f78-735c0393960f\") " Feb 19 15:33:33 crc kubenswrapper[4861]: I0219 
15:33:33.863969 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-ssh-key-openstack-cell1\") pod \"266f77b9-e649-47f8-8f78-735c0393960f\" (UID: \"266f77b9-e649-47f8-8f78-735c0393960f\") " Feb 19 15:33:33 crc kubenswrapper[4861]: I0219 15:33:33.870386 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "266f77b9-e649-47f8-8f78-735c0393960f" (UID: "266f77b9-e649-47f8-8f78-735c0393960f"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:33:33 crc kubenswrapper[4861]: I0219 15:33:33.872261 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/266f77b9-e649-47f8-8f78-735c0393960f-kube-api-access-kqvc9" (OuterVolumeSpecName: "kube-api-access-kqvc9") pod "266f77b9-e649-47f8-8f78-735c0393960f" (UID: "266f77b9-e649-47f8-8f78-735c0393960f"). InnerVolumeSpecName "kube-api-access-kqvc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:33:33 crc kubenswrapper[4861]: I0219 15:33:33.895651 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "266f77b9-e649-47f8-8f78-735c0393960f" (UID: "266f77b9-e649-47f8-8f78-735c0393960f"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:33:33 crc kubenswrapper[4861]: I0219 15:33:33.897856 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "266f77b9-e649-47f8-8f78-735c0393960f" (UID: "266f77b9-e649-47f8-8f78-735c0393960f"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:33:33 crc kubenswrapper[4861]: I0219 15:33:33.898714 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-inventory" (OuterVolumeSpecName: "inventory") pod "266f77b9-e649-47f8-8f78-735c0393960f" (UID: "266f77b9-e649-47f8-8f78-735c0393960f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:33:33 crc kubenswrapper[4861]: I0219 15:33:33.901591 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "266f77b9-e649-47f8-8f78-735c0393960f" (UID: "266f77b9-e649-47f8-8f78-735c0393960f"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:33:33 crc kubenswrapper[4861]: I0219 15:33:33.914376 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "266f77b9-e649-47f8-8f78-735c0393960f" (UID: "266f77b9-e649-47f8-8f78-735c0393960f"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:33:33 crc kubenswrapper[4861]: I0219 15:33:33.966921 4861 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:33:33 crc kubenswrapper[4861]: I0219 15:33:33.966974 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqvc9\" (UniqueName: \"kubernetes.io/projected/266f77b9-e649-47f8-8f78-735c0393960f-kube-api-access-kqvc9\") on node \"crc\" DevicePath \"\"" Feb 19 15:33:33 crc kubenswrapper[4861]: I0219 15:33:33.966988 4861 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 15:33:33 crc kubenswrapper[4861]: I0219 15:33:33.967001 4861 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 19 15:33:33 crc kubenswrapper[4861]: I0219 15:33:33.967015 4861 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 19 15:33:33 crc kubenswrapper[4861]: I0219 15:33:33.967027 4861 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:33:33 crc kubenswrapper[4861]: I0219 15:33:33.967039 4861 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/266f77b9-e649-47f8-8f78-735c0393960f-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.155526 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-k48pq" event={"ID":"266f77b9-e649-47f8-8f78-735c0393960f","Type":"ContainerDied","Data":"7d40d8158649d52b306d2ad3c1fe079693f969484b7dc265eec92fe0c33959e2"} Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.155591 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d40d8158649d52b306d2ad3c1fe079693f969484b7dc265eec92fe0c33959e2" Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.155631 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-k48pq" Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.265482 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-xcjzx"] Feb 19 15:33:34 crc kubenswrapper[4861]: E0219 15:33:34.266637 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="819abd45-e037-4f2e-aa92-722e0a919687" containerName="collect-profiles" Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.266758 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="819abd45-e037-4f2e-aa92-722e0a919687" containerName="collect-profiles" Feb 19 15:33:34 crc kubenswrapper[4861]: E0219 15:33:34.266850 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="266f77b9-e649-47f8-8f78-735c0393960f" containerName="telemetry-openstack-openstack-cell1" Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.266921 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="266f77b9-e649-47f8-8f78-735c0393960f" containerName="telemetry-openstack-openstack-cell1" Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.267274 4861 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="819abd45-e037-4f2e-aa92-722e0a919687" containerName="collect-profiles" Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.267392 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="266f77b9-e649-47f8-8f78-735c0393960f" containerName="telemetry-openstack-openstack-cell1" Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.268669 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-xcjzx" Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.271274 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jgxb2" Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.271529 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.271753 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-sriov-agent-neutron-config" Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.271963 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.273106 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.280146 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-xcjzx"] Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.375296 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da08be56-9fc3-4723-afec-e824bcea0208-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-xcjzx\" (UID: \"da08be56-9fc3-4723-afec-e824bcea0208\") " 
pod="openstack/neutron-sriov-openstack-openstack-cell1-xcjzx" Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.375383 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f27t\" (UniqueName: \"kubernetes.io/projected/da08be56-9fc3-4723-afec-e824bcea0208-kube-api-access-6f27t\") pod \"neutron-sriov-openstack-openstack-cell1-xcjzx\" (UID: \"da08be56-9fc3-4723-afec-e824bcea0208\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-xcjzx" Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.375513 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/da08be56-9fc3-4723-afec-e824bcea0208-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-xcjzx\" (UID: \"da08be56-9fc3-4723-afec-e824bcea0208\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-xcjzx" Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.375634 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da08be56-9fc3-4723-afec-e824bcea0208-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-xcjzx\" (UID: \"da08be56-9fc3-4723-afec-e824bcea0208\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-xcjzx" Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.375707 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/da08be56-9fc3-4723-afec-e824bcea0208-ssh-key-openstack-cell1\") pod \"neutron-sriov-openstack-openstack-cell1-xcjzx\" (UID: \"da08be56-9fc3-4723-afec-e824bcea0208\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-xcjzx" Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.478915 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da08be56-9fc3-4723-afec-e824bcea0208-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-xcjzx\" (UID: \"da08be56-9fc3-4723-afec-e824bcea0208\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-xcjzx" Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.479069 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f27t\" (UniqueName: \"kubernetes.io/projected/da08be56-9fc3-4723-afec-e824bcea0208-kube-api-access-6f27t\") pod \"neutron-sriov-openstack-openstack-cell1-xcjzx\" (UID: \"da08be56-9fc3-4723-afec-e824bcea0208\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-xcjzx" Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.479157 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/da08be56-9fc3-4723-afec-e824bcea0208-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-xcjzx\" (UID: \"da08be56-9fc3-4723-afec-e824bcea0208\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-xcjzx" Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.479252 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da08be56-9fc3-4723-afec-e824bcea0208-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-xcjzx\" (UID: \"da08be56-9fc3-4723-afec-e824bcea0208\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-xcjzx" Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.479325 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/da08be56-9fc3-4723-afec-e824bcea0208-ssh-key-openstack-cell1\") pod \"neutron-sriov-openstack-openstack-cell1-xcjzx\" 
(UID: \"da08be56-9fc3-4723-afec-e824bcea0208\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-xcjzx" Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.486192 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/da08be56-9fc3-4723-afec-e824bcea0208-ssh-key-openstack-cell1\") pod \"neutron-sriov-openstack-openstack-cell1-xcjzx\" (UID: \"da08be56-9fc3-4723-afec-e824bcea0208\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-xcjzx" Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.486304 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/da08be56-9fc3-4723-afec-e824bcea0208-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-xcjzx\" (UID: \"da08be56-9fc3-4723-afec-e824bcea0208\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-xcjzx" Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.486558 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da08be56-9fc3-4723-afec-e824bcea0208-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-xcjzx\" (UID: \"da08be56-9fc3-4723-afec-e824bcea0208\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-xcjzx" Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.498215 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da08be56-9fc3-4723-afec-e824bcea0208-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-xcjzx\" (UID: \"da08be56-9fc3-4723-afec-e824bcea0208\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-xcjzx" Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.500233 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6f27t\" (UniqueName: \"kubernetes.io/projected/da08be56-9fc3-4723-afec-e824bcea0208-kube-api-access-6f27t\") pod \"neutron-sriov-openstack-openstack-cell1-xcjzx\" (UID: \"da08be56-9fc3-4723-afec-e824bcea0208\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-xcjzx" Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.586044 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-xcjzx" Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.931528 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-xcjzx"] Feb 19 15:33:34 crc kubenswrapper[4861]: I0219 15:33:34.938711 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 15:33:35 crc kubenswrapper[4861]: I0219 15:33:35.165653 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-xcjzx" event={"ID":"da08be56-9fc3-4723-afec-e824bcea0208","Type":"ContainerStarted","Data":"b97b15013339b6f1e0c9099e191dad1e837745fdf5609bb4194fc7975934c544"} Feb 19 15:33:36 crc kubenswrapper[4861]: I0219 15:33:36.177724 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-xcjzx" event={"ID":"da08be56-9fc3-4723-afec-e824bcea0208","Type":"ContainerStarted","Data":"4a68a72f61e271d91a5b0f7b82d8e7a799b14c7d37248c1a22cdc6298fac0ec7"} Feb 19 15:33:36 crc kubenswrapper[4861]: I0219 15:33:36.197856 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-sriov-openstack-openstack-cell1-xcjzx" podStartSLOduration=1.639428992 podStartE2EDuration="2.197840199s" podCreationTimestamp="2026-02-19 15:33:34 +0000 UTC" firstStartedPulling="2026-02-19 15:33:34.938529023 +0000 UTC m=+8629.599632251" lastFinishedPulling="2026-02-19 15:33:35.49694022 +0000 UTC m=+8630.158043458" 
observedRunningTime="2026-02-19 15:33:36.19267275 +0000 UTC m=+8630.853775978" watchObservedRunningTime="2026-02-19 15:33:36.197840199 +0000 UTC m=+8630.858943427" Feb 19 15:34:03 crc kubenswrapper[4861]: I0219 15:34:03.834537 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:34:03 crc kubenswrapper[4861]: I0219 15:34:03.835342 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:34:33 crc kubenswrapper[4861]: I0219 15:34:33.834499 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:34:33 crc kubenswrapper[4861]: I0219 15:34:33.835016 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:34:33 crc kubenswrapper[4861]: I0219 15:34:33.835058 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 15:34:33 crc kubenswrapper[4861]: I0219 15:34:33.835828 4861 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"42bdaaa988e3d709e147c606ea5831a8547288d81b47645909bdc0769d2fee21"} pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 15:34:33 crc kubenswrapper[4861]: I0219 15:34:33.835880 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" containerID="cri-o://42bdaaa988e3d709e147c606ea5831a8547288d81b47645909bdc0769d2fee21" gracePeriod=600 Feb 19 15:34:34 crc kubenswrapper[4861]: I0219 15:34:34.888149 4861 generic.go:334] "Generic (PLEG): container finished" podID="478e6971-05ac-43f2-99a2-cd93644c6227" containerID="42bdaaa988e3d709e147c606ea5831a8547288d81b47645909bdc0769d2fee21" exitCode=0 Feb 19 15:34:34 crc kubenswrapper[4861]: I0219 15:34:34.888226 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerDied","Data":"42bdaaa988e3d709e147c606ea5831a8547288d81b47645909bdc0769d2fee21"} Feb 19 15:34:34 crc kubenswrapper[4861]: I0219 15:34:34.888862 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"dedc132be9003463e866947cd72bcece5b8e065cf48504d818619290c187ad83"} Feb 19 15:34:34 crc kubenswrapper[4861]: I0219 15:34:34.888894 4861 scope.go:117] "RemoveContainer" containerID="98db6d963d362b2a070428e591c72e25891a9603f2f3e1be3216ab1074715193" Feb 19 15:34:39 crc kubenswrapper[4861]: I0219 15:34:39.949591 4861 generic.go:334] "Generic (PLEG): container finished" podID="da08be56-9fc3-4723-afec-e824bcea0208" 
containerID="4a68a72f61e271d91a5b0f7b82d8e7a799b14c7d37248c1a22cdc6298fac0ec7" exitCode=0 Feb 19 15:34:39 crc kubenswrapper[4861]: I0219 15:34:39.949736 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-xcjzx" event={"ID":"da08be56-9fc3-4723-afec-e824bcea0208","Type":"ContainerDied","Data":"4a68a72f61e271d91a5b0f7b82d8e7a799b14c7d37248c1a22cdc6298fac0ec7"} Feb 19 15:34:41 crc kubenswrapper[4861]: I0219 15:34:41.466913 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-xcjzx" Feb 19 15:34:41 crc kubenswrapper[4861]: I0219 15:34:41.616352 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f27t\" (UniqueName: \"kubernetes.io/projected/da08be56-9fc3-4723-afec-e824bcea0208-kube-api-access-6f27t\") pod \"da08be56-9fc3-4723-afec-e824bcea0208\" (UID: \"da08be56-9fc3-4723-afec-e824bcea0208\") " Feb 19 15:34:41 crc kubenswrapper[4861]: I0219 15:34:41.616678 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/da08be56-9fc3-4723-afec-e824bcea0208-ssh-key-openstack-cell1\") pod \"da08be56-9fc3-4723-afec-e824bcea0208\" (UID: \"da08be56-9fc3-4723-afec-e824bcea0208\") " Feb 19 15:34:41 crc kubenswrapper[4861]: I0219 15:34:41.616771 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da08be56-9fc3-4723-afec-e824bcea0208-neutron-sriov-combined-ca-bundle\") pod \"da08be56-9fc3-4723-afec-e824bcea0208\" (UID: \"da08be56-9fc3-4723-afec-e824bcea0208\") " Feb 19 15:34:41 crc kubenswrapper[4861]: I0219 15:34:41.616909 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/da08be56-9fc3-4723-afec-e824bcea0208-neutron-sriov-agent-neutron-config-0\") pod \"da08be56-9fc3-4723-afec-e824bcea0208\" (UID: \"da08be56-9fc3-4723-afec-e824bcea0208\") " Feb 19 15:34:41 crc kubenswrapper[4861]: I0219 15:34:41.616976 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da08be56-9fc3-4723-afec-e824bcea0208-inventory\") pod \"da08be56-9fc3-4723-afec-e824bcea0208\" (UID: \"da08be56-9fc3-4723-afec-e824bcea0208\") " Feb 19 15:34:41 crc kubenswrapper[4861]: I0219 15:34:41.623484 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da08be56-9fc3-4723-afec-e824bcea0208-kube-api-access-6f27t" (OuterVolumeSpecName: "kube-api-access-6f27t") pod "da08be56-9fc3-4723-afec-e824bcea0208" (UID: "da08be56-9fc3-4723-afec-e824bcea0208"). InnerVolumeSpecName "kube-api-access-6f27t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:34:41 crc kubenswrapper[4861]: I0219 15:34:41.625325 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da08be56-9fc3-4723-afec-e824bcea0208-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "da08be56-9fc3-4723-afec-e824bcea0208" (UID: "da08be56-9fc3-4723-afec-e824bcea0208"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:34:41 crc kubenswrapper[4861]: I0219 15:34:41.649555 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da08be56-9fc3-4723-afec-e824bcea0208-neutron-sriov-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-sriov-agent-neutron-config-0") pod "da08be56-9fc3-4723-afec-e824bcea0208" (UID: "da08be56-9fc3-4723-afec-e824bcea0208"). InnerVolumeSpecName "neutron-sriov-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:34:41 crc kubenswrapper[4861]: I0219 15:34:41.662833 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da08be56-9fc3-4723-afec-e824bcea0208-inventory" (OuterVolumeSpecName: "inventory") pod "da08be56-9fc3-4723-afec-e824bcea0208" (UID: "da08be56-9fc3-4723-afec-e824bcea0208"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:34:41 crc kubenswrapper[4861]: I0219 15:34:41.681623 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da08be56-9fc3-4723-afec-e824bcea0208-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "da08be56-9fc3-4723-afec-e824bcea0208" (UID: "da08be56-9fc3-4723-afec-e824bcea0208"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:34:41 crc kubenswrapper[4861]: I0219 15:34:41.720762 4861 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da08be56-9fc3-4723-afec-e824bcea0208-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 15:34:41 crc kubenswrapper[4861]: I0219 15:34:41.720828 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f27t\" (UniqueName: \"kubernetes.io/projected/da08be56-9fc3-4723-afec-e824bcea0208-kube-api-access-6f27t\") on node \"crc\" DevicePath \"\"" Feb 19 15:34:41 crc kubenswrapper[4861]: I0219 15:34:41.720850 4861 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/da08be56-9fc3-4723-afec-e824bcea0208-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 15:34:41 crc kubenswrapper[4861]: I0219 15:34:41.720869 4861 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/da08be56-9fc3-4723-afec-e824bcea0208-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:34:41 crc kubenswrapper[4861]: I0219 15:34:41.720891 4861 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/da08be56-9fc3-4723-afec-e824bcea0208-neutron-sriov-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:34:41 crc kubenswrapper[4861]: I0219 15:34:41.975570 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-xcjzx" event={"ID":"da08be56-9fc3-4723-afec-e824bcea0208","Type":"ContainerDied","Data":"b97b15013339b6f1e0c9099e191dad1e837745fdf5609bb4194fc7975934c544"} Feb 19 15:34:41 crc kubenswrapper[4861]: I0219 15:34:41.975613 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b97b15013339b6f1e0c9099e191dad1e837745fdf5609bb4194fc7975934c544" Feb 19 15:34:41 crc kubenswrapper[4861]: I0219 15:34:41.975674 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-xcjzx" Feb 19 15:34:42 crc kubenswrapper[4861]: I0219 15:34:42.148070 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-bs9v4"] Feb 19 15:34:42 crc kubenswrapper[4861]: E0219 15:34:42.148604 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da08be56-9fc3-4723-afec-e824bcea0208" containerName="neutron-sriov-openstack-openstack-cell1" Feb 19 15:34:42 crc kubenswrapper[4861]: I0219 15:34:42.148628 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="da08be56-9fc3-4723-afec-e824bcea0208" containerName="neutron-sriov-openstack-openstack-cell1" Feb 19 15:34:42 crc kubenswrapper[4861]: I0219 15:34:42.148920 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="da08be56-9fc3-4723-afec-e824bcea0208" containerName="neutron-sriov-openstack-openstack-cell1" Feb 19 15:34:42 crc kubenswrapper[4861]: I0219 15:34:42.149818 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-bs9v4" Feb 19 15:34:42 crc kubenswrapper[4861]: I0219 15:34:42.153809 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 15:34:42 crc kubenswrapper[4861]: I0219 15:34:42.153857 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 15:34:42 crc kubenswrapper[4861]: I0219 15:34:42.153860 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-dhcp-agent-neutron-config" Feb 19 15:34:42 crc kubenswrapper[4861]: I0219 15:34:42.154034 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 15:34:42 crc kubenswrapper[4861]: I0219 15:34:42.154155 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jgxb2" Feb 19 15:34:42 crc kubenswrapper[4861]: I0219 15:34:42.161841 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-bs9v4"] Feb 19 15:34:42 crc kubenswrapper[4861]: I0219 15:34:42.334012 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a87fc82d-f819-4b14-8508-6ce6dae8eda5-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-bs9v4\" (UID: \"a87fc82d-f819-4b14-8508-6ce6dae8eda5\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-bs9v4" Feb 19 15:34:42 crc kubenswrapper[4861]: I0219 15:34:42.334087 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a87fc82d-f819-4b14-8508-6ce6dae8eda5-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-bs9v4\" (UID: \"a87fc82d-f819-4b14-8508-6ce6dae8eda5\") " 
pod="openstack/neutron-dhcp-openstack-openstack-cell1-bs9v4" Feb 19 15:34:42 crc kubenswrapper[4861]: I0219 15:34:42.334134 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a87fc82d-f819-4b14-8508-6ce6dae8eda5-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-bs9v4\" (UID: \"a87fc82d-f819-4b14-8508-6ce6dae8eda5\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-bs9v4" Feb 19 15:34:42 crc kubenswrapper[4861]: I0219 15:34:42.334213 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/a87fc82d-f819-4b14-8508-6ce6dae8eda5-ssh-key-openstack-cell1\") pod \"neutron-dhcp-openstack-openstack-cell1-bs9v4\" (UID: \"a87fc82d-f819-4b14-8508-6ce6dae8eda5\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-bs9v4" Feb 19 15:34:42 crc kubenswrapper[4861]: I0219 15:34:42.334262 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wklzx\" (UniqueName: \"kubernetes.io/projected/a87fc82d-f819-4b14-8508-6ce6dae8eda5-kube-api-access-wklzx\") pod \"neutron-dhcp-openstack-openstack-cell1-bs9v4\" (UID: \"a87fc82d-f819-4b14-8508-6ce6dae8eda5\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-bs9v4" Feb 19 15:34:42 crc kubenswrapper[4861]: I0219 15:34:42.436032 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wklzx\" (UniqueName: \"kubernetes.io/projected/a87fc82d-f819-4b14-8508-6ce6dae8eda5-kube-api-access-wklzx\") pod \"neutron-dhcp-openstack-openstack-cell1-bs9v4\" (UID: \"a87fc82d-f819-4b14-8508-6ce6dae8eda5\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-bs9v4" Feb 19 15:34:42 crc kubenswrapper[4861]: I0219 15:34:42.436307 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a87fc82d-f819-4b14-8508-6ce6dae8eda5-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-bs9v4\" (UID: \"a87fc82d-f819-4b14-8508-6ce6dae8eda5\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-bs9v4" Feb 19 15:34:42 crc kubenswrapper[4861]: I0219 15:34:42.436374 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a87fc82d-f819-4b14-8508-6ce6dae8eda5-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-bs9v4\" (UID: \"a87fc82d-f819-4b14-8508-6ce6dae8eda5\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-bs9v4" Feb 19 15:34:42 crc kubenswrapper[4861]: I0219 15:34:42.436463 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a87fc82d-f819-4b14-8508-6ce6dae8eda5-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-bs9v4\" (UID: \"a87fc82d-f819-4b14-8508-6ce6dae8eda5\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-bs9v4" Feb 19 15:34:42 crc kubenswrapper[4861]: I0219 15:34:42.436535 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/a87fc82d-f819-4b14-8508-6ce6dae8eda5-ssh-key-openstack-cell1\") pod \"neutron-dhcp-openstack-openstack-cell1-bs9v4\" (UID: \"a87fc82d-f819-4b14-8508-6ce6dae8eda5\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-bs9v4" Feb 19 15:34:42 crc kubenswrapper[4861]: I0219 15:34:42.442134 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/a87fc82d-f819-4b14-8508-6ce6dae8eda5-ssh-key-openstack-cell1\") pod \"neutron-dhcp-openstack-openstack-cell1-bs9v4\" (UID: 
\"a87fc82d-f819-4b14-8508-6ce6dae8eda5\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-bs9v4" Feb 19 15:34:42 crc kubenswrapper[4861]: I0219 15:34:42.446699 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a87fc82d-f819-4b14-8508-6ce6dae8eda5-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-bs9v4\" (UID: \"a87fc82d-f819-4b14-8508-6ce6dae8eda5\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-bs9v4" Feb 19 15:34:42 crc kubenswrapper[4861]: I0219 15:34:42.455560 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a87fc82d-f819-4b14-8508-6ce6dae8eda5-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-bs9v4\" (UID: \"a87fc82d-f819-4b14-8508-6ce6dae8eda5\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-bs9v4" Feb 19 15:34:42 crc kubenswrapper[4861]: I0219 15:34:42.456200 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a87fc82d-f819-4b14-8508-6ce6dae8eda5-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-bs9v4\" (UID: \"a87fc82d-f819-4b14-8508-6ce6dae8eda5\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-bs9v4" Feb 19 15:34:42 crc kubenswrapper[4861]: I0219 15:34:42.462078 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wklzx\" (UniqueName: \"kubernetes.io/projected/a87fc82d-f819-4b14-8508-6ce6dae8eda5-kube-api-access-wklzx\") pod \"neutron-dhcp-openstack-openstack-cell1-bs9v4\" (UID: \"a87fc82d-f819-4b14-8508-6ce6dae8eda5\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-bs9v4" Feb 19 15:34:42 crc kubenswrapper[4861]: I0219 15:34:42.468031 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-bs9v4" Feb 19 15:34:43 crc kubenswrapper[4861]: I0219 15:34:43.066179 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-bs9v4"] Feb 19 15:34:43 crc kubenswrapper[4861]: W0219 15:34:43.073264 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda87fc82d_f819_4b14_8508_6ce6dae8eda5.slice/crio-1a0ff10005b790f0b0a7c521e6ac8ab894c55a5d0fbac2d5efc3cbb4a9019b13 WatchSource:0}: Error finding container 1a0ff10005b790f0b0a7c521e6ac8ab894c55a5d0fbac2d5efc3cbb4a9019b13: Status 404 returned error can't find the container with id 1a0ff10005b790f0b0a7c521e6ac8ab894c55a5d0fbac2d5efc3cbb4a9019b13 Feb 19 15:34:44 crc kubenswrapper[4861]: I0219 15:34:44.008065 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-bs9v4" event={"ID":"a87fc82d-f819-4b14-8508-6ce6dae8eda5","Type":"ContainerStarted","Data":"8d0b50d89265739607ae60d8f44c64db81be3b8ebe13e0e2a1772596cc72ab59"} Feb 19 15:34:44 crc kubenswrapper[4861]: I0219 15:34:44.008737 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-bs9v4" event={"ID":"a87fc82d-f819-4b14-8508-6ce6dae8eda5","Type":"ContainerStarted","Data":"1a0ff10005b790f0b0a7c521e6ac8ab894c55a5d0fbac2d5efc3cbb4a9019b13"} Feb 19 15:34:44 crc kubenswrapper[4861]: I0219 15:34:44.028442 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dhcp-openstack-openstack-cell1-bs9v4" podStartSLOduration=1.630302218 podStartE2EDuration="2.028405905s" podCreationTimestamp="2026-02-19 15:34:42 +0000 UTC" firstStartedPulling="2026-02-19 15:34:43.07556102 +0000 UTC m=+8697.736664248" lastFinishedPulling="2026-02-19 15:34:43.473664677 +0000 UTC m=+8698.134767935" observedRunningTime="2026-02-19 15:34:44.024623663 
+0000 UTC m=+8698.685726901" watchObservedRunningTime="2026-02-19 15:34:44.028405905 +0000 UTC m=+8698.689509133" Feb 19 15:36:05 crc kubenswrapper[4861]: I0219 15:36:05.072594 4861 generic.go:334] "Generic (PLEG): container finished" podID="a87fc82d-f819-4b14-8508-6ce6dae8eda5" containerID="8d0b50d89265739607ae60d8f44c64db81be3b8ebe13e0e2a1772596cc72ab59" exitCode=0 Feb 19 15:36:05 crc kubenswrapper[4861]: I0219 15:36:05.073007 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-bs9v4" event={"ID":"a87fc82d-f819-4b14-8508-6ce6dae8eda5","Type":"ContainerDied","Data":"8d0b50d89265739607ae60d8f44c64db81be3b8ebe13e0e2a1772596cc72ab59"} Feb 19 15:36:06 crc kubenswrapper[4861]: I0219 15:36:06.600570 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-bs9v4" Feb 19 15:36:06 crc kubenswrapper[4861]: I0219 15:36:06.738397 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/a87fc82d-f819-4b14-8508-6ce6dae8eda5-ssh-key-openstack-cell1\") pod \"a87fc82d-f819-4b14-8508-6ce6dae8eda5\" (UID: \"a87fc82d-f819-4b14-8508-6ce6dae8eda5\") " Feb 19 15:36:06 crc kubenswrapper[4861]: I0219 15:36:06.738497 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a87fc82d-f819-4b14-8508-6ce6dae8eda5-neutron-dhcp-combined-ca-bundle\") pod \"a87fc82d-f819-4b14-8508-6ce6dae8eda5\" (UID: \"a87fc82d-f819-4b14-8508-6ce6dae8eda5\") " Feb 19 15:36:06 crc kubenswrapper[4861]: I0219 15:36:06.738751 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a87fc82d-f819-4b14-8508-6ce6dae8eda5-inventory\") pod \"a87fc82d-f819-4b14-8508-6ce6dae8eda5\" (UID: \"a87fc82d-f819-4b14-8508-6ce6dae8eda5\") " 
Feb 19 15:36:06 crc kubenswrapper[4861]: I0219 15:36:06.738840 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a87fc82d-f819-4b14-8508-6ce6dae8eda5-neutron-dhcp-agent-neutron-config-0\") pod \"a87fc82d-f819-4b14-8508-6ce6dae8eda5\" (UID: \"a87fc82d-f819-4b14-8508-6ce6dae8eda5\") " Feb 19 15:36:06 crc kubenswrapper[4861]: I0219 15:36:06.738995 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wklzx\" (UniqueName: \"kubernetes.io/projected/a87fc82d-f819-4b14-8508-6ce6dae8eda5-kube-api-access-wklzx\") pod \"a87fc82d-f819-4b14-8508-6ce6dae8eda5\" (UID: \"a87fc82d-f819-4b14-8508-6ce6dae8eda5\") " Feb 19 15:36:06 crc kubenswrapper[4861]: I0219 15:36:06.747688 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a87fc82d-f819-4b14-8508-6ce6dae8eda5-kube-api-access-wklzx" (OuterVolumeSpecName: "kube-api-access-wklzx") pod "a87fc82d-f819-4b14-8508-6ce6dae8eda5" (UID: "a87fc82d-f819-4b14-8508-6ce6dae8eda5"). InnerVolumeSpecName "kube-api-access-wklzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:36:06 crc kubenswrapper[4861]: I0219 15:36:06.747698 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a87fc82d-f819-4b14-8508-6ce6dae8eda5-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "a87fc82d-f819-4b14-8508-6ce6dae8eda5" (UID: "a87fc82d-f819-4b14-8508-6ce6dae8eda5"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:36:06 crc kubenswrapper[4861]: I0219 15:36:06.777995 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a87fc82d-f819-4b14-8508-6ce6dae8eda5-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "a87fc82d-f819-4b14-8508-6ce6dae8eda5" (UID: "a87fc82d-f819-4b14-8508-6ce6dae8eda5"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:36:06 crc kubenswrapper[4861]: I0219 15:36:06.778593 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a87fc82d-f819-4b14-8508-6ce6dae8eda5-inventory" (OuterVolumeSpecName: "inventory") pod "a87fc82d-f819-4b14-8508-6ce6dae8eda5" (UID: "a87fc82d-f819-4b14-8508-6ce6dae8eda5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:36:06 crc kubenswrapper[4861]: I0219 15:36:06.781765 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a87fc82d-f819-4b14-8508-6ce6dae8eda5-neutron-dhcp-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-dhcp-agent-neutron-config-0") pod "a87fc82d-f819-4b14-8508-6ce6dae8eda5" (UID: "a87fc82d-f819-4b14-8508-6ce6dae8eda5"). InnerVolumeSpecName "neutron-dhcp-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:36:06 crc kubenswrapper[4861]: I0219 15:36:06.842541 4861 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a87fc82d-f819-4b14-8508-6ce6dae8eda5-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 15:36:06 crc kubenswrapper[4861]: I0219 15:36:06.842591 4861 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a87fc82d-f819-4b14-8508-6ce6dae8eda5-neutron-dhcp-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:36:06 crc kubenswrapper[4861]: I0219 15:36:06.842616 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wklzx\" (UniqueName: \"kubernetes.io/projected/a87fc82d-f819-4b14-8508-6ce6dae8eda5-kube-api-access-wklzx\") on node \"crc\" DevicePath \"\"" Feb 19 15:36:06 crc kubenswrapper[4861]: I0219 15:36:06.842637 4861 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/a87fc82d-f819-4b14-8508-6ce6dae8eda5-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 15:36:06 crc kubenswrapper[4861]: I0219 15:36:06.842655 4861 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a87fc82d-f819-4b14-8508-6ce6dae8eda5-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:36:07 crc kubenswrapper[4861]: I0219 15:36:07.132745 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-bs9v4" event={"ID":"a87fc82d-f819-4b14-8508-6ce6dae8eda5","Type":"ContainerDied","Data":"1a0ff10005b790f0b0a7c521e6ac8ab894c55a5d0fbac2d5efc3cbb4a9019b13"} Feb 19 15:36:07 crc kubenswrapper[4861]: I0219 15:36:07.132974 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-bs9v4" Feb 19 15:36:07 crc kubenswrapper[4861]: I0219 15:36:07.133762 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a0ff10005b790f0b0a7c521e6ac8ab894c55a5d0fbac2d5efc3cbb4a9019b13" Feb 19 15:36:11 crc kubenswrapper[4861]: I0219 15:36:11.186007 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 15:36:11 crc kubenswrapper[4861]: I0219 15:36:11.186729 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="31da7774-969b-45e2-ba07-b6dbdd3a97d7" containerName="nova-cell0-conductor-conductor" containerID="cri-o://5faea9d8129b450ddc3f1ba1a103f983d654d8f56a3e5746dedabffb98051355" gracePeriod=30 Feb 19 15:36:11 crc kubenswrapper[4861]: I0219 15:36:11.213833 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 15:36:11 crc kubenswrapper[4861]: I0219 15:36:11.214348 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="31cf5b71-d287-40a5-80a6-95e490e99f1b" containerName="nova-cell1-conductor-conductor" containerID="cri-o://a973163f15bc1003fe88add383a038bf20ba52e31b04c09698206dc99a541ef2" gracePeriod=30 Feb 19 15:36:11 crc kubenswrapper[4861]: E0219 15:36:11.504134 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a973163f15bc1003fe88add383a038bf20ba52e31b04c09698206dc99a541ef2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 19 15:36:11 crc kubenswrapper[4861]: E0219 15:36:11.505658 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , 
stderr: , exit code -1" containerID="a973163f15bc1003fe88add383a038bf20ba52e31b04c09698206dc99a541ef2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 19 15:36:11 crc kubenswrapper[4861]: E0219 15:36:11.507158 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a973163f15bc1003fe88add383a038bf20ba52e31b04c09698206dc99a541ef2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 19 15:36:11 crc kubenswrapper[4861]: E0219 15:36:11.507274 4861 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="31cf5b71-d287-40a5-80a6-95e490e99f1b" containerName="nova-cell1-conductor-conductor" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.213253 4861 generic.go:334] "Generic (PLEG): container finished" podID="31da7774-969b-45e2-ba07-b6dbdd3a97d7" containerID="5faea9d8129b450ddc3f1ba1a103f983d654d8f56a3e5746dedabffb98051355" exitCode=0 Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.213350 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"31da7774-969b-45e2-ba07-b6dbdd3a97d7","Type":"ContainerDied","Data":"5faea9d8129b450ddc3f1ba1a103f983d654d8f56a3e5746dedabffb98051355"} Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.213640 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"31da7774-969b-45e2-ba07-b6dbdd3a97d7","Type":"ContainerDied","Data":"a68c548aae866ae7afd3c3a5f0f0f90298c2538df99e94de3da5a3fc46c35cd3"} Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.213671 4861 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="a68c548aae866ae7afd3c3a5f0f0f90298c2538df99e94de3da5a3fc46c35cd3" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.247806 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.262536 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn"] Feb 19 15:36:12 crc kubenswrapper[4861]: E0219 15:36:12.262959 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31da7774-969b-45e2-ba07-b6dbdd3a97d7" containerName="nova-cell0-conductor-conductor" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.262980 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="31da7774-969b-45e2-ba07-b6dbdd3a97d7" containerName="nova-cell0-conductor-conductor" Feb 19 15:36:12 crc kubenswrapper[4861]: E0219 15:36:12.263011 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a87fc82d-f819-4b14-8508-6ce6dae8eda5" containerName="neutron-dhcp-openstack-openstack-cell1" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.263019 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a87fc82d-f819-4b14-8508-6ce6dae8eda5" containerName="neutron-dhcp-openstack-openstack-cell1" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.263271 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="31da7774-969b-45e2-ba07-b6dbdd3a97d7" containerName="nova-cell0-conductor-conductor" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.263292 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a87fc82d-f819-4b14-8508-6ce6dae8eda5" containerName="neutron-dhcp-openstack-openstack-cell1" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.264005 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.267160 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jgxb2" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.267192 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.267389 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.267589 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.268712 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.270106 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.270898 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.279802 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn"] Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.375775 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31da7774-969b-45e2-ba07-b6dbdd3a97d7-config-data\") pod \"31da7774-969b-45e2-ba07-b6dbdd3a97d7\" (UID: \"31da7774-969b-45e2-ba07-b6dbdd3a97d7\") " Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.376127 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-hwbq5\" (UniqueName: \"kubernetes.io/projected/31da7774-969b-45e2-ba07-b6dbdd3a97d7-kube-api-access-hwbq5\") pod \"31da7774-969b-45e2-ba07-b6dbdd3a97d7\" (UID: \"31da7774-969b-45e2-ba07-b6dbdd3a97d7\") " Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.376291 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31da7774-969b-45e2-ba07-b6dbdd3a97d7-combined-ca-bundle\") pod \"31da7774-969b-45e2-ba07-b6dbdd3a97d7\" (UID: \"31da7774-969b-45e2-ba07-b6dbdd3a97d7\") " Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.376679 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.376724 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cell1-compute-config-2\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.376751 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " 
pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.376791 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s82l\" (UniqueName: \"kubernetes.io/projected/1ba6ecf0-8541-46e2-b17e-46cc3491f870-kube-api-access-9s82l\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.376844 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.377508 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.377909 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cell1-compute-config-3\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " 
pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.378069 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.378258 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.378311 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.378661 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " 
pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.399990 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31da7774-969b-45e2-ba07-b6dbdd3a97d7-kube-api-access-hwbq5" (OuterVolumeSpecName: "kube-api-access-hwbq5") pod "31da7774-969b-45e2-ba07-b6dbdd3a97d7" (UID: "31da7774-969b-45e2-ba07-b6dbdd3a97d7"). InnerVolumeSpecName "kube-api-access-hwbq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.407740 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31da7774-969b-45e2-ba07-b6dbdd3a97d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31da7774-969b-45e2-ba07-b6dbdd3a97d7" (UID: "31da7774-969b-45e2-ba07-b6dbdd3a97d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.466376 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31da7774-969b-45e2-ba07-b6dbdd3a97d7-config-data" (OuterVolumeSpecName: "config-data") pod "31da7774-969b-45e2-ba07-b6dbdd3a97d7" (UID: "31da7774-969b-45e2-ba07-b6dbdd3a97d7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.480944 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.480997 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.481021 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cell1-compute-config-2\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.481047 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc 
kubenswrapper[4861]: I0219 15:36:12.481080 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s82l\" (UniqueName: \"kubernetes.io/projected/1ba6ecf0-8541-46e2-b17e-46cc3491f870-kube-api-access-9s82l\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.481112 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.481145 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.481198 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cell1-compute-config-3\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.481232 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.481275 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.481296 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.481367 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31da7774-969b-45e2-ba07-b6dbdd3a97d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.481379 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31da7774-969b-45e2-ba07-b6dbdd3a97d7-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.481388 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwbq5\" 
(UniqueName: \"kubernetes.io/projected/31da7774-969b-45e2-ba07-b6dbdd3a97d7-kube-api-access-hwbq5\") on node \"crc\" DevicePath \"\"" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.483606 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.492661 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.492667 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.492937 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="4fd2c176-d104-4058-9b92-db8937b2fa68" containerName="nova-scheduler-scheduler" containerID="cri-o://a2177b031787bc98b3bc35f9f66446d8152e849662832fd08586343692499599" gracePeriod=30 Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.493280 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cell1-compute-config-3\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " 
pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.494036 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cell1-compute-config-2\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.494781 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.496651 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.499078 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 
15:36:12.499265 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.500405 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.513598 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.513706 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s82l\" (UniqueName: \"kubernetes.io/projected/1ba6ecf0-8541-46e2-b17e-46cc3491f870-kube-api-access-9s82l\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.520234 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 15:36:12 crc 
kubenswrapper[4861]: I0219 15:36:12.520505 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6a686dc0-b187-47d8-a90c-1db5eea1d4e7" containerName="nova-api-log" containerID="cri-o://c0f2b523daf180bf1973cf992a132f05c495761dfd661d1df189c080addb70a5" gracePeriod=30 Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.520661 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6a686dc0-b187-47d8-a90c-1db5eea1d4e7" containerName="nova-api-api" containerID="cri-o://1e39482c95424a0d7e351847d2c48e042d30656bed1da08875c082851593b506" gracePeriod=30 Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.578815 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.579136 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.579368 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fb9f00e8-70f3-49da-a7f9-46fecf68a76d" containerName="nova-metadata-log" containerID="cri-o://0ae430aca406b076ff240e8e321d34e9f036978501f0873df65e18a181e69a0d" gracePeriod=30 Feb 19 15:36:12 crc kubenswrapper[4861]: I0219 15:36:12.579549 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fb9f00e8-70f3-49da-a7f9-46fecf68a76d" containerName="nova-metadata-metadata" containerID="cri-o://2ee7872414ad102175614a53fce2d14b1bad996ed101f0adbb7c26850904e7e8" gracePeriod=30 Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.023197 4861 scope.go:117] "RemoveContainer" containerID="5faea9d8129b450ddc3f1ba1a103f983d654d8f56a3e5746dedabffb98051355" Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.232929 4861 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn"] Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.243701 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" event={"ID":"1ba6ecf0-8541-46e2-b17e-46cc3491f870","Type":"ContainerStarted","Data":"aeab29b3d7466d8888e3ef765712bd45ba68fed39d05085a5f6bca50f402dac6"} Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.248355 4861 generic.go:334] "Generic (PLEG): container finished" podID="6a686dc0-b187-47d8-a90c-1db5eea1d4e7" containerID="c0f2b523daf180bf1973cf992a132f05c495761dfd661d1df189c080addb70a5" exitCode=143 Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.248489 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6a686dc0-b187-47d8-a90c-1db5eea1d4e7","Type":"ContainerDied","Data":"c0f2b523daf180bf1973cf992a132f05c495761dfd661d1df189c080addb70a5"} Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.250061 4861 generic.go:334] "Generic (PLEG): container finished" podID="fb9f00e8-70f3-49da-a7f9-46fecf68a76d" containerID="0ae430aca406b076ff240e8e321d34e9f036978501f0873df65e18a181e69a0d" exitCode=143 Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.250106 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fb9f00e8-70f3-49da-a7f9-46fecf68a76d","Type":"ContainerDied","Data":"0ae430aca406b076ff240e8e321d34e9f036978501f0873df65e18a181e69a0d"} Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.250123 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.352623 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.366768 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.378593 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.380371 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.382536 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.389234 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.556120 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7253b71a-0900-4e57-8d58-4e935ace7b4a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7253b71a-0900-4e57-8d58-4e935ace7b4a\") " pod="openstack/nova-cell0-conductor-0" Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.556181 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpbqt\" (UniqueName: \"kubernetes.io/projected/7253b71a-0900-4e57-8d58-4e935ace7b4a-kube-api-access-xpbqt\") pod \"nova-cell0-conductor-0\" (UID: \"7253b71a-0900-4e57-8d58-4e935ace7b4a\") " pod="openstack/nova-cell0-conductor-0" Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.556207 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7253b71a-0900-4e57-8d58-4e935ace7b4a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7253b71a-0900-4e57-8d58-4e935ace7b4a\") " pod="openstack/nova-cell0-conductor-0" Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.657958 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7253b71a-0900-4e57-8d58-4e935ace7b4a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7253b71a-0900-4e57-8d58-4e935ace7b4a\") " pod="openstack/nova-cell0-conductor-0" Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.658276 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpbqt\" (UniqueName: \"kubernetes.io/projected/7253b71a-0900-4e57-8d58-4e935ace7b4a-kube-api-access-xpbqt\") pod \"nova-cell0-conductor-0\" (UID: \"7253b71a-0900-4e57-8d58-4e935ace7b4a\") " pod="openstack/nova-cell0-conductor-0" Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.658302 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7253b71a-0900-4e57-8d58-4e935ace7b4a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7253b71a-0900-4e57-8d58-4e935ace7b4a\") " pod="openstack/nova-cell0-conductor-0" Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.663664 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7253b71a-0900-4e57-8d58-4e935ace7b4a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7253b71a-0900-4e57-8d58-4e935ace7b4a\") " pod="openstack/nova-cell0-conductor-0" Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.664029 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7253b71a-0900-4e57-8d58-4e935ace7b4a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7253b71a-0900-4e57-8d58-4e935ace7b4a\") " pod="openstack/nova-cell0-conductor-0" Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.677042 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.680245 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpbqt\" (UniqueName: \"kubernetes.io/projected/7253b71a-0900-4e57-8d58-4e935ace7b4a-kube-api-access-xpbqt\") pod \"nova-cell0-conductor-0\" (UID: \"7253b71a-0900-4e57-8d58-4e935ace7b4a\") " pod="openstack/nova-cell0-conductor-0" Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.703580 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.760579 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31cf5b71-d287-40a5-80a6-95e490e99f1b-combined-ca-bundle\") pod \"31cf5b71-d287-40a5-80a6-95e490e99f1b\" (UID: \"31cf5b71-d287-40a5-80a6-95e490e99f1b\") " Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.760695 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31cf5b71-d287-40a5-80a6-95e490e99f1b-config-data\") pod \"31cf5b71-d287-40a5-80a6-95e490e99f1b\" (UID: \"31cf5b71-d287-40a5-80a6-95e490e99f1b\") " Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.761022 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljsct\" (UniqueName: \"kubernetes.io/projected/31cf5b71-d287-40a5-80a6-95e490e99f1b-kube-api-access-ljsct\") pod \"31cf5b71-d287-40a5-80a6-95e490e99f1b\" (UID: 
\"31cf5b71-d287-40a5-80a6-95e490e99f1b\") " Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.766285 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31cf5b71-d287-40a5-80a6-95e490e99f1b-kube-api-access-ljsct" (OuterVolumeSpecName: "kube-api-access-ljsct") pod "31cf5b71-d287-40a5-80a6-95e490e99f1b" (UID: "31cf5b71-d287-40a5-80a6-95e490e99f1b"). InnerVolumeSpecName "kube-api-access-ljsct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.804293 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31cf5b71-d287-40a5-80a6-95e490e99f1b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31cf5b71-d287-40a5-80a6-95e490e99f1b" (UID: "31cf5b71-d287-40a5-80a6-95e490e99f1b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.863468 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31cf5b71-d287-40a5-80a6-95e490e99f1b-config-data" (OuterVolumeSpecName: "config-data") pod "31cf5b71-d287-40a5-80a6-95e490e99f1b" (UID: "31cf5b71-d287-40a5-80a6-95e490e99f1b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.864299 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljsct\" (UniqueName: \"kubernetes.io/projected/31cf5b71-d287-40a5-80a6-95e490e99f1b-kube-api-access-ljsct\") on node \"crc\" DevicePath \"\"" Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.864665 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31cf5b71-d287-40a5-80a6-95e490e99f1b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:36:13 crc kubenswrapper[4861]: I0219 15:36:13.864807 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31cf5b71-d287-40a5-80a6-95e490e99f1b-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:36:13 crc kubenswrapper[4861]: E0219 15:36:13.984111 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a2177b031787bc98b3bc35f9f66446d8152e849662832fd08586343692499599" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 15:36:13 crc kubenswrapper[4861]: E0219 15:36:13.988590 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a2177b031787bc98b3bc35f9f66446d8152e849662832fd08586343692499599" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 15:36:13 crc kubenswrapper[4861]: E0219 15:36:13.990654 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a2177b031787bc98b3bc35f9f66446d8152e849662832fd08586343692499599" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 15:36:13 crc kubenswrapper[4861]: E0219 15:36:13.990748 4861 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="4fd2c176-d104-4058-9b92-db8937b2fa68" containerName="nova-scheduler-scheduler" Feb 19 15:36:14 crc kubenswrapper[4861]: I0219 15:36:14.000345 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31da7774-969b-45e2-ba07-b6dbdd3a97d7" path="/var/lib/kubelet/pods/31da7774-969b-45e2-ba07-b6dbdd3a97d7/volumes" Feb 19 15:36:14 crc kubenswrapper[4861]: W0219 15:36:14.233199 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7253b71a_0900_4e57_8d58_4e935ace7b4a.slice/crio-2897ff503795d073978d7fcd6720d0e34d9dad4766dd0f011b1f3eee62aaa645 WatchSource:0}: Error finding container 2897ff503795d073978d7fcd6720d0e34d9dad4766dd0f011b1f3eee62aaa645: Status 404 returned error can't find the container with id 2897ff503795d073978d7fcd6720d0e34d9dad4766dd0f011b1f3eee62aaa645 Feb 19 15:36:14 crc kubenswrapper[4861]: I0219 15:36:14.235821 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 15:36:14 crc kubenswrapper[4861]: I0219 15:36:14.261806 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" event={"ID":"1ba6ecf0-8541-46e2-b17e-46cc3491f870","Type":"ContainerStarted","Data":"b5065d23681ebada6b990844c1f7081ff013f7e87e0e6ba5b83cacfb267dc8c8"} Feb 19 15:36:14 crc kubenswrapper[4861]: I0219 15:36:14.264026 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" 
event={"ID":"7253b71a-0900-4e57-8d58-4e935ace7b4a","Type":"ContainerStarted","Data":"2897ff503795d073978d7fcd6720d0e34d9dad4766dd0f011b1f3eee62aaa645"} Feb 19 15:36:14 crc kubenswrapper[4861]: I0219 15:36:14.272064 4861 generic.go:334] "Generic (PLEG): container finished" podID="31cf5b71-d287-40a5-80a6-95e490e99f1b" containerID="a973163f15bc1003fe88add383a038bf20ba52e31b04c09698206dc99a541ef2" exitCode=0 Feb 19 15:36:14 crc kubenswrapper[4861]: I0219 15:36:14.272108 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"31cf5b71-d287-40a5-80a6-95e490e99f1b","Type":"ContainerDied","Data":"a973163f15bc1003fe88add383a038bf20ba52e31b04c09698206dc99a541ef2"} Feb 19 15:36:14 crc kubenswrapper[4861]: I0219 15:36:14.272133 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"31cf5b71-d287-40a5-80a6-95e490e99f1b","Type":"ContainerDied","Data":"f4172e2aea184a620ea51a4a42c322c91fbe7e3360b1212066cbd35f093759a8"} Feb 19 15:36:14 crc kubenswrapper[4861]: I0219 15:36:14.272149 4861 scope.go:117] "RemoveContainer" containerID="a973163f15bc1003fe88add383a038bf20ba52e31b04c09698206dc99a541ef2" Feb 19 15:36:14 crc kubenswrapper[4861]: I0219 15:36:14.272247 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 15:36:14 crc kubenswrapper[4861]: I0219 15:36:14.298293 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" podStartSLOduration=1.777247378 podStartE2EDuration="2.298253909s" podCreationTimestamp="2026-02-19 15:36:12 +0000 UTC" firstStartedPulling="2026-02-19 15:36:13.219554129 +0000 UTC m=+8787.880657357" lastFinishedPulling="2026-02-19 15:36:13.74056066 +0000 UTC m=+8788.401663888" observedRunningTime="2026-02-19 15:36:14.288335632 +0000 UTC m=+8788.949438880" watchObservedRunningTime="2026-02-19 15:36:14.298253909 +0000 UTC m=+8788.959357137" Feb 19 15:36:14 crc kubenswrapper[4861]: I0219 15:36:14.320775 4861 scope.go:117] "RemoveContainer" containerID="a973163f15bc1003fe88add383a038bf20ba52e31b04c09698206dc99a541ef2" Feb 19 15:36:14 crc kubenswrapper[4861]: E0219 15:36:14.322370 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a973163f15bc1003fe88add383a038bf20ba52e31b04c09698206dc99a541ef2\": container with ID starting with a973163f15bc1003fe88add383a038bf20ba52e31b04c09698206dc99a541ef2 not found: ID does not exist" containerID="a973163f15bc1003fe88add383a038bf20ba52e31b04c09698206dc99a541ef2" Feb 19 15:36:14 crc kubenswrapper[4861]: I0219 15:36:14.322428 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a973163f15bc1003fe88add383a038bf20ba52e31b04c09698206dc99a541ef2"} err="failed to get container status \"a973163f15bc1003fe88add383a038bf20ba52e31b04c09698206dc99a541ef2\": rpc error: code = NotFound desc = could not find container \"a973163f15bc1003fe88add383a038bf20ba52e31b04c09698206dc99a541ef2\": container with ID starting with a973163f15bc1003fe88add383a038bf20ba52e31b04c09698206dc99a541ef2 not found: ID does not exist" Feb 19 15:36:14 crc 
kubenswrapper[4861]: I0219 15:36:14.339638 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 15:36:14 crc kubenswrapper[4861]: I0219 15:36:14.358196 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 15:36:14 crc kubenswrapper[4861]: I0219 15:36:14.367693 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 15:36:14 crc kubenswrapper[4861]: E0219 15:36:14.368185 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31cf5b71-d287-40a5-80a6-95e490e99f1b" containerName="nova-cell1-conductor-conductor" Feb 19 15:36:14 crc kubenswrapper[4861]: I0219 15:36:14.368201 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="31cf5b71-d287-40a5-80a6-95e490e99f1b" containerName="nova-cell1-conductor-conductor" Feb 19 15:36:14 crc kubenswrapper[4861]: I0219 15:36:14.368394 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="31cf5b71-d287-40a5-80a6-95e490e99f1b" containerName="nova-cell1-conductor-conductor" Feb 19 15:36:14 crc kubenswrapper[4861]: I0219 15:36:14.372023 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 15:36:14 crc kubenswrapper[4861]: I0219 15:36:14.374827 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 15:36:14 crc kubenswrapper[4861]: I0219 15:36:14.390678 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 15:36:14 crc kubenswrapper[4861]: I0219 15:36:14.475198 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19397b58-b94c-4cbb-9026-e994604290e0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"19397b58-b94c-4cbb-9026-e994604290e0\") " pod="openstack/nova-cell1-conductor-0" Feb 19 15:36:14 crc kubenswrapper[4861]: I0219 15:36:14.475371 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19397b58-b94c-4cbb-9026-e994604290e0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"19397b58-b94c-4cbb-9026-e994604290e0\") " pod="openstack/nova-cell1-conductor-0" Feb 19 15:36:14 crc kubenswrapper[4861]: I0219 15:36:14.477988 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xg8n\" (UniqueName: \"kubernetes.io/projected/19397b58-b94c-4cbb-9026-e994604290e0-kube-api-access-8xg8n\") pod \"nova-cell1-conductor-0\" (UID: \"19397b58-b94c-4cbb-9026-e994604290e0\") " pod="openstack/nova-cell1-conductor-0" Feb 19 15:36:14 crc kubenswrapper[4861]: I0219 15:36:14.580285 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19397b58-b94c-4cbb-9026-e994604290e0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"19397b58-b94c-4cbb-9026-e994604290e0\") " pod="openstack/nova-cell1-conductor-0" Feb 19 15:36:14 crc 
kubenswrapper[4861]: I0219 15:36:14.580752 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19397b58-b94c-4cbb-9026-e994604290e0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"19397b58-b94c-4cbb-9026-e994604290e0\") " pod="openstack/nova-cell1-conductor-0" Feb 19 15:36:14 crc kubenswrapper[4861]: I0219 15:36:14.580787 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xg8n\" (UniqueName: \"kubernetes.io/projected/19397b58-b94c-4cbb-9026-e994604290e0-kube-api-access-8xg8n\") pod \"nova-cell1-conductor-0\" (UID: \"19397b58-b94c-4cbb-9026-e994604290e0\") " pod="openstack/nova-cell1-conductor-0" Feb 19 15:36:14 crc kubenswrapper[4861]: I0219 15:36:14.589357 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19397b58-b94c-4cbb-9026-e994604290e0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"19397b58-b94c-4cbb-9026-e994604290e0\") " pod="openstack/nova-cell1-conductor-0" Feb 19 15:36:14 crc kubenswrapper[4861]: I0219 15:36:14.591700 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19397b58-b94c-4cbb-9026-e994604290e0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"19397b58-b94c-4cbb-9026-e994604290e0\") " pod="openstack/nova-cell1-conductor-0" Feb 19 15:36:14 crc kubenswrapper[4861]: I0219 15:36:14.599724 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xg8n\" (UniqueName: \"kubernetes.io/projected/19397b58-b94c-4cbb-9026-e994604290e0-kube-api-access-8xg8n\") pod \"nova-cell1-conductor-0\" (UID: \"19397b58-b94c-4cbb-9026-e994604290e0\") " pod="openstack/nova-cell1-conductor-0" Feb 19 15:36:14 crc kubenswrapper[4861]: I0219 15:36:14.735702 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 15:36:15 crc kubenswrapper[4861]: I0219 15:36:15.271986 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 15:36:15 crc kubenswrapper[4861]: I0219 15:36:15.283701 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7253b71a-0900-4e57-8d58-4e935ace7b4a","Type":"ContainerStarted","Data":"43080d8c8a245cba6ce161a197bf9ef71bb32c3f32853cbf83986278c2b4cb8d"} Feb 19 15:36:15 crc kubenswrapper[4861]: I0219 15:36:15.284121 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 19 15:36:15 crc kubenswrapper[4861]: I0219 15:36:15.316954 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.316936543 podStartE2EDuration="2.316936543s" podCreationTimestamp="2026-02-19 15:36:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:36:15.307792798 +0000 UTC m=+8789.968896036" watchObservedRunningTime="2026-02-19 15:36:15.316936543 +0000 UTC m=+8789.978039771" Feb 19 15:36:15 crc kubenswrapper[4861]: I0219 15:36:15.767352 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="6a686dc0-b187-47d8-a90c-1db5eea1d4e7" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.102:8774/\": read tcp 10.217.0.2:51908->10.217.1.102:8774: read: connection reset by peer" Feb 19 15:36:15 crc kubenswrapper[4861]: I0219 15:36:15.767561 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="6a686dc0-b187-47d8-a90c-1db5eea1d4e7" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.102:8774/\": read tcp 10.217.0.2:51906->10.217.1.102:8774: read: connection reset by peer" 
Feb 19 15:36:15 crc kubenswrapper[4861]: I0219 15:36:15.992229 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31cf5b71-d287-40a5-80a6-95e490e99f1b" path="/var/lib/kubelet/pods/31cf5b71-d287-40a5-80a6-95e490e99f1b/volumes" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.035733 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="fb9f00e8-70f3-49da-a7f9-46fecf68a76d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.98:8775/\": read tcp 10.217.0.2:60836->10.217.1.98:8775: read: connection reset by peer" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.035752 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="fb9f00e8-70f3-49da-a7f9-46fecf68a76d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.98:8775/\": read tcp 10.217.0.2:60832->10.217.1.98:8775: read: connection reset by peer" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.272690 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.313539 4861 generic.go:334] "Generic (PLEG): container finished" podID="6a686dc0-b187-47d8-a90c-1db5eea1d4e7" containerID="1e39482c95424a0d7e351847d2c48e042d30656bed1da08875c082851593b506" exitCode=0 Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.313618 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6a686dc0-b187-47d8-a90c-1db5eea1d4e7","Type":"ContainerDied","Data":"1e39482c95424a0d7e351847d2c48e042d30656bed1da08875c082851593b506"} Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.313651 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6a686dc0-b187-47d8-a90c-1db5eea1d4e7","Type":"ContainerDied","Data":"109dc8ec52d99792d59e09e3a65ebe5fab6776f1142f83d181e0c814b2511280"} Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.313670 4861 scope.go:117] "RemoveContainer" containerID="1e39482c95424a0d7e351847d2c48e042d30656bed1da08875c082851593b506" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.313832 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.322718 4861 generic.go:334] "Generic (PLEG): container finished" podID="fb9f00e8-70f3-49da-a7f9-46fecf68a76d" containerID="2ee7872414ad102175614a53fce2d14b1bad996ed101f0adbb7c26850904e7e8" exitCode=0 Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.322794 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fb9f00e8-70f3-49da-a7f9-46fecf68a76d","Type":"ContainerDied","Data":"2ee7872414ad102175614a53fce2d14b1bad996ed101f0adbb7c26850904e7e8"} Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.326102 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"19397b58-b94c-4cbb-9026-e994604290e0","Type":"ContainerStarted","Data":"fa7e86281e261eb508197be90ce2c6d3642fed9f47d8562a129e30300bc49212"} Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.326133 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"19397b58-b94c-4cbb-9026-e994604290e0","Type":"ContainerStarted","Data":"e62b972e5e5290c37735441fec577c96d0a49873956f8fbeb22e0d889bd78d93"} Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.326171 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.354062 4861 scope.go:117] "RemoveContainer" containerID="c0f2b523daf180bf1973cf992a132f05c495761dfd661d1df189c080addb70a5" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.384034 4861 scope.go:117] "RemoveContainer" containerID="1e39482c95424a0d7e351847d2c48e042d30656bed1da08875c082851593b506" Feb 19 15:36:16 crc kubenswrapper[4861]: E0219 15:36:16.386030 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1e39482c95424a0d7e351847d2c48e042d30656bed1da08875c082851593b506\": container with ID starting with 1e39482c95424a0d7e351847d2c48e042d30656bed1da08875c082851593b506 not found: ID does not exist" containerID="1e39482c95424a0d7e351847d2c48e042d30656bed1da08875c082851593b506" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.386075 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e39482c95424a0d7e351847d2c48e042d30656bed1da08875c082851593b506"} err="failed to get container status \"1e39482c95424a0d7e351847d2c48e042d30656bed1da08875c082851593b506\": rpc error: code = NotFound desc = could not find container \"1e39482c95424a0d7e351847d2c48e042d30656bed1da08875c082851593b506\": container with ID starting with 1e39482c95424a0d7e351847d2c48e042d30656bed1da08875c082851593b506 not found: ID does not exist" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.386100 4861 scope.go:117] "RemoveContainer" containerID="c0f2b523daf180bf1973cf992a132f05c495761dfd661d1df189c080addb70a5" Feb 19 15:36:16 crc kubenswrapper[4861]: E0219 15:36:16.386579 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0f2b523daf180bf1973cf992a132f05c495761dfd661d1df189c080addb70a5\": container with ID starting with c0f2b523daf180bf1973cf992a132f05c495761dfd661d1df189c080addb70a5 not found: ID does not exist" containerID="c0f2b523daf180bf1973cf992a132f05c495761dfd661d1df189c080addb70a5" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.386601 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0f2b523daf180bf1973cf992a132f05c495761dfd661d1df189c080addb70a5"} err="failed to get container status \"c0f2b523daf180bf1973cf992a132f05c495761dfd661d1df189c080addb70a5\": rpc error: code = NotFound desc = could not find container \"c0f2b523daf180bf1973cf992a132f05c495761dfd661d1df189c080addb70a5\": container with ID 
starting with c0f2b523daf180bf1973cf992a132f05c495761dfd661d1df189c080addb70a5 not found: ID does not exist" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.431970 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2wxb\" (UniqueName: \"kubernetes.io/projected/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-kube-api-access-s2wxb\") pod \"6a686dc0-b187-47d8-a90c-1db5eea1d4e7\" (UID: \"6a686dc0-b187-47d8-a90c-1db5eea1d4e7\") " Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.432033 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-internal-tls-certs\") pod \"6a686dc0-b187-47d8-a90c-1db5eea1d4e7\" (UID: \"6a686dc0-b187-47d8-a90c-1db5eea1d4e7\") " Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.432113 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-config-data\") pod \"6a686dc0-b187-47d8-a90c-1db5eea1d4e7\" (UID: \"6a686dc0-b187-47d8-a90c-1db5eea1d4e7\") " Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.432191 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-logs\") pod \"6a686dc0-b187-47d8-a90c-1db5eea1d4e7\" (UID: \"6a686dc0-b187-47d8-a90c-1db5eea1d4e7\") " Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.432301 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-combined-ca-bundle\") pod \"6a686dc0-b187-47d8-a90c-1db5eea1d4e7\" (UID: \"6a686dc0-b187-47d8-a90c-1db5eea1d4e7\") " Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.432362 4861 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-public-tls-certs\") pod \"6a686dc0-b187-47d8-a90c-1db5eea1d4e7\" (UID: \"6a686dc0-b187-47d8-a90c-1db5eea1d4e7\") " Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.434222 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-logs" (OuterVolumeSpecName: "logs") pod "6a686dc0-b187-47d8-a90c-1db5eea1d4e7" (UID: "6a686dc0-b187-47d8-a90c-1db5eea1d4e7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.450913 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-kube-api-access-s2wxb" (OuterVolumeSpecName: "kube-api-access-s2wxb") pod "6a686dc0-b187-47d8-a90c-1db5eea1d4e7" (UID: "6a686dc0-b187-47d8-a90c-1db5eea1d4e7"). InnerVolumeSpecName "kube-api-access-s2wxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.474498 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a686dc0-b187-47d8-a90c-1db5eea1d4e7" (UID: "6a686dc0-b187-47d8-a90c-1db5eea1d4e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.494810 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-config-data" (OuterVolumeSpecName: "config-data") pod "6a686dc0-b187-47d8-a90c-1db5eea1d4e7" (UID: "6a686dc0-b187-47d8-a90c-1db5eea1d4e7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.514061 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6a686dc0-b187-47d8-a90c-1db5eea1d4e7" (UID: "6a686dc0-b187-47d8-a90c-1db5eea1d4e7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.525406 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.525384421 podStartE2EDuration="2.525384421s" podCreationTimestamp="2026-02-19 15:36:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:36:16.373042765 +0000 UTC m=+8791.034146003" watchObservedRunningTime="2026-02-19 15:36:16.525384421 +0000 UTC m=+8791.186487639" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.532300 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hfnfm"] Feb 19 15:36:16 crc kubenswrapper[4861]: E0219 15:36:16.532863 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a686dc0-b187-47d8-a90c-1db5eea1d4e7" containerName="nova-api-log" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.532884 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a686dc0-b187-47d8-a90c-1db5eea1d4e7" containerName="nova-api-log" Feb 19 15:36:16 crc kubenswrapper[4861]: E0219 15:36:16.532951 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a686dc0-b187-47d8-a90c-1db5eea1d4e7" containerName="nova-api-api" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.532960 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a686dc0-b187-47d8-a90c-1db5eea1d4e7" 
containerName="nova-api-api" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.533221 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a686dc0-b187-47d8-a90c-1db5eea1d4e7" containerName="nova-api-api" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.538449 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2wxb\" (UniqueName: \"kubernetes.io/projected/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-kube-api-access-s2wxb\") on node \"crc\" DevicePath \"\"" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.538484 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.538495 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-logs\") on node \"crc\" DevicePath \"\"" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.538503 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.538512 4861 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.548527 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a686dc0-b187-47d8-a90c-1db5eea1d4e7" containerName="nova-api-log" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.550636 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hfnfm" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.554977 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hfnfm"] Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.556996 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6a686dc0-b187-47d8-a90c-1db5eea1d4e7" (UID: "6a686dc0-b187-47d8-a90c-1db5eea1d4e7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.561723 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.641773 4861 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a686dc0-b187-47d8-a90c-1db5eea1d4e7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.669795 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.695097 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.707836 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 15:36:16 crc kubenswrapper[4861]: E0219 15:36:16.708260 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb9f00e8-70f3-49da-a7f9-46fecf68a76d" containerName="nova-metadata-log" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.708273 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb9f00e8-70f3-49da-a7f9-46fecf68a76d" containerName="nova-metadata-log" Feb 19 15:36:16 
crc kubenswrapper[4861]: E0219 15:36:16.708293 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb9f00e8-70f3-49da-a7f9-46fecf68a76d" containerName="nova-metadata-metadata" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.708300 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb9f00e8-70f3-49da-a7f9-46fecf68a76d" containerName="nova-metadata-metadata" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.708631 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb9f00e8-70f3-49da-a7f9-46fecf68a76d" containerName="nova-metadata-metadata" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.708658 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb9f00e8-70f3-49da-a7f9-46fecf68a76d" containerName="nova-metadata-log" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.710093 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.714124 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.714137 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.714302 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.718010 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.744265 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb9f00e8-70f3-49da-a7f9-46fecf68a76d-nova-metadata-tls-certs\") pod \"fb9f00e8-70f3-49da-a7f9-46fecf68a76d\" (UID: \"fb9f00e8-70f3-49da-a7f9-46fecf68a76d\") " Feb 19 
15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.744568 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb9f00e8-70f3-49da-a7f9-46fecf68a76d-logs\") pod \"fb9f00e8-70f3-49da-a7f9-46fecf68a76d\" (UID: \"fb9f00e8-70f3-49da-a7f9-46fecf68a76d\") " Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.744802 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhxj9\" (UniqueName: \"kubernetes.io/projected/fb9f00e8-70f3-49da-a7f9-46fecf68a76d-kube-api-access-zhxj9\") pod \"fb9f00e8-70f3-49da-a7f9-46fecf68a76d\" (UID: \"fb9f00e8-70f3-49da-a7f9-46fecf68a76d\") " Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.744907 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb9f00e8-70f3-49da-a7f9-46fecf68a76d-combined-ca-bundle\") pod \"fb9f00e8-70f3-49da-a7f9-46fecf68a76d\" (UID: \"fb9f00e8-70f3-49da-a7f9-46fecf68a76d\") " Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.745127 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb9f00e8-70f3-49da-a7f9-46fecf68a76d-config-data\") pod \"fb9f00e8-70f3-49da-a7f9-46fecf68a76d\" (UID: \"fb9f00e8-70f3-49da-a7f9-46fecf68a76d\") " Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.745937 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb9f00e8-70f3-49da-a7f9-46fecf68a76d-logs" (OuterVolumeSpecName: "logs") pod "fb9f00e8-70f3-49da-a7f9-46fecf68a76d" (UID: "fb9f00e8-70f3-49da-a7f9-46fecf68a76d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.749365 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34d5770e-d9f2-4b84-9700-f1205577fd3d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"34d5770e-d9f2-4b84-9700-f1205577fd3d\") " pod="openstack/nova-api-0" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.749492 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34d5770e-d9f2-4b84-9700-f1205577fd3d-config-data\") pod \"nova-api-0\" (UID: \"34d5770e-d9f2-4b84-9700-f1205577fd3d\") " pod="openstack/nova-api-0" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.749532 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slsbl\" (UniqueName: \"kubernetes.io/projected/012ccca4-61b0-4a0b-adc5-52a4705e3fb2-kube-api-access-slsbl\") pod \"redhat-marketplace-hfnfm\" (UID: \"012ccca4-61b0-4a0b-adc5-52a4705e3fb2\") " pod="openshift-marketplace/redhat-marketplace-hfnfm" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.749567 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34d5770e-d9f2-4b84-9700-f1205577fd3d-logs\") pod \"nova-api-0\" (UID: \"34d5770e-d9f2-4b84-9700-f1205577fd3d\") " pod="openstack/nova-api-0" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.749650 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/012ccca4-61b0-4a0b-adc5-52a4705e3fb2-utilities\") pod \"redhat-marketplace-hfnfm\" (UID: \"012ccca4-61b0-4a0b-adc5-52a4705e3fb2\") " pod="openshift-marketplace/redhat-marketplace-hfnfm" Feb 19 15:36:16 crc 
kubenswrapper[4861]: I0219 15:36:16.749683 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/012ccca4-61b0-4a0b-adc5-52a4705e3fb2-catalog-content\") pod \"redhat-marketplace-hfnfm\" (UID: \"012ccca4-61b0-4a0b-adc5-52a4705e3fb2\") " pod="openshift-marketplace/redhat-marketplace-hfnfm" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.749732 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34d5770e-d9f2-4b84-9700-f1205577fd3d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"34d5770e-d9f2-4b84-9700-f1205577fd3d\") " pod="openstack/nova-api-0" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.749842 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4l4k\" (UniqueName: \"kubernetes.io/projected/34d5770e-d9f2-4b84-9700-f1205577fd3d-kube-api-access-k4l4k\") pod \"nova-api-0\" (UID: \"34d5770e-d9f2-4b84-9700-f1205577fd3d\") " pod="openstack/nova-api-0" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.749941 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34d5770e-d9f2-4b84-9700-f1205577fd3d-public-tls-certs\") pod \"nova-api-0\" (UID: \"34d5770e-d9f2-4b84-9700-f1205577fd3d\") " pod="openstack/nova-api-0" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.749994 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb9f00e8-70f3-49da-a7f9-46fecf68a76d-logs\") on node \"crc\" DevicePath \"\"" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.752790 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/fb9f00e8-70f3-49da-a7f9-46fecf68a76d-kube-api-access-zhxj9" (OuterVolumeSpecName: "kube-api-access-zhxj9") pod "fb9f00e8-70f3-49da-a7f9-46fecf68a76d" (UID: "fb9f00e8-70f3-49da-a7f9-46fecf68a76d"). InnerVolumeSpecName "kube-api-access-zhxj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.791304 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb9f00e8-70f3-49da-a7f9-46fecf68a76d-config-data" (OuterVolumeSpecName: "config-data") pod "fb9f00e8-70f3-49da-a7f9-46fecf68a76d" (UID: "fb9f00e8-70f3-49da-a7f9-46fecf68a76d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.833158 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb9f00e8-70f3-49da-a7f9-46fecf68a76d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb9f00e8-70f3-49da-a7f9-46fecf68a76d" (UID: "fb9f00e8-70f3-49da-a7f9-46fecf68a76d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.853110 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34d5770e-d9f2-4b84-9700-f1205577fd3d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"34d5770e-d9f2-4b84-9700-f1205577fd3d\") " pod="openstack/nova-api-0" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.853843 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4l4k\" (UniqueName: \"kubernetes.io/projected/34d5770e-d9f2-4b84-9700-f1205577fd3d-kube-api-access-k4l4k\") pod \"nova-api-0\" (UID: \"34d5770e-d9f2-4b84-9700-f1205577fd3d\") " pod="openstack/nova-api-0" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.853913 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34d5770e-d9f2-4b84-9700-f1205577fd3d-public-tls-certs\") pod \"nova-api-0\" (UID: \"34d5770e-d9f2-4b84-9700-f1205577fd3d\") " pod="openstack/nova-api-0" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.853949 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34d5770e-d9f2-4b84-9700-f1205577fd3d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"34d5770e-d9f2-4b84-9700-f1205577fd3d\") " pod="openstack/nova-api-0" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.853993 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34d5770e-d9f2-4b84-9700-f1205577fd3d-config-data\") pod \"nova-api-0\" (UID: \"34d5770e-d9f2-4b84-9700-f1205577fd3d\") " pod="openstack/nova-api-0" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.854022 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-slsbl\" (UniqueName: \"kubernetes.io/projected/012ccca4-61b0-4a0b-adc5-52a4705e3fb2-kube-api-access-slsbl\") pod \"redhat-marketplace-hfnfm\" (UID: \"012ccca4-61b0-4a0b-adc5-52a4705e3fb2\") " pod="openshift-marketplace/redhat-marketplace-hfnfm" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.854050 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34d5770e-d9f2-4b84-9700-f1205577fd3d-logs\") pod \"nova-api-0\" (UID: \"34d5770e-d9f2-4b84-9700-f1205577fd3d\") " pod="openstack/nova-api-0" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.854124 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/012ccca4-61b0-4a0b-adc5-52a4705e3fb2-utilities\") pod \"redhat-marketplace-hfnfm\" (UID: \"012ccca4-61b0-4a0b-adc5-52a4705e3fb2\") " pod="openshift-marketplace/redhat-marketplace-hfnfm" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.854153 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/012ccca4-61b0-4a0b-adc5-52a4705e3fb2-catalog-content\") pod \"redhat-marketplace-hfnfm\" (UID: \"012ccca4-61b0-4a0b-adc5-52a4705e3fb2\") " pod="openshift-marketplace/redhat-marketplace-hfnfm" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.854219 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb9f00e8-70f3-49da-a7f9-46fecf68a76d-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.854231 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhxj9\" (UniqueName: \"kubernetes.io/projected/fb9f00e8-70f3-49da-a7f9-46fecf68a76d-kube-api-access-zhxj9\") on node \"crc\" DevicePath \"\"" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.854255 4861 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb9f00e8-70f3-49da-a7f9-46fecf68a76d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.854639 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/012ccca4-61b0-4a0b-adc5-52a4705e3fb2-catalog-content\") pod \"redhat-marketplace-hfnfm\" (UID: \"012ccca4-61b0-4a0b-adc5-52a4705e3fb2\") " pod="openshift-marketplace/redhat-marketplace-hfnfm" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.855767 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34d5770e-d9f2-4b84-9700-f1205577fd3d-logs\") pod \"nova-api-0\" (UID: \"34d5770e-d9f2-4b84-9700-f1205577fd3d\") " pod="openstack/nova-api-0" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.856002 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/012ccca4-61b0-4a0b-adc5-52a4705e3fb2-utilities\") pod \"redhat-marketplace-hfnfm\" (UID: \"012ccca4-61b0-4a0b-adc5-52a4705e3fb2\") " pod="openshift-marketplace/redhat-marketplace-hfnfm" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.856766 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb9f00e8-70f3-49da-a7f9-46fecf68a76d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "fb9f00e8-70f3-49da-a7f9-46fecf68a76d" (UID: "fb9f00e8-70f3-49da-a7f9-46fecf68a76d"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.857152 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34d5770e-d9f2-4b84-9700-f1205577fd3d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"34d5770e-d9f2-4b84-9700-f1205577fd3d\") " pod="openstack/nova-api-0" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.859674 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34d5770e-d9f2-4b84-9700-f1205577fd3d-public-tls-certs\") pod \"nova-api-0\" (UID: \"34d5770e-d9f2-4b84-9700-f1205577fd3d\") " pod="openstack/nova-api-0" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.863546 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34d5770e-d9f2-4b84-9700-f1205577fd3d-config-data\") pod \"nova-api-0\" (UID: \"34d5770e-d9f2-4b84-9700-f1205577fd3d\") " pod="openstack/nova-api-0" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.863908 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34d5770e-d9f2-4b84-9700-f1205577fd3d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"34d5770e-d9f2-4b84-9700-f1205577fd3d\") " pod="openstack/nova-api-0" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.875173 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4l4k\" (UniqueName: \"kubernetes.io/projected/34d5770e-d9f2-4b84-9700-f1205577fd3d-kube-api-access-k4l4k\") pod \"nova-api-0\" (UID: \"34d5770e-d9f2-4b84-9700-f1205577fd3d\") " pod="openstack/nova-api-0" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.879052 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slsbl\" (UniqueName: 
\"kubernetes.io/projected/012ccca4-61b0-4a0b-adc5-52a4705e3fb2-kube-api-access-slsbl\") pod \"redhat-marketplace-hfnfm\" (UID: \"012ccca4-61b0-4a0b-adc5-52a4705e3fb2\") " pod="openshift-marketplace/redhat-marketplace-hfnfm" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.891799 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hfnfm" Feb 19 15:36:16 crc kubenswrapper[4861]: E0219 15:36:16.942874 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a686dc0_b187_47d8_a90c_1db5eea1d4e7.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a686dc0_b187_47d8_a90c_1db5eea1d4e7.slice/crio-109dc8ec52d99792d59e09e3a65ebe5fab6776f1142f83d181e0c814b2511280\": RecentStats: unable to find data in memory cache]" Feb 19 15:36:16 crc kubenswrapper[4861]: I0219 15:36:16.956349 4861 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb9f00e8-70f3-49da-a7f9-46fecf68a76d-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 15:36:17 crc kubenswrapper[4861]: I0219 15:36:17.031562 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 15:36:17 crc kubenswrapper[4861]: I0219 15:36:17.348003 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 15:36:17 crc kubenswrapper[4861]: I0219 15:36:17.348512 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fb9f00e8-70f3-49da-a7f9-46fecf68a76d","Type":"ContainerDied","Data":"6836e1f469b0cc2c7dfa913b241354c5d53862492093ad939100a34b5675f714"} Feb 19 15:36:17 crc kubenswrapper[4861]: I0219 15:36:17.348551 4861 scope.go:117] "RemoveContainer" containerID="2ee7872414ad102175614a53fce2d14b1bad996ed101f0adbb7c26850904e7e8" Feb 19 15:36:17 crc kubenswrapper[4861]: I0219 15:36:17.414554 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 15:36:17 crc kubenswrapper[4861]: I0219 15:36:17.421165 4861 scope.go:117] "RemoveContainer" containerID="0ae430aca406b076ff240e8e321d34e9f036978501f0873df65e18a181e69a0d" Feb 19 15:36:17 crc kubenswrapper[4861]: I0219 15:36:17.436881 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 15:36:17 crc kubenswrapper[4861]: I0219 15:36:17.457619 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 15:36:17 crc kubenswrapper[4861]: I0219 15:36:17.474884 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 15:36:17 crc kubenswrapper[4861]: I0219 15:36:17.475619 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 15:36:17 crc kubenswrapper[4861]: I0219 15:36:17.480890 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 15:36:17 crc kubenswrapper[4861]: I0219 15:36:17.481137 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 15:36:17 crc kubenswrapper[4861]: I0219 15:36:17.527620 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hfnfm"] Feb 19 15:36:17 crc kubenswrapper[4861]: W0219 15:36:17.561694 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod012ccca4_61b0_4a0b_adc5_52a4705e3fb2.slice/crio-aaf95feee47755c21a5c6acce6d9c7362699ad949a40fb16fc7862b6a3776db2 WatchSource:0}: Error finding container aaf95feee47755c21a5c6acce6d9c7362699ad949a40fb16fc7862b6a3776db2: Status 404 returned error can't find the container with id aaf95feee47755c21a5c6acce6d9c7362699ad949a40fb16fc7862b6a3776db2 Feb 19 15:36:17 crc kubenswrapper[4861]: I0219 15:36:17.580073 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0490c4d-3f19-4ef9-a74d-db926199c4c2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f0490c4d-3f19-4ef9-a74d-db926199c4c2\") " pod="openstack/nova-metadata-0" Feb 19 15:36:17 crc kubenswrapper[4861]: I0219 15:36:17.580196 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0490c4d-3f19-4ef9-a74d-db926199c4c2-logs\") pod \"nova-metadata-0\" (UID: \"f0490c4d-3f19-4ef9-a74d-db926199c4c2\") " 
pod="openstack/nova-metadata-0" Feb 19 15:36:17 crc kubenswrapper[4861]: I0219 15:36:17.580246 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0490c4d-3f19-4ef9-a74d-db926199c4c2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f0490c4d-3f19-4ef9-a74d-db926199c4c2\") " pod="openstack/nova-metadata-0" Feb 19 15:36:17 crc kubenswrapper[4861]: I0219 15:36:17.580276 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0490c4d-3f19-4ef9-a74d-db926199c4c2-config-data\") pod \"nova-metadata-0\" (UID: \"f0490c4d-3f19-4ef9-a74d-db926199c4c2\") " pod="openstack/nova-metadata-0" Feb 19 15:36:17 crc kubenswrapper[4861]: I0219 15:36:17.580432 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5cw5\" (UniqueName: \"kubernetes.io/projected/f0490c4d-3f19-4ef9-a74d-db926199c4c2-kube-api-access-h5cw5\") pod \"nova-metadata-0\" (UID: \"f0490c4d-3f19-4ef9-a74d-db926199c4c2\") " pod="openstack/nova-metadata-0" Feb 19 15:36:17 crc kubenswrapper[4861]: I0219 15:36:17.667105 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 15:36:17 crc kubenswrapper[4861]: W0219 15:36:17.669336 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34d5770e_d9f2_4b84_9700_f1205577fd3d.slice/crio-a2370162b799d6fa8123756cee498c65d38154f9473622f281bf9779a4b12c5b WatchSource:0}: Error finding container a2370162b799d6fa8123756cee498c65d38154f9473622f281bf9779a4b12c5b: Status 404 returned error can't find the container with id a2370162b799d6fa8123756cee498c65d38154f9473622f281bf9779a4b12c5b Feb 19 15:36:17 crc kubenswrapper[4861]: I0219 15:36:17.682948 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0490c4d-3f19-4ef9-a74d-db926199c4c2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f0490c4d-3f19-4ef9-a74d-db926199c4c2\") " pod="openstack/nova-metadata-0" Feb 19 15:36:17 crc kubenswrapper[4861]: I0219 15:36:17.683073 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0490c4d-3f19-4ef9-a74d-db926199c4c2-logs\") pod \"nova-metadata-0\" (UID: \"f0490c4d-3f19-4ef9-a74d-db926199c4c2\") " pod="openstack/nova-metadata-0" Feb 19 15:36:17 crc kubenswrapper[4861]: I0219 15:36:17.683125 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0490c4d-3f19-4ef9-a74d-db926199c4c2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f0490c4d-3f19-4ef9-a74d-db926199c4c2\") " pod="openstack/nova-metadata-0" Feb 19 15:36:17 crc kubenswrapper[4861]: I0219 15:36:17.683151 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0490c4d-3f19-4ef9-a74d-db926199c4c2-config-data\") pod \"nova-metadata-0\" (UID: \"f0490c4d-3f19-4ef9-a74d-db926199c4c2\") " pod="openstack/nova-metadata-0" Feb 19 15:36:17 crc kubenswrapper[4861]: I0219 15:36:17.683238 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5cw5\" (UniqueName: \"kubernetes.io/projected/f0490c4d-3f19-4ef9-a74d-db926199c4c2-kube-api-access-h5cw5\") pod \"nova-metadata-0\" (UID: \"f0490c4d-3f19-4ef9-a74d-db926199c4c2\") " pod="openstack/nova-metadata-0" Feb 19 15:36:17 crc kubenswrapper[4861]: I0219 15:36:17.687919 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0490c4d-3f19-4ef9-a74d-db926199c4c2-logs\") pod \"nova-metadata-0\" (UID: 
\"f0490c4d-3f19-4ef9-a74d-db926199c4c2\") " pod="openstack/nova-metadata-0" Feb 19 15:36:17 crc kubenswrapper[4861]: I0219 15:36:17.690792 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0490c4d-3f19-4ef9-a74d-db926199c4c2-config-data\") pod \"nova-metadata-0\" (UID: \"f0490c4d-3f19-4ef9-a74d-db926199c4c2\") " pod="openstack/nova-metadata-0" Feb 19 15:36:17 crc kubenswrapper[4861]: I0219 15:36:17.691239 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0490c4d-3f19-4ef9-a74d-db926199c4c2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f0490c4d-3f19-4ef9-a74d-db926199c4c2\") " pod="openstack/nova-metadata-0" Feb 19 15:36:17 crc kubenswrapper[4861]: I0219 15:36:17.691288 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0490c4d-3f19-4ef9-a74d-db926199c4c2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f0490c4d-3f19-4ef9-a74d-db926199c4c2\") " pod="openstack/nova-metadata-0" Feb 19 15:36:17 crc kubenswrapper[4861]: I0219 15:36:17.722272 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5cw5\" (UniqueName: \"kubernetes.io/projected/f0490c4d-3f19-4ef9-a74d-db926199c4c2-kube-api-access-h5cw5\") pod \"nova-metadata-0\" (UID: \"f0490c4d-3f19-4ef9-a74d-db926199c4c2\") " pod="openstack/nova-metadata-0" Feb 19 15:36:17 crc kubenswrapper[4861]: I0219 15:36:17.806282 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 15:36:18 crc kubenswrapper[4861]: I0219 15:36:18.003874 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a686dc0-b187-47d8-a90c-1db5eea1d4e7" path="/var/lib/kubelet/pods/6a686dc0-b187-47d8-a90c-1db5eea1d4e7/volumes" Feb 19 15:36:18 crc kubenswrapper[4861]: I0219 15:36:18.006107 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb9f00e8-70f3-49da-a7f9-46fecf68a76d" path="/var/lib/kubelet/pods/fb9f00e8-70f3-49da-a7f9-46fecf68a76d/volumes" Feb 19 15:36:18 crc kubenswrapper[4861]: I0219 15:36:18.332321 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 15:36:18 crc kubenswrapper[4861]: I0219 15:36:18.367394 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"34d5770e-d9f2-4b84-9700-f1205577fd3d","Type":"ContainerStarted","Data":"146b14c54b7f503a1893c7e56f08fe27724b0e94be92a1e973ad785c7c817cb9"} Feb 19 15:36:18 crc kubenswrapper[4861]: I0219 15:36:18.367461 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"34d5770e-d9f2-4b84-9700-f1205577fd3d","Type":"ContainerStarted","Data":"339dd6db7a3985f42271e004e1dc58858583612402a81a2e5ed50156e1341da6"} Feb 19 15:36:18 crc kubenswrapper[4861]: I0219 15:36:18.367474 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"34d5770e-d9f2-4b84-9700-f1205577fd3d","Type":"ContainerStarted","Data":"a2370162b799d6fa8123756cee498c65d38154f9473622f281bf9779a4b12c5b"} Feb 19 15:36:18 crc kubenswrapper[4861]: I0219 15:36:18.375671 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f0490c4d-3f19-4ef9-a74d-db926199c4c2","Type":"ContainerStarted","Data":"28bc8a742707560e6f1166d38c4b7de5bb0b0598341fbc04cfcc0c3606c181af"} Feb 19 15:36:18 crc kubenswrapper[4861]: I0219 15:36:18.378978 4861 generic.go:334] "Generic 
(PLEG): container finished" podID="012ccca4-61b0-4a0b-adc5-52a4705e3fb2" containerID="c71139e8d2b5513c00e0bf2239836c2df353e40b9eb093698e564109873548ff" exitCode=0 Feb 19 15:36:18 crc kubenswrapper[4861]: I0219 15:36:18.379085 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hfnfm" event={"ID":"012ccca4-61b0-4a0b-adc5-52a4705e3fb2","Type":"ContainerDied","Data":"c71139e8d2b5513c00e0bf2239836c2df353e40b9eb093698e564109873548ff"} Feb 19 15:36:18 crc kubenswrapper[4861]: I0219 15:36:18.379130 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hfnfm" event={"ID":"012ccca4-61b0-4a0b-adc5-52a4705e3fb2","Type":"ContainerStarted","Data":"aaf95feee47755c21a5c6acce6d9c7362699ad949a40fb16fc7862b6a3776db2"} Feb 19 15:36:18 crc kubenswrapper[4861]: I0219 15:36:18.389358 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.389334968 podStartE2EDuration="2.389334968s" podCreationTimestamp="2026-02-19 15:36:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:36:18.388924007 +0000 UTC m=+8793.050027225" watchObservedRunningTime="2026-02-19 15:36:18.389334968 +0000 UTC m=+8793.050438196" Feb 19 15:36:18 crc kubenswrapper[4861]: E0219 15:36:18.984193 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a2177b031787bc98b3bc35f9f66446d8152e849662832fd08586343692499599" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 15:36:18 crc kubenswrapper[4861]: E0219 15:36:18.986849 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , 
exit code -1" containerID="a2177b031787bc98b3bc35f9f66446d8152e849662832fd08586343692499599" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 15:36:18 crc kubenswrapper[4861]: E0219 15:36:18.990747 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a2177b031787bc98b3bc35f9f66446d8152e849662832fd08586343692499599" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 15:36:18 crc kubenswrapper[4861]: E0219 15:36:18.990829 4861 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="4fd2c176-d104-4058-9b92-db8937b2fa68" containerName="nova-scheduler-scheduler" Feb 19 15:36:19 crc kubenswrapper[4861]: I0219 15:36:19.394302 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f0490c4d-3f19-4ef9-a74d-db926199c4c2","Type":"ContainerStarted","Data":"ebbc195971bf502651bbc26be7dde20cc4e780ea34a354a5c9babfc2cecc956f"} Feb 19 15:36:19 crc kubenswrapper[4861]: I0219 15:36:19.394689 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f0490c4d-3f19-4ef9-a74d-db926199c4c2","Type":"ContainerStarted","Data":"d743d40ab9950e1b19211c53e4d7977a2f2ddc1059385df26ea7d1c223dc802e"} Feb 19 15:36:19 crc kubenswrapper[4861]: I0219 15:36:19.397905 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hfnfm" event={"ID":"012ccca4-61b0-4a0b-adc5-52a4705e3fb2","Type":"ContainerStarted","Data":"3a626576c0ba428fd8bab46992cbee4e411ae0fbe1b9bfcf4b31c4745014506c"} Feb 19 15:36:19 crc kubenswrapper[4861]: I0219 15:36:19.419306 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-metadata-0" podStartSLOduration=2.419288536 podStartE2EDuration="2.419288536s" podCreationTimestamp="2026-02-19 15:36:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:36:19.412062322 +0000 UTC m=+8794.073165550" watchObservedRunningTime="2026-02-19 15:36:19.419288536 +0000 UTC m=+8794.080391754" Feb 19 15:36:20 crc kubenswrapper[4861]: I0219 15:36:20.415789 4861 generic.go:334] "Generic (PLEG): container finished" podID="012ccca4-61b0-4a0b-adc5-52a4705e3fb2" containerID="3a626576c0ba428fd8bab46992cbee4e411ae0fbe1b9bfcf4b31c4745014506c" exitCode=0 Feb 19 15:36:20 crc kubenswrapper[4861]: I0219 15:36:20.415872 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hfnfm" event={"ID":"012ccca4-61b0-4a0b-adc5-52a4705e3fb2","Type":"ContainerDied","Data":"3a626576c0ba428fd8bab46992cbee4e411ae0fbe1b9bfcf4b31c4745014506c"} Feb 19 15:36:21 crc kubenswrapper[4861]: I0219 15:36:21.429477 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hfnfm" event={"ID":"012ccca4-61b0-4a0b-adc5-52a4705e3fb2","Type":"ContainerStarted","Data":"d21b62b97c8524a44f854290448045f6a35a0363a4d692e31d1d170b0bf71f11"} Feb 19 15:36:21 crc kubenswrapper[4861]: I0219 15:36:21.476449 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hfnfm" podStartSLOduration=2.996972229 podStartE2EDuration="5.476397797s" podCreationTimestamp="2026-02-19 15:36:16 +0000 UTC" firstStartedPulling="2026-02-19 15:36:18.385596168 +0000 UTC m=+8793.046699396" lastFinishedPulling="2026-02-19 15:36:20.865021726 +0000 UTC m=+8795.526124964" observedRunningTime="2026-02-19 15:36:21.457798747 +0000 UTC m=+8796.118901975" watchObservedRunningTime="2026-02-19 15:36:21.476397797 +0000 UTC m=+8796.137501065" Feb 19 15:36:22 crc kubenswrapper[4861]: 
I0219 15:36:22.807391 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 15:36:22 crc kubenswrapper[4861]: I0219 15:36:22.807869 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 15:36:23 crc kubenswrapper[4861]: I0219 15:36:23.377242 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 15:36:23 crc kubenswrapper[4861]: I0219 15:36:23.419060 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5r6d\" (UniqueName: \"kubernetes.io/projected/4fd2c176-d104-4058-9b92-db8937b2fa68-kube-api-access-b5r6d\") pod \"4fd2c176-d104-4058-9b92-db8937b2fa68\" (UID: \"4fd2c176-d104-4058-9b92-db8937b2fa68\") " Feb 19 15:36:23 crc kubenswrapper[4861]: I0219 15:36:23.419180 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fd2c176-d104-4058-9b92-db8937b2fa68-combined-ca-bundle\") pod \"4fd2c176-d104-4058-9b92-db8937b2fa68\" (UID: \"4fd2c176-d104-4058-9b92-db8937b2fa68\") " Feb 19 15:36:23 crc kubenswrapper[4861]: I0219 15:36:23.419410 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fd2c176-d104-4058-9b92-db8937b2fa68-config-data\") pod \"4fd2c176-d104-4058-9b92-db8937b2fa68\" (UID: \"4fd2c176-d104-4058-9b92-db8937b2fa68\") " Feb 19 15:36:23 crc kubenswrapper[4861]: I0219 15:36:23.429804 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fd2c176-d104-4058-9b92-db8937b2fa68-kube-api-access-b5r6d" (OuterVolumeSpecName: "kube-api-access-b5r6d") pod "4fd2c176-d104-4058-9b92-db8937b2fa68" (UID: "4fd2c176-d104-4058-9b92-db8937b2fa68"). InnerVolumeSpecName "kube-api-access-b5r6d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:36:23 crc kubenswrapper[4861]: I0219 15:36:23.453639 4861 generic.go:334] "Generic (PLEG): container finished" podID="4fd2c176-d104-4058-9b92-db8937b2fa68" containerID="a2177b031787bc98b3bc35f9f66446d8152e849662832fd08586343692499599" exitCode=0 Feb 19 15:36:23 crc kubenswrapper[4861]: I0219 15:36:23.453694 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4fd2c176-d104-4058-9b92-db8937b2fa68","Type":"ContainerDied","Data":"a2177b031787bc98b3bc35f9f66446d8152e849662832fd08586343692499599"} Feb 19 15:36:23 crc kubenswrapper[4861]: I0219 15:36:23.453724 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4fd2c176-d104-4058-9b92-db8937b2fa68","Type":"ContainerDied","Data":"725cbd0e4def07ed88a7337b1b741da9c0059d9a1b46457635898ef61c518e02"} Feb 19 15:36:23 crc kubenswrapper[4861]: I0219 15:36:23.453755 4861 scope.go:117] "RemoveContainer" containerID="a2177b031787bc98b3bc35f9f66446d8152e849662832fd08586343692499599" Feb 19 15:36:23 crc kubenswrapper[4861]: I0219 15:36:23.453900 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 15:36:23 crc kubenswrapper[4861]: I0219 15:36:23.476406 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fd2c176-d104-4058-9b92-db8937b2fa68-config-data" (OuterVolumeSpecName: "config-data") pod "4fd2c176-d104-4058-9b92-db8937b2fa68" (UID: "4fd2c176-d104-4058-9b92-db8937b2fa68"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:36:23 crc kubenswrapper[4861]: I0219 15:36:23.481477 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fd2c176-d104-4058-9b92-db8937b2fa68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4fd2c176-d104-4058-9b92-db8937b2fa68" (UID: "4fd2c176-d104-4058-9b92-db8937b2fa68"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:36:23 crc kubenswrapper[4861]: I0219 15:36:23.521852 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5r6d\" (UniqueName: \"kubernetes.io/projected/4fd2c176-d104-4058-9b92-db8937b2fa68-kube-api-access-b5r6d\") on node \"crc\" DevicePath \"\"" Feb 19 15:36:23 crc kubenswrapper[4861]: I0219 15:36:23.521886 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fd2c176-d104-4058-9b92-db8937b2fa68-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:36:23 crc kubenswrapper[4861]: I0219 15:36:23.521895 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fd2c176-d104-4058-9b92-db8937b2fa68-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:36:23 crc kubenswrapper[4861]: I0219 15:36:23.556509 4861 scope.go:117] "RemoveContainer" containerID="a2177b031787bc98b3bc35f9f66446d8152e849662832fd08586343692499599" Feb 19 15:36:23 crc kubenswrapper[4861]: E0219 15:36:23.557002 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2177b031787bc98b3bc35f9f66446d8152e849662832fd08586343692499599\": container with ID starting with a2177b031787bc98b3bc35f9f66446d8152e849662832fd08586343692499599 not found: ID does not exist" containerID="a2177b031787bc98b3bc35f9f66446d8152e849662832fd08586343692499599" Feb 19 15:36:23 crc kubenswrapper[4861]: 
I0219 15:36:23.557070 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2177b031787bc98b3bc35f9f66446d8152e849662832fd08586343692499599"} err="failed to get container status \"a2177b031787bc98b3bc35f9f66446d8152e849662832fd08586343692499599\": rpc error: code = NotFound desc = could not find container \"a2177b031787bc98b3bc35f9f66446d8152e849662832fd08586343692499599\": container with ID starting with a2177b031787bc98b3bc35f9f66446d8152e849662832fd08586343692499599 not found: ID does not exist" Feb 19 15:36:23 crc kubenswrapper[4861]: I0219 15:36:23.744359 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 19 15:36:23 crc kubenswrapper[4861]: I0219 15:36:23.812907 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 15:36:23 crc kubenswrapper[4861]: I0219 15:36:23.825385 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 15:36:23 crc kubenswrapper[4861]: I0219 15:36:23.839540 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 15:36:23 crc kubenswrapper[4861]: E0219 15:36:23.840118 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fd2c176-d104-4058-9b92-db8937b2fa68" containerName="nova-scheduler-scheduler" Feb 19 15:36:23 crc kubenswrapper[4861]: I0219 15:36:23.840135 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fd2c176-d104-4058-9b92-db8937b2fa68" containerName="nova-scheduler-scheduler" Feb 19 15:36:23 crc kubenswrapper[4861]: I0219 15:36:23.840407 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fd2c176-d104-4058-9b92-db8937b2fa68" containerName="nova-scheduler-scheduler" Feb 19 15:36:23 crc kubenswrapper[4861]: I0219 15:36:23.841382 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 15:36:23 crc kubenswrapper[4861]: I0219 15:36:23.848609 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 15:36:23 crc kubenswrapper[4861]: I0219 15:36:23.849505 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 15:36:23 crc kubenswrapper[4861]: I0219 15:36:23.930141 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m95zt\" (UniqueName: \"kubernetes.io/projected/141cecd9-d39a-4ba9-99a4-cb2befa9571c-kube-api-access-m95zt\") pod \"nova-scheduler-0\" (UID: \"141cecd9-d39a-4ba9-99a4-cb2befa9571c\") " pod="openstack/nova-scheduler-0" Feb 19 15:36:23 crc kubenswrapper[4861]: I0219 15:36:23.930715 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141cecd9-d39a-4ba9-99a4-cb2befa9571c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"141cecd9-d39a-4ba9-99a4-cb2befa9571c\") " pod="openstack/nova-scheduler-0" Feb 19 15:36:23 crc kubenswrapper[4861]: I0219 15:36:23.930787 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141cecd9-d39a-4ba9-99a4-cb2befa9571c-config-data\") pod \"nova-scheduler-0\" (UID: \"141cecd9-d39a-4ba9-99a4-cb2befa9571c\") " pod="openstack/nova-scheduler-0" Feb 19 15:36:23 crc kubenswrapper[4861]: I0219 15:36:23.988889 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fd2c176-d104-4058-9b92-db8937b2fa68" path="/var/lib/kubelet/pods/4fd2c176-d104-4058-9b92-db8937b2fa68/volumes" Feb 19 15:36:24 crc kubenswrapper[4861]: I0219 15:36:24.033011 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m95zt\" (UniqueName: 
\"kubernetes.io/projected/141cecd9-d39a-4ba9-99a4-cb2befa9571c-kube-api-access-m95zt\") pod \"nova-scheduler-0\" (UID: \"141cecd9-d39a-4ba9-99a4-cb2befa9571c\") " pod="openstack/nova-scheduler-0" Feb 19 15:36:24 crc kubenswrapper[4861]: I0219 15:36:24.033108 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141cecd9-d39a-4ba9-99a4-cb2befa9571c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"141cecd9-d39a-4ba9-99a4-cb2befa9571c\") " pod="openstack/nova-scheduler-0" Feb 19 15:36:24 crc kubenswrapper[4861]: I0219 15:36:24.033174 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141cecd9-d39a-4ba9-99a4-cb2befa9571c-config-data\") pod \"nova-scheduler-0\" (UID: \"141cecd9-d39a-4ba9-99a4-cb2befa9571c\") " pod="openstack/nova-scheduler-0" Feb 19 15:36:24 crc kubenswrapper[4861]: I0219 15:36:24.041445 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141cecd9-d39a-4ba9-99a4-cb2befa9571c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"141cecd9-d39a-4ba9-99a4-cb2befa9571c\") " pod="openstack/nova-scheduler-0" Feb 19 15:36:24 crc kubenswrapper[4861]: I0219 15:36:24.042679 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141cecd9-d39a-4ba9-99a4-cb2befa9571c-config-data\") pod \"nova-scheduler-0\" (UID: \"141cecd9-d39a-4ba9-99a4-cb2befa9571c\") " pod="openstack/nova-scheduler-0" Feb 19 15:36:24 crc kubenswrapper[4861]: I0219 15:36:24.063628 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m95zt\" (UniqueName: \"kubernetes.io/projected/141cecd9-d39a-4ba9-99a4-cb2befa9571c-kube-api-access-m95zt\") pod \"nova-scheduler-0\" (UID: \"141cecd9-d39a-4ba9-99a4-cb2befa9571c\") " 
pod="openstack/nova-scheduler-0" Feb 19 15:36:24 crc kubenswrapper[4861]: I0219 15:36:24.195792 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 15:36:24 crc kubenswrapper[4861]: I0219 15:36:24.706058 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 15:36:24 crc kubenswrapper[4861]: I0219 15:36:24.776789 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 19 15:36:25 crc kubenswrapper[4861]: I0219 15:36:25.474404 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"141cecd9-d39a-4ba9-99a4-cb2befa9571c","Type":"ContainerStarted","Data":"0867bf9c7e7f88b918ead32767d61705a3258639ac139483b0d8905f3ff9f0a2"} Feb 19 15:36:25 crc kubenswrapper[4861]: I0219 15:36:25.474716 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"141cecd9-d39a-4ba9-99a4-cb2befa9571c","Type":"ContainerStarted","Data":"9772eb2ebb57f4132f01e388b63a7fec8a618fc214bfff12936aa483cd74c42a"} Feb 19 15:36:25 crc kubenswrapper[4861]: I0219 15:36:25.492619 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.492599363 podStartE2EDuration="2.492599363s" podCreationTimestamp="2026-02-19 15:36:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:36:25.487975049 +0000 UTC m=+8800.149078277" watchObservedRunningTime="2026-02-19 15:36:25.492599363 +0000 UTC m=+8800.153702591" Feb 19 15:36:26 crc kubenswrapper[4861]: I0219 15:36:26.892635 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hfnfm" Feb 19 15:36:26 crc kubenswrapper[4861]: I0219 15:36:26.892934 4861 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hfnfm" Feb 19 15:36:26 crc kubenswrapper[4861]: I0219 15:36:26.944956 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hfnfm" Feb 19 15:36:27 crc kubenswrapper[4861]: I0219 15:36:27.032045 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 15:36:27 crc kubenswrapper[4861]: I0219 15:36:27.032458 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 15:36:27 crc kubenswrapper[4861]: I0219 15:36:27.573219 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hfnfm" Feb 19 15:36:27 crc kubenswrapper[4861]: I0219 15:36:27.630455 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hfnfm"] Feb 19 15:36:27 crc kubenswrapper[4861]: I0219 15:36:27.807960 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 15:36:27 crc kubenswrapper[4861]: I0219 15:36:27.808174 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 15:36:28 crc kubenswrapper[4861]: I0219 15:36:28.053658 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="34d5770e-d9f2-4b84-9700-f1205577fd3d" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.206:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 15:36:28 crc kubenswrapper[4861]: I0219 15:36:28.053709 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="34d5770e-d9f2-4b84-9700-f1205577fd3d" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.206:8774/\": net/http: request canceled (Client.Timeout 
exceeded while awaiting headers)" Feb 19 15:36:28 crc kubenswrapper[4861]: I0219 15:36:28.824581 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f0490c4d-3f19-4ef9-a74d-db926199c4c2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 15:36:28 crc kubenswrapper[4861]: I0219 15:36:28.824594 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f0490c4d-3f19-4ef9-a74d-db926199c4c2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 15:36:29 crc kubenswrapper[4861]: I0219 15:36:29.196695 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 15:36:29 crc kubenswrapper[4861]: I0219 15:36:29.524318 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hfnfm" podUID="012ccca4-61b0-4a0b-adc5-52a4705e3fb2" containerName="registry-server" containerID="cri-o://d21b62b97c8524a44f854290448045f6a35a0363a4d692e31d1d170b0bf71f11" gracePeriod=2 Feb 19 15:36:30 crc kubenswrapper[4861]: I0219 15:36:30.147648 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hfnfm" Feb 19 15:36:30 crc kubenswrapper[4861]: I0219 15:36:30.194025 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/012ccca4-61b0-4a0b-adc5-52a4705e3fb2-catalog-content\") pod \"012ccca4-61b0-4a0b-adc5-52a4705e3fb2\" (UID: \"012ccca4-61b0-4a0b-adc5-52a4705e3fb2\") " Feb 19 15:36:30 crc kubenswrapper[4861]: I0219 15:36:30.194069 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/012ccca4-61b0-4a0b-adc5-52a4705e3fb2-utilities\") pod \"012ccca4-61b0-4a0b-adc5-52a4705e3fb2\" (UID: \"012ccca4-61b0-4a0b-adc5-52a4705e3fb2\") " Feb 19 15:36:30 crc kubenswrapper[4861]: I0219 15:36:30.194260 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slsbl\" (UniqueName: \"kubernetes.io/projected/012ccca4-61b0-4a0b-adc5-52a4705e3fb2-kube-api-access-slsbl\") pod \"012ccca4-61b0-4a0b-adc5-52a4705e3fb2\" (UID: \"012ccca4-61b0-4a0b-adc5-52a4705e3fb2\") " Feb 19 15:36:30 crc kubenswrapper[4861]: I0219 15:36:30.195870 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/012ccca4-61b0-4a0b-adc5-52a4705e3fb2-utilities" (OuterVolumeSpecName: "utilities") pod "012ccca4-61b0-4a0b-adc5-52a4705e3fb2" (UID: "012ccca4-61b0-4a0b-adc5-52a4705e3fb2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:36:30 crc kubenswrapper[4861]: I0219 15:36:30.204532 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/012ccca4-61b0-4a0b-adc5-52a4705e3fb2-kube-api-access-slsbl" (OuterVolumeSpecName: "kube-api-access-slsbl") pod "012ccca4-61b0-4a0b-adc5-52a4705e3fb2" (UID: "012ccca4-61b0-4a0b-adc5-52a4705e3fb2"). InnerVolumeSpecName "kube-api-access-slsbl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:36:30 crc kubenswrapper[4861]: I0219 15:36:30.220062 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/012ccca4-61b0-4a0b-adc5-52a4705e3fb2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "012ccca4-61b0-4a0b-adc5-52a4705e3fb2" (UID: "012ccca4-61b0-4a0b-adc5-52a4705e3fb2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:36:30 crc kubenswrapper[4861]: I0219 15:36:30.298868 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slsbl\" (UniqueName: \"kubernetes.io/projected/012ccca4-61b0-4a0b-adc5-52a4705e3fb2-kube-api-access-slsbl\") on node \"crc\" DevicePath \"\"" Feb 19 15:36:30 crc kubenswrapper[4861]: I0219 15:36:30.298929 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/012ccca4-61b0-4a0b-adc5-52a4705e3fb2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:36:30 crc kubenswrapper[4861]: I0219 15:36:30.298943 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/012ccca4-61b0-4a0b-adc5-52a4705e3fb2-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:36:30 crc kubenswrapper[4861]: I0219 15:36:30.536874 4861 generic.go:334] "Generic (PLEG): container finished" podID="012ccca4-61b0-4a0b-adc5-52a4705e3fb2" containerID="d21b62b97c8524a44f854290448045f6a35a0363a4d692e31d1d170b0bf71f11" exitCode=0 Feb 19 15:36:30 crc kubenswrapper[4861]: I0219 15:36:30.536952 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hfnfm" event={"ID":"012ccca4-61b0-4a0b-adc5-52a4705e3fb2","Type":"ContainerDied","Data":"d21b62b97c8524a44f854290448045f6a35a0363a4d692e31d1d170b0bf71f11"} Feb 19 15:36:30 crc kubenswrapper[4861]: I0219 15:36:30.536976 4861 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hfnfm" Feb 19 15:36:30 crc kubenswrapper[4861]: I0219 15:36:30.537017 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hfnfm" event={"ID":"012ccca4-61b0-4a0b-adc5-52a4705e3fb2","Type":"ContainerDied","Data":"aaf95feee47755c21a5c6acce6d9c7362699ad949a40fb16fc7862b6a3776db2"} Feb 19 15:36:30 crc kubenswrapper[4861]: I0219 15:36:30.537035 4861 scope.go:117] "RemoveContainer" containerID="d21b62b97c8524a44f854290448045f6a35a0363a4d692e31d1d170b0bf71f11" Feb 19 15:36:30 crc kubenswrapper[4861]: I0219 15:36:30.572973 4861 scope.go:117] "RemoveContainer" containerID="3a626576c0ba428fd8bab46992cbee4e411ae0fbe1b9bfcf4b31c4745014506c" Feb 19 15:36:30 crc kubenswrapper[4861]: I0219 15:36:30.580519 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hfnfm"] Feb 19 15:36:30 crc kubenswrapper[4861]: I0219 15:36:30.592679 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hfnfm"] Feb 19 15:36:30 crc kubenswrapper[4861]: I0219 15:36:30.613194 4861 scope.go:117] "RemoveContainer" containerID="c71139e8d2b5513c00e0bf2239836c2df353e40b9eb093698e564109873548ff" Feb 19 15:36:30 crc kubenswrapper[4861]: I0219 15:36:30.650813 4861 scope.go:117] "RemoveContainer" containerID="d21b62b97c8524a44f854290448045f6a35a0363a4d692e31d1d170b0bf71f11" Feb 19 15:36:30 crc kubenswrapper[4861]: E0219 15:36:30.651679 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d21b62b97c8524a44f854290448045f6a35a0363a4d692e31d1d170b0bf71f11\": container with ID starting with d21b62b97c8524a44f854290448045f6a35a0363a4d692e31d1d170b0bf71f11 not found: ID does not exist" containerID="d21b62b97c8524a44f854290448045f6a35a0363a4d692e31d1d170b0bf71f11" Feb 19 15:36:30 crc kubenswrapper[4861]: I0219 15:36:30.651732 4861 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d21b62b97c8524a44f854290448045f6a35a0363a4d692e31d1d170b0bf71f11"} err="failed to get container status \"d21b62b97c8524a44f854290448045f6a35a0363a4d692e31d1d170b0bf71f11\": rpc error: code = NotFound desc = could not find container \"d21b62b97c8524a44f854290448045f6a35a0363a4d692e31d1d170b0bf71f11\": container with ID starting with d21b62b97c8524a44f854290448045f6a35a0363a4d692e31d1d170b0bf71f11 not found: ID does not exist" Feb 19 15:36:30 crc kubenswrapper[4861]: I0219 15:36:30.651765 4861 scope.go:117] "RemoveContainer" containerID="3a626576c0ba428fd8bab46992cbee4e411ae0fbe1b9bfcf4b31c4745014506c" Feb 19 15:36:30 crc kubenswrapper[4861]: E0219 15:36:30.652352 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a626576c0ba428fd8bab46992cbee4e411ae0fbe1b9bfcf4b31c4745014506c\": container with ID starting with 3a626576c0ba428fd8bab46992cbee4e411ae0fbe1b9bfcf4b31c4745014506c not found: ID does not exist" containerID="3a626576c0ba428fd8bab46992cbee4e411ae0fbe1b9bfcf4b31c4745014506c" Feb 19 15:36:30 crc kubenswrapper[4861]: I0219 15:36:30.652394 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a626576c0ba428fd8bab46992cbee4e411ae0fbe1b9bfcf4b31c4745014506c"} err="failed to get container status \"3a626576c0ba428fd8bab46992cbee4e411ae0fbe1b9bfcf4b31c4745014506c\": rpc error: code = NotFound desc = could not find container \"3a626576c0ba428fd8bab46992cbee4e411ae0fbe1b9bfcf4b31c4745014506c\": container with ID starting with 3a626576c0ba428fd8bab46992cbee4e411ae0fbe1b9bfcf4b31c4745014506c not found: ID does not exist" Feb 19 15:36:30 crc kubenswrapper[4861]: I0219 15:36:30.652446 4861 scope.go:117] "RemoveContainer" containerID="c71139e8d2b5513c00e0bf2239836c2df353e40b9eb093698e564109873548ff" Feb 19 15:36:30 crc kubenswrapper[4861]: E0219 
15:36:30.652760 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c71139e8d2b5513c00e0bf2239836c2df353e40b9eb093698e564109873548ff\": container with ID starting with c71139e8d2b5513c00e0bf2239836c2df353e40b9eb093698e564109873548ff not found: ID does not exist" containerID="c71139e8d2b5513c00e0bf2239836c2df353e40b9eb093698e564109873548ff" Feb 19 15:36:30 crc kubenswrapper[4861]: I0219 15:36:30.652790 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c71139e8d2b5513c00e0bf2239836c2df353e40b9eb093698e564109873548ff"} err="failed to get container status \"c71139e8d2b5513c00e0bf2239836c2df353e40b9eb093698e564109873548ff\": rpc error: code = NotFound desc = could not find container \"c71139e8d2b5513c00e0bf2239836c2df353e40b9eb093698e564109873548ff\": container with ID starting with c71139e8d2b5513c00e0bf2239836c2df353e40b9eb093698e564109873548ff not found: ID does not exist" Feb 19 15:36:31 crc kubenswrapper[4861]: I0219 15:36:31.996909 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="012ccca4-61b0-4a0b-adc5-52a4705e3fb2" path="/var/lib/kubelet/pods/012ccca4-61b0-4a0b-adc5-52a4705e3fb2/volumes" Feb 19 15:36:34 crc kubenswrapper[4861]: I0219 15:36:34.196541 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 15:36:34 crc kubenswrapper[4861]: I0219 15:36:34.240989 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 15:36:34 crc kubenswrapper[4861]: I0219 15:36:34.635848 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 15:36:37 crc kubenswrapper[4861]: I0219 15:36:37.044233 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 15:36:37 crc kubenswrapper[4861]: I0219 15:36:37.045127 
4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 15:36:37 crc kubenswrapper[4861]: I0219 15:36:37.045257 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 15:36:37 crc kubenswrapper[4861]: I0219 15:36:37.056051 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 15:36:37 crc kubenswrapper[4861]: I0219 15:36:37.618587 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 15:36:37 crc kubenswrapper[4861]: I0219 15:36:37.626343 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 15:36:37 crc kubenswrapper[4861]: I0219 15:36:37.814110 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 15:36:37 crc kubenswrapper[4861]: I0219 15:36:37.814635 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 15:36:37 crc kubenswrapper[4861]: I0219 15:36:37.826980 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 15:36:37 crc kubenswrapper[4861]: I0219 15:36:37.836197 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 15:36:49 crc kubenswrapper[4861]: I0219 15:36:49.584620 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kpdmh"] Feb 19 15:36:49 crc kubenswrapper[4861]: E0219 15:36:49.586040 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="012ccca4-61b0-4a0b-adc5-52a4705e3fb2" containerName="extract-content" Feb 19 15:36:49 crc kubenswrapper[4861]: I0219 15:36:49.586061 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="012ccca4-61b0-4a0b-adc5-52a4705e3fb2" containerName="extract-content" Feb 19 
15:36:49 crc kubenswrapper[4861]: E0219 15:36:49.586095 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="012ccca4-61b0-4a0b-adc5-52a4705e3fb2" containerName="extract-utilities" Feb 19 15:36:49 crc kubenswrapper[4861]: I0219 15:36:49.586106 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="012ccca4-61b0-4a0b-adc5-52a4705e3fb2" containerName="extract-utilities" Feb 19 15:36:49 crc kubenswrapper[4861]: E0219 15:36:49.586133 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="012ccca4-61b0-4a0b-adc5-52a4705e3fb2" containerName="registry-server" Feb 19 15:36:49 crc kubenswrapper[4861]: I0219 15:36:49.586167 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="012ccca4-61b0-4a0b-adc5-52a4705e3fb2" containerName="registry-server" Feb 19 15:36:49 crc kubenswrapper[4861]: I0219 15:36:49.586579 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="012ccca4-61b0-4a0b-adc5-52a4705e3fb2" containerName="registry-server" Feb 19 15:36:49 crc kubenswrapper[4861]: I0219 15:36:49.589669 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kpdmh" Feb 19 15:36:49 crc kubenswrapper[4861]: I0219 15:36:49.606442 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kpdmh"] Feb 19 15:36:49 crc kubenswrapper[4861]: I0219 15:36:49.669244 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c82pt\" (UniqueName: \"kubernetes.io/projected/785291ec-7b1e-48a9-b797-5f5ec72f3f63-kube-api-access-c82pt\") pod \"certified-operators-kpdmh\" (UID: \"785291ec-7b1e-48a9-b797-5f5ec72f3f63\") " pod="openshift-marketplace/certified-operators-kpdmh" Feb 19 15:36:49 crc kubenswrapper[4861]: I0219 15:36:49.669649 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/785291ec-7b1e-48a9-b797-5f5ec72f3f63-utilities\") pod \"certified-operators-kpdmh\" (UID: \"785291ec-7b1e-48a9-b797-5f5ec72f3f63\") " pod="openshift-marketplace/certified-operators-kpdmh" Feb 19 15:36:49 crc kubenswrapper[4861]: I0219 15:36:49.669744 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/785291ec-7b1e-48a9-b797-5f5ec72f3f63-catalog-content\") pod \"certified-operators-kpdmh\" (UID: \"785291ec-7b1e-48a9-b797-5f5ec72f3f63\") " pod="openshift-marketplace/certified-operators-kpdmh" Feb 19 15:36:49 crc kubenswrapper[4861]: I0219 15:36:49.771975 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/785291ec-7b1e-48a9-b797-5f5ec72f3f63-utilities\") pod \"certified-operators-kpdmh\" (UID: \"785291ec-7b1e-48a9-b797-5f5ec72f3f63\") " pod="openshift-marketplace/certified-operators-kpdmh" Feb 19 15:36:49 crc kubenswrapper[4861]: I0219 15:36:49.772034 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/785291ec-7b1e-48a9-b797-5f5ec72f3f63-catalog-content\") pod \"certified-operators-kpdmh\" (UID: \"785291ec-7b1e-48a9-b797-5f5ec72f3f63\") " pod="openshift-marketplace/certified-operators-kpdmh" Feb 19 15:36:49 crc kubenswrapper[4861]: I0219 15:36:49.772210 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c82pt\" (UniqueName: \"kubernetes.io/projected/785291ec-7b1e-48a9-b797-5f5ec72f3f63-kube-api-access-c82pt\") pod \"certified-operators-kpdmh\" (UID: \"785291ec-7b1e-48a9-b797-5f5ec72f3f63\") " pod="openshift-marketplace/certified-operators-kpdmh" Feb 19 15:36:49 crc kubenswrapper[4861]: I0219 15:36:49.772600 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/785291ec-7b1e-48a9-b797-5f5ec72f3f63-utilities\") pod \"certified-operators-kpdmh\" (UID: \"785291ec-7b1e-48a9-b797-5f5ec72f3f63\") " pod="openshift-marketplace/certified-operators-kpdmh" Feb 19 15:36:49 crc kubenswrapper[4861]: I0219 15:36:49.772860 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/785291ec-7b1e-48a9-b797-5f5ec72f3f63-catalog-content\") pod \"certified-operators-kpdmh\" (UID: \"785291ec-7b1e-48a9-b797-5f5ec72f3f63\") " pod="openshift-marketplace/certified-operators-kpdmh" Feb 19 15:36:49 crc kubenswrapper[4861]: I0219 15:36:49.796283 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c82pt\" (UniqueName: \"kubernetes.io/projected/785291ec-7b1e-48a9-b797-5f5ec72f3f63-kube-api-access-c82pt\") pod \"certified-operators-kpdmh\" (UID: \"785291ec-7b1e-48a9-b797-5f5ec72f3f63\") " pod="openshift-marketplace/certified-operators-kpdmh" Feb 19 15:36:49 crc kubenswrapper[4861]: I0219 15:36:49.934797 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kpdmh" Feb 19 15:36:50 crc kubenswrapper[4861]: I0219 15:36:50.469975 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kpdmh"] Feb 19 15:36:50 crc kubenswrapper[4861]: I0219 15:36:50.783008 4861 generic.go:334] "Generic (PLEG): container finished" podID="785291ec-7b1e-48a9-b797-5f5ec72f3f63" containerID="2bc669bd449ccfd1c81859ebcd9e9e1a1f4a07c0d23eb2678f8c830ea535d845" exitCode=0 Feb 19 15:36:50 crc kubenswrapper[4861]: I0219 15:36:50.783060 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kpdmh" event={"ID":"785291ec-7b1e-48a9-b797-5f5ec72f3f63","Type":"ContainerDied","Data":"2bc669bd449ccfd1c81859ebcd9e9e1a1f4a07c0d23eb2678f8c830ea535d845"} Feb 19 15:36:50 crc kubenswrapper[4861]: I0219 15:36:50.783090 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kpdmh" event={"ID":"785291ec-7b1e-48a9-b797-5f5ec72f3f63","Type":"ContainerStarted","Data":"bfce4dc53ac3d2cb08ee4f6cfd9f2f854736399f15456d584b6d64abbca9a517"} Feb 19 15:36:51 crc kubenswrapper[4861]: I0219 15:36:51.796505 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kpdmh" event={"ID":"785291ec-7b1e-48a9-b797-5f5ec72f3f63","Type":"ContainerStarted","Data":"f028a35f35654e6292414ebf8bdfca6ae54d4185f52650005b4ddc1336a7b45e"} Feb 19 15:36:53 crc kubenswrapper[4861]: I0219 15:36:53.821405 4861 generic.go:334] "Generic (PLEG): container finished" podID="785291ec-7b1e-48a9-b797-5f5ec72f3f63" containerID="f028a35f35654e6292414ebf8bdfca6ae54d4185f52650005b4ddc1336a7b45e" exitCode=0 Feb 19 15:36:53 crc kubenswrapper[4861]: I0219 15:36:53.821476 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kpdmh" 
event={"ID":"785291ec-7b1e-48a9-b797-5f5ec72f3f63","Type":"ContainerDied","Data":"f028a35f35654e6292414ebf8bdfca6ae54d4185f52650005b4ddc1336a7b45e"} Feb 19 15:36:54 crc kubenswrapper[4861]: I0219 15:36:54.836001 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kpdmh" event={"ID":"785291ec-7b1e-48a9-b797-5f5ec72f3f63","Type":"ContainerStarted","Data":"90d1795cd803c101a4e5144ef5548963b61b8b61cd32993104063b0f1bff2ae0"} Feb 19 15:36:54 crc kubenswrapper[4861]: I0219 15:36:54.864600 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kpdmh" podStartSLOduration=2.444547666 podStartE2EDuration="5.86457489s" podCreationTimestamp="2026-02-19 15:36:49 +0000 UTC" firstStartedPulling="2026-02-19 15:36:50.787861007 +0000 UTC m=+8825.448964265" lastFinishedPulling="2026-02-19 15:36:54.207888221 +0000 UTC m=+8828.868991489" observedRunningTime="2026-02-19 15:36:54.861394024 +0000 UTC m=+8829.522497302" watchObservedRunningTime="2026-02-19 15:36:54.86457489 +0000 UTC m=+8829.525678128" Feb 19 15:36:59 crc kubenswrapper[4861]: I0219 15:36:59.935948 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kpdmh" Feb 19 15:36:59 crc kubenswrapper[4861]: I0219 15:36:59.937657 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kpdmh" Feb 19 15:37:00 crc kubenswrapper[4861]: I0219 15:37:00.000761 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kpdmh" Feb 19 15:37:01 crc kubenswrapper[4861]: I0219 15:37:01.011864 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kpdmh" Feb 19 15:37:01 crc kubenswrapper[4861]: I0219 15:37:01.066116 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-kpdmh"] Feb 19 15:37:02 crc kubenswrapper[4861]: I0219 15:37:02.942320 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kpdmh" podUID="785291ec-7b1e-48a9-b797-5f5ec72f3f63" containerName="registry-server" containerID="cri-o://90d1795cd803c101a4e5144ef5548963b61b8b61cd32993104063b0f1bff2ae0" gracePeriod=2 Feb 19 15:37:03 crc kubenswrapper[4861]: I0219 15:37:03.444547 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kpdmh" Feb 19 15:37:03 crc kubenswrapper[4861]: I0219 15:37:03.544490 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c82pt\" (UniqueName: \"kubernetes.io/projected/785291ec-7b1e-48a9-b797-5f5ec72f3f63-kube-api-access-c82pt\") pod \"785291ec-7b1e-48a9-b797-5f5ec72f3f63\" (UID: \"785291ec-7b1e-48a9-b797-5f5ec72f3f63\") " Feb 19 15:37:03 crc kubenswrapper[4861]: I0219 15:37:03.544651 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/785291ec-7b1e-48a9-b797-5f5ec72f3f63-utilities\") pod \"785291ec-7b1e-48a9-b797-5f5ec72f3f63\" (UID: \"785291ec-7b1e-48a9-b797-5f5ec72f3f63\") " Feb 19 15:37:03 crc kubenswrapper[4861]: I0219 15:37:03.544763 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/785291ec-7b1e-48a9-b797-5f5ec72f3f63-catalog-content\") pod \"785291ec-7b1e-48a9-b797-5f5ec72f3f63\" (UID: \"785291ec-7b1e-48a9-b797-5f5ec72f3f63\") " Feb 19 15:37:03 crc kubenswrapper[4861]: I0219 15:37:03.546450 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/785291ec-7b1e-48a9-b797-5f5ec72f3f63-utilities" (OuterVolumeSpecName: "utilities") pod "785291ec-7b1e-48a9-b797-5f5ec72f3f63" (UID: 
"785291ec-7b1e-48a9-b797-5f5ec72f3f63"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:37:03 crc kubenswrapper[4861]: I0219 15:37:03.571475 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/785291ec-7b1e-48a9-b797-5f5ec72f3f63-kube-api-access-c82pt" (OuterVolumeSpecName: "kube-api-access-c82pt") pod "785291ec-7b1e-48a9-b797-5f5ec72f3f63" (UID: "785291ec-7b1e-48a9-b797-5f5ec72f3f63"). InnerVolumeSpecName "kube-api-access-c82pt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:37:03 crc kubenswrapper[4861]: I0219 15:37:03.602850 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/785291ec-7b1e-48a9-b797-5f5ec72f3f63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "785291ec-7b1e-48a9-b797-5f5ec72f3f63" (UID: "785291ec-7b1e-48a9-b797-5f5ec72f3f63"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:37:03 crc kubenswrapper[4861]: I0219 15:37:03.646754 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c82pt\" (UniqueName: \"kubernetes.io/projected/785291ec-7b1e-48a9-b797-5f5ec72f3f63-kube-api-access-c82pt\") on node \"crc\" DevicePath \"\"" Feb 19 15:37:03 crc kubenswrapper[4861]: I0219 15:37:03.647044 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/785291ec-7b1e-48a9-b797-5f5ec72f3f63-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:37:03 crc kubenswrapper[4861]: I0219 15:37:03.647057 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/785291ec-7b1e-48a9-b797-5f5ec72f3f63-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:37:03 crc kubenswrapper[4861]: I0219 15:37:03.834052 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:37:03 crc kubenswrapper[4861]: I0219 15:37:03.834110 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:37:03 crc kubenswrapper[4861]: I0219 15:37:03.957461 4861 generic.go:334] "Generic (PLEG): container finished" podID="785291ec-7b1e-48a9-b797-5f5ec72f3f63" containerID="90d1795cd803c101a4e5144ef5548963b61b8b61cd32993104063b0f1bff2ae0" exitCode=0 Feb 19 15:37:03 crc kubenswrapper[4861]: I0219 15:37:03.957519 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kpdmh" Feb 19 15:37:03 crc kubenswrapper[4861]: I0219 15:37:03.957520 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kpdmh" event={"ID":"785291ec-7b1e-48a9-b797-5f5ec72f3f63","Type":"ContainerDied","Data":"90d1795cd803c101a4e5144ef5548963b61b8b61cd32993104063b0f1bff2ae0"} Feb 19 15:37:03 crc kubenswrapper[4861]: I0219 15:37:03.957711 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kpdmh" event={"ID":"785291ec-7b1e-48a9-b797-5f5ec72f3f63","Type":"ContainerDied","Data":"bfce4dc53ac3d2cb08ee4f6cfd9f2f854736399f15456d584b6d64abbca9a517"} Feb 19 15:37:03 crc kubenswrapper[4861]: I0219 15:37:03.957757 4861 scope.go:117] "RemoveContainer" containerID="90d1795cd803c101a4e5144ef5548963b61b8b61cd32993104063b0f1bff2ae0" Feb 19 15:37:03 crc kubenswrapper[4861]: I0219 15:37:03.994645 4861 scope.go:117] "RemoveContainer" 
containerID="f028a35f35654e6292414ebf8bdfca6ae54d4185f52650005b4ddc1336a7b45e" Feb 19 15:37:04 crc kubenswrapper[4861]: I0219 15:37:04.002251 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kpdmh"] Feb 19 15:37:04 crc kubenswrapper[4861]: I0219 15:37:04.023145 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kpdmh"] Feb 19 15:37:04 crc kubenswrapper[4861]: I0219 15:37:04.023962 4861 scope.go:117] "RemoveContainer" containerID="2bc669bd449ccfd1c81859ebcd9e9e1a1f4a07c0d23eb2678f8c830ea535d845" Feb 19 15:37:04 crc kubenswrapper[4861]: I0219 15:37:04.089097 4861 scope.go:117] "RemoveContainer" containerID="90d1795cd803c101a4e5144ef5548963b61b8b61cd32993104063b0f1bff2ae0" Feb 19 15:37:04 crc kubenswrapper[4861]: E0219 15:37:04.089501 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90d1795cd803c101a4e5144ef5548963b61b8b61cd32993104063b0f1bff2ae0\": container with ID starting with 90d1795cd803c101a4e5144ef5548963b61b8b61cd32993104063b0f1bff2ae0 not found: ID does not exist" containerID="90d1795cd803c101a4e5144ef5548963b61b8b61cd32993104063b0f1bff2ae0" Feb 19 15:37:04 crc kubenswrapper[4861]: I0219 15:37:04.089537 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90d1795cd803c101a4e5144ef5548963b61b8b61cd32993104063b0f1bff2ae0"} err="failed to get container status \"90d1795cd803c101a4e5144ef5548963b61b8b61cd32993104063b0f1bff2ae0\": rpc error: code = NotFound desc = could not find container \"90d1795cd803c101a4e5144ef5548963b61b8b61cd32993104063b0f1bff2ae0\": container with ID starting with 90d1795cd803c101a4e5144ef5548963b61b8b61cd32993104063b0f1bff2ae0 not found: ID does not exist" Feb 19 15:37:04 crc kubenswrapper[4861]: I0219 15:37:04.089566 4861 scope.go:117] "RemoveContainer" 
containerID="f028a35f35654e6292414ebf8bdfca6ae54d4185f52650005b4ddc1336a7b45e" Feb 19 15:37:04 crc kubenswrapper[4861]: E0219 15:37:04.089839 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f028a35f35654e6292414ebf8bdfca6ae54d4185f52650005b4ddc1336a7b45e\": container with ID starting with f028a35f35654e6292414ebf8bdfca6ae54d4185f52650005b4ddc1336a7b45e not found: ID does not exist" containerID="f028a35f35654e6292414ebf8bdfca6ae54d4185f52650005b4ddc1336a7b45e" Feb 19 15:37:04 crc kubenswrapper[4861]: I0219 15:37:04.089928 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f028a35f35654e6292414ebf8bdfca6ae54d4185f52650005b4ddc1336a7b45e"} err="failed to get container status \"f028a35f35654e6292414ebf8bdfca6ae54d4185f52650005b4ddc1336a7b45e\": rpc error: code = NotFound desc = could not find container \"f028a35f35654e6292414ebf8bdfca6ae54d4185f52650005b4ddc1336a7b45e\": container with ID starting with f028a35f35654e6292414ebf8bdfca6ae54d4185f52650005b4ddc1336a7b45e not found: ID does not exist" Feb 19 15:37:04 crc kubenswrapper[4861]: I0219 15:37:04.089970 4861 scope.go:117] "RemoveContainer" containerID="2bc669bd449ccfd1c81859ebcd9e9e1a1f4a07c0d23eb2678f8c830ea535d845" Feb 19 15:37:04 crc kubenswrapper[4861]: E0219 15:37:04.090312 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bc669bd449ccfd1c81859ebcd9e9e1a1f4a07c0d23eb2678f8c830ea535d845\": container with ID starting with 2bc669bd449ccfd1c81859ebcd9e9e1a1f4a07c0d23eb2678f8c830ea535d845 not found: ID does not exist" containerID="2bc669bd449ccfd1c81859ebcd9e9e1a1f4a07c0d23eb2678f8c830ea535d845" Feb 19 15:37:04 crc kubenswrapper[4861]: I0219 15:37:04.090345 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2bc669bd449ccfd1c81859ebcd9e9e1a1f4a07c0d23eb2678f8c830ea535d845"} err="failed to get container status \"2bc669bd449ccfd1c81859ebcd9e9e1a1f4a07c0d23eb2678f8c830ea535d845\": rpc error: code = NotFound desc = could not find container \"2bc669bd449ccfd1c81859ebcd9e9e1a1f4a07c0d23eb2678f8c830ea535d845\": container with ID starting with 2bc669bd449ccfd1c81859ebcd9e9e1a1f4a07c0d23eb2678f8c830ea535d845 not found: ID does not exist" Feb 19 15:37:06 crc kubenswrapper[4861]: I0219 15:37:06.004377 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="785291ec-7b1e-48a9-b797-5f5ec72f3f63" path="/var/lib/kubelet/pods/785291ec-7b1e-48a9-b797-5f5ec72f3f63/volumes" Feb 19 15:37:17 crc kubenswrapper[4861]: I0219 15:37:17.358594 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5bnm5"] Feb 19 15:37:17 crc kubenswrapper[4861]: E0219 15:37:17.359494 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="785291ec-7b1e-48a9-b797-5f5ec72f3f63" containerName="extract-utilities" Feb 19 15:37:17 crc kubenswrapper[4861]: I0219 15:37:17.359509 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="785291ec-7b1e-48a9-b797-5f5ec72f3f63" containerName="extract-utilities" Feb 19 15:37:17 crc kubenswrapper[4861]: E0219 15:37:17.359563 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="785291ec-7b1e-48a9-b797-5f5ec72f3f63" containerName="extract-content" Feb 19 15:37:17 crc kubenswrapper[4861]: I0219 15:37:17.359572 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="785291ec-7b1e-48a9-b797-5f5ec72f3f63" containerName="extract-content" Feb 19 15:37:17 crc kubenswrapper[4861]: E0219 15:37:17.359588 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="785291ec-7b1e-48a9-b797-5f5ec72f3f63" containerName="registry-server" Feb 19 15:37:17 crc kubenswrapper[4861]: I0219 15:37:17.359596 4861 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="785291ec-7b1e-48a9-b797-5f5ec72f3f63" containerName="registry-server" Feb 19 15:37:17 crc kubenswrapper[4861]: I0219 15:37:17.359789 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="785291ec-7b1e-48a9-b797-5f5ec72f3f63" containerName="registry-server" Feb 19 15:37:17 crc kubenswrapper[4861]: I0219 15:37:17.361435 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5bnm5" Feb 19 15:37:17 crc kubenswrapper[4861]: I0219 15:37:17.376286 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5bnm5"] Feb 19 15:37:17 crc kubenswrapper[4861]: I0219 15:37:17.493750 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p452g\" (UniqueName: \"kubernetes.io/projected/e2eb4f97-64f2-47b2-95dd-79a87819c6fd-kube-api-access-p452g\") pod \"community-operators-5bnm5\" (UID: \"e2eb4f97-64f2-47b2-95dd-79a87819c6fd\") " pod="openshift-marketplace/community-operators-5bnm5" Feb 19 15:37:17 crc kubenswrapper[4861]: I0219 15:37:17.493855 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2eb4f97-64f2-47b2-95dd-79a87819c6fd-utilities\") pod \"community-operators-5bnm5\" (UID: \"e2eb4f97-64f2-47b2-95dd-79a87819c6fd\") " pod="openshift-marketplace/community-operators-5bnm5" Feb 19 15:37:17 crc kubenswrapper[4861]: I0219 15:37:17.494296 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2eb4f97-64f2-47b2-95dd-79a87819c6fd-catalog-content\") pod \"community-operators-5bnm5\" (UID: \"e2eb4f97-64f2-47b2-95dd-79a87819c6fd\") " pod="openshift-marketplace/community-operators-5bnm5" Feb 19 15:37:17 crc kubenswrapper[4861]: I0219 15:37:17.596229 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-p452g\" (UniqueName: \"kubernetes.io/projected/e2eb4f97-64f2-47b2-95dd-79a87819c6fd-kube-api-access-p452g\") pod \"community-operators-5bnm5\" (UID: \"e2eb4f97-64f2-47b2-95dd-79a87819c6fd\") " pod="openshift-marketplace/community-operators-5bnm5" Feb 19 15:37:17 crc kubenswrapper[4861]: I0219 15:37:17.596375 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2eb4f97-64f2-47b2-95dd-79a87819c6fd-utilities\") pod \"community-operators-5bnm5\" (UID: \"e2eb4f97-64f2-47b2-95dd-79a87819c6fd\") " pod="openshift-marketplace/community-operators-5bnm5" Feb 19 15:37:17 crc kubenswrapper[4861]: I0219 15:37:17.596685 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2eb4f97-64f2-47b2-95dd-79a87819c6fd-catalog-content\") pod \"community-operators-5bnm5\" (UID: \"e2eb4f97-64f2-47b2-95dd-79a87819c6fd\") " pod="openshift-marketplace/community-operators-5bnm5" Feb 19 15:37:17 crc kubenswrapper[4861]: I0219 15:37:17.596976 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2eb4f97-64f2-47b2-95dd-79a87819c6fd-utilities\") pod \"community-operators-5bnm5\" (UID: \"e2eb4f97-64f2-47b2-95dd-79a87819c6fd\") " pod="openshift-marketplace/community-operators-5bnm5" Feb 19 15:37:17 crc kubenswrapper[4861]: I0219 15:37:17.597105 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2eb4f97-64f2-47b2-95dd-79a87819c6fd-catalog-content\") pod \"community-operators-5bnm5\" (UID: \"e2eb4f97-64f2-47b2-95dd-79a87819c6fd\") " pod="openshift-marketplace/community-operators-5bnm5" Feb 19 15:37:17 crc kubenswrapper[4861]: I0219 15:37:17.618207 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-p452g\" (UniqueName: \"kubernetes.io/projected/e2eb4f97-64f2-47b2-95dd-79a87819c6fd-kube-api-access-p452g\") pod \"community-operators-5bnm5\" (UID: \"e2eb4f97-64f2-47b2-95dd-79a87819c6fd\") " pod="openshift-marketplace/community-operators-5bnm5" Feb 19 15:37:17 crc kubenswrapper[4861]: I0219 15:37:17.702658 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5bnm5" Feb 19 15:37:18 crc kubenswrapper[4861]: I0219 15:37:18.326765 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5bnm5"] Feb 19 15:37:18 crc kubenswrapper[4861]: W0219 15:37:18.327034 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2eb4f97_64f2_47b2_95dd_79a87819c6fd.slice/crio-6df81469a4f2a29b5d7f4881e9ffe3bfd3fb13861fcfa27c76a03ecbab920668 WatchSource:0}: Error finding container 6df81469a4f2a29b5d7f4881e9ffe3bfd3fb13861fcfa27c76a03ecbab920668: Status 404 returned error can't find the container with id 6df81469a4f2a29b5d7f4881e9ffe3bfd3fb13861fcfa27c76a03ecbab920668 Feb 19 15:37:18 crc kubenswrapper[4861]: E0219 15:37:18.767209 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2eb4f97_64f2_47b2_95dd_79a87819c6fd.slice/crio-f5082a403d65d1fe92641582fe15a2e15d30c73222ddf8bc742fb67482318fd8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2eb4f97_64f2_47b2_95dd_79a87819c6fd.slice/crio-conmon-f5082a403d65d1fe92641582fe15a2e15d30c73222ddf8bc742fb67482318fd8.scope\": RecentStats: unable to find data in memory cache]" Feb 19 15:37:19 crc kubenswrapper[4861]: I0219 15:37:19.159733 4861 generic.go:334] "Generic (PLEG): container finished" podID="e2eb4f97-64f2-47b2-95dd-79a87819c6fd" 
containerID="f5082a403d65d1fe92641582fe15a2e15d30c73222ddf8bc742fb67482318fd8" exitCode=0 Feb 19 15:37:19 crc kubenswrapper[4861]: I0219 15:37:19.159918 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bnm5" event={"ID":"e2eb4f97-64f2-47b2-95dd-79a87819c6fd","Type":"ContainerDied","Data":"f5082a403d65d1fe92641582fe15a2e15d30c73222ddf8bc742fb67482318fd8"} Feb 19 15:37:19 crc kubenswrapper[4861]: I0219 15:37:19.160108 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bnm5" event={"ID":"e2eb4f97-64f2-47b2-95dd-79a87819c6fd","Type":"ContainerStarted","Data":"6df81469a4f2a29b5d7f4881e9ffe3bfd3fb13861fcfa27c76a03ecbab920668"} Feb 19 15:37:20 crc kubenswrapper[4861]: I0219 15:37:20.172742 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bnm5" event={"ID":"e2eb4f97-64f2-47b2-95dd-79a87819c6fd","Type":"ContainerStarted","Data":"8754eeab6ceda2ac049923ba78c7e673f8f53d6efa5341e86d23b0090b87c053"} Feb 19 15:37:21 crc kubenswrapper[4861]: I0219 15:37:21.184208 4861 generic.go:334] "Generic (PLEG): container finished" podID="e2eb4f97-64f2-47b2-95dd-79a87819c6fd" containerID="8754eeab6ceda2ac049923ba78c7e673f8f53d6efa5341e86d23b0090b87c053" exitCode=0 Feb 19 15:37:21 crc kubenswrapper[4861]: I0219 15:37:21.184333 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bnm5" event={"ID":"e2eb4f97-64f2-47b2-95dd-79a87819c6fd","Type":"ContainerDied","Data":"8754eeab6ceda2ac049923ba78c7e673f8f53d6efa5341e86d23b0090b87c053"} Feb 19 15:37:22 crc kubenswrapper[4861]: I0219 15:37:22.196957 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bnm5" event={"ID":"e2eb4f97-64f2-47b2-95dd-79a87819c6fd","Type":"ContainerStarted","Data":"d52a86d9f95c3ca3effd7940a300fbc0a7275d9c67ebdc5482de14c05370b01b"} Feb 19 15:37:22 crc 
kubenswrapper[4861]: I0219 15:37:22.223225 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5bnm5" podStartSLOduration=2.822707256 podStartE2EDuration="5.223210142s" podCreationTimestamp="2026-02-19 15:37:17 +0000 UTC" firstStartedPulling="2026-02-19 15:37:19.163746275 +0000 UTC m=+8853.824849503" lastFinishedPulling="2026-02-19 15:37:21.564249121 +0000 UTC m=+8856.225352389" observedRunningTime="2026-02-19 15:37:22.212167806 +0000 UTC m=+8856.873271044" watchObservedRunningTime="2026-02-19 15:37:22.223210142 +0000 UTC m=+8856.884313370" Feb 19 15:37:27 crc kubenswrapper[4861]: I0219 15:37:27.703409 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5bnm5" Feb 19 15:37:27 crc kubenswrapper[4861]: I0219 15:37:27.704159 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5bnm5" Feb 19 15:37:27 crc kubenswrapper[4861]: I0219 15:37:27.781195 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5bnm5" Feb 19 15:37:28 crc kubenswrapper[4861]: I0219 15:37:28.309175 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5bnm5" Feb 19 15:37:28 crc kubenswrapper[4861]: I0219 15:37:28.391764 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5bnm5"] Feb 19 15:37:30 crc kubenswrapper[4861]: I0219 15:37:30.274966 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5bnm5" podUID="e2eb4f97-64f2-47b2-95dd-79a87819c6fd" containerName="registry-server" containerID="cri-o://d52a86d9f95c3ca3effd7940a300fbc0a7275d9c67ebdc5482de14c05370b01b" gracePeriod=2 Feb 19 15:37:30 crc kubenswrapper[4861]: I0219 15:37:30.801997 4861 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5bnm5" Feb 19 15:37:30 crc kubenswrapper[4861]: I0219 15:37:30.924122 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2eb4f97-64f2-47b2-95dd-79a87819c6fd-utilities\") pod \"e2eb4f97-64f2-47b2-95dd-79a87819c6fd\" (UID: \"e2eb4f97-64f2-47b2-95dd-79a87819c6fd\") " Feb 19 15:37:30 crc kubenswrapper[4861]: I0219 15:37:30.924368 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p452g\" (UniqueName: \"kubernetes.io/projected/e2eb4f97-64f2-47b2-95dd-79a87819c6fd-kube-api-access-p452g\") pod \"e2eb4f97-64f2-47b2-95dd-79a87819c6fd\" (UID: \"e2eb4f97-64f2-47b2-95dd-79a87819c6fd\") " Feb 19 15:37:30 crc kubenswrapper[4861]: I0219 15:37:30.924446 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2eb4f97-64f2-47b2-95dd-79a87819c6fd-catalog-content\") pod \"e2eb4f97-64f2-47b2-95dd-79a87819c6fd\" (UID: \"e2eb4f97-64f2-47b2-95dd-79a87819c6fd\") " Feb 19 15:37:30 crc kubenswrapper[4861]: I0219 15:37:30.925066 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2eb4f97-64f2-47b2-95dd-79a87819c6fd-utilities" (OuterVolumeSpecName: "utilities") pod "e2eb4f97-64f2-47b2-95dd-79a87819c6fd" (UID: "e2eb4f97-64f2-47b2-95dd-79a87819c6fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:37:30 crc kubenswrapper[4861]: I0219 15:37:30.933620 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2eb4f97-64f2-47b2-95dd-79a87819c6fd-kube-api-access-p452g" (OuterVolumeSpecName: "kube-api-access-p452g") pod "e2eb4f97-64f2-47b2-95dd-79a87819c6fd" (UID: "e2eb4f97-64f2-47b2-95dd-79a87819c6fd"). 
InnerVolumeSpecName "kube-api-access-p452g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:37:31 crc kubenswrapper[4861]: I0219 15:37:30.992853 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2eb4f97-64f2-47b2-95dd-79a87819c6fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2eb4f97-64f2-47b2-95dd-79a87819c6fd" (UID: "e2eb4f97-64f2-47b2-95dd-79a87819c6fd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:37:31 crc kubenswrapper[4861]: I0219 15:37:31.027016 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2eb4f97-64f2-47b2-95dd-79a87819c6fd-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:37:31 crc kubenswrapper[4861]: I0219 15:37:31.027052 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p452g\" (UniqueName: \"kubernetes.io/projected/e2eb4f97-64f2-47b2-95dd-79a87819c6fd-kube-api-access-p452g\") on node \"crc\" DevicePath \"\"" Feb 19 15:37:31 crc kubenswrapper[4861]: I0219 15:37:31.027063 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2eb4f97-64f2-47b2-95dd-79a87819c6fd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:37:31 crc kubenswrapper[4861]: I0219 15:37:31.287915 4861 generic.go:334] "Generic (PLEG): container finished" podID="e2eb4f97-64f2-47b2-95dd-79a87819c6fd" containerID="d52a86d9f95c3ca3effd7940a300fbc0a7275d9c67ebdc5482de14c05370b01b" exitCode=0 Feb 19 15:37:31 crc kubenswrapper[4861]: I0219 15:37:31.287978 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5bnm5" Feb 19 15:37:31 crc kubenswrapper[4861]: I0219 15:37:31.287980 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bnm5" event={"ID":"e2eb4f97-64f2-47b2-95dd-79a87819c6fd","Type":"ContainerDied","Data":"d52a86d9f95c3ca3effd7940a300fbc0a7275d9c67ebdc5482de14c05370b01b"} Feb 19 15:37:31 crc kubenswrapper[4861]: I0219 15:37:31.288209 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bnm5" event={"ID":"e2eb4f97-64f2-47b2-95dd-79a87819c6fd","Type":"ContainerDied","Data":"6df81469a4f2a29b5d7f4881e9ffe3bfd3fb13861fcfa27c76a03ecbab920668"} Feb 19 15:37:31 crc kubenswrapper[4861]: I0219 15:37:31.288259 4861 scope.go:117] "RemoveContainer" containerID="d52a86d9f95c3ca3effd7940a300fbc0a7275d9c67ebdc5482de14c05370b01b" Feb 19 15:37:31 crc kubenswrapper[4861]: I0219 15:37:31.312636 4861 scope.go:117] "RemoveContainer" containerID="8754eeab6ceda2ac049923ba78c7e673f8f53d6efa5341e86d23b0090b87c053" Feb 19 15:37:31 crc kubenswrapper[4861]: I0219 15:37:31.329541 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5bnm5"] Feb 19 15:37:31 crc kubenswrapper[4861]: I0219 15:37:31.345237 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5bnm5"] Feb 19 15:37:31 crc kubenswrapper[4861]: I0219 15:37:31.355681 4861 scope.go:117] "RemoveContainer" containerID="f5082a403d65d1fe92641582fe15a2e15d30c73222ddf8bc742fb67482318fd8" Feb 19 15:37:31 crc kubenswrapper[4861]: I0219 15:37:31.387492 4861 scope.go:117] "RemoveContainer" containerID="d52a86d9f95c3ca3effd7940a300fbc0a7275d9c67ebdc5482de14c05370b01b" Feb 19 15:37:31 crc kubenswrapper[4861]: E0219 15:37:31.388141 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d52a86d9f95c3ca3effd7940a300fbc0a7275d9c67ebdc5482de14c05370b01b\": container with ID starting with d52a86d9f95c3ca3effd7940a300fbc0a7275d9c67ebdc5482de14c05370b01b not found: ID does not exist" containerID="d52a86d9f95c3ca3effd7940a300fbc0a7275d9c67ebdc5482de14c05370b01b" Feb 19 15:37:31 crc kubenswrapper[4861]: I0219 15:37:31.388188 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d52a86d9f95c3ca3effd7940a300fbc0a7275d9c67ebdc5482de14c05370b01b"} err="failed to get container status \"d52a86d9f95c3ca3effd7940a300fbc0a7275d9c67ebdc5482de14c05370b01b\": rpc error: code = NotFound desc = could not find container \"d52a86d9f95c3ca3effd7940a300fbc0a7275d9c67ebdc5482de14c05370b01b\": container with ID starting with d52a86d9f95c3ca3effd7940a300fbc0a7275d9c67ebdc5482de14c05370b01b not found: ID does not exist" Feb 19 15:37:31 crc kubenswrapper[4861]: I0219 15:37:31.388214 4861 scope.go:117] "RemoveContainer" containerID="8754eeab6ceda2ac049923ba78c7e673f8f53d6efa5341e86d23b0090b87c053" Feb 19 15:37:31 crc kubenswrapper[4861]: E0219 15:37:31.388791 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8754eeab6ceda2ac049923ba78c7e673f8f53d6efa5341e86d23b0090b87c053\": container with ID starting with 8754eeab6ceda2ac049923ba78c7e673f8f53d6efa5341e86d23b0090b87c053 not found: ID does not exist" containerID="8754eeab6ceda2ac049923ba78c7e673f8f53d6efa5341e86d23b0090b87c053" Feb 19 15:37:31 crc kubenswrapper[4861]: I0219 15:37:31.388851 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8754eeab6ceda2ac049923ba78c7e673f8f53d6efa5341e86d23b0090b87c053"} err="failed to get container status \"8754eeab6ceda2ac049923ba78c7e673f8f53d6efa5341e86d23b0090b87c053\": rpc error: code = NotFound desc = could not find container \"8754eeab6ceda2ac049923ba78c7e673f8f53d6efa5341e86d23b0090b87c053\": container with ID 
starting with 8754eeab6ceda2ac049923ba78c7e673f8f53d6efa5341e86d23b0090b87c053 not found: ID does not exist" Feb 19 15:37:31 crc kubenswrapper[4861]: I0219 15:37:31.388887 4861 scope.go:117] "RemoveContainer" containerID="f5082a403d65d1fe92641582fe15a2e15d30c73222ddf8bc742fb67482318fd8" Feb 19 15:37:31 crc kubenswrapper[4861]: E0219 15:37:31.389165 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5082a403d65d1fe92641582fe15a2e15d30c73222ddf8bc742fb67482318fd8\": container with ID starting with f5082a403d65d1fe92641582fe15a2e15d30c73222ddf8bc742fb67482318fd8 not found: ID does not exist" containerID="f5082a403d65d1fe92641582fe15a2e15d30c73222ddf8bc742fb67482318fd8" Feb 19 15:37:31 crc kubenswrapper[4861]: I0219 15:37:31.389190 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5082a403d65d1fe92641582fe15a2e15d30c73222ddf8bc742fb67482318fd8"} err="failed to get container status \"f5082a403d65d1fe92641582fe15a2e15d30c73222ddf8bc742fb67482318fd8\": rpc error: code = NotFound desc = could not find container \"f5082a403d65d1fe92641582fe15a2e15d30c73222ddf8bc742fb67482318fd8\": container with ID starting with f5082a403d65d1fe92641582fe15a2e15d30c73222ddf8bc742fb67482318fd8 not found: ID does not exist" Feb 19 15:37:31 crc kubenswrapper[4861]: I0219 15:37:31.990911 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2eb4f97-64f2-47b2-95dd-79a87819c6fd" path="/var/lib/kubelet/pods/e2eb4f97-64f2-47b2-95dd-79a87819c6fd/volumes" Feb 19 15:37:33 crc kubenswrapper[4861]: I0219 15:37:33.833716 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:37:33 crc kubenswrapper[4861]: I0219 
15:37:33.834016 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:38:03 crc kubenswrapper[4861]: I0219 15:38:03.834710 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:38:03 crc kubenswrapper[4861]: I0219 15:38:03.835377 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:38:03 crc kubenswrapper[4861]: I0219 15:38:03.835447 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 15:38:03 crc kubenswrapper[4861]: I0219 15:38:03.836928 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dedc132be9003463e866947cd72bcece5b8e065cf48504d818619290c187ad83"} pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 15:38:03 crc kubenswrapper[4861]: I0219 15:38:03.837035 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" 
containerName="machine-config-daemon" containerID="cri-o://dedc132be9003463e866947cd72bcece5b8e065cf48504d818619290c187ad83" gracePeriod=600 Feb 19 15:38:03 crc kubenswrapper[4861]: E0219 15:38:03.960259 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:38:04 crc kubenswrapper[4861]: I0219 15:38:04.658961 4861 generic.go:334] "Generic (PLEG): container finished" podID="478e6971-05ac-43f2-99a2-cd93644c6227" containerID="dedc132be9003463e866947cd72bcece5b8e065cf48504d818619290c187ad83" exitCode=0 Feb 19 15:38:04 crc kubenswrapper[4861]: I0219 15:38:04.659039 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerDied","Data":"dedc132be9003463e866947cd72bcece5b8e065cf48504d818619290c187ad83"} Feb 19 15:38:04 crc kubenswrapper[4861]: I0219 15:38:04.659689 4861 scope.go:117] "RemoveContainer" containerID="42bdaaa988e3d709e147c606ea5831a8547288d81b47645909bdc0769d2fee21" Feb 19 15:38:04 crc kubenswrapper[4861]: I0219 15:38:04.660533 4861 scope.go:117] "RemoveContainer" containerID="dedc132be9003463e866947cd72bcece5b8e065cf48504d818619290c187ad83" Feb 19 15:38:04 crc kubenswrapper[4861]: E0219 15:38:04.660950 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:38:05 crc kubenswrapper[4861]: I0219 15:38:05.225948 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w6wfr"] Feb 19 15:38:05 crc kubenswrapper[4861]: E0219 15:38:05.226639 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2eb4f97-64f2-47b2-95dd-79a87819c6fd" containerName="registry-server" Feb 19 15:38:05 crc kubenswrapper[4861]: I0219 15:38:05.226651 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2eb4f97-64f2-47b2-95dd-79a87819c6fd" containerName="registry-server" Feb 19 15:38:05 crc kubenswrapper[4861]: E0219 15:38:05.226661 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2eb4f97-64f2-47b2-95dd-79a87819c6fd" containerName="extract-content" Feb 19 15:38:05 crc kubenswrapper[4861]: I0219 15:38:05.226667 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2eb4f97-64f2-47b2-95dd-79a87819c6fd" containerName="extract-content" Feb 19 15:38:05 crc kubenswrapper[4861]: E0219 15:38:05.226691 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2eb4f97-64f2-47b2-95dd-79a87819c6fd" containerName="extract-utilities" Feb 19 15:38:05 crc kubenswrapper[4861]: I0219 15:38:05.226700 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2eb4f97-64f2-47b2-95dd-79a87819c6fd" containerName="extract-utilities" Feb 19 15:38:05 crc kubenswrapper[4861]: I0219 15:38:05.228124 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2eb4f97-64f2-47b2-95dd-79a87819c6fd" containerName="registry-server" Feb 19 15:38:05 crc kubenswrapper[4861]: I0219 15:38:05.264117 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w6wfr" Feb 19 15:38:05 crc kubenswrapper[4861]: I0219 15:38:05.330335 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w6wfr"] Feb 19 15:38:05 crc kubenswrapper[4861]: I0219 15:38:05.386699 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c87af9-1ac4-443f-ac4f-7cc56cf1d885-catalog-content\") pod \"redhat-operators-w6wfr\" (UID: \"14c87af9-1ac4-443f-ac4f-7cc56cf1d885\") " pod="openshift-marketplace/redhat-operators-w6wfr" Feb 19 15:38:05 crc kubenswrapper[4861]: I0219 15:38:05.387014 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hldl6\" (UniqueName: \"kubernetes.io/projected/14c87af9-1ac4-443f-ac4f-7cc56cf1d885-kube-api-access-hldl6\") pod \"redhat-operators-w6wfr\" (UID: \"14c87af9-1ac4-443f-ac4f-7cc56cf1d885\") " pod="openshift-marketplace/redhat-operators-w6wfr" Feb 19 15:38:05 crc kubenswrapper[4861]: I0219 15:38:05.387133 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c87af9-1ac4-443f-ac4f-7cc56cf1d885-utilities\") pod \"redhat-operators-w6wfr\" (UID: \"14c87af9-1ac4-443f-ac4f-7cc56cf1d885\") " pod="openshift-marketplace/redhat-operators-w6wfr" Feb 19 15:38:05 crc kubenswrapper[4861]: I0219 15:38:05.489650 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c87af9-1ac4-443f-ac4f-7cc56cf1d885-catalog-content\") pod \"redhat-operators-w6wfr\" (UID: \"14c87af9-1ac4-443f-ac4f-7cc56cf1d885\") " pod="openshift-marketplace/redhat-operators-w6wfr" Feb 19 15:38:05 crc kubenswrapper[4861]: I0219 15:38:05.489776 4861 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-hldl6\" (UniqueName: \"kubernetes.io/projected/14c87af9-1ac4-443f-ac4f-7cc56cf1d885-kube-api-access-hldl6\") pod \"redhat-operators-w6wfr\" (UID: \"14c87af9-1ac4-443f-ac4f-7cc56cf1d885\") " pod="openshift-marketplace/redhat-operators-w6wfr" Feb 19 15:38:05 crc kubenswrapper[4861]: I0219 15:38:05.489840 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c87af9-1ac4-443f-ac4f-7cc56cf1d885-utilities\") pod \"redhat-operators-w6wfr\" (UID: \"14c87af9-1ac4-443f-ac4f-7cc56cf1d885\") " pod="openshift-marketplace/redhat-operators-w6wfr" Feb 19 15:38:05 crc kubenswrapper[4861]: I0219 15:38:05.490376 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c87af9-1ac4-443f-ac4f-7cc56cf1d885-catalog-content\") pod \"redhat-operators-w6wfr\" (UID: \"14c87af9-1ac4-443f-ac4f-7cc56cf1d885\") " pod="openshift-marketplace/redhat-operators-w6wfr" Feb 19 15:38:05 crc kubenswrapper[4861]: I0219 15:38:05.490688 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c87af9-1ac4-443f-ac4f-7cc56cf1d885-utilities\") pod \"redhat-operators-w6wfr\" (UID: \"14c87af9-1ac4-443f-ac4f-7cc56cf1d885\") " pod="openshift-marketplace/redhat-operators-w6wfr" Feb 19 15:38:05 crc kubenswrapper[4861]: I0219 15:38:05.520082 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hldl6\" (UniqueName: \"kubernetes.io/projected/14c87af9-1ac4-443f-ac4f-7cc56cf1d885-kube-api-access-hldl6\") pod \"redhat-operators-w6wfr\" (UID: \"14c87af9-1ac4-443f-ac4f-7cc56cf1d885\") " pod="openshift-marketplace/redhat-operators-w6wfr" Feb 19 15:38:05 crc kubenswrapper[4861]: I0219 15:38:05.598193 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w6wfr" Feb 19 15:38:06 crc kubenswrapper[4861]: I0219 15:38:06.129797 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w6wfr"] Feb 19 15:38:06 crc kubenswrapper[4861]: W0219 15:38:06.133141 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14c87af9_1ac4_443f_ac4f_7cc56cf1d885.slice/crio-df9d08dc64a93bbc09ccd725aac3ae11a2b10dc526856f543835a99ded431800 WatchSource:0}: Error finding container df9d08dc64a93bbc09ccd725aac3ae11a2b10dc526856f543835a99ded431800: Status 404 returned error can't find the container with id df9d08dc64a93bbc09ccd725aac3ae11a2b10dc526856f543835a99ded431800 Feb 19 15:38:06 crc kubenswrapper[4861]: I0219 15:38:06.709509 4861 generic.go:334] "Generic (PLEG): container finished" podID="14c87af9-1ac4-443f-ac4f-7cc56cf1d885" containerID="8745937f56620ac1c6fc5e1889ec42cbe5431d06ac555ec00bc0fd5f5266be84" exitCode=0 Feb 19 15:38:06 crc kubenswrapper[4861]: I0219 15:38:06.709561 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6wfr" event={"ID":"14c87af9-1ac4-443f-ac4f-7cc56cf1d885","Type":"ContainerDied","Data":"8745937f56620ac1c6fc5e1889ec42cbe5431d06ac555ec00bc0fd5f5266be84"} Feb 19 15:38:06 crc kubenswrapper[4861]: I0219 15:38:06.709801 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6wfr" event={"ID":"14c87af9-1ac4-443f-ac4f-7cc56cf1d885","Type":"ContainerStarted","Data":"df9d08dc64a93bbc09ccd725aac3ae11a2b10dc526856f543835a99ded431800"} Feb 19 15:38:07 crc kubenswrapper[4861]: I0219 15:38:07.721003 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6wfr" 
event={"ID":"14c87af9-1ac4-443f-ac4f-7cc56cf1d885","Type":"ContainerStarted","Data":"1d7a3cafaf5dd5c2b0a446e16b406ed4420a215189f675ed5a2e91f57368edb0"} Feb 19 15:38:12 crc kubenswrapper[4861]: I0219 15:38:12.777935 4861 generic.go:334] "Generic (PLEG): container finished" podID="14c87af9-1ac4-443f-ac4f-7cc56cf1d885" containerID="1d7a3cafaf5dd5c2b0a446e16b406ed4420a215189f675ed5a2e91f57368edb0" exitCode=0 Feb 19 15:38:12 crc kubenswrapper[4861]: I0219 15:38:12.778040 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6wfr" event={"ID":"14c87af9-1ac4-443f-ac4f-7cc56cf1d885","Type":"ContainerDied","Data":"1d7a3cafaf5dd5c2b0a446e16b406ed4420a215189f675ed5a2e91f57368edb0"} Feb 19 15:38:13 crc kubenswrapper[4861]: I0219 15:38:13.796043 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6wfr" event={"ID":"14c87af9-1ac4-443f-ac4f-7cc56cf1d885","Type":"ContainerStarted","Data":"2f9b8bd6c4f2482c509cddeaec7e6277b01b495568b84dd5fc0477482ae292ee"} Feb 19 15:38:15 crc kubenswrapper[4861]: I0219 15:38:15.598882 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w6wfr" Feb 19 15:38:15 crc kubenswrapper[4861]: I0219 15:38:15.599310 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w6wfr" Feb 19 15:38:16 crc kubenswrapper[4861]: I0219 15:38:16.673388 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w6wfr" podUID="14c87af9-1ac4-443f-ac4f-7cc56cf1d885" containerName="registry-server" probeResult="failure" output=< Feb 19 15:38:16 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Feb 19 15:38:16 crc kubenswrapper[4861]: > Feb 19 15:38:16 crc kubenswrapper[4861]: I0219 15:38:16.977306 4861 scope.go:117] "RemoveContainer" 
containerID="dedc132be9003463e866947cd72bcece5b8e065cf48504d818619290c187ad83" Feb 19 15:38:16 crc kubenswrapper[4861]: E0219 15:38:16.977722 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:38:25 crc kubenswrapper[4861]: I0219 15:38:25.679646 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w6wfr" Feb 19 15:38:25 crc kubenswrapper[4861]: I0219 15:38:25.714985 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w6wfr" podStartSLOduration=14.154450448 podStartE2EDuration="20.714969176s" podCreationTimestamp="2026-02-19 15:38:05 +0000 UTC" firstStartedPulling="2026-02-19 15:38:06.711955858 +0000 UTC m=+8901.373059086" lastFinishedPulling="2026-02-19 15:38:13.272474586 +0000 UTC m=+8907.933577814" observedRunningTime="2026-02-19 15:38:13.818945473 +0000 UTC m=+8908.480048741" watchObservedRunningTime="2026-02-19 15:38:25.714969176 +0000 UTC m=+8920.376072404" Feb 19 15:38:25 crc kubenswrapper[4861]: I0219 15:38:25.778045 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w6wfr" Feb 19 15:38:25 crc kubenswrapper[4861]: I0219 15:38:25.933444 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w6wfr"] Feb 19 15:38:26 crc kubenswrapper[4861]: I0219 15:38:26.916693 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w6wfr" podUID="14c87af9-1ac4-443f-ac4f-7cc56cf1d885" 
containerName="registry-server" containerID="cri-o://2f9b8bd6c4f2482c509cddeaec7e6277b01b495568b84dd5fc0477482ae292ee" gracePeriod=2 Feb 19 15:38:27 crc kubenswrapper[4861]: I0219 15:38:27.389313 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w6wfr" Feb 19 15:38:27 crc kubenswrapper[4861]: I0219 15:38:27.561335 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hldl6\" (UniqueName: \"kubernetes.io/projected/14c87af9-1ac4-443f-ac4f-7cc56cf1d885-kube-api-access-hldl6\") pod \"14c87af9-1ac4-443f-ac4f-7cc56cf1d885\" (UID: \"14c87af9-1ac4-443f-ac4f-7cc56cf1d885\") " Feb 19 15:38:27 crc kubenswrapper[4861]: I0219 15:38:27.561399 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c87af9-1ac4-443f-ac4f-7cc56cf1d885-utilities\") pod \"14c87af9-1ac4-443f-ac4f-7cc56cf1d885\" (UID: \"14c87af9-1ac4-443f-ac4f-7cc56cf1d885\") " Feb 19 15:38:27 crc kubenswrapper[4861]: I0219 15:38:27.561500 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c87af9-1ac4-443f-ac4f-7cc56cf1d885-catalog-content\") pod \"14c87af9-1ac4-443f-ac4f-7cc56cf1d885\" (UID: \"14c87af9-1ac4-443f-ac4f-7cc56cf1d885\") " Feb 19 15:38:27 crc kubenswrapper[4861]: I0219 15:38:27.564457 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14c87af9-1ac4-443f-ac4f-7cc56cf1d885-utilities" (OuterVolumeSpecName: "utilities") pod "14c87af9-1ac4-443f-ac4f-7cc56cf1d885" (UID: "14c87af9-1ac4-443f-ac4f-7cc56cf1d885"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:38:27 crc kubenswrapper[4861]: I0219 15:38:27.580869 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14c87af9-1ac4-443f-ac4f-7cc56cf1d885-kube-api-access-hldl6" (OuterVolumeSpecName: "kube-api-access-hldl6") pod "14c87af9-1ac4-443f-ac4f-7cc56cf1d885" (UID: "14c87af9-1ac4-443f-ac4f-7cc56cf1d885"). InnerVolumeSpecName "kube-api-access-hldl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:38:27 crc kubenswrapper[4861]: I0219 15:38:27.663959 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c87af9-1ac4-443f-ac4f-7cc56cf1d885-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:38:27 crc kubenswrapper[4861]: I0219 15:38:27.663999 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hldl6\" (UniqueName: \"kubernetes.io/projected/14c87af9-1ac4-443f-ac4f-7cc56cf1d885-kube-api-access-hldl6\") on node \"crc\" DevicePath \"\"" Feb 19 15:38:27 crc kubenswrapper[4861]: I0219 15:38:27.700088 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14c87af9-1ac4-443f-ac4f-7cc56cf1d885-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14c87af9-1ac4-443f-ac4f-7cc56cf1d885" (UID: "14c87af9-1ac4-443f-ac4f-7cc56cf1d885"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:38:27 crc kubenswrapper[4861]: I0219 15:38:27.765851 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c87af9-1ac4-443f-ac4f-7cc56cf1d885-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:38:27 crc kubenswrapper[4861]: I0219 15:38:27.927375 4861 generic.go:334] "Generic (PLEG): container finished" podID="14c87af9-1ac4-443f-ac4f-7cc56cf1d885" containerID="2f9b8bd6c4f2482c509cddeaec7e6277b01b495568b84dd5fc0477482ae292ee" exitCode=0 Feb 19 15:38:27 crc kubenswrapper[4861]: I0219 15:38:27.927445 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6wfr" event={"ID":"14c87af9-1ac4-443f-ac4f-7cc56cf1d885","Type":"ContainerDied","Data":"2f9b8bd6c4f2482c509cddeaec7e6277b01b495568b84dd5fc0477482ae292ee"} Feb 19 15:38:27 crc kubenswrapper[4861]: I0219 15:38:27.927475 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6wfr" event={"ID":"14c87af9-1ac4-443f-ac4f-7cc56cf1d885","Type":"ContainerDied","Data":"df9d08dc64a93bbc09ccd725aac3ae11a2b10dc526856f543835a99ded431800"} Feb 19 15:38:27 crc kubenswrapper[4861]: I0219 15:38:27.927492 4861 scope.go:117] "RemoveContainer" containerID="2f9b8bd6c4f2482c509cddeaec7e6277b01b495568b84dd5fc0477482ae292ee" Feb 19 15:38:27 crc kubenswrapper[4861]: I0219 15:38:27.927639 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w6wfr" Feb 19 15:38:27 crc kubenswrapper[4861]: I0219 15:38:27.971560 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w6wfr"] Feb 19 15:38:27 crc kubenswrapper[4861]: I0219 15:38:27.982115 4861 scope.go:117] "RemoveContainer" containerID="1d7a3cafaf5dd5c2b0a446e16b406ed4420a215189f675ed5a2e91f57368edb0" Feb 19 15:38:28 crc kubenswrapper[4861]: I0219 15:38:28.010779 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w6wfr"] Feb 19 15:38:28 crc kubenswrapper[4861]: I0219 15:38:28.014837 4861 scope.go:117] "RemoveContainer" containerID="8745937f56620ac1c6fc5e1889ec42cbe5431d06ac555ec00bc0fd5f5266be84" Feb 19 15:38:28 crc kubenswrapper[4861]: I0219 15:38:28.069735 4861 scope.go:117] "RemoveContainer" containerID="2f9b8bd6c4f2482c509cddeaec7e6277b01b495568b84dd5fc0477482ae292ee" Feb 19 15:38:28 crc kubenswrapper[4861]: E0219 15:38:28.070169 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f9b8bd6c4f2482c509cddeaec7e6277b01b495568b84dd5fc0477482ae292ee\": container with ID starting with 2f9b8bd6c4f2482c509cddeaec7e6277b01b495568b84dd5fc0477482ae292ee not found: ID does not exist" containerID="2f9b8bd6c4f2482c509cddeaec7e6277b01b495568b84dd5fc0477482ae292ee" Feb 19 15:38:28 crc kubenswrapper[4861]: I0219 15:38:28.070212 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f9b8bd6c4f2482c509cddeaec7e6277b01b495568b84dd5fc0477482ae292ee"} err="failed to get container status \"2f9b8bd6c4f2482c509cddeaec7e6277b01b495568b84dd5fc0477482ae292ee\": rpc error: code = NotFound desc = could not find container \"2f9b8bd6c4f2482c509cddeaec7e6277b01b495568b84dd5fc0477482ae292ee\": container with ID starting with 2f9b8bd6c4f2482c509cddeaec7e6277b01b495568b84dd5fc0477482ae292ee not found: ID does 
not exist" Feb 19 15:38:28 crc kubenswrapper[4861]: I0219 15:38:28.070239 4861 scope.go:117] "RemoveContainer" containerID="1d7a3cafaf5dd5c2b0a446e16b406ed4420a215189f675ed5a2e91f57368edb0" Feb 19 15:38:28 crc kubenswrapper[4861]: E0219 15:38:28.070807 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d7a3cafaf5dd5c2b0a446e16b406ed4420a215189f675ed5a2e91f57368edb0\": container with ID starting with 1d7a3cafaf5dd5c2b0a446e16b406ed4420a215189f675ed5a2e91f57368edb0 not found: ID does not exist" containerID="1d7a3cafaf5dd5c2b0a446e16b406ed4420a215189f675ed5a2e91f57368edb0" Feb 19 15:38:28 crc kubenswrapper[4861]: I0219 15:38:28.071142 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d7a3cafaf5dd5c2b0a446e16b406ed4420a215189f675ed5a2e91f57368edb0"} err="failed to get container status \"1d7a3cafaf5dd5c2b0a446e16b406ed4420a215189f675ed5a2e91f57368edb0\": rpc error: code = NotFound desc = could not find container \"1d7a3cafaf5dd5c2b0a446e16b406ed4420a215189f675ed5a2e91f57368edb0\": container with ID starting with 1d7a3cafaf5dd5c2b0a446e16b406ed4420a215189f675ed5a2e91f57368edb0 not found: ID does not exist" Feb 19 15:38:28 crc kubenswrapper[4861]: I0219 15:38:28.071337 4861 scope.go:117] "RemoveContainer" containerID="8745937f56620ac1c6fc5e1889ec42cbe5431d06ac555ec00bc0fd5f5266be84" Feb 19 15:38:28 crc kubenswrapper[4861]: E0219 15:38:28.074808 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8745937f56620ac1c6fc5e1889ec42cbe5431d06ac555ec00bc0fd5f5266be84\": container with ID starting with 8745937f56620ac1c6fc5e1889ec42cbe5431d06ac555ec00bc0fd5f5266be84 not found: ID does not exist" containerID="8745937f56620ac1c6fc5e1889ec42cbe5431d06ac555ec00bc0fd5f5266be84" Feb 19 15:38:28 crc kubenswrapper[4861]: I0219 15:38:28.074857 4861 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8745937f56620ac1c6fc5e1889ec42cbe5431d06ac555ec00bc0fd5f5266be84"} err="failed to get container status \"8745937f56620ac1c6fc5e1889ec42cbe5431d06ac555ec00bc0fd5f5266be84\": rpc error: code = NotFound desc = could not find container \"8745937f56620ac1c6fc5e1889ec42cbe5431d06ac555ec00bc0fd5f5266be84\": container with ID starting with 8745937f56620ac1c6fc5e1889ec42cbe5431d06ac555ec00bc0fd5f5266be84 not found: ID does not exist" Feb 19 15:38:28 crc kubenswrapper[4861]: I0219 15:38:28.977021 4861 scope.go:117] "RemoveContainer" containerID="dedc132be9003463e866947cd72bcece5b8e065cf48504d818619290c187ad83" Feb 19 15:38:28 crc kubenswrapper[4861]: E0219 15:38:28.977591 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:38:29 crc kubenswrapper[4861]: I0219 15:38:29.996634 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14c87af9-1ac4-443f-ac4f-7cc56cf1d885" path="/var/lib/kubelet/pods/14c87af9-1ac4-443f-ac4f-7cc56cf1d885/volumes" Feb 19 15:38:43 crc kubenswrapper[4861]: I0219 15:38:43.977287 4861 scope.go:117] "RemoveContainer" containerID="dedc132be9003463e866947cd72bcece5b8e065cf48504d818619290c187ad83" Feb 19 15:38:43 crc kubenswrapper[4861]: E0219 15:38:43.978223 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:38:58 crc kubenswrapper[4861]: I0219 15:38:58.977252 4861 scope.go:117] "RemoveContainer" containerID="dedc132be9003463e866947cd72bcece5b8e065cf48504d818619290c187ad83" Feb 19 15:38:58 crc kubenswrapper[4861]: E0219 15:38:58.978672 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:39:10 crc kubenswrapper[4861]: I0219 15:39:10.978200 4861 scope.go:117] "RemoveContainer" containerID="dedc132be9003463e866947cd72bcece5b8e065cf48504d818619290c187ad83" Feb 19 15:39:10 crc kubenswrapper[4861]: E0219 15:39:10.979485 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:39:24 crc kubenswrapper[4861]: I0219 15:39:24.976984 4861 scope.go:117] "RemoveContainer" containerID="dedc132be9003463e866947cd72bcece5b8e065cf48504d818619290c187ad83" Feb 19 15:39:24 crc kubenswrapper[4861]: E0219 15:39:24.977774 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:39:32 crc kubenswrapper[4861]: I0219 15:39:32.692281 4861 generic.go:334] "Generic (PLEG): container finished" podID="1ba6ecf0-8541-46e2-b17e-46cc3491f870" containerID="b5065d23681ebada6b990844c1f7081ff013f7e87e0e6ba5b83cacfb267dc8c8" exitCode=0 Feb 19 15:39:32 crc kubenswrapper[4861]: I0219 15:39:32.692397 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" event={"ID":"1ba6ecf0-8541-46e2-b17e-46cc3491f870","Type":"ContainerDied","Data":"b5065d23681ebada6b990844c1f7081ff013f7e87e0e6ba5b83cacfb267dc8c8"} Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.171544 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.324392 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-migration-ssh-key-1\") pod \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.324462 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cells-global-config-0\") pod \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.324503 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-inventory\") pod \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.324549 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cell1-compute-config-2\") pod \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.324598 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9s82l\" (UniqueName: \"kubernetes.io/projected/1ba6ecf0-8541-46e2-b17e-46cc3491f870-kube-api-access-9s82l\") pod \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.324616 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-migration-ssh-key-0\") pod \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.324712 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cell1-combined-ca-bundle\") pod \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.324738 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-ssh-key-openstack-cell1\") pod \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\" (UID: 
\"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.324807 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cell1-compute-config-0\") pod \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.324878 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cell1-compute-config-1\") pod \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.324898 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cell1-compute-config-3\") pod \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\" (UID: \"1ba6ecf0-8541-46e2-b17e-46cc3491f870\") " Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.343536 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "1ba6ecf0-8541-46e2-b17e-46cc3491f870" (UID: "1ba6ecf0-8541-46e2-b17e-46cc3491f870"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.345946 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ba6ecf0-8541-46e2-b17e-46cc3491f870-kube-api-access-9s82l" (OuterVolumeSpecName: "kube-api-access-9s82l") pod "1ba6ecf0-8541-46e2-b17e-46cc3491f870" (UID: "1ba6ecf0-8541-46e2-b17e-46cc3491f870"). InnerVolumeSpecName "kube-api-access-9s82l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.356707 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "1ba6ecf0-8541-46e2-b17e-46cc3491f870" (UID: "1ba6ecf0-8541-46e2-b17e-46cc3491f870"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.361195 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "1ba6ecf0-8541-46e2-b17e-46cc3491f870" (UID: "1ba6ecf0-8541-46e2-b17e-46cc3491f870"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.363237 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "1ba6ecf0-8541-46e2-b17e-46cc3491f870" (UID: "1ba6ecf0-8541-46e2-b17e-46cc3491f870"). InnerVolumeSpecName "nova-cells-global-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.364252 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "1ba6ecf0-8541-46e2-b17e-46cc3491f870" (UID: "1ba6ecf0-8541-46e2-b17e-46cc3491f870"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.364625 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "1ba6ecf0-8541-46e2-b17e-46cc3491f870" (UID: "1ba6ecf0-8541-46e2-b17e-46cc3491f870"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.368847 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "1ba6ecf0-8541-46e2-b17e-46cc3491f870" (UID: "1ba6ecf0-8541-46e2-b17e-46cc3491f870"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.369400 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "1ba6ecf0-8541-46e2-b17e-46cc3491f870" (UID: "1ba6ecf0-8541-46e2-b17e-46cc3491f870"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.372562 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-inventory" (OuterVolumeSpecName: "inventory") pod "1ba6ecf0-8541-46e2-b17e-46cc3491f870" (UID: "1ba6ecf0-8541-46e2-b17e-46cc3491f870"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.377094 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "1ba6ecf0-8541-46e2-b17e-46cc3491f870" (UID: "1ba6ecf0-8541-46e2-b17e-46cc3491f870"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.427605 4861 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.427638 4861 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.427651 4861 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.427662 4861 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.427673 4861 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.427684 4861 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.427694 4861 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.427706 4861 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.427716 4861 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.427725 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9s82l\" (UniqueName: \"kubernetes.io/projected/1ba6ecf0-8541-46e2-b17e-46cc3491f870-kube-api-access-9s82l\") on node \"crc\" DevicePath \"\"" Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.427736 4861 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/1ba6ecf0-8541-46e2-b17e-46cc3491f870-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.721694 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" event={"ID":"1ba6ecf0-8541-46e2-b17e-46cc3491f870","Type":"ContainerDied","Data":"aeab29b3d7466d8888e3ef765712bd45ba68fed39d05085a5f6bca50f402dac6"} Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.722074 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aeab29b3d7466d8888e3ef765712bd45ba68fed39d05085a5f6bca50f402dac6" Feb 19 15:39:34 crc kubenswrapper[4861]: I0219 15:39:34.721748 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn" Feb 19 15:39:39 crc kubenswrapper[4861]: I0219 15:39:39.977466 4861 scope.go:117] "RemoveContainer" containerID="dedc132be9003463e866947cd72bcece5b8e065cf48504d818619290c187ad83" Feb 19 15:39:39 crc kubenswrapper[4861]: E0219 15:39:39.978452 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:39:50 crc kubenswrapper[4861]: I0219 15:39:50.976835 4861 scope.go:117] "RemoveContainer" containerID="dedc132be9003463e866947cd72bcece5b8e065cf48504d818619290c187ad83" Feb 19 15:39:50 crc kubenswrapper[4861]: E0219 15:39:50.977804 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:40:04 crc kubenswrapper[4861]: I0219 15:40:04.977502 4861 scope.go:117] "RemoveContainer" containerID="dedc132be9003463e866947cd72bcece5b8e065cf48504d818619290c187ad83" Feb 19 15:40:04 crc kubenswrapper[4861]: E0219 15:40:04.978597 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:40:18 crc kubenswrapper[4861]: I0219 15:40:18.978270 4861 scope.go:117] "RemoveContainer" containerID="dedc132be9003463e866947cd72bcece5b8e065cf48504d818619290c187ad83" Feb 19 15:40:18 crc kubenswrapper[4861]: E0219 15:40:18.979232 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:40:33 crc kubenswrapper[4861]: I0219 15:40:33.977199 4861 scope.go:117] "RemoveContainer" containerID="dedc132be9003463e866947cd72bcece5b8e065cf48504d818619290c187ad83" Feb 19 15:40:33 crc kubenswrapper[4861]: E0219 15:40:33.978098 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:40:45 crc kubenswrapper[4861]: I0219 15:40:45.980617 4861 scope.go:117] "RemoveContainer" containerID="dedc132be9003463e866947cd72bcece5b8e065cf48504d818619290c187ad83" Feb 19 15:40:45 crc kubenswrapper[4861]: E0219 15:40:45.981469 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:41:00 crc kubenswrapper[4861]: I0219 15:41:00.977548 4861 scope.go:117] "RemoveContainer" containerID="dedc132be9003463e866947cd72bcece5b8e065cf48504d818619290c187ad83" Feb 19 15:41:00 crc kubenswrapper[4861]: E0219 15:41:00.978314 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:41:14 crc kubenswrapper[4861]: I0219 15:41:14.977163 4861 scope.go:117] "RemoveContainer" containerID="dedc132be9003463e866947cd72bcece5b8e065cf48504d818619290c187ad83" Feb 19 15:41:14 crc kubenswrapper[4861]: E0219 15:41:14.978019 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:41:25 crc kubenswrapper[4861]: I0219 15:41:25.978194 4861 scope.go:117] "RemoveContainer" containerID="dedc132be9003463e866947cd72bcece5b8e065cf48504d818619290c187ad83" Feb 19 15:41:25 crc kubenswrapper[4861]: E0219 15:41:25.980251 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:41:38 crc kubenswrapper[4861]: I0219 15:41:38.977311 4861 scope.go:117] "RemoveContainer" containerID="dedc132be9003463e866947cd72bcece5b8e065cf48504d818619290c187ad83" Feb 19 15:41:38 crc kubenswrapper[4861]: E0219 15:41:38.978045 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:41:47 crc kubenswrapper[4861]: E0219 15:41:47.868091 4861 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.177:38118->38.102.83.177:44601: write tcp 38.102.83.177:38118->38.102.83.177:44601: write: broken pipe Feb 19 15:41:52 crc kubenswrapper[4861]: I0219 15:41:52.977104 4861 scope.go:117] 
"RemoveContainer" containerID="dedc132be9003463e866947cd72bcece5b8e065cf48504d818619290c187ad83" Feb 19 15:41:52 crc kubenswrapper[4861]: E0219 15:41:52.978170 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:42:03 crc kubenswrapper[4861]: I0219 15:42:03.977980 4861 scope.go:117] "RemoveContainer" containerID="dedc132be9003463e866947cd72bcece5b8e065cf48504d818619290c187ad83" Feb 19 15:42:03 crc kubenswrapper[4861]: E0219 15:42:03.979068 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:42:17 crc kubenswrapper[4861]: I0219 15:42:17.977666 4861 scope.go:117] "RemoveContainer" containerID="dedc132be9003463e866947cd72bcece5b8e065cf48504d818619290c187ad83" Feb 19 15:42:17 crc kubenswrapper[4861]: E0219 15:42:17.978687 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:42:21 crc kubenswrapper[4861]: I0219 15:42:21.280496 
4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Feb 19 15:42:21 crc kubenswrapper[4861]: I0219 15:42:21.281088 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-copy-data" podUID="fd536e7b-cca2-47da-948a-629b72856c4b" containerName="adoption" containerID="cri-o://43c87226c6f9ea9f69cb129d6a87d787629d9bb13ae26f934ee9bbe4f9c8f241" gracePeriod=30 Feb 19 15:42:31 crc kubenswrapper[4861]: I0219 15:42:31.978147 4861 scope.go:117] "RemoveContainer" containerID="dedc132be9003463e866947cd72bcece5b8e065cf48504d818619290c187ad83" Feb 19 15:42:31 crc kubenswrapper[4861]: E0219 15:42:31.979898 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:42:42 crc kubenswrapper[4861]: I0219 15:42:42.977813 4861 scope.go:117] "RemoveContainer" containerID="dedc132be9003463e866947cd72bcece5b8e065cf48504d818619290c187ad83" Feb 19 15:42:42 crc kubenswrapper[4861]: E0219 15:42:42.978598 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:42:51 crc kubenswrapper[4861]: I0219 15:42:51.835289 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Feb 19 15:42:51 crc kubenswrapper[4861]: I0219 15:42:51.859692 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gbqd\" (UniqueName: \"kubernetes.io/projected/fd536e7b-cca2-47da-948a-629b72856c4b-kube-api-access-6gbqd\") pod \"fd536e7b-cca2-47da-948a-629b72856c4b\" (UID: \"fd536e7b-cca2-47da-948a-629b72856c4b\") " Feb 19 15:42:51 crc kubenswrapper[4861]: I0219 15:42:51.860507 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mariadb-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-554ff08e-f895-4592-bfab-e235a569c035\") pod \"fd536e7b-cca2-47da-948a-629b72856c4b\" (UID: \"fd536e7b-cca2-47da-948a-629b72856c4b\") " Feb 19 15:42:51 crc kubenswrapper[4861]: I0219 15:42:51.877886 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd536e7b-cca2-47da-948a-629b72856c4b-kube-api-access-6gbqd" (OuterVolumeSpecName: "kube-api-access-6gbqd") pod "fd536e7b-cca2-47da-948a-629b72856c4b" (UID: "fd536e7b-cca2-47da-948a-629b72856c4b"). InnerVolumeSpecName "kube-api-access-6gbqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:42:51 crc kubenswrapper[4861]: I0219 15:42:51.912582 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-554ff08e-f895-4592-bfab-e235a569c035" (OuterVolumeSpecName: "mariadb-data") pod "fd536e7b-cca2-47da-948a-629b72856c4b" (UID: "fd536e7b-cca2-47da-948a-629b72856c4b"). InnerVolumeSpecName "pvc-554ff08e-f895-4592-bfab-e235a569c035". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 15:42:51 crc kubenswrapper[4861]: I0219 15:42:51.962822 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gbqd\" (UniqueName: \"kubernetes.io/projected/fd536e7b-cca2-47da-948a-629b72856c4b-kube-api-access-6gbqd\") on node \"crc\" DevicePath \"\"" Feb 19 15:42:51 crc kubenswrapper[4861]: I0219 15:42:51.962889 4861 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-554ff08e-f895-4592-bfab-e235a569c035\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-554ff08e-f895-4592-bfab-e235a569c035\") on node \"crc\" " Feb 19 15:42:52 crc kubenswrapper[4861]: I0219 15:42:52.004919 4861 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 19 15:42:52 crc kubenswrapper[4861]: I0219 15:42:52.005089 4861 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-554ff08e-f895-4592-bfab-e235a569c035" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-554ff08e-f895-4592-bfab-e235a569c035") on node "crc" Feb 19 15:42:52 crc kubenswrapper[4861]: I0219 15:42:52.065561 4861 reconciler_common.go:293] "Volume detached for volume \"pvc-554ff08e-f895-4592-bfab-e235a569c035\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-554ff08e-f895-4592-bfab-e235a569c035\") on node \"crc\" DevicePath \"\"" Feb 19 15:42:52 crc kubenswrapper[4861]: I0219 15:42:52.160618 4861 generic.go:334] "Generic (PLEG): container finished" podID="fd536e7b-cca2-47da-948a-629b72856c4b" containerID="43c87226c6f9ea9f69cb129d6a87d787629d9bb13ae26f934ee9bbe4f9c8f241" exitCode=137 Feb 19 15:42:52 crc kubenswrapper[4861]: I0219 15:42:52.160659 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" 
event={"ID":"fd536e7b-cca2-47da-948a-629b72856c4b","Type":"ContainerDied","Data":"43c87226c6f9ea9f69cb129d6a87d787629d9bb13ae26f934ee9bbe4f9c8f241"} Feb 19 15:42:52 crc kubenswrapper[4861]: I0219 15:42:52.160688 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"fd536e7b-cca2-47da-948a-629b72856c4b","Type":"ContainerDied","Data":"9c327fde8a28858d15b556f58f0d59aa6185f56f64988f0d820bf88d29a499dd"} Feb 19 15:42:52 crc kubenswrapper[4861]: I0219 15:42:52.160706 4861 scope.go:117] "RemoveContainer" containerID="43c87226c6f9ea9f69cb129d6a87d787629d9bb13ae26f934ee9bbe4f9c8f241" Feb 19 15:42:52 crc kubenswrapper[4861]: I0219 15:42:52.161030 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Feb 19 15:42:52 crc kubenswrapper[4861]: I0219 15:42:52.182785 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Feb 19 15:42:52 crc kubenswrapper[4861]: I0219 15:42:52.189170 4861 scope.go:117] "RemoveContainer" containerID="43c87226c6f9ea9f69cb129d6a87d787629d9bb13ae26f934ee9bbe4f9c8f241" Feb 19 15:42:52 crc kubenswrapper[4861]: E0219 15:42:52.189686 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43c87226c6f9ea9f69cb129d6a87d787629d9bb13ae26f934ee9bbe4f9c8f241\": container with ID starting with 43c87226c6f9ea9f69cb129d6a87d787629d9bb13ae26f934ee9bbe4f9c8f241 not found: ID does not exist" containerID="43c87226c6f9ea9f69cb129d6a87d787629d9bb13ae26f934ee9bbe4f9c8f241" Feb 19 15:42:52 crc kubenswrapper[4861]: I0219 15:42:52.189727 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43c87226c6f9ea9f69cb129d6a87d787629d9bb13ae26f934ee9bbe4f9c8f241"} err="failed to get container status \"43c87226c6f9ea9f69cb129d6a87d787629d9bb13ae26f934ee9bbe4f9c8f241\": rpc error: code = NotFound desc = could not find 
container \"43c87226c6f9ea9f69cb129d6a87d787629d9bb13ae26f934ee9bbe4f9c8f241\": container with ID starting with 43c87226c6f9ea9f69cb129d6a87d787629d9bb13ae26f934ee9bbe4f9c8f241 not found: ID does not exist" Feb 19 15:42:52 crc kubenswrapper[4861]: I0219 15:42:52.192414 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-copy-data"] Feb 19 15:42:52 crc kubenswrapper[4861]: I0219 15:42:52.942876 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Feb 19 15:42:52 crc kubenswrapper[4861]: I0219 15:42:52.943116 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-copy-data" podUID="3adbde3b-2980-403c-a7c5-87b1fd3f6d85" containerName="adoption" containerID="cri-o://7ca65f2afa16e9d13bc5dc86dbd4cf77d45af9a55e0e3fe4c2f860e57b53e73c" gracePeriod=30 Feb 19 15:42:54 crc kubenswrapper[4861]: I0219 15:42:53.999837 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd536e7b-cca2-47da-948a-629b72856c4b" path="/var/lib/kubelet/pods/fd536e7b-cca2-47da-948a-629b72856c4b/volumes" Feb 19 15:42:54 crc kubenswrapper[4861]: I0219 15:42:54.978108 4861 scope.go:117] "RemoveContainer" containerID="dedc132be9003463e866947cd72bcece5b8e065cf48504d818619290c187ad83" Feb 19 15:42:54 crc kubenswrapper[4861]: E0219 15:42:54.978814 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:43:08 crc kubenswrapper[4861]: I0219 15:43:08.978119 4861 scope.go:117] "RemoveContainer" containerID="dedc132be9003463e866947cd72bcece5b8e065cf48504d818619290c187ad83" Feb 19 15:43:09 crc kubenswrapper[4861]: I0219 
15:43:09.369168 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"6e9851b1e819468ec3c6c6e31183942518967daddbe129ad6ba1bd37f958d542"} Feb 19 15:43:23 crc kubenswrapper[4861]: I0219 15:43:23.540571 4861 generic.go:334] "Generic (PLEG): container finished" podID="3adbde3b-2980-403c-a7c5-87b1fd3f6d85" containerID="7ca65f2afa16e9d13bc5dc86dbd4cf77d45af9a55e0e3fe4c2f860e57b53e73c" exitCode=137 Feb 19 15:43:23 crc kubenswrapper[4861]: I0219 15:43:23.540685 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"3adbde3b-2980-403c-a7c5-87b1fd3f6d85","Type":"ContainerDied","Data":"7ca65f2afa16e9d13bc5dc86dbd4cf77d45af9a55e0e3fe4c2f860e57b53e73c"} Feb 19 15:43:23 crc kubenswrapper[4861]: I0219 15:43:23.541194 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"3adbde3b-2980-403c-a7c5-87b1fd3f6d85","Type":"ContainerDied","Data":"d664882b56b3f19acfacd90b6a7f183cf837c4380c268cc4f733cdaa553bdb45"} Feb 19 15:43:23 crc kubenswrapper[4861]: I0219 15:43:23.541216 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d664882b56b3f19acfacd90b6a7f183cf837c4380c268cc4f733cdaa553bdb45" Feb 19 15:43:23 crc kubenswrapper[4861]: I0219 15:43:23.623247 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Feb 19 15:43:23 crc kubenswrapper[4861]: I0219 15:43:23.752135 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5s66k\" (UniqueName: \"kubernetes.io/projected/3adbde3b-2980-403c-a7c5-87b1fd3f6d85-kube-api-access-5s66k\") pod \"3adbde3b-2980-403c-a7c5-87b1fd3f6d85\" (UID: \"3adbde3b-2980-403c-a7c5-87b1fd3f6d85\") " Feb 19 15:43:23 crc kubenswrapper[4861]: I0219 15:43:23.752931 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7b04d999-d2e3-4101-9a80-2b4d5f33e04d\") pod \"3adbde3b-2980-403c-a7c5-87b1fd3f6d85\" (UID: \"3adbde3b-2980-403c-a7c5-87b1fd3f6d85\") " Feb 19 15:43:23 crc kubenswrapper[4861]: I0219 15:43:23.753105 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/3adbde3b-2980-403c-a7c5-87b1fd3f6d85-ovn-data-cert\") pod \"3adbde3b-2980-403c-a7c5-87b1fd3f6d85\" (UID: \"3adbde3b-2980-403c-a7c5-87b1fd3f6d85\") " Feb 19 15:43:23 crc kubenswrapper[4861]: I0219 15:43:23.769233 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3adbde3b-2980-403c-a7c5-87b1fd3f6d85-kube-api-access-5s66k" (OuterVolumeSpecName: "kube-api-access-5s66k") pod "3adbde3b-2980-403c-a7c5-87b1fd3f6d85" (UID: "3adbde3b-2980-403c-a7c5-87b1fd3f6d85"). InnerVolumeSpecName "kube-api-access-5s66k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:43:23 crc kubenswrapper[4861]: I0219 15:43:23.770246 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3adbde3b-2980-403c-a7c5-87b1fd3f6d85-ovn-data-cert" (OuterVolumeSpecName: "ovn-data-cert") pod "3adbde3b-2980-403c-a7c5-87b1fd3f6d85" (UID: "3adbde3b-2980-403c-a7c5-87b1fd3f6d85"). InnerVolumeSpecName "ovn-data-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:43:23 crc kubenswrapper[4861]: I0219 15:43:23.788753 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7b04d999-d2e3-4101-9a80-2b4d5f33e04d" (OuterVolumeSpecName: "ovn-data") pod "3adbde3b-2980-403c-a7c5-87b1fd3f6d85" (UID: "3adbde3b-2980-403c-a7c5-87b1fd3f6d85"). InnerVolumeSpecName "pvc-7b04d999-d2e3-4101-9a80-2b4d5f33e04d". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 15:43:23 crc kubenswrapper[4861]: I0219 15:43:23.855728 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5s66k\" (UniqueName: \"kubernetes.io/projected/3adbde3b-2980-403c-a7c5-87b1fd3f6d85-kube-api-access-5s66k\") on node \"crc\" DevicePath \"\"" Feb 19 15:43:23 crc kubenswrapper[4861]: I0219 15:43:23.856166 4861 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-7b04d999-d2e3-4101-9a80-2b4d5f33e04d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7b04d999-d2e3-4101-9a80-2b4d5f33e04d\") on node \"crc\" " Feb 19 15:43:23 crc kubenswrapper[4861]: I0219 15:43:23.856191 4861 reconciler_common.go:293] "Volume detached for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/3adbde3b-2980-403c-a7c5-87b1fd3f6d85-ovn-data-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:43:23 crc kubenswrapper[4861]: I0219 15:43:23.888496 4861 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 19 15:43:23 crc kubenswrapper[4861]: I0219 15:43:23.888716 4861 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-7b04d999-d2e3-4101-9a80-2b4d5f33e04d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7b04d999-d2e3-4101-9a80-2b4d5f33e04d") on node "crc" Feb 19 15:43:23 crc kubenswrapper[4861]: I0219 15:43:23.958146 4861 reconciler_common.go:293] "Volume detached for volume \"pvc-7b04d999-d2e3-4101-9a80-2b4d5f33e04d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7b04d999-d2e3-4101-9a80-2b4d5f33e04d\") on node \"crc\" DevicePath \"\"" Feb 19 15:43:24 crc kubenswrapper[4861]: I0219 15:43:24.556041 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Feb 19 15:43:24 crc kubenswrapper[4861]: I0219 15:43:24.583702 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Feb 19 15:43:24 crc kubenswrapper[4861]: I0219 15:43:24.592785 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-copy-data"] Feb 19 15:43:26 crc kubenswrapper[4861]: I0219 15:43:26.021897 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3adbde3b-2980-403c-a7c5-87b1fd3f6d85" path="/var/lib/kubelet/pods/3adbde3b-2980-403c-a7c5-87b1fd3f6d85/volumes" Feb 19 15:44:13 crc kubenswrapper[4861]: I0219 15:44:13.422984 4861 scope.go:117] "RemoveContainer" containerID="7ca65f2afa16e9d13bc5dc86dbd4cf77d45af9a55e0e3fe4c2f860e57b53e73c" Feb 19 15:44:30 crc kubenswrapper[4861]: I0219 15:44:30.319291 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-74ltx/must-gather-sfmrm"] Feb 19 15:44:30 crc kubenswrapper[4861]: E0219 15:44:30.320328 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd536e7b-cca2-47da-948a-629b72856c4b" containerName="adoption" Feb 19 15:44:30 crc kubenswrapper[4861]: I0219 15:44:30.320340 4861 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fd536e7b-cca2-47da-948a-629b72856c4b" containerName="adoption" Feb 19 15:44:30 crc kubenswrapper[4861]: E0219 15:44:30.320354 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ba6ecf0-8541-46e2-b17e-46cc3491f870" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Feb 19 15:44:30 crc kubenswrapper[4861]: I0219 15:44:30.320361 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ba6ecf0-8541-46e2-b17e-46cc3491f870" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Feb 19 15:44:30 crc kubenswrapper[4861]: E0219 15:44:30.320375 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c87af9-1ac4-443f-ac4f-7cc56cf1d885" containerName="extract-utilities" Feb 19 15:44:30 crc kubenswrapper[4861]: I0219 15:44:30.320381 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c87af9-1ac4-443f-ac4f-7cc56cf1d885" containerName="extract-utilities" Feb 19 15:44:30 crc kubenswrapper[4861]: E0219 15:44:30.320394 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c87af9-1ac4-443f-ac4f-7cc56cf1d885" containerName="registry-server" Feb 19 15:44:30 crc kubenswrapper[4861]: I0219 15:44:30.320399 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c87af9-1ac4-443f-ac4f-7cc56cf1d885" containerName="registry-server" Feb 19 15:44:30 crc kubenswrapper[4861]: E0219 15:44:30.320439 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3adbde3b-2980-403c-a7c5-87b1fd3f6d85" containerName="adoption" Feb 19 15:44:30 crc kubenswrapper[4861]: I0219 15:44:30.320445 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3adbde3b-2980-403c-a7c5-87b1fd3f6d85" containerName="adoption" Feb 19 15:44:30 crc kubenswrapper[4861]: E0219 15:44:30.320459 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c87af9-1ac4-443f-ac4f-7cc56cf1d885" containerName="extract-content" Feb 19 15:44:30 crc kubenswrapper[4861]: I0219 
15:44:30.320465 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c87af9-1ac4-443f-ac4f-7cc56cf1d885" containerName="extract-content" Feb 19 15:44:30 crc kubenswrapper[4861]: I0219 15:44:30.320668 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd536e7b-cca2-47da-948a-629b72856c4b" containerName="adoption" Feb 19 15:44:30 crc kubenswrapper[4861]: I0219 15:44:30.320686 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ba6ecf0-8541-46e2-b17e-46cc3491f870" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Feb 19 15:44:30 crc kubenswrapper[4861]: I0219 15:44:30.320699 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="3adbde3b-2980-403c-a7c5-87b1fd3f6d85" containerName="adoption" Feb 19 15:44:30 crc kubenswrapper[4861]: I0219 15:44:30.320719 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c87af9-1ac4-443f-ac4f-7cc56cf1d885" containerName="registry-server" Feb 19 15:44:30 crc kubenswrapper[4861]: I0219 15:44:30.321709 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-74ltx/must-gather-sfmrm" Feb 19 15:44:30 crc kubenswrapper[4861]: I0219 15:44:30.331240 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrlv7\" (UniqueName: \"kubernetes.io/projected/8f3e4835-beac-4e72-bfc0-428f7163bd7a-kube-api-access-mrlv7\") pod \"must-gather-sfmrm\" (UID: \"8f3e4835-beac-4e72-bfc0-428f7163bd7a\") " pod="openshift-must-gather-74ltx/must-gather-sfmrm" Feb 19 15:44:30 crc kubenswrapper[4861]: I0219 15:44:30.331304 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8f3e4835-beac-4e72-bfc0-428f7163bd7a-must-gather-output\") pod \"must-gather-sfmrm\" (UID: \"8f3e4835-beac-4e72-bfc0-428f7163bd7a\") " pod="openshift-must-gather-74ltx/must-gather-sfmrm" Feb 19 15:44:30 crc kubenswrapper[4861]: I0219 15:44:30.336704 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-74ltx"/"openshift-service-ca.crt" Feb 19 15:44:30 crc kubenswrapper[4861]: I0219 15:44:30.336878 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-74ltx"/"default-dockercfg-wcfpk" Feb 19 15:44:30 crc kubenswrapper[4861]: I0219 15:44:30.340401 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-74ltx"/"kube-root-ca.crt" Feb 19 15:44:30 crc kubenswrapper[4861]: I0219 15:44:30.407729 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-74ltx/must-gather-sfmrm"] Feb 19 15:44:30 crc kubenswrapper[4861]: I0219 15:44:30.442209 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrlv7\" (UniqueName: \"kubernetes.io/projected/8f3e4835-beac-4e72-bfc0-428f7163bd7a-kube-api-access-mrlv7\") pod \"must-gather-sfmrm\" (UID: \"8f3e4835-beac-4e72-bfc0-428f7163bd7a\") " 
pod="openshift-must-gather-74ltx/must-gather-sfmrm" Feb 19 15:44:30 crc kubenswrapper[4861]: I0219 15:44:30.442270 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8f3e4835-beac-4e72-bfc0-428f7163bd7a-must-gather-output\") pod \"must-gather-sfmrm\" (UID: \"8f3e4835-beac-4e72-bfc0-428f7163bd7a\") " pod="openshift-must-gather-74ltx/must-gather-sfmrm" Feb 19 15:44:30 crc kubenswrapper[4861]: I0219 15:44:30.442778 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8f3e4835-beac-4e72-bfc0-428f7163bd7a-must-gather-output\") pod \"must-gather-sfmrm\" (UID: \"8f3e4835-beac-4e72-bfc0-428f7163bd7a\") " pod="openshift-must-gather-74ltx/must-gather-sfmrm" Feb 19 15:44:30 crc kubenswrapper[4861]: I0219 15:44:30.461846 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrlv7\" (UniqueName: \"kubernetes.io/projected/8f3e4835-beac-4e72-bfc0-428f7163bd7a-kube-api-access-mrlv7\") pod \"must-gather-sfmrm\" (UID: \"8f3e4835-beac-4e72-bfc0-428f7163bd7a\") " pod="openshift-must-gather-74ltx/must-gather-sfmrm" Feb 19 15:44:30 crc kubenswrapper[4861]: I0219 15:44:30.663828 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-74ltx/must-gather-sfmrm" Feb 19 15:44:31 crc kubenswrapper[4861]: I0219 15:44:31.136716 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-74ltx/must-gather-sfmrm"] Feb 19 15:44:31 crc kubenswrapper[4861]: I0219 15:44:31.140634 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 15:44:31 crc kubenswrapper[4861]: I0219 15:44:31.368027 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-74ltx/must-gather-sfmrm" event={"ID":"8f3e4835-beac-4e72-bfc0-428f7163bd7a","Type":"ContainerStarted","Data":"7d32305ac9e5fe7195608e17e5245bb702b327ad0abecc8a30c4e290a57de3b3"} Feb 19 15:44:38 crc kubenswrapper[4861]: I0219 15:44:38.448305 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-74ltx/must-gather-sfmrm" event={"ID":"8f3e4835-beac-4e72-bfc0-428f7163bd7a","Type":"ContainerStarted","Data":"f9ca194f604f0569885859762175b5514fe27c59e21378494350c1ba46a5fcf8"} Feb 19 15:44:38 crc kubenswrapper[4861]: I0219 15:44:38.448909 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-74ltx/must-gather-sfmrm" event={"ID":"8f3e4835-beac-4e72-bfc0-428f7163bd7a","Type":"ContainerStarted","Data":"e02bb7b87de4ef80dca115710ea6ad4d2cd660904463638735051ff1af3765b7"} Feb 19 15:44:38 crc kubenswrapper[4861]: I0219 15:44:38.467099 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-74ltx/must-gather-sfmrm" podStartSLOduration=2.043888283 podStartE2EDuration="8.46708535s" podCreationTimestamp="2026-02-19 15:44:30 +0000 UTC" firstStartedPulling="2026-02-19 15:44:31.140325035 +0000 UTC m=+9285.801428273" lastFinishedPulling="2026-02-19 15:44:37.563522092 +0000 UTC m=+9292.224625340" observedRunningTime="2026-02-19 15:44:38.461408737 +0000 UTC m=+9293.122511975" watchObservedRunningTime="2026-02-19 15:44:38.46708535 +0000 UTC 
m=+9293.128188578" Feb 19 15:44:41 crc kubenswrapper[4861]: I0219 15:44:41.749659 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-74ltx/crc-debug-7sf8h"] Feb 19 15:44:41 crc kubenswrapper[4861]: I0219 15:44:41.751382 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-74ltx/crc-debug-7sf8h" Feb 19 15:44:41 crc kubenswrapper[4861]: I0219 15:44:41.879818 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b8e82a97-0232-45ab-a382-dc45a4096bb2-host\") pod \"crc-debug-7sf8h\" (UID: \"b8e82a97-0232-45ab-a382-dc45a4096bb2\") " pod="openshift-must-gather-74ltx/crc-debug-7sf8h" Feb 19 15:44:41 crc kubenswrapper[4861]: I0219 15:44:41.880227 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57vj8\" (UniqueName: \"kubernetes.io/projected/b8e82a97-0232-45ab-a382-dc45a4096bb2-kube-api-access-57vj8\") pod \"crc-debug-7sf8h\" (UID: \"b8e82a97-0232-45ab-a382-dc45a4096bb2\") " pod="openshift-must-gather-74ltx/crc-debug-7sf8h" Feb 19 15:44:41 crc kubenswrapper[4861]: I0219 15:44:41.982570 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b8e82a97-0232-45ab-a382-dc45a4096bb2-host\") pod \"crc-debug-7sf8h\" (UID: \"b8e82a97-0232-45ab-a382-dc45a4096bb2\") " pod="openshift-must-gather-74ltx/crc-debug-7sf8h" Feb 19 15:44:41 crc kubenswrapper[4861]: I0219 15:44:41.982707 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b8e82a97-0232-45ab-a382-dc45a4096bb2-host\") pod \"crc-debug-7sf8h\" (UID: \"b8e82a97-0232-45ab-a382-dc45a4096bb2\") " pod="openshift-must-gather-74ltx/crc-debug-7sf8h" Feb 19 15:44:41 crc kubenswrapper[4861]: I0219 15:44:41.983339 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-57vj8\" (UniqueName: \"kubernetes.io/projected/b8e82a97-0232-45ab-a382-dc45a4096bb2-kube-api-access-57vj8\") pod \"crc-debug-7sf8h\" (UID: \"b8e82a97-0232-45ab-a382-dc45a4096bb2\") " pod="openshift-must-gather-74ltx/crc-debug-7sf8h" Feb 19 15:44:42 crc kubenswrapper[4861]: I0219 15:44:42.006307 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57vj8\" (UniqueName: \"kubernetes.io/projected/b8e82a97-0232-45ab-a382-dc45a4096bb2-kube-api-access-57vj8\") pod \"crc-debug-7sf8h\" (UID: \"b8e82a97-0232-45ab-a382-dc45a4096bb2\") " pod="openshift-must-gather-74ltx/crc-debug-7sf8h" Feb 19 15:44:42 crc kubenswrapper[4861]: I0219 15:44:42.072582 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-74ltx/crc-debug-7sf8h" Feb 19 15:44:42 crc kubenswrapper[4861]: I0219 15:44:42.531470 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-74ltx/crc-debug-7sf8h" event={"ID":"b8e82a97-0232-45ab-a382-dc45a4096bb2","Type":"ContainerStarted","Data":"d33ea80ba067b27aaab1e53c13bbd7045e42cf6bfca2e4a7833539c27092cb71"} Feb 19 15:44:53 crc kubenswrapper[4861]: I0219 15:44:53.648902 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-74ltx/crc-debug-7sf8h" event={"ID":"b8e82a97-0232-45ab-a382-dc45a4096bb2","Type":"ContainerStarted","Data":"450e9b230ce4c49b67d3c4de9f18b97a98c2f3b2c7ee3c8e05bbc1fe4efb55f6"} Feb 19 15:44:53 crc kubenswrapper[4861]: I0219 15:44:53.669445 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-74ltx/crc-debug-7sf8h" podStartSLOduration=1.8405269419999999 podStartE2EDuration="12.669413349s" podCreationTimestamp="2026-02-19 15:44:41 +0000 UTC" firstStartedPulling="2026-02-19 15:44:42.107653243 +0000 UTC m=+9296.768756471" lastFinishedPulling="2026-02-19 15:44:52.93653965 +0000 UTC m=+9307.597642878" 
observedRunningTime="2026-02-19 15:44:53.660515699 +0000 UTC m=+9308.321618927" watchObservedRunningTime="2026-02-19 15:44:53.669413349 +0000 UTC m=+9308.330516577" Feb 19 15:45:00 crc kubenswrapper[4861]: I0219 15:45:00.147687 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525265-vk87p"] Feb 19 15:45:00 crc kubenswrapper[4861]: I0219 15:45:00.151012 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525265-vk87p" Feb 19 15:45:00 crc kubenswrapper[4861]: I0219 15:45:00.153235 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 15:45:00 crc kubenswrapper[4861]: I0219 15:45:00.156192 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 15:45:00 crc kubenswrapper[4861]: I0219 15:45:00.158274 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525265-vk87p"] Feb 19 15:45:00 crc kubenswrapper[4861]: I0219 15:45:00.206968 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99367856-c4fe-4d92-9049-e42bd4874b23-config-volume\") pod \"collect-profiles-29525265-vk87p\" (UID: \"99367856-c4fe-4d92-9049-e42bd4874b23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525265-vk87p" Feb 19 15:45:00 crc kubenswrapper[4861]: I0219 15:45:00.207094 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb5hm\" (UniqueName: \"kubernetes.io/projected/99367856-c4fe-4d92-9049-e42bd4874b23-kube-api-access-bb5hm\") pod \"collect-profiles-29525265-vk87p\" (UID: \"99367856-c4fe-4d92-9049-e42bd4874b23\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525265-vk87p" Feb 19 15:45:00 crc kubenswrapper[4861]: I0219 15:45:00.207147 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/99367856-c4fe-4d92-9049-e42bd4874b23-secret-volume\") pod \"collect-profiles-29525265-vk87p\" (UID: \"99367856-c4fe-4d92-9049-e42bd4874b23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525265-vk87p" Feb 19 15:45:00 crc kubenswrapper[4861]: I0219 15:45:00.308830 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99367856-c4fe-4d92-9049-e42bd4874b23-config-volume\") pod \"collect-profiles-29525265-vk87p\" (UID: \"99367856-c4fe-4d92-9049-e42bd4874b23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525265-vk87p" Feb 19 15:45:00 crc kubenswrapper[4861]: I0219 15:45:00.309030 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb5hm\" (UniqueName: \"kubernetes.io/projected/99367856-c4fe-4d92-9049-e42bd4874b23-kube-api-access-bb5hm\") pod \"collect-profiles-29525265-vk87p\" (UID: \"99367856-c4fe-4d92-9049-e42bd4874b23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525265-vk87p" Feb 19 15:45:00 crc kubenswrapper[4861]: I0219 15:45:00.309117 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/99367856-c4fe-4d92-9049-e42bd4874b23-secret-volume\") pod \"collect-profiles-29525265-vk87p\" (UID: \"99367856-c4fe-4d92-9049-e42bd4874b23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525265-vk87p" Feb 19 15:45:00 crc kubenswrapper[4861]: I0219 15:45:00.309652 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/99367856-c4fe-4d92-9049-e42bd4874b23-config-volume\") pod \"collect-profiles-29525265-vk87p\" (UID: \"99367856-c4fe-4d92-9049-e42bd4874b23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525265-vk87p" Feb 19 15:45:00 crc kubenswrapper[4861]: I0219 15:45:00.321288 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/99367856-c4fe-4d92-9049-e42bd4874b23-secret-volume\") pod \"collect-profiles-29525265-vk87p\" (UID: \"99367856-c4fe-4d92-9049-e42bd4874b23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525265-vk87p" Feb 19 15:45:00 crc kubenswrapper[4861]: I0219 15:45:00.328330 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb5hm\" (UniqueName: \"kubernetes.io/projected/99367856-c4fe-4d92-9049-e42bd4874b23-kube-api-access-bb5hm\") pod \"collect-profiles-29525265-vk87p\" (UID: \"99367856-c4fe-4d92-9049-e42bd4874b23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525265-vk87p" Feb 19 15:45:00 crc kubenswrapper[4861]: I0219 15:45:00.484154 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525265-vk87p" Feb 19 15:45:06 crc kubenswrapper[4861]: I0219 15:45:06.864456 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525265-vk87p"] Feb 19 15:45:07 crc kubenswrapper[4861]: I0219 15:45:07.788320 4861 generic.go:334] "Generic (PLEG): container finished" podID="99367856-c4fe-4d92-9049-e42bd4874b23" containerID="939f83163e00d9c03080a2a09343880a2bd64d35409c4aaf84dec353702df87e" exitCode=0 Feb 19 15:45:07 crc kubenswrapper[4861]: I0219 15:45:07.788507 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525265-vk87p" event={"ID":"99367856-c4fe-4d92-9049-e42bd4874b23","Type":"ContainerDied","Data":"939f83163e00d9c03080a2a09343880a2bd64d35409c4aaf84dec353702df87e"} Feb 19 15:45:07 crc kubenswrapper[4861]: I0219 15:45:07.788836 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525265-vk87p" event={"ID":"99367856-c4fe-4d92-9049-e42bd4874b23","Type":"ContainerStarted","Data":"ced93d077564d47abde7fde83eea4fca41379f528ec75824126eb0b6b4669879"} Feb 19 15:45:09 crc kubenswrapper[4861]: I0219 15:45:09.289945 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525265-vk87p" Feb 19 15:45:09 crc kubenswrapper[4861]: I0219 15:45:09.388874 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99367856-c4fe-4d92-9049-e42bd4874b23-config-volume\") pod \"99367856-c4fe-4d92-9049-e42bd4874b23\" (UID: \"99367856-c4fe-4d92-9049-e42bd4874b23\") " Feb 19 15:45:09 crc kubenswrapper[4861]: I0219 15:45:09.388922 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb5hm\" (UniqueName: \"kubernetes.io/projected/99367856-c4fe-4d92-9049-e42bd4874b23-kube-api-access-bb5hm\") pod \"99367856-c4fe-4d92-9049-e42bd4874b23\" (UID: \"99367856-c4fe-4d92-9049-e42bd4874b23\") " Feb 19 15:45:09 crc kubenswrapper[4861]: I0219 15:45:09.389081 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/99367856-c4fe-4d92-9049-e42bd4874b23-secret-volume\") pod \"99367856-c4fe-4d92-9049-e42bd4874b23\" (UID: \"99367856-c4fe-4d92-9049-e42bd4874b23\") " Feb 19 15:45:09 crc kubenswrapper[4861]: I0219 15:45:09.389649 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99367856-c4fe-4d92-9049-e42bd4874b23-config-volume" (OuterVolumeSpecName: "config-volume") pod "99367856-c4fe-4d92-9049-e42bd4874b23" (UID: "99367856-c4fe-4d92-9049-e42bd4874b23"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:45:09 crc kubenswrapper[4861]: I0219 15:45:09.400575 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99367856-c4fe-4d92-9049-e42bd4874b23-kube-api-access-bb5hm" (OuterVolumeSpecName: "kube-api-access-bb5hm") pod "99367856-c4fe-4d92-9049-e42bd4874b23" (UID: "99367856-c4fe-4d92-9049-e42bd4874b23"). 
InnerVolumeSpecName "kube-api-access-bb5hm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:45:09 crc kubenswrapper[4861]: I0219 15:45:09.402448 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99367856-c4fe-4d92-9049-e42bd4874b23-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "99367856-c4fe-4d92-9049-e42bd4874b23" (UID: "99367856-c4fe-4d92-9049-e42bd4874b23"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:45:09 crc kubenswrapper[4861]: I0219 15:45:09.491130 4861 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99367856-c4fe-4d92-9049-e42bd4874b23-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 15:45:09 crc kubenswrapper[4861]: I0219 15:45:09.491166 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb5hm\" (UniqueName: \"kubernetes.io/projected/99367856-c4fe-4d92-9049-e42bd4874b23-kube-api-access-bb5hm\") on node \"crc\" DevicePath \"\"" Feb 19 15:45:09 crc kubenswrapper[4861]: I0219 15:45:09.491178 4861 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/99367856-c4fe-4d92-9049-e42bd4874b23-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 15:45:09 crc kubenswrapper[4861]: I0219 15:45:09.810796 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525265-vk87p" event={"ID":"99367856-c4fe-4d92-9049-e42bd4874b23","Type":"ContainerDied","Data":"ced93d077564d47abde7fde83eea4fca41379f528ec75824126eb0b6b4669879"} Feb 19 15:45:09 crc kubenswrapper[4861]: I0219 15:45:09.811130 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ced93d077564d47abde7fde83eea4fca41379f528ec75824126eb0b6b4669879" Feb 19 15:45:09 crc kubenswrapper[4861]: I0219 15:45:09.810843 4861 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525265-vk87p" Feb 19 15:45:10 crc kubenswrapper[4861]: I0219 15:45:10.361645 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525220-mgnks"] Feb 19 15:45:10 crc kubenswrapper[4861]: I0219 15:45:10.408918 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525220-mgnks"] Feb 19 15:45:11 crc kubenswrapper[4861]: I0219 15:45:11.996570 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58d47748-45c0-444b-b34b-5e8cf3aee659" path="/var/lib/kubelet/pods/58d47748-45c0-444b-b34b-5e8cf3aee659/volumes" Feb 19 15:45:13 crc kubenswrapper[4861]: I0219 15:45:13.486788 4861 scope.go:117] "RemoveContainer" containerID="07403846a42b034adc191551f6a5d1f1d90bab596c6f8dba445f647e0af9d399" Feb 19 15:45:33 crc kubenswrapper[4861]: I0219 15:45:33.833896 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:45:33 crc kubenswrapper[4861]: I0219 15:45:33.834372 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:45:47 crc kubenswrapper[4861]: I0219 15:45:47.175619 4861 generic.go:334] "Generic (PLEG): container finished" podID="b8e82a97-0232-45ab-a382-dc45a4096bb2" containerID="450e9b230ce4c49b67d3c4de9f18b97a98c2f3b2c7ee3c8e05bbc1fe4efb55f6" exitCode=0 Feb 19 15:45:47 crc kubenswrapper[4861]: I0219 
15:45:47.175688 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-74ltx/crc-debug-7sf8h" event={"ID":"b8e82a97-0232-45ab-a382-dc45a4096bb2","Type":"ContainerDied","Data":"450e9b230ce4c49b67d3c4de9f18b97a98c2f3b2c7ee3c8e05bbc1fe4efb55f6"} Feb 19 15:45:48 crc kubenswrapper[4861]: I0219 15:45:48.302611 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-74ltx/crc-debug-7sf8h" Feb 19 15:45:48 crc kubenswrapper[4861]: I0219 15:45:48.358539 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-74ltx/crc-debug-7sf8h"] Feb 19 15:45:48 crc kubenswrapper[4861]: I0219 15:45:48.369131 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-74ltx/crc-debug-7sf8h"] Feb 19 15:45:48 crc kubenswrapper[4861]: I0219 15:45:48.458562 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57vj8\" (UniqueName: \"kubernetes.io/projected/b8e82a97-0232-45ab-a382-dc45a4096bb2-kube-api-access-57vj8\") pod \"b8e82a97-0232-45ab-a382-dc45a4096bb2\" (UID: \"b8e82a97-0232-45ab-a382-dc45a4096bb2\") " Feb 19 15:45:48 crc kubenswrapper[4861]: I0219 15:45:48.458638 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b8e82a97-0232-45ab-a382-dc45a4096bb2-host\") pod \"b8e82a97-0232-45ab-a382-dc45a4096bb2\" (UID: \"b8e82a97-0232-45ab-a382-dc45a4096bb2\") " Feb 19 15:45:48 crc kubenswrapper[4861]: I0219 15:45:48.459637 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8e82a97-0232-45ab-a382-dc45a4096bb2-host" (OuterVolumeSpecName: "host") pod "b8e82a97-0232-45ab-a382-dc45a4096bb2" (UID: "b8e82a97-0232-45ab-a382-dc45a4096bb2"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:45:48 crc kubenswrapper[4861]: I0219 15:45:48.468159 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8e82a97-0232-45ab-a382-dc45a4096bb2-kube-api-access-57vj8" (OuterVolumeSpecName: "kube-api-access-57vj8") pod "b8e82a97-0232-45ab-a382-dc45a4096bb2" (UID: "b8e82a97-0232-45ab-a382-dc45a4096bb2"). InnerVolumeSpecName "kube-api-access-57vj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:45:48 crc kubenswrapper[4861]: I0219 15:45:48.561589 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57vj8\" (UniqueName: \"kubernetes.io/projected/b8e82a97-0232-45ab-a382-dc45a4096bb2-kube-api-access-57vj8\") on node \"crc\" DevicePath \"\"" Feb 19 15:45:48 crc kubenswrapper[4861]: I0219 15:45:48.561895 4861 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b8e82a97-0232-45ab-a382-dc45a4096bb2-host\") on node \"crc\" DevicePath \"\"" Feb 19 15:45:49 crc kubenswrapper[4861]: I0219 15:45:49.205152 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d33ea80ba067b27aaab1e53c13bbd7045e42cf6bfca2e4a7833539c27092cb71" Feb 19 15:45:49 crc kubenswrapper[4861]: I0219 15:45:49.205242 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-74ltx/crc-debug-7sf8h" Feb 19 15:45:49 crc kubenswrapper[4861]: I0219 15:45:49.546081 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-74ltx/crc-debug-8hf62"] Feb 19 15:45:49 crc kubenswrapper[4861]: E0219 15:45:49.546584 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99367856-c4fe-4d92-9049-e42bd4874b23" containerName="collect-profiles" Feb 19 15:45:49 crc kubenswrapper[4861]: I0219 15:45:49.546600 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="99367856-c4fe-4d92-9049-e42bd4874b23" containerName="collect-profiles" Feb 19 15:45:49 crc kubenswrapper[4861]: E0219 15:45:49.546654 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8e82a97-0232-45ab-a382-dc45a4096bb2" containerName="container-00" Feb 19 15:45:49 crc kubenswrapper[4861]: I0219 15:45:49.546663 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e82a97-0232-45ab-a382-dc45a4096bb2" containerName="container-00" Feb 19 15:45:49 crc kubenswrapper[4861]: I0219 15:45:49.546915 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="99367856-c4fe-4d92-9049-e42bd4874b23" containerName="collect-profiles" Feb 19 15:45:49 crc kubenswrapper[4861]: I0219 15:45:49.546939 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8e82a97-0232-45ab-a382-dc45a4096bb2" containerName="container-00" Feb 19 15:45:49 crc kubenswrapper[4861]: I0219 15:45:49.547775 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-74ltx/crc-debug-8hf62" Feb 19 15:45:49 crc kubenswrapper[4861]: I0219 15:45:49.687845 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6addb587-0ea0-415d-b71b-f9665b2364cc-host\") pod \"crc-debug-8hf62\" (UID: \"6addb587-0ea0-415d-b71b-f9665b2364cc\") " pod="openshift-must-gather-74ltx/crc-debug-8hf62" Feb 19 15:45:49 crc kubenswrapper[4861]: I0219 15:45:49.687966 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx95p\" (UniqueName: \"kubernetes.io/projected/6addb587-0ea0-415d-b71b-f9665b2364cc-kube-api-access-kx95p\") pod \"crc-debug-8hf62\" (UID: \"6addb587-0ea0-415d-b71b-f9665b2364cc\") " pod="openshift-must-gather-74ltx/crc-debug-8hf62" Feb 19 15:45:49 crc kubenswrapper[4861]: I0219 15:45:49.790417 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx95p\" (UniqueName: \"kubernetes.io/projected/6addb587-0ea0-415d-b71b-f9665b2364cc-kube-api-access-kx95p\") pod \"crc-debug-8hf62\" (UID: \"6addb587-0ea0-415d-b71b-f9665b2364cc\") " pod="openshift-must-gather-74ltx/crc-debug-8hf62" Feb 19 15:45:49 crc kubenswrapper[4861]: I0219 15:45:49.790645 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6addb587-0ea0-415d-b71b-f9665b2364cc-host\") pod \"crc-debug-8hf62\" (UID: \"6addb587-0ea0-415d-b71b-f9665b2364cc\") " pod="openshift-must-gather-74ltx/crc-debug-8hf62" Feb 19 15:45:49 crc kubenswrapper[4861]: I0219 15:45:49.790775 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6addb587-0ea0-415d-b71b-f9665b2364cc-host\") pod \"crc-debug-8hf62\" (UID: \"6addb587-0ea0-415d-b71b-f9665b2364cc\") " pod="openshift-must-gather-74ltx/crc-debug-8hf62" Feb 19 15:45:49 crc 
kubenswrapper[4861]: I0219 15:45:49.808275 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx95p\" (UniqueName: \"kubernetes.io/projected/6addb587-0ea0-415d-b71b-f9665b2364cc-kube-api-access-kx95p\") pod \"crc-debug-8hf62\" (UID: \"6addb587-0ea0-415d-b71b-f9665b2364cc\") " pod="openshift-must-gather-74ltx/crc-debug-8hf62" Feb 19 15:45:49 crc kubenswrapper[4861]: I0219 15:45:49.869624 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-74ltx/crc-debug-8hf62" Feb 19 15:45:49 crc kubenswrapper[4861]: I0219 15:45:49.998548 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8e82a97-0232-45ab-a382-dc45a4096bb2" path="/var/lib/kubelet/pods/b8e82a97-0232-45ab-a382-dc45a4096bb2/volumes" Feb 19 15:45:50 crc kubenswrapper[4861]: I0219 15:45:50.219562 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-74ltx/crc-debug-8hf62" event={"ID":"6addb587-0ea0-415d-b71b-f9665b2364cc","Type":"ContainerStarted","Data":"280cdbadfb0f3e5470cc18b37779256f75aa2fa4eb3dc25f195ea16154f1b8e5"} Feb 19 15:45:51 crc kubenswrapper[4861]: I0219 15:45:51.241919 4861 generic.go:334] "Generic (PLEG): container finished" podID="6addb587-0ea0-415d-b71b-f9665b2364cc" containerID="3460508cc29b6fe6338165adc1a92184821b64df31e76463bc19b41706d4693a" exitCode=0 Feb 19 15:45:51 crc kubenswrapper[4861]: I0219 15:45:51.241964 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-74ltx/crc-debug-8hf62" event={"ID":"6addb587-0ea0-415d-b71b-f9665b2364cc","Type":"ContainerDied","Data":"3460508cc29b6fe6338165adc1a92184821b64df31e76463bc19b41706d4693a"} Feb 19 15:45:51 crc kubenswrapper[4861]: I0219 15:45:51.761700 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-74ltx/crc-debug-8hf62"] Feb 19 15:45:51 crc kubenswrapper[4861]: I0219 15:45:51.770568 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-must-gather-74ltx/crc-debug-8hf62"] Feb 19 15:45:52 crc kubenswrapper[4861]: I0219 15:45:52.355519 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-74ltx/crc-debug-8hf62" Feb 19 15:45:52 crc kubenswrapper[4861]: I0219 15:45:52.450497 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx95p\" (UniqueName: \"kubernetes.io/projected/6addb587-0ea0-415d-b71b-f9665b2364cc-kube-api-access-kx95p\") pod \"6addb587-0ea0-415d-b71b-f9665b2364cc\" (UID: \"6addb587-0ea0-415d-b71b-f9665b2364cc\") " Feb 19 15:45:52 crc kubenswrapper[4861]: I0219 15:45:52.450645 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6addb587-0ea0-415d-b71b-f9665b2364cc-host\") pod \"6addb587-0ea0-415d-b71b-f9665b2364cc\" (UID: \"6addb587-0ea0-415d-b71b-f9665b2364cc\") " Feb 19 15:45:52 crc kubenswrapper[4861]: I0219 15:45:52.450834 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6addb587-0ea0-415d-b71b-f9665b2364cc-host" (OuterVolumeSpecName: "host") pod "6addb587-0ea0-415d-b71b-f9665b2364cc" (UID: "6addb587-0ea0-415d-b71b-f9665b2364cc"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:45:52 crc kubenswrapper[4861]: I0219 15:45:52.451200 4861 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6addb587-0ea0-415d-b71b-f9665b2364cc-host\") on node \"crc\" DevicePath \"\"" Feb 19 15:45:52 crc kubenswrapper[4861]: I0219 15:45:52.458407 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6addb587-0ea0-415d-b71b-f9665b2364cc-kube-api-access-kx95p" (OuterVolumeSpecName: "kube-api-access-kx95p") pod "6addb587-0ea0-415d-b71b-f9665b2364cc" (UID: "6addb587-0ea0-415d-b71b-f9665b2364cc"). 
InnerVolumeSpecName "kube-api-access-kx95p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:45:52 crc kubenswrapper[4861]: I0219 15:45:52.553246 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx95p\" (UniqueName: \"kubernetes.io/projected/6addb587-0ea0-415d-b71b-f9665b2364cc-kube-api-access-kx95p\") on node \"crc\" DevicePath \"\"" Feb 19 15:45:53 crc kubenswrapper[4861]: I0219 15:45:53.005113 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-74ltx/crc-debug-b8wl5"] Feb 19 15:45:53 crc kubenswrapper[4861]: E0219 15:45:53.006863 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6addb587-0ea0-415d-b71b-f9665b2364cc" containerName="container-00" Feb 19 15:45:53 crc kubenswrapper[4861]: I0219 15:45:53.006907 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="6addb587-0ea0-415d-b71b-f9665b2364cc" containerName="container-00" Feb 19 15:45:53 crc kubenswrapper[4861]: I0219 15:45:53.007293 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="6addb587-0ea0-415d-b71b-f9665b2364cc" containerName="container-00" Feb 19 15:45:53 crc kubenswrapper[4861]: I0219 15:45:53.008557 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-74ltx/crc-debug-b8wl5" Feb 19 15:45:53 crc kubenswrapper[4861]: I0219 15:45:53.165642 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxdg2\" (UniqueName: \"kubernetes.io/projected/bd6b489e-4150-4434-8028-c3287dc759ca-kube-api-access-kxdg2\") pod \"crc-debug-b8wl5\" (UID: \"bd6b489e-4150-4434-8028-c3287dc759ca\") " pod="openshift-must-gather-74ltx/crc-debug-b8wl5" Feb 19 15:45:53 crc kubenswrapper[4861]: I0219 15:45:53.165747 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd6b489e-4150-4434-8028-c3287dc759ca-host\") pod \"crc-debug-b8wl5\" (UID: \"bd6b489e-4150-4434-8028-c3287dc759ca\") " pod="openshift-must-gather-74ltx/crc-debug-b8wl5" Feb 19 15:45:53 crc kubenswrapper[4861]: I0219 15:45:53.262942 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-74ltx/crc-debug-8hf62" Feb 19 15:45:53 crc kubenswrapper[4861]: I0219 15:45:53.262960 4861 scope.go:117] "RemoveContainer" containerID="3460508cc29b6fe6338165adc1a92184821b64df31e76463bc19b41706d4693a" Feb 19 15:45:53 crc kubenswrapper[4861]: I0219 15:45:53.267795 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxdg2\" (UniqueName: \"kubernetes.io/projected/bd6b489e-4150-4434-8028-c3287dc759ca-kube-api-access-kxdg2\") pod \"crc-debug-b8wl5\" (UID: \"bd6b489e-4150-4434-8028-c3287dc759ca\") " pod="openshift-must-gather-74ltx/crc-debug-b8wl5" Feb 19 15:45:53 crc kubenswrapper[4861]: I0219 15:45:53.267895 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd6b489e-4150-4434-8028-c3287dc759ca-host\") pod \"crc-debug-b8wl5\" (UID: \"bd6b489e-4150-4434-8028-c3287dc759ca\") " pod="openshift-must-gather-74ltx/crc-debug-b8wl5" 
Feb 19 15:45:53 crc kubenswrapper[4861]: I0219 15:45:53.267995 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd6b489e-4150-4434-8028-c3287dc759ca-host\") pod \"crc-debug-b8wl5\" (UID: \"bd6b489e-4150-4434-8028-c3287dc759ca\") " pod="openshift-must-gather-74ltx/crc-debug-b8wl5" Feb 19 15:45:53 crc kubenswrapper[4861]: I0219 15:45:53.285713 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxdg2\" (UniqueName: \"kubernetes.io/projected/bd6b489e-4150-4434-8028-c3287dc759ca-kube-api-access-kxdg2\") pod \"crc-debug-b8wl5\" (UID: \"bd6b489e-4150-4434-8028-c3287dc759ca\") " pod="openshift-must-gather-74ltx/crc-debug-b8wl5" Feb 19 15:45:53 crc kubenswrapper[4861]: I0219 15:45:53.326744 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-74ltx/crc-debug-b8wl5" Feb 19 15:45:53 crc kubenswrapper[4861]: W0219 15:45:53.362019 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd6b489e_4150_4434_8028_c3287dc759ca.slice/crio-37c5e081eea0ff65c5099c26ee4b23ec2038707657443c2bdba69c8409a225e2 WatchSource:0}: Error finding container 37c5e081eea0ff65c5099c26ee4b23ec2038707657443c2bdba69c8409a225e2: Status 404 returned error can't find the container with id 37c5e081eea0ff65c5099c26ee4b23ec2038707657443c2bdba69c8409a225e2 Feb 19 15:45:53 crc kubenswrapper[4861]: I0219 15:45:53.995623 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6addb587-0ea0-415d-b71b-f9665b2364cc" path="/var/lib/kubelet/pods/6addb587-0ea0-415d-b71b-f9665b2364cc/volumes" Feb 19 15:45:54 crc kubenswrapper[4861]: I0219 15:45:54.274491 4861 generic.go:334] "Generic (PLEG): container finished" podID="bd6b489e-4150-4434-8028-c3287dc759ca" containerID="e2ca354b6ac39a58bbc7b29dba54b44717bda10fb41c8355e576ffa47bbaf448" exitCode=0 Feb 19 15:45:54 crc 
kubenswrapper[4861]: I0219 15:45:54.274548 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-74ltx/crc-debug-b8wl5" event={"ID":"bd6b489e-4150-4434-8028-c3287dc759ca","Type":"ContainerDied","Data":"e2ca354b6ac39a58bbc7b29dba54b44717bda10fb41c8355e576ffa47bbaf448"} Feb 19 15:45:54 crc kubenswrapper[4861]: I0219 15:45:54.274805 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-74ltx/crc-debug-b8wl5" event={"ID":"bd6b489e-4150-4434-8028-c3287dc759ca","Type":"ContainerStarted","Data":"37c5e081eea0ff65c5099c26ee4b23ec2038707657443c2bdba69c8409a225e2"} Feb 19 15:45:54 crc kubenswrapper[4861]: I0219 15:45:54.320009 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-74ltx/crc-debug-b8wl5"] Feb 19 15:45:54 crc kubenswrapper[4861]: I0219 15:45:54.330067 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-74ltx/crc-debug-b8wl5"] Feb 19 15:45:55 crc kubenswrapper[4861]: I0219 15:45:55.422103 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-74ltx/crc-debug-b8wl5" Feb 19 15:45:55 crc kubenswrapper[4861]: I0219 15:45:55.513393 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxdg2\" (UniqueName: \"kubernetes.io/projected/bd6b489e-4150-4434-8028-c3287dc759ca-kube-api-access-kxdg2\") pod \"bd6b489e-4150-4434-8028-c3287dc759ca\" (UID: \"bd6b489e-4150-4434-8028-c3287dc759ca\") " Feb 19 15:45:55 crc kubenswrapper[4861]: I0219 15:45:55.514085 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd6b489e-4150-4434-8028-c3287dc759ca-host\") pod \"bd6b489e-4150-4434-8028-c3287dc759ca\" (UID: \"bd6b489e-4150-4434-8028-c3287dc759ca\") " Feb 19 15:45:55 crc kubenswrapper[4861]: I0219 15:45:55.514182 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd6b489e-4150-4434-8028-c3287dc759ca-host" (OuterVolumeSpecName: "host") pod "bd6b489e-4150-4434-8028-c3287dc759ca" (UID: "bd6b489e-4150-4434-8028-c3287dc759ca"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:45:55 crc kubenswrapper[4861]: I0219 15:45:55.514828 4861 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd6b489e-4150-4434-8028-c3287dc759ca-host\") on node \"crc\" DevicePath \"\"" Feb 19 15:45:55 crc kubenswrapper[4861]: I0219 15:45:55.523770 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd6b489e-4150-4434-8028-c3287dc759ca-kube-api-access-kxdg2" (OuterVolumeSpecName: "kube-api-access-kxdg2") pod "bd6b489e-4150-4434-8028-c3287dc759ca" (UID: "bd6b489e-4150-4434-8028-c3287dc759ca"). InnerVolumeSpecName "kube-api-access-kxdg2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:45:55 crc kubenswrapper[4861]: I0219 15:45:55.617464 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxdg2\" (UniqueName: \"kubernetes.io/projected/bd6b489e-4150-4434-8028-c3287dc759ca-kube-api-access-kxdg2\") on node \"crc\" DevicePath \"\"" Feb 19 15:45:56 crc kubenswrapper[4861]: I0219 15:45:56.026935 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd6b489e-4150-4434-8028-c3287dc759ca" path="/var/lib/kubelet/pods/bd6b489e-4150-4434-8028-c3287dc759ca/volumes" Feb 19 15:45:56 crc kubenswrapper[4861]: I0219 15:45:56.295723 4861 scope.go:117] "RemoveContainer" containerID="e2ca354b6ac39a58bbc7b29dba54b44717bda10fb41c8355e576ffa47bbaf448" Feb 19 15:45:56 crc kubenswrapper[4861]: I0219 15:45:56.295770 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-74ltx/crc-debug-b8wl5" Feb 19 15:46:03 crc kubenswrapper[4861]: I0219 15:46:03.834511 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:46:03 crc kubenswrapper[4861]: I0219 15:46:03.835258 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:46:33 crc kubenswrapper[4861]: I0219 15:46:33.833959 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:46:33 crc kubenswrapper[4861]: I0219 15:46:33.834636 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:46:33 crc kubenswrapper[4861]: I0219 15:46:33.834701 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 15:46:33 crc kubenswrapper[4861]: I0219 15:46:33.836822 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6e9851b1e819468ec3c6c6e31183942518967daddbe129ad6ba1bd37f958d542"} pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 15:46:33 crc kubenswrapper[4861]: I0219 15:46:33.836924 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" containerID="cri-o://6e9851b1e819468ec3c6c6e31183942518967daddbe129ad6ba1bd37f958d542" gracePeriod=600 Feb 19 15:46:34 crc kubenswrapper[4861]: I0219 15:46:34.790452 4861 generic.go:334] "Generic (PLEG): container finished" podID="478e6971-05ac-43f2-99a2-cd93644c6227" containerID="6e9851b1e819468ec3c6c6e31183942518967daddbe129ad6ba1bd37f958d542" exitCode=0 Feb 19 15:46:34 crc kubenswrapper[4861]: I0219 15:46:34.790555 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" 
event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerDied","Data":"6e9851b1e819468ec3c6c6e31183942518967daddbe129ad6ba1bd37f958d542"} Feb 19 15:46:34 crc kubenswrapper[4861]: I0219 15:46:34.791198 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"471c4d8441c74ce54c76fc8b9b621f040e399ab162f99d65bfb0cebea9487613"} Feb 19 15:46:34 crc kubenswrapper[4861]: I0219 15:46:34.791242 4861 scope.go:117] "RemoveContainer" containerID="dedc132be9003463e866947cd72bcece5b8e065cf48504d818619290c187ad83" Feb 19 15:46:40 crc kubenswrapper[4861]: I0219 15:46:40.294278 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zsrmp"] Feb 19 15:46:40 crc kubenswrapper[4861]: E0219 15:46:40.295304 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd6b489e-4150-4434-8028-c3287dc759ca" containerName="container-00" Feb 19 15:46:40 crc kubenswrapper[4861]: I0219 15:46:40.295320 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd6b489e-4150-4434-8028-c3287dc759ca" containerName="container-00" Feb 19 15:46:40 crc kubenswrapper[4861]: I0219 15:46:40.295636 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd6b489e-4150-4434-8028-c3287dc759ca" containerName="container-00" Feb 19 15:46:40 crc kubenswrapper[4861]: I0219 15:46:40.297745 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zsrmp" Feb 19 15:46:40 crc kubenswrapper[4861]: I0219 15:46:40.350284 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zsrmp"] Feb 19 15:46:40 crc kubenswrapper[4861]: I0219 15:46:40.454101 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4zkd\" (UniqueName: \"kubernetes.io/projected/67ef47e9-fb79-4e44-aaf5-195a88eac539-kube-api-access-z4zkd\") pod \"redhat-marketplace-zsrmp\" (UID: \"67ef47e9-fb79-4e44-aaf5-195a88eac539\") " pod="openshift-marketplace/redhat-marketplace-zsrmp" Feb 19 15:46:40 crc kubenswrapper[4861]: I0219 15:46:40.454254 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ef47e9-fb79-4e44-aaf5-195a88eac539-utilities\") pod \"redhat-marketplace-zsrmp\" (UID: \"67ef47e9-fb79-4e44-aaf5-195a88eac539\") " pod="openshift-marketplace/redhat-marketplace-zsrmp" Feb 19 15:46:40 crc kubenswrapper[4861]: I0219 15:46:40.454330 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ef47e9-fb79-4e44-aaf5-195a88eac539-catalog-content\") pod \"redhat-marketplace-zsrmp\" (UID: \"67ef47e9-fb79-4e44-aaf5-195a88eac539\") " pod="openshift-marketplace/redhat-marketplace-zsrmp" Feb 19 15:46:40 crc kubenswrapper[4861]: I0219 15:46:40.556535 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ef47e9-fb79-4e44-aaf5-195a88eac539-utilities\") pod \"redhat-marketplace-zsrmp\" (UID: \"67ef47e9-fb79-4e44-aaf5-195a88eac539\") " pod="openshift-marketplace/redhat-marketplace-zsrmp" Feb 19 15:46:40 crc kubenswrapper[4861]: I0219 15:46:40.556666 4861 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ef47e9-fb79-4e44-aaf5-195a88eac539-catalog-content\") pod \"redhat-marketplace-zsrmp\" (UID: \"67ef47e9-fb79-4e44-aaf5-195a88eac539\") " pod="openshift-marketplace/redhat-marketplace-zsrmp" Feb 19 15:46:40 crc kubenswrapper[4861]: I0219 15:46:40.556872 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4zkd\" (UniqueName: \"kubernetes.io/projected/67ef47e9-fb79-4e44-aaf5-195a88eac539-kube-api-access-z4zkd\") pod \"redhat-marketplace-zsrmp\" (UID: \"67ef47e9-fb79-4e44-aaf5-195a88eac539\") " pod="openshift-marketplace/redhat-marketplace-zsrmp" Feb 19 15:46:40 crc kubenswrapper[4861]: I0219 15:46:40.557142 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ef47e9-fb79-4e44-aaf5-195a88eac539-catalog-content\") pod \"redhat-marketplace-zsrmp\" (UID: \"67ef47e9-fb79-4e44-aaf5-195a88eac539\") " pod="openshift-marketplace/redhat-marketplace-zsrmp" Feb 19 15:46:40 crc kubenswrapper[4861]: I0219 15:46:40.557175 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ef47e9-fb79-4e44-aaf5-195a88eac539-utilities\") pod \"redhat-marketplace-zsrmp\" (UID: \"67ef47e9-fb79-4e44-aaf5-195a88eac539\") " pod="openshift-marketplace/redhat-marketplace-zsrmp" Feb 19 15:46:40 crc kubenswrapper[4861]: I0219 15:46:40.705629 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4zkd\" (UniqueName: \"kubernetes.io/projected/67ef47e9-fb79-4e44-aaf5-195a88eac539-kube-api-access-z4zkd\") pod \"redhat-marketplace-zsrmp\" (UID: \"67ef47e9-fb79-4e44-aaf5-195a88eac539\") " pod="openshift-marketplace/redhat-marketplace-zsrmp" Feb 19 15:46:40 crc kubenswrapper[4861]: I0219 15:46:40.968705 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zsrmp" Feb 19 15:46:41 crc kubenswrapper[4861]: I0219 15:46:41.470700 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zsrmp"] Feb 19 15:46:41 crc kubenswrapper[4861]: I0219 15:46:41.891064 4861 generic.go:334] "Generic (PLEG): container finished" podID="67ef47e9-fb79-4e44-aaf5-195a88eac539" containerID="f512eb5cc3f3d740631aecee7b8bcf3fb88c4216c83ff65bebb065e2fac4ef34" exitCode=0 Feb 19 15:46:41 crc kubenswrapper[4861]: I0219 15:46:41.891124 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zsrmp" event={"ID":"67ef47e9-fb79-4e44-aaf5-195a88eac539","Type":"ContainerDied","Data":"f512eb5cc3f3d740631aecee7b8bcf3fb88c4216c83ff65bebb065e2fac4ef34"} Feb 19 15:46:41 crc kubenswrapper[4861]: I0219 15:46:41.891161 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zsrmp" event={"ID":"67ef47e9-fb79-4e44-aaf5-195a88eac539","Type":"ContainerStarted","Data":"d5b4f9af928dc981cefe44ae786ab5aedeb3ad86758f547f2831af5199921963"} Feb 19 15:46:42 crc kubenswrapper[4861]: I0219 15:46:42.905106 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zsrmp" event={"ID":"67ef47e9-fb79-4e44-aaf5-195a88eac539","Type":"ContainerStarted","Data":"cb758ad965c518e2a512f23ca9e5da24a644c077376f52fbd79fc972e897d363"} Feb 19 15:46:43 crc kubenswrapper[4861]: I0219 15:46:43.917872 4861 generic.go:334] "Generic (PLEG): container finished" podID="67ef47e9-fb79-4e44-aaf5-195a88eac539" containerID="cb758ad965c518e2a512f23ca9e5da24a644c077376f52fbd79fc972e897d363" exitCode=0 Feb 19 15:46:43 crc kubenswrapper[4861]: I0219 15:46:43.918025 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zsrmp" 
event={"ID":"67ef47e9-fb79-4e44-aaf5-195a88eac539","Type":"ContainerDied","Data":"cb758ad965c518e2a512f23ca9e5da24a644c077376f52fbd79fc972e897d363"} Feb 19 15:46:44 crc kubenswrapper[4861]: I0219 15:46:44.933407 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zsrmp" event={"ID":"67ef47e9-fb79-4e44-aaf5-195a88eac539","Type":"ContainerStarted","Data":"980f27963ce030ee0c983f4181face6be9701d0f9995244723882a0a0dcba5e2"} Feb 19 15:46:44 crc kubenswrapper[4861]: I0219 15:46:44.961752 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zsrmp" podStartSLOduration=2.464690041 podStartE2EDuration="4.961732668s" podCreationTimestamp="2026-02-19 15:46:40 +0000 UTC" firstStartedPulling="2026-02-19 15:46:41.896911135 +0000 UTC m=+9416.558014403" lastFinishedPulling="2026-02-19 15:46:44.393953792 +0000 UTC m=+9419.055057030" observedRunningTime="2026-02-19 15:46:44.954475763 +0000 UTC m=+9419.615579031" watchObservedRunningTime="2026-02-19 15:46:44.961732668 +0000 UTC m=+9419.622835906" Feb 19 15:46:50 crc kubenswrapper[4861]: I0219 15:46:50.971096 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zsrmp" Feb 19 15:46:50 crc kubenswrapper[4861]: I0219 15:46:50.971680 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zsrmp" Feb 19 15:46:51 crc kubenswrapper[4861]: I0219 15:46:51.374605 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zsrmp" Feb 19 15:46:51 crc kubenswrapper[4861]: I0219 15:46:51.446379 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zsrmp" Feb 19 15:46:51 crc kubenswrapper[4861]: I0219 15:46:51.630352 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-zsrmp"] Feb 19 15:46:53 crc kubenswrapper[4861]: I0219 15:46:53.039253 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zsrmp" podUID="67ef47e9-fb79-4e44-aaf5-195a88eac539" containerName="registry-server" containerID="cri-o://980f27963ce030ee0c983f4181face6be9701d0f9995244723882a0a0dcba5e2" gracePeriod=2 Feb 19 15:46:54 crc kubenswrapper[4861]: I0219 15:46:54.052753 4861 generic.go:334] "Generic (PLEG): container finished" podID="67ef47e9-fb79-4e44-aaf5-195a88eac539" containerID="980f27963ce030ee0c983f4181face6be9701d0f9995244723882a0a0dcba5e2" exitCode=0 Feb 19 15:46:54 crc kubenswrapper[4861]: I0219 15:46:54.052843 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zsrmp" event={"ID":"67ef47e9-fb79-4e44-aaf5-195a88eac539","Type":"ContainerDied","Data":"980f27963ce030ee0c983f4181face6be9701d0f9995244723882a0a0dcba5e2"} Feb 19 15:46:54 crc kubenswrapper[4861]: I0219 15:46:54.922016 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zsrmp" Feb 19 15:46:54 crc kubenswrapper[4861]: I0219 15:46:54.946639 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ef47e9-fb79-4e44-aaf5-195a88eac539-utilities\") pod \"67ef47e9-fb79-4e44-aaf5-195a88eac539\" (UID: \"67ef47e9-fb79-4e44-aaf5-195a88eac539\") " Feb 19 15:46:54 crc kubenswrapper[4861]: I0219 15:46:54.946738 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ef47e9-fb79-4e44-aaf5-195a88eac539-catalog-content\") pod \"67ef47e9-fb79-4e44-aaf5-195a88eac539\" (UID: \"67ef47e9-fb79-4e44-aaf5-195a88eac539\") " Feb 19 15:46:54 crc kubenswrapper[4861]: I0219 15:46:54.946783 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4zkd\" (UniqueName: \"kubernetes.io/projected/67ef47e9-fb79-4e44-aaf5-195a88eac539-kube-api-access-z4zkd\") pod \"67ef47e9-fb79-4e44-aaf5-195a88eac539\" (UID: \"67ef47e9-fb79-4e44-aaf5-195a88eac539\") " Feb 19 15:46:54 crc kubenswrapper[4861]: I0219 15:46:54.949170 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67ef47e9-fb79-4e44-aaf5-195a88eac539-utilities" (OuterVolumeSpecName: "utilities") pod "67ef47e9-fb79-4e44-aaf5-195a88eac539" (UID: "67ef47e9-fb79-4e44-aaf5-195a88eac539"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:46:54 crc kubenswrapper[4861]: I0219 15:46:54.954640 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67ef47e9-fb79-4e44-aaf5-195a88eac539-kube-api-access-z4zkd" (OuterVolumeSpecName: "kube-api-access-z4zkd") pod "67ef47e9-fb79-4e44-aaf5-195a88eac539" (UID: "67ef47e9-fb79-4e44-aaf5-195a88eac539"). InnerVolumeSpecName "kube-api-access-z4zkd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:46:54 crc kubenswrapper[4861]: I0219 15:46:54.973150 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67ef47e9-fb79-4e44-aaf5-195a88eac539-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67ef47e9-fb79-4e44-aaf5-195a88eac539" (UID: "67ef47e9-fb79-4e44-aaf5-195a88eac539"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:46:55 crc kubenswrapper[4861]: I0219 15:46:55.049451 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ef47e9-fb79-4e44-aaf5-195a88eac539-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:46:55 crc kubenswrapper[4861]: I0219 15:46:55.049492 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ef47e9-fb79-4e44-aaf5-195a88eac539-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:46:55 crc kubenswrapper[4861]: I0219 15:46:55.049506 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4zkd\" (UniqueName: \"kubernetes.io/projected/67ef47e9-fb79-4e44-aaf5-195a88eac539-kube-api-access-z4zkd\") on node \"crc\" DevicePath \"\"" Feb 19 15:46:55 crc kubenswrapper[4861]: I0219 15:46:55.066790 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zsrmp" event={"ID":"67ef47e9-fb79-4e44-aaf5-195a88eac539","Type":"ContainerDied","Data":"d5b4f9af928dc981cefe44ae786ab5aedeb3ad86758f547f2831af5199921963"} Feb 19 15:46:55 crc kubenswrapper[4861]: I0219 15:46:55.066875 4861 scope.go:117] "RemoveContainer" containerID="980f27963ce030ee0c983f4181face6be9701d0f9995244723882a0a0dcba5e2" Feb 19 15:46:55 crc kubenswrapper[4861]: I0219 15:46:55.066875 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zsrmp" Feb 19 15:46:55 crc kubenswrapper[4861]: I0219 15:46:55.086571 4861 scope.go:117] "RemoveContainer" containerID="cb758ad965c518e2a512f23ca9e5da24a644c077376f52fbd79fc972e897d363" Feb 19 15:46:55 crc kubenswrapper[4861]: I0219 15:46:55.115188 4861 scope.go:117] "RemoveContainer" containerID="f512eb5cc3f3d740631aecee7b8bcf3fb88c4216c83ff65bebb065e2fac4ef34" Feb 19 15:46:55 crc kubenswrapper[4861]: I0219 15:46:55.117301 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zsrmp"] Feb 19 15:46:55 crc kubenswrapper[4861]: I0219 15:46:55.126215 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zsrmp"] Feb 19 15:46:55 crc kubenswrapper[4861]: I0219 15:46:55.993380 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67ef47e9-fb79-4e44-aaf5-195a88eac539" path="/var/lib/kubelet/pods/67ef47e9-fb79-4e44-aaf5-195a88eac539/volumes" Feb 19 15:47:18 crc kubenswrapper[4861]: I0219 15:47:18.786967 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q9rpc"] Feb 19 15:47:18 crc kubenswrapper[4861]: E0219 15:47:18.788825 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ef47e9-fb79-4e44-aaf5-195a88eac539" containerName="extract-utilities" Feb 19 15:47:18 crc kubenswrapper[4861]: I0219 15:47:18.788860 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ef47e9-fb79-4e44-aaf5-195a88eac539" containerName="extract-utilities" Feb 19 15:47:18 crc kubenswrapper[4861]: E0219 15:47:18.788950 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ef47e9-fb79-4e44-aaf5-195a88eac539" containerName="registry-server" Feb 19 15:47:18 crc kubenswrapper[4861]: I0219 15:47:18.788968 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ef47e9-fb79-4e44-aaf5-195a88eac539" containerName="registry-server" 
Feb 19 15:47:18 crc kubenswrapper[4861]: E0219 15:47:18.788993 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ef47e9-fb79-4e44-aaf5-195a88eac539" containerName="extract-content" Feb 19 15:47:18 crc kubenswrapper[4861]: I0219 15:47:18.789009 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ef47e9-fb79-4e44-aaf5-195a88eac539" containerName="extract-content" Feb 19 15:47:18 crc kubenswrapper[4861]: I0219 15:47:18.789616 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="67ef47e9-fb79-4e44-aaf5-195a88eac539" containerName="registry-server" Feb 19 15:47:18 crc kubenswrapper[4861]: I0219 15:47:18.794136 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q9rpc" Feb 19 15:47:18 crc kubenswrapper[4861]: I0219 15:47:18.827685 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q9rpc"] Feb 19 15:47:18 crc kubenswrapper[4861]: I0219 15:47:18.932936 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57cf349b-5f84-4bf5-93dd-973f576cd277-utilities\") pod \"community-operators-q9rpc\" (UID: \"57cf349b-5f84-4bf5-93dd-973f576cd277\") " pod="openshift-marketplace/community-operators-q9rpc" Feb 19 15:47:18 crc kubenswrapper[4861]: I0219 15:47:18.933086 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfhvt\" (UniqueName: \"kubernetes.io/projected/57cf349b-5f84-4bf5-93dd-973f576cd277-kube-api-access-xfhvt\") pod \"community-operators-q9rpc\" (UID: \"57cf349b-5f84-4bf5-93dd-973f576cd277\") " pod="openshift-marketplace/community-operators-q9rpc" Feb 19 15:47:18 crc kubenswrapper[4861]: I0219 15:47:18.933117 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/57cf349b-5f84-4bf5-93dd-973f576cd277-catalog-content\") pod \"community-operators-q9rpc\" (UID: \"57cf349b-5f84-4bf5-93dd-973f576cd277\") " pod="openshift-marketplace/community-operators-q9rpc" Feb 19 15:47:19 crc kubenswrapper[4861]: I0219 15:47:19.034784 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57cf349b-5f84-4bf5-93dd-973f576cd277-utilities\") pod \"community-operators-q9rpc\" (UID: \"57cf349b-5f84-4bf5-93dd-973f576cd277\") " pod="openshift-marketplace/community-operators-q9rpc" Feb 19 15:47:19 crc kubenswrapper[4861]: I0219 15:47:19.034958 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfhvt\" (UniqueName: \"kubernetes.io/projected/57cf349b-5f84-4bf5-93dd-973f576cd277-kube-api-access-xfhvt\") pod \"community-operators-q9rpc\" (UID: \"57cf349b-5f84-4bf5-93dd-973f576cd277\") " pod="openshift-marketplace/community-operators-q9rpc" Feb 19 15:47:19 crc kubenswrapper[4861]: I0219 15:47:19.034989 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57cf349b-5f84-4bf5-93dd-973f576cd277-catalog-content\") pod \"community-operators-q9rpc\" (UID: \"57cf349b-5f84-4bf5-93dd-973f576cd277\") " pod="openshift-marketplace/community-operators-q9rpc" Feb 19 15:47:19 crc kubenswrapper[4861]: I0219 15:47:19.035414 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57cf349b-5f84-4bf5-93dd-973f576cd277-catalog-content\") pod \"community-operators-q9rpc\" (UID: \"57cf349b-5f84-4bf5-93dd-973f576cd277\") " pod="openshift-marketplace/community-operators-q9rpc" Feb 19 15:47:19 crc kubenswrapper[4861]: I0219 15:47:19.035451 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/57cf349b-5f84-4bf5-93dd-973f576cd277-utilities\") pod \"community-operators-q9rpc\" (UID: \"57cf349b-5f84-4bf5-93dd-973f576cd277\") " pod="openshift-marketplace/community-operators-q9rpc" Feb 19 15:47:19 crc kubenswrapper[4861]: I0219 15:47:19.056001 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfhvt\" (UniqueName: \"kubernetes.io/projected/57cf349b-5f84-4bf5-93dd-973f576cd277-kube-api-access-xfhvt\") pod \"community-operators-q9rpc\" (UID: \"57cf349b-5f84-4bf5-93dd-973f576cd277\") " pod="openshift-marketplace/community-operators-q9rpc" Feb 19 15:47:19 crc kubenswrapper[4861]: I0219 15:47:19.134534 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q9rpc" Feb 19 15:47:19 crc kubenswrapper[4861]: I0219 15:47:19.696902 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q9rpc"] Feb 19 15:47:20 crc kubenswrapper[4861]: I0219 15:47:20.374219 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q9rpc" event={"ID":"57cf349b-5f84-4bf5-93dd-973f576cd277","Type":"ContainerStarted","Data":"c71637584d62e7b80d92af1126d6f7022a736e54166cf437863debcbabd23697"} Feb 19 15:47:20 crc kubenswrapper[4861]: I0219 15:47:20.374683 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q9rpc" event={"ID":"57cf349b-5f84-4bf5-93dd-973f576cd277","Type":"ContainerStarted","Data":"979df3aa2ecacb60a39f1753fe22184f7e8aaae5611b4ee96f4c6e424c5d561a"} Feb 19 15:47:21 crc kubenswrapper[4861]: I0219 15:47:21.402303 4861 generic.go:334] "Generic (PLEG): container finished" podID="57cf349b-5f84-4bf5-93dd-973f576cd277" containerID="c71637584d62e7b80d92af1126d6f7022a736e54166cf437863debcbabd23697" exitCode=0 Feb 19 15:47:21 crc kubenswrapper[4861]: I0219 15:47:21.403508 4861 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/community-operators-q9rpc" event={"ID":"57cf349b-5f84-4bf5-93dd-973f576cd277","Type":"ContainerDied","Data":"c71637584d62e7b80d92af1126d6f7022a736e54166cf437863debcbabd23697"} Feb 19 15:47:22 crc kubenswrapper[4861]: I0219 15:47:22.420399 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q9rpc" event={"ID":"57cf349b-5f84-4bf5-93dd-973f576cd277","Type":"ContainerStarted","Data":"ff0d27eb33d1e382b72cd070e703dcc081bbf0598737078371adbf5cbb4634db"} Feb 19 15:47:24 crc kubenswrapper[4861]: I0219 15:47:24.442255 4861 generic.go:334] "Generic (PLEG): container finished" podID="57cf349b-5f84-4bf5-93dd-973f576cd277" containerID="ff0d27eb33d1e382b72cd070e703dcc081bbf0598737078371adbf5cbb4634db" exitCode=0 Feb 19 15:47:24 crc kubenswrapper[4861]: I0219 15:47:24.442332 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q9rpc" event={"ID":"57cf349b-5f84-4bf5-93dd-973f576cd277","Type":"ContainerDied","Data":"ff0d27eb33d1e382b72cd070e703dcc081bbf0598737078371adbf5cbb4634db"} Feb 19 15:47:25 crc kubenswrapper[4861]: I0219 15:47:25.456665 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q9rpc" event={"ID":"57cf349b-5f84-4bf5-93dd-973f576cd277","Type":"ContainerStarted","Data":"fa1731efbeed12a612f3bc4717a4df4faa578f5ec5e1f4d3bdf1940842afd952"} Feb 19 15:47:25 crc kubenswrapper[4861]: I0219 15:47:25.489132 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q9rpc" podStartSLOduration=4.001628562 podStartE2EDuration="7.489101253s" podCreationTimestamp="2026-02-19 15:47:18 +0000 UTC" firstStartedPulling="2026-02-19 15:47:21.40829394 +0000 UTC m=+9456.069397188" lastFinishedPulling="2026-02-19 15:47:24.895766611 +0000 UTC m=+9459.556869879" observedRunningTime="2026-02-19 15:47:25.475634702 +0000 UTC m=+9460.136738030" 
watchObservedRunningTime="2026-02-19 15:47:25.489101253 +0000 UTC m=+9460.150204521" Feb 19 15:47:29 crc kubenswrapper[4861]: I0219 15:47:29.134647 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q9rpc" Feb 19 15:47:29 crc kubenswrapper[4861]: I0219 15:47:29.135154 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q9rpc" Feb 19 15:47:29 crc kubenswrapper[4861]: I0219 15:47:29.188880 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q9rpc" Feb 19 15:47:31 crc kubenswrapper[4861]: I0219 15:47:31.100333 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cbddg"] Feb 19 15:47:31 crc kubenswrapper[4861]: I0219 15:47:31.104855 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cbddg" Feb 19 15:47:31 crc kubenswrapper[4861]: I0219 15:47:31.119277 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cbddg"] Feb 19 15:47:31 crc kubenswrapper[4861]: I0219 15:47:31.170722 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4ltn\" (UniqueName: \"kubernetes.io/projected/744eb7ba-0935-4295-a8e5-801e4454aa3a-kube-api-access-g4ltn\") pod \"certified-operators-cbddg\" (UID: \"744eb7ba-0935-4295-a8e5-801e4454aa3a\") " pod="openshift-marketplace/certified-operators-cbddg" Feb 19 15:47:31 crc kubenswrapper[4861]: I0219 15:47:31.170785 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/744eb7ba-0935-4295-a8e5-801e4454aa3a-catalog-content\") pod \"certified-operators-cbddg\" (UID: \"744eb7ba-0935-4295-a8e5-801e4454aa3a\") " 
pod="openshift-marketplace/certified-operators-cbddg" Feb 19 15:47:31 crc kubenswrapper[4861]: I0219 15:47:31.170841 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/744eb7ba-0935-4295-a8e5-801e4454aa3a-utilities\") pod \"certified-operators-cbddg\" (UID: \"744eb7ba-0935-4295-a8e5-801e4454aa3a\") " pod="openshift-marketplace/certified-operators-cbddg" Feb 19 15:47:31 crc kubenswrapper[4861]: I0219 15:47:31.272864 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4ltn\" (UniqueName: \"kubernetes.io/projected/744eb7ba-0935-4295-a8e5-801e4454aa3a-kube-api-access-g4ltn\") pod \"certified-operators-cbddg\" (UID: \"744eb7ba-0935-4295-a8e5-801e4454aa3a\") " pod="openshift-marketplace/certified-operators-cbddg" Feb 19 15:47:31 crc kubenswrapper[4861]: I0219 15:47:31.272998 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/744eb7ba-0935-4295-a8e5-801e4454aa3a-catalog-content\") pod \"certified-operators-cbddg\" (UID: \"744eb7ba-0935-4295-a8e5-801e4454aa3a\") " pod="openshift-marketplace/certified-operators-cbddg" Feb 19 15:47:31 crc kubenswrapper[4861]: I0219 15:47:31.273112 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/744eb7ba-0935-4295-a8e5-801e4454aa3a-utilities\") pod \"certified-operators-cbddg\" (UID: \"744eb7ba-0935-4295-a8e5-801e4454aa3a\") " pod="openshift-marketplace/certified-operators-cbddg" Feb 19 15:47:31 crc kubenswrapper[4861]: I0219 15:47:31.273821 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/744eb7ba-0935-4295-a8e5-801e4454aa3a-catalog-content\") pod \"certified-operators-cbddg\" (UID: \"744eb7ba-0935-4295-a8e5-801e4454aa3a\") " 
pod="openshift-marketplace/certified-operators-cbddg" Feb 19 15:47:31 crc kubenswrapper[4861]: I0219 15:47:31.273868 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/744eb7ba-0935-4295-a8e5-801e4454aa3a-utilities\") pod \"certified-operators-cbddg\" (UID: \"744eb7ba-0935-4295-a8e5-801e4454aa3a\") " pod="openshift-marketplace/certified-operators-cbddg" Feb 19 15:47:31 crc kubenswrapper[4861]: I0219 15:47:31.504377 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4ltn\" (UniqueName: \"kubernetes.io/projected/744eb7ba-0935-4295-a8e5-801e4454aa3a-kube-api-access-g4ltn\") pod \"certified-operators-cbddg\" (UID: \"744eb7ba-0935-4295-a8e5-801e4454aa3a\") " pod="openshift-marketplace/certified-operators-cbddg" Feb 19 15:47:31 crc kubenswrapper[4861]: I0219 15:47:31.744510 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cbddg" Feb 19 15:47:32 crc kubenswrapper[4861]: W0219 15:47:32.286770 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod744eb7ba_0935_4295_a8e5_801e4454aa3a.slice/crio-025f15506a3a2584c26d0d439cc10a1e67c963148b9c7841c4509ffc8c3587f3 WatchSource:0}: Error finding container 025f15506a3a2584c26d0d439cc10a1e67c963148b9c7841c4509ffc8c3587f3: Status 404 returned error can't find the container with id 025f15506a3a2584c26d0d439cc10a1e67c963148b9c7841c4509ffc8c3587f3 Feb 19 15:47:32 crc kubenswrapper[4861]: I0219 15:47:32.288089 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cbddg"] Feb 19 15:47:32 crc kubenswrapper[4861]: I0219 15:47:32.547555 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbddg" 
event={"ID":"744eb7ba-0935-4295-a8e5-801e4454aa3a","Type":"ContainerStarted","Data":"025f15506a3a2584c26d0d439cc10a1e67c963148b9c7841c4509ffc8c3587f3"} Feb 19 15:47:33 crc kubenswrapper[4861]: I0219 15:47:33.566381 4861 generic.go:334] "Generic (PLEG): container finished" podID="744eb7ba-0935-4295-a8e5-801e4454aa3a" containerID="c9e874c9998ad9814a2e6f90d35057e4fa15f45a504aeae8d4c55e6e96b9bff7" exitCode=0 Feb 19 15:47:33 crc kubenswrapper[4861]: I0219 15:47:33.566500 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbddg" event={"ID":"744eb7ba-0935-4295-a8e5-801e4454aa3a","Type":"ContainerDied","Data":"c9e874c9998ad9814a2e6f90d35057e4fa15f45a504aeae8d4c55e6e96b9bff7"} Feb 19 15:47:34 crc kubenswrapper[4861]: I0219 15:47:34.576619 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbddg" event={"ID":"744eb7ba-0935-4295-a8e5-801e4454aa3a","Type":"ContainerStarted","Data":"ac7a7f464255e2da1f030f0badeb0429d053e90cb2496e85eacd70fbf6ba563b"} Feb 19 15:47:36 crc kubenswrapper[4861]: I0219 15:47:36.604648 4861 generic.go:334] "Generic (PLEG): container finished" podID="744eb7ba-0935-4295-a8e5-801e4454aa3a" containerID="ac7a7f464255e2da1f030f0badeb0429d053e90cb2496e85eacd70fbf6ba563b" exitCode=0 Feb 19 15:47:36 crc kubenswrapper[4861]: I0219 15:47:36.604715 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbddg" event={"ID":"744eb7ba-0935-4295-a8e5-801e4454aa3a","Type":"ContainerDied","Data":"ac7a7f464255e2da1f030f0badeb0429d053e90cb2496e85eacd70fbf6ba563b"} Feb 19 15:47:37 crc kubenswrapper[4861]: I0219 15:47:37.621920 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbddg" event={"ID":"744eb7ba-0935-4295-a8e5-801e4454aa3a","Type":"ContainerStarted","Data":"37220844a3c86f5c37f73535280092a499e9b6aa2683cca359b374f3de56e18a"} Feb 19 15:47:37 crc kubenswrapper[4861]: 
I0219 15:47:37.646338 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cbddg" podStartSLOduration=3.105976515 podStartE2EDuration="6.646312555s" podCreationTimestamp="2026-02-19 15:47:31 +0000 UTC" firstStartedPulling="2026-02-19 15:47:33.569680044 +0000 UTC m=+9468.230783282" lastFinishedPulling="2026-02-19 15:47:37.110016094 +0000 UTC m=+9471.771119322" observedRunningTime="2026-02-19 15:47:37.644785014 +0000 UTC m=+9472.305888292" watchObservedRunningTime="2026-02-19 15:47:37.646312555 +0000 UTC m=+9472.307415793" Feb 19 15:47:39 crc kubenswrapper[4861]: I0219 15:47:39.191703 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q9rpc" Feb 19 15:47:39 crc kubenswrapper[4861]: I0219 15:47:39.241744 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q9rpc"] Feb 19 15:47:39 crc kubenswrapper[4861]: I0219 15:47:39.646108 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q9rpc" podUID="57cf349b-5f84-4bf5-93dd-973f576cd277" containerName="registry-server" containerID="cri-o://fa1731efbeed12a612f3bc4717a4df4faa578f5ec5e1f4d3bdf1940842afd952" gracePeriod=2 Feb 19 15:47:40 crc kubenswrapper[4861]: I0219 15:47:40.152758 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q9rpc" Feb 19 15:47:40 crc kubenswrapper[4861]: I0219 15:47:40.319955 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfhvt\" (UniqueName: \"kubernetes.io/projected/57cf349b-5f84-4bf5-93dd-973f576cd277-kube-api-access-xfhvt\") pod \"57cf349b-5f84-4bf5-93dd-973f576cd277\" (UID: \"57cf349b-5f84-4bf5-93dd-973f576cd277\") " Feb 19 15:47:40 crc kubenswrapper[4861]: I0219 15:47:40.320064 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57cf349b-5f84-4bf5-93dd-973f576cd277-utilities\") pod \"57cf349b-5f84-4bf5-93dd-973f576cd277\" (UID: \"57cf349b-5f84-4bf5-93dd-973f576cd277\") " Feb 19 15:47:40 crc kubenswrapper[4861]: I0219 15:47:40.320162 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57cf349b-5f84-4bf5-93dd-973f576cd277-catalog-content\") pod \"57cf349b-5f84-4bf5-93dd-973f576cd277\" (UID: \"57cf349b-5f84-4bf5-93dd-973f576cd277\") " Feb 19 15:47:40 crc kubenswrapper[4861]: I0219 15:47:40.320906 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57cf349b-5f84-4bf5-93dd-973f576cd277-utilities" (OuterVolumeSpecName: "utilities") pod "57cf349b-5f84-4bf5-93dd-973f576cd277" (UID: "57cf349b-5f84-4bf5-93dd-973f576cd277"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:47:40 crc kubenswrapper[4861]: I0219 15:47:40.330958 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57cf349b-5f84-4bf5-93dd-973f576cd277-kube-api-access-xfhvt" (OuterVolumeSpecName: "kube-api-access-xfhvt") pod "57cf349b-5f84-4bf5-93dd-973f576cd277" (UID: "57cf349b-5f84-4bf5-93dd-973f576cd277"). InnerVolumeSpecName "kube-api-access-xfhvt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:47:40 crc kubenswrapper[4861]: I0219 15:47:40.378620 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57cf349b-5f84-4bf5-93dd-973f576cd277-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57cf349b-5f84-4bf5-93dd-973f576cd277" (UID: "57cf349b-5f84-4bf5-93dd-973f576cd277"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:47:40 crc kubenswrapper[4861]: I0219 15:47:40.422816 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57cf349b-5f84-4bf5-93dd-973f576cd277-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:47:40 crc kubenswrapper[4861]: I0219 15:47:40.422858 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57cf349b-5f84-4bf5-93dd-973f576cd277-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:47:40 crc kubenswrapper[4861]: I0219 15:47:40.422868 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfhvt\" (UniqueName: \"kubernetes.io/projected/57cf349b-5f84-4bf5-93dd-973f576cd277-kube-api-access-xfhvt\") on node \"crc\" DevicePath \"\"" Feb 19 15:47:40 crc kubenswrapper[4861]: I0219 15:47:40.660138 4861 generic.go:334] "Generic (PLEG): container finished" podID="57cf349b-5f84-4bf5-93dd-973f576cd277" containerID="fa1731efbeed12a612f3bc4717a4df4faa578f5ec5e1f4d3bdf1940842afd952" exitCode=0 Feb 19 15:47:40 crc kubenswrapper[4861]: I0219 15:47:40.660255 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q9rpc" Feb 19 15:47:40 crc kubenswrapper[4861]: I0219 15:47:40.660272 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q9rpc" event={"ID":"57cf349b-5f84-4bf5-93dd-973f576cd277","Type":"ContainerDied","Data":"fa1731efbeed12a612f3bc4717a4df4faa578f5ec5e1f4d3bdf1940842afd952"} Feb 19 15:47:40 crc kubenswrapper[4861]: I0219 15:47:40.661120 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q9rpc" event={"ID":"57cf349b-5f84-4bf5-93dd-973f576cd277","Type":"ContainerDied","Data":"979df3aa2ecacb60a39f1753fe22184f7e8aaae5611b4ee96f4c6e424c5d561a"} Feb 19 15:47:40 crc kubenswrapper[4861]: I0219 15:47:40.661151 4861 scope.go:117] "RemoveContainer" containerID="fa1731efbeed12a612f3bc4717a4df4faa578f5ec5e1f4d3bdf1940842afd952" Feb 19 15:47:40 crc kubenswrapper[4861]: I0219 15:47:40.693646 4861 scope.go:117] "RemoveContainer" containerID="ff0d27eb33d1e382b72cd070e703dcc081bbf0598737078371adbf5cbb4634db" Feb 19 15:47:40 crc kubenswrapper[4861]: I0219 15:47:40.714812 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q9rpc"] Feb 19 15:47:40 crc kubenswrapper[4861]: I0219 15:47:40.722253 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q9rpc"] Feb 19 15:47:40 crc kubenswrapper[4861]: I0219 15:47:40.762695 4861 scope.go:117] "RemoveContainer" containerID="c71637584d62e7b80d92af1126d6f7022a736e54166cf437863debcbabd23697" Feb 19 15:47:40 crc kubenswrapper[4861]: I0219 15:47:40.794390 4861 scope.go:117] "RemoveContainer" containerID="fa1731efbeed12a612f3bc4717a4df4faa578f5ec5e1f4d3bdf1940842afd952" Feb 19 15:47:40 crc kubenswrapper[4861]: E0219 15:47:40.794901 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"fa1731efbeed12a612f3bc4717a4df4faa578f5ec5e1f4d3bdf1940842afd952\": container with ID starting with fa1731efbeed12a612f3bc4717a4df4faa578f5ec5e1f4d3bdf1940842afd952 not found: ID does not exist" containerID="fa1731efbeed12a612f3bc4717a4df4faa578f5ec5e1f4d3bdf1940842afd952" Feb 19 15:47:40 crc kubenswrapper[4861]: I0219 15:47:40.794946 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa1731efbeed12a612f3bc4717a4df4faa578f5ec5e1f4d3bdf1940842afd952"} err="failed to get container status \"fa1731efbeed12a612f3bc4717a4df4faa578f5ec5e1f4d3bdf1940842afd952\": rpc error: code = NotFound desc = could not find container \"fa1731efbeed12a612f3bc4717a4df4faa578f5ec5e1f4d3bdf1940842afd952\": container with ID starting with fa1731efbeed12a612f3bc4717a4df4faa578f5ec5e1f4d3bdf1940842afd952 not found: ID does not exist" Feb 19 15:47:40 crc kubenswrapper[4861]: I0219 15:47:40.794972 4861 scope.go:117] "RemoveContainer" containerID="ff0d27eb33d1e382b72cd070e703dcc081bbf0598737078371adbf5cbb4634db" Feb 19 15:47:40 crc kubenswrapper[4861]: E0219 15:47:40.798658 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff0d27eb33d1e382b72cd070e703dcc081bbf0598737078371adbf5cbb4634db\": container with ID starting with ff0d27eb33d1e382b72cd070e703dcc081bbf0598737078371adbf5cbb4634db not found: ID does not exist" containerID="ff0d27eb33d1e382b72cd070e703dcc081bbf0598737078371adbf5cbb4634db" Feb 19 15:47:40 crc kubenswrapper[4861]: I0219 15:47:40.798717 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff0d27eb33d1e382b72cd070e703dcc081bbf0598737078371adbf5cbb4634db"} err="failed to get container status \"ff0d27eb33d1e382b72cd070e703dcc081bbf0598737078371adbf5cbb4634db\": rpc error: code = NotFound desc = could not find container \"ff0d27eb33d1e382b72cd070e703dcc081bbf0598737078371adbf5cbb4634db\": container with ID 
starting with ff0d27eb33d1e382b72cd070e703dcc081bbf0598737078371adbf5cbb4634db not found: ID does not exist" Feb 19 15:47:40 crc kubenswrapper[4861]: I0219 15:47:40.798753 4861 scope.go:117] "RemoveContainer" containerID="c71637584d62e7b80d92af1126d6f7022a736e54166cf437863debcbabd23697" Feb 19 15:47:40 crc kubenswrapper[4861]: E0219 15:47:40.799198 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c71637584d62e7b80d92af1126d6f7022a736e54166cf437863debcbabd23697\": container with ID starting with c71637584d62e7b80d92af1126d6f7022a736e54166cf437863debcbabd23697 not found: ID does not exist" containerID="c71637584d62e7b80d92af1126d6f7022a736e54166cf437863debcbabd23697" Feb 19 15:47:40 crc kubenswrapper[4861]: I0219 15:47:40.799267 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c71637584d62e7b80d92af1126d6f7022a736e54166cf437863debcbabd23697"} err="failed to get container status \"c71637584d62e7b80d92af1126d6f7022a736e54166cf437863debcbabd23697\": rpc error: code = NotFound desc = could not find container \"c71637584d62e7b80d92af1126d6f7022a736e54166cf437863debcbabd23697\": container with ID starting with c71637584d62e7b80d92af1126d6f7022a736e54166cf437863debcbabd23697 not found: ID does not exist" Feb 19 15:47:41 crc kubenswrapper[4861]: I0219 15:47:41.745871 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cbddg" Feb 19 15:47:41 crc kubenswrapper[4861]: I0219 15:47:41.746341 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cbddg" Feb 19 15:47:41 crc kubenswrapper[4861]: I0219 15:47:41.816002 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cbddg" Feb 19 15:47:42 crc kubenswrapper[4861]: I0219 15:47:42.003654 4861 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57cf349b-5f84-4bf5-93dd-973f576cd277" path="/var/lib/kubelet/pods/57cf349b-5f84-4bf5-93dd-973f576cd277/volumes" Feb 19 15:47:42 crc kubenswrapper[4861]: I0219 15:47:42.760915 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cbddg" Feb 19 15:47:43 crc kubenswrapper[4861]: I0219 15:47:43.852923 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cbddg"] Feb 19 15:47:44 crc kubenswrapper[4861]: I0219 15:47:44.708408 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cbddg" podUID="744eb7ba-0935-4295-a8e5-801e4454aa3a" containerName="registry-server" containerID="cri-o://37220844a3c86f5c37f73535280092a499e9b6aa2683cca359b374f3de56e18a" gracePeriod=2 Feb 19 15:47:45 crc kubenswrapper[4861]: I0219 15:47:45.718853 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cbddg" Feb 19 15:47:45 crc kubenswrapper[4861]: I0219 15:47:45.721188 4861 generic.go:334] "Generic (PLEG): container finished" podID="744eb7ba-0935-4295-a8e5-801e4454aa3a" containerID="37220844a3c86f5c37f73535280092a499e9b6aa2683cca359b374f3de56e18a" exitCode=0 Feb 19 15:47:45 crc kubenswrapper[4861]: I0219 15:47:45.721238 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbddg" event={"ID":"744eb7ba-0935-4295-a8e5-801e4454aa3a","Type":"ContainerDied","Data":"37220844a3c86f5c37f73535280092a499e9b6aa2683cca359b374f3de56e18a"} Feb 19 15:47:45 crc kubenswrapper[4861]: I0219 15:47:45.721274 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbddg" event={"ID":"744eb7ba-0935-4295-a8e5-801e4454aa3a","Type":"ContainerDied","Data":"025f15506a3a2584c26d0d439cc10a1e67c963148b9c7841c4509ffc8c3587f3"} Feb 19 15:47:45 crc kubenswrapper[4861]: I0219 15:47:45.721295 4861 scope.go:117] "RemoveContainer" containerID="37220844a3c86f5c37f73535280092a499e9b6aa2683cca359b374f3de56e18a" Feb 19 15:47:45 crc kubenswrapper[4861]: I0219 15:47:45.738667 4861 scope.go:117] "RemoveContainer" containerID="ac7a7f464255e2da1f030f0badeb0429d053e90cb2496e85eacd70fbf6ba563b" Feb 19 15:47:45 crc kubenswrapper[4861]: I0219 15:47:45.770229 4861 scope.go:117] "RemoveContainer" containerID="c9e874c9998ad9814a2e6f90d35057e4fa15f45a504aeae8d4c55e6e96b9bff7" Feb 19 15:47:45 crc kubenswrapper[4861]: I0219 15:47:45.828863 4861 scope.go:117] "RemoveContainer" containerID="37220844a3c86f5c37f73535280092a499e9b6aa2683cca359b374f3de56e18a" Feb 19 15:47:45 crc kubenswrapper[4861]: E0219 15:47:45.832068 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37220844a3c86f5c37f73535280092a499e9b6aa2683cca359b374f3de56e18a\": container with ID starting with 
37220844a3c86f5c37f73535280092a499e9b6aa2683cca359b374f3de56e18a not found: ID does not exist" containerID="37220844a3c86f5c37f73535280092a499e9b6aa2683cca359b374f3de56e18a" Feb 19 15:47:45 crc kubenswrapper[4861]: I0219 15:47:45.832116 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37220844a3c86f5c37f73535280092a499e9b6aa2683cca359b374f3de56e18a"} err="failed to get container status \"37220844a3c86f5c37f73535280092a499e9b6aa2683cca359b374f3de56e18a\": rpc error: code = NotFound desc = could not find container \"37220844a3c86f5c37f73535280092a499e9b6aa2683cca359b374f3de56e18a\": container with ID starting with 37220844a3c86f5c37f73535280092a499e9b6aa2683cca359b374f3de56e18a not found: ID does not exist" Feb 19 15:47:45 crc kubenswrapper[4861]: I0219 15:47:45.832144 4861 scope.go:117] "RemoveContainer" containerID="ac7a7f464255e2da1f030f0badeb0429d053e90cb2496e85eacd70fbf6ba563b" Feb 19 15:47:45 crc kubenswrapper[4861]: E0219 15:47:45.832735 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac7a7f464255e2da1f030f0badeb0429d053e90cb2496e85eacd70fbf6ba563b\": container with ID starting with ac7a7f464255e2da1f030f0badeb0429d053e90cb2496e85eacd70fbf6ba563b not found: ID does not exist" containerID="ac7a7f464255e2da1f030f0badeb0429d053e90cb2496e85eacd70fbf6ba563b" Feb 19 15:47:45 crc kubenswrapper[4861]: I0219 15:47:45.832808 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac7a7f464255e2da1f030f0badeb0429d053e90cb2496e85eacd70fbf6ba563b"} err="failed to get container status \"ac7a7f464255e2da1f030f0badeb0429d053e90cb2496e85eacd70fbf6ba563b\": rpc error: code = NotFound desc = could not find container \"ac7a7f464255e2da1f030f0badeb0429d053e90cb2496e85eacd70fbf6ba563b\": container with ID starting with ac7a7f464255e2da1f030f0badeb0429d053e90cb2496e85eacd70fbf6ba563b not found: ID does not 
exist" Feb 19 15:47:45 crc kubenswrapper[4861]: I0219 15:47:45.832843 4861 scope.go:117] "RemoveContainer" containerID="c9e874c9998ad9814a2e6f90d35057e4fa15f45a504aeae8d4c55e6e96b9bff7" Feb 19 15:47:45 crc kubenswrapper[4861]: E0219 15:47:45.833864 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9e874c9998ad9814a2e6f90d35057e4fa15f45a504aeae8d4c55e6e96b9bff7\": container with ID starting with c9e874c9998ad9814a2e6f90d35057e4fa15f45a504aeae8d4c55e6e96b9bff7 not found: ID does not exist" containerID="c9e874c9998ad9814a2e6f90d35057e4fa15f45a504aeae8d4c55e6e96b9bff7" Feb 19 15:47:45 crc kubenswrapper[4861]: I0219 15:47:45.833890 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9e874c9998ad9814a2e6f90d35057e4fa15f45a504aeae8d4c55e6e96b9bff7"} err="failed to get container status \"c9e874c9998ad9814a2e6f90d35057e4fa15f45a504aeae8d4c55e6e96b9bff7\": rpc error: code = NotFound desc = could not find container \"c9e874c9998ad9814a2e6f90d35057e4fa15f45a504aeae8d4c55e6e96b9bff7\": container with ID starting with c9e874c9998ad9814a2e6f90d35057e4fa15f45a504aeae8d4c55e6e96b9bff7 not found: ID does not exist" Feb 19 15:47:45 crc kubenswrapper[4861]: I0219 15:47:45.862861 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4ltn\" (UniqueName: \"kubernetes.io/projected/744eb7ba-0935-4295-a8e5-801e4454aa3a-kube-api-access-g4ltn\") pod \"744eb7ba-0935-4295-a8e5-801e4454aa3a\" (UID: \"744eb7ba-0935-4295-a8e5-801e4454aa3a\") " Feb 19 15:47:45 crc kubenswrapper[4861]: I0219 15:47:45.862961 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/744eb7ba-0935-4295-a8e5-801e4454aa3a-utilities\") pod \"744eb7ba-0935-4295-a8e5-801e4454aa3a\" (UID: \"744eb7ba-0935-4295-a8e5-801e4454aa3a\") " Feb 19 15:47:45 crc kubenswrapper[4861]: 
I0219 15:47:45.863090 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/744eb7ba-0935-4295-a8e5-801e4454aa3a-catalog-content\") pod \"744eb7ba-0935-4295-a8e5-801e4454aa3a\" (UID: \"744eb7ba-0935-4295-a8e5-801e4454aa3a\") " Feb 19 15:47:45 crc kubenswrapper[4861]: I0219 15:47:45.865182 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/744eb7ba-0935-4295-a8e5-801e4454aa3a-utilities" (OuterVolumeSpecName: "utilities") pod "744eb7ba-0935-4295-a8e5-801e4454aa3a" (UID: "744eb7ba-0935-4295-a8e5-801e4454aa3a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:47:45 crc kubenswrapper[4861]: I0219 15:47:45.874140 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/744eb7ba-0935-4295-a8e5-801e4454aa3a-kube-api-access-g4ltn" (OuterVolumeSpecName: "kube-api-access-g4ltn") pod "744eb7ba-0935-4295-a8e5-801e4454aa3a" (UID: "744eb7ba-0935-4295-a8e5-801e4454aa3a"). InnerVolumeSpecName "kube-api-access-g4ltn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:47:45 crc kubenswrapper[4861]: I0219 15:47:45.914243 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/744eb7ba-0935-4295-a8e5-801e4454aa3a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "744eb7ba-0935-4295-a8e5-801e4454aa3a" (UID: "744eb7ba-0935-4295-a8e5-801e4454aa3a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:47:45 crc kubenswrapper[4861]: I0219 15:47:45.965239 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/744eb7ba-0935-4295-a8e5-801e4454aa3a-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:47:45 crc kubenswrapper[4861]: I0219 15:47:45.965279 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/744eb7ba-0935-4295-a8e5-801e4454aa3a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:47:45 crc kubenswrapper[4861]: I0219 15:47:45.965294 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4ltn\" (UniqueName: \"kubernetes.io/projected/744eb7ba-0935-4295-a8e5-801e4454aa3a-kube-api-access-g4ltn\") on node \"crc\" DevicePath \"\"" Feb 19 15:47:46 crc kubenswrapper[4861]: E0219 15:47:46.314572 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod744eb7ba_0935_4295_a8e5_801e4454aa3a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod744eb7ba_0935_4295_a8e5_801e4454aa3a.slice/crio-025f15506a3a2584c26d0d439cc10a1e67c963148b9c7841c4509ffc8c3587f3\": RecentStats: unable to find data in memory cache]" Feb 19 15:47:46 crc kubenswrapper[4861]: I0219 15:47:46.739020 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cbddg" Feb 19 15:47:46 crc kubenswrapper[4861]: I0219 15:47:46.780804 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cbddg"] Feb 19 15:47:46 crc kubenswrapper[4861]: I0219 15:47:46.792847 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cbddg"] Feb 19 15:47:48 crc kubenswrapper[4861]: I0219 15:47:48.018046 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="744eb7ba-0935-4295-a8e5-801e4454aa3a" path="/var/lib/kubelet/pods/744eb7ba-0935-4295-a8e5-801e4454aa3a/volumes" Feb 19 15:47:56 crc kubenswrapper[4861]: E0219 15:47:56.612177 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod744eb7ba_0935_4295_a8e5_801e4454aa3a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod744eb7ba_0935_4295_a8e5_801e4454aa3a.slice/crio-025f15506a3a2584c26d0d439cc10a1e67c963148b9c7841c4509ffc8c3587f3\": RecentStats: unable to find data in memory cache]" Feb 19 15:48:07 crc kubenswrapper[4861]: E0219 15:48:07.016640 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod744eb7ba_0935_4295_a8e5_801e4454aa3a.slice/crio-025f15506a3a2584c26d0d439cc10a1e67c963148b9c7841c4509ffc8c3587f3\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod744eb7ba_0935_4295_a8e5_801e4454aa3a.slice\": RecentStats: unable to find data in memory cache]" Feb 19 15:48:17 crc kubenswrapper[4861]: E0219 15:48:17.383259 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod744eb7ba_0935_4295_a8e5_801e4454aa3a.slice/crio-025f15506a3a2584c26d0d439cc10a1e67c963148b9c7841c4509ffc8c3587f3\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod744eb7ba_0935_4295_a8e5_801e4454aa3a.slice\": RecentStats: unable to find data in memory cache]" Feb 19 15:48:27 crc kubenswrapper[4861]: E0219 15:48:27.698023 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod744eb7ba_0935_4295_a8e5_801e4454aa3a.slice/crio-025f15506a3a2584c26d0d439cc10a1e67c963148b9c7841c4509ffc8c3587f3\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod744eb7ba_0935_4295_a8e5_801e4454aa3a.slice\": RecentStats: unable to find data in memory cache]" Feb 19 15:48:37 crc kubenswrapper[4861]: E0219 15:48:37.983215 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod744eb7ba_0935_4295_a8e5_801e4454aa3a.slice/crio-025f15506a3a2584c26d0d439cc10a1e67c963148b9c7841c4509ffc8c3587f3\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod744eb7ba_0935_4295_a8e5_801e4454aa3a.slice\": RecentStats: unable to find data in memory cache]" Feb 19 15:48:46 crc kubenswrapper[4861]: E0219 15:48:46.024964 4861 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/786118a13c71137b58975448e7a35ee9b5bbd52e038b2bf109888f9a508699bf/diff" to get inode usage: stat /var/lib/containers/storage/overlay/786118a13c71137b58975448e7a35ee9b5bbd52e038b2bf109888f9a508699bf/diff: no such file or directory, extraDiskErr: Feb 19 15:49:03 crc 
kubenswrapper[4861]: I0219 15:49:03.834017 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:49:03 crc kubenswrapper[4861]: I0219 15:49:03.834681 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:49:33 crc kubenswrapper[4861]: I0219 15:49:33.833784 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:49:33 crc kubenswrapper[4861]: I0219 15:49:33.834403 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:50:03 crc kubenswrapper[4861]: I0219 15:50:03.834205 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:50:03 crc kubenswrapper[4861]: I0219 15:50:03.834878 4861 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:50:03 crc kubenswrapper[4861]: I0219 15:50:03.834931 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 15:50:03 crc kubenswrapper[4861]: I0219 15:50:03.835546 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"471c4d8441c74ce54c76fc8b9b621f040e399ab162f99d65bfb0cebea9487613"} pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 15:50:03 crc kubenswrapper[4861]: I0219 15:50:03.835613 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" containerID="cri-o://471c4d8441c74ce54c76fc8b9b621f040e399ab162f99d65bfb0cebea9487613" gracePeriod=600 Feb 19 15:50:03 crc kubenswrapper[4861]: E0219 15:50:03.963784 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:50:04 crc kubenswrapper[4861]: I0219 15:50:04.289748 4861 generic.go:334] "Generic (PLEG): container finished" podID="478e6971-05ac-43f2-99a2-cd93644c6227" 
containerID="471c4d8441c74ce54c76fc8b9b621f040e399ab162f99d65bfb0cebea9487613" exitCode=0 Feb 19 15:50:04 crc kubenswrapper[4861]: I0219 15:50:04.290079 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerDied","Data":"471c4d8441c74ce54c76fc8b9b621f040e399ab162f99d65bfb0cebea9487613"} Feb 19 15:50:04 crc kubenswrapper[4861]: I0219 15:50:04.290122 4861 scope.go:117] "RemoveContainer" containerID="6e9851b1e819468ec3c6c6e31183942518967daddbe129ad6ba1bd37f958d542" Feb 19 15:50:04 crc kubenswrapper[4861]: I0219 15:50:04.291019 4861 scope.go:117] "RemoveContainer" containerID="471c4d8441c74ce54c76fc8b9b621f040e399ab162f99d65bfb0cebea9487613" Feb 19 15:50:04 crc kubenswrapper[4861]: E0219 15:50:04.291334 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:50:18 crc kubenswrapper[4861]: I0219 15:50:18.977928 4861 scope.go:117] "RemoveContainer" containerID="471c4d8441c74ce54c76fc8b9b621f040e399ab162f99d65bfb0cebea9487613" Feb 19 15:50:18 crc kubenswrapper[4861]: E0219 15:50:18.979036 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:50:30 crc kubenswrapper[4861]: I0219 
15:50:30.976658 4861 scope.go:117] "RemoveContainer" containerID="471c4d8441c74ce54c76fc8b9b621f040e399ab162f99d65bfb0cebea9487613" Feb 19 15:50:30 crc kubenswrapper[4861]: E0219 15:50:30.977496 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:50:42 crc kubenswrapper[4861]: I0219 15:50:42.977813 4861 scope.go:117] "RemoveContainer" containerID="471c4d8441c74ce54c76fc8b9b621f040e399ab162f99d65bfb0cebea9487613" Feb 19 15:50:42 crc kubenswrapper[4861]: E0219 15:50:42.978629 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:50:56 crc kubenswrapper[4861]: I0219 15:50:56.977322 4861 scope.go:117] "RemoveContainer" containerID="471c4d8441c74ce54c76fc8b9b621f040e399ab162f99d65bfb0cebea9487613" Feb 19 15:50:56 crc kubenswrapper[4861]: E0219 15:50:56.978387 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:51:09 crc 
kubenswrapper[4861]: I0219 15:51:09.978329 4861 scope.go:117] "RemoveContainer" containerID="471c4d8441c74ce54c76fc8b9b621f040e399ab162f99d65bfb0cebea9487613" Feb 19 15:51:09 crc kubenswrapper[4861]: E0219 15:51:09.979269 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:51:13 crc kubenswrapper[4861]: I0219 15:51:13.868516 4861 scope.go:117] "RemoveContainer" containerID="450e9b230ce4c49b67d3c4de9f18b97a98c2f3b2c7ee3c8e05bbc1fe4efb55f6" Feb 19 15:51:23 crc kubenswrapper[4861]: I0219 15:51:23.982980 4861 scope.go:117] "RemoveContainer" containerID="471c4d8441c74ce54c76fc8b9b621f040e399ab162f99d65bfb0cebea9487613" Feb 19 15:51:23 crc kubenswrapper[4861]: E0219 15:51:23.983825 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:51:38 crc kubenswrapper[4861]: I0219 15:51:38.978089 4861 scope.go:117] "RemoveContainer" containerID="471c4d8441c74ce54c76fc8b9b621f040e399ab162f99d65bfb0cebea9487613" Feb 19 15:51:38 crc kubenswrapper[4861]: E0219 15:51:38.979642 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:51:50 crc kubenswrapper[4861]: I0219 15:51:50.977476 4861 scope.go:117] "RemoveContainer" containerID="471c4d8441c74ce54c76fc8b9b621f040e399ab162f99d65bfb0cebea9487613" Feb 19 15:51:50 crc kubenswrapper[4861]: E0219 15:51:50.978396 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:51:53 crc kubenswrapper[4861]: I0219 15:51:53.741011 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-87l8m"] Feb 19 15:51:53 crc kubenswrapper[4861]: E0219 15:51:53.742904 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="744eb7ba-0935-4295-a8e5-801e4454aa3a" containerName="extract-content" Feb 19 15:51:53 crc kubenswrapper[4861]: I0219 15:51:53.742943 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="744eb7ba-0935-4295-a8e5-801e4454aa3a" containerName="extract-content" Feb 19 15:51:53 crc kubenswrapper[4861]: E0219 15:51:53.742967 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="744eb7ba-0935-4295-a8e5-801e4454aa3a" containerName="extract-utilities" Feb 19 15:51:53 crc kubenswrapper[4861]: I0219 15:51:53.742980 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="744eb7ba-0935-4295-a8e5-801e4454aa3a" containerName="extract-utilities" Feb 19 15:51:53 crc kubenswrapper[4861]: E0219 15:51:53.743085 4861 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="57cf349b-5f84-4bf5-93dd-973f576cd277" containerName="extract-content" Feb 19 15:51:53 crc kubenswrapper[4861]: I0219 15:51:53.743108 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="57cf349b-5f84-4bf5-93dd-973f576cd277" containerName="extract-content" Feb 19 15:51:53 crc kubenswrapper[4861]: E0219 15:51:53.743202 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57cf349b-5f84-4bf5-93dd-973f576cd277" containerName="registry-server" Feb 19 15:51:53 crc kubenswrapper[4861]: I0219 15:51:53.743229 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="57cf349b-5f84-4bf5-93dd-973f576cd277" containerName="registry-server" Feb 19 15:51:53 crc kubenswrapper[4861]: E0219 15:51:53.743287 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57cf349b-5f84-4bf5-93dd-973f576cd277" containerName="extract-utilities" Feb 19 15:51:53 crc kubenswrapper[4861]: I0219 15:51:53.743305 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="57cf349b-5f84-4bf5-93dd-973f576cd277" containerName="extract-utilities" Feb 19 15:51:53 crc kubenswrapper[4861]: E0219 15:51:53.743326 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="744eb7ba-0935-4295-a8e5-801e4454aa3a" containerName="registry-server" Feb 19 15:51:53 crc kubenswrapper[4861]: I0219 15:51:53.743339 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="744eb7ba-0935-4295-a8e5-801e4454aa3a" containerName="registry-server" Feb 19 15:51:53 crc kubenswrapper[4861]: I0219 15:51:53.743831 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="57cf349b-5f84-4bf5-93dd-973f576cd277" containerName="registry-server" Feb 19 15:51:53 crc kubenswrapper[4861]: I0219 15:51:53.743885 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="744eb7ba-0935-4295-a8e5-801e4454aa3a" containerName="registry-server" Feb 19 15:51:53 crc kubenswrapper[4861]: I0219 15:51:53.748102 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-87l8m" Feb 19 15:51:53 crc kubenswrapper[4861]: I0219 15:51:53.757321 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-87l8m"] Feb 19 15:51:53 crc kubenswrapper[4861]: I0219 15:51:53.859679 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17d1d90e-7317-47bf-8e68-afc8718a2318-utilities\") pod \"redhat-operators-87l8m\" (UID: \"17d1d90e-7317-47bf-8e68-afc8718a2318\") " pod="openshift-marketplace/redhat-operators-87l8m" Feb 19 15:51:53 crc kubenswrapper[4861]: I0219 15:51:53.860179 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5h8s\" (UniqueName: \"kubernetes.io/projected/17d1d90e-7317-47bf-8e68-afc8718a2318-kube-api-access-z5h8s\") pod \"redhat-operators-87l8m\" (UID: \"17d1d90e-7317-47bf-8e68-afc8718a2318\") " pod="openshift-marketplace/redhat-operators-87l8m" Feb 19 15:51:53 crc kubenswrapper[4861]: I0219 15:51:53.860216 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17d1d90e-7317-47bf-8e68-afc8718a2318-catalog-content\") pod \"redhat-operators-87l8m\" (UID: \"17d1d90e-7317-47bf-8e68-afc8718a2318\") " pod="openshift-marketplace/redhat-operators-87l8m" Feb 19 15:51:53 crc kubenswrapper[4861]: I0219 15:51:53.962550 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17d1d90e-7317-47bf-8e68-afc8718a2318-utilities\") pod \"redhat-operators-87l8m\" (UID: \"17d1d90e-7317-47bf-8e68-afc8718a2318\") " pod="openshift-marketplace/redhat-operators-87l8m" Feb 19 15:51:53 crc kubenswrapper[4861]: I0219 15:51:53.962814 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-z5h8s\" (UniqueName: \"kubernetes.io/projected/17d1d90e-7317-47bf-8e68-afc8718a2318-kube-api-access-z5h8s\") pod \"redhat-operators-87l8m\" (UID: \"17d1d90e-7317-47bf-8e68-afc8718a2318\") " pod="openshift-marketplace/redhat-operators-87l8m" Feb 19 15:51:53 crc kubenswrapper[4861]: I0219 15:51:53.962867 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17d1d90e-7317-47bf-8e68-afc8718a2318-catalog-content\") pod \"redhat-operators-87l8m\" (UID: \"17d1d90e-7317-47bf-8e68-afc8718a2318\") " pod="openshift-marketplace/redhat-operators-87l8m" Feb 19 15:51:53 crc kubenswrapper[4861]: I0219 15:51:53.963244 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17d1d90e-7317-47bf-8e68-afc8718a2318-utilities\") pod \"redhat-operators-87l8m\" (UID: \"17d1d90e-7317-47bf-8e68-afc8718a2318\") " pod="openshift-marketplace/redhat-operators-87l8m" Feb 19 15:51:53 crc kubenswrapper[4861]: I0219 15:51:53.963499 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17d1d90e-7317-47bf-8e68-afc8718a2318-catalog-content\") pod \"redhat-operators-87l8m\" (UID: \"17d1d90e-7317-47bf-8e68-afc8718a2318\") " pod="openshift-marketplace/redhat-operators-87l8m" Feb 19 15:51:53 crc kubenswrapper[4861]: I0219 15:51:53.987697 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5h8s\" (UniqueName: \"kubernetes.io/projected/17d1d90e-7317-47bf-8e68-afc8718a2318-kube-api-access-z5h8s\") pod \"redhat-operators-87l8m\" (UID: \"17d1d90e-7317-47bf-8e68-afc8718a2318\") " pod="openshift-marketplace/redhat-operators-87l8m" Feb 19 15:51:54 crc kubenswrapper[4861]: I0219 15:51:54.093254 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-87l8m" Feb 19 15:51:54 crc kubenswrapper[4861]: I0219 15:51:54.578787 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-87l8m"] Feb 19 15:51:54 crc kubenswrapper[4861]: I0219 15:51:54.629165 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87l8m" event={"ID":"17d1d90e-7317-47bf-8e68-afc8718a2318","Type":"ContainerStarted","Data":"7703bc94223ca251b7b017413235fd167239bfc9398ba9ee9b7a331fd1d2c656"} Feb 19 15:51:55 crc kubenswrapper[4861]: I0219 15:51:55.642256 4861 generic.go:334] "Generic (PLEG): container finished" podID="17d1d90e-7317-47bf-8e68-afc8718a2318" containerID="f6c416109adb35b7a6797ce1f08037c1849e56f803dee65dd14b8cdc552d3ecd" exitCode=0 Feb 19 15:51:55 crc kubenswrapper[4861]: I0219 15:51:55.642322 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87l8m" event={"ID":"17d1d90e-7317-47bf-8e68-afc8718a2318","Type":"ContainerDied","Data":"f6c416109adb35b7a6797ce1f08037c1849e56f803dee65dd14b8cdc552d3ecd"} Feb 19 15:51:55 crc kubenswrapper[4861]: I0219 15:51:55.646889 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 15:51:57 crc kubenswrapper[4861]: I0219 15:51:57.667744 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87l8m" event={"ID":"17d1d90e-7317-47bf-8e68-afc8718a2318","Type":"ContainerStarted","Data":"e03845ddcfaefeb62c017bf91c795aea48c93ae7781241b20ee82bf865130c65"} Feb 19 15:52:01 crc kubenswrapper[4861]: I0219 15:52:01.737188 4861 generic.go:334] "Generic (PLEG): container finished" podID="17d1d90e-7317-47bf-8e68-afc8718a2318" containerID="e03845ddcfaefeb62c017bf91c795aea48c93ae7781241b20ee82bf865130c65" exitCode=0 Feb 19 15:52:01 crc kubenswrapper[4861]: I0219 15:52:01.737336 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-87l8m" event={"ID":"17d1d90e-7317-47bf-8e68-afc8718a2318","Type":"ContainerDied","Data":"e03845ddcfaefeb62c017bf91c795aea48c93ae7781241b20ee82bf865130c65"} Feb 19 15:52:02 crc kubenswrapper[4861]: I0219 15:52:02.750652 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87l8m" event={"ID":"17d1d90e-7317-47bf-8e68-afc8718a2318","Type":"ContainerStarted","Data":"a27df46db6d2bd09c11629b60654b6308b372ba417b9783ae7bbc8e38f3a6214"} Feb 19 15:52:02 crc kubenswrapper[4861]: I0219 15:52:02.785952 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-87l8m" podStartSLOduration=3.188054949 podStartE2EDuration="9.785919157s" podCreationTimestamp="2026-02-19 15:51:53 +0000 UTC" firstStartedPulling="2026-02-19 15:51:55.646441446 +0000 UTC m=+9730.307544674" lastFinishedPulling="2026-02-19 15:52:02.244305644 +0000 UTC m=+9736.905408882" observedRunningTime="2026-02-19 15:52:02.776787892 +0000 UTC m=+9737.437891130" watchObservedRunningTime="2026-02-19 15:52:02.785919157 +0000 UTC m=+9737.447022435" Feb 19 15:52:03 crc kubenswrapper[4861]: I0219 15:52:03.978491 4861 scope.go:117] "RemoveContainer" containerID="471c4d8441c74ce54c76fc8b9b621f040e399ab162f99d65bfb0cebea9487613" Feb 19 15:52:03 crc kubenswrapper[4861]: E0219 15:52:03.978726 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:52:04 crc kubenswrapper[4861]: I0219 15:52:04.093997 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-87l8m" Feb 19 15:52:04 crc kubenswrapper[4861]: I0219 15:52:04.094066 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-87l8m" Feb 19 15:52:05 crc kubenswrapper[4861]: I0219 15:52:05.155388 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-87l8m" podUID="17d1d90e-7317-47bf-8e68-afc8718a2318" containerName="registry-server" probeResult="failure" output=< Feb 19 15:52:05 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Feb 19 15:52:05 crc kubenswrapper[4861]: > Feb 19 15:52:14 crc kubenswrapper[4861]: I0219 15:52:14.504986 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-87l8m" Feb 19 15:52:14 crc kubenswrapper[4861]: I0219 15:52:14.577740 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-87l8m" Feb 19 15:52:14 crc kubenswrapper[4861]: I0219 15:52:14.748000 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-87l8m"] Feb 19 15:52:15 crc kubenswrapper[4861]: I0219 15:52:15.911146 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-87l8m" podUID="17d1d90e-7317-47bf-8e68-afc8718a2318" containerName="registry-server" containerID="cri-o://a27df46db6d2bd09c11629b60654b6308b372ba417b9783ae7bbc8e38f3a6214" gracePeriod=2 Feb 19 15:52:16 crc kubenswrapper[4861]: I0219 15:52:16.922240 4861 generic.go:334] "Generic (PLEG): container finished" podID="17d1d90e-7317-47bf-8e68-afc8718a2318" containerID="a27df46db6d2bd09c11629b60654b6308b372ba417b9783ae7bbc8e38f3a6214" exitCode=0 Feb 19 15:52:16 crc kubenswrapper[4861]: I0219 15:52:16.922851 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87l8m" 
event={"ID":"17d1d90e-7317-47bf-8e68-afc8718a2318","Type":"ContainerDied","Data":"a27df46db6d2bd09c11629b60654b6308b372ba417b9783ae7bbc8e38f3a6214"} Feb 19 15:52:16 crc kubenswrapper[4861]: I0219 15:52:16.922889 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87l8m" event={"ID":"17d1d90e-7317-47bf-8e68-afc8718a2318","Type":"ContainerDied","Data":"7703bc94223ca251b7b017413235fd167239bfc9398ba9ee9b7a331fd1d2c656"} Feb 19 15:52:16 crc kubenswrapper[4861]: I0219 15:52:16.922910 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7703bc94223ca251b7b017413235fd167239bfc9398ba9ee9b7a331fd1d2c656" Feb 19 15:52:16 crc kubenswrapper[4861]: I0219 15:52:16.987409 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-87l8m" Feb 19 15:52:17 crc kubenswrapper[4861]: I0219 15:52:17.097963 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5h8s\" (UniqueName: \"kubernetes.io/projected/17d1d90e-7317-47bf-8e68-afc8718a2318-kube-api-access-z5h8s\") pod \"17d1d90e-7317-47bf-8e68-afc8718a2318\" (UID: \"17d1d90e-7317-47bf-8e68-afc8718a2318\") " Feb 19 15:52:17 crc kubenswrapper[4861]: I0219 15:52:17.098047 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17d1d90e-7317-47bf-8e68-afc8718a2318-catalog-content\") pod \"17d1d90e-7317-47bf-8e68-afc8718a2318\" (UID: \"17d1d90e-7317-47bf-8e68-afc8718a2318\") " Feb 19 15:52:17 crc kubenswrapper[4861]: I0219 15:52:17.098185 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17d1d90e-7317-47bf-8e68-afc8718a2318-utilities\") pod \"17d1d90e-7317-47bf-8e68-afc8718a2318\" (UID: \"17d1d90e-7317-47bf-8e68-afc8718a2318\") " Feb 19 15:52:17 crc kubenswrapper[4861]: I0219 
15:52:17.099406 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17d1d90e-7317-47bf-8e68-afc8718a2318-utilities" (OuterVolumeSpecName: "utilities") pod "17d1d90e-7317-47bf-8e68-afc8718a2318" (UID: "17d1d90e-7317-47bf-8e68-afc8718a2318"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:52:17 crc kubenswrapper[4861]: I0219 15:52:17.105162 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17d1d90e-7317-47bf-8e68-afc8718a2318-kube-api-access-z5h8s" (OuterVolumeSpecName: "kube-api-access-z5h8s") pod "17d1d90e-7317-47bf-8e68-afc8718a2318" (UID: "17d1d90e-7317-47bf-8e68-afc8718a2318"). InnerVolumeSpecName "kube-api-access-z5h8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:52:17 crc kubenswrapper[4861]: I0219 15:52:17.201213 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5h8s\" (UniqueName: \"kubernetes.io/projected/17d1d90e-7317-47bf-8e68-afc8718a2318-kube-api-access-z5h8s\") on node \"crc\" DevicePath \"\"" Feb 19 15:52:17 crc kubenswrapper[4861]: I0219 15:52:17.201249 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17d1d90e-7317-47bf-8e68-afc8718a2318-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:52:17 crc kubenswrapper[4861]: I0219 15:52:17.243144 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17d1d90e-7317-47bf-8e68-afc8718a2318-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "17d1d90e-7317-47bf-8e68-afc8718a2318" (UID: "17d1d90e-7317-47bf-8e68-afc8718a2318"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:52:17 crc kubenswrapper[4861]: I0219 15:52:17.304128 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17d1d90e-7317-47bf-8e68-afc8718a2318-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:52:17 crc kubenswrapper[4861]: I0219 15:52:17.932443 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-87l8m" Feb 19 15:52:17 crc kubenswrapper[4861]: I0219 15:52:17.977867 4861 scope.go:117] "RemoveContainer" containerID="471c4d8441c74ce54c76fc8b9b621f040e399ab162f99d65bfb0cebea9487613" Feb 19 15:52:17 crc kubenswrapper[4861]: E0219 15:52:17.978270 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:52:18 crc kubenswrapper[4861]: I0219 15:52:18.007697 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-87l8m"] Feb 19 15:52:18 crc kubenswrapper[4861]: I0219 15:52:18.007751 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-87l8m"] Feb 19 15:52:19 crc kubenswrapper[4861]: I0219 15:52:19.989719 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17d1d90e-7317-47bf-8e68-afc8718a2318" path="/var/lib/kubelet/pods/17d1d90e-7317-47bf-8e68-afc8718a2318/volumes" Feb 19 15:52:30 crc kubenswrapper[4861]: I0219 15:52:30.977968 4861 scope.go:117] "RemoveContainer" containerID="471c4d8441c74ce54c76fc8b9b621f040e399ab162f99d65bfb0cebea9487613" Feb 19 15:52:30 crc 
kubenswrapper[4861]: E0219 15:52:30.979183 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:52:43 crc kubenswrapper[4861]: I0219 15:52:43.978719 4861 scope.go:117] "RemoveContainer" containerID="471c4d8441c74ce54c76fc8b9b621f040e399ab162f99d65bfb0cebea9487613" Feb 19 15:52:43 crc kubenswrapper[4861]: E0219 15:52:43.979725 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:52:54 crc kubenswrapper[4861]: I0219 15:52:54.977342 4861 scope.go:117] "RemoveContainer" containerID="471c4d8441c74ce54c76fc8b9b621f040e399ab162f99d65bfb0cebea9487613" Feb 19 15:52:54 crc kubenswrapper[4861]: E0219 15:52:54.978794 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:53:07 crc kubenswrapper[4861]: I0219 15:53:07.977837 4861 scope.go:117] "RemoveContainer" containerID="471c4d8441c74ce54c76fc8b9b621f040e399ab162f99d65bfb0cebea9487613" Feb 
19 15:53:07 crc kubenswrapper[4861]: E0219 15:53:07.979169 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:53:21 crc kubenswrapper[4861]: I0219 15:53:21.978708 4861 scope.go:117] "RemoveContainer" containerID="471c4d8441c74ce54c76fc8b9b621f040e399ab162f99d65bfb0cebea9487613" Feb 19 15:53:21 crc kubenswrapper[4861]: E0219 15:53:21.979757 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:53:36 crc kubenswrapper[4861]: I0219 15:53:36.977798 4861 scope.go:117] "RemoveContainer" containerID="471c4d8441c74ce54c76fc8b9b621f040e399ab162f99d65bfb0cebea9487613" Feb 19 15:53:36 crc kubenswrapper[4861]: E0219 15:53:36.978798 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:53:47 crc kubenswrapper[4861]: I0219 15:53:47.978040 4861 scope.go:117] "RemoveContainer" 
containerID="471c4d8441c74ce54c76fc8b9b621f040e399ab162f99d65bfb0cebea9487613" Feb 19 15:53:47 crc kubenswrapper[4861]: E0219 15:53:47.979122 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:54:00 crc kubenswrapper[4861]: I0219 15:54:00.978552 4861 scope.go:117] "RemoveContainer" containerID="471c4d8441c74ce54c76fc8b9b621f040e399ab162f99d65bfb0cebea9487613" Feb 19 15:54:00 crc kubenswrapper[4861]: E0219 15:54:00.979628 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:54:12 crc kubenswrapper[4861]: I0219 15:54:12.977196 4861 scope.go:117] "RemoveContainer" containerID="471c4d8441c74ce54c76fc8b9b621f040e399ab162f99d65bfb0cebea9487613" Feb 19 15:54:12 crc kubenswrapper[4861]: E0219 15:54:12.978010 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:54:26 crc kubenswrapper[4861]: I0219 15:54:26.978470 4861 scope.go:117] 
"RemoveContainer" containerID="471c4d8441c74ce54c76fc8b9b621f040e399ab162f99d65bfb0cebea9487613" Feb 19 15:54:26 crc kubenswrapper[4861]: E0219 15:54:26.979696 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:54:30 crc kubenswrapper[4861]: I0219 15:54:30.197745 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_7d8a2ddc-a471-4e7e-8e9a-fc205b80a904/init-config-reloader/0.log" Feb 19 15:54:30 crc kubenswrapper[4861]: I0219 15:54:30.633818 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_7d8a2ddc-a471-4e7e-8e9a-fc205b80a904/init-config-reloader/0.log" Feb 19 15:54:30 crc kubenswrapper[4861]: I0219 15:54:30.676056 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_7d8a2ddc-a471-4e7e-8e9a-fc205b80a904/config-reloader/0.log" Feb 19 15:54:30 crc kubenswrapper[4861]: I0219 15:54:30.683948 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_7d8a2ddc-a471-4e7e-8e9a-fc205b80a904/alertmanager/0.log" Feb 19 15:54:30 crc kubenswrapper[4861]: I0219 15:54:30.843159 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_30f6475f-97a7-464d-9aed-7949f9ae6d45/aodh-api/0.log" Feb 19 15:54:30 crc kubenswrapper[4861]: I0219 15:54:30.925576 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_30f6475f-97a7-464d-9aed-7949f9ae6d45/aodh-evaluator/0.log" Feb 19 15:54:30 crc kubenswrapper[4861]: I0219 15:54:30.973049 4861 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_aodh-0_30f6475f-97a7-464d-9aed-7949f9ae6d45/aodh-listener/0.log" Feb 19 15:54:31 crc kubenswrapper[4861]: I0219 15:54:31.058775 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_30f6475f-97a7-464d-9aed-7949f9ae6d45/aodh-notifier/0.log" Feb 19 15:54:31 crc kubenswrapper[4861]: I0219 15:54:31.181675 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-c6db896c8-9k2wc_f0f381bb-c798-4ace-a1d6-97da2274a601/barbican-api-log/0.log" Feb 19 15:54:31 crc kubenswrapper[4861]: I0219 15:54:31.193407 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-c6db896c8-9k2wc_f0f381bb-c798-4ace-a1d6-97da2274a601/barbican-api/0.log" Feb 19 15:54:31 crc kubenswrapper[4861]: I0219 15:54:31.390800 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-f7b4998cd-bqbdt_054d3fa9-6f6e-4d14-8759-b626a8ff268b/barbican-keystone-listener-log/0.log" Feb 19 15:54:31 crc kubenswrapper[4861]: I0219 15:54:31.432905 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-f7b4998cd-bqbdt_054d3fa9-6f6e-4d14-8759-b626a8ff268b/barbican-keystone-listener/0.log" Feb 19 15:54:31 crc kubenswrapper[4861]: I0219 15:54:31.557703 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7ff999c95-tks6q_caeb0702-4acd-43f7-bb98-659931b75efa/barbican-worker/0.log" Feb 19 15:54:31 crc kubenswrapper[4861]: I0219 15:54:31.588272 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7ff999c95-tks6q_caeb0702-4acd-43f7-bb98-659931b75efa/barbican-worker-log/0.log" Feb 19 15:54:31 crc kubenswrapper[4861]: I0219 15:54:31.745722 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-openstack-openstack-cell1-x6tgq_3c2c0121-c7dc-4ae9-b3e0-c1c1cc5c8808/bootstrap-openstack-openstack-cell1/0.log" Feb 19 
15:54:31 crc kubenswrapper[4861]: I0219 15:54:31.858955 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0c2df18c-c07d-43f7-9cef-373ea36b3c27/ceilometer-central-agent/0.log" Feb 19 15:54:31 crc kubenswrapper[4861]: I0219 15:54:31.959177 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0c2df18c-c07d-43f7-9cef-373ea36b3c27/proxy-httpd/0.log" Feb 19 15:54:31 crc kubenswrapper[4861]: I0219 15:54:31.964019 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0c2df18c-c07d-43f7-9cef-373ea36b3c27/ceilometer-notification-agent/0.log" Feb 19 15:54:32 crc kubenswrapper[4861]: I0219 15:54:32.087618 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0c2df18c-c07d-43f7-9cef-373ea36b3c27/sg-core/0.log" Feb 19 15:54:32 crc kubenswrapper[4861]: I0219 15:54:32.196339 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_27466fa7-89f2-400e-9baa-0f05e1450feb/cinder-api-log/0.log" Feb 19 15:54:32 crc kubenswrapper[4861]: I0219 15:54:32.266278 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_27466fa7-89f2-400e-9baa-0f05e1450feb/cinder-api/0.log" Feb 19 15:54:32 crc kubenswrapper[4861]: I0219 15:54:32.446028 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_64c54201-87f2-4db9-8ce9-e1023ac576b1/cinder-scheduler/0.log" Feb 19 15:54:32 crc kubenswrapper[4861]: I0219 15:54:32.489490 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_64c54201-87f2-4db9-8ce9-e1023ac576b1/probe/0.log" Feb 19 15:54:33 crc kubenswrapper[4861]: I0219 15:54:33.229649 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-openstack-openstack-cell1-vmdpv_b744679b-533f-4182-8f10-1b0160eda028/configure-network-openstack-openstack-cell1/0.log" Feb 19 15:54:33 crc kubenswrapper[4861]: 
I0219 15:54:33.499089 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6fccdd6f49-vt4bh_7750e32e-cbbc-44ea-85ca-d3df49562c97/init/0.log" Feb 19 15:54:33 crc kubenswrapper[4861]: I0219 15:54:33.518530 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-cgfm6_386cc4aa-29ac-48cf-9b15-5a3587a6245e/configure-os-openstack-openstack-cell1/0.log" Feb 19 15:54:33 crc kubenswrapper[4861]: I0219 15:54:33.699087 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6fccdd6f49-vt4bh_7750e32e-cbbc-44ea-85ca-d3df49562c97/init/0.log" Feb 19 15:54:33 crc kubenswrapper[4861]: I0219 15:54:33.745891 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6fccdd6f49-vt4bh_7750e32e-cbbc-44ea-85ca-d3df49562c97/dnsmasq-dns/0.log" Feb 19 15:54:33 crc kubenswrapper[4861]: I0219 15:54:33.785586 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-openstack-openstack-cell1-ntcpr_634c282a-1ccd-4f31-a944-704e2cafa09a/download-cache-openstack-openstack-cell1/0.log" Feb 19 15:54:34 crc kubenswrapper[4861]: I0219 15:54:34.157014 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8237d625-40a8-4b4c-a789-7ca255e19437/glance-httpd/0.log" Feb 19 15:54:34 crc kubenswrapper[4861]: I0219 15:54:34.180164 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8237d625-40a8-4b4c-a789-7ca255e19437/glance-log/0.log" Feb 19 15:54:34 crc kubenswrapper[4861]: I0219 15:54:34.249026 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c41b1099-5c94-4231-8dad-7204d5078381/glance-httpd/0.log" Feb 19 15:54:34 crc kubenswrapper[4861]: I0219 15:54:34.342087 4861 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_c41b1099-5c94-4231-8dad-7204d5078381/glance-log/0.log" Feb 19 15:54:34 crc kubenswrapper[4861]: I0219 15:54:34.919318 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-578855784b-hjvjg_6353b71b-766e-410d-bd79-bc9e820919ae/heat-api/0.log" Feb 19 15:54:34 crc kubenswrapper[4861]: I0219 15:54:34.961371 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-8689fd4cf7-qjh56_5e9b28ad-f37a-4283-a2ed-3f04683ffc4f/heat-cfnapi/0.log" Feb 19 15:54:35 crc kubenswrapper[4861]: I0219 15:54:35.088321 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-59b7698fb8-spssh_e838a32a-b5f5-4ecf-9d2a-7280761b6ee8/heat-engine/0.log" Feb 19 15:54:35 crc kubenswrapper[4861]: I0219 15:54:35.257568 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-84f6b969fb-qdjxp_905a926a-0635-4bd4-8746-ecacd708ef8a/horizon/0.log" Feb 19 15:54:35 crc kubenswrapper[4861]: I0219 15:54:35.463720 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-openstack-openstack-cell1-skflh_9e32920d-251a-4eea-9ef5-db5f4aad9ecd/install-certs-openstack-openstack-cell1/0.log" Feb 19 15:54:35 crc kubenswrapper[4861]: I0219 15:54:35.497689 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-openstack-openstack-cell1-pbmtw_158f5f29-6b5f-43dc-a5a3-12353999439f/install-os-openstack-openstack-cell1/0.log" Feb 19 15:54:35 crc kubenswrapper[4861]: I0219 15:54:35.588601 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-84f6b969fb-qdjxp_905a926a-0635-4bd4-8746-ecacd708ef8a/horizon-log/0.log" Feb 19 15:54:35 crc kubenswrapper[4861]: I0219 15:54:35.797620 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29525221-cbp6w_3c62bca6-3d72-418c-97ef-6ac12c9bbd52/keystone-cron/0.log" Feb 19 15:54:35 crc kubenswrapper[4861]: 
I0219 15:54:35.914905 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-795578fb95-dhqzr_b24d56bd-e82b-4921-944e-d77ffda92dbf/keystone-api/0.log" Feb 19 15:54:35 crc kubenswrapper[4861]: I0219 15:54:35.964773 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_910c8f52-335a-40bf-b5d5-ae475656c55b/kube-state-metrics/0.log" Feb 19 15:54:36 crc kubenswrapper[4861]: I0219 15:54:36.046432 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-openstack-openstack-cell1-klvns_03e9e117-86ff-40f4-97c2-bbb611cd3cd9/libvirt-openstack-openstack-cell1/0.log" Feb 19 15:54:36 crc kubenswrapper[4861]: I0219 15:54:36.372647 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6d5dc9cd8f-tqndz_1432e96f-0b8e-465e-b7dc-a70f5dd0b010/neutron-httpd/0.log" Feb 19 15:54:36 crc kubenswrapper[4861]: I0219 15:54:36.416302 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6d5dc9cd8f-tqndz_1432e96f-0b8e-465e-b7dc-a70f5dd0b010/neutron-api/0.log" Feb 19 15:54:36 crc kubenswrapper[4861]: I0219 15:54:36.455983 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dhcp-openstack-openstack-cell1-bs9v4_a87fc82d-f819-4b14-8508-6ce6dae8eda5/neutron-dhcp-openstack-openstack-cell1/0.log" Feb 19 15:54:36 crc kubenswrapper[4861]: I0219 15:54:36.690259 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-sriov-openstack-openstack-cell1-xcjzx_da08be56-9fc3-4723-afec-e824bcea0208/neutron-sriov-openstack-openstack-cell1/0.log" Feb 19 15:54:36 crc kubenswrapper[4861]: I0219 15:54:36.734688 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-openstack-openstack-cell1-xzqsm_7c70f2cb-04ad-4844-ac5b-262e93aca1e7/neutron-metadata-openstack-openstack-cell1/0.log" Feb 19 15:54:37 crc kubenswrapper[4861]: I0219 15:54:37.134220 4861 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_nova-api-0_34d5770e-d9f2-4b84-9700-f1205577fd3d/nova-api-log/0.log" Feb 19 15:54:37 crc kubenswrapper[4861]: I0219 15:54:37.333326 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_34d5770e-d9f2-4b84-9700-f1205577fd3d/nova-api-api/0.log" Feb 19 15:54:37 crc kubenswrapper[4861]: I0219 15:54:37.729806 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_19397b58-b94c-4cbb-9026-e994604290e0/nova-cell1-conductor-conductor/0.log" Feb 19 15:54:37 crc kubenswrapper[4861]: I0219 15:54:37.732475 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_7253b71a-0900-4e57-8d58-4e935ace7b4a/nova-cell0-conductor-conductor/0.log" Feb 19 15:54:38 crc kubenswrapper[4861]: I0219 15:54:38.005743 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_7da7a051-88c9-40df-b46c-7c0e1cf651a4/nova-cell1-novncproxy-novncproxy/0.log" Feb 19 15:54:38 crc kubenswrapper[4861]: I0219 15:54:38.033942 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellm67qn_1ba6ecf0-8541-46e2-b17e-46cc3491f870/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1/0.log" Feb 19 15:54:38 crc kubenswrapper[4861]: I0219 15:54:38.259081 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-openstack-cell1-vghd2_00127d70-73bb-4b8e-8268-a6f858a14e41/nova-cell1-openstack-openstack-cell1/0.log" Feb 19 15:54:38 crc kubenswrapper[4861]: I0219 15:54:38.409576 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_f0490c4d-3f19-4ef9-a74d-db926199c4c2/nova-metadata-log/0.log" Feb 19 15:54:38 crc kubenswrapper[4861]: I0219 15:54:38.707352 4861 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-scheduler-0_141cecd9-d39a-4ba9-99a4-cb2befa9571c/nova-scheduler-scheduler/0.log" Feb 19 15:54:38 crc kubenswrapper[4861]: I0219 15:54:38.778525 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-854dbc447d-kqb5w_1d5b4349-480d-4409-a53a-b7a41ed25ea6/init/0.log" Feb 19 15:54:38 crc kubenswrapper[4861]: I0219 15:54:38.960554 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-854dbc447d-kqb5w_1d5b4349-480d-4409-a53a-b7a41ed25ea6/init/0.log" Feb 19 15:54:39 crc kubenswrapper[4861]: I0219 15:54:39.013411 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_f0490c4d-3f19-4ef9-a74d-db926199c4c2/nova-metadata-metadata/0.log" Feb 19 15:54:39 crc kubenswrapper[4861]: I0219 15:54:39.050264 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-854dbc447d-kqb5w_1d5b4349-480d-4409-a53a-b7a41ed25ea6/octavia-api-provider-agent/0.log" Feb 19 15:54:39 crc kubenswrapper[4861]: I0219 15:54:39.263282 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-854dbc447d-kqb5w_1d5b4349-480d-4409-a53a-b7a41ed25ea6/octavia-api/0.log" Feb 19 15:54:39 crc kubenswrapper[4861]: I0219 15:54:39.309117 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-4sdp8_1fe20438-68f6-481f-9699-d752ab537d28/init/0.log" Feb 19 15:54:39 crc kubenswrapper[4861]: I0219 15:54:39.462913 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-4sdp8_1fe20438-68f6-481f-9699-d752ab537d28/init/0.log" Feb 19 15:54:39 crc kubenswrapper[4861]: I0219 15:54:39.543271 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-v98v6_f6730e03-5e73-4026-865a-c2ca618f8cd4/init/0.log" Feb 19 15:54:39 crc kubenswrapper[4861]: I0219 15:54:39.550400 4861 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_octavia-healthmanager-4sdp8_1fe20438-68f6-481f-9699-d752ab537d28/octavia-healthmanager/0.log" Feb 19 15:54:39 crc kubenswrapper[4861]: I0219 15:54:39.883724 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-8d4564f8f-lctzt_9836a85c-3b94-4737-97bd-8e16c62a23fa/init/0.log" Feb 19 15:54:39 crc kubenswrapper[4861]: I0219 15:54:39.919092 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-v98v6_f6730e03-5e73-4026-865a-c2ca618f8cd4/init/0.log" Feb 19 15:54:39 crc kubenswrapper[4861]: I0219 15:54:39.941493 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-v98v6_f6730e03-5e73-4026-865a-c2ca618f8cd4/octavia-housekeeping/0.log" Feb 19 15:54:39 crc kubenswrapper[4861]: I0219 15:54:39.977519 4861 scope.go:117] "RemoveContainer" containerID="471c4d8441c74ce54c76fc8b9b621f040e399ab162f99d65bfb0cebea9487613" Feb 19 15:54:39 crc kubenswrapper[4861]: E0219 15:54:39.977736 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:54:40 crc kubenswrapper[4861]: I0219 15:54:40.121974 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-8d4564f8f-lctzt_9836a85c-3b94-4737-97bd-8e16c62a23fa/init/0.log" Feb 19 15:54:40 crc kubenswrapper[4861]: I0219 15:54:40.170732 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-tvz2m_43c139b2-075e-4659-af80-1a7e414a7d8c/init/0.log" Feb 19 15:54:40 crc kubenswrapper[4861]: I0219 15:54:40.204827 4861 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_octavia-image-upload-8d4564f8f-lctzt_9836a85c-3b94-4737-97bd-8e16c62a23fa/octavia-amphora-httpd/0.log" Feb 19 15:54:40 crc kubenswrapper[4861]: I0219 15:54:40.351827 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-tvz2m_43c139b2-075e-4659-af80-1a7e414a7d8c/init/0.log" Feb 19 15:54:40 crc kubenswrapper[4861]: I0219 15:54:40.398279 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-vjbx2_cdcf2056-7111-48d8-a8b7-e5901babe337/init/0.log" Feb 19 15:54:40 crc kubenswrapper[4861]: I0219 15:54:40.466037 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-tvz2m_43c139b2-075e-4659-af80-1a7e414a7d8c/octavia-rsyslog/0.log" Feb 19 15:54:40 crc kubenswrapper[4861]: I0219 15:54:40.698701 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-vjbx2_cdcf2056-7111-48d8-a8b7-e5901babe337/init/0.log" Feb 19 15:54:40 crc kubenswrapper[4861]: I0219 15:54:40.854203 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_083aff08-66c5-4e71-8f1e-6fabd56dab6c/mysql-bootstrap/0.log" Feb 19 15:54:40 crc kubenswrapper[4861]: I0219 15:54:40.874962 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-vjbx2_cdcf2056-7111-48d8-a8b7-e5901babe337/octavia-worker/0.log" Feb 19 15:54:40 crc kubenswrapper[4861]: I0219 15:54:40.944601 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_083aff08-66c5-4e71-8f1e-6fabd56dab6c/mysql-bootstrap/0.log" Feb 19 15:54:41 crc kubenswrapper[4861]: I0219 15:54:41.121628 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_083aff08-66c5-4e71-8f1e-6fabd56dab6c/galera/0.log" Feb 19 15:54:41 crc kubenswrapper[4861]: I0219 15:54:41.359412 4861 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_389dd318-a4b4-4dfc-b307-d6790470f8a1/mysql-bootstrap/0.log" Feb 19 15:54:42 crc kubenswrapper[4861]: I0219 15:54:42.101160 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_389dd318-a4b4-4dfc-b307-d6790470f8a1/galera/0.log" Feb 19 15:54:42 crc kubenswrapper[4861]: I0219 15:54:42.142274 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_389dd318-a4b4-4dfc-b307-d6790470f8a1/mysql-bootstrap/0.log" Feb 19 15:54:42 crc kubenswrapper[4861]: I0219 15:54:42.169832 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_07ea9cf9-b070-4dec-9a77-fd51656875d4/openstackclient/0.log" Feb 19 15:54:42 crc kubenswrapper[4861]: I0219 15:54:42.385609 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-868md_9636860b-6fb2-481c-a56b-ce7b093cb8b7/openstack-network-exporter/0.log" Feb 19 15:54:42 crc kubenswrapper[4861]: I0219 15:54:42.452678 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9h8bn_164388df-68b4-442e-bb1a-a0f27173cc13/ovsdb-server-init/0.log" Feb 19 15:54:42 crc kubenswrapper[4861]: I0219 15:54:42.739202 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9h8bn_164388df-68b4-442e-bb1a-a0f27173cc13/ovsdb-server/0.log" Feb 19 15:54:42 crc kubenswrapper[4861]: I0219 15:54:42.752472 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9h8bn_164388df-68b4-442e-bb1a-a0f27173cc13/ovsdb-server-init/0.log" Feb 19 15:54:42 crc kubenswrapper[4861]: I0219 15:54:42.754161 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9h8bn_164388df-68b4-442e-bb1a-a0f27173cc13/ovs-vswitchd/0.log" Feb 19 15:54:42 crc kubenswrapper[4861]: I0219 15:54:42.961999 4861 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-qqdlk_bfd0003e-b24e-49ad-ac09-5426edb96b7f/ovn-controller/0.log" Feb 19 15:54:43 crc kubenswrapper[4861]: I0219 15:54:43.016220 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f2aaf2d3-6d27-4808-88b7-fd79f1361924/openstack-network-exporter/0.log" Feb 19 15:54:43 crc kubenswrapper[4861]: I0219 15:54:43.083457 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f2aaf2d3-6d27-4808-88b7-fd79f1361924/ovn-northd/0.log" Feb 19 15:54:43 crc kubenswrapper[4861]: I0219 15:54:43.261999 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-openstack-openstack-cell1-7ptwt_a30771c6-fe90-4bb6-b97f-f7f2df485087/ovn-openstack-openstack-cell1/0.log" Feb 19 15:54:43 crc kubenswrapper[4861]: I0219 15:54:43.352050 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4c0421f7-ce54-4c46-9c05-c82787e349cf/openstack-network-exporter/0.log" Feb 19 15:54:43 crc kubenswrapper[4861]: I0219 15:54:43.476382 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4c0421f7-ce54-4c46-9c05-c82787e349cf/ovsdbserver-nb/0.log" Feb 19 15:54:43 crc kubenswrapper[4861]: I0219 15:54:43.533110 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_8a7303b9-35dd-4227-b008-5f0882fcb06d/openstack-network-exporter/0.log" Feb 19 15:54:43 crc kubenswrapper[4861]: I0219 15:54:43.564835 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_8a7303b9-35dd-4227-b008-5f0882fcb06d/ovsdbserver-nb/0.log" Feb 19 15:54:44 crc kubenswrapper[4861]: I0219 15:54:44.187332 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_c6508e9c-43bc-47c5-b3ba-44ee12181b5e/ovsdbserver-nb/0.log" Feb 19 15:54:44 crc kubenswrapper[4861]: I0219 15:54:44.189038 4861 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-2_c6508e9c-43bc-47c5-b3ba-44ee12181b5e/openstack-network-exporter/0.log" Feb 19 15:54:44 crc kubenswrapper[4861]: I0219 15:54:44.387352 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_78d3d32d-8837-453a-8dcd-4cf0451fdb6f/ovsdbserver-sb/0.log" Feb 19 15:54:44 crc kubenswrapper[4861]: I0219 15:54:44.486729 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_78d3d32d-8837-453a-8dcd-4cf0451fdb6f/openstack-network-exporter/0.log" Feb 19 15:54:44 crc kubenswrapper[4861]: I0219 15:54:44.564219 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_c2ea5f78-24ed-484c-94fb-de46d8dfdb09/openstack-network-exporter/0.log" Feb 19 15:54:44 crc kubenswrapper[4861]: I0219 15:54:44.580618 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_c2ea5f78-24ed-484c-94fb-de46d8dfdb09/ovsdbserver-sb/0.log" Feb 19 15:54:44 crc kubenswrapper[4861]: I0219 15:54:44.685913 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_a7e74a40-f854-4731-aacc-38bc3949ce38/openstack-network-exporter/0.log" Feb 19 15:54:44 crc kubenswrapper[4861]: I0219 15:54:44.782818 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_a7e74a40-f854-4731-aacc-38bc3949ce38/ovsdbserver-sb/0.log" Feb 19 15:54:45 crc kubenswrapper[4861]: I0219 15:54:45.075054 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-56bf55ff4b-kwr2d_59856e90-5b0e-49e1-acf6-882fee38a7ab/placement-api/0.log" Feb 19 15:54:45 crc kubenswrapper[4861]: I0219 15:54:45.147739 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-56bf55ff4b-kwr2d_59856e90-5b0e-49e1-acf6-882fee38a7ab/placement-log/0.log" Feb 19 15:54:45 crc kubenswrapper[4861]: I0219 15:54:45.378104 4861 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-chspxt_8af32add-4795-4445-be16-96d51882b8ea/pre-adoption-validation-openstack-pre-adoption-openstack-cell1/0.log" Feb 19 15:54:45 crc kubenswrapper[4861]: I0219 15:54:45.448161 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3a3757ed-9e0c-4d7a-8701-23da7b477f0f/init-config-reloader/0.log" Feb 19 15:54:45 crc kubenswrapper[4861]: I0219 15:54:45.704949 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3a3757ed-9e0c-4d7a-8701-23da7b477f0f/init-config-reloader/0.log" Feb 19 15:54:45 crc kubenswrapper[4861]: I0219 15:54:45.726597 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3a3757ed-9e0c-4d7a-8701-23da7b477f0f/config-reloader/0.log" Feb 19 15:54:45 crc kubenswrapper[4861]: I0219 15:54:45.760962 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3a3757ed-9e0c-4d7a-8701-23da7b477f0f/prometheus/0.log" Feb 19 15:54:45 crc kubenswrapper[4861]: I0219 15:54:45.833980 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3a3757ed-9e0c-4d7a-8701-23da7b477f0f/thanos-sidecar/0.log" Feb 19 15:54:45 crc kubenswrapper[4861]: I0219 15:54:45.970552 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_de081c0e-da11-4fbd-a24c-f38f6800df56/setup-container/0.log" Feb 19 15:54:46 crc kubenswrapper[4861]: I0219 15:54:46.253060 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_de081c0e-da11-4fbd-a24c-f38f6800df56/rabbitmq/0.log" Feb 19 15:54:46 crc kubenswrapper[4861]: I0219 15:54:46.272651 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_de081c0e-da11-4fbd-a24c-f38f6800df56/setup-container/0.log" Feb 19 
15:54:46 crc kubenswrapper[4861]: I0219 15:54:46.299693 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1dd55ddf-97da-4b63-9239-1d5c18a70b92/setup-container/0.log" Feb 19 15:54:46 crc kubenswrapper[4861]: I0219 15:54:46.465905 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1dd55ddf-97da-4b63-9239-1d5c18a70b92/setup-container/0.log" Feb 19 15:54:46 crc kubenswrapper[4861]: I0219 15:54:46.546103 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-openstack-openstack-cell1-h2jd9_f2663c4e-e513-465a-9758-122b6b2d63fb/reboot-os-openstack-openstack-cell1/0.log" Feb 19 15:54:46 crc kubenswrapper[4861]: I0219 15:54:46.728089 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-openstack-openstack-cell1-w9gq5_4ec4d380-cfbd-4b5a-a17a-d9815018d088/run-os-openstack-openstack-cell1/0.log" Feb 19 15:54:46 crc kubenswrapper[4861]: I0219 15:54:46.936626 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-openstack-hzg8p_1a9e753d-5b08-4abd-bc4e-34abb283079d/ssh-known-hosts-openstack/0.log" Feb 19 15:54:47 crc kubenswrapper[4861]: I0219 15:54:47.136906 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d4ffc8498-zcrlh_73ba1cca-8934-4068-8a44-00dc7b5a3726/proxy-server/0.log" Feb 19 15:54:47 crc kubenswrapper[4861]: I0219 15:54:47.332597 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-tz57m_38973f38-cefa-4543-807f-da43a6a21e7b/swift-ring-rebalance/0.log" Feb 19 15:54:47 crc kubenswrapper[4861]: I0219 15:54:47.404056 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d4ffc8498-zcrlh_73ba1cca-8934-4068-8a44-00dc7b5a3726/proxy-httpd/0.log" Feb 19 15:54:47 crc kubenswrapper[4861]: I0219 15:54:47.596688 4861 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_telemetry-openstack-openstack-cell1-k48pq_266f77b9-e649-47f8-8f78-735c0393960f/telemetry-openstack-openstack-cell1/0.log" Feb 19 15:54:47 crc kubenswrapper[4861]: I0219 15:54:47.900583 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tripleo-cleanup-tripleo-cleanup-openstack-cell1-j99pf_6ef1f749-73bc-4049-ba56-e022f58ca9d9/tripleo-cleanup-tripleo-cleanup-openstack-cell1/0.log" Feb 19 15:54:47 crc kubenswrapper[4861]: I0219 15:54:47.928967 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-openstack-openstack-cell1-2m2jd_c5cec2fe-8999-48a9-bb3c-7befdb9fd1a0/validate-network-openstack-openstack-cell1/0.log" Feb 19 15:54:49 crc kubenswrapper[4861]: I0219 15:54:49.149553 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1dd55ddf-97da-4b63-9239-1d5c18a70b92/rabbitmq/0.log" Feb 19 15:54:51 crc kubenswrapper[4861]: I0219 15:54:51.607808 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_9e3a5bbd-d36b-4e7a-9f66-73d216ab57c4/memcached/0.log" Feb 19 15:54:51 crc kubenswrapper[4861]: I0219 15:54:51.977483 4861 scope.go:117] "RemoveContainer" containerID="471c4d8441c74ce54c76fc8b9b621f040e399ab162f99d65bfb0cebea9487613" Feb 19 15:54:51 crc kubenswrapper[4861]: E0219 15:54:51.978055 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 15:55:06 crc kubenswrapper[4861]: I0219 15:55:06.977555 4861 scope.go:117] "RemoveContainer" containerID="471c4d8441c74ce54c76fc8b9b621f040e399ab162f99d65bfb0cebea9487613" Feb 19 15:55:07 crc 
kubenswrapper[4861]: I0219 15:55:07.913296 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"ff3b544d9a4670710a18d513b5e33301303cc58dc65eea445b0d5802e290a261"} Feb 19 15:55:19 crc kubenswrapper[4861]: I0219 15:55:19.769048 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7_07e95b41-53d8-4df5-9d1c-f12acaeea9ea/util/0.log" Feb 19 15:55:19 crc kubenswrapper[4861]: I0219 15:55:19.950820 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7_07e95b41-53d8-4df5-9d1c-f12acaeea9ea/util/0.log" Feb 19 15:55:19 crc kubenswrapper[4861]: I0219 15:55:19.970648 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7_07e95b41-53d8-4df5-9d1c-f12acaeea9ea/pull/0.log" Feb 19 15:55:19 crc kubenswrapper[4861]: I0219 15:55:19.973640 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7_07e95b41-53d8-4df5-9d1c-f12acaeea9ea/pull/0.log" Feb 19 15:55:20 crc kubenswrapper[4861]: I0219 15:55:20.151083 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7_07e95b41-53d8-4df5-9d1c-f12acaeea9ea/util/0.log" Feb 19 15:55:20 crc kubenswrapper[4861]: I0219 15:55:20.186752 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7_07e95b41-53d8-4df5-9d1c-f12acaeea9ea/pull/0.log" Feb 19 15:55:20 crc kubenswrapper[4861]: I0219 15:55:20.278540 4861 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sc4k7_07e95b41-53d8-4df5-9d1c-f12acaeea9ea/extract/0.log" Feb 19 15:55:21 crc kubenswrapper[4861]: I0219 15:55:21.295959 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-rgfz5_8b9ba3dc-beae-4d3b-8d8d-d595eb7c1ed4/manager/0.log" Feb 19 15:55:21 crc kubenswrapper[4861]: I0219 15:55:21.758500 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-hlnqx_a3459958-b0c6-41f2-afb6-0a9a15ca3837/manager/0.log" Feb 19 15:55:21 crc kubenswrapper[4861]: I0219 15:55:21.887405 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-b24sf_127c1b36-40d1-434a-803b-21cc75d9b41a/manager/0.log" Feb 19 15:55:22 crc kubenswrapper[4861]: I0219 15:55:22.224392 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-g942s_49dd31ac-b688-453a-9701-001ce3063ea7/manager/0.log" Feb 19 15:55:22 crc kubenswrapper[4861]: I0219 15:55:22.352847 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-tx4wr_2e56e0d7-2b43-4c87-912b-e91661077fcf/manager/0.log" Feb 19 15:55:23 crc kubenswrapper[4861]: I0219 15:55:23.298300 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-7lwn6_4401cea1-fce7-4ec1-938b-2519cf2a5521/manager/0.log" Feb 19 15:55:23 crc kubenswrapper[4861]: I0219 15:55:23.778509 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-6q8nw_0bd94e11-4fa6-4d29-89a9-e2a493d94b89/manager/0.log" Feb 19 15:55:23 crc kubenswrapper[4861]: I0219 15:55:23.910294 4861 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-g76jd_98d71c2d-33db-49a7-bb86-918858a91612/manager/0.log" Feb 19 15:55:24 crc kubenswrapper[4861]: I0219 15:55:24.098485 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-xtsr5_4c04516b-4856-4f67-abf9-722af4a25ab6/manager/0.log" Feb 19 15:55:24 crc kubenswrapper[4861]: I0219 15:55:24.199188 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-kp2bg_dacd1beb-af59-4f30-8b76-ef41658bf9f4/manager/0.log" Feb 19 15:55:24 crc kubenswrapper[4861]: I0219 15:55:24.377842 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-x8xxz_4e0aba21-f157-4cdc-8b37-b043ed6298c7/manager/0.log" Feb 19 15:55:24 crc kubenswrapper[4861]: I0219 15:55:24.627414 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-lgnlp_52fdb95f-0a68-4e4b-b205-06b492232999/manager/0.log" Feb 19 15:55:24 crc kubenswrapper[4861]: I0219 15:55:24.632460 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-fb5fcc5b8-l5k8g_40169e6a-2e88-4d48-8ca9-8153ae9a109b/manager/0.log" Feb 19 15:55:25 crc kubenswrapper[4861]: I0219 15:55:25.037179 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6679bf9b57-tm5fp_c131e628-2531-4d18-8793-894b7b384b43/operator/0.log" Feb 19 15:55:25 crc kubenswrapper[4861]: I0219 15:55:25.446386 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-tzrm7_ccbc418a-bf0d-4803-8555-e9d236c68686/registry-server/0.log" Feb 19 15:55:25 crc kubenswrapper[4861]: I0219 15:55:25.587564 4861 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-mlk9q_b62cf279-7b44-4aae-9417-4a9230a62e5e/manager/0.log" Feb 19 15:55:25 crc kubenswrapper[4861]: I0219 15:55:25.688666 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-nmcsj_a9582c92-ce65-4865-bbcf-57b8b3c7002c/manager/0.log" Feb 19 15:55:25 crc kubenswrapper[4861]: I0219 15:55:25.845131 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-qvc2h_cc6aead1-61fb-403f-9388-81c8d84a0588/manager/0.log" Feb 19 15:55:25 crc kubenswrapper[4861]: I0219 15:55:25.903579 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-k4k92_430ed7f7-365f-4636-86ce-d257a9203395/operator/0.log" Feb 19 15:55:26 crc kubenswrapper[4861]: I0219 15:55:26.135308 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-8nbvg_2a1a19ba-9308-4f92-97af-210cfbd20e18/manager/0.log" Feb 19 15:55:26 crc kubenswrapper[4861]: I0219 15:55:26.329374 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-f457f_b6fc0d09-ceeb-4f62-8dcd-277cd8f27371/manager/0.log" Feb 19 15:55:26 crc kubenswrapper[4861]: I0219 15:55:26.577390 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-wvd7g_729ddab6-f042-425d-aa39-2d18efc216d6/manager/0.log" Feb 19 15:55:26 crc kubenswrapper[4861]: I0219 15:55:26.634523 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-jn5fv_9e8dc669-82a2-4d0a-bed3-7cb633ed2692/manager/0.log" Feb 19 15:55:28 crc kubenswrapper[4861]: I0219 
15:55:28.405784 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-69ff7bc449-t5mvf_716d511a-dfec-4b60-b963-8cd3f03b6e43/manager/0.log" Feb 19 15:55:28 crc kubenswrapper[4861]: I0219 15:55:28.911639 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-x8bv6_db3d27d2-0a91-4534-b995-3e42bdf891ab/manager/0.log" Feb 19 15:55:50 crc kubenswrapper[4861]: I0219 15:55:50.602129 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-5pg7p_a1bd7de0-543b-45cf-8ca8-b647d17671eb/control-plane-machine-set-operator/0.log" Feb 19 15:55:50 crc kubenswrapper[4861]: I0219 15:55:50.786576 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-89dlr_435cf100-c5b6-4b1d-80a0-48b7a2688d72/kube-rbac-proxy/0.log" Feb 19 15:55:50 crc kubenswrapper[4861]: I0219 15:55:50.804017 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-89dlr_435cf100-c5b6-4b1d-80a0-48b7a2688d72/machine-api-operator/0.log" Feb 19 15:56:06 crc kubenswrapper[4861]: I0219 15:56:06.419971 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-nrprl_a39587a0-7ca9-47eb-8006-1c90e880d712/cert-manager-controller/0.log" Feb 19 15:56:06 crc kubenswrapper[4861]: I0219 15:56:06.602522 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-rz64v_548bbe15-d06a-4ea3-8d65-e0a726a23b06/cert-manager-cainjector/0.log" Feb 19 15:56:06 crc kubenswrapper[4861]: I0219 15:56:06.699836 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-2djr8_cdce8567-cfd8-4edd-b947-963540559559/cert-manager-webhook/0.log" Feb 19 15:56:23 crc 
kubenswrapper[4861]: I0219 15:56:23.083693 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-dpxkn_ebc43836-51c1-432a-89b7-a11307e4e246/nmstate-console-plugin/0.log" Feb 19 15:56:23 crc kubenswrapper[4861]: I0219 15:56:23.697614 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-c9q9b_e4ae8c99-a3d9-40f2-9c52-67ca6ff8ec9c/kube-rbac-proxy/0.log" Feb 19 15:56:23 crc kubenswrapper[4861]: I0219 15:56:23.720834 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-km6kf_ead34d08-9b3b-4500-b146-907e75d3ae4c/nmstate-handler/0.log" Feb 19 15:56:23 crc kubenswrapper[4861]: I0219 15:56:23.802734 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-c9q9b_e4ae8c99-a3d9-40f2-9c52-67ca6ff8ec9c/nmstate-metrics/0.log" Feb 19 15:56:23 crc kubenswrapper[4861]: I0219 15:56:23.917843 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-gdfgs_722cdce9-f694-4bb3-ac19-7bc8ab5a34a7/nmstate-operator/0.log" Feb 19 15:56:24 crc kubenswrapper[4861]: I0219 15:56:24.027532 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-rz7x7_3a773c5d-b21b-4a8b-b1af-16c2258201d3/nmstate-webhook/0.log" Feb 19 15:56:39 crc kubenswrapper[4861]: I0219 15:56:39.157316 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-kjc7t_381bc7a3-1600-4929-8cac-506015cf9319/prometheus-operator/0.log" Feb 19 15:56:39 crc kubenswrapper[4861]: I0219 15:56:39.304256 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-556cdd4b58-57k9q_626c108e-f677-42ae-a266-0920c5896f3e/prometheus-operator-admission-webhook/0.log" Feb 19 15:56:39 crc kubenswrapper[4861]: I0219 
15:56:39.380951 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-556cdd4b58-spwxw_2ce4b85a-81c0-4529-89f6-07363a95082c/prometheus-operator-admission-webhook/0.log" Feb 19 15:56:39 crc kubenswrapper[4861]: I0219 15:56:39.503269 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-46kgq_13c3b5b3-f000-4fce-b836-14a28771110f/operator/0.log" Feb 19 15:56:39 crc kubenswrapper[4861]: I0219 15:56:39.590101 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-jts87_0ca60b71-ca8b-4ac1-b2a0-a52bd45fd530/perses-operator/0.log" Feb 19 15:56:54 crc kubenswrapper[4861]: I0219 15:56:54.702619 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-ppsgt_3c7a7e02-cb5b-4ac8-bb6f-cb569822f54a/kube-rbac-proxy/0.log" Feb 19 15:56:54 crc kubenswrapper[4861]: I0219 15:56:54.959020 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kwb5k_a6282f27-6c36-4b95-b3d8-32be4da3efec/cp-frr-files/0.log" Feb 19 15:56:55 crc kubenswrapper[4861]: I0219 15:56:55.044799 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-ppsgt_3c7a7e02-cb5b-4ac8-bb6f-cb569822f54a/controller/0.log" Feb 19 15:56:55 crc kubenswrapper[4861]: I0219 15:56:55.176394 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kwb5k_a6282f27-6c36-4b95-b3d8-32be4da3efec/cp-frr-files/0.log" Feb 19 15:56:55 crc kubenswrapper[4861]: I0219 15:56:55.185604 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kwb5k_a6282f27-6c36-4b95-b3d8-32be4da3efec/cp-metrics/0.log" Feb 19 15:56:55 crc kubenswrapper[4861]: I0219 15:56:55.214454 4861 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-kwb5k_a6282f27-6c36-4b95-b3d8-32be4da3efec/cp-reloader/0.log" Feb 19 15:56:55 crc kubenswrapper[4861]: I0219 15:56:55.240499 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kwb5k_a6282f27-6c36-4b95-b3d8-32be4da3efec/cp-reloader/0.log" Feb 19 15:56:55 crc kubenswrapper[4861]: I0219 15:56:55.426325 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kwb5k_a6282f27-6c36-4b95-b3d8-32be4da3efec/cp-reloader/0.log" Feb 19 15:56:55 crc kubenswrapper[4861]: I0219 15:56:55.432741 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kwb5k_a6282f27-6c36-4b95-b3d8-32be4da3efec/cp-frr-files/0.log" Feb 19 15:56:55 crc kubenswrapper[4861]: I0219 15:56:55.465751 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kwb5k_a6282f27-6c36-4b95-b3d8-32be4da3efec/cp-metrics/0.log" Feb 19 15:56:55 crc kubenswrapper[4861]: I0219 15:56:55.510333 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kwb5k_a6282f27-6c36-4b95-b3d8-32be4da3efec/cp-metrics/0.log" Feb 19 15:56:55 crc kubenswrapper[4861]: I0219 15:56:55.635985 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kwb5k_a6282f27-6c36-4b95-b3d8-32be4da3efec/cp-reloader/0.log" Feb 19 15:56:55 crc kubenswrapper[4861]: I0219 15:56:55.650763 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kwb5k_a6282f27-6c36-4b95-b3d8-32be4da3efec/cp-metrics/0.log" Feb 19 15:56:55 crc kubenswrapper[4861]: I0219 15:56:55.651262 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kwb5k_a6282f27-6c36-4b95-b3d8-32be4da3efec/cp-frr-files/0.log" Feb 19 15:56:55 crc kubenswrapper[4861]: I0219 15:56:55.725600 4861 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-kwb5k_a6282f27-6c36-4b95-b3d8-32be4da3efec/controller/0.log" Feb 19 15:56:55 crc kubenswrapper[4861]: I0219 15:56:55.865054 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kwb5k_a6282f27-6c36-4b95-b3d8-32be4da3efec/frr-metrics/0.log" Feb 19 15:56:55 crc kubenswrapper[4861]: I0219 15:56:55.865464 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kwb5k_a6282f27-6c36-4b95-b3d8-32be4da3efec/kube-rbac-proxy/0.log" Feb 19 15:56:55 crc kubenswrapper[4861]: I0219 15:56:55.970916 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kwb5k_a6282f27-6c36-4b95-b3d8-32be4da3efec/kube-rbac-proxy-frr/0.log" Feb 19 15:56:56 crc kubenswrapper[4861]: I0219 15:56:56.136909 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kwb5k_a6282f27-6c36-4b95-b3d8-32be4da3efec/reloader/0.log" Feb 19 15:56:56 crc kubenswrapper[4861]: I0219 15:56:56.179351 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-lvz2c_346acb4a-b1d2-4ac4-937a-142dc81f5633/frr-k8s-webhook-server/0.log" Feb 19 15:56:56 crc kubenswrapper[4861]: I0219 15:56:56.356719 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7478dc68cb-jcz8t_d342a353-dfb8-4e53-92a9-025e4bfbe49b/manager/0.log" Feb 19 15:56:56 crc kubenswrapper[4861]: I0219 15:56:56.575655 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-859d6bbc66-87l7v_78108700-377c-4f89-807d-ea987304a48f/webhook-server/0.log" Feb 19 15:56:56 crc kubenswrapper[4861]: I0219 15:56:56.626205 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6dkdq_0f25c16e-29a3-4f83-82cf-4c7fc841bff2/kube-rbac-proxy/0.log" Feb 19 15:56:57 crc kubenswrapper[4861]: I0219 15:56:57.613535 4861 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6dkdq_0f25c16e-29a3-4f83-82cf-4c7fc841bff2/speaker/0.log" Feb 19 15:56:59 crc kubenswrapper[4861]: I0219 15:56:59.275358 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kwb5k_a6282f27-6c36-4b95-b3d8-32be4da3efec/frr/0.log" Feb 19 15:57:11 crc kubenswrapper[4861]: I0219 15:57:11.834954 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8_68b57f3f-9684-4185-bc93-3f7b59ba1c68/util/0.log" Feb 19 15:57:12 crc kubenswrapper[4861]: I0219 15:57:12.002494 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8_68b57f3f-9684-4185-bc93-3f7b59ba1c68/util/0.log" Feb 19 15:57:12 crc kubenswrapper[4861]: I0219 15:57:12.091354 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8_68b57f3f-9684-4185-bc93-3f7b59ba1c68/pull/0.log" Feb 19 15:57:12 crc kubenswrapper[4861]: I0219 15:57:12.123253 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8_68b57f3f-9684-4185-bc93-3f7b59ba1c68/pull/0.log" Feb 19 15:57:12 crc kubenswrapper[4861]: I0219 15:57:12.326293 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8_68b57f3f-9684-4185-bc93-3f7b59ba1c68/extract/0.log" Feb 19 15:57:12 crc kubenswrapper[4861]: I0219 15:57:12.358237 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8_68b57f3f-9684-4185-bc93-3f7b59ba1c68/util/0.log" Feb 19 15:57:12 crc kubenswrapper[4861]: I0219 15:57:12.383411 4861 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56qtr8_68b57f3f-9684-4185-bc93-3f7b59ba1c68/pull/0.log" Feb 19 15:57:12 crc kubenswrapper[4861]: I0219 15:57:12.521996 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x_d57799aa-811d-48a0-b770-933ac731596d/util/0.log" Feb 19 15:57:12 crc kubenswrapper[4861]: I0219 15:57:12.720381 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x_d57799aa-811d-48a0-b770-933ac731596d/pull/0.log" Feb 19 15:57:12 crc kubenswrapper[4861]: I0219 15:57:12.727111 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x_d57799aa-811d-48a0-b770-933ac731596d/util/0.log" Feb 19 15:57:12 crc kubenswrapper[4861]: I0219 15:57:12.734039 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x_d57799aa-811d-48a0-b770-933ac731596d/pull/0.log" Feb 19 15:57:12 crc kubenswrapper[4861]: I0219 15:57:12.884346 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x_d57799aa-811d-48a0-b770-933ac731596d/util/0.log" Feb 19 15:57:12 crc kubenswrapper[4861]: I0219 15:57:12.892075 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x_d57799aa-811d-48a0-b770-933ac731596d/extract/0.log" Feb 19 15:57:12 crc kubenswrapper[4861]: I0219 15:57:12.924769 4861 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qml9x_d57799aa-811d-48a0-b770-933ac731596d/pull/0.log" Feb 19 15:57:13 crc kubenswrapper[4861]: I0219 15:57:13.097254 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts_f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4/util/0.log" Feb 19 15:57:13 crc kubenswrapper[4861]: I0219 15:57:13.245780 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts_f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4/pull/0.log" Feb 19 15:57:13 crc kubenswrapper[4861]: I0219 15:57:13.248968 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts_f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4/util/0.log" Feb 19 15:57:13 crc kubenswrapper[4861]: I0219 15:57:13.287691 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts_f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4/pull/0.log" Feb 19 15:57:13 crc kubenswrapper[4861]: I0219 15:57:13.427851 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts_f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4/util/0.log" Feb 19 15:57:13 crc kubenswrapper[4861]: I0219 15:57:13.445802 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts_f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4/pull/0.log" Feb 19 15:57:13 crc kubenswrapper[4861]: I0219 15:57:13.487411 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213nnjts_f0ffef23-1e8f-4f4c-9402-a69cdce5c4e4/extract/0.log" Feb 19 
15:57:13 crc kubenswrapper[4861]: I0219 15:57:13.601403 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nnrlf_1a9ba32b-3d06-43dd-aa42-af6f400940d4/extract-utilities/0.log" Feb 19 15:57:13 crc kubenswrapper[4861]: I0219 15:57:13.793952 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nnrlf_1a9ba32b-3d06-43dd-aa42-af6f400940d4/extract-content/0.log" Feb 19 15:57:13 crc kubenswrapper[4861]: I0219 15:57:13.800670 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nnrlf_1a9ba32b-3d06-43dd-aa42-af6f400940d4/extract-content/0.log" Feb 19 15:57:13 crc kubenswrapper[4861]: I0219 15:57:13.858883 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nnrlf_1a9ba32b-3d06-43dd-aa42-af6f400940d4/extract-utilities/0.log" Feb 19 15:57:13 crc kubenswrapper[4861]: I0219 15:57:13.987462 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nnrlf_1a9ba32b-3d06-43dd-aa42-af6f400940d4/extract-utilities/0.log" Feb 19 15:57:14 crc kubenswrapper[4861]: I0219 15:57:14.021513 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nnrlf_1a9ba32b-3d06-43dd-aa42-af6f400940d4/extract-content/0.log" Feb 19 15:57:14 crc kubenswrapper[4861]: I0219 15:57:14.219513 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6b6bs_11627588-ac57-42e4-9fc1-e01ecfd7ccd8/extract-utilities/0.log" Feb 19 15:57:14 crc kubenswrapper[4861]: I0219 15:57:14.416370 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6b6bs_11627588-ac57-42e4-9fc1-e01ecfd7ccd8/extract-content/0.log" Feb 19 15:57:14 crc kubenswrapper[4861]: I0219 15:57:14.424620 4861 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-6b6bs_11627588-ac57-42e4-9fc1-e01ecfd7ccd8/extract-content/0.log" Feb 19 15:57:14 crc kubenswrapper[4861]: I0219 15:57:14.453351 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6b6bs_11627588-ac57-42e4-9fc1-e01ecfd7ccd8/extract-utilities/0.log" Feb 19 15:57:14 crc kubenswrapper[4861]: I0219 15:57:14.697129 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nnrlf_1a9ba32b-3d06-43dd-aa42-af6f400940d4/registry-server/0.log" Feb 19 15:57:15 crc kubenswrapper[4861]: I0219 15:57:15.054794 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6b6bs_11627588-ac57-42e4-9fc1-e01ecfd7ccd8/extract-content/0.log" Feb 19 15:57:15 crc kubenswrapper[4861]: I0219 15:57:15.066321 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6b6bs_11627588-ac57-42e4-9fc1-e01ecfd7ccd8/extract-utilities/0.log" Feb 19 15:57:15 crc kubenswrapper[4861]: I0219 15:57:15.342053 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7_a45ed361-a230-497f-8a42-60720cbb330b/util/0.log" Feb 19 15:57:15 crc kubenswrapper[4861]: I0219 15:57:15.531574 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7_a45ed361-a230-497f-8a42-60720cbb330b/pull/0.log" Feb 19 15:57:15 crc kubenswrapper[4861]: I0219 15:57:15.588803 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7_a45ed361-a230-497f-8a42-60720cbb330b/util/0.log" Feb 19 15:57:15 crc kubenswrapper[4861]: I0219 15:57:15.605371 4861 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7_a45ed361-a230-497f-8a42-60720cbb330b/pull/0.log" Feb 19 15:57:15 crc kubenswrapper[4861]: I0219 15:57:15.811249 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7_a45ed361-a230-497f-8a42-60720cbb330b/util/0.log" Feb 19 15:57:15 crc kubenswrapper[4861]: I0219 15:57:15.902046 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7_a45ed361-a230-497f-8a42-60720cbb330b/extract/0.log" Feb 19 15:57:15 crc kubenswrapper[4861]: I0219 15:57:15.908488 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazqwz7_a45ed361-a230-497f-8a42-60720cbb330b/pull/0.log" Feb 19 15:57:16 crc kubenswrapper[4861]: I0219 15:57:16.052623 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-wbs67_39ba35a5-cc11-42a3-ab71-d4744a6d5cf0/marketplace-operator/0.log" Feb 19 15:57:16 crc kubenswrapper[4861]: I0219 15:57:16.130612 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7kzn2_16e3c2cc-204d-45e5-bef8-5dd819f69a20/extract-utilities/0.log" Feb 19 15:57:16 crc kubenswrapper[4861]: I0219 15:57:16.334586 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7kzn2_16e3c2cc-204d-45e5-bef8-5dd819f69a20/extract-utilities/0.log" Feb 19 15:57:16 crc kubenswrapper[4861]: I0219 15:57:16.408815 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7kzn2_16e3c2cc-204d-45e5-bef8-5dd819f69a20/extract-content/0.log" Feb 19 15:57:16 crc kubenswrapper[4861]: I0219 15:57:16.449355 4861 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7kzn2_16e3c2cc-204d-45e5-bef8-5dd819f69a20/extract-content/0.log" Feb 19 15:57:16 crc kubenswrapper[4861]: I0219 15:57:16.501410 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6b6bs_11627588-ac57-42e4-9fc1-e01ecfd7ccd8/registry-server/0.log" Feb 19 15:57:17 crc kubenswrapper[4861]: I0219 15:57:17.289271 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7kzn2_16e3c2cc-204d-45e5-bef8-5dd819f69a20/extract-content/0.log" Feb 19 15:57:17 crc kubenswrapper[4861]: I0219 15:57:17.341288 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7kzn2_16e3c2cc-204d-45e5-bef8-5dd819f69a20/extract-utilities/0.log" Feb 19 15:57:17 crc kubenswrapper[4861]: I0219 15:57:17.361001 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pvj85_a41fdccb-e760-4654-b1b6-f3f31d71f474/extract-utilities/0.log" Feb 19 15:57:17 crc kubenswrapper[4861]: I0219 15:57:17.551057 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pvj85_a41fdccb-e760-4654-b1b6-f3f31d71f474/extract-utilities/0.log" Feb 19 15:57:17 crc kubenswrapper[4861]: I0219 15:57:17.598901 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pvj85_a41fdccb-e760-4654-b1b6-f3f31d71f474/extract-content/0.log" Feb 19 15:57:17 crc kubenswrapper[4861]: I0219 15:57:17.626662 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7kzn2_16e3c2cc-204d-45e5-bef8-5dd819f69a20/registry-server/0.log" Feb 19 15:57:17 crc kubenswrapper[4861]: I0219 15:57:17.674742 4861 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-pvj85_a41fdccb-e760-4654-b1b6-f3f31d71f474/extract-content/0.log" Feb 19 15:57:17 crc kubenswrapper[4861]: I0219 15:57:17.802094 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pvj85_a41fdccb-e760-4654-b1b6-f3f31d71f474/extract-utilities/0.log" Feb 19 15:57:17 crc kubenswrapper[4861]: I0219 15:57:17.817895 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pvj85_a41fdccb-e760-4654-b1b6-f3f31d71f474/extract-content/0.log" Feb 19 15:57:18 crc kubenswrapper[4861]: I0219 15:57:18.952356 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pvj85_a41fdccb-e760-4654-b1b6-f3f31d71f474/registry-server/0.log" Feb 19 15:57:31 crc kubenswrapper[4861]: I0219 15:57:31.408408 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-kjc7t_381bc7a3-1600-4929-8cac-506015cf9319/prometheus-operator/0.log" Feb 19 15:57:31 crc kubenswrapper[4861]: I0219 15:57:31.433958 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-556cdd4b58-spwxw_2ce4b85a-81c0-4529-89f6-07363a95082c/prometheus-operator-admission-webhook/0.log" Feb 19 15:57:31 crc kubenswrapper[4861]: I0219 15:57:31.474716 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-556cdd4b58-57k9q_626c108e-f677-42ae-a266-0920c5896f3e/prometheus-operator-admission-webhook/0.log" Feb 19 15:57:31 crc kubenswrapper[4861]: I0219 15:57:31.585662 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-46kgq_13c3b5b3-f000-4fce-b836-14a28771110f/operator/0.log" Feb 19 15:57:31 crc kubenswrapper[4861]: I0219 15:57:31.626731 4861 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-jts87_0ca60b71-ca8b-4ac1-b2a0-a52bd45fd530/perses-operator/0.log" Feb 19 15:57:33 crc kubenswrapper[4861]: I0219 15:57:33.833701 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:57:33 crc kubenswrapper[4861]: I0219 15:57:33.834280 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:57:48 crc kubenswrapper[4861]: I0219 15:57:48.148071 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4x8j7"] Feb 19 15:57:48 crc kubenswrapper[4861]: E0219 15:57:48.149099 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17d1d90e-7317-47bf-8e68-afc8718a2318" containerName="registry-server" Feb 19 15:57:48 crc kubenswrapper[4861]: I0219 15:57:48.149113 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="17d1d90e-7317-47bf-8e68-afc8718a2318" containerName="registry-server" Feb 19 15:57:48 crc kubenswrapper[4861]: E0219 15:57:48.149130 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17d1d90e-7317-47bf-8e68-afc8718a2318" containerName="extract-utilities" Feb 19 15:57:48 crc kubenswrapper[4861]: I0219 15:57:48.149137 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="17d1d90e-7317-47bf-8e68-afc8718a2318" containerName="extract-utilities" Feb 19 15:57:48 crc kubenswrapper[4861]: E0219 15:57:48.149162 4861 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="17d1d90e-7317-47bf-8e68-afc8718a2318" containerName="extract-content" Feb 19 15:57:48 crc kubenswrapper[4861]: I0219 15:57:48.149169 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="17d1d90e-7317-47bf-8e68-afc8718a2318" containerName="extract-content" Feb 19 15:57:48 crc kubenswrapper[4861]: I0219 15:57:48.149375 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="17d1d90e-7317-47bf-8e68-afc8718a2318" containerName="registry-server" Feb 19 15:57:48 crc kubenswrapper[4861]: I0219 15:57:48.150945 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4x8j7" Feb 19 15:57:48 crc kubenswrapper[4861]: I0219 15:57:48.183542 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4x8j7"] Feb 19 15:57:48 crc kubenswrapper[4861]: I0219 15:57:48.280540 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjs8v\" (UniqueName: \"kubernetes.io/projected/f9edc7bb-5bf2-4624-8317-f00b67f228f3-kube-api-access-bjs8v\") pod \"certified-operators-4x8j7\" (UID: \"f9edc7bb-5bf2-4624-8317-f00b67f228f3\") " pod="openshift-marketplace/certified-operators-4x8j7" Feb 19 15:57:48 crc kubenswrapper[4861]: I0219 15:57:48.280606 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9edc7bb-5bf2-4624-8317-f00b67f228f3-utilities\") pod \"certified-operators-4x8j7\" (UID: \"f9edc7bb-5bf2-4624-8317-f00b67f228f3\") " pod="openshift-marketplace/certified-operators-4x8j7" Feb 19 15:57:48 crc kubenswrapper[4861]: I0219 15:57:48.281233 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9edc7bb-5bf2-4624-8317-f00b67f228f3-catalog-content\") pod \"certified-operators-4x8j7\" (UID: 
\"f9edc7bb-5bf2-4624-8317-f00b67f228f3\") " pod="openshift-marketplace/certified-operators-4x8j7" Feb 19 15:57:48 crc kubenswrapper[4861]: I0219 15:57:48.383148 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjs8v\" (UniqueName: \"kubernetes.io/projected/f9edc7bb-5bf2-4624-8317-f00b67f228f3-kube-api-access-bjs8v\") pod \"certified-operators-4x8j7\" (UID: \"f9edc7bb-5bf2-4624-8317-f00b67f228f3\") " pod="openshift-marketplace/certified-operators-4x8j7" Feb 19 15:57:48 crc kubenswrapper[4861]: I0219 15:57:48.383220 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9edc7bb-5bf2-4624-8317-f00b67f228f3-utilities\") pod \"certified-operators-4x8j7\" (UID: \"f9edc7bb-5bf2-4624-8317-f00b67f228f3\") " pod="openshift-marketplace/certified-operators-4x8j7" Feb 19 15:57:48 crc kubenswrapper[4861]: I0219 15:57:48.383413 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9edc7bb-5bf2-4624-8317-f00b67f228f3-catalog-content\") pod \"certified-operators-4x8j7\" (UID: \"f9edc7bb-5bf2-4624-8317-f00b67f228f3\") " pod="openshift-marketplace/certified-operators-4x8j7" Feb 19 15:57:48 crc kubenswrapper[4861]: I0219 15:57:48.383986 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9edc7bb-5bf2-4624-8317-f00b67f228f3-catalog-content\") pod \"certified-operators-4x8j7\" (UID: \"f9edc7bb-5bf2-4624-8317-f00b67f228f3\") " pod="openshift-marketplace/certified-operators-4x8j7" Feb 19 15:57:48 crc kubenswrapper[4861]: I0219 15:57:48.384689 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9edc7bb-5bf2-4624-8317-f00b67f228f3-utilities\") pod \"certified-operators-4x8j7\" (UID: \"f9edc7bb-5bf2-4624-8317-f00b67f228f3\") 
" pod="openshift-marketplace/certified-operators-4x8j7" Feb 19 15:57:48 crc kubenswrapper[4861]: I0219 15:57:48.404703 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjs8v\" (UniqueName: \"kubernetes.io/projected/f9edc7bb-5bf2-4624-8317-f00b67f228f3-kube-api-access-bjs8v\") pod \"certified-operators-4x8j7\" (UID: \"f9edc7bb-5bf2-4624-8317-f00b67f228f3\") " pod="openshift-marketplace/certified-operators-4x8j7" Feb 19 15:57:48 crc kubenswrapper[4861]: I0219 15:57:48.470046 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4x8j7" Feb 19 15:57:49 crc kubenswrapper[4861]: I0219 15:57:49.086348 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4x8j7"] Feb 19 15:57:49 crc kubenswrapper[4861]: I0219 15:57:49.544342 4861 generic.go:334] "Generic (PLEG): container finished" podID="f9edc7bb-5bf2-4624-8317-f00b67f228f3" containerID="3dc38f88a5115c5a0813fd771ca32169347253c82359aca67c757a2f5c065395" exitCode=0 Feb 19 15:57:49 crc kubenswrapper[4861]: I0219 15:57:49.544622 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4x8j7" event={"ID":"f9edc7bb-5bf2-4624-8317-f00b67f228f3","Type":"ContainerDied","Data":"3dc38f88a5115c5a0813fd771ca32169347253c82359aca67c757a2f5c065395"} Feb 19 15:57:49 crc kubenswrapper[4861]: I0219 15:57:49.544647 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4x8j7" event={"ID":"f9edc7bb-5bf2-4624-8317-f00b67f228f3","Type":"ContainerStarted","Data":"5b4c08375bfc1920d2727090dbc837b4c7c8d82e06a5869ff84f8f62f43f046b"} Feb 19 15:57:49 crc kubenswrapper[4861]: I0219 15:57:49.546602 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 15:57:50 crc kubenswrapper[4861]: I0219 15:57:50.554128 4861 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-4x8j7" event={"ID":"f9edc7bb-5bf2-4624-8317-f00b67f228f3","Type":"ContainerStarted","Data":"111d9e8aa4787d27603c6836a7805467abb20cf401a3f50abfe9a623950df39f"} Feb 19 15:57:52 crc kubenswrapper[4861]: I0219 15:57:52.573728 4861 generic.go:334] "Generic (PLEG): container finished" podID="f9edc7bb-5bf2-4624-8317-f00b67f228f3" containerID="111d9e8aa4787d27603c6836a7805467abb20cf401a3f50abfe9a623950df39f" exitCode=0 Feb 19 15:57:52 crc kubenswrapper[4861]: I0219 15:57:52.573968 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4x8j7" event={"ID":"f9edc7bb-5bf2-4624-8317-f00b67f228f3","Type":"ContainerDied","Data":"111d9e8aa4787d27603c6836a7805467abb20cf401a3f50abfe9a623950df39f"} Feb 19 15:57:53 crc kubenswrapper[4861]: I0219 15:57:53.604436 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4x8j7" event={"ID":"f9edc7bb-5bf2-4624-8317-f00b67f228f3","Type":"ContainerStarted","Data":"a39697af4e1569f5d3e3f49d727a0e12249a39c8e30eda23c7fe9c6ba1e8b181"} Feb 19 15:57:53 crc kubenswrapper[4861]: I0219 15:57:53.625333 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4x8j7" podStartSLOduration=2.21779666 podStartE2EDuration="5.625313021s" podCreationTimestamp="2026-02-19 15:57:48 +0000 UTC" firstStartedPulling="2026-02-19 15:57:49.546390368 +0000 UTC m=+10084.207493596" lastFinishedPulling="2026-02-19 15:57:52.953906729 +0000 UTC m=+10087.615009957" observedRunningTime="2026-02-19 15:57:53.622636619 +0000 UTC m=+10088.283739857" watchObservedRunningTime="2026-02-19 15:57:53.625313021 +0000 UTC m=+10088.286416249" Feb 19 15:57:53 crc kubenswrapper[4861]: I0219 15:57:53.745524 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gspm5"] Feb 19 15:57:53 crc kubenswrapper[4861]: I0219 15:57:53.747875 4861 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gspm5" Feb 19 15:57:53 crc kubenswrapper[4861]: I0219 15:57:53.784837 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gspm5"] Feb 19 15:57:53 crc kubenswrapper[4861]: I0219 15:57:53.830904 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn5lb\" (UniqueName: \"kubernetes.io/projected/848b9fe4-e59b-4613-aea0-d802277986ea-kube-api-access-jn5lb\") pod \"redhat-marketplace-gspm5\" (UID: \"848b9fe4-e59b-4613-aea0-d802277986ea\") " pod="openshift-marketplace/redhat-marketplace-gspm5" Feb 19 15:57:53 crc kubenswrapper[4861]: I0219 15:57:53.831079 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/848b9fe4-e59b-4613-aea0-d802277986ea-catalog-content\") pod \"redhat-marketplace-gspm5\" (UID: \"848b9fe4-e59b-4613-aea0-d802277986ea\") " pod="openshift-marketplace/redhat-marketplace-gspm5" Feb 19 15:57:53 crc kubenswrapper[4861]: I0219 15:57:53.831199 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/848b9fe4-e59b-4613-aea0-d802277986ea-utilities\") pod \"redhat-marketplace-gspm5\" (UID: \"848b9fe4-e59b-4613-aea0-d802277986ea\") " pod="openshift-marketplace/redhat-marketplace-gspm5" Feb 19 15:57:53 crc kubenswrapper[4861]: I0219 15:57:53.932932 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/848b9fe4-e59b-4613-aea0-d802277986ea-catalog-content\") pod \"redhat-marketplace-gspm5\" (UID: \"848b9fe4-e59b-4613-aea0-d802277986ea\") " pod="openshift-marketplace/redhat-marketplace-gspm5" Feb 19 15:57:53 crc kubenswrapper[4861]: I0219 15:57:53.933455 
4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/848b9fe4-e59b-4613-aea0-d802277986ea-utilities\") pod \"redhat-marketplace-gspm5\" (UID: \"848b9fe4-e59b-4613-aea0-d802277986ea\") " pod="openshift-marketplace/redhat-marketplace-gspm5" Feb 19 15:57:53 crc kubenswrapper[4861]: I0219 15:57:53.933477 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn5lb\" (UniqueName: \"kubernetes.io/projected/848b9fe4-e59b-4613-aea0-d802277986ea-kube-api-access-jn5lb\") pod \"redhat-marketplace-gspm5\" (UID: \"848b9fe4-e59b-4613-aea0-d802277986ea\") " pod="openshift-marketplace/redhat-marketplace-gspm5" Feb 19 15:57:53 crc kubenswrapper[4861]: I0219 15:57:53.934219 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/848b9fe4-e59b-4613-aea0-d802277986ea-catalog-content\") pod \"redhat-marketplace-gspm5\" (UID: \"848b9fe4-e59b-4613-aea0-d802277986ea\") " pod="openshift-marketplace/redhat-marketplace-gspm5" Feb 19 15:57:53 crc kubenswrapper[4861]: I0219 15:57:53.934259 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/848b9fe4-e59b-4613-aea0-d802277986ea-utilities\") pod \"redhat-marketplace-gspm5\" (UID: \"848b9fe4-e59b-4613-aea0-d802277986ea\") " pod="openshift-marketplace/redhat-marketplace-gspm5" Feb 19 15:57:53 crc kubenswrapper[4861]: I0219 15:57:53.961932 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn5lb\" (UniqueName: \"kubernetes.io/projected/848b9fe4-e59b-4613-aea0-d802277986ea-kube-api-access-jn5lb\") pod \"redhat-marketplace-gspm5\" (UID: \"848b9fe4-e59b-4613-aea0-d802277986ea\") " pod="openshift-marketplace/redhat-marketplace-gspm5" Feb 19 15:57:54 crc kubenswrapper[4861]: I0219 15:57:54.084439 4861 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gspm5" Feb 19 15:57:54 crc kubenswrapper[4861]: I0219 15:57:54.661278 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gspm5"] Feb 19 15:57:55 crc kubenswrapper[4861]: I0219 15:57:55.624343 4861 generic.go:334] "Generic (PLEG): container finished" podID="848b9fe4-e59b-4613-aea0-d802277986ea" containerID="a5371ffde8cd9f52718ea2bf3a26ec897b7cc7e2e2568aa5c81b5a036c9d02dc" exitCode=0 Feb 19 15:57:55 crc kubenswrapper[4861]: I0219 15:57:55.624600 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gspm5" event={"ID":"848b9fe4-e59b-4613-aea0-d802277986ea","Type":"ContainerDied","Data":"a5371ffde8cd9f52718ea2bf3a26ec897b7cc7e2e2568aa5c81b5a036c9d02dc"} Feb 19 15:57:55 crc kubenswrapper[4861]: I0219 15:57:55.624788 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gspm5" event={"ID":"848b9fe4-e59b-4613-aea0-d802277986ea","Type":"ContainerStarted","Data":"17f44bf5bdde902b1302515f127e3171a56ea0c546fc55fb3cea44d62d8580db"} Feb 19 15:57:56 crc kubenswrapper[4861]: I0219 15:57:56.639855 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gspm5" event={"ID":"848b9fe4-e59b-4613-aea0-d802277986ea","Type":"ContainerStarted","Data":"4416d3f3b913a07c017f969aa371485a2ecfa88fa9df057f4996caeedea48b61"} Feb 19 15:57:58 crc kubenswrapper[4861]: I0219 15:57:58.471546 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4x8j7" Feb 19 15:57:58 crc kubenswrapper[4861]: I0219 15:57:58.472072 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4x8j7" Feb 19 15:57:58 crc kubenswrapper[4861]: I0219 15:57:58.526740 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-4x8j7"
Feb 19 15:57:58 crc kubenswrapper[4861]: I0219 15:57:58.663276 4861 generic.go:334] "Generic (PLEG): container finished" podID="848b9fe4-e59b-4613-aea0-d802277986ea" containerID="4416d3f3b913a07c017f969aa371485a2ecfa88fa9df057f4996caeedea48b61" exitCode=0
Feb 19 15:57:58 crc kubenswrapper[4861]: I0219 15:57:58.663348 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gspm5" event={"ID":"848b9fe4-e59b-4613-aea0-d802277986ea","Type":"ContainerDied","Data":"4416d3f3b913a07c017f969aa371485a2ecfa88fa9df057f4996caeedea48b61"}
Feb 19 15:57:58 crc kubenswrapper[4861]: I0219 15:57:58.734737 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4x8j7"
Feb 19 15:57:59 crc kubenswrapper[4861]: I0219 15:57:59.681154 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gspm5" event={"ID":"848b9fe4-e59b-4613-aea0-d802277986ea","Type":"ContainerStarted","Data":"a77b220e08075325666fa8aeec6779dfe04cf307b2ee2f44a3f91e8cf3106a8f"}
Feb 19 15:57:59 crc kubenswrapper[4861]: I0219 15:57:59.708317 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gspm5" podStartSLOduration=3.232873706 podStartE2EDuration="6.708295593s" podCreationTimestamp="2026-02-19 15:57:53 +0000 UTC" firstStartedPulling="2026-02-19 15:57:55.626606186 +0000 UTC m=+10090.287709414" lastFinishedPulling="2026-02-19 15:57:59.102028073 +0000 UTC m=+10093.763131301" observedRunningTime="2026-02-19 15:57:59.698649724 +0000 UTC m=+10094.359752962" watchObservedRunningTime="2026-02-19 15:57:59.708295593 +0000 UTC m=+10094.369398831"
Feb 19 15:58:00 crc kubenswrapper[4861]: I0219 15:58:00.717906 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4x8j7"]
Feb 19 15:58:00 crc kubenswrapper[4861]: I0219 15:58:00.718127 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4x8j7" podUID="f9edc7bb-5bf2-4624-8317-f00b67f228f3" containerName="registry-server" containerID="cri-o://a39697af4e1569f5d3e3f49d727a0e12249a39c8e30eda23c7fe9c6ba1e8b181" gracePeriod=2
Feb 19 15:58:01 crc kubenswrapper[4861]: I0219 15:58:01.702616 4861 generic.go:334] "Generic (PLEG): container finished" podID="f9edc7bb-5bf2-4624-8317-f00b67f228f3" containerID="a39697af4e1569f5d3e3f49d727a0e12249a39c8e30eda23c7fe9c6ba1e8b181" exitCode=0
Feb 19 15:58:01 crc kubenswrapper[4861]: I0219 15:58:01.703039 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4x8j7" event={"ID":"f9edc7bb-5bf2-4624-8317-f00b67f228f3","Type":"ContainerDied","Data":"a39697af4e1569f5d3e3f49d727a0e12249a39c8e30eda23c7fe9c6ba1e8b181"}
Feb 19 15:58:01 crc kubenswrapper[4861]: I0219 15:58:01.703352 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4x8j7" event={"ID":"f9edc7bb-5bf2-4624-8317-f00b67f228f3","Type":"ContainerDied","Data":"5b4c08375bfc1920d2727090dbc837b4c7c8d82e06a5869ff84f8f62f43f046b"}
Feb 19 15:58:01 crc kubenswrapper[4861]: I0219 15:58:01.703366 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b4c08375bfc1920d2727090dbc837b4c7c8d82e06a5869ff84f8f62f43f046b"
Feb 19 15:58:01 crc kubenswrapper[4861]: I0219 15:58:01.787715 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4x8j7"
Feb 19 15:58:01 crc kubenswrapper[4861]: I0219 15:58:01.853371 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9edc7bb-5bf2-4624-8317-f00b67f228f3-utilities\") pod \"f9edc7bb-5bf2-4624-8317-f00b67f228f3\" (UID: \"f9edc7bb-5bf2-4624-8317-f00b67f228f3\") "
Feb 19 15:58:01 crc kubenswrapper[4861]: I0219 15:58:01.853449 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9edc7bb-5bf2-4624-8317-f00b67f228f3-catalog-content\") pod \"f9edc7bb-5bf2-4624-8317-f00b67f228f3\" (UID: \"f9edc7bb-5bf2-4624-8317-f00b67f228f3\") "
Feb 19 15:58:01 crc kubenswrapper[4861]: I0219 15:58:01.853562 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjs8v\" (UniqueName: \"kubernetes.io/projected/f9edc7bb-5bf2-4624-8317-f00b67f228f3-kube-api-access-bjs8v\") pod \"f9edc7bb-5bf2-4624-8317-f00b67f228f3\" (UID: \"f9edc7bb-5bf2-4624-8317-f00b67f228f3\") "
Feb 19 15:58:01 crc kubenswrapper[4861]: I0219 15:58:01.855043 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9edc7bb-5bf2-4624-8317-f00b67f228f3-utilities" (OuterVolumeSpecName: "utilities") pod "f9edc7bb-5bf2-4624-8317-f00b67f228f3" (UID: "f9edc7bb-5bf2-4624-8317-f00b67f228f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 15:58:01 crc kubenswrapper[4861]: I0219 15:58:01.860676 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9edc7bb-5bf2-4624-8317-f00b67f228f3-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 15:58:01 crc kubenswrapper[4861]: I0219 15:58:01.862691 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9edc7bb-5bf2-4624-8317-f00b67f228f3-kube-api-access-bjs8v" (OuterVolumeSpecName: "kube-api-access-bjs8v") pod "f9edc7bb-5bf2-4624-8317-f00b67f228f3" (UID: "f9edc7bb-5bf2-4624-8317-f00b67f228f3"). InnerVolumeSpecName "kube-api-access-bjs8v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 15:58:01 crc kubenswrapper[4861]: I0219 15:58:01.917483 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9edc7bb-5bf2-4624-8317-f00b67f228f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9edc7bb-5bf2-4624-8317-f00b67f228f3" (UID: "f9edc7bb-5bf2-4624-8317-f00b67f228f3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 15:58:01 crc kubenswrapper[4861]: I0219 15:58:01.964207 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9edc7bb-5bf2-4624-8317-f00b67f228f3-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 15:58:01 crc kubenswrapper[4861]: I0219 15:58:01.964269 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjs8v\" (UniqueName: \"kubernetes.io/projected/f9edc7bb-5bf2-4624-8317-f00b67f228f3-kube-api-access-bjs8v\") on node \"crc\" DevicePath \"\""
Feb 19 15:58:02 crc kubenswrapper[4861]: I0219 15:58:02.713694 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4x8j7"
Feb 19 15:58:02 crc kubenswrapper[4861]: I0219 15:58:02.758238 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4x8j7"]
Feb 19 15:58:02 crc kubenswrapper[4861]: I0219 15:58:02.777550 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4x8j7"]
Feb 19 15:58:03 crc kubenswrapper[4861]: I0219 15:58:03.834226 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 15:58:03 crc kubenswrapper[4861]: I0219 15:58:03.834788 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 15:58:04 crc kubenswrapper[4861]: I0219 15:58:04.006647 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9edc7bb-5bf2-4624-8317-f00b67f228f3" path="/var/lib/kubelet/pods/f9edc7bb-5bf2-4624-8317-f00b67f228f3/volumes"
Feb 19 15:58:04 crc kubenswrapper[4861]: I0219 15:58:04.085545 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gspm5"
Feb 19 15:58:04 crc kubenswrapper[4861]: I0219 15:58:04.085595 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gspm5"
Feb 19 15:58:04 crc kubenswrapper[4861]: I0219 15:58:04.170654 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gspm5"
Feb 19 15:58:04 crc kubenswrapper[4861]: I0219 15:58:04.883686 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gspm5"
Feb 19 15:58:05 crc kubenswrapper[4861]: I0219 15:58:05.923694 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gspm5"]
Feb 19 15:58:06 crc kubenswrapper[4861]: I0219 15:58:06.769688 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gspm5" podUID="848b9fe4-e59b-4613-aea0-d802277986ea" containerName="registry-server" containerID="cri-o://a77b220e08075325666fa8aeec6779dfe04cf307b2ee2f44a3f91e8cf3106a8f" gracePeriod=2
Feb 19 15:58:07 crc kubenswrapper[4861]: I0219 15:58:07.418816 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gspm5"
Feb 19 15:58:07 crc kubenswrapper[4861]: I0219 15:58:07.503762 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/848b9fe4-e59b-4613-aea0-d802277986ea-utilities\") pod \"848b9fe4-e59b-4613-aea0-d802277986ea\" (UID: \"848b9fe4-e59b-4613-aea0-d802277986ea\") "
Feb 19 15:58:07 crc kubenswrapper[4861]: I0219 15:58:07.503896 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/848b9fe4-e59b-4613-aea0-d802277986ea-catalog-content\") pod \"848b9fe4-e59b-4613-aea0-d802277986ea\" (UID: \"848b9fe4-e59b-4613-aea0-d802277986ea\") "
Feb 19 15:58:07 crc kubenswrapper[4861]: I0219 15:58:07.504219 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jn5lb\" (UniqueName: \"kubernetes.io/projected/848b9fe4-e59b-4613-aea0-d802277986ea-kube-api-access-jn5lb\") pod \"848b9fe4-e59b-4613-aea0-d802277986ea\" (UID: \"848b9fe4-e59b-4613-aea0-d802277986ea\") "
Feb 19 15:58:07 crc kubenswrapper[4861]: I0219 15:58:07.504871 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/848b9fe4-e59b-4613-aea0-d802277986ea-utilities" (OuterVolumeSpecName: "utilities") pod "848b9fe4-e59b-4613-aea0-d802277986ea" (UID: "848b9fe4-e59b-4613-aea0-d802277986ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 15:58:07 crc kubenswrapper[4861]: I0219 15:58:07.505461 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/848b9fe4-e59b-4613-aea0-d802277986ea-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 15:58:07 crc kubenswrapper[4861]: I0219 15:58:07.511866 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/848b9fe4-e59b-4613-aea0-d802277986ea-kube-api-access-jn5lb" (OuterVolumeSpecName: "kube-api-access-jn5lb") pod "848b9fe4-e59b-4613-aea0-d802277986ea" (UID: "848b9fe4-e59b-4613-aea0-d802277986ea"). InnerVolumeSpecName "kube-api-access-jn5lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 15:58:07 crc kubenswrapper[4861]: I0219 15:58:07.526560 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/848b9fe4-e59b-4613-aea0-d802277986ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "848b9fe4-e59b-4613-aea0-d802277986ea" (UID: "848b9fe4-e59b-4613-aea0-d802277986ea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 15:58:07 crc kubenswrapper[4861]: I0219 15:58:07.607821 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jn5lb\" (UniqueName: \"kubernetes.io/projected/848b9fe4-e59b-4613-aea0-d802277986ea-kube-api-access-jn5lb\") on node \"crc\" DevicePath \"\""
Feb 19 15:58:07 crc kubenswrapper[4861]: I0219 15:58:07.607867 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/848b9fe4-e59b-4613-aea0-d802277986ea-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 15:58:07 crc kubenswrapper[4861]: I0219 15:58:07.798388 4861 generic.go:334] "Generic (PLEG): container finished" podID="848b9fe4-e59b-4613-aea0-d802277986ea" containerID="a77b220e08075325666fa8aeec6779dfe04cf307b2ee2f44a3f91e8cf3106a8f" exitCode=0
Feb 19 15:58:07 crc kubenswrapper[4861]: I0219 15:58:07.798502 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gspm5" event={"ID":"848b9fe4-e59b-4613-aea0-d802277986ea","Type":"ContainerDied","Data":"a77b220e08075325666fa8aeec6779dfe04cf307b2ee2f44a3f91e8cf3106a8f"}
Feb 19 15:58:07 crc kubenswrapper[4861]: I0219 15:58:07.798556 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gspm5" event={"ID":"848b9fe4-e59b-4613-aea0-d802277986ea","Type":"ContainerDied","Data":"17f44bf5bdde902b1302515f127e3171a56ea0c546fc55fb3cea44d62d8580db"}
Feb 19 15:58:07 crc kubenswrapper[4861]: I0219 15:58:07.798594 4861 scope.go:117] "RemoveContainer" containerID="a77b220e08075325666fa8aeec6779dfe04cf307b2ee2f44a3f91e8cf3106a8f"
Feb 19 15:58:07 crc kubenswrapper[4861]: I0219 15:58:07.798865 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gspm5"
Feb 19 15:58:07 crc kubenswrapper[4861]: I0219 15:58:07.841209 4861 scope.go:117] "RemoveContainer" containerID="4416d3f3b913a07c017f969aa371485a2ecfa88fa9df057f4996caeedea48b61"
Feb 19 15:58:07 crc kubenswrapper[4861]: I0219 15:58:07.865968 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gspm5"]
Feb 19 15:58:07 crc kubenswrapper[4861]: I0219 15:58:07.866557 4861 scope.go:117] "RemoveContainer" containerID="a5371ffde8cd9f52718ea2bf3a26ec897b7cc7e2e2568aa5c81b5a036c9d02dc"
Feb 19 15:58:07 crc kubenswrapper[4861]: I0219 15:58:07.877915 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gspm5"]
Feb 19 15:58:07 crc kubenswrapper[4861]: I0219 15:58:07.916472 4861 scope.go:117] "RemoveContainer" containerID="a77b220e08075325666fa8aeec6779dfe04cf307b2ee2f44a3f91e8cf3106a8f"
Feb 19 15:58:07 crc kubenswrapper[4861]: E0219 15:58:07.916916 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a77b220e08075325666fa8aeec6779dfe04cf307b2ee2f44a3f91e8cf3106a8f\": container with ID starting with a77b220e08075325666fa8aeec6779dfe04cf307b2ee2f44a3f91e8cf3106a8f not found: ID does not exist" containerID="a77b220e08075325666fa8aeec6779dfe04cf307b2ee2f44a3f91e8cf3106a8f"
Feb 19 15:58:07 crc kubenswrapper[4861]: I0219 15:58:07.916954 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a77b220e08075325666fa8aeec6779dfe04cf307b2ee2f44a3f91e8cf3106a8f"} err="failed to get container status \"a77b220e08075325666fa8aeec6779dfe04cf307b2ee2f44a3f91e8cf3106a8f\": rpc error: code = NotFound desc = could not find container \"a77b220e08075325666fa8aeec6779dfe04cf307b2ee2f44a3f91e8cf3106a8f\": container with ID starting with a77b220e08075325666fa8aeec6779dfe04cf307b2ee2f44a3f91e8cf3106a8f not found: ID does not exist"
Feb 19 15:58:07 crc kubenswrapper[4861]: I0219 15:58:07.916979 4861 scope.go:117] "RemoveContainer" containerID="4416d3f3b913a07c017f969aa371485a2ecfa88fa9df057f4996caeedea48b61"
Feb 19 15:58:07 crc kubenswrapper[4861]: E0219 15:58:07.917246 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4416d3f3b913a07c017f969aa371485a2ecfa88fa9df057f4996caeedea48b61\": container with ID starting with 4416d3f3b913a07c017f969aa371485a2ecfa88fa9df057f4996caeedea48b61 not found: ID does not exist" containerID="4416d3f3b913a07c017f969aa371485a2ecfa88fa9df057f4996caeedea48b61"
Feb 19 15:58:07 crc kubenswrapper[4861]: I0219 15:58:07.917273 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4416d3f3b913a07c017f969aa371485a2ecfa88fa9df057f4996caeedea48b61"} err="failed to get container status \"4416d3f3b913a07c017f969aa371485a2ecfa88fa9df057f4996caeedea48b61\": rpc error: code = NotFound desc = could not find container \"4416d3f3b913a07c017f969aa371485a2ecfa88fa9df057f4996caeedea48b61\": container with ID starting with 4416d3f3b913a07c017f969aa371485a2ecfa88fa9df057f4996caeedea48b61 not found: ID does not exist"
Feb 19 15:58:07 crc kubenswrapper[4861]: I0219 15:58:07.917289 4861 scope.go:117] "RemoveContainer" containerID="a5371ffde8cd9f52718ea2bf3a26ec897b7cc7e2e2568aa5c81b5a036c9d02dc"
Feb 19 15:58:07 crc kubenswrapper[4861]: E0219 15:58:07.917515 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5371ffde8cd9f52718ea2bf3a26ec897b7cc7e2e2568aa5c81b5a036c9d02dc\": container with ID starting with a5371ffde8cd9f52718ea2bf3a26ec897b7cc7e2e2568aa5c81b5a036c9d02dc not found: ID does not exist" containerID="a5371ffde8cd9f52718ea2bf3a26ec897b7cc7e2e2568aa5c81b5a036c9d02dc"
Feb 19 15:58:07 crc kubenswrapper[4861]: I0219 15:58:07.917543 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5371ffde8cd9f52718ea2bf3a26ec897b7cc7e2e2568aa5c81b5a036c9d02dc"} err="failed to get container status \"a5371ffde8cd9f52718ea2bf3a26ec897b7cc7e2e2568aa5c81b5a036c9d02dc\": rpc error: code = NotFound desc = could not find container \"a5371ffde8cd9f52718ea2bf3a26ec897b7cc7e2e2568aa5c81b5a036c9d02dc\": container with ID starting with a5371ffde8cd9f52718ea2bf3a26ec897b7cc7e2e2568aa5c81b5a036c9d02dc not found: ID does not exist"
Feb 19 15:58:07 crc kubenswrapper[4861]: I0219 15:58:07.989801 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="848b9fe4-e59b-4613-aea0-d802277986ea" path="/var/lib/kubelet/pods/848b9fe4-e59b-4613-aea0-d802277986ea/volumes"
Feb 19 15:58:14 crc kubenswrapper[4861]: I0219 15:58:14.078506 4861 scope.go:117] "RemoveContainer" containerID="a27df46db6d2bd09c11629b60654b6308b372ba417b9783ae7bbc8e38f3a6214"
Feb 19 15:58:14 crc kubenswrapper[4861]: I0219 15:58:14.122361 4861 scope.go:117] "RemoveContainer" containerID="e03845ddcfaefeb62c017bf91c795aea48c93ae7781241b20ee82bf865130c65"
Feb 19 15:58:14 crc kubenswrapper[4861]: I0219 15:58:14.147301 4861 scope.go:117] "RemoveContainer" containerID="f6c416109adb35b7a6797ce1f08037c1849e56f803dee65dd14b8cdc552d3ecd"
Feb 19 15:58:33 crc kubenswrapper[4861]: I0219 15:58:33.839793 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 15:58:33 crc kubenswrapper[4861]: I0219 15:58:33.841085 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 15:58:33 crc kubenswrapper[4861]: I0219 15:58:33.841286 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq"
Feb 19 15:58:33 crc kubenswrapper[4861]: I0219 15:58:33.843951 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ff3b544d9a4670710a18d513b5e33301303cc58dc65eea445b0d5802e290a261"} pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 15:58:33 crc kubenswrapper[4861]: I0219 15:58:33.844324 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" containerID="cri-o://ff3b544d9a4670710a18d513b5e33301303cc58dc65eea445b0d5802e290a261" gracePeriod=600
Feb 19 15:58:34 crc kubenswrapper[4861]: I0219 15:58:34.249533 4861 generic.go:334] "Generic (PLEG): container finished" podID="478e6971-05ac-43f2-99a2-cd93644c6227" containerID="ff3b544d9a4670710a18d513b5e33301303cc58dc65eea445b0d5802e290a261" exitCode=0
Feb 19 15:58:34 crc kubenswrapper[4861]: I0219 15:58:34.249800 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerDied","Data":"ff3b544d9a4670710a18d513b5e33301303cc58dc65eea445b0d5802e290a261"}
Feb 19 15:58:34 crc kubenswrapper[4861]: I0219 15:58:34.249836 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerStarted","Data":"30c7d2f2c88e3b897b976c883b369ac464966761eca949f7fe3fe59de38d3853"}
Feb 19 15:58:34 crc kubenswrapper[4861]: I0219 15:58:34.249877 4861 scope.go:117] "RemoveContainer" containerID="471c4d8441c74ce54c76fc8b9b621f040e399ab162f99d65bfb0cebea9487613"
Feb 19 15:59:43 crc kubenswrapper[4861]: I0219 15:59:43.291232 4861 generic.go:334] "Generic (PLEG): container finished" podID="8f3e4835-beac-4e72-bfc0-428f7163bd7a" containerID="e02bb7b87de4ef80dca115710ea6ad4d2cd660904463638735051ff1af3765b7" exitCode=0
Feb 19 15:59:43 crc kubenswrapper[4861]: I0219 15:59:43.291833 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-74ltx/must-gather-sfmrm" event={"ID":"8f3e4835-beac-4e72-bfc0-428f7163bd7a","Type":"ContainerDied","Data":"e02bb7b87de4ef80dca115710ea6ad4d2cd660904463638735051ff1af3765b7"}
Feb 19 15:59:43 crc kubenswrapper[4861]: I0219 15:59:43.292597 4861 scope.go:117] "RemoveContainer" containerID="e02bb7b87de4ef80dca115710ea6ad4d2cd660904463638735051ff1af3765b7"
Feb 19 15:59:44 crc kubenswrapper[4861]: I0219 15:59:44.106083 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-74ltx_must-gather-sfmrm_8f3e4835-beac-4e72-bfc0-428f7163bd7a/gather/0.log"
Feb 19 15:59:52 crc kubenswrapper[4861]: I0219 15:59:52.772719 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-74ltx/must-gather-sfmrm"]
Feb 19 15:59:52 crc kubenswrapper[4861]: I0219 15:59:52.773286 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-74ltx/must-gather-sfmrm"]
Feb 19 15:59:52 crc kubenswrapper[4861]: I0219 15:59:52.773482 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-74ltx/must-gather-sfmrm" podUID="8f3e4835-beac-4e72-bfc0-428f7163bd7a" containerName="copy" containerID="cri-o://f9ca194f604f0569885859762175b5514fe27c59e21378494350c1ba46a5fcf8" gracePeriod=2
Feb 19 15:59:53 crc kubenswrapper[4861]: I0219 15:59:53.242151 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-74ltx_must-gather-sfmrm_8f3e4835-beac-4e72-bfc0-428f7163bd7a/copy/0.log"
Feb 19 15:59:53 crc kubenswrapper[4861]: I0219 15:59:53.243644 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-74ltx/must-gather-sfmrm"
Feb 19 15:59:53 crc kubenswrapper[4861]: I0219 15:59:53.365731 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrlv7\" (UniqueName: \"kubernetes.io/projected/8f3e4835-beac-4e72-bfc0-428f7163bd7a-kube-api-access-mrlv7\") pod \"8f3e4835-beac-4e72-bfc0-428f7163bd7a\" (UID: \"8f3e4835-beac-4e72-bfc0-428f7163bd7a\") "
Feb 19 15:59:53 crc kubenswrapper[4861]: I0219 15:59:53.366257 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8f3e4835-beac-4e72-bfc0-428f7163bd7a-must-gather-output\") pod \"8f3e4835-beac-4e72-bfc0-428f7163bd7a\" (UID: \"8f3e4835-beac-4e72-bfc0-428f7163bd7a\") "
Feb 19 15:59:53 crc kubenswrapper[4861]: I0219 15:59:53.384913 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f3e4835-beac-4e72-bfc0-428f7163bd7a-kube-api-access-mrlv7" (OuterVolumeSpecName: "kube-api-access-mrlv7") pod "8f3e4835-beac-4e72-bfc0-428f7163bd7a" (UID: "8f3e4835-beac-4e72-bfc0-428f7163bd7a"). InnerVolumeSpecName "kube-api-access-mrlv7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 15:59:53 crc kubenswrapper[4861]: I0219 15:59:53.414797 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-74ltx_must-gather-sfmrm_8f3e4835-beac-4e72-bfc0-428f7163bd7a/copy/0.log"
Feb 19 15:59:53 crc kubenswrapper[4861]: I0219 15:59:53.415616 4861 generic.go:334] "Generic (PLEG): container finished" podID="8f3e4835-beac-4e72-bfc0-428f7163bd7a" containerID="f9ca194f604f0569885859762175b5514fe27c59e21378494350c1ba46a5fcf8" exitCode=143
Feb 19 15:59:53 crc kubenswrapper[4861]: I0219 15:59:53.415682 4861 scope.go:117] "RemoveContainer" containerID="f9ca194f604f0569885859762175b5514fe27c59e21378494350c1ba46a5fcf8"
Feb 19 15:59:53 crc kubenswrapper[4861]: I0219 15:59:53.416018 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-74ltx/must-gather-sfmrm"
Feb 19 15:59:53 crc kubenswrapper[4861]: I0219 15:59:53.444281 4861 scope.go:117] "RemoveContainer" containerID="e02bb7b87de4ef80dca115710ea6ad4d2cd660904463638735051ff1af3765b7"
Feb 19 15:59:53 crc kubenswrapper[4861]: I0219 15:59:53.472138 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrlv7\" (UniqueName: \"kubernetes.io/projected/8f3e4835-beac-4e72-bfc0-428f7163bd7a-kube-api-access-mrlv7\") on node \"crc\" DevicePath \"\""
Feb 19 15:59:53 crc kubenswrapper[4861]: I0219 15:59:53.507410 4861 scope.go:117] "RemoveContainer" containerID="f9ca194f604f0569885859762175b5514fe27c59e21378494350c1ba46a5fcf8"
Feb 19 15:59:53 crc kubenswrapper[4861]: E0219 15:59:53.509361 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9ca194f604f0569885859762175b5514fe27c59e21378494350c1ba46a5fcf8\": container with ID starting with f9ca194f604f0569885859762175b5514fe27c59e21378494350c1ba46a5fcf8 not found: ID does not exist" containerID="f9ca194f604f0569885859762175b5514fe27c59e21378494350c1ba46a5fcf8"
Feb 19 15:59:53 crc kubenswrapper[4861]: I0219 15:59:53.509484 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9ca194f604f0569885859762175b5514fe27c59e21378494350c1ba46a5fcf8"} err="failed to get container status \"f9ca194f604f0569885859762175b5514fe27c59e21378494350c1ba46a5fcf8\": rpc error: code = NotFound desc = could not find container \"f9ca194f604f0569885859762175b5514fe27c59e21378494350c1ba46a5fcf8\": container with ID starting with f9ca194f604f0569885859762175b5514fe27c59e21378494350c1ba46a5fcf8 not found: ID does not exist"
Feb 19 15:59:53 crc kubenswrapper[4861]: I0219 15:59:53.509512 4861 scope.go:117] "RemoveContainer" containerID="e02bb7b87de4ef80dca115710ea6ad4d2cd660904463638735051ff1af3765b7"
Feb 19 15:59:53 crc kubenswrapper[4861]: E0219 15:59:53.510035 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e02bb7b87de4ef80dca115710ea6ad4d2cd660904463638735051ff1af3765b7\": container with ID starting with e02bb7b87de4ef80dca115710ea6ad4d2cd660904463638735051ff1af3765b7 not found: ID does not exist" containerID="e02bb7b87de4ef80dca115710ea6ad4d2cd660904463638735051ff1af3765b7"
Feb 19 15:59:53 crc kubenswrapper[4861]: I0219 15:59:53.510058 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e02bb7b87de4ef80dca115710ea6ad4d2cd660904463638735051ff1af3765b7"} err="failed to get container status \"e02bb7b87de4ef80dca115710ea6ad4d2cd660904463638735051ff1af3765b7\": rpc error: code = NotFound desc = could not find container \"e02bb7b87de4ef80dca115710ea6ad4d2cd660904463638735051ff1af3765b7\": container with ID starting with e02bb7b87de4ef80dca115710ea6ad4d2cd660904463638735051ff1af3765b7 not found: ID does not exist"
Feb 19 15:59:53 crc kubenswrapper[4861]: I0219 15:59:53.581667 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f3e4835-beac-4e72-bfc0-428f7163bd7a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "8f3e4835-beac-4e72-bfc0-428f7163bd7a" (UID: "8f3e4835-beac-4e72-bfc0-428f7163bd7a"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 15:59:53 crc kubenswrapper[4861]: I0219 15:59:53.676026 4861 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8f3e4835-beac-4e72-bfc0-428f7163bd7a-must-gather-output\") on node \"crc\" DevicePath \"\""
Feb 19 15:59:53 crc kubenswrapper[4861]: I0219 15:59:53.991230 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f3e4835-beac-4e72-bfc0-428f7163bd7a" path="/var/lib/kubelet/pods/8f3e4835-beac-4e72-bfc0-428f7163bd7a/volumes"
Feb 19 16:00:00 crc kubenswrapper[4861]: I0219 16:00:00.178829 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525280-bpgdw"]
Feb 19 16:00:00 crc kubenswrapper[4861]: E0219 16:00:00.179992 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="848b9fe4-e59b-4613-aea0-d802277986ea" containerName="extract-utilities"
Feb 19 16:00:00 crc kubenswrapper[4861]: I0219 16:00:00.180010 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="848b9fe4-e59b-4613-aea0-d802277986ea" containerName="extract-utilities"
Feb 19 16:00:00 crc kubenswrapper[4861]: E0219 16:00:00.180038 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9edc7bb-5bf2-4624-8317-f00b67f228f3" containerName="registry-server"
Feb 19 16:00:00 crc kubenswrapper[4861]: I0219 16:00:00.180046 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9edc7bb-5bf2-4624-8317-f00b67f228f3" containerName="registry-server"
Feb 19 16:00:00 crc kubenswrapper[4861]: E0219 16:00:00.180060 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f3e4835-beac-4e72-bfc0-428f7163bd7a" containerName="copy"
Feb 19 16:00:00 crc kubenswrapper[4861]: I0219 16:00:00.180069 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f3e4835-beac-4e72-bfc0-428f7163bd7a" containerName="copy"
Feb 19 16:00:00 crc kubenswrapper[4861]: E0219 16:00:00.180084 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="848b9fe4-e59b-4613-aea0-d802277986ea" containerName="registry-server"
Feb 19 16:00:00 crc kubenswrapper[4861]: I0219 16:00:00.180092 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="848b9fe4-e59b-4613-aea0-d802277986ea" containerName="registry-server"
Feb 19 16:00:00 crc kubenswrapper[4861]: E0219 16:00:00.180101 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f3e4835-beac-4e72-bfc0-428f7163bd7a" containerName="gather"
Feb 19 16:00:00 crc kubenswrapper[4861]: I0219 16:00:00.180109 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f3e4835-beac-4e72-bfc0-428f7163bd7a" containerName="gather"
Feb 19 16:00:00 crc kubenswrapper[4861]: E0219 16:00:00.180128 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9edc7bb-5bf2-4624-8317-f00b67f228f3" containerName="extract-content"
Feb 19 16:00:00 crc kubenswrapper[4861]: I0219 16:00:00.180137 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9edc7bb-5bf2-4624-8317-f00b67f228f3" containerName="extract-content"
Feb 19 16:00:00 crc kubenswrapper[4861]: E0219 16:00:00.180175 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="848b9fe4-e59b-4613-aea0-d802277986ea" containerName="extract-content"
Feb 19 16:00:00 crc kubenswrapper[4861]: I0219 16:00:00.180183 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="848b9fe4-e59b-4613-aea0-d802277986ea" containerName="extract-content"
Feb 19 16:00:00 crc kubenswrapper[4861]: E0219 16:00:00.180196 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9edc7bb-5bf2-4624-8317-f00b67f228f3" containerName="extract-utilities"
Feb 19 16:00:00 crc kubenswrapper[4861]: I0219 16:00:00.180205 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9edc7bb-5bf2-4624-8317-f00b67f228f3" containerName="extract-utilities"
Feb 19 16:00:00 crc kubenswrapper[4861]: I0219 16:00:00.180480 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f3e4835-beac-4e72-bfc0-428f7163bd7a" containerName="gather"
Feb 19 16:00:00 crc kubenswrapper[4861]: I0219 16:00:00.180499 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="848b9fe4-e59b-4613-aea0-d802277986ea" containerName="registry-server"
Feb 19 16:00:00 crc kubenswrapper[4861]: I0219 16:00:00.180531 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f3e4835-beac-4e72-bfc0-428f7163bd7a" containerName="copy"
Feb 19 16:00:00 crc kubenswrapper[4861]: I0219 16:00:00.180542 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9edc7bb-5bf2-4624-8317-f00b67f228f3" containerName="registry-server"
Feb 19 16:00:00 crc kubenswrapper[4861]: I0219 16:00:00.181451 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-bpgdw"
Feb 19 16:00:00 crc kubenswrapper[4861]: I0219 16:00:00.184348 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 19 16:00:00 crc kubenswrapper[4861]: I0219 16:00:00.185528 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 19 16:00:00 crc kubenswrapper[4861]: I0219 16:00:00.216071 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525280-bpgdw"]
Feb 19 16:00:00 crc kubenswrapper[4861]: I0219 16:00:00.220372 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxkdr\" (UniqueName: \"kubernetes.io/projected/a32adf01-e064-4556-a702-5fcdce14ce5b-kube-api-access-wxkdr\") pod \"collect-profiles-29525280-bpgdw\" (UID: \"a32adf01-e064-4556-a702-5fcdce14ce5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-bpgdw"
Feb 19 16:00:00 crc kubenswrapper[4861]: I0219 16:00:00.220450 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a32adf01-e064-4556-a702-5fcdce14ce5b-config-volume\") pod \"collect-profiles-29525280-bpgdw\" (UID: \"a32adf01-e064-4556-a702-5fcdce14ce5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-bpgdw"
Feb 19 16:00:00 crc kubenswrapper[4861]: I0219 16:00:00.220575 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a32adf01-e064-4556-a702-5fcdce14ce5b-secret-volume\") pod \"collect-profiles-29525280-bpgdw\" (UID: \"a32adf01-e064-4556-a702-5fcdce14ce5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-bpgdw"
Feb 19 16:00:00 crc kubenswrapper[4861]: I0219 16:00:00.322982 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxkdr\" (UniqueName: \"kubernetes.io/projected/a32adf01-e064-4556-a702-5fcdce14ce5b-kube-api-access-wxkdr\") pod \"collect-profiles-29525280-bpgdw\" (UID: \"a32adf01-e064-4556-a702-5fcdce14ce5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-bpgdw"
Feb 19 16:00:00 crc kubenswrapper[4861]: I0219 16:00:00.323030 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a32adf01-e064-4556-a702-5fcdce14ce5b-config-volume\") pod \"collect-profiles-29525280-bpgdw\" (UID: \"a32adf01-e064-4556-a702-5fcdce14ce5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-bpgdw"
Feb 19 16:00:00 crc kubenswrapper[4861]: I0219 16:00:00.323059 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a32adf01-e064-4556-a702-5fcdce14ce5b-secret-volume\") pod \"collect-profiles-29525280-bpgdw\" (UID: \"a32adf01-e064-4556-a702-5fcdce14ce5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-bpgdw"
Feb 19 16:00:00 crc kubenswrapper[4861]: I0219 16:00:00.324677 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a32adf01-e064-4556-a702-5fcdce14ce5b-config-volume\") pod \"collect-profiles-29525280-bpgdw\" (UID: \"a32adf01-e064-4556-a702-5fcdce14ce5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-bpgdw"
Feb 19 16:00:00 crc kubenswrapper[4861]: I0219 16:00:00.331531 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a32adf01-e064-4556-a702-5fcdce14ce5b-secret-volume\") pod \"collect-profiles-29525280-bpgdw\" (UID: \"a32adf01-e064-4556-a702-5fcdce14ce5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-bpgdw"
Feb 19 16:00:00 crc kubenswrapper[4861]: I0219 16:00:00.341005 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxkdr\" (UniqueName: \"kubernetes.io/projected/a32adf01-e064-4556-a702-5fcdce14ce5b-kube-api-access-wxkdr\") pod \"collect-profiles-29525280-bpgdw\" (UID: \"a32adf01-e064-4556-a702-5fcdce14ce5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-bpgdw"
Feb 19 16:00:00 crc kubenswrapper[4861]: I0219 16:00:00.508360 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-bpgdw"
Feb 19 16:00:01 crc kubenswrapper[4861]: I0219 16:00:01.036805 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525280-bpgdw"]
Feb 19 16:00:01 crc kubenswrapper[4861]: I0219 16:00:01.559907 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-bpgdw" event={"ID":"a32adf01-e064-4556-a702-5fcdce14ce5b","Type":"ContainerStarted","Data":"8e1c8626fb33a98c55504db8e0eab056eee88a2d42034ad9e70411753889c669"}
Feb 19 16:00:01 crc kubenswrapper[4861]: I0219 16:00:01.560258 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-bpgdw" event={"ID":"a32adf01-e064-4556-a702-5fcdce14ce5b","Type":"ContainerStarted","Data":"c6248ce9285e9db052d3cfd8fec2907692ecb06037eae885c3137e870d61dda6"}
Feb 19 16:00:01 crc kubenswrapper[4861]: I0219 16:00:01.578650 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-bpgdw"
podStartSLOduration=1.5786353549999999 podStartE2EDuration="1.578635355s" podCreationTimestamp="2026-02-19 16:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 16:00:01.573114057 +0000 UTC m=+10216.234217285" watchObservedRunningTime="2026-02-19 16:00:01.578635355 +0000 UTC m=+10216.239738573" Feb 19 16:00:02 crc kubenswrapper[4861]: I0219 16:00:02.573873 4861 generic.go:334] "Generic (PLEG): container finished" podID="a32adf01-e064-4556-a702-5fcdce14ce5b" containerID="8e1c8626fb33a98c55504db8e0eab056eee88a2d42034ad9e70411753889c669" exitCode=0 Feb 19 16:00:02 crc kubenswrapper[4861]: I0219 16:00:02.574024 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-bpgdw" event={"ID":"a32adf01-e064-4556-a702-5fcdce14ce5b","Type":"ContainerDied","Data":"8e1c8626fb33a98c55504db8e0eab056eee88a2d42034ad9e70411753889c669"} Feb 19 16:00:03 crc kubenswrapper[4861]: I0219 16:00:03.986231 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-bpgdw" Feb 19 16:00:04 crc kubenswrapper[4861]: I0219 16:00:04.135055 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkdr\" (UniqueName: \"kubernetes.io/projected/a32adf01-e064-4556-a702-5fcdce14ce5b-kube-api-access-wxkdr\") pod \"a32adf01-e064-4556-a702-5fcdce14ce5b\" (UID: \"a32adf01-e064-4556-a702-5fcdce14ce5b\") " Feb 19 16:00:04 crc kubenswrapper[4861]: I0219 16:00:04.135289 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a32adf01-e064-4556-a702-5fcdce14ce5b-config-volume\") pod \"a32adf01-e064-4556-a702-5fcdce14ce5b\" (UID: \"a32adf01-e064-4556-a702-5fcdce14ce5b\") " Feb 19 16:00:04 crc kubenswrapper[4861]: I0219 16:00:04.135337 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a32adf01-e064-4556-a702-5fcdce14ce5b-secret-volume\") pod \"a32adf01-e064-4556-a702-5fcdce14ce5b\" (UID: \"a32adf01-e064-4556-a702-5fcdce14ce5b\") " Feb 19 16:00:04 crc kubenswrapper[4861]: I0219 16:00:04.136396 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a32adf01-e064-4556-a702-5fcdce14ce5b-config-volume" (OuterVolumeSpecName: "config-volume") pod "a32adf01-e064-4556-a702-5fcdce14ce5b" (UID: "a32adf01-e064-4556-a702-5fcdce14ce5b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 16:00:04 crc kubenswrapper[4861]: I0219 16:00:04.142533 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a32adf01-e064-4556-a702-5fcdce14ce5b-kube-api-access-wxkdr" (OuterVolumeSpecName: "kube-api-access-wxkdr") pod "a32adf01-e064-4556-a702-5fcdce14ce5b" (UID: "a32adf01-e064-4556-a702-5fcdce14ce5b"). 
InnerVolumeSpecName "kube-api-access-wxkdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:00:04 crc kubenswrapper[4861]: I0219 16:00:04.147772 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a32adf01-e064-4556-a702-5fcdce14ce5b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a32adf01-e064-4556-a702-5fcdce14ce5b" (UID: "a32adf01-e064-4556-a702-5fcdce14ce5b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 16:00:04 crc kubenswrapper[4861]: I0219 16:00:04.238160 4861 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a32adf01-e064-4556-a702-5fcdce14ce5b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 16:00:04 crc kubenswrapper[4861]: I0219 16:00:04.238399 4861 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a32adf01-e064-4556-a702-5fcdce14ce5b-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 16:00:04 crc kubenswrapper[4861]: I0219 16:00:04.238412 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkdr\" (UniqueName: \"kubernetes.io/projected/a32adf01-e064-4556-a702-5fcdce14ce5b-kube-api-access-wxkdr\") on node \"crc\" DevicePath \"\"" Feb 19 16:00:04 crc kubenswrapper[4861]: I0219 16:00:04.598705 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-bpgdw" event={"ID":"a32adf01-e064-4556-a702-5fcdce14ce5b","Type":"ContainerDied","Data":"c6248ce9285e9db052d3cfd8fec2907692ecb06037eae885c3137e870d61dda6"} Feb 19 16:00:04 crc kubenswrapper[4861]: I0219 16:00:04.598759 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6248ce9285e9db052d3cfd8fec2907692ecb06037eae885c3137e870d61dda6" Feb 19 16:00:04 crc kubenswrapper[4861]: I0219 16:00:04.598790 4861 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-bpgdw" Feb 19 16:00:04 crc kubenswrapper[4861]: I0219 16:00:04.650692 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525235-rpb87"] Feb 19 16:00:04 crc kubenswrapper[4861]: I0219 16:00:04.659148 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525235-rpb87"] Feb 19 16:00:06 crc kubenswrapper[4861]: I0219 16:00:06.070854 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2c574eb-3877-43f5-a485-a8a9c6923aa8" path="/var/lib/kubelet/pods/a2c574eb-3877-43f5-a485-a8a9c6923aa8/volumes" Feb 19 16:00:14 crc kubenswrapper[4861]: I0219 16:00:14.311358 4861 scope.go:117] "RemoveContainer" containerID="11c23f2c0c13df5d4d43abcc227da089e5b62a4a1f05356c2ee1779ae5f43f33" Feb 19 16:01:00 crc kubenswrapper[4861]: I0219 16:01:00.158020 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29525281-fmwdk"] Feb 19 16:01:00 crc kubenswrapper[4861]: E0219 16:01:00.159307 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a32adf01-e064-4556-a702-5fcdce14ce5b" containerName="collect-profiles" Feb 19 16:01:00 crc kubenswrapper[4861]: I0219 16:01:00.159330 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a32adf01-e064-4556-a702-5fcdce14ce5b" containerName="collect-profiles" Feb 19 16:01:00 crc kubenswrapper[4861]: I0219 16:01:00.159777 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a32adf01-e064-4556-a702-5fcdce14ce5b" containerName="collect-profiles" Feb 19 16:01:00 crc kubenswrapper[4861]: I0219 16:01:00.160979 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29525281-fmwdk" Feb 19 16:01:00 crc kubenswrapper[4861]: I0219 16:01:00.170231 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29525281-fmwdk"] Feb 19 16:01:00 crc kubenswrapper[4861]: I0219 16:01:00.200026 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65360b64-76a9-464c-9b72-34a5e8e431a1-config-data\") pod \"keystone-cron-29525281-fmwdk\" (UID: \"65360b64-76a9-464c-9b72-34a5e8e431a1\") " pod="openstack/keystone-cron-29525281-fmwdk" Feb 19 16:01:00 crc kubenswrapper[4861]: I0219 16:01:00.200368 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gk5s\" (UniqueName: \"kubernetes.io/projected/65360b64-76a9-464c-9b72-34a5e8e431a1-kube-api-access-4gk5s\") pod \"keystone-cron-29525281-fmwdk\" (UID: \"65360b64-76a9-464c-9b72-34a5e8e431a1\") " pod="openstack/keystone-cron-29525281-fmwdk" Feb 19 16:01:00 crc kubenswrapper[4861]: I0219 16:01:00.200896 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65360b64-76a9-464c-9b72-34a5e8e431a1-combined-ca-bundle\") pod \"keystone-cron-29525281-fmwdk\" (UID: \"65360b64-76a9-464c-9b72-34a5e8e431a1\") " pod="openstack/keystone-cron-29525281-fmwdk" Feb 19 16:01:00 crc kubenswrapper[4861]: I0219 16:01:00.201126 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/65360b64-76a9-464c-9b72-34a5e8e431a1-fernet-keys\") pod \"keystone-cron-29525281-fmwdk\" (UID: \"65360b64-76a9-464c-9b72-34a5e8e431a1\") " pod="openstack/keystone-cron-29525281-fmwdk" Feb 19 16:01:00 crc kubenswrapper[4861]: I0219 16:01:00.304404 4861 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65360b64-76a9-464c-9b72-34a5e8e431a1-config-data\") pod \"keystone-cron-29525281-fmwdk\" (UID: \"65360b64-76a9-464c-9b72-34a5e8e431a1\") " pod="openstack/keystone-cron-29525281-fmwdk" Feb 19 16:01:00 crc kubenswrapper[4861]: I0219 16:01:00.304707 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gk5s\" (UniqueName: \"kubernetes.io/projected/65360b64-76a9-464c-9b72-34a5e8e431a1-kube-api-access-4gk5s\") pod \"keystone-cron-29525281-fmwdk\" (UID: \"65360b64-76a9-464c-9b72-34a5e8e431a1\") " pod="openstack/keystone-cron-29525281-fmwdk" Feb 19 16:01:00 crc kubenswrapper[4861]: I0219 16:01:00.304940 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65360b64-76a9-464c-9b72-34a5e8e431a1-combined-ca-bundle\") pod \"keystone-cron-29525281-fmwdk\" (UID: \"65360b64-76a9-464c-9b72-34a5e8e431a1\") " pod="openstack/keystone-cron-29525281-fmwdk" Feb 19 16:01:00 crc kubenswrapper[4861]: I0219 16:01:00.305013 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/65360b64-76a9-464c-9b72-34a5e8e431a1-fernet-keys\") pod \"keystone-cron-29525281-fmwdk\" (UID: \"65360b64-76a9-464c-9b72-34a5e8e431a1\") " pod="openstack/keystone-cron-29525281-fmwdk" Feb 19 16:01:00 crc kubenswrapper[4861]: I0219 16:01:00.310796 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65360b64-76a9-464c-9b72-34a5e8e431a1-combined-ca-bundle\") pod \"keystone-cron-29525281-fmwdk\" (UID: \"65360b64-76a9-464c-9b72-34a5e8e431a1\") " pod="openstack/keystone-cron-29525281-fmwdk" Feb 19 16:01:00 crc kubenswrapper[4861]: I0219 16:01:00.310957 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/65360b64-76a9-464c-9b72-34a5e8e431a1-fernet-keys\") pod \"keystone-cron-29525281-fmwdk\" (UID: \"65360b64-76a9-464c-9b72-34a5e8e431a1\") " pod="openstack/keystone-cron-29525281-fmwdk" Feb 19 16:01:00 crc kubenswrapper[4861]: I0219 16:01:00.320998 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65360b64-76a9-464c-9b72-34a5e8e431a1-config-data\") pod \"keystone-cron-29525281-fmwdk\" (UID: \"65360b64-76a9-464c-9b72-34a5e8e431a1\") " pod="openstack/keystone-cron-29525281-fmwdk" Feb 19 16:01:00 crc kubenswrapper[4861]: I0219 16:01:00.321289 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gk5s\" (UniqueName: \"kubernetes.io/projected/65360b64-76a9-464c-9b72-34a5e8e431a1-kube-api-access-4gk5s\") pod \"keystone-cron-29525281-fmwdk\" (UID: \"65360b64-76a9-464c-9b72-34a5e8e431a1\") " pod="openstack/keystone-cron-29525281-fmwdk" Feb 19 16:01:00 crc kubenswrapper[4861]: I0219 16:01:00.496802 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29525281-fmwdk" Feb 19 16:01:00 crc kubenswrapper[4861]: I0219 16:01:00.948268 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29525281-fmwdk"] Feb 19 16:01:01 crc kubenswrapper[4861]: I0219 16:01:01.257136 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525281-fmwdk" event={"ID":"65360b64-76a9-464c-9b72-34a5e8e431a1","Type":"ContainerStarted","Data":"5f2dc8bbdfc8a4b66a4ad2cbf8299d7d882388a83aacf3c375807387f91740da"} Feb 19 16:01:01 crc kubenswrapper[4861]: I0219 16:01:01.257184 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525281-fmwdk" event={"ID":"65360b64-76a9-464c-9b72-34a5e8e431a1","Type":"ContainerStarted","Data":"31934342987cc7f379f798eb7f6e15151348d3a9ec0de5d11234f183f2787901"} Feb 19 16:01:01 crc kubenswrapper[4861]: I0219 16:01:01.289234 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29525281-fmwdk" podStartSLOduration=1.289212865 podStartE2EDuration="1.289212865s" podCreationTimestamp="2026-02-19 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 16:01:01.280584922 +0000 UTC m=+10275.941688180" watchObservedRunningTime="2026-02-19 16:01:01.289212865 +0000 UTC m=+10275.950316103" Feb 19 16:01:03 crc kubenswrapper[4861]: I0219 16:01:03.833823 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 16:01:03 crc kubenswrapper[4861]: I0219 16:01:03.834390 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" 
podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 16:01:04 crc kubenswrapper[4861]: I0219 16:01:04.292128 4861 generic.go:334] "Generic (PLEG): container finished" podID="65360b64-76a9-464c-9b72-34a5e8e431a1" containerID="5f2dc8bbdfc8a4b66a4ad2cbf8299d7d882388a83aacf3c375807387f91740da" exitCode=0 Feb 19 16:01:04 crc kubenswrapper[4861]: I0219 16:01:04.292167 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525281-fmwdk" event={"ID":"65360b64-76a9-464c-9b72-34a5e8e431a1","Type":"ContainerDied","Data":"5f2dc8bbdfc8a4b66a4ad2cbf8299d7d882388a83aacf3c375807387f91740da"} Feb 19 16:01:05 crc kubenswrapper[4861]: I0219 16:01:05.720452 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29525281-fmwdk" Feb 19 16:01:05 crc kubenswrapper[4861]: I0219 16:01:05.925251 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65360b64-76a9-464c-9b72-34a5e8e431a1-combined-ca-bundle\") pod \"65360b64-76a9-464c-9b72-34a5e8e431a1\" (UID: \"65360b64-76a9-464c-9b72-34a5e8e431a1\") " Feb 19 16:01:05 crc kubenswrapper[4861]: I0219 16:01:05.925994 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gk5s\" (UniqueName: \"kubernetes.io/projected/65360b64-76a9-464c-9b72-34a5e8e431a1-kube-api-access-4gk5s\") pod \"65360b64-76a9-464c-9b72-34a5e8e431a1\" (UID: \"65360b64-76a9-464c-9b72-34a5e8e431a1\") " Feb 19 16:01:05 crc kubenswrapper[4861]: I0219 16:01:05.926031 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/65360b64-76a9-464c-9b72-34a5e8e431a1-fernet-keys\") pod \"65360b64-76a9-464c-9b72-34a5e8e431a1\" (UID: 
\"65360b64-76a9-464c-9b72-34a5e8e431a1\") " Feb 19 16:01:05 crc kubenswrapper[4861]: I0219 16:01:05.926140 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65360b64-76a9-464c-9b72-34a5e8e431a1-config-data\") pod \"65360b64-76a9-464c-9b72-34a5e8e431a1\" (UID: \"65360b64-76a9-464c-9b72-34a5e8e431a1\") " Feb 19 16:01:05 crc kubenswrapper[4861]: I0219 16:01:05.930169 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65360b64-76a9-464c-9b72-34a5e8e431a1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "65360b64-76a9-464c-9b72-34a5e8e431a1" (UID: "65360b64-76a9-464c-9b72-34a5e8e431a1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 16:01:05 crc kubenswrapper[4861]: I0219 16:01:05.932283 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65360b64-76a9-464c-9b72-34a5e8e431a1-kube-api-access-4gk5s" (OuterVolumeSpecName: "kube-api-access-4gk5s") pod "65360b64-76a9-464c-9b72-34a5e8e431a1" (UID: "65360b64-76a9-464c-9b72-34a5e8e431a1"). InnerVolumeSpecName "kube-api-access-4gk5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:01:05 crc kubenswrapper[4861]: I0219 16:01:05.955216 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65360b64-76a9-464c-9b72-34a5e8e431a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65360b64-76a9-464c-9b72-34a5e8e431a1" (UID: "65360b64-76a9-464c-9b72-34a5e8e431a1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 16:01:05 crc kubenswrapper[4861]: I0219 16:01:05.993984 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65360b64-76a9-464c-9b72-34a5e8e431a1-config-data" (OuterVolumeSpecName: "config-data") pod "65360b64-76a9-464c-9b72-34a5e8e431a1" (UID: "65360b64-76a9-464c-9b72-34a5e8e431a1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 16:01:06 crc kubenswrapper[4861]: I0219 16:01:06.030396 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gk5s\" (UniqueName: \"kubernetes.io/projected/65360b64-76a9-464c-9b72-34a5e8e431a1-kube-api-access-4gk5s\") on node \"crc\" DevicePath \"\"" Feb 19 16:01:06 crc kubenswrapper[4861]: I0219 16:01:06.030477 4861 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/65360b64-76a9-464c-9b72-34a5e8e431a1-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 16:01:06 crc kubenswrapper[4861]: I0219 16:01:06.032175 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65360b64-76a9-464c-9b72-34a5e8e431a1-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 16:01:06 crc kubenswrapper[4861]: I0219 16:01:06.032210 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65360b64-76a9-464c-9b72-34a5e8e431a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 16:01:06 crc kubenswrapper[4861]: I0219 16:01:06.312143 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525281-fmwdk" event={"ID":"65360b64-76a9-464c-9b72-34a5e8e431a1","Type":"ContainerDied","Data":"31934342987cc7f379f798eb7f6e15151348d3a9ec0de5d11234f183f2787901"} Feb 19 16:01:06 crc kubenswrapper[4861]: I0219 16:01:06.312397 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29525281-fmwdk" Feb 19 16:01:06 crc kubenswrapper[4861]: I0219 16:01:06.312405 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31934342987cc7f379f798eb7f6e15151348d3a9ec0de5d11234f183f2787901" Feb 19 16:01:33 crc kubenswrapper[4861]: I0219 16:01:33.834196 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 16:01:33 crc kubenswrapper[4861]: I0219 16:01:33.835177 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 16:02:03 crc kubenswrapper[4861]: I0219 16:02:03.834058 4861 patch_prober.go:28] interesting pod/machine-config-daemon-lwqpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 16:02:03 crc kubenswrapper[4861]: I0219 16:02:03.834794 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 16:02:03 crc kubenswrapper[4861]: I0219 16:02:03.834887 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" Feb 19 
16:02:03 crc kubenswrapper[4861]: I0219 16:02:03.837709 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"30c7d2f2c88e3b897b976c883b369ac464966761eca949f7fe3fe59de38d3853"} pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 16:02:03 crc kubenswrapper[4861]: I0219 16:02:03.838015 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" containerName="machine-config-daemon" containerID="cri-o://30c7d2f2c88e3b897b976c883b369ac464966761eca949f7fe3fe59de38d3853" gracePeriod=600 Feb 19 16:02:03 crc kubenswrapper[4861]: I0219 16:02:03.996674 4861 generic.go:334] "Generic (PLEG): container finished" podID="478e6971-05ac-43f2-99a2-cd93644c6227" containerID="30c7d2f2c88e3b897b976c883b369ac464966761eca949f7fe3fe59de38d3853" exitCode=0 Feb 19 16:02:03 crc kubenswrapper[4861]: I0219 16:02:03.996713 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" event={"ID":"478e6971-05ac-43f2-99a2-cd93644c6227","Type":"ContainerDied","Data":"30c7d2f2c88e3b897b976c883b369ac464966761eca949f7fe3fe59de38d3853"} Feb 19 16:02:03 crc kubenswrapper[4861]: I0219 16:02:03.996999 4861 scope.go:117] "RemoveContainer" containerID="ff3b544d9a4670710a18d513b5e33301303cc58dc65eea445b0d5802e290a261" Feb 19 16:02:03 crc kubenswrapper[4861]: E0219 16:02:03.998320 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 16:02:05 crc kubenswrapper[4861]: I0219 16:02:05.017356 4861 scope.go:117] "RemoveContainer" containerID="30c7d2f2c88e3b897b976c883b369ac464966761eca949f7fe3fe59de38d3853" Feb 19 16:02:05 crc kubenswrapper[4861]: E0219 16:02:05.018544 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227" Feb 19 16:02:18 crc kubenswrapper[4861]: I0219 16:02:18.977503 4861 scope.go:117] "RemoveContainer" containerID="30c7d2f2c88e3b897b976c883b369ac464966761eca949f7fe3fe59de38d3853" Feb 19 16:02:18 crc kubenswrapper[4861]: E0219 16:02:18.978519 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lwqpq_openshift-machine-config-operator(478e6971-05ac-43f2-99a2-cd93644c6227)\"" pod="openshift-machine-config-operator/machine-config-daemon-lwqpq" podUID="478e6971-05ac-43f2-99a2-cd93644c6227"